The Key to Effective Analytics: Fast-Returning Queries

KEY TAKEAWAYS

Host Eric Kavanagh along with Dr. Robin Bloor, Dez Blanchfield and IDERA's Bullett Manale discuss queries and how their efficiency can have far-reaching effects.

Eric Kavanagh: Ladies and gentlemen, hello and welcome back once again. It is four o’clock Eastern Time on a Wednesday, and these days, that means it’s time for Hot Technologies! Yes, indeed. We’re talking about cool stuff today. Of course, I am your host, Eric Kavanagh. The title for today’s show is “The Key to Effective Analytics: Fast-Returning Queries.” That’s right, folks, we all want fast. Who doesn’t want fast? There’s a slide about yours truly, and enough about me. Hit me up on Twitter, @eric_kavanagh. I’ll be happy to connect with you there and have a conversation in social media. It can be fun, just don’t talk politics.

The year’s hot. We’ve been talking about different analytical issues this year, and the one topic for today really is just central to getting the job done. I remember it was probably five or six years ago that I first heard someone use the expression “have a conversation with your data,” and even though it can sound a bit cheesy, the point is that, if you cannot have an iterative experience with your data, if you cannot quickly amend your queries, send new queries, get answers back fast, then you’re not having a conversation with your data and the whole analytical process is truncated. That’s not good.

When you have a conversation with your data, what that means is you’re able to go back and forth, and in my opinion, that is when you find the insight. Because very rarely are you gonna come up with the perfect query the first time. Unless you’re the Mozart of analytics – and I’m sure that person is out there – you’re going to have to spend some time modifying, adding some dimension, trying to fine-tune what it is that you’re looking for.

Because, again, these are not tremendously wieldy environments that we’re dealing with in the world of analytics; we’re dealing with very unwieldy environments and very complex and multidimensional environments. And so the whole idea of the webcast today is to talk about how to enable that kind of iterative interaction with your data.

We have three presenters. Of course, in Hot Technologies, as opposed to Briefing Room, we have two analysts; they each give their take first, then the guest comes in, gives their presentation, and we have sort of a roundtable. And you, our audience, can play a big part in that. Please don’t be shy; send your questions in at any time. Use the Q&A panel if you can, otherwise the chat panel is fine; I’ll try to monitor both during the show. And we do record these, so if you miss something or want to share it with your colleagues, come back later. We post them at Techopedia.com and also at InsideAnalysis.com.

And with that, I’m going to bring in the smart people. I’m going to hand it off to Dr. Robin Bloor. Let me give him the keys, change presenter, and there you go. Robin, take it away.

Robin Bloor: Okay. Thanks for that intro. About a month and a half ago, I had a chat with a developer who’s actually a DBA. He isn’t really a DBA – he was a DBA at a particular company, and he was the only person that could actually make the queries perform. But he got sick of doing that, because he’s really, he is actually a fairly smart developer. So he left.

And he has to do a couple of days every month for them anyway, because they couldn’t find anyone to take his place and they haven’t got a clue what the database does or how to tune it at all. And I was kind of thinking about that, and just, you know, they didn’t really have an IT department, but this guy was doing support for them. Actually, it was DBA work that he was doing most of the time.

For sophisticated databases – Oracle, SQL Server, DB2, all of those big, expensive ones – database tuning is a tough job. It’s a secure job, as well. And the reason, really, for saying this is that, it’s a changing landscape. I’ll kinda go through this. You know, relational databases – usually the big picture is, the relational databases still dominate in popularity. They’re likely to dominate for a long time to come. Yes, there are other databases now that get more airtime, but, you know, when you actually look at what’s going on out there, Oracle’s doing most of it, Microsoft SQL Server is second, and there are various things happening in the cloud that may cause a challenge, though. They’re the big giants in the game. And they’re the databases that you can use both for OLTP and actually data warehouse workloads. Alternatives are normally used mainly in analytical environments, and then normally it’s determined by the data as to why we’d choose that rather than relational. Mostly people don’t.

Companies tend to standardize on a single database. I came across a company recently that had over 5,000 instances of Oracle. And I kind of, the person I was talking to from that company, I kind of asked them about the DBAs. They said they had about 10 DBAs and about 30 databases were being looked after. And the rest, Oracle was just being used as a filing system, by and large. There was very little stress on the data from the applications that used them. But that just kind of amazed me – 5,000 instances of Oracle.

And, by the way, they had an Oracle estate license. Well, you know, corporate license, obviously. But they also had other databases because sometimes, you know, applications come with a preferred database. It wasn’t like Oracle was the only thing. And worth mentioning that neither Hadoop nor Spark is actually a database, and it’ll be a long time before they acquire what I think of as a database role. Good for data lakes, of course.

With DBA activities – probably Bullett can say an awful lot more about this than me – but I’ll just run through them. These are what I tend to think of, you know, what the DBA does. They install, config, upgrade, do license management. They do a lot of ETL and replication work in one way or another. They do storage and capacity planning. They do troubleshooting or they’re part of the troubleshooting team. Performance monitoring and tuning is pretty much most of their activity, but all of this other stuff, it’s not small, you know. Security, they’re responsible for backup and recovery. They ought to be involved in software test systems, and they could be involved in data life cycle.

Performance. When I used to be one of these guys, when I was running and tuning databases, this was how I understood it, you know? There’s the CPU, and in one way or another in our day, the CPU was pretty much normally idle, because one of the other bottlenecks would actually be causing the problem. Memory, thrashing and fragmentation, or disk, or disk I/O saturation, sometimes network overhead, if you’re running in multiple nodes of a network, and you could actually run into some locking, probably.

But that was the world as I saw it. I took a look recently at Oracle and the number of tuning parameters that there are in Oracle. It was over 300. You know, and if you actually think about it, a DBA that really knows what he’s doing has to have some idea as to why you would ever mess with any one of those. So it’s a complicated job, you know, and it’s more complicated by this.

You know, right now we’ve got CPUs, but you’ve got – the CPUs already existed – GPUs on the CPU, or FPGAs on the CPU. So there’s a kind of crossbreeding going on of what actually happens on a CPU. CPUs became multicore a long time ago; actually, I was no longer tuning databases when that happened. I have no idea what difference it actually makes, now that I think about it.

We’ve got, you know, 3D XPoint and IBM’s PCM coming up as an extra layer of memory, and we’ve got SSDs, but you know, they are replacing spinning rust. But SSDs can vary in speed. With so many, you can have parallel access and it makes them go incredibly fast – close to RAM speed. And you’ve got all of the parallel hardware architectures.

And this is all, you know, the costs are falling, which is a really nice thing, but this is all making – you know, if you take the next release of a database and then you start implementing it on machines, even some of this, you’ve actually lost any gut feeling you might have for the way the data behaves, because the latencies are just very, very different. And here, you know, you’ve got four layers rather than three layers of storage.

Database issues. You get database entropy – proliferating instances is very common. Databases being used as cupboards, which is what actually that example I gave was. Very few databases are self-tuning, and the ones that claim to be self-tuning are not actually that good, you know. But the other thing is, very few databases are properly tuned. It’s a tough job, being able to balance workloads. I mean, when you think about a database, what a database may be doing over a 24-hour period, the workloads may be very, very different. The database has to have a particularly true [inaudible] data warehouse.

And therefore, tuning that is not a trivial matter, you know, because what you’re doing is tuning parameters that have got to cater for a whole range of workloads over a given period of time. It’s a tough job, basically. And SQL needs to be tuned, particularly for SQL JOINs. They can be extremely, you know, resource-consuming. And if the database has materialized views, to be honest, you should be investigating the use of those, because they’ll make everything go incredibly fast. And that requires somebody that understands the workloads and understands the SQL traffic and so on and so forth.
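
For reference, the “materialized views” Robin mentions map to indexed views in SQL Server; a minimal sketch, using a hypothetical SalesOrderDetail table, might look like this:

```sql
-- Hypothetical example: an indexed view that pre-aggregates order totals,
-- so repeated reporting queries don't have to re-scan the base table.
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING                     -- required before the view can be indexed
AS
SELECT  ProductId,
        COUNT_BIG(*)   AS OrderCount,  -- COUNT_BIG(*) is required in indexed views
        SUM(LineTotal) AS TotalSales
FROM    dbo.SalesOrderDetail
GROUP BY ProductId;
GO

-- The unique clustered index is what actually materializes the view's results.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
    ON dbo.vSalesByProduct (ProductId);
```

The trade-off is exactly what Robin describes: someone has to understand the workload well enough to know which aggregations are worth materializing, and to accept the extra write cost of maintaining them.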

And most companies employ very few DBAs – very expensive. I’ve known fairly large companies with, like, three guys, you know, massive number of instances. Really, they cost a lot, it’s a hard job in terms of the complexity. They need tools.

And I think that’s all I’ve got to say. Oh, yeah. Let’s hand on to Dez, see what Dez has got to say.

Dez Blanchfield: Thank you, Robin. This is a massive topic. I’m going to keep to the things that I think are effectively day-to-day challenges that we face. Because let’s face it, there’s an entire library of books written on this topic. Who hasn’t gone to a technical bookstore and found walls and walls of books written on just the general topic of database performance and database tuning and monitoring? And sometimes it’s a great way to kill time.

The general topic: getting performance queries. There are a number of different parts of the organization that sweat this topic – at your end-user level, in my experience, you know, people just experience performance, that things are slow. Spinning wheels take a while to get the queries coming back. At the opposite end of the spectrum, you’ve got infrastructure and network and storage engineering people who are being beat up by database specialists because things aren’t running as well as they expect. And it’s a very broad spectrum, in my experience, the things that can impact our lives in that spectrum.

If you think about, from the physical [inaudible] upwards, you know, just the computer space. It’s got memory, you know, RAM, if you like – disk space, network, and all the bits around that. In this space, we’ve got, you know, the school of thought that says, you know, it’s better to use raw disk or a JBOD and just, you know, drive the disk as fast as possible and let the database sort out the data protection layer. Other people are big fans of RAID, the redundant array of inexpensive disks, and have different religious experiences with RAID 0, 1, 3, sometimes 5 and 6 – different types of striping or replication on disk, in case the hard disk fails. Even at the storage level and engineering level, still, we’ve got people who have different views and experience around performance, on types of storage.

Whether it’s direct-attached disks and the servers themselves, or whether it’s connected via a fiber channel with a storage area network of some form, whether it’s storage mounted from a server somewhere via iSCSI or is it Ethernet, for example. And that’s before you even really get to the database layer, where, you know, the sorts of things that we take for granted that – you know, just maintaining that, as Eric outlined, you know, what we call the conversation with your data. Just being able to identify patterns and meaningful patterns that we think we can start to dive into and look for performance issues.

And it’s a very broad topic, so I’m going to dive into two areas where, in my experience, time and energy and effort invested gets some good returns. So let me just quickly skip to the first of these. And I only half-jokingly went looking for a picture of something that had a skeleton on the inside and skin on the outside, but the Lego block was probably the least gruesome. But in many ways this is how I kind of imagine and mentally picture the challenge that we face sometimes with analytics platforms and databases supporting them. And that is that, you really only, as a consumer and end-user or even a developer, often see the veneer skin layer, but it’s actually the skeleton underneath – it’s really the issue that you need to focus on.

You know, in this case, when we think about the things that can impact database performance and analytics resulting from that particular day, the performance hits, the core infrastructure and just monitoring that core infrastructure, and as I outlined just a moment ago, around your disk and memory and CPU. And as Dr. Robin Bloor highlighted, challenges now in virtualization and things that are happening in the chips themselves, and performance down to core level, and the amount of memory that’s now being put into each chip in each core. These are very technical challenges to look into for an everyday person.

Keeping on top of query monitoring. You know, one of the challenges around monitoring queries and query queues is for example – I mean, SQL as a language and the database tools that come around analytics tools, are very powerful, and particularly SQL as a language. But with that power and simplicity also comes a [inaudible], in many cases, and that is that, if it isn’t an application doing the same thing over and over and over, written by a good developer and spotted by a good DBA, it might be people doing unstructured queries.

And the problem with that is, it’s quite easy to learn a little bit of SQL and start making queries, but as a result of that, you don’t necessarily have all the skills and experience and knowledge to know whether you’re doing a good or bad thing to the database. So continually running the same big, broad, [inaudible] wrong can just take the building down. Keeping on top of query monitoring is an interesting challenge.

Just monitoring response times as far as what the platform’s doing and what users are getting. Again, you know, without the right tools, this is not something that you just intuitively look at and think, “Oh, the network’s running slow,” or “The memory’s not performing well,” or “Indexes are performing badly” or “[inaudible] are bloating.”

And then, you know, how do you get to the point where you, once you’ve seen an issue with it, how do you pull it apart and unbundle it and address the whole challenge of poorly structured queries? And, you know, is it an ad hoc query that someone’s running by hand, or is it an analytics tool with a dashboard front-end that’s performing badly because they’re asking the questions the wrong way, or is it just a really, really badly written piece of code?

And then doing that iterative thing, as Eric said in the set-up initially, you know, just iteratively going over and over and over and fine-tuning those workflows. You know, what workflows am I running, how are they running, how often are they running, what code’s running against them, where are they running against it in CPU and memory and disk and network? Yeah, that’s just a really, really technical challenge.

And then there’s the nirvana that people are looking for in this world, which is shifting from historical analytics and performance tuning and alerting against your environment – which is great to have, because you might be able to plan for the future if you know why things went slow yesterday morning at nine o’clock. But that doesn’t help you right now, and it doesn’t help your plan going forward.

I think that capacity planning and sizing and scaling and tuning – so, you know, I think there’s a trend we’re seeing now, where there’s a shift in very large environments, where people’ve got big database platforms and broadly spread database environments, to go from historical alerting and planning to predictive alerting and planning, where they want to know what’s happening right now and be able to plan for it going forward. Are we running out of memory, and are we going to run out of memory in the next hour, and what can we do about it? What capacity planning can we do in real time?

Excuse me. It gets to the point where, you know, just the whole challenge of these hurdles getting in the way of essentially what we refer to as fluid analytics, and making that the norm in your organization. As you can see, it’s a non-trivial challenge for, you know, just the everyday great, unwashed masses. And it’s still a non-trivial challenge for even the more technically savvy.

You know, if it’s difficult for mere mortals, how do we make this a thing that’s possible? Because, you know, most of these are things that regular users can’t resolve, and we may have some special database engineers, database developers, code developers, programmers, but they’ve still really got to be able to unbundle the environment. They’ve got to pull apart, you know, issues like people reusing code.

You know, one of the worst things that I’ve seen in this space around performance hits in analytics platforms, in very large deployments of database server infrastructure, is people taking a piece of code, a chunk of SQL or a stored procedure that they didn’t write, and they don’t know whether it’s a good or bad piece of code, and they just reuse it because it gives them the outcome they want. But it turns out that it may have been just something that was written on the fly to get one or two outcomes, like an [inaudible] report – someone was in a hurry.

And so people are using complex code they didn’t write and just slapping it into a piece of application development, not knowing that they’re actually punishing the back end. Even just monitoring that performance hit and looking at where the queries are coming from and drilling down – that, you know, that’s an everyday challenge I see.

Basic behavioral things like pre-staging data for performance where it’s possible. Things that just experience only teach you, like deleting indexes if you’re going to do bulk imports and then re-index so the indexes aren’t being maintained when you’re pulling in terabytes of data. You know, without the appropriate tools, that’s almost impossible to see because you don’t know the index is getting hammered.
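
As a rough sketch of that bulk-import pattern (table, index, and file names here are hypothetical), you might disable the nonclustered indexes before the load and rebuild them once afterward, rather than letting them be maintained row by row:

```sql
-- Disable nonclustered indexes so they aren't maintained during the bulk load.
ALTER INDEX IX_Staging_CustomerId ON dbo.Staging DISABLE;
ALTER INDEX IX_Staging_OrderDate  ON dbo.Staging DISABLE;

-- Load the data in one pass.
BULK INSERT dbo.Staging
FROM 'C:\loads\staging.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

-- Rebuild everything once, after the data is in.
ALTER INDEX ALL ON dbo.Staging REBUILD;
```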

Optimizing indexes regularly is sort of a 101, but what about, you know, when you do bulk imports or, you know, creating a table on queries if someone does a really big query? You know, that can be a massive performance hit, and again, if you’re not monitoring, you don’t have the tools to see that, that sort of just happens in the background and you don’t know how to address it.
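
The “optimizing indexes regularly” point usually starts with measuring fragmentation; one common sketch against the standard DMVs (the 5 percent cutoff is a convention, not a rule) is:

```sql
-- List fragmented indexes in the current database, worst first.
SELECT  OBJECT_NAME(ips.object_id)       AS TableName,
        i.name                           AS IndexName,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
WHERE   ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;
-- Typical follow-up: ALTER INDEX ... REORGANIZE for light fragmentation,
-- ALTER INDEX ... REBUILD when it's heavy.
```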

Limiting queries to just the number of columns you need – I mean, it sounds really basic, but again, if you can’t see it, you don’t know it’s happening, and then it just happens in the background and it hurts you, [inaudible] at you.
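
A trivial illustration of the column-limiting point, with a hypothetical Orders table: the second form can often be answered from a narrow covering index instead of dragging every column off disk.

```sql
-- Pulls every column, including wide ones the report never uses.
SELECT * FROM dbo.Orders WHERE CustomerId = 42;

-- Pulls only what's needed; an index on (CustomerId) INCLUDE (OrderDate, TotalDue)
-- can often satisfy this query without touching the base table at all.
SELECT OrderId, OrderDate, TotalDue
FROM   dbo.Orders
WHERE  CustomerId = 42;
```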

Knowing when and where to use temporary tables, batching up large deletes and updates. Again, all very simple things, but without that visibility, without the tools to do that, they just sit in the background and keep hurting you and you just keep throwing more memory or CPU at a database environment to get better analytics platform performance, when really you should be able to drill into the detail of what’s hurting you and address that particular thing. And then, you know, things like foreign key constraints and how do you find that, how do you even know that’s an issue?
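
And for batching up large deletes, one common sketch (again with hypothetical names) works through the rows in chunks so the transaction log and lock footprint stay manageable:

```sql
-- Delete old audit rows in 5,000-row batches instead of one huge transaction.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM   dbo.AuditLog
    WHERE  LoggedAt < DATEADD(YEAR, -2, GETDATE());

    SET @rows = @@ROWCOUNT;   -- loop ends when nothing is left to delete
END;
```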

That brings me to the conclusion of my key point here, and that is that, you know, on a day-to-day basis, we see these problems all over the place. And as database environments get bigger and bigger and more and more broad, and as Dr. Robin Bloor highlighted here, we get more and more complex environmental models with database times.

And then also the need to integrate into some of the big data platforms like Hadoop and Spark that are coming along, more and more all the time. It behooves us, in my view, to find better ways, and particular tools, to perform this real-time platform performance monitoring and analytics and diagnostics intelligently. Because it costs real time and real money and frustration for end users, and real dollars, if we don’t start to get the tools to dive into these things.

And with that, I’m going to hand over to our friends from IDERA, because I believe they’ve got a good story to tell on how we might be able to address this very problem.

Bullett Manale: Sounds good. Thank you very much, and I will go ahead and kick things off. I’ve got a few slides here as well, and let me go ahead and kind of bring that up. Some of these we’re going to jump through pretty quickly.

Just to give you some insight, I’m the director of sales engineering here at IDERA, and so what we do is kind of talk to DBAs pretty regularly about the pains and the challenges that they have, specific to, in a lot of cases, performance monitoring and those kinds of things, obviously. And we hear a lot from that audience, and so I think I can share some of the information that I receive from them on a regular basis that’ll make sense. I’m going to jump through a few of these, ’cause I don’t think that they’re real pertinent to the conversation.

You know, I have my own list here of the responsibilities of the DBA – it looks a lot like Robin’s list, and I think that it’s pretty consistent. I think when you talk to a database administrator, though, it’s always – you know, they’re lumped into some of these areas more so than others and there’s no rhyme or reason to that, it just depends on the environment.

You hear a pretty wide, broad range of things that people want to be able to do. And a lot of times, the people that want these things don’t– they’ll ask for them and, in some cases, you start kind of drilling into what they’re really asking for, and then you find out that they’re really looking for more. They really want more information than what they initially think that they need, and when you start drilling into the tool, I think that that’s where you can start saying they’re having a conversation with the data.

And I think that that’s a real interesting phrase, and it makes a lot of sense in terms of being able to say, yeah, well, if you have a bad query, what is really a bad query? Is it a query that is consuming a lot of reads or writes or CPU? It could be one that runs a lot, it could be one, you know, that’s, like you said, poorly written.

In terms of how we identify that, there’s a number of ways that you’ll see in terms of our product, the Diagnostic Manager product, that we show the DBAs that they can go about that. And it’s real flexible, and I think that’s one of the big things about having a tool that’s going to help you with these performance problems – everybody’s environment’s a little bit different.

And there’s going to be a lot of, you know, needs and maybe even obscure requirements in terms of monitoring, so you’ve got to have something that is flexible and something that is going to work and be able to conform to the environment that you’re trying to manage. You know, and I have a lot of these examples – I’m not going to go through each one of them, but you need something that you can pivot back and forth between one piece of data and another, and I’ll kind of talk about that when we get into the product a little bit and show you that, and in terms of how we do it.

But the other thing that I think in terms of any good analytics tool is, you know, there’s some core things you’re really looking for. You obviously first and foremost don’t want a tool that’s going to cause its own performance problems in the name of performance. When I say collect the data at no cost, I’m not talking about the cost in terms of, you know, monetary cost, but in terms of the cost in terms of overhead and the cost in terms of the amount of resources that we’re going to use in the name of performance. You definitely want something there that’s going to help.

You need something that’s going to be able to get the data that you’re looking for specific to the problems that you face within your day-to-day, and there might be some things that you don’t need and that you don’t want, and there’s no sense in collecting that data if you’re not going to ever report on it or going to have any needs around trying to manage that data. In terms of the metadata associated to performance, for example.

You know, a good example is, I don’t need to be alerted if the Distributed Transaction Coordinator service in SQL is down if I don’t want it to be running in the first place. So don’t alert me, don’t collect the data against it – I don’t need that information. So having the ability to turn those things on and off is real important.

The ability also to, once you do collect the data, having access to it pretty quickly – you don’t have to, you know, run and massage the data, manipulate the data – being able to do it quickly and efficiently. And then once you have the data, obviously it’s really important to be able to understand it.

Now, this is where, with our – with, like, for example, the Diagnostic Manager product I’m going to show you a little bit today – that product, you know, I would love to tell you that that product is going to replace and be a DBA in a box. The reality is, it requires some knowledge of what your environment is and what you’re trying to accomplish. Having some, obviously, understanding of the role of the DBA itself is obviously important.

Now, what we try to do is educate through the help and through other methods. But you’re always going to want to tie this, obviously, with some type of experience levels or somebody that has some knowledge of what to do once they’ve received the data. And being able to have a person that can ask the right questions to a product, and having that conversation with the data, is obviously key. And then obviously being able to make sense of the data.

Once I have the information, being able to get that out to the right people. My developers, my operations team – whoever it might be, I might need to integrate with other products, having the hooks to be able to do that. These are all real important things. And then, obviously, last but not least, if I need to know more, being able to do that. Whether it means turning on some more to be collected on, or whether it means just going to a little bit deeper into the data. You’re hoping that, with a tool that’s going to be, you know, helping with performance, you’re getting all of the things that you need to be able to answer those questions.

The one thing that I didn’t put on here that I think is probably worth noting is, you need a tool that’s going to help you differentiate what’s normal and what’s not normal. And I think that’s a big one, because, you know, there’s a ton of alerting products and things that are out there, but if you’re getting an alert and the alert is a false alert, it doesn’t do you any good; it’s more of a waste of time and it’s going to reduce your efficiencies more than it’s going to help them. So, you know, those are some things I would keep in mind.

When I talk about the product that I’m kind of tying all of these things to within the IDERA products suite, it’s the Diagnostic Manager product I think that has probably the main kind of characteristics in what we’re talking here in terms of database tuning and performance and monitoring and those kinds of things.

People are looking for enterprise-level monitoring; they want to be able to have access, to be able to, in one screen, know that things are working the way they should be. Or they want to be able to, obviously, if there’s a problem, see where the problem is and then be able to drill down into it. That’s a real big part of, I think, what people are looking for – these types of ways in which you can really hone in on your performance.

The other thing that obviously goes along with that is, I can’t just operate in the present, and I need to be able to go back over periods of time, whether that means looking at queries that ran poorly or whether it means, you know, looking at the way that the host VM itself was behaving in terms of resources. All of those kinds of things you’ve got to be able to do, and you’re not going to be sitting there staring at your console 24 hours a day, 7 days a week.

If you’re on vacation or if it’s in the middle of the night, or whatever it might be, you need something that’s going to be able to go back in time with you to be able to say what was going on in the instance at the time we had a problem. And being able to do that, once again, efficiently and quickly and be able to drill down into it is definitely an important piece in terms of this discussion. And I’d say it’s probably one of the more important things in terms of what people are looking for. They’re always looking for that window into the past, because that’s really an im– You know, you don’t want to have to sit there and wait for something to happen again.

The next thing on the list is really just tying back to what we were talking about earlier, with the query performance itself. And I’m going to show you a couple different examples within the Diagnostic Manager product, how we do that, which, surely at the end of the day, it’s going to provide you a lot of options around the queries themselves in terms of what you want to gather.

In terms of whether you’re interested in queries that are causing resource pain, consumption of CPU or consumption of I/O. Whether it’s queries that take a long time to complete or queries that just in general may not be the worst offenders in terms of performance, but may run so frequently that the sheer frequency of it running could be a problem. And obviously being able to spot trends over time with those queries is an important part of it too.
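
Those categories – reads, CPU, duration, frequency – are all visible in SQL Server’s own plan-cache statistics, which is roughly the kind of raw material such tools draw on; a bare-bones sketch of “worst statements by CPU” would be something like:

```sql
-- Top 10 cached statements by total CPU; execution_count shows the
-- "runs so often it hurts" case, total_logical_reads the I/O case.
SELECT TOP (10)
        qs.total_worker_time / 1000 AS total_cpu_ms,
        qs.execution_count,
        qs.total_logical_reads,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset END
              - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```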

There’s a lot of different ways in which we can do that within this product, and I think that obviously that’s a real important piece to most DBAs. And even if you don’t have your own internally developed applications, it’s still nice to be able to go to your software vendors and say, “Hey, you know what? You know, two o’clock in the afternoon every day when this job takes off,” or whatever it is, “It’s your application that’s causing this, and we needed to get it fixed.” So even if you don’t have complete control over the code itself, it’s still nice to know when problems are happening.

And then, you know, the other part is just obviously being more proactive. Being able to be the first to know, being able to understand when a problem is occurring. To not only be able to be the first to know so you can correct it, but in a lot of cases, what you need is something that will be able to automate a response, too. You can, say, you know, rather than getting an email saying, “Hey, you need to go fix this,” if I’m in a meeting or if I’m, you know, on the road or whatever it is I’m doing, it’s obviously very nice to be able to say I’ve got something in place that’s going to be able to address that in an automated way.

And if it’s not addressed in an automated way, at least being able to be the first to know so you can take corrective action or contact somebody that can. And so those are obviously big important pieces to, you know, these types of problems you might run into in terms of the monitoring of your machines and your instances and the analytics themselves.

Now, I talked about this earlier, which is the flexibility of things. I can’t stress this enough, being able to say, you know, out-of-the-box, if there’s something that is not being monitored, being able to have the functionality within a product to be able to add those things to be monitored. And in the sense with the example of Diagnostic Manager, we have obviously, you know, WMI counters, [inaudible] counters, SQL Server counters, you can create your own queries.

You can even, you know, if you want to, pull the data from your vCenter environment or your Hyper-V environment, as a result of the polling that’s taking place and being able to, you know, do that on a regular basis and pull that data and be able to view it. And, once again, pivot from one place to another as you’re looking at this information.
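
The SQL Server counters mentioned here are also exposed directly through a DMV, which is the sort of thing a custom counter or query can wrap; for example:

```sql
-- Read a buffer-pool counter straight from the performance-counter DMV.
SELECT  object_name, counter_name, cntr_value
FROM    sys.dm_os_performance_counters
WHERE   counter_name = 'Page life expectancy'
  AND   object_name LIKE '%Buffer Manager%';
```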

So those are the kinds of things that, in terms of what I see people asking for when they’re talking about a tool that’s going to help them in terms of tuning and performance – the product I’m going to show you in just a second is Diagnostic Manager, and it supports everything from 2000 all the way up to 2016. It is specific to SQL Server, and so we monitor and manage those things. There are no agents on the instances themselves that are being monitored.

That goes back to collecting the information at little cost – you know, obviously, while gathering this information, we try not to use a lot of resources ourselves. We’re trying to leverage the things that SQL Server’s already providing to us and making it better, whether it’s dynamic management views, or whether it’s extended events, or whatever the case may be in terms of the collection itself. Being able to leverage that information and make it better is one of our mandates.
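
As an illustration of the extended-events side of that, a lightweight session that captures statements running longer than a second might look like this (SQL Server 2012-or-later syntax; the session and file names are arbitrary):

```sql
-- Capture completed statements over one second to a rolling file target.
CREATE EVENT SESSION [LongQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 1000000            -- duration is reported in microseconds
)
ADD TARGET package0.event_file (SET filename = N'LongQueries.xel', max_file_size = 50)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

ALTER EVENT SESSION [LongQueries] ON SERVER STATE = START;
```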

Now, if you look through this real quickly, I’m not going to go through the architecture in too much detail, but having a back-end repository with all of our historical data that you can manage and you can keep for as long as you want. You can even choose the type of information that you want to keep, and for how long. That goes kind of back to that, collecting the appropriate data and leaving the unnecessary data out. If you want to keep the poorly performing queries for five days and then keep your alerts for two years, that’s up to you and that’s completely your prerogative in being able to do that.

A number of different consoles with this product. You have a web-based version, you have a thick client version as well. And so that’s having the flexibility of jumping on a browser and seeing what’s going on, or if you have a laptop where you have a dedicated client installed, either of those approaches would work for you.

Now, what I’d like to do is kind of do a quick demonstration. And I would point out – I’m going back to this other slide here – that we do have, we’ve just added, just as an FYI for those folks that are familiar with the product, we have a new offering which is the Diagnostic Manager Pro. A professional offering which includes with that something we call Workload Analysis.

And really it’s about being able to interactively look at very large periods of time and go from that, you know, 30-day view to the, you know, five-minute view in about three clicks. And being able to see the spike in performance, or the problem, or the bottleneck, that, you know, you’d be able to see at a very high level, and drilling down to a very low level. That’s new to the product as of today.

But what I want to kind of do is just kind of first start off, and I want to talk a little bit about that pivoting and going back and forth. And I’ve brought up an example, and I’m going to share on my screen here. And, let’s see… There we go. My screen. And let me know, guys, that you can see it.

Eric Kavanagh: There you go.

Bullett Manale: Everything’s okay over there? Alright. So, what you’re looking at right now – and this is the Diagnostic Manager product – and I just wanted to give you a kind of a high-level demonstration of what’s going on here. In this particular example, what we’re doing is we’re showing you the queries that are associated to waits. And so when I talk about being able to go back and forth, drill down deeper, and pivot, that’s – this view here is a good example of that. I can go from a timeline view like we see here, which is going to display now. In our case we’re looking at the waits themselves and the categories of the waits themselves. We can see the statements that are tied to those waits, we can see the applications.

Notice it’s on a timeline view here, so I can identify that information linearly based off of when the problem happened, but then again, if I want to just, once again, pivot, and I say, “You know what, let’s look at this from a different perspective,” let’s go ahead and look at this from the standpoint of, “I want to see the queries or the waits or the applications that are causing me the most pain, and rank them.” And that’s what we’re going to see by “query waits by duration.” Now we’re seeing the applications themselves that are causing me my most amounts of pain, or the waits.

And then, here’s the part that’s really the most important part, is being able to isolate these things. I can see this NoSQL application is kicking off here. It’s causing me a good amount of wait time, well into the 25 seconds of wait time within this 30-minute window that we’re drilled into. And I can then isolate that application and I can see the statements, in this case, that are directly affecting this particular instance.

And so this is just one example of how you would be able to identify a bottleneck, be able to rank the information, being able to prioritize the issues that need to be addressed first. These are all things that you have to consider. You know, you can fix problems all day long, but if you’re fixing the problems that are at the bottom of the list to be fixed, then you’re wasting your time. You have an opportunity cost associated to that.
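
For context, the raw ingredients of that kind of wait analysis live in the DMVs; outside of any tool, a minimal “what is everyone waiting on right now” query looks roughly like this:

```sql
-- Current requests, their wait types, and the statement responsible.
SELECT  r.session_id,
        r.wait_type,
        r.wait_time,              -- milliseconds spent in the current wait
        r.blocking_session_id,
        t.text AS statement_text
FROM    sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   r.session_id > 50;        -- skip most system sessions
```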

I’ll give you another example, and this is a little bit of a different example. Rather than specifically pointing to a problem or pointing to an area, you also need a tool that’s going to be able to help you in a broad sense, in being able to say, “Hey, have we had any problems?” or “Are there things that I can do to improve the performance?” and to have something kind of behind the scenes, watching what’s going on. And in this case, this can be related to the configuration; it can be related to the, you know, way in which the health of the instance itself is being managed. And also, obviously, performance things as well.

If I go to this Analyze button over here, the thing I’m going to show you is that, within this product, we also have kind of a proactive listing of things that can be performed, in a ranked format, that will essentially provide you insight into the things that will likely give you an increase in your performance on that instance, or an increase in the health of that instance. And it’s in a ranked format in the sense that you have that ability to see which ones are more likely to improve your performance specific to a particular type of problem that’s been identified.

And so, when I look at these things and I identify them, not only do I see that I have a problem and I have also, in a lot of cases, a script that can be built automatically to fix that problem. But in many of these cases, we also have external links that will reference the type of problem that we’re experiencing, and then why we’re giving this recommendation as well, so you get that educational aspect of things. Which, once again, I think is very important when you’re talking about, you know, fixing problems.

I don’t want to just blindly follow these recommendations, I want to understand why these recommendations are being made. And I might be a senior DBA that’s been doing this for 30 years and I need something that’s going to, you know, check the – or dot the i’s and cross the t’s, in this case – or maybe I’m a junior DBA and I need a little bit of help in terms of understanding these problems as they’re happening, and why these recommendations are being made.

Like I said, I’m just going to take you through a couple of different parts of the product. This tool’s been around, you know, it’s been around since 2004, 2003. And it really has a lot of development put into it, a lot of information, so it wouldn’t make sense to try and show you everything here. But I think one of the things that’s worth noting is that, when we go in and we start talking about all of the things that you can monitor and all the things that you can alert on, once again, going back to that flexibility of things, here’s a listing of all the items that we are monitoring.

Now, it doesn’t necessarily mean I want to consider these things to be in an alert state if they get out of whack in terms of the threshold, so you can turn these things on and off. This goes back to that, “Hey, I only need to do certain things to certain metrics. I only need to, you know, alert on certain problems.” And be able to make sure that we’re not going to, you know, saturate you with a bunch of false positives. Not only do you have the ability to turn these things on and off, but in many cases, you’ll notice that we also provide that band of normalcy as it relates to each metric. So if I’m looking at this particular metric – in this case, a baseline – I would notice that the threshold’s probably higher than where they’re at right now.

On the other side of the coin is, what if I have an instance of SQL, where I’m tracking some metrics and those metrics, for whatever reason, the thresholds I’ve set are incorrect? In other words, the thresholds are smack dab in the middle of where the baseline is actually sitting, which means if I have an alert tied to that threshold, I’m probably gonna be getting an alert for something that’s a normal event. And so, in those kinds of situations, we can provide you that insight as well across the board.

For all of the metrics on this particular instance, I can see those thresholds that are probably likely going to show a false positive here in terms of what’s normal and what’s not. This is going to be something that would be considered more of a normal usage thing on the memory side, and if I wanted to increase that threshold, I could, but that’s kind of the idea with the baselines.

And the cool thing about the Diagnostic Manager product in terms of the baselines themselves is the ability to set multiple baselines. And you may ask, “Why would I want to do that?” And the answer is, well, if you have a maintenance window that runs from, let’s say, midnight to 4 a.m., where you’re really taxing your resources, you’re really using the resources as much as possible, then you want to be able to, once again, shift, and you want to pivot a little bit and say, “Look, we’re going to change our thresholds for that.” And we can actually dynamically adjust our thresholds particular to whichever baseline we happen to be in, based on the time of day or day of the week, and so on. So it’ll then dynamically adjust those thresholds for us.
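
The baseline idea itself is easy to express in SQL; assuming a hypothetical MetricSamples table of collected values, a “band of normalcy” per hour of day is essentially a mean plus a couple of standard deviations:

```sql
-- Hypothetical: derive an hourly baseline band from 30 days of CPU samples,
-- so alert thresholds can sit above "normal" instead of inside it.
SELECT  DATEPART(HOUR, SampleTime)              AS hour_of_day,
        AVG(CpuPercent)                         AS mean_cpu,
        AVG(CpuPercent) + 2 * STDEV(CpuPercent) AS upper_band
FROM    dbo.MetricSamples
WHERE   SampleTime >= DATEADD(DAY, -30, GETDATE())
GROUP BY DATEPART(HOUR, SampleTime)
ORDER BY hour_of_day;
```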

Let’s take a step again. Once we’ve identified those thresholds, once we’ve gone through, and, in terms of setting up alerts and notification and being apprised of these situations that might happen, once again, flexibility is paramount here. You want to be able to alert in specific situations. In other situations, you might want to send an email to somebody else, you might want to run a PowerShell script, you might, you know, the list goes on.

I might want to integrate with something via SNMP trap or even directly with, for example, SCOM. The point is, you have the flexibility to do that, and you can set up whatever types of conditions would warrant that, whether it’s a very broad-reaching condition – you know, my CPU and memory or whatever resources – across all of my instances, or maybe I have a very specific type of thing I want to monitor for because, when I find that we’re in violation, I want to run a very specific and directed script at that problem. So this is where you would be able to do that kind of stuff inside of the Diagnostic Manager product, just, you know, in terms of the alerting and the notification, and being able to be flexible from that standpoint.

Now, I won’t go through all of the alerting and all of that good stuff. I did want to talk about the reports. And, once again, being able to take the information and leverage that data in a number of different ways – and this goes back once again to the conversation with your data. And a lot of people, when they first see this product, they think, “Oh, well, I’m going to have a tool that’s going to alert me when there’s problems. That’s what I need.” And the reality is, they need that tool, but the other side of it is, they also need a tool to help them make decisions, and they can leverage this information that we’re collecting in the name of performance, and also in the name of alerting, to help make decisions down the road, moving forward.

You know, a good example would be my growth forecasts within my database. If I have a specific database that’s growing, being able to point to that database or even multiple databases to be able to see what the growth rates are. We’re not showing you based off of what, you know, what it is today; it’s going to forecast it out based off of the past growth that we’ve experienced.

If I’ve got a few databases here – which I happen to have, imagine that – I could go in and say, “Let’s take the last, you know, year’s worth of data, let’s correlate that by month, and in a sample rate of months, let’s go ahead and see how much growth we’re going to see in the next three years, or 36 units.” In which case, we can very quickly answer that question. Now, try to do that on your own, right? Try to do that in as much time as I did it in on your own. It’s going to take you a while.
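
If you wanted to rough out that growth calculation by hand, one place the history already exists is msdb’s backup catalog; a simple sketch of size by month, which you could then project forward linearly, might be:

```sql
-- Database size per month, taken from full-backup history in msdb.
SELECT  bs.database_name,
        CONVERT(char(7), bs.backup_start_date, 120) AS year_month,  -- e.g. '2016-11'
        MAX(bs.backup_size) / 1048576.0             AS size_mb
FROM    msdb.dbo.backupset AS bs
WHERE   bs.type = 'D'                                -- full backups only
  AND   bs.backup_start_date >= DATEADD(YEAR, -1, GETDATE())
GROUP BY bs.database_name,
         CONVERT(char(7), bs.backup_start_date, 120)
ORDER BY bs.database_name, year_month;
```

The point Bullett is making is that the report does this correlation, and the forward projection, in a couple of clicks rather than by hand.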

Now, to even kind of further stress that, let’s take another report, which is my top servers report. Imagine I have a hundred production instances, which in this case, I don’t. But if somebody comes to me and says, “I need you to tell me – we’re going to put this new database out there for this great new application; it’s going to change everything as we know it; it’s going to make life so wonderful. Oh, by the way, the database itself is going to be really I/O intensive, or it’s going to be CPU intensive, or really memory intensive…,” whatever fill-in-the-blank it is, I want to be able to see, of all of my production instances, where does it make sense to put that database? And I can rank all of my instances against each other in terms of the contention type, whether it be CPU, memory, disk or whatever the case may be. And so the point here is being able to answer that question quickly and efficiently and making the right decision rather than guessing when you do it – those are all obviously real important, and you need something that’s going to help you.

And when we talk about analytics, it can range from anything like what we’re talking about with capacity planning to the, you know, alerts that you’re running into on a day-to-day basis that might deal with CPU, as well as obviously the queries themselves, whether there’s blocking and so on and so forth.

Another example of that would be, if I went to the administration section over here – actually, I take that back, the alerting section over here – querying the repository of our historical information for things that have happened in the past. Have I had blocking that’s occurred in my production environment? I don’t know, let’s find out.

I can go back to my Production tag and I can say, for all of my production instances, given whatever period of time, for any metric that I want to identify. If I’ve gone into an alert state on, in our case, let’s say blocking by count, not by seconds of blocking, I can go back and, in this case, a few months, if I need to – or in this case, one month – and I can see that blocking. I can see when it started, I can see when it ended, and I can drill down into any of these polling intervals if I need to, to see the specifics of the blocking incident itself. You need to be able to have something that’s very quick, being able to find what you’re needing and looking for rather than spinning a lot of cycles to do it. And so that, I think, is also important.

The last thing I want to kind of show you – and showing you this product, the Diagnostic Manager product – is we have, as I’ve mentioned before, we’ve gone in and we’ve added another component to our SQL Diagnostic Manager Pro offering. And that’s the Workload Analysis component. And this is a web-based version of this, in this case that we’re showing you here. But the point here is that, this allows you to look at a really broad period of time or a very specific window of time, and from, you know, a few clicks being able to see the code directly related to problems that might have happened.

As an example of that, if I’m looking at a four-week view, here I can see, right here, all the spikes in terms of the databases and the performance of those databases and where we saw wait activity on those databases. And you can see, if I see a spike here, the benefit of this tool itself is just being able to highlight that little bar right there. And then, when I do that, all of the stuff over here changes. We would be able to see the databases, we would be able to see all the commands that are tied to what’s behind that bar.

Same thing if I said, “Let’s look at the last four hours,” rather than the last four weeks. I can still do that. I can still highlight that period of time, and then from there – here’s, once again, here’s my pivot points – all of these things here I can link to. The top SQL statements, I can see those queries, in this case, that were causing waits that were related to CPU consumption. Just by drilling in, I can see those queries related here – whoops – and I can also see the programs and whatnot associated with this as well.

You get a lot of insight here, and not only that, but you can see, when you get down to the command level, it’s going to tell you things. It’s going to tell you whether it sees heavy operators, you can then view the execution plans. This is taking a little bit of time because it’s pretty extensive to load this one. But the point here is that you have a lot of different ways to view the data, to see what it is you’re looking for, and then obviously be able to take action from there as you need to, so, and this one’s taking longer than it normally does, so I’ll leave it at that.

And so with that said, I’m going to pass it back over. And hopefully this was a good demonstration of kind of the things we were talking about. And like I said, the product itself that we were using to kind of give these examples has been around quite a long time, and so a lot of other things we could talk about and show you, but if this is something that is of interest of you, you can always go out to our website and download it and play around with it.

Eric Kavanagh: And I love that you show all this detail. If you go back a couple screens – even this screen is pretty good. Because there’s so many different ways to visualize what’s actually happening, and I think this is one of the more under-appreciated aspects of computing these days. It’s certainly the case in a database environment that, in many ways – I have this half-joke I say: “We’re still learning to speak silicon.” We’re still learning to understand how to see what’s happening, and to your point, which was very well-taken, you need to have that conversation with data to better understand what’s going on, why things are going slowly, because there’s so many possible problems. And, of course, IDERA’s got a number of different products, one of which is the old Precise products that I think could be complementary to this.

But maybe Robin, I’ll throw it over to you for a couple of questions, and then Dez, a couple questions from you, and then maybe anyone from the audience, don’t be shy. Send them in now.

Bullett Manale: Robin, are you on mute?

Robin Bloor: Yes. It’s alright, I’m just taking myself off mute. I must say, it’s incredibly – the thing that actually struck me as most dramatic about this tool, because it really – especially given the fact it’s quite obvious that a whole series of dimensions you just didn’t go into – the thing that actually, I think, was most impressive about this is, it must be a really, really good way to train a DBA. You know, it’s – so when you first get into doing database work and you actually don’t know much about what is actually happening in a database, it is actually really, really hard to get an understanding. So is this used a lot, specifically for training? I would use it.

Bullett Manale: Yeah. I mean, when you say training, you mean kind of like a training-in-progress as a DBA kind of thing, right? In terms of…

Robin Bloor: Yes, yes, yes, yeah. A learning tool. You know, a [crosstalk].

Bullett Manale: Yeah, I would think for sure that’s the case, and even more so that we’ve added this, the Analyze component that we were showing you earlier, that has all of the recommendations that are tied to it. But I think for sure you’ll find, within the help and a lot of different areas within the product, it does provide you with, you know, a lot of insight. A lot of information.

And the reality is, like I said, you can use this if you’re not a DBA. You’ll probably find yourself doing some Google searches and things like that, just to get to the general knowledge that most DBAs have, but you can correlate this and it’s definitely going to help you in terms of, “Hey, you know, hey, what’s this thing called fragmentation?” or, “Why is this query running 6,000 times?” I mean, because these things will be brought up to you and they will bubble up, and you will see them. You’ll see, you know, what’s normal and what’s not. You’ll see the things that are spiking and the things that aren’t.

As a rule, we try to set this thing up in terms of best practices. So that, when you point it to an instance, it’s going to show you the things that are identified as outside of best practices. I mean, of course, you know, the reality is that best practices is best practices and it’s not always real practices. But, you know, it will show you the outliers, even from the initial point that you install it and point it to an instance.

And then from there you can kind of move along as you need to necessarily to fix the problems and identify whether that’s really a problem or something that’s normally happening on a day-to-day basis. And then, because you have a lot of information to help and the recommendations, yes, absolutely.

Robin Bloor: Alright. And another question – but I’m sure the answer to this is very swift – is that, you do have the granularity to go right down to the individual query and individual point in time and look from that dimension, [crosstalk].

Bullett Manale: Sure, yes. Depending on what you’re wanting to do, you can look at a one-minute window of time or you can look at a three-day window of time or, you know, a three-week window of time. And, you know, like I said, it depends on how you want to look at the data, and also what you want to collect. In some cases, we only collect the queries that are reaching a threshold that you’ve identified. In other cases we might collect, you know, every query that causes a wait.

But you also have the ability to say, “Look, those thresholds that I identified, maybe it’s just for writes, or maybe it’s just for reads, or maybe it’s just for CPU.” So, assuming that it’s surpassed that threshold, then that’s what you want to collect on. Then whatever timeframe that you want to look at, you would be able to see those queries that are offending, based off of what you consider to be offending.

You have a lot of different ways to look at the data. You can look at it in consolidated view to see, you know, the queries that – how many behind-the-scenes queries kicked off, versus, you know, every single incident of that query kicking off, to watch a pattern, if you will, to see if it’s continually getting worse.

But to answer your question, you can definitely point to whatever time you want. You have this thing called the History Browser – I was using it a little bit earlier – and basically, whatever point in time you select, whatever day on the calendar you select, you can go directly to that point in time.

Right now I’m looking at November 15th at 7:05 p.m., and we can look at queries specific to that time. If I had any that were running poorly in that window of time, we would be able to look at the session details specific to that window to see what sessions were running. I mean, there’s a whole slew of data here, and like I said, the hardest part, really, is maybe the 30 minutes of playing around with the console and figuring out how to do this stuff.
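(The history described here comes from the product’s own store, but the live equivalent of “what sessions are running right now” can be pulled straight from the engine. A minimal sketch, showing only the current moment rather than a historical window:)

```sql
-- Sketch: user sessions with an active request, straight from the live DMVs.
-- A history browser stores snapshots like this over time; this shows only "now".
SELECT s.session_id,
       s.login_name,
       s.host_name,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time          AS wait_ms,
       r.cpu_time           AS cpu_ms,
       r.total_elapsed_time AS elapsed_ms,
       SUBSTRING(st.text, 1, 200) AS running_statement
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
  ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE s.is_user_process = 1
ORDER BY r.total_elapsed_time DESC;
```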

But once you recognize that most of the data here is in this ribbon, divided by these tabs, and that each tab has its own set of dynamically changing buttons that appear when you click on it, then whether you’re looking at real-time stuff or stuff that happened last week, it’s the same process. I’m looking at November 15th right now, but I can just as easily look at real time by clicking that button, and I interact with the data the same way.

But to answer your question, yeah, there’s a lot of different ways to view historical information, and that also pertains to the queries themselves.

Robin Bloor: I see. It’s very impressive. And I love the fact that the windows synchronize, even though that’s pretty much become necessary in anything dealing with real-time data nowadays.

Bullett Manale: Yeah. Sure.

Robin Bloor: Here’s just a point of information that I actually don’t know the answer to. You offer this for SQL Server, and SQL Server also runs in the cloud – can you point it at cloud instances?

Bullett Manale: You can. You can point this at the cloud. When you actually add instances, it’ll ask you if it’s RDS or Azure. Now, there are going to be some limitations based on what’s being exposed to us from the cloud, so there’s a little bit of a difference in terms of what we can monitor, simply because the instrumentation, in some cases, isn’t there for us to gather, based on what Microsoft is exposing.

Now, if it’s something like infrastructure as a service – EC2 or something like that – that’s not a problem at all. We get everything. And as we work with Microsoft and with Amazon, we’re working to expose that information in more detail. But absolutely, yes, we do support those environments.

Robin Bloor: Okay, that’s interesting. Well, I’ll hand on to Dez, who I’m sure will throw you questions from a different direction.

Bullett Manale: Alright.

Dez Blanchfield: Thank you. I’ve got two very quick ones for you. The first one is about scale. One of the things that strikes me is that performance tends to be something we think about when we get very big – very large-scale, very broad environments with terabytes of data. Watching the demo, it struck me that this is something that actually applies even to very small environments that are just getting performance hits.

What kind of spread do you see in the uptake of this? Do you think it’s a tool that fits across the board – in my mind it does, so I think the answer is yes – but I’m keen to hear what you’re seeing. Are smaller organizations having the same conversations and looking for a tool to do this, or is it really something for the bigger end of town?

Bullett Manale: It’s funny – that’s a good question. It’s a little bit of a mix, but I’d say that we have a ton of small customers. And when I say small customers, I mean one to five instances purchased to license and manage. Now, in some cases they might have 30 instances of SQL, and they only really care about five of them – care enough, importantly, to invest in a tool like this for those five instances.

But the reality is that even in the smaller shops, you’ve got a handful of SQL Servers out there. In most cases – or in a lot of cases – that small shop is very, very dependent on those databases because of what they do. They can’t let them go down. They have to have a tool.

The other side of that coin is that some of those smaller shops don’t have dedicated DBAs, so the smartest guy in the room, or the most technical guy in the room, ends up being the assigned DBA. In that situation, they’re definitely looking for some help, and this tool will obviously help them in that regard as well.

For your larger environments – I think it was Dez that mentioned it, or Robin, I’m not sure – you’d be surprised how few DBAs they have. I mean, we’re talking huge numbers of instances of SQL, and you’ve got literally a handful of DBAs tasked with being responsible for them. From that perspective, those guys are looking for some help, because they don’t really have adequate resources, and a tool will help offset some of that.

And so we see that quite a bit as well, where you’ve got three guys managing 200 instances. You can imagine the logistics of that if you don’t have a tool like this – just trying to figure out when there’s even a problem. It’s not going to be a proactive way of working, I can assure you. So hopefully that answers your question. Yeah.

Dez Blanchfield: It does, yeah. It did strike me – and I think Robin sort of alluded to it – that the sorts of problems you’re describing in the demo aren’t exclusive to very large environments. You can buy a common off-the-shelf platform that’s designed for one thing, put it into a shared database environment for something else, and it’ll just punish the entire environment.

The other thing that struck me – it’s not so much a question, just an observation, but I’ll lead it to a question – is that when organizations have already made an investment in their infrastructure, their platform, their database, the servers and everything around that, and they go to buy a product, whatever it might be – an HR system, an ERP, a BI tool – they’ve already made a fairly big investment.

What sort of response do you see when you have a conversation with people who’ve realized they’ve got a performance issue, but feel they now have to make yet another investment to get at it? Is there a point, once you demo it, where they [inaudible] this thing as a no-brainer – where it’s not so much a sales pitch as an epiphany: “We’re immediately going to see benefit from this”? As opposed to having to sell the product – it seems to me that it sells itself, and the ROI just jumps off the page.

Bullett Manale: Yeah, and it’s funny you say that, because what will often happen is somebody – a DBA, or even one of the sales reps – will come and say, “Hey, these guys want to see an ROI sheet on this,” something on paper that we would send to them. And the demo is always 10 times better, especially when you can do it with the DBAs themselves, because–

Dez Blanchfield: Yeah.

Bullett Manale: Like you said, the product sells itself. It’s really hard to put an ROI on a piece of paper and say, “Okay, how many clicks does a DBA typically make in an hour?” as it relates to backups or whatever the case may be. Trying to put that on a piece of paper is really hard to do. But when you get somebody in front of the product and show it to them, and they see it, it’s exactly what you said.

People realize the value of it. Because not only is it helping them understand and make better decisions, it’s also helping them not be the bad guy. They can be the first to know; they can correct the problem before anyone has even identified that there was a problem.

The other part of that is that, as a DBA, whether it’s real or perception – and I think it’s perception – you own the performance problems. You’re the guy that gets the finger pointed at you when performance goes down, and the reality is that it could be the developer who’s really causing the problem.

Having a tool that lets you say, “Hey, this is not my problem – I need to take this to the developer and they need to correct it,” or something along those lines, is a nice thing to have in your arsenal, so you can say, “This is where the real problem is.” You know?

Dez Blanchfield: Yeah. The last one for you. The thing that struck me, looking at this as we went through it, is that often when we think about performance issues, we tend to bring in special skills. They come with 20 years of experience, they look at it, and they sort of [inaudible] – you know, the classic joke of the guy who walks into the engineering shop with a tiny little hammer, hits the machine in the right spot and says, “That’s a $15,000 fix,” and people go, “We’re not paying for that,” because it was five minutes of work. And he says, “Well, that five minutes of work took 15 years of experience, and it saved you millions.”

To me it seems like there’s a middle process where people go through this saying, “Okay, bring the special skills in, fix the problem, it’ll go away.” But what they’ve done then is just put a Band-Aid on it, right? As opposed to the scenario I can see here, where when this goes in, yes, they may have addressed some performance issues they thought they were experiencing, but then they also have this 24/7 set of eyes watching the environment in real time.

You really end up getting away from the scenario of DBAs getting woken at four in the morning because reports are running. Is it the case – and maybe it’s rhetorical – that people quickly transition from looking to invest in a product to solve a particular problem, to the point where it generally just becomes part of the DNA?

Bullett Manale: Yeah, and it varies from place to place, but I’ve got some folks who originally purchased the product back in 2006, and they’ve been through three different jobs at different companies, and when they go to that next company, they promote this as something to get, because they have a workflow – I hate to call it that, but that workflow involves this product. They’re used to it on a day-to-day basis, it helps them, and they don’t want to learn something new.

But absolutely. Most of the time, when we get people to download this product, it’s not because they have a budget and they’re going out saying, “Hey, we have this performance budget, we need to do a proof of concept, step in, do an evaluation and all that stuff.” Usually what happens is they’ve got a problem on an instance of SQL, and they’re looking for some help to fix that problem. They go and download our tool, they get the problem fixed, and then they realize the tool itself is going to do more than just fix the problem they had at the time – that it will actually help them improve overall performance and keep other problems from happening going forward. And that’s for sure. You can definitely keep using this tool to continuously tune the environment, because you’re always going to be able to see not only what happened right now, but what happened last week, last month, last year, and compare that to what’s going to happen tomorrow. You know? That kind of thing.

Dez Blanchfield: Yeah.

Bullett Manale: So, for sure.

Dez Blanchfield: Perfect. I’m just going to wrap up before I hand back to Eric to close. One of the things I’m always interested in is how people get their hands on it. You mentioned downloading it. What’s the 30-second summary of how they get their hands on it, get a copy, spin it up and play with it, and what they might need infrastructure-wise just to get an instance going?

Bullett Manale: You go to IDERA (i-d-e-r-a).com. IDERA.com is the company, and if you hit that website – I can actually show you here, I don’t know if I’m still sharing my screen – go to the Products page, then go to the Diagnostic Manager link. There will be a little Download button, and you can just download the build after you fill out your information. They’ll ask you whether you want the 32- or 64-bit build, and you’re off to the races, as they say.

Dez Blanchfield: And will it run on a laptop for someone to play with it, or do they need to load it on a server somewhere?

Bullett Manale: No, no. In fact, what I showed you today was all running from my laptop. Now, my laptop has 32 gigs of RAM and an 8-core processor, but it’s still a laptop. It doesn’t necessarily need that many resources, to answer your question. The evaluation itself is good for 14 days, but you’re more than welcome to have a longer trial – just give us a call and we can extend that for you if you’d like.

Dez Blanchfield: I think that should be [inaudible] something to take away, ’cause I’m definitely going to do that. From the looks of things, it seems to me a no-brainer to download it and play with it – probably point it at one of your environments and just see what you can see. Because I suspect that, like everything I’ve seen in databases over the last 20-plus years, which ages me, once you get to see what’s under the hood, it’s amazing what you realize you can fix quickly, and the little gains in performance you can pick up.

Awesome, thanks for the demo. It was really great. And thanks for taking the time to go through the questions.

Bullett Manale: You’re welcome. Thanks for–

Dez Blanchfield: Eric, I’m going to hand back to you.

Eric Kavanagh: Yeah, we do have a really good question from an audience member. You kind of talked about this in your presentation, and I actually tweeted about it because it was such a great quote. You said you don’t want to be using a tool to monitor performance that negatively impacts your performance.

Bullett Manale: Right. That’s right. That’s an important part of a performance-monitoring tool – it shouldn’t cause performance problems itself. Exactly right.

Eric Kavanagh: Exactly. Well, it’s like those darned antivirus programs that can just wreak havoc on systems. I mean, I’ve used a number of different technologies for broadcasting where the antivirus program kicks in and truncates your stream. So there are things that happen that you don’t expect. But the question relates to that specific comment you made: what kind of performance hit do you see? Is it two percent, five percent, one percent? Do you have any numbers you can throw at us?

Bullett Manale: Well, the challenge with this question goes back to part of the discussion we were having earlier. The short answer is that it’s usually around one to three percent. But there’s more explanation required, which is that we provide you a lot of ways to tell the tool what you want to monitor, right? And it goes back to that. I might want to get a sample of every query that’s running, so I want a tool that’s flexible enough to turn that on so I can see it.

Part of that flexibility has a cost to it. If I need to collect more data because I want a sample of every query that’s running in the last 20 minutes, I can turn that on and it will do that. But generally speaking, yeah, one to three percent is what we see in terms of overhead. That’s going to vary, and most of it depends on the things you turn on and turn off – your thresholds, how much data you want to collect, your polling intervals – all of that ties into it.

In fact, if you go out to the instance you’re managing, one of the things you’ll see is that we have multiple polling intervals you can specify. And that’s simply because I don’t need to check everything on the same schedule – if I’m doing a heartbeat check on an instance every 20 seconds, I don’t need to poll the CPU and everything else along with it. So you have multiple polling intervals you can specify.
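(As a rough illustration of why separate polling intervals make sense, the two queries below have very different costs – a cheap heartbeat versus a heavier per-database I/O poll. The suggested frequencies are examples only, not the product’s defaults.)

```sql
-- Cheap heartbeat: "is the instance alive and accepting connections?" -- fine every few seconds.
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

-- Heavier poll: per-database file I/O stalls -- better suited to a slower interval.
SELECT DB_NAME(vfs.database_id)      AS database_name,
       SUM(vfs.io_stall_read_ms)     AS read_stall_ms,
       SUM(vfs.io_stall_write_ms)    AS write_stall_ms,
       SUM(vfs.num_of_reads)         AS reads,
       SUM(vfs.num_of_writes)        AS writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id;
```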

You also have, like I said, the query monitoring that you can specify. And this can be done for each instance independently, so you can really cater to that specific instance in terms of what you want to monitor. For my wait statistics and wait monitoring, I can turn that on or off, and I can tell it to capture everything, or tell it what I want to capture and when I want to capture it. So you do have to take into consideration what you’re telling the tool to monitor.
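(For the curious, a bare-bones version of wait-statistics collection is to snapshot SQL Server’s cumulative sys.dm_os_wait_stats counters, wait, and take the difference. The 30-second window below is arbitrary, and this is a sketch of the general technique, not the product’s implementation.)

```sql
-- Snapshot the cumulative wait counters.
IF OBJECT_ID('tempdb..#w1') IS NOT NULL DROP TABLE #w1;
SELECT wait_type, waiting_tasks_count, wait_time_ms
INTO #w1
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:00:30';   -- sampling window; use whatever interval you configure

-- Diff against the second snapshot to see what was waited on during the window.
SELECT w2.wait_type,
       w2.wait_time_ms        - w1.wait_time_ms        AS wait_ms_in_window,
       w2.waiting_tasks_count - w1.waiting_tasks_count AS waits_in_window
FROM sys.dm_os_wait_stats AS w2
JOIN #w1 AS w1 ON w1.wait_type = w2.wait_type
WHERE w2.wait_time_ms - w1.wait_time_ms > 0
ORDER BY wait_ms_in_window DESC;
```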

But generally speaking, like I said, around one to three percent is what we see. We’ve been selling this tool a long time – since about 2003 or 2004 – and we’ve got thousands of customers, so I can assure you we try our best not to cause performance problems in the name of performance.

Eric Kavanagh: Yeah, that’s really good information. I just thought that was a brilliant quote because, you know, again, you don’t want to defeat the purpose of what you’re trying to accomplish, right?

Bullett Manale: Exactly.

Eric Kavanagh: And I appreciate Robin’s question, too; this really is an excellent platform for helping DBAs understand the many different aspects and dimensions and layers of what we’re talking about. And I think the concept of having a conversation with your data is highly appropriate here, because, to your point earlier, you’re not going to figure it out on the first try, usually. You need to spend some time looking at the data, looking at historical data, doing that synthesis in your mind. And that’s the job of the human, right? The job of the professional who sits back there and takes heat from the business on a fairly regular basis, to get that job done and keep the trains running on time, right?

Bullett Manale: Absolutely.

Eric Kavanagh: Well, folks, this has been another fantastic event. If any question you asked was not answered, by all means, let me know. Send an email to [email protected]. We do archive all these events, so you can always go to InsideAnalysis.com to find the archive, or go to our partner, Techopedia.com. If you look on the right-hand side of their page, you will see Events, and the webcasts listed there. If you click on More Events, you can see all of the webcasts that we do listed there, past, present and future.

And with that, we’re going to wrap up. We’ve got five more webcasts for the rest of this year, folks. We may schedule one more. But otherwise, it’s going to be on to 2017. The editorial calendar is out. If you have someone who wants to showcase their technology, let us know – send an email to [email protected].

With that, we’re gonna bid you farewell, folks. Thanks again for your time and attention, we’ll talk to you next time. Take care. Bye-bye.