Performance Play: Say Goodbye to Latency

KEY TAKEAWAYS

Host Eric Kavanagh interviews Mark Madsen, Dez Blanchfield and Bullett Manale on latency and performance.

Eric Kavanagh: Ladies and gentlemen, hello and welcome back once again to Hot Technologies! Yes, indeed! My name is Eric Kavanagh, this is our Hot Tech show, a partnership with our good friends from Techopedia. Hop online to Techopedia.com for all the latest in the broad field of enterprise technology; they, of course, cover consumer stuff, too. We focus on the enterprise here on our program, so that’s what we’ll be doing today.

That’s a spot about yours truly, and that’s enough about me. Hit me up on Twitter @eric_kavanagh, I love the Twitter, I love checking out that stuff, it’s a great way to stay in touch with people and have good one-on-one conversations.

So what are we talking about? This topic is hot, it’s a whole universe of opportunity that we’re looking at today in the world of information management, and what we’re talking about today is queries, it’s going to be speeding up queries.

I think I forgot to mention the title, “Performance Play: Say Goodbye to Latency.” Well who wants latency? Nobody wants latency, latency is when you sit there, click the button and wait for something to happen, and nobody wants that. The kids don’t like it, they don’t think it’s cool, the adults don’t like it either. We’ve all been spoiled by the speed of the web, and we want things quickly, we want things now, and we’re going to be talking all about that today on our show.

Analyst Mark Madsen is with us today from Third Nature, one of our regulars. Our new data scientist, Dez Blanchfield, calling in from Sydney, Australia. And then Bullett Manale, yes indeed, that’s his name, actually it’s supposed to be two T's. Bullett Manale is on as our guest from Idera, a very, very interesting company that does a lot of stuff. I know a few things about them already, one of which being that they bought a company called Precise a while back. I knew their CEO, Zohar Gilad, how is that for a name? He was one heck of a smart guy.

But folks, you play a significant role in this webcast in the questions that you ask, so please don’t be shy, send your questions in at any time – you can do so using the Q&A component of the webcast console, that’s down there in the bottom right-hand corner. You can also chat your question to me and I’ll chat it over to the speakers. We already have someone calling in from Italy so, “Ciao, ciao. Come stai?” Alright, with that I’m going to push Mark's first slide and hand the deck over to Mark. Mark, you now have the WebEx. Take it away, the floor is yours.

Mark Madsen: Thanks, Eric. I’m not going to start in the middle though, I’ll start at the beginning. So just a few comments to set up the discussion with Dez and Idera, a sort of state of the state with development, and databases and operations. And you know, if you kind of look at this, we still have this sort of two-worlds problem in the database and application market, because developers view the DBAs as the people who hassle them. You’ve got to build data models, you can’t have access to that, you can’t create that thing, you can’t put an index on every column of every table in the database to make it faster. And of course, why do we need the models? It’s just data structures; if we change them, can't you just write them out in a serialized form?

The problem is that developers know code and applications, but two things they often don't know are concurrency, concurrent programming, and databases and the operating systems underneath them. Having been a kernel developer and having worked on operating systems and databases, I can say that concurrency and parallelism are really hard, and so a lot of the things that you learn to get good performance out of your code really start to fall apart when you’re working with a database. Performance looks great, the test environment looks great, and the QA environment, and then it hits the real system, and suddenly it’s not so great. Because it’s multifaceted: how the code works with the database, how it works with the environment, and really simple little practices can have drastic effects depending on the scale you’re running at.

And when you start talking about external applications, of course, externally facing applications, web applications, can be really difficult because things are great until suddenly they flatline, and they’re not. You’ll hit these interesting plateaus that require a lot of nuance to understand.

The flipside of things is the DBA view. The DBA view is that there are operations; they spend the bulk of their time, 80 to 90 percent, in ops, and maybe 10 to 20 percent dealing with the development stuff that’s going on upfront. From this perspective, you either pay now or you pay later, and if you’re spending all of your time upfront, then you’re going to have a much better chance later on, as opposed to development, which tends to be exploring a feature space, and trying to figure out how best to do things. And so we have problems, and now we have methodologies that are incompatible – continuous deployment, rolling out your apps whenever they’re ready, doing code pushes periodically, working in a shop that is practicing DevOps. This sort of thing speeds up development, but all the practices around the database and what DBAs do and what system managers have been trained to do, the IT ops practices, haven’t kept pace.

If you think about it, most DBAs operate under a change control environment versus a continuous deployment environment. It’s all about stability and control, versus speed of change and reversibility. In continuous deployment, if you can’t back out of a change, you’re in trouble, so everything has to be built to be easily reversible and code-switchable, which is very much not the way relational database development practices and management practices have worked.

You’re also running into these problems of having to be more proactive if you can, as a DBA, because by the time you hear about a problem, a hundred thousand people are filling out complaint forms on your website. That leaves you needing some new things that you don’t get out of your old environment. You know, things like better monitoring and alerting. At the same time, databases have been multiplying; we’ve got more applications than ever to support more things than ever, they’re inside, they’re outside, they’re all over the place. And more independent sets of data for analyses: people are starting up databases all over because, of course, it’s easy now, you can set up a virtual machine. If you’ve got a cloud provider or an internal cloud, you can immediately pop things up, and this changes your entire procurement path.

The old procurement path was, “I have time to get a server, shove it in a rack, allocate space, get storage, get the database installed and do things,” versus somebody swiping a credit card and going in five minutes. If you do that, that modern development environment is operating at a very different pace, and so it’s easy to create databases, and that just creates this problem of proliferation, like nothing we have seen before. And this has been going on for ten years now, this is not news to anybody, but it also means that operating environments have grown in complexity.

The whole client server environment has really changed, because it’s not a client server world anymore. Back then you had a server, you had a database, if something was wrong you knew which server to go to, you knew how to manage the resources on it because best practice was one database, one server. Virtualization started to break that apart, cloud breaks it even more, because what you think is a database server, is just software. So the environment isn’t real. It’s what contains the environment that is the reality, and that might be a rack of blades or a big server carved up into pieces, you don’t really know.

Everything around database administration and performance management, everything databases have been built around, assumed tight control of one server, or a handful of servers and a couple of databases; now you can’t control everything. You’re sitting there on a machine, but bandwidth cannot be partitioned easily by the virtual machine managers, and so everything can be fine with memory and CPU, but you’re bottlenecked on some resource that can’t be dealt with. And when you try to fix it, the old model would have been hard work, getting a bigger server and doing something like that; now it could be really simple, just add virtual cores, just add memory to the VM and it’s solved. But what happens if your VM is on an overcrowded server and needs to migrate? Or what happens if you’re at the size of an AWS system, and the max size has been reached, now where do you go?

So you have all of these problems where the environment is part of the database now; you package an environment with a database, all the special resources, everything in the application is part of the configuration, and the configuration gets pushed out there. This is far from the old database environment, and it’s a lot harder to manage and control.

If you look at what the database vendors have been doing, they’ve been sitting on their hands, right? We’ve been moving away from this idea of treating databases and servers like pets. Servers had names, you treated them like they were individually unique things; now you’re treating them like cattle, it’s managing a herd. And the problem with managing herds is that if you don't control them, they eventually can stampede, and a stampede is not a good thing. We need better monitoring tools, we need better ways to deal with this stuff, and to know what has been affected. In the old model it was easier because your ops and all your control systems told you, but when your server name is a UPC code, it’s kind of hard to figure things out.

You can’t afford false alerts, you can’t afford things that say, “There’s a problem with this machine,” when that machine hosts 30 databases. You can’t afford to have things give you no history. Monitoring consoles are great when they light up, but if the red light turns green again and you don’t know why, and you don’t have any history to go back into to look at what was leading up to that, and what the context was, you’re in trouble. We need systems that will monitor for us; we need better monitoring that deals with these recurring, intermittent problems and maintains that data history.

We need better things than simple metric thresholds that get us key metrics but don’t guide us directly into what’s normal, what’s abnormal and how frequently these problems occur. What we’re really talking about is a combination of monitoring the environment and dealing with performance, and the vendors have been sitting on their hands. They haven’t given us better tools. We have systems with more CPU and memory than we know what to do with, and yet we still rely on manual intervention models. We haven’t put the machine to work, to guide us, to get us to the point of problems; we haven’t gotten to this new style which is, “There’s a problem here, you can do this to fix it,” or, “There’s a performance problem, it’s actually with this specific SQL statement, here are three things you could use to fix that SQL statement.” Applying heuristics, applying machine learning models that can look at the usage patterns of your system to spot problems and avoid false alerts. Using the machine to do what the machine does best, to augment the DBA, or to augment the person who has to deal with performance problems.

That’s the new way, as opposed to the old style. There’s a problem with this database, things are slow, and so we have new techniques, new ways to do it, and we should be applying those, and that’s where the market is heading. You’re seeing it begin to crop up, not with the big vendors, but with third-party companies, and this is mirroring something that happened 20 years ago when the database vendors didn’t provide a single thing to help manage the systems. So that’s kind of what the direction of the market is, and with that, I’d like to turn it back over to Eric.

Eric Kavanagh: Alright, I’m going to hand it over to Dez. And Dez, take it away, the floor is yours.

Dez Blanchfield: Thank you, Mark. You’ve done a fantastic job of covering the technical component of it. I’m going to come at it from a slightly different angle to highlight what’s happened in the rest of the world, as far as impact to businesses and the databases around them. Let me just jump to my first slide.

On the back of what you’ve just covered from the technical side of things and the developer side of things, I’m seeing businesses having to confront the challenge of data and databases in particular. Obviously we’ve had this significant shift towards this concept of big data, but databases effectively are still the heart and soul of where organizations retain their business information, from the front door all the way through to the back office. Every part of the organization is touched by a database of some sort, and powered by a database, and very rarely do I go into either project discussions, or some form of innovative strategic conversation in an organization, where the topic of the database or database system doesn’t come up. And there are always questions around the types of things we’ve just heard about: performance and security, how does development approach this challenge, where do the databases fit, what environments and application environments do they talk to, and what about devices and mobility?

It’s still a very, very hot topic, and it’s been one for a long, long time in the grand scheme of things as far as modern technology goes. To that point, I believe it’s a fact that almost everything we do in our day-to-day lives is now supported by some form of database. When we think about all the things around us, whether it’s a bill that comes in the mail every day for some service we’re buying, it’s inevitably being printed by a system that’s talking to a database, and we’re in there. Our phones have databases on them with the contacts and call logs, and other things.

Wherever we go, there’s some form of database behind the tools and the systems we’re using, and more often than not, they’re fairly transparent to us, but the fact is that they’re there. So I thought I’d just quickly cover why this has become a bit of an issue in a very short period of time. In the beginning, the concept of the database came from this lovely gentleman, Edgar Codd. Whilst working at IBM, he changed the world as far as data management goes by creating a concept that we refer to now as a relational database.

In the beginning, the database was a database and life was good; it was fairly straightforward, with columns and references and tables and so forth, and developing software was pretty straightforward, and performance wasn’t really that big an issue – it was a new, exciting technology. We accessed the databases through some form of terminal, and you can only really create so much havoc at the end of a 3270 terminal on a mainframe, and invariably the other types of terminals and systems that came along. And in most cases, the old-style terminals were very similar to what web environments are now, in that you’d fill in a form on the screen on the terminal itself and hit Enter, and off it’d go, it would shoot off as one packet, as a request, and the back-end system would deal with it. That’s essentially what happens in a web browser these days: you fill in a form in the browser and it doesn’t usually go back to the system in real time, although with AJAX these days, that isn’t entirely the case.

But then something happened: the future arrived, and more recently the internet, and almost yesterday, as it were, web 2.0, and just around the corner we’ve got the Internet of Things. And in the process of the future happening, the world of databases has just exploded, and interactions with databases became a thing that we all did by default. It’s no longer the case that if you wanted to do something, like buy a ticket for an airplane and travel to the other side of the planet, somebody had to type all your details into a terminal, go into a database and print out a ticket.

Almost everything we do now, whether it’s hailing a cab with an application, whether it’s jumping on internet banking, everything we do on a day-to-day basis with some sort of system is powered by a database. And when the internet came along, that was a little bit easier to bring into our everyday lives through a web browser, and then web 2.0 came along and things became mobile, and the scale of things just exploded. In fact, my favorite line on this topic is that, “The internet connected everything, web 2.0 made it mobile and social, and things got very, very big, and now we have the Internet of Things, and IoT… Yikes!” We haven’t even begun to imagine the impact of the Internet of Things when it comes to the world of database systems.

So in modern terms, what we used to think of as a terminal has effectively become these things: it’s mobile phones, it’s various kinds of tablets, either personal consumer- or enterprise-grade large-screen tablets, it’s laptops and it’s the traditional desktop in some form. In that one image you can see almost every form of interface that we’re now using to talk to database systems and the apps powered by them, from the little gadgets in our hands that we walk around with and seem to be glued to, all the way through to the slightly bigger versions, iPads and other tablets and Microsoft Surfaces, to everyday laptops, which are now invariably the norm in professional environments and so forth. People tend to get a laptop and not a fixed desktop, but these are the modern terminal in my view, and part of the reason that databases are experiencing all kinds of challenges in the performance-management part of our lives, and not just development.

So I assume it’s one of the biggest challenges that businesses are still facing on a day-to-day basis. Everyone thought databases were a solved problem; they’re not. So what’s all the fuss about? Well, when we go from one end through to the other with all things related to databases – and Mark’s covered the technical components very, very well – in the commercial sense, as an organization, we think about databases all the way from the basic design and development front end. When a business starts, they’ll think about developing applications, developing a capability, or even implementing an existing application in some form. Some form of design and development has to take place, and a great deal of thought has to be given to how these database systems are going to be implemented, and supported and managed, and performance tracked, and so forth.

The integration of the database environment and applications, and the types of APIs, the types of access that are being provided now, are getting more and more challenging, more complex. Day-to-day administration, support and backups, again, these are things that we thought were solved, but then all of a sudden the scale got much bigger, and things moved faster, and the volume is so much larger; the size of the environments and the database systems that have to support the speed at which transactions are moving.

Think about a database in a very, very high-frequency trading environment; there’s just no way humans can keep up with that, it’s just a cluster of machines fighting another cluster of machines to do high-frequency trading, buying and selling, and the volume at which those transactions happen. Think of a modern-day scenario, like an early release of a Netflix movie, where you’re not talking about just hundreds or thousands, or even hundreds of thousands, but potentially millions of people wanting to see that movie from the very second it’s released. All of that information is captured, and tracked, and logged and analyzed in a database platform.

And then there’s the always-on world that we live in now, 24/7; it’s not just follow-the-sun, there’s always somebody up at midnight wanting to do something, and business hours follow the sun all around the world. So uptime and availability are the default, they’re a given now; having an outage really just isn’t an acceptable thing. And redundancy: if there’s a performance issue, or if we need a maintenance window to do an upgrade or a patch or a security fix, we really need to be able to cut from one database environment to another and do it seamlessly and automatically.

Security and standards and compliance, we’ve had some pretty big things happen in the world of late, GFC in particular, and so we have a whole range of new challenges to meet around compliance, and security, and matching standards, and we need to be able to report on those in real time, and ideally in a dashboard form. We don’t want to send a team of monkeys out to a data center trying to find things, we need the system to tell us that immediately, in real time.

And the two big fun ones that almost no one ever talks about, we generally push them under the rug and hope that they don’t ever raise their ugly head, but disaster recovery and business continuity – these are things as well that should, for the most part, happen automatically, should the need arise.

We could spend days talking about the types of things that can go wrong in database environments, and that humans generally have responded, but now we need systems and tools to do that for us. One example is a data breach and so, when we think about databases, and I ask this question quite openly in various forms: what happens to databases when we take our eyes off the ball, and something critical goes wrong? Particularly if there isn’t a system watching performance and security and other major aspects of running databases.

Well, what could happen is this; this is a screenshot of some of the recent breaches in the last two to three years. Invariably, these have all come from a database system, and invariably, there’s been some issue in security or control, or access, that’s come about, and in the top left-hand corner we’re looking at 152 million Adobe accounts, where every detail of those customers was breached. And had the appropriate tools been in place to track and capture the incident, and control security, we may have avoided some of those; the first couple of hundred records being stolen might have alerted us, and we would have stopped the next hundred and fifty million.

Then we get to the key point of this whole journey that’s brought us here, that is: why do we need better systems? Why can’t we just throw more bodies at this thing? We have well and truly crossed the tipping point in my view, and certainly I believe the evidence of late makes the case that throwing more DBAs, administrators and more people at this thing doesn’t fix the issue. We need a better set of tools and a better set of systems.

Here are my top five reasons that I believe support this, and they’re ranked in order of importance, based on what I’m seeing across private enterprise and government environments, the challenges they’re facing with database environments, and managing them.

Security and compliance – number one. You know, controlling who has access, where they have access, when they have access, how often they have access, where they have accessed it from. Potentially the devices they’ve actually touched and the types of things they’ve looked at, and the compliance that goes around that. Having human beings run reports 30 days later to tell us whether things are okay just isn’t appropriate anymore; it has to happen in real time.

Performance and monitoring – it seems like a no-brainer, but invariably it’s not. Whether we’re using open-source tools or some third-party commercial tools, invariably we have missed the boat, in many ways, with the types of performance monitoring that’s required, the detail of it, and the ability to respond in time.

Incident detection and response – it has to be an instant real-time thing, and invariably we need a system to do it for us, or at least alert us quickly so we can deal with it, so that the few issues that arise are dealt with quickly, and don’t cascade out of control.

Management and administration – again, we think these problems are solved; they’re not. The range of issues being faced by database teams, particularly the DBAs, where a system should be taking care of things for us: we haven’t solved that problem yet, it’s still a real thing.

And right from the front end with design and development: when we start building these tools and building the database environments, we need to be able to throw the appropriate tools at development and testing and integration platforms. This still isn’t an easy thing for us to do, and this whole journey drives us to the same message, that in my mind we do need better systems and better tools to help us deliver the outcomes that we need from our database environments, so the business is driving value for our customers. We can’t just keep throwing more bodies and more DBAs at it; the scale is too big, the speed is too fast and the volume is too high. With that, Eric, I might pass back to you.

Eric Kavanagh: I love it, we’ve got a lot of ground covered there folks, a lot of good perspectives, and we’ll go ahead and hand the keys over to Bullett in just one second.

Bullett Manale: Alright.

Eric Kavanagh: Oh, let's take it away and Bullett, now I’m handing it to you, and the floor is yours.

Bullett Manale: Alright, thank you. I think a lot of good points have been made. I wanted to just quickly talk for a second about Idera, who we are, and then we’ll jump in. I’m going to talk about the tool that aligns with a lot of this stuff we’re talking about, and we can step through and discuss some of the areas where these things align with this tool, the Diagnostic Manager product.

Now, what I want to do first is just give you a little bit of background about who Idera is; we’ve been around since about 2003, and we started off with just SQL Server tools, and that’s what we’re going to focus on today, the Diagnostic Manager product. But you can see all the buckets of things that we have here, and recently, as was mentioned before, we acquired Precise, and through acquisition we also have Embarcadero, so we’ve got a pretty good portfolio of products.

In terms of performance monitoring, in terms of SQL Server, the product that I want to talk about, which aligns these topics we’re discussing, is Diagnostic Manager. Now, this is a product that’s been around since pretty close to the beginning of days of Idera, and I’ve been lucky enough to be a part of that since about 2005. And I’ve seen a lot of the changes in terms of SQL Server, the shifts from physical to virtual, all that kind of stuff that’s happened, and also the needs of the DBAs as the environments grow, and those types of things.

Where I started off was, the typical user of our product is the DBA, and so when we’re talking to folks for the first time, prospective customers, it’s mostly the DBAs we’re talking to. We’re not talking to the IT managers or the directors; it may at some point get to that level, but the initial onset is that the DBA has a problem, the DBA tries to fix the problem, and a lot of times they’ll go and download and trial the product as part of that. You either get the data manager or the DBA or the acting DBA, the guy that’s lucky enough to be the most technical in the room, in some cases. Now, when you get to the larger enterprise environments, obviously, then you’re going to get the full-blown DBAs; typically they will be the ones using the tool. And I went ahead and just added a little blurb here from Wikipedia. It goes over the responsibilities of the DBA as Wikipedia states them; that’s what they do.

If you go through the listing here, I’m not going to read it off, but you get a lot of the typical things you would think of, and one of them is monitoring and optimizing the performance of the database, and that’s a pretty big one. And what’s interesting is, when you talk to the DBA, they’re always the ones that are blamed first when it comes to problems, and it may not really be their fault, but when there’s a performance issue, typically with an application that is tied to a database, they’re the ones who get the blame, so they’re always looking for the reasons why it’s not their fault. In a lot of cases that’s what they can use this tool, Diagnostic Manager, to help them do.

But at the end of the day, also, if the database isn’t performing, then a lot of this other stuff doesn’t really matter; if your applications don’t work, then it doesn’t really matter for a lot of these things. First and foremost, we want to be able to make sure that the user experience, as we know it, is not diminished; it’s something that DBAs are always striving towards. And I think that if you look at the reasons why people typically buy and use the SQL Diagnostic Manager product, one of the first reasons – probably not the foremost, not last or least, it’s kind of equal across the board – is that depending on who you talk to, one or two of these reasons are always there; there’s some kind of need around them.

But the first one is just being able to have that centralized view of the instances of SQL that they’re managing. And the funny thing is that in a lot of cases, if you ask a DBA, “How many instances do you manage?” the number changes so often that they’re not really sure, in some cases. So you need something more than just being able to throw everything up on the screen. You want to get a grip on that information, you want to make sense of it, and so that’s one of the things that Diagnostic Manager can definitely help with, being able to provide you with that kind of view into the environment.

And it’s not just a view into the environment, but it’s a view that the DBA, the database administrator, is comfortable with, and it’s a console that’s DBA-centric, if you will. It’s made for a database administrator. There are plenty of monitoring tools out there, there are plenty of performance tools out there, but like I said, at the end of the day, the DBA wants a tool that’s designed for a DBA, because there are a lot of things specific to what they do in their day to day.

And with that said, you’ve got SCOM, you’ve got HPF, you’ve got all of these other technologies, but they want something that’s particular to what they’re doing. I think that we can help in that area with this product, you’ll see when we get into it in a second. The other thing that we see with the DBA that is definitely one of the things we touched upon earlier as well, is that they need to be able to see what’s going on, obviously, and they need to be able to look across the entire enterprise and have some peace of mind in knowing what’s happening. But at the same time, they’re not sitting there staring at consoles.

Remember all those bullet points that you saw on that list that I just pulled up? They have to do those other things too, so it’s not just about waiting for fires to put out. In a lot of cases there will be meetings, or a lot of the maintenance windows related to the database administrator are running in the middle of the night when they’re sleeping, so they have to have the ability to go back and see what happened. In a lot of cases, if you don’t catch something when it’s happening, once the problem has gone away – at least with SQL Server – you’re dealing with a situation where you don’t have any remnants of that problem anymore. Those problems go away, and so do the remnants, which means that you have less to troubleshoot with, you have less information to work with.

With that said, that's definitely one of the things that Diagnostic Manager can help with, is to give you that view into the past to query the information from the past, “Did I have an alert with blocking, did I have issues with deadlocking, did we have things that were happening in terms of our resources?” I can go back and query that information. I can drill into specific points in time. I'd be able to do all of those things directly from within the tool.

All of those things, whether it's an internal or an external application, the DBA wants to know about, because they want to be able to see what is causing the problem. It doesn't really matter if it was somebody inside of the organization, or somebody outside of the organization, that wrote the code; they still want to be able to isolate it, so that they know that the problem is happening, and they know where it's coming from.

So performance and accountability are a key part of what our product does. We can provide all of those details, and what's kind of nice, is you have the ability to drill down. If there’s a bottleneck, you can correlate that to the application, to the user, to the database, to the query. And once again, it's kind of a smoking gun. You get a direct correlation between when this query runs, what is it doing? And it's not just about the query itself, in terms of it executing in and of itself, but also is the query over time getting worse? And those things can be answered as well, with the product, which is definitely something that if you’re trying to be proactive, it's nice to be able to say, "Hey, here's a query that ran bad, but boy look at it as it runs further, we can see it's running even worse and worse, I can do something about that."

If we go into the next area here; this is probably, I'd say, one of the big ones. One of the questions I ask when I'm showing our product is, I will always ask the database administrator, "How do you hear about a problem related to your SQL Server databases?" And it's very funny, because most of the time – now granted, most of the time they’re looking at our product because in a lot of cases they’re trying to solve a particular need – but it's interesting to hear the initial answer. At least with SQL Server, you know, in the early days you had SQL Server and then you had Oracle. And everybody had Oracle, and SQL Server was kind of like, for lack of a better expression, the redheaded stepchild of databases, when it first started.

And then as Microsoft added more features to it, it became a little bit more of an enterprise tool. And obviously, it's come a long way since then. But the point is that, at one time you could argue that the databases weren't considered as critical back in the day. And that's changed over time. Now because of that, in a lot of cases people are trying to get their hands around it, and saying, “You know what? I've got all these SQL Server databases, I'm trying to get a handle on it." And rather than hearing about problems from the help desk, or hearing about problems from specific people – like the users themselves – they’re looking for ways to get around that. They’re looking for ways to be made aware of those situations before they ever happen.

And so with Diagnostic Manager, that's one of the things we’re trying to do too: at the very least, be able to make sure that the DBA is the first to know about those situations, or those problems, so that they can do something about it, either right when they happen, or, to take it even a step further, to analyze the systems that it's monitoring and give you proactive advice that will improve the performance of that instance, and to be able to do that on a regular basis. For instance, we need to add an index based on the workload; those types of things the tool is capable of doing as well. So we'll see a lot of that in the tool.

The other thing, and the last thing on this list, is kind of more of a general description, but it's something definitely worth noting. Especially as you get into the larger enterprise-level types of situations, where you have lots of instances, there’s always going to be some obscure thing that I'm going to want to monitor, if I'm the database administrator, for example. And so what we try to do is anticipate what the typical DBA is going to want to monitor.

With that being said, there’s always going to be something new. So we provided a way for you to add whatever metrics you need to monitor and manage after the point of installation. Any PerfMon counters, WMI counters, SQL Server counter objects; all of those can be incorporated into the tool. You have the ability to add additional queries that can be incorporated into your polling intervals.
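To make that custom-metric idea a bit more concrete, here is a minimal sketch of the kind of T-SQL a DBA might register as an extra metric; it simply reads one counter from SQL Server's own performance-counter DMV. This is purely an illustration of the sort of query involved, not the product's configuration format, and the counter chosen is just an example.

```sql
-- Illustrative only: the sort of custom query a DBA might add as an extra
-- metric, reading a single counter from SQL Server's performance-counter DMV.
SELECT object_name,
       counter_name,
       cntr_value AS page_life_expectancy_seconds
FROM   sys.dm_os_performance_counters
WHERE  counter_name = 'Page life expectancy'
       AND object_name LIKE '%Buffer Manager%';
```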

And the last thing that is also worth noting is that we can actually communicate with both vCenter and Hyper-V to pull the metrics from those environments. Because one of the things we've identified with the DBA is that they're typically not part of operations specifically. And they don't necessarily have, you know, the vCenter environment available to them, or those kinds of things available to them.

And so the problem is that if they’re dealing with a SQL Server instance, and they have been allocated resources, but that instance is virtualized, it may look like they have all the resources in the world when they’re just monitoring what's on the guest operating system. The reality is, on the host, there might be 30, or 40, or 50 or 100 other VMs trying to access, and contending for, those same resources. And the only way to actually see that is to communicate with those other environments, and those interfaces, which in this case we do.

You have the ability to add those other types of counters into the tool. Now it's not just about being able to monitor those counters, but about being able to make those new counters that you introduce to the product part of the tool, as if they were an out-of-the-box metric, an out-of-the-box thing that you would want to monitor. That means being able to incorporate them into your dashboards. It means being able to add them to your own custom reports, being able to obviously set thresholds and alert on them, but also baseline them and be able to set the thresholds with some knowledge of where to set them, based on things like your baselines and what's normal. So you have a lot of those kinds of things in the product as well.

What I've kind of provided you with is what I call “the core deliverables for Diagnostic Manager," and I can go ahead and just give you a little taste of that by going into the product. What I'm going to do is share out my screen, okay, and just drag this over. So what you're going to see, this is the console for Diagnostic Manager. And as I mentioned before, going to that first core deliverable, being able to look at things from kind of an enterprise-level view: there are lots of different examples of that within the tool. We have a kind of thumbnail view; we have more of a grid-like view. We also have, in terms of flexibility, a web-based console as well. The web-based console has other views that are available to you, like heat maps and things like that. But the point is that you have that ability to look at and see things at a high level. But as problems occur, you’re going to dig down a little bit further into the tool, and actually see the specific problems, and have some way to understand and know what’s going on. And obviously that's very important.

Now, in terms of being able to actually see what happened in the past; if I'm looking at a problem that happened yesterday, or a week ago, then in that situation, you know, you’re going to have the need to be able to go out to a particular instance of SQL. And the good news is, if you know what time that problem happened within the product, you can go directly to the history browser. And I can point to a specific time of the day; it could be from a couple of weeks ago, it could be from yesterday. But whatever day I choose from in the calendar, I’m going to then be presented with the different polling intervals. In which case now, I'm effectively seeing what I would have seen if I'd been viewing the console on April 20th at 1:37 p.m.

So I’m able to go back in time, and then once I do that, all of the different tabs that we see here are going to reflect that specific point in time, including the queries that might have been running poorly, including maybe if I had sessions with blocking. All that kind of stuff would show up in the tool, and it's going to allow me to obviously leverage that historical information to be able to, you know, fix the problem. Now on that note, when we’re talking about the history, the other thing that's worth noting here is it's not just using the history for fixing problems. That history is very valuable obviously, for other reasons. And, one of the big ones is to be able to make decisions efficiently, and to be able to make decisions quickly, with the right information. So all of that history, all the information we're collecting, we can report against.

If somebody comes to me and says, "I've got this really great new application. It's going to change the world as we know it. Oh, by the way, it's going to require a database, and oh, by the way, it's going to really peg the I/O on the machine where that database is," if I know that going into it, then I can leverage that information to provide a ranking of all my production servers, based maybe on the last seven days of collection. And I would be able to very quickly come to the conclusion of which instance makes the most sense to deploy that database on. So it's that type of historical information which is also obviously very valuable.
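As a rough sketch of that ranking idea, suppose the collected history were available in a simple table; dbo.InstanceIoHistory and its columns are invented here purely for illustration, not the product's actual repository schema.

```sql
-- Hypothetical sketch: rank instances by average I/O over the last seven days.
-- dbo.InstanceIoHistory(instance_name, sample_time, io_per_sec) is an invented
-- table used only to illustrate the idea.
SELECT   instance_name,
         AVG(io_per_sec) AS avg_io_per_sec
FROM     dbo.InstanceIoHistory
WHERE    sample_time >= DATEADD(DAY, -7, SYSDATETIME())
GROUP BY instance_name
ORDER BY avg_io_per_sec ASC;  -- least-loaded candidates first
```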

In terms of the queries themselves, we have a lot of different ways to look at them in the tool. And the one I like to look at is the Query Waits view, because the Query Waits view is very helpful in terms of being able to assess, if I have a bottleneck that's occurring, essentially all of the different areas that are affecting that specific, particular query; not just the query itself and what the impact of that query is, but also, you know, which application it came from, which session it came from, which user called it, and all of that stuff. I can view that information in real time, obviously, but I also have the ability to look at that data from the past. And so that's one of the things here; I kicked off a script, but I have to wait for it to pop up.
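For readers who want a feel for the kind of correlation being described, here is a minimal sketch using standard SQL Server DMVs that ties a current wait to its session, application, user, database and statement. It is not the product's internals, just an illustration of the underlying information.

```sql
-- Illustrative drill-down: which sessions are waiting, on what, and where the
-- request came from. Built only on standard SQL Server DMVs.
SELECT r.session_id,
       r.wait_type,
       r.wait_time            AS wait_time_ms,
       DB_NAME(r.database_id) AS database_name,
       s.program_name,        -- the application the request came from
       s.login_name,          -- the user who ran it
       t.text                 AS sql_text
FROM   sys.dm_exec_requests AS r
JOIN   sys.dm_exec_sessions AS s
       ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.wait_type IS NOT NULL
       AND s.is_user_process = 1
ORDER BY r.wait_time DESC;
```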

While we wait on that, and I know we’re short on time, I wanted to talk a little bit also about alerting, notification and being proactive. When you’re talking about that kind of stuff, like I said, the proactive part, there are a lot of tools that do alerting. The hard part is not sending an email. The hard part is not writing to the event log or generating an SNMP trap. The hard part is knowing when to send that alert at the appropriate times. And so with that comes a lot of having to do some calculations, having to understand, "What is it about that particular instance, and what is normal as it pertains to that instance?"

And so for all of the metrics where it makes sense to do so, we baseline those metrics. We actually show you the baseline; we'll show you the threshold that it’s currently set to. And then the other nice thing about it is that, let's say I set my thresholds to, in this case, six and ten, just for this example. Six weeks from now, if I come back to this instance, this baseline can completely change, because one of the things we're doing when we calculate the baseline, by default, is a rolling seven-day calculation. So it's always giving me an up-to-date version of the baseline. And what happens if that baseline shifts up into my thresholds? In this case, I can see an alert recommendation that basically says, "Hey, you've got a threshold that's probably set incorrectly; given where we see the threshold being, and obviously where the baseline is, you're probably going to be getting an alert for something that's a normal occurrence."

And so rather than treating a symptom of something that's normal, I'm able to identify that type of situation where the actual threshold is set incorrectly. And what that allows me to do, obviously, is to set the thresholds so that when I do get an alert, it's something I know to be more of a call to action versus an investigation to see if it's really a problem. And I think that part of the tool is really helpful, in terms of the baseline itself and being able to calculate it.
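To sketch the rolling-baseline idea in rough terms (this is not Diagnostic Manager's actual calculation; the history table and the thresholds below are invented for the example):

```sql
-- Hypothetical sketch of a rolling seven-day baseline and a sanity check of a
-- static threshold against it. dbo.MetricHistory is an invented table.
DECLARE @alert_threshold float = 6.0;  -- the "six" from the example above; the
                                       -- "ten" would be checked the same way

SELECT AVG(metric_value)                           AS baseline_avg,
       AVG(metric_value) + 2 * STDEV(metric_value) AS baseline_upper_band,
       CASE WHEN AVG(metric_value) + 2 * STDEV(metric_value) > @alert_threshold
            THEN 'Threshold sits inside the normal band; expect noisy alerts'
            ELSE 'Threshold sits above the normal band'
       END                                         AS recommendation
FROM   dbo.MetricHistory
WHERE  metric_name = 'CPU Usage (%)'
       AND sample_time >= DATEADD(DAY, -7, SYSDATETIME());
```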

Now, with this product you have the ability to actually have multiple baselines; you can set them for different periods of time, and you can dynamically adjust the thresholds based on your baselines, which is also a very important part of adapting to the changes that happen on a day-to-day basis to your SQL Server instances. Now, in this case here, we've covered a lot of the settings of the thresholds and shown you the baselines. But as far as the actual alerts are concerned, the notifications themselves, the cool thing about Diagnostic Manager is that it provides you multiple alerting profiles. So if you have, for example, an on-call profile that runs from 2:00 a.m. to 5:00 a.m., then I can have a profile specific to just that time range, and I can set all the conditions and the appropriate settings here for my response.

Now, the thing about the response is that, in some cases, yes, I can send an email, or I can shoot off and generate an SNMP trap, or write to the event log. There are a lot of other things we can do, but as I talk to DBAs, what they really, really like is the fact that in most cases a lot of the work that is performed is repetitive stuff. It’s stuff where they know exactly when the problem is happening and what to do to fix it. They just have to go and intervene. And so as you grow your environment, as you have more instances, that becomes a lot more difficult to do. So one of the things you can do within the tool that I think is worth noting is that you have the ability to set up a condition, and based on that condition, to set a response to run a script, to run a job, to run an executable. And the point is, if you do decide to run a script, I can use parameters inside of that script that will be populated at run time with the actual information.

So if there are problems with a specific database, the script will be designed to run just against the database where the problem is happening. So you can dynamically address issues in an automated way, and then I can still receive an email to come back and tell me, "Hey, there was a problem, but by the way, it was fixed." The script was run, and as the DBA you know about it, but you didn't actually have to go in and intervene. Now, on that same note about being proactive, obviously we also have another feature in here, which is the "Analyze" feature. What this will do is a regular check against the instance of SQL, and in some cases it will do a deeper dive in terms of what it's looking for. Things like hypothetical index analysis will be performed: do I add an index? Do I remove an index? All those kinds of things obviously are going to help with my performance, but once again, it's all about being proactive. It's about being able to make decisions before stuff breaks, and to make it run better. And so in a lot of cases, that's really what we're trying to do here.
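As a purely illustrative sketch of that parameterized-response idea: the $(DatabaseName) token below is a made-up placeholder, not the product's actual substitution syntax, and refreshing statistics is just one example of a remediation a DBA might automate.

```sql
-- Illustrative response script that touches only the database the alert fired
-- for. $(DatabaseName) is a hypothetical placeholder filled in at run time.
DECLARE @db sysname = N'$(DatabaseName)';

DECLARE @sql nvarchar(max) =
    N'USE ' + QUOTENAME(@db) + N'; EXEC sp_updatestats;';

EXEC sys.sp_executesql @sql;
```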

Going back to the Query Waits we were talking about earlier; as you can see, there's a big spike here. I ran a script earlier that just caused some wait activity, and as I mentioned before, we have a really unique way you can drill down into this information. If I want to see what application it was, I can see it was coming from the NoSQL application. We would be able to see the database it was tied to, the session, the user, and then if I want to, I can rank this in terms of my waits as well. So I can say, of all of the waits that were happening in that window of time, which ones were happening the most? And when I see the one that happened the most, the really nice thing is I can drill into that wait type and see all of the commands that were making that wait occur. And I can also see, primarily, which application it was that was making that wait occur.

So it sticks out like a sore thumb. I can immediately go and say, "This is the application that's causing my bottleneck. Now what was the query that was run? Which user ran it? Which database did it run against?” and so on. So hopefully that makes sense, and it also helps in terms of making sure that you don't have the latency within your environment, as it relates to your databases. Hopefully this is helpful. I'm going to go ahead at this point and pass it back, and I guess we can continue from there.

Eric Kavanagh: Sure thing. So, I guess I'll just throw it to our experts of the day. Mark, maybe first you want to comment and ask a couple of questions. Then Dez, you can chime in.

Mark Madsen: Yes, thanks, I really enjoyed watching some of this. It's much more intelligent monitoring than I'm used to seeing. I'm curious about the managing of the data behind this; managing the metrics that you can track, and, you know, looking for things like shifting baselines in particular, that being one of my pet pain points with dashboards. How do you deal with that data? And the second part of that is, with, you know, baseline metrics that shift, do you have the ability to automatically shift the thresholds as well, so that I don't have to go back in and reset thresholds by hand when a baseline shifts?

Bullett Manale: You do, and the nice thing about it is that you can decide that. You can do either. I can set a threshold and make it a static setting, or I can check the box to say, "Make this a dynamic threshold that will change as my baselines change.” And I have the ability in the tool to set a default window of time for my baseline. But then if I need to, I might have a separate baseline window, for example, for my maintenance window from 2:00 a.m., let’s say, until 5:00 a.m., because I'm going to be taxing my CPU, my drives, and everything else, because that's when we do all of our maintenance. If I had it selected to do so, it would then automatically adjust my thresholds to be outside of whatever is normal for those metrics that I choose to do that with. It would allow me to do that. Basically you have an ability within the tool to set windows of time that are your baseline windows, and each window can be treated as a separate entity, in terms of the dynamic baselining adjustment that can be done. And you can add as many baseline windows as you need to, if that makes sense. You could have a weekend window, a weekday window during working hours, a maintenance window that happens in the middle of the night, and so on and so forth.

Mark Madsen: Thanks.

Bullett Manale: I guess going back to the first part of the question, we do have, and collect, all of this information. I didn't really talk about the architecture, but we do have a back-end repository, and you have complete control over the retention of that data, but we also have a service that runs in the middle of the night that goes and does all of our baseline calculations; it takes that data, collects it, and makes sense of it. And obviously, along with that, you also have numerous reports that we can use to report against your baselines for specific metrics. And you even have the ability to compare your baselines of the same server, for the same metric, for different periods of time. You can see if there are differences that have occurred, or what the delta is. There are a lot of those types of options as well.

Eric Kavanagh: Dez.

Dez Blanchfield: One quick question I have for you – there’s a broad spectrum of what this tool can do. Are you seeing an uptake in the use of it in the early stage of development now, or is it still primarily a production environment tool? In other words, are developers getting access and using it through their early development, and then testing integration phase? Or is it still predominantly used in production environments?

Bullett Manale: I'd say that, for the majority of the times we see it in production environments. It depends on the situations, but for the most part I'd say primarily production and we do – and it's also, you know, fair to mention that we have different pricing for dev and test environments, so it's a little bit more attractive. We do see people using it for those environments but I'd say, if I had to give you an answer one way or the other, I'd say it's primarily still production environments where we’re seeing people make an investment for this product.

Dez Blanchfield: Sure, yes, and it was interesting to hear that you’ve got different pricing points, because obviously there are different workloads, and the heavier jobs are going to be where all the real work is being done. But I'm seeing a lot of organizations, particularly in government, and certainly in defense, where development now is getting the same level of investment in tools and systems as production environments, because they’re doing a lot more up-front testing. In defense, for example, there are teams who run billions of tests, hundreds of billions of tests, on applications and systems and tools, and monitor them before they even go into integration testing, because they want to make sure the code that’s built, and the database it’s sitting on, get to the one hundred and one millionth iteration or whatever it is and, whilst you’re out in the field shooting at someone, it doesn't go "bang."

Bullett Manale: Sure.

Dez Blanchfield: In the old-school database world, in my experience, the database environment was something that just lived in the data center and, you know, was very rarely seen, and very rarely spoken of. Now we've gotten to the point where tools and apps are being developed, particularly with analytic platforms, that are in our handsets and our devices. Are you seeing clients bring the conversation of database performance and database management into more day-to-day discussion, as opposed to just purely techies? And I know you mentioned before that predominantly you’re talking to DBAs, but is there a trend now where it's in the general vocabulary? Are you seeing people discussing these topics, as opposed to just the geeks?

Bullett Manale: Well, it's a tough one to say. I mean, like I said, for the most part the people that we deal with, in terms of the selling process anyway, are the practitioners, which are the DBAs. So in terms of your question, are you asking, "In general, are the people within the IT organization becoming more database aware?" I guess that's the question, and I would say the answer is probably "yes." I probably don't see it as much, based on where I am on a day-to-day basis, but I think if I'm understanding your question, that would be my answer, I guess.

Dez Blanchfield: Yes, that’s okay. It's probably a loaded question, sorry, because obviously your predominant interest, in your world, is the technical side of things. I'm curious because, in my day-to-day activities, I'm seeing organizations start to bring this into the conversation very early. So, when they're talking about new initiatives, new projects, new programs of work, one of the things that comes up immediately is, "How are we monitoring it, how are we tracking it, how are we dealing with issues as they arise, as opposed to after launching, after going live?"

Bullett Manale: I would say that –

Dez Blanchfield: Sorry, go ahead.

Bullett Manale: I was going to say that I do see a trend, I guess I should say, in – you know, a lot of times in the past you'd get, "We had a problem, and so now we need a tool." And I think that we’re seeing a little bit more acceptance around having the tool in place before the problem happens, if that makes sense. So I would say that it's definitely becoming more normal to say, you know, “Hey, we need a monitoring tool, we need something." And people are definitely seeing the value of this product, because like you said earlier, just adding DBAs and adding new instances, you need something that manages that. You need something that helps with the management of that, and that's why we're seeing a lot of acceptance around this product as well, or we have.

Dez Blanchfield: Quick question. Where does this need to live? Does it have to sit right there on the LAN, within the data center, as close as possible to the database environments, or can it comfortably be placed somewhere else, potentially out in the cloud, a third-party cloud, with some sort of VPN tunnel or remote access to the various environments? Where does it need to sit, as far as environments and monitoring are concerned?

Bullett Manale: In terms of the architecture, there's a back-end repository, and that's a SQL Server database. We have the console, which can be either a fat client or a thin client; we give you the option of both. And we also have a thin client that's geared specifically to mobile devices as well. But in terms of where this can actually sit, it can sit anywhere in the environment; the trickier part is that a lot of the information we need to collect requires administrative rights, in some cases, or in a lot of cases. Now we don't make you do that; if you want, you can still collect data, and for the things we can't gather because we don't have admin rights, we'll simply not show you that information, if that's the choice you make.

Depending upon the flavor, like if you're talking about AWS, it works better in some environments than in others, but as far as the actual environment itself, typically using SA authentication to collect the data against the instances is all that's necessary. Or if it's an untrusted domain, that's usually when you'd want to use it; with multiple domains, as long as there's a trust between them, we can collect against those. It doesn't really matter whether it's on a LAN or on the WAN; the actual collection itself is pretty negligible in terms of the amount of data we're collecting. As long as you have a sufficiently sized WAN connection, it's not a problem. I've seen environments with branch offices that have SQL Servers all over the United States, one server in each of those locations, and they're monitoring them centrally. The tricky part is just making sure that you have a decent amount of connectivity to do that. Hopefully that answers your question; it was kind of all over the map.
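[To make the agentless collection model described here a little more concrete, the following is a minimal Python sketch of a central collector polling a remote SQL Server instance over the network. This is not Idera's actual implementation; the server name, credentials, and the particular dynamic management views queried are assumptions made purely for illustration.]

    # Illustrative only: a central collector polling a remote SQL Server instance.
    # Server name and credentials are hypothetical placeholders.
    import pyodbc

    def collect_basic_metrics(server, user, password):
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={server};UID={user};PWD={password}"
        )
        cur = conn.cursor()
        # A couple of lightweight DMV queries; the data shipped back over the
        # LAN/WAN link is tiny, which is why collection overhead stays small.
        cur.execute("SELECT COUNT(*) FROM sys.dm_exec_requests")
        active_requests = cur.fetchone()[0]
        cur.execute(
            "SELECT cntr_value FROM sys.dm_os_performance_counters "
            "WHERE counter_name = 'Batch Requests/sec'"
        )
        batch_requests = cur.fetchone()[0]
        conn.close()
        return {"active_requests": active_requests,
                "batch_requests_per_sec": batch_requests}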

Dez Blanchfield: It does, absolutely. Thank you. So, two quick questions that have come through from attendees this morning. One is about the impact of monitoring itself: we often see system-monitoring tools generate load themselves just by monitoring things. The exact wording has scrolled off my screen now, but to paraphrase it: by monitoring, are we generating load ourselves? Is there a measurable impact of the tool just watching the environment, or is it negligible?

Bullett Manale: There's always going to be a little bit of an impact, because it has to query the SQL Server instance to pull back the data. The question, like you said, is, "Is it negligible or is it significant?" Out of the box, pointing at an instance, it's negligible. We've been doing this for, like I said, quite a while now. We have over 20,000 customers, and I can assure you that if it caused a significant performance impact, we wouldn't be in business. That said, we also allow the user to decide what they want to monitor. So I think that's an important thing to mention: every environment is a little bit different.

An example would be the query monitoring component. One of the things we have the ability to do is set the threshold of what you consider to be your boundary of normalcy. It could be based on the execution time of the query, it could be based on CPU or I/O, but as an example, let's just say I've set my execution time threshold to zero milliseconds. Effectively what I'm telling the tool to do is collect all the queries that ran since the last polling interval, and make that part of my historical collection as well.

Now when we do that, we're going to collect whatever queries were run on the box since the last polling interval. That's elective, and the user has the ability to do it. Do we say, "That's what you should do"? No. But we also give you the option, in case you want a sample of data that lets you collect that information. So generally speaking, you have the means within the tool to set it up and tune it exactly how you want, based on what you're comfortable with. But you do have the ability to really open it up if you want to, and collect a lot of additional information that you might not regularly collect otherwise, if that makes sense.
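[To illustrate the "collect everything past a threshold since the last poll" behavior described above, here is a small, hedged Python sketch that filters SQL Server's plan-cache statistics by average elapsed time. The threshold value, the row limit, and the use of sys.dm_exec_query_stats are assumptions for illustration, not the product's actual collection query.]

    # Illustrative only: pull queries whose average elapsed time exceeds a
    # configurable threshold. Setting the threshold to 0 ms effectively keeps
    # every query seen, matching the example given above.
    THRESHOLD_MS = 0  # hypothetical "boundary of normalcy" setting

    SLOW_QUERY_SQL = """
        SELECT TOP (500)
               qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
               qs.execution_count,
               SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE qs.total_elapsed_time / qs.execution_count / 1000.0 >= ?
        ORDER BY avg_elapsed_ms DESC
    """

    def collect_slow_queries(conn, threshold_ms=THRESHOLD_MS):
        # 'conn' is an open pyodbc connection to the monitored instance.
        cur = conn.cursor()
        cur.execute(SLOW_QUERY_SQL, threshold_ms)
        return cur.fetchall()  # rows would be written to the repository as history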

Dez Blanchfield: Yes, absolutely. I know we're running a little bit long, but there are two really great questions I want to throw at you before I wrap up. They both came directly to me, but I think it's best if you answer them. The question, generally, was, "What's the scope of the tool's reach as far as knowledge of existing systems?" Can we just plug this in and have it automatically detect the platform that's there, know what's normal for that platform, and immediately pick up, as Mark was talking about earlier on, some baseline knowledge of the platform it's plugged into? You know, it could be Microsoft Dynamics, for example. What's the scope of the tool's knowledge of what's normal for some of the current off-the-shelf tools being used around business?

Bullett Manale: I would say that, generally speaking, when we start collecting data on the SQL instance, we work with best practices to begin with, in terms of our thresholds and where they're set. That said, we also recognize that, whoever you're talking to in terms of best practices, every environment is different. Initially we just collect the data; you can try the product for 14 days, or longer if you need to, and what we recommend is that after about two days, you'll start to see the baseline data populate. Once it has enough sample information to work with, it will start providing you the context in terms of the baseline, where the range is, and all that kind of stuff. Then from there, if you want to, you can automatically set your thresholds from the information that's been collected. It does take a little bit of initial collection and polling to be able to start determining what is normal, so that you can start shifting your thresholds.

But the thing that I think is worth noting as well is that when you change those thresholds, it can be done on a group-by-group basis across your instances. It can be specific to one instance, or you can do it against all of your instances, and you also have the ability to create things like templates, so that you can say, "This is a production instance, and this is the template that I want to assign to it." Then when a new production instance comes online, we automatically apply those thresholds to it, because it has the same type of hardware and usually the same workloads, so we're able to do it that way as well. Hopefully that helps in terms of the question.
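[Deriving a baseline and an automatic threshold from a couple of days of samples, as described above, can be illustrated with a few lines of Python. The two-standard-deviation range below is an assumption made for illustration only, not the formula Diagnostic Manager actually uses.]

    # Illustrative only: derive a "normal" range from collected samples and
    # suggest an alert threshold from it.
    from statistics import mean, stdev

    def baseline(samples):
        avg = mean(samples)
        spread = stdev(samples) if len(samples) > 1 else 0.0
        # Treat roughly two standard deviations around the mean as "normal"
        # (an assumed rule of thumb, not the product's formula).
        return {"low": avg - 2 * spread, "high": avg + 2 * spread}

    def suggested_threshold(samples):
        # Alert when a new reading goes above the top of the observed range.
        return baseline(samples)["high"]

    # e.g. a couple of days of per-poll CPU readings (made-up numbers)
    cpu_samples = [22.0, 25.5, 19.8, 31.2, 27.4, 24.9]
    print(baseline(cpu_samples), suggested_threshold(cpu_samples))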

Dez Blanchfield: It does, absolutely. In fact, you've actually answered another question that just came in to me, which was, "Is there a trial download?" There is, I can answer that, I know. I'm sure you'll confirm that there's a free download, and I think you said it's 14 days, from the website. You can download it and play with it. Just quickly on that though: "What kind of environment do I need to be able to run the trial? Can I run it on my laptop and play with it, or do I really need a server?"

Bullett Manale: The main thing it needs is a repository, a SQL Server database that's 2005 or above. Other than that, there are some minimal resource requirements, a .NET requirement, and that's it. So, it’s just a matter of installing the product and creating a database.

Dez Blanchfield: Perfect. One last question that I'll throw at you, because we're just about out of time now, but quickly: two or three people asked me, "Do I need to be a DBA to actually get up and running with this and have a play with it?"

Bullett Manale: No. I would say that if you're a DBA, you're going to have different uses for the tool. I mean, there's probably going to be a little bit more value if you're a seasoned DBA; you're going to see a lot more depth to the tool that you'd be able to take advantage of. But for a new DBA, or even a person who's not a DBA, we do have a lot of recommendations, and I'm on that page right now. These recommendations will come up on a regular basis, and the really nice thing about them is that they provide you with the reasons why the recommendations are being made. In addition to that, they also have links to external content that describes in more detail the reasons those recommendations are being made. So that will link to external Microsoft websites, blogs, and all kinds of stuff like that.

But to answer your question: if you're a senior DBA, there's going to be stuff in here that you'll take advantage of that you probably wouldn't as a novice DBA. But at the same time, it's kind of a learning tool as well, because as you go through these recommendations, you'll start to pick up some of these things on your own through the use of the recommendations.

Dez Blanchfield: Fantastic. Thank you. I really enjoyed the demo part. The presentation was great, the demo was fantastic. Quickly, from memory, there's a whole resource center on your website that I recommend people have a look at as well. I remember going through it last night to get some details. You've got a whole range of things, from your blogs and data and conversations through to, from memory, most of your product documentation online as well, yeah?

Bullett Manale: Yes, that's correct, and the forum I think you're referencing is the community.idera.com website. And one thing I would also mention: earlier you'd asked, "Is it going to recognize the environment?" In terms of new instances, or adding instances, there's another tool that we have which does discovery of instances, and it's all about inventory and managing your inventory. I would just point you in that direction in terms of actually discovering the instances. But as far as the performance and monitoring, all that kind of stuff we talked about, that's where Diagnostic Manager comes into play.

Dez Blanchfield: Fantastic. Look, great coverage. Really enjoyed your presentation. Loved the live demo, and that's all from me this morning, as I know we've gone probably 10 minutes over time. Eric, I'm going to pass it back to you.

Eric Kavanagh: Alright. I just loved the demo. I'm glad you did the demo. I'm glad we got to take a nice hard look at that as we went through the Q&A.

Bullett Manale: Great.

Eric Kavanagh: Because this gives people an idea of what you're looking at, and it really does kind of amaze me to think that we're still learning how to talk to these computers, when you get right down to it. I mean, this level of diagnostics is pretty sophisticated, and it's getting better every day. We're getting a lot more insight into what's actually happening. But you really do need a person overseeing this stuff, reading it, putting that cognitive ability behind what you're doing, right?

Bullett Manale: Yes, I mean in a lot of cases – I wish I could tell you this is a DBA in a box, but there are just too many things going on. I mean, we do provide guidance and we do help out, but at the end of the day it requires people making decisions about the data we're presenting. I don't think that's going to change any time soon.

Eric Kavanagh: Well that's good news for the real people out there, folks.

Bullett Manale: That's right.

Eric Kavanagh: You're going to want to have someone watching this, a team watching this, and you'll learn, as you've heard from Bullett here, that by looking at these recommendations you're going to pick up what's going on. And I'm guessing, from that history, and I think you've touched on this, Bullett, but very quickly, that history allows you to recognize significant patterns and then be able to identify them when they happen again in the future, right?

Bullett Manale: That is correct. One of the things we can do is track a query’s performance over time. We can also obviously look at other things, like baselines and see them shifting, and obviously get alerts and things like that when that happens, so you definitely have that ability.
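[As a final illustration of tracking a query's performance over time and noticing a baseline shift, here is a small Python sketch that compares a recent window of timings against the longer history. The window size and the 1.5x shift rule are assumptions made for this example, not the product's alerting logic.]

    # Illustrative only: flag a query whose recent average runtime has drifted
    # well above its historical baseline.
    from statistics import mean

    def baseline_shifted(history_ms, recent_window=12, shift_factor=1.5):
        if len(history_ms) <= recent_window:
            return False  # not enough history to compare against yet
        historical = mean(history_ms[:-recent_window])
        recent = mean(history_ms[-recent_window:])
        return recent > shift_factor * historical

    # made-up timings: a long stretch around 40 ms, then a jump to roughly 100 ms
    timings = [40, 42, 39, 41, 43, 40] * 10 + [95, 102, 97, 99, 101, 98] * 2
    if baseline_shifted(timings):
        print("Runtime has shifted above its historical baseline; raise an alert.")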

Eric Kavanagh: That sounds good, folks. We've run a little long here, but I wanted to get to those questions. Thank you so much for your time and attention. We do archive all these webcasts. Hop online to Techopedia.com or to InsideAnalysis.com, and you'll see links from both places.

And with that, we bid you farewell. Thanks again, folks, we’ll catch up to you next week, three more webcasts next week, Tuesday, Wednesday, Thursday. So we’ll talk to you next week, folks. Take care. Bye, bye.