The DBA’s Dream: Discovery and Management Across the Environment


Host Eric Kavanagh discusses database management with Dr. Robin Bloor, Dez Blanchfield and IDERA's Binh Chau.

Eric Kavanagh: Okay, ladies and gentlemen. Hello and welcome back once again. It’s a Wednesday, it’s four o’clock Eastern Time and for the last few years that means it’s time for Hot Technologies. That’s right, this is our show with our friends at Techopedia – check them out online. They get monster traffic, 1.5 million unique visitors a month. That’s a lot of web traffic. The topic today, “The DBA’s Dream: Discovery and Management Across the Environment.” Yes indeed, it’s a big issue, especially for larger organizations. There’s a slide about yours truly, and enough about me, hit me up on Twitter @eric_kavanagh, I always try to follow back and engage in conversation out there.

Again, we’re talking about database technologies today and really being able to understand what’s going on across a wide landscape of database instances. As many of you know, once you start growing your organization, you get many more of these instances out there and keeping a handle on that stuff can be a bit of an interesting challenge. In fact, I remember a number of years ago, I had a great conversation with a guy who was the director of data governance for the CIO’s office at the Department of Defense. And I was telling him all these interesting things, we had this great conversation and I told him my background story about lobbying for transparency in federal spending, and he laughed and he said, “Oh, so it’s your house where I should send that next predator drone strike.” He said, “Transparency in federal spending? I don’t even know how many Oracle licenses I have around here.” When I heard that, I really could appreciate the magnitude of the challenge that some organizations face.

Now, these days there are lots of interesting tools – we’ll hear about one today – for understanding what’s flying around out there, but even 20 years ago, that was a really serious challenge. When it comes to organizations the size of DOD, you can just imagine that getting a handle on that’s going to save a lot of money, it’s going to save a lot of time, it’s going to solve some governance problems; you wind up solving multiple challenges all at once if you do this sort of thing correctly. We’ll learn about that today.

We have our own Dr. Robin Bloor on, chief analyst of The Bloor Group. We have Dez Blanchfield, our data scientist, calling in from down under, Sydney, Australia. And Binh Chau, senior product manager of IDERA, is on the line as well.

We do #HOTTECH as the hashtag – feel free to tweet away during the show. And we do rely on you guys for good questions, so please don’t be shy: ask questions any time using the Q&A component of your webcast console or that chat window, either way. And with that I’m going to hand it off to Dr. Robin Bloor. Let me hand him the keys to the WebEx. There it goes, and take it away.

Dr. Robin Bloor: Okay. Well, here we go, let’s move on to the first slide. In Italy, they call them Stanlio and Olio, Laurel and Hardy. Back in the 1990s when everybody was worried about the year 2000, I got involved in a number of year 2000 projects. And I went to – let’s call them a large insurance company – and they discovered that they had over 500 applications that they didn’t know existed on the mainframe. They were taking an inventory of the mainframe. Well, in those days, mainframe environments were far better looked after than anything that came later, I mean, there’s just no question about it.

I was really kind of stunned and I talked to people at the organization and they said there was no central comprehensive… there was no person responsible for knowing that information, you know, basically. They never took inventories of their assets. And a database is an asset in no uncertain terms because it contains data and data’s valuable. How many instances is the question and actually, where are they? This is just “What is a Database?” and the way I think of it, a database is a cupboard into which you throw data. And I was talking to a site recently that had thousands of instances of Oracle. Well, Oracle’s a database that, if you use it in any sophisticated way, requires a DBA.

I kind of asked about that and they said they had, I think, about seven or eight DBAs in the whole organization. And I said, you know, “Who’s looking after the other thousands of instances?” And they said, “Well really what’s happened there is that people are just using it as a file system. We have a number of databases which are on large clusters where performance really matters and they have DBAs that are standing over them all the time. And then we have thousands of other databases that nobody is looking after at all.” And I did ask them exactly how many databases there were and they came up with, “Well, the last time Oracle audited it.” They didn’t do audits themselves, you know, which is kind of an interesting thing.

But, you know, there are reasons for using a database. A database implements a data model. It’s there for sharing data: it can manage multiple concurrent requests for data, implements a security model, is ACID compliant, and is resilient or can be set up to be resilient, you know. That’s the reason that we have databases. But, you know, it’s not unusual to encounter sites with thousands of instances of SQL Server or Oracle and most of them are just being used as file systems, basically. And so why would you create a new instance, really?

I know of developer teams that if they’re building a new application, they build it in a silo so any given new application would have a separate database. They wouldn’t necessarily be trying to make a data layer out of things – I don’t think that’s good practice. But there again, you know, if you’ve got a very complicated environment, it becomes very, very difficult to try and put together all of the databases that are related to one another in terms of having data within them where there are relations. Instances get created for replicas.

You know, you can have hot standbys or replicas for availability purposes, but you also have replicas or semi-replicas in data marts. And once the data warehouse world was introduced, the question became, you know, how many data marts were out there, with people just using them as clone files, taking data out of the data warehouse and not particularly caring about performance, in the sense that they would just make do with default performance. Most of these people probably didn’t even know that you could actually tune databases. I’ve seen designs that have sharded data into distinctive heaps for the purpose of distribution.

You know, you often get this replication situation where you’ve got multiple depots within an organization and they’ve each got databases and each is a shard of a central database. You get instances from sharding. Poor design decisions – I’ve seen some really bizarre designs take place in terms of databases where people have created separate databases for no good reason. And as I’ve noted, databases are file systems.

And then there are the test and development environments that need to be stood up and torn down, but they all count as database instances and all of them, by the way, need to have security and all of the other stuff that a database hopefully provides. Instance considerations – a database workload can only be optimized for a specific instance. If you’re really interested in having absolutely the best performance, then having data sharded off in loads of databases isn’t necessarily going to give you that kind of optimization.

There’s a reason not to create spurious instances of data. Mixed workloads on the same database, as the counterpoint, can lead to poor performance – most notably, OLTP and large query traffic simply don’t mix, never have mixed and probably never will mix. It’s usually best to consolidate a database at the server level rather than having multiple VMs. But VMs provide isolation; with some people it’s a design decision to isolate data from other data so that, you know, if that application fails, or if that database fails, it doesn’t bring my application down.

The problem with that, of course, is that you end up running into the next point, which is database license fees. Those vary, but I’ve seen database license fees become a design criterion because somebody didn’t want to bust a particular number, and therefore people end up designing systems poorly simply because of the way that the database license works. And there’s the other thing: if you start to consolidate all your databases, it is worth noting that DBAs are expensive. That’s not such an easy thing to do.

A simple view of the world – and this is the last slide really – there’s a data layer, there’s a transport layer and there’s a processing layer. And all of the hardware sits underneath that. It isn’t really possible to optimize the data layer without knowing exactly what’s in it and why.

And having said that, I shall pass on to my friend from down under, Dez Blanchfield.

Dez Blanchfield: Thank you, Robin. Let me just get my mouse sorted out here. So, I’m going to give us a couple of anecdotes today because this is a huge topic and I could spend two weeks with a whiteboard marker having fun with it, because I’ve had nearly three decades of ups and downs in this space.

But first, a mental visual picture. When I think about the challenge that we’re talking about today – and essentially, we’re talking about database growth, replication and sprawl and all the challenges that come with that – I wanted to just put this picture of a giant oak in our mind. These are famously beautiful trees, they start out as a tiny acorn but they grow to these behemoths. And when they do so, they’re very big and messy. And as you can see from this image, as a visual metaphor, if you like, you know, branches going everywhere and then twigs coming off those and leaves at the end of those and they’re in all random, chaotic shapes, and that’s just the bit we can see above the ground.

I kind of think of those as data inside the database, and below that there’s a structure of roots and they tap into all kinds of directions. But it seems very clean and sensible at the surface of the ground there where it’s nice and flat, but the reality is it’s just as crazy under the ground as it is above the ground; we just don’t see it. And I often kind of use this when I start thinking about how to describe the challenge we’re talking about today to organizations from the board room down to the techies to try and get them to visualize what’s actually happening in their organizations. Because it’s so easy to look at a computer screen and see these beautiful fields of rows and columns and think, “We’ve got it sorted out, it’s no big deal.” But that isn’t the case at all. And so it’s at that point I usually hit this one line saying that databases in my mind are like acorns, you know, they start small and grow, but before you know it, you’ve got a forest of giant oak trees, and hence the visual.

So, two anecdotes just to share a scenario that grew out of control and just couldn’t be fixed, and then another one that did a similar thing but was able to be fixed, and I will highlight the key point of today’s discussion around how we came about it.

The first one was a scenario where a CIO with the greatest of intentions over time unwittingly caused one of the most unexpected and unwanted sprawls that just grew beyond control. It was a scenario where a government organization with thousands of staff, very technically savvy staff, were demanding access to its systems and tools that they could start to collaborate with and automate a lot of their processes. They wanted to get away from paper forms and they wanted to create online systems, they wanted to capture data and track it and monitor it and report it back and present it back to their peers.

And there’s all kinds of things, there’s things from people turning up to their offices and clocking in and signing in for security purposes all the way through to who was ordering what at the cafeteria at lunchtime. And so, a well-intentioned CIO decided that Lotus Notes was a great idea because he’d been to a series of seminars and IBM had done a great job at pitching it and in the right scenario it would have been a great decision, had it been done under control. But what happened was instead of handing Lotus Notes to a team of technical people to sort of implement in an environment and then stand up sensible tools and so forth and provide some control and governance around it, what actually happened was it got deployed to the standard operating environment, SOE, so every desktop effectively became a server.

And so, they provided training and hands-on notes and documentation for this whole process and all of a sudden people realized, “Yay, I’ve got Lotus Notes on my desktop!” What does this mean, do you think? Well, it meant that thousands of very technically savvy staff were taught how to script and write apps, effectively, in Lotus Notes, create little databases which essentially looked like spreadsheets, rows and columns and fields, and present these little web interfaces through Domino.

If I wanted to capture information about something, I could just create a little form in a spreadsheet-type interface, put it into a file, create a little Lotus Notes database behind it and present it as a web app and start collecting information. And that sounded great until it had been running for years and all of a sudden someone woke up and said, “Well hang on, why are there 10,000 new database-powered apps appearing on the LAN, and particularly in the last 12 months? What’s going on?” Well, what happened was, you essentially gave people a gun, and it was loaded and the safety was off, and of course they shot themselves in the foot.

And there’s this great image here that I usually conjure up in my mind of an Italian artist who does this weird thing where he gets a truckload of hay and straw and dumps it into the middle of an art studio and then gets a curator of the art studio to randomly shove a needle into the middle of it. And then he spends days on live feed, on camera, going through the straw looking for the needle in the haystack, as it were. Until eventually, after hours and days, he finds it and jumps up and down and gets excited. And anyway, Italian artist, what can you do? But it’s quite humorous and if you ever watch it online you’ll find it very cathartic.

Here’s a nightmare scenario where a well-intentioned technical person gave business people – very technically savvy business people – a tool that was supposed to make their lives easier. But before long we had questions like who’s backing them up, who’s monitoring and supporting them, where is this data, what structure is the data in, who’s policing the schemas, what if I want to create another version, what data is in those versions, can I do a dev test integration journey on these things?

You know, you can draw your own conclusions on how it went, but it didn’t go well and you can imagine: just hundreds of terabytes of data, not backed up, sitting on, effectively, PCs or laptops on desks, some systems not even being available because people didn’t realize that when they shut the laptop off at 5:30 and took it home to do work, no one on the LAN could get to that application. It didn’t end well. And a great deal of data had to be cleaned up and manually manipulated and brought back into a sensible system; the majority of it was just wiped out and removed, because it just couldn’t be allowed to sprawl further.

Then there’s my second anecdote, where things took a very different journey. Imagine a scenario: you’ve got dev, test, integrations, systems integrations, user acceptance testing, production, disaster recovery, backups and backup copy one through to 99 and beyond, you’ve got upgrades, patches, and then demonstration environments from one through to 99 and more. And all of a sudden you sit there going, “Wait, what’s going on, hang on, who’s using what?” You know, this is a nightmare potentially waiting to happen.

But in this scenario what happened was I had opportunity to go into an organization who wanted to extract a wealth management business unit from their core banking platform and stand it up as a separate organization in essentially a startup within an enterprise. The challenge was, take our wealth management business unit and all the people and technology and data around it in the public services, create a startup inside our own company and carve it off so it can run on its own brand.

This is a global leader in banking, which I won’t name. We had to extract the wealth management business unit itself and all of the things around it. So, everything in its entirety, all the staff, the physical infrastructure, and move it into a new office space. All the business systems, all the software, all the data, all the licensing, you name it. Well, you can imagine, that looked like a bit of a nightmare to start out with.

And to put some context around it, we’re talking about 78 systems in the original banking platform supporting about 14 core products, which could be about a thousand different offerings. Hundreds and hundreds of live databases in use, and when I say in use, we had to move them in situ, so on a Friday afternoon they’d be at one environment, on Monday they’re expected to be somewhere else and on Saturday and Sunday they had to have this cross-over where transactions went from one system on the left, let’s say, to visualize it, to another system on the right.

About 15,000 customers with countless records each, and an ETL nightmare because none of the 78 systems on one side were matched by systems on the other side. We had a completely new banking platform, new systems, new software, new databases and new schemas. So, metadata, fields, rows, columns, records, tables, you name it, nothing matched. There were 14 different active development teams, one for each product. And when we built this environment we found that by the time we had development test, integration, systems integration, user acceptance testing, production, disaster recovery, demonstration copies, backups, upgrades, patching – I even missed one there – training, for example, and education, there were 23 versions of each of these environments for each development team.

Now, you sit there and all of a sudden your blood starts to curdle and your skin goes cold and your hair stands on end – that can never end well. Well, it turns out it ended up very well, because the very first thing we did, before we even started the technology deployment design, was go and get the right tools. And we used tools, and not necessarily people, but people driving tools. We used tools to map the data, we used tools to map the databases they lived in, we mapped all the metadata, the schemas, and all the way down to rows, columns, records and fields.

We knew what we were coming from and then we correlated that to the map of what we were putting in place as far as the off-the-shelf banking platform looked like, and we had a one-to-one correlation. And anything that fell off in the middle, we created a data room where we’d go through and manually map them. But, prior to doing any deployment and any setting up of these environments in the new world, we made sure that every single record, every single table, every field, every row, every column, every database, and all the metadata around it, all the permissions and controls were mapped, from one to one. And we didn’t move a single thing until that correlation was made.

And so, the ETL piece went from being a nightmare to a fairly painless process of just validating the controls and processes being followed. And we could do this on a regular basis, almost hourly. We were doing transitions from production in the old world to new environments of dev, test, integration, etc., in the new world. And on the day we went live – after a five-month process and then a month of testing, so in six months it was online and active – we only had one issue, and the issue was that someone forgot their password and it had to be reset. That was the only issue we had, and it essentially created about an hour of stress with people thinking something had gone wrong – it turned out a password had expired, they’d forgotten what it was and it had to be reset.

You can imagine that scenario, compared to the Lotus Notes environment where someone had great intentions but didn’t think through the challenge, and next thing we had to go and try and map all this data and the bulk of it had to be written off and it was just a great loss of time and effort and resource and morale. To a scenario where, when it’s properly planned and properly done and delivered appropriately with the right tools, we got a great outcome.

And so that point brings me to this one line – before I hand over to our associate to talk about what IDERA has to offer to solve this very challenge – and that is, in today’s world, where systems are increasingly powered by databases, it’s not just a nicety; to me it’s a fact, a necessity, that smart tools are, in my experience, the only way to manage data discovery and data management at the scale and speed that we’re moving.

And if it is done right, as the second anecdote that I just shared hopefully illustrated, it can be a very painless and very seamless process. Not just in new projects, but getting your arms around a current environment and ensuring that any time and day you can track and trace what’s happening in your organization, what database is there, what versions of database are you running, and who’s using what.

And to that end I will hand over to our associate from IDERA, and I look forward to hearing what they have to offer on the table and how they’d solve this very challenge.

Binh Chau: Great, thanks, Dez. Can you guys hear me okay? Alright, thank you. Hi everyone, I’m Binh Chau with IDERA. Today I’m going to talk a little bit about a product that we have called SQL Inventory Manager. It gives you discovery and the ability to inventory your SQL Server instances and databases out there, to kind of get a handle on what you have in the environment, and I’ll talk about some of the other things that Dez and Robin brought up in terms of database sprawl and the need for data these days.

With that, here are some considerations that you’ve heard, I think, anecdotally through the two tales that Dez was describing. But basically, today there’s so much need for data, and business groups out there are kind of spinning up their own applications and servers, particularly with SQL Server, right? Because you can easily spin up a SQL Express version or BI services, there’s just SQL sprawl going on at many organizations, you know, from the small to the large.

A lot of times DBAs are not aware that somebody decided to start, you know, create an instance rather than just putting a database on an existing instance. They’re not aware of these things until potentially there’s a problem and someone’s calling the DBA, “Oh no, my application stopped working, it’s not able to connect to a database, what’s going on?” And you know, when the DBA’s asking some questions they discover, “Hey, this one wasn’t on our radar, we weren’t aware of it.”

Another one is licensing costs, right? With the Microsoft SQL Server license, the way it works is that you’re not required to have a specific key for the number of instances that you have. You can deploy, and then they do an audit. You know, they do an audit later and kind of discover how many licenses you actually need. And so, if they’re doing an audit and you’re not aware of the unknown servers, it could result in kind of a costly audit. And so, having the tool, or having an inventory ahead of time, to know what your licensing costs are, and being able to not only know but also manage it, is a good thing to have.
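To make that audit exposure concrete, the per-core arithmetic can be sketched out. This is a hypothetical illustration, not IDERA’s or Microsoft’s actual calculator; it assumes the commonly cited per-core rules since SQL Server 2012 (every physical core licensed, a four-core minimum per processor, licenses sold in two-core packs), which you should verify against your own agreement:

```python
# Hypothetical sketch of SQL Server per-core license counting.
# Assumed rules (verify against your agreement): license every physical
# core, minimum of 4 core licenses per processor (socket), sold in 2-packs.

def core_licenses_required(sockets: int, cores_per_socket: int) -> int:
    """Core licenses needed for one physical server under the assumed rules."""
    per_socket = max(cores_per_socket, 4)   # 4-core minimum per processor
    total = sockets * per_socket
    if total % 2:                           # round up to a whole 2-core pack
        total += 1
    return total

# Example inventory rows as (host, sockets, cores_per_socket)
inventory = [
    ("sql-prod-01", 2, 8),   # 16 cores -> 16 licenses
    ("sql-prod-02", 2, 2),   # only 4 cores, but the 4-per-socket minimum -> 8
]

for host, sockets, cores in inventory:
    print(host, core_licenses_required(sockets, cores))
```

An inventory tool that already knows every instance’s socket and core counts can run this kind of sum before the vendor’s auditors do.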

And then, what I just talked about: a lot of times, if you’re not aware of a server and things are running along fine, everything’s fine, but the only time you’re made aware of it is when there’s a problem. And so that could lead to production interruptions, or maybe the server wasn’t maintained and you didn’t get a patch on that server and that creates an issue.

Some of the questions a DBA faces day to day, you know, could be administrative or strategic, but they’re things like: Microsoft just released a critical systems patch – how many systems out there will need this new patch? Who’s going to be impacted by downtime if I need to take the system down to patch it? How can I easily get to that information? Do I have to go into a spreadsheet? Do I have to go into multiple systems to find that? Do I have to reach out to the different business groups to get that list? It’s really hard to piecemeal it.

Another good one is basically, someone comes along and they say, “I need a new database. It’s going to require X size and it needs to have this much capacity,” and then they want to know where they can put it. Without knowing what’s in your landscape it’s hard to tell them, okay, we can put it here, here or here. You kind of have to go and do the manual checks needed to get that done. And we talked about the auditing, and also the rogue server.

If you have a rogue server out there, you don’t know what state it is in, whether it’s been backed up, whether it has all its patches. Sometimes you may not become aware of those things until there’s a problem, which would be bad.

Those are kind of all the challenges, the questions, that a DBA faces day to day – what gets thrown at them. So, I wanted to introduce to you SQL Inventory Manager, which is a product that we have out there. It does a couple of things. It does discovery, which is basically kind of going out into your environment to see what SQL Servers are out there in your environment. And then it can also auto-discover, so basically, once you’ve run a discovery, you can set it to go out there daily or weekly – whatever time frame you like – to discover new instances out there.

And then you can also have it auto-register those instances so you can start monitoring them and check on the state of their health, and then you can start cataloging and inventorying those instances so that you can have a good view of your SQL Server landscape. What’s out there, what’s production, what’s development, what’s disaster recovery, what’s less critical and, you know, what applications are running on them. And you can also get alerts for when a health check is failing, so basically if the server goes down or [inaudible], as well as a number of additional things you can [inaudible] tool itself.
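The webcast doesn’t say how SQL Inventory Manager’s discovery works internally, but one documented way to find SQL Server instances on a network is the SQL Server Browser service (the [MC-SQLR] protocol on UDP port 1434), whose response message is a semicolon-delimited string of instance properties. A minimal parser sketch for that response format, with a synthetic packet for illustration (the host and instance names are made up):

```python
import struct

def parse_svr_resp(payload: bytes):
    """Parse an MC-SQLR SVR_RESP datagram into one dict per instance.

    Layout assumed: 0x05 marker, 2-byte little-endian length, then ASCII
    data of "Key;Value;..." tokens, each instance terminated by ";;".
    """
    if not payload or payload[0] != 0x05:
        raise ValueError("not an SVR_RESP message")
    (size,) = struct.unpack_from("<H", payload, 1)   # RESP_SIZE
    data = payload[3:3 + size].decode("ascii")
    instances = []
    for chunk in data.split(";;"):                   # one chunk per instance
        tokens = chunk.strip(";").split(";")
        if len(tokens) >= 2:
            instances.append(dict(zip(tokens[0::2], tokens[1::2])))
    return instances

# Synthetic response, purely for illustration:
sample = "ServerName;HOSTA;InstanceName;SQLEXPRESS;IsClustered;No;Version;15.0.2000.5;;"
packet = b"\x05" + struct.pack("<H", len(sample)) + sample.encode("ascii")
print(parse_svr_resp(packet))
```

A real discovery pass would broadcast the one-byte client request to UDP 1434 and feed whatever comes back through a parser like this; whether IDERA’s tool uses this mechanism or domain/WMI scanning is not stated in the talk.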

Eric Kavanagh: You’re getting a little bit soft, just so you know.

Binh Chau: Sorry, is this better? What I wanted to do was take you guys through a demo, show you guys what it does. Hang on a second, let me share my screen first. Are you guys seeing the web interface? This is the SQL Inventory Manager interface – it’s a web-based interface. The screen that I’m showing you here is our Database Instance View. Across the top, you can see we’ve got different [inaudible]. So, “discovered” is basically all the instances that it’s discovered on the network. And what it’s going to show me is basically [inaudible].

Eric Kavanagh: You’re starting to break up just a little bit there. You may want to put the phone down and put it on speaker. Go ahead.

Binh Chau: This Discovery screen will show you everything that Inventory Manager’s discovered on your network. Here it’s discovered like 1,003 servers out there. And it will tell you the version, the edition, if it can find it, when it was discovered and how it was discovered. Let’s say for example I choose to ignore some of these – meaning, you know, maybe I want to ignore the Developer Editions because they’re not as important to me – I can choose to ignore these and it’ll put them on the Ignore tab so the next time I run Discovery, it’s not going to show them to me again. Now I can set it up to do auto-registration or I can manually register.

And so here I have selected to monitor six instances. And here it’s logged in and it’s going to run periodic checks on these, and there are multiple checks – anything here from, you know, checking every 30 seconds to see if the server is up or down – and it gives you kind of an overview of what that state is. Basically here it’s telling me that I’ve got one server that’s down and these five that are up. It’s also telling me the server editions, the number of databases, the status of the databases, any additional inventory or metadata around that server. I can also get to the Licensing view from here. Here it’s giving me some of the Microsoft licensing information that I need if I wanted to get a total or summary ahead of a Microsoft audit.

Here is the number of cores, the number of sockets, the possible core licenses, which is something that Microsoft introduced starting with SQL Server 2012. That was our Instance view. Our Overview page – this is kind of the page that you will open up to. This will show you the health checks or recommendations it has; like right now it’s telling me that I’ve got nine databases that do not have a current backup. I can click in there to go down to the details of which databases those are and I can go in and take an action on them if I needed to. It tells me all of the top databases by size, top databases by activity. I can click into a particular server and get more details about it.

Eric Kavanagh: While that’s rolling, what you’re showing us here is the ability to see really anything that’s connected to the network, is that right?

Binh Chau: Right. This is showing anything I have chosen to monitor using Inventory Manager. This is a SQL Server; it shows me here all the applications that are connected to the server. Again, I can get at all the databases that are associated with this server. Over here I could tag things. I can create a tag for this particular server, whether or not it’s a Precise domain. We have customers that use it because, say, they want to tag their production servers or their dev servers, and then they can kind of get a full report of the way things are. As I go over to the Administration tab, this is how I can run Discovery. And Discovery is basically going to go out into your network and find all the SQL Servers in your environment.

Here, I have this Precise domain, which is a domain of ours, and I’ve set it up to say, you know, on this particular domain use this particular Windows user account to do discovery, and I want you to do a complete scan. I can also select to specify “Only scan this particular subdomain” or “Only scan the parent.” But in this case here I’ve said run the complete scan. Here are the different scan types I can use, and if I save that, then basically it’s a job that I can set. Right now, it’s off, meaning that I would have to manually run these scans. But if I wanted to, I could set it daily, you know, run the job daily. Or if I choose not to run it daily – it’s too much – I can say run the job weekly on a specific date and time.
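That daily-versus-weekly job setting boils down to computing the next scheduled run time. A minimal sketch of the weekly case (function and parameter names are illustrative, not the tool’s API):

```python
from datetime import datetime, timedelta

def next_weekly_run(now, weekday, hour, minute=0):
    """Next occurrence of `weekday` (Mon=0 .. Sun=6) at hour:minute after `now`."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    candidate += timedelta(days=(weekday - now.weekday()) % 7)
    if candidate <= now:                 # that slot already passed this week
        candidate += timedelta(days=7)
    return candidate

# e.g. a discovery scan every Saturday at 02:00, asked on a Monday at noon
print(next_weekly_run(datetime(2024, 1, 1, 12, 0), weekday=5, hour=2))
```

The daily case is the same idea with a one-day step instead of seven.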

And then Auto Registration here: if this is turned on, what it will do is that every time it finds a new server it’s going to automatically register it into Inventory Manager so that I can start monitoring it. If there’s some sort of edition that I want to exclude – like, for example, I don’t care about Express or Developer edition because those are development environments – then I would just click those here, and what it’ll do is say, every time I find something new I’m just going to add it to Inventory Manager so that you can monitor it, as long as it’s not a Developer or Express edition.
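The edition-exclusion logic just described is essentially a filter over the discovery results. A sketch, with hypothetical field names rather than the tool’s internals:

```python
def auto_register(discovered, excluded_editions=("Express Edition", "Developer Edition")):
    """Keep newly discovered instances whose edition is not on the exclusion list."""
    return [inst for inst in discovered
            if inst.get("Edition") not in excluded_editions]

# Illustrative discovery results (names invented):
found = [
    {"Instance": "HOSTA\\SQLEXPRESS", "Edition": "Express Edition"},
    {"Instance": "HOSTB", "Edition": "Enterprise Edition"},
]
print(auto_register(found))   # only the Enterprise instance gets registered
```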

And here’s where I can set the tags, so for example, if I have production servers I could go here and tag those servers. I could tag either a database or a server with a specific blue tag; for example, I could say that this AO_NODE should have a Production tag. That way, if I need to get to those servers easily, I can go out here, click on the Production tag, and it will take me right to those two servers. This is our Explorer view, and this is shown by Owner, but I could also show it by Instance tag, [inaudible] by databases too, and I can expand these to see what they are.
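
Conceptually, the Production-tag lookup above is just a filter over a tagged inventory. A minimal sketch, with made-up server names and a hypothetical `by_tag` helper:

```python
# Illustrative only: filter an inventory by tag, the way clicking the
# Production tag jumps straight to the tagged servers.

servers = [
    {"name": "AO_NODE",  "tags": {"Production"}},
    {"name": "AO_NODE2", "tags": {"Production"}},
    {"name": "DEV01",    "tags": {"Dev"}},
]

def by_tag(inventory, tag):
    """Return the names of all servers carrying the given tag."""
    return [s["name"] for s in inventory if tag in s["tags"]]

print(by_tag(servers, "Production"))  # prints ['AO_NODE', 'AO_NODE2']
```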

Another useful feature that we’ve built, and that people really like, is the ability to look at what you’re managing through Inventory Manager and see what patch level everything is at. Basically, it’s telling me here, for the six servers that I’ve got managed in my tools, whether there’s an update available from Microsoft, and whether the version that I’m on is supported – the support status. If I wanted to find out more about a particular hotfix, I can click on it and it will link me to the Microsoft article about what that hotfix is and what it addresses. You can export this list if you wanted to, so you can say, “Hey, I need to patch maybe three of these servers this weekend and the other three at a later date.”
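
The patch-level check described above amounts to comparing each server's installed build against a list of known builds. A rough sketch, assuming made-up build numbers and a hypothetical build-list structure (not IDERA's actual data or logic):

```python
# Illustrative sketch: for each managed server, is a newer build
# available, and is the installed version still supported?

BUILD_LIST = {
    # version prefix -> (latest known build, still supported?)
    "13.0": ("13.0.5026", True),   # hypothetical SQL Server 2016 entry
    "10.0": ("10.0.6556", False),  # hypothetical SQL Server 2008 entry
}

def patch_status(installed_build):
    """Compare an installed build string against the build list."""
    prefix = ".".join(installed_build.split(".")[:2])
    latest, supported = BUILD_LIST.get(prefix, (installed_build, False))
    return {
        "update_available": installed_build < latest,
        "supported": supported,
    }

print(patch_status("13.0.4001"))  # update available, still supported
print(patch_status("10.0.6556"))  # up to date, but out of support
```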

The Build List – there’s a list that the tool checks against to see whether your version is up to date. You can go out and download this list to make sure you have the latest one to compare against. Another neat inventory feature that people like is the ability to add not only tags but also custom inventory fields. Say I wanted to add a field at the database level – Department, for this database – I could make it a different type: open ended, true/false or picklist.

And I could say, you know, this is HR, marketing, R&D, finance. What this does is, once you can tag these things, you can get data out of here that says how much capacity each database is using, and then you can start to see whether it’s growing and whether it makes sense to charge back these departments.
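
The chargeback idea above is, at its core, a group-by: once each database carries a department field and a size figure, you can total capacity per department. A minimal sketch with invented names and sizes:

```python
# Illustrative only: aggregate database capacity by a custom
# "department" inventory field to support chargeback decisions.

from collections import defaultdict

databases = [
    {"name": "payroll",   "department": "HR",        "size_gb": 120},
    {"name": "campaigns", "department": "Marketing", "size_gb": 45},
    {"name": "benefits",  "department": "HR",        "size_gb": 30},
]

usage_by_dept = defaultdict(int)
for db in databases:
    usage_by_dept[db["department"]] += db["size_gb"]

print(dict(usage_by_dept))  # prints {'HR': 150, 'Marketing': 45}
```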

Another thing is, if you have to run maintenance, by knowing who’s in that database you know who to contact to let them know, “Hey, I’ve got to run maintenance this weekend, your databases will be offline,” and so on. Another useful feature people like is the search box up here. A lot of times DBAs are asked about a database or an application or a server, depending on who’s talking to them, and it’s hard to figure out exactly where that sits. What you can do here is, even if you don’t know where the database lives, you can just type it in. I could type in “IDERA Dashboard” and it’s going to pull up a couple of databases and where they sit, so you can easily get to those. And then it pulls up additional information about them: their size, log size, whether it has ever had a backup, what recovery mode it’s in, and any tags I’ve added. There are a lot of different features within this tool – it’s an inventory tool, but an inventory tool that’s very specific to SQL Server and to DBAs.

Because there are, I guess, additional things the DBA would like to have access to, to get a good view of what the environment and landscape look like for their databases. You can also configure the SMTP server and set up alert subscriptions for yourself or for any users on here. I’m going to stop this and go back to the presentation. This last slide here is just a simple view of the architecture. It is a web console that runs on an embedded Tomcat web service.

We have some collection services and management services that write into a repository, and the management services go out and run Discovery on your various SQL Server instances. There’s nothing installed on your monitored servers. We have jobs that run periodically that just collect data about them – basically whether each is up or down, how much data is being used, what versions people are on. Well, that’s all.

Eric Kavanagh: Yeah, let me ask you – I’ll ask a couple of questions and then I’m sure Robin and Dez have some as well – just out of curiosity, when someone comes in to do an audit, let’s say Microsoft, are they using this tool, or do they, I presume, have some proprietary tools of their own?

Binh Chau: Yeah, I believe they’re using proprietary tools. The thing is, this tool is an inventory tool, so it stays up to date: because it has a job to go out and continuously collect information about your servers, at any point in time you’ll have current information about how things change, versus a one-time report you may get from Microsoft saying this is the number of servers you have, these are the versions you have.

Eric Kavanagh: Yeah, I’m curious about Discovery. So when someone buys this tool and begins using it, how does the discovery actually happen? This was kind of what I was alluding to earlier – in other words, are you tapping the network to see which signals flying around out there appear to be database instances, cataloging those, and then, once you’ve tagged a database instance, monitoring it? I’m guessing it has a sort of ping that it does every so often and if it goes down, for example, that’s how you know it’s down. Is that kind of how things work?

Binh Chau: Yeah. I mean, once you’ve turned on Discovery it goes out onto your network, and we’ve got several different scans it can run – a browser scan, a registry scan. It does different scans to see what computers are out there, and then it does a check: are there SQL Servers out there, or BI services? Then it brings that back, pulls it into the tool and shows it to you: “Hey, here are all the things that I discovered.”

And then if you were to say, “I want to monitor this,” it’s going to keep track of that and ping it. It has jobs to ping it every so often to say, “Okay, check this now” – you know, the database availability – check the database history, check the database size. It runs a series of jobs to check the databases that you’re monitoring.
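
The agentless polling just described – a scheduler running a series of check jobs against each monitored instance, with nothing installed on the target – could be sketched like this. The function and field names are hypothetical; a real check would open a SQL connection rather than read a flag.

```python
# Hedged sketch of agentless monitoring: run each registered check
# against each monitored server and collect the results.

def check_availability(server):
    # Placeholder: a real implementation would attempt a SQL connection.
    return server.get("reachable", False)

def run_checks(servers, checks):
    """Run every named check against every server; no agent on targets."""
    results = {}
    for server in servers:
        results[server["name"]] = {
            name: fn(server) for name, fn in checks.items()
        }
    return results

servers = [{"name": "PROD01", "reachable": True},
           {"name": "OLDBOX", "reachable": False}]
checks = {"availability": check_availability}
print(run_checks(servers, checks))
```

In a real tool these checks would run on a schedule (the periodic jobs mentioned in the walkthrough) rather than once.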

Eric Kavanagh: Yeah, that’s good. And we have a question from an audience member. I know that you guys have tools that work with a variety of database technologies, but this one in particular you’re showing today, is this just for SQL Server or does this cover other database types too?

Binh Chau: Right now, this particular tool covers SQL Server.

Eric Kavanagh: Okay, that’s fine. Well, let me turn it over to Robin, I’m sure he’s got a couple questions, then maybe back over to Dez. Robin?

Dr. Robin Bloor: Yeah, sure. Microsoft fairly recently – sometime in 2016 – announced SQL Server on Linux, but I don’t think it’s delivered it yet. I just wondered if you’ve got any comments on that. Are you aware of that? Are you playing with that?

Binh Chau: Yes, we are. We are planning to include that. I mean, the nice thing about this tool is – I’ve talked to a lot of customers that have built their own home-grown tools to do the same thing, but they have to keep up with the new editions and versions that Microsoft comes out with, whereas when there are new versions and editions, we get in on them early to make sure that the tool will be able to monitor and manage them. So, SQL on Linux is something we plan to add and make available when it’s available – I believe later this year.

Dr. Robin Bloor: Yeah, that’s interesting. Are you expecting a lot of your customers to actually do that? I mean, SQL Server’s a very sophisticated database, in my experience. It’s long in the tooth, is probably the thing to say. The original Sybase that it came from was actually fairly simplistic in a lot of what it did, but Microsoft has added more and more over the years. Is all of that going to be available on Linux? I mean, will you be advising your customers on whether to make that migration?

Binh Chau: I’m sorry, is the question are we seeing people ask for that?

Dr. Robin Bloor: Well, given you’ve messed about with it, is it as sophisticated on Linux as it is on Windows?

Binh Chau: I haven’t played with it myself, but what I’ve heard from a colleague is that it’s actually pretty much on par. But I personally have not played with the new version of SQL on Linux.

Dr. Robin Bloor: Okay. Am I right in thinking that you’ve simply put agents on every SQL Server you find? Is that how this tool works?

Binh Chau: No, we actually don’t put agents on. For this particular tool, the Inventory piece, we don’t put agents on there. We just go out, make a call and check statuses. One nice thing about this tool is that it’s agentless.

Dr. Robin Bloor: So, you’ve got other SQL Server tools, can you kind of remind me as to what other products you’ve got in this suite that deal with SQL Server?

Binh Chau: Yes. We have SQL Diagnostic Manager, which is a monitoring and performance tool. It does more in-depth diagnostics, performance analysis and health checks for you than Inventory Manager – Inventory Manager is the lightweight version of that health check. We also have Compliance Manager and SQL Secure, which are part of our security suite. They tell you basically who’s accessing your data, what data they’re accessing and why, and they help you with compliance and other reporting guidelines. We have SQL Safe, which is our backup tool – it does backup and restore, and that’s a nice one.

We also have our Enterprise Job Manager, which just monitors your jobs. And then we have the Toolbox, which includes the Admin toolset and the Comparison toolset, as well as SQL Doctor. The Admin toolset and Comparison toolset are what I think of as a Swiss Army knife: they have multiple tools in there to help the DBA do various things like check patches or move or clone a database. There are 24 such tools in that Toolbox.

Dr. Robin Bloor: So, are the people that go for Inventory Management, are they normally already users of your other tools? Or is this kind of an entry point? I can imagine – I mean, you can tell me if you’ve got any war stories – but I can imagine if you’ve never actually run an inventory in a fairly sizeable data center, the experience can be quite sobering. Is that what you find?

Binh Chau: Yes. I mean, we have customers that are introduced to the tool from other toolsets, but we also have customers that come looking for a tool like this because of projects they have. One example: there was a company that merged with another company, bought a series of companies, and needed to consolidate their SQL Server footprint in order to reduce costs. So they were looking for a tool to go out and discover everything they had, so they could start figuring out how to consolidate it.

Dr. Robin Bloor: Right, I understand. I guess that’s quite common with mergers when you think about it. Okay, I’ll hand on to Dez, I don’t want to take all the time. See what questions we’ve got from Australia.

Dez Blanchfield: Thank you, yes, the questions are always upside down here. One of the things that comes to mind, and I get this quite a lot: companies aren’t quite sure where to draw the line as to when to start to invest. In your experience – given that you’re at the coalface – when is the right time to start investing in tools like this to ensure you don’t get into trouble? Do you do it from day one, when you start building the database infrastructure of a new organization, or, as you just outlined, when you do an acquisition or merger?

Or is there a particular scale you really need to be at? Do you need 10 or 100 or 1,000 databases? Given your long experience of this market, when’s the right time to get into this space, and where should you start? What does it look like when you get started?

Binh Chau: I mean, I think if it’s a very small organization, with one DBA or a couple of DBAs, you may not have a need for this tool. When you start to get a group of, I don’t know, three or four DBAs and maybe 50 to 100 servers, you may want to start doing something like this. And as your organization grows larger and you have tech-savvy business people who – like in that example you gave – want to install applications and databases on their own, that’s when you want to have this kind of tool, because that way you can see what’s out there.

But even in a smaller organization, it’s nice to have this type of tool to keep track of what you have – so that you can say, “Oh yeah, I bought SQL 2012 for this box, but it’s currently running SQL 2008 because I have an application that still needs that legacy version.” It helps to have that inventory tool just to get away from managing multiple spreadsheets that can become stale.

Dez Blanchfield: The other question I had, just following on from that: what types of skills or resources should organizations plan to have when they do get to that scale? Is there a particular skill set you really need, or a type of experience or background, or a type of person best suited to this kind of challenge? Or is it something the average DBA or sysadmin or network administrator skill set could handle? Do you really need a sharp, pointy-ended brain, or can you pick this up pretty quickly?

Binh Chau: Sorry, so you were talking about the skill set of the person?

Dez Blanchfield: Yeah, so when you think about a database administrator, there’s a particular set of skills that you would need. So when you go out hiring a DBA, per se, for that specific role, when you think about the types of challenges that you were talking about here where you’re using a tool like this to keep on top of mapping and tracking databases, doing the discovery piece, and driving this particular tool, is there anything unique about the use of the tool and approach to this type of challenge, or is it something that the average DBA can pick up pretty quickly?

Binh Chau: I mean, I think your average DBA can pick this up quickly. I think it’s helpful to have this type of tool because, since it’s web based, you can also turn it around and give it to other users within your organization. You could give it to an app developer who can check on his specific database or server. It takes away some of the administrative things that a DBA has to do. Previously someone would call the DBA and ask, “Hey, is my server up or down?” Now they can get access themselves and see whether their servers are up or down.

Dez Blanchfield: And what sort of environment would an average organization need to deploy this? Does it need a dedicated physical server, or can it be done on a virtual machine? Can they deploy it in the cloud environment? What’s the general footprint for the deployment of the tool and just the general running of it? How much heavy iron does it potentially need to run in parallel to the other environments it’s mapping?

Binh Chau: Yeah, it can be run on a VM or a computer or a server. It doesn’t necessarily have to be a dedicated server; it just depends on how many servers you’re monitoring. If you have a larger environment, it may help to have a larger server, because it’s collecting a lot of data about the SQL Servers you’re monitoring.

Dez Blanchfield: Right. Is it the sort of thing you could comfortably run in the cloud instance and create a VPN back to your environment, or is the amount of data it’s collecting probably a bit heavy for that type of use?

Binh Chau: We haven’t set it up to run in the cloud yet. It should probably be run on-prem.

Dez Blanchfield: And last question, if I can: a lot of the tools that I’ve seen in this space – particularly in the scenario you mentioned where someone acquired a company, or there was a merger or something to that effect, or even an organization just merging business units – is it a sensible use case for somebody to deploy it on a laptop and take it into an environment to map a world as a one-off, or is that an unlikely use case? Is it more the case that it’s going to be installed and just permanently left to run?

Binh Chau: This specific tool is more of an install-on-a-server-and-leave-it-running thing. That way you can collect the information you need and keep, I guess, a running inventory of what you have. It’s unlike the Map tool, because the Map tool is more of a one-off: point it at what you need and do what you need to do with it today. The nice part about this one is that you can tag things and give people access to check up on the state of the particular servers they’re interested in.

Dez Blanchfield: Okay. Probably the last question for me and then I’ll hand back to Eric for questions that come through the Q&A window with the attendees, because we’ve had a good turnout today, one of my favorites. Just to wrap up: what’s the process to get your hands on this? I know a lot of your tools are available as try-before-you-buy. Where should people go to learn more about this online, whereabouts on the website should they look for the downloads, and what does the journey look like to do a proof of concept or a trial, get your hands on it, become familiar with it, and then get in touch and buy it?

Binh Chau: Yeah. You can go to the website and you can download a two-week trial for free. And if you like it and you want to reach out to us, we can also schedule a demo with one of our engineers to do a deeper dive into the tool.

Dez Blanchfield: Fantastic. Well, thank you very much for that. I appreciate the time to chat with you about it and, based on my personal experience – and I’m sure I speak for Robin and his lifelong experience here – I think it’s a given that something like this is a requirement nowadays. We can’t do this manually anymore no matter how hard we try; the scale is just too large and things are moving too quickly.

I highly recommend people do exactly that: jump on the IDERA website and get a copy to play with. Because, from my own experience and the anecdotes I shared today, things can go from very bad to very good quickly if you’ve got the right tools, but they can also go the other way if you don’t. Eric, back to you.

Eric Kavanagh: Yeah, let me just pop one last question over to you, an interesting one. I’m just curious to know what you’re seeing out there – the cloud is obviously ever more important these days, with Amazon Web Services, but they’re not the only ones; Microsoft has its whole Azure offering that seems to be gaining steam. One of the attendees writes that Dr. Bloor made an interesting point that DBAs are expensive, and asks whether the management problem caused by a rogue DBA, or someone who’s not doing what they should be doing, can be solved by migrating to the cloud. I’m really just curious how much activity you’re seeing. Do you see migrating to the cloud becoming a bigger issue for businesses, or what’s your take on that as a trend?

Binh Chau: I feel like it just depends on what kind of industry you’re in. Some industries say, “No, we’re not migrating.” They may not be migrating to a public cloud; they may be looking at migrating their stuff into a private cloud. But then I see some organizations that are interested in really getting on the fast track and going toward Amazon or Microsoft Azure. And then there are some people saying, “No, we’re not migrating our data,” or “There’s only certain data we would migrate, but not our critical data.” I think there are kind of three camps.

Eric Kavanagh: Yeah, that makes sense. I mean, we’re seeing that more and more, and I think it’s going to move in fits and starts for quite some time. And there is a backlash to the cloud too. People get onto Amazon Web Services – we’ve heard this more than a few times – and at first the costs are manageable, then over time it just creeps up and you’re kind of stuck there. In many ways the cloud is just another data center, but it’s going to be an interesting journey going forward, to say the least.

Well, folks, we do archive all these webcasts. Hop online to check out a complete list of all the things that we do, and of course, for all the latest. And with that we’re going to bid you farewell. Thank you so much once again for your time and attention. Thanks to all of our friends at IDERA, and we’ll talk to you tomorrow, hopefully, for our Philosophy of Data culminating webcast. That’s right, Philosophy of Data is tomorrow at four o’clock Eastern. Hope to see you there. Take care, folks, bye-bye.