Eric Kavanagh: Ladies and gentlemen, hello and welcome back to the hottest show in the world of enterprise IT, Hot Technologies of 2016. Yes, indeed! My name is Eric Kavanagh, and I will be your host today for a show entitled “The Art of Visibility: Enabling Multi-Platform Management,” yes indeed. A few quick notes: there’s a slide about yours truly, admittedly from five years ago, and enough about me; hit me up on Twitter @Eric_Kavanagh. The year is hot; this is our standard slide for Hot Technologies. What we did with this show is we wanted a program that would help us define a particular kind of technology. The whole idea is that we get two analysts who come in and give their take on a particular space or a particular type of function that the enterprise needs, and then the vendor comes in, demonstrates what they’ve built and explains how it aligns with what you hear from the analysts.

And the reason for it, as you might imagine, is because in the world of enterprise software marketing, there are terms that get bandied about and what happens invariably is that vendors grab on to the latest hot term, things like big data or analytics for example, or even SOA or different terms like platform, and sometimes those words are very accurate for a particular technology and sometimes they’re not. This show was designed to really help us articulate for you, the audience, what specific kinds of technologies do, how they work and when you should apply them.

With that, I’m going to introduce our speakers. We’ve got our very own Dr. Robin Bloor, calling in from his Austin, Texas location, Dez Blanchfield, calling from the other side of the planet, and our guest Scott Walz calling in from Kentucky. And yours truly, I’m actually outside of Pittsburgh, so we’ve got a fully geo-located organization today from multiple different places. With that, I’m going to push Robin’s first slide, feel free to ask questions by the way, folks, don’t be shy. You can do so using the Q&A component of your webcast console. And with that, I’ll hand it off to Dr. Bloor. The floor is yours.

Robin Bloor: Okay, thank you for that introduction, Eric. Let me just get to the first slide. This is a collection of meerkats thinking about databases. The whole presentation I’m doing here is really just a general set of thoughts about databases that I’ve had recently, the point being that around the year 2000, it seemed like the database game was over, in the sense that the vast majority of database implementations were occurring on relational databases. And then it just changed, you know; all of these things that the meerkats are thinking about – column stores, key-value stores, document databases, in-memory databases, graph databases, and a whole lot more – suddenly emerged. It was almost like a new geological era in which fossils of different kinds of animals suddenly appeared.

The news from Lake Wobegon, it’s really over for the single model database. There’s no doubt that RDBMS still dominate, but other kinds of databases are now established. Really, that’s pretty much the overview of what I’m going to say here.

The dimensions of database: some of these have actually become more important recently, but the ones that I could think of when I did this slide were these. Does it scale up, in terms of efficiently using the resources of any given server? Does it scale out, so it can go across large clusters? Does it exploit the hardware available – the kind of thing in-memory databases are doing? Is it distributable? There are a number of databases that major on the ability to distribute. What kind of characteristics has it got – the fundamental ACID characteristics of the database? Now, instead of having actual consistency, a number of databases have eventual consistency; people use them and don’t have a problem with them, so they’ve demonstrated that ACID wasn’t absolutely necessary, just a good thing to have in a lot of situations.

In terms of metadata organization, the whole game has changed. We’ve got different metadata organizations rather than a typical RDBMS schema. In terms of the optimizer, there’s an awful lot of optimizer activity going on, depending upon the data structures that you’re trying to optimize. In terms of manageability, there’s a lot of variance in this that I’ll come on to later, but basically the whole point of a DBMS is that it’s manageable, and again the extent of its manageability to some degree determines the extent of its usefulness.

In terms of hardware factors, there’s really only one point being made here: whatever we’re looking at today in terms of database architectures is going to change. It may be the same databases, but they’re going to have to, in one way or another, take account of what’s actually going on at the hardware level. For many, many years we had the relatively simple situation of CPU, memory and spinning disk – well, that’s gone, really.

The point here: first of all, we’ve got CPUs, but they have way more parallel capability than they had before, with many, many different processing cores. We’ve also got GPUs, we’ve also got FPGAs – different kinds of silicon – but Intel has married an FPGA with a CPU in its next release, and AMD has married GPUs and CPUs together on the same chip. You’ve got chips with different characteristics. The advantage of a GPU is that it’s really great for heavy parallelism, particularly with numeric calculation. With FPGAs you can, in one way or another, put the code on the chip itself, and it runs far faster than if you’re just feeding instructions to the chip.

There’s a cross-breeding of these things happening. We’ve got 3D XPoint from Intel and PCM from IBM, which are new types of memory that are slower than RAM and less expensive than RAM, but non-volatile. These are creating a little bit of excitement amongst a number of software vendors that I’ve talked to. We’ve got SSDs, but now they’re getting very, very large and they provide parallel access. With parallel access to a very large SSD you can approach read speeds similar to RAM read speeds. We’ve got this possibility of three types of storage: RAM, the 3D XPoint stuff and SSDs, all of which will be going extremely fast. And since speed is the essence of database, all the database technology is going to try to leverage these as fast as possible. That’s going to involve, and has involved, parallel architecture – scale-out parallel architecture. Hardware-level performance is accelerating all the time, has done for many years and continues to do so, and the general costs are falling.

Trail of Tears. This is just the different attempts at databases: the first databases, before relational, were generally referred to as network databases; then came relational databases; then came object databases, which didn’t get a great deal of traction; then came the column-store databases, which were relational databases done very differently. And then we had the document databases and the NoSQL databases, which were object databases done differently – or, if you like, the same kind of thing as object databases – and this time they caught on. And recently we’ve had graph databases gaining traction, and RDF databases. What you’re looking at there is at least three different sets of data structures being accommodated. The relational database does tables and rows very well. The document database and object databases do awkward data structures, particularly hierarchical data structures, very well. And graph databases and RDF databases do network data structures very well. These different – I think of them as three lines – these lines are going to continue indefinitely. It’s not going to stop, because the engines that do one of these things well don’t work on the other data structures particularly well.

And then we’ve got the spoiling factor of Hadoop. Hadoop’s not a database, but there are databases that use HDFS for their storage structure, and a lot of the things Hadoop does are the kind of management things that need to be done for a database. Also worth mentioning that Spark isn’t a database either, but it does have – and it’s immature – a SQL optimizer, and therefore it’s like the kernel of a database without necessarily knowing where you’re going to store the data; but if you stick it on HDFS, a lot of the database requirement is actually met simply by the capabilities of the underlying file system. Spark in particular has become part of the database ecosystem, and it’s often federated with more powerful databases, and the reason for that really is analytics. Spark goes very, very fast at analytics, and analytics is the prime application that most people are investing in right now, so the two walk kind of hand in hand.

Data federation, rather than concentration, rules. It kind of should be obvious from the fact that you’ve got at least three differently structured kinds of databases out there; therefore you need data federation if you want to share the data between them. It’s often necessary, but you’ve also got databases that scale out and databases that don’t. Really powerful engines like Teradata or Vertica have a very particular place, but there are lesser engines that can do an awful lot of the work, so federation is likely to be there for a long, long time, even between relational databases.

The final thing to say, the IoT, it ain’t over until the fat lady starts disgorging data. The IoT may well create in one way or another different dynamics in the database world and that will complicate things even more. Hopefully, there’ll be – in one way or another – there’ll be some kind of convergence that goes on, but I don’t see it all coming together like it did with the relational databases. Not any time soon anyway.

And I think that’s all I’ve got to say, so I’ll hand it over to Australia.

Dez Blanchfield: Thank you, Robin. Thanks to everyone for joining us, thanks for having me this morning, or this afternoon your time. This is a really hot topic, because we’ve experienced quite an explosion in the last decade and a bit in the amount of data that we’re having to deal with, and invariably that data sits within some form of system, which in most cases is a database of some form. I thought I’d quickly take us through a very high-level walk through how we got here, the problem that’s been created and the types of things we need to address now, and then we’ll talk about the types of solution that can be applied. Let me just grab hold of my first slide here. I’m of the view that we’re now at the point of DB admin 2.0, or database administration 2.0. Once upon a time a database administrator was a fairly straightforward role and challenge, and you could train someone pretty quickly. In today’s world that is no longer the case, and I’m going to show you why that is so.

Once upon a time, a database administrator would be able to connect to the DB back end and do a quick show databases, and there’d be a list of databases in the system that they had to be aware of. They could very quickly get across those databases, select them, have a bit of a poke and a probe around, and run a describe table to find out what was in a table and each of the columns and rows. It was a relatively straightforward challenge, and if you read the average two- or three-hundred-page book on database administration for each platform, you could almost teach yourself without having to do a rocket science degree.
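The commands Dez describes amount to only a couple of lines of SQL. As a rough sketch of that era’s workflow, here is the same idea using Python’s built-in sqlite3 module, where the catalog query and PRAGMA stand in for MySQL-style show tables and describe table (the table and column names are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the DBA's server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# Rough equivalent of `show tables`: list the tables in the catalog.
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['customers']

# Rough equivalent of `describe customers`: column name and declared type.
cols = [(name, ctype) for _, name, ctype, *_ in conn.execute("PRAGMA table_info(customers)")]
print(cols)  # [('id', 'INTEGER'), ('name', 'TEXT'), ('city', 'TEXT')]
```

With one server and a handful of tables, this really was the whole job; the point of the rest of the talk is that this loop no longer scales across dozens of platforms.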

But that’s no longer the case, and the reason for that, in my mind, is that there are just far too many options in the database world for any one person to be an expert or a specialist in, and to be able to manually manage and administer. And the reason for that is that over the last four to five decades, when it comes to the world of servers and database systems and database servers and application suites, we’ve come a very, very long way. Once upon a time we had big iron dealing with what was effectively small data – laughingly small when we look back now. I saw a really neat photo on Twitter the other day, of this amazing lady who was the lead programmer and developer for NASA at the time when we were putting men on the moon, and her code, printed out on one-hundred-and-thirty-two-column line-printer paper and fan-folded, actually stood taller than she was.

And when I thought about it, I realized that’s probably about two or three hundred megs of data at the most, if not less, that she had to type in. And so the total amount of data needed to hold her code, even though it physically stood taller than her when it was printed out on paper, was actually very, very small. Even these massive room-sized computers – and this is an IBM System/360 in this particular slide – could actually hold only a tiny amount of data compared with today’s world. In fact, our smartphones hold 64, 128 and 256 gig, and we’ll soon have terabytes in our phones before long, when the price of flash comes down.

And so at that time and that era, database administration was quite straightforward. Here’s a snapshot of a 3270 terminal session, and for a DBA, being able to log in and have a look at the number of files that were related to the database, and the indexes that were there and the rows and columns were straightforward. And you can see here in this screenshot, that the context of this is one table and a number of table spaces, that would have been the entire mainframe managing one database table. Whereas today, we hold billions of rows of records in database systems. And the change came about through a shift in technology that allowed us to build database platforms and data management systems.

If we think about the original mainframes and mini computers running databases, and eventually relational databases, fifty-plus years ago, with that big-iron world and the small data sets we had: by the time we got to about the eighties, we’d gone from the mainframe to the mini to the micro, and we had PCs running things like dBase II and dBase III on DOS and CP/M, and we had very early relational-database-style technologies available, and they scaled quite well compared to what we were used to on the mainframe. By the time we got to the nineties, we had the likes of [inaudible] and Oracle and DB2. And in the late nineties we had people gluing very, very big, cabinet-sized machines together in a network model, taking the likes of [inaudible] and building these clusters of computers. But even then, it was still small compared to what we see today.

But in the slide that I’ve got up here, this is a Hadoop cluster; it effectively acts like one machine, essentially just a really, really big computer, and it can hold the types of web-scale data we’re used to now. And so the challenge of database administration, database management, on those types of platforms has indeed become, in my mind, rocket science. You’ve got to be an extremely clever character to be able to understand the technology it runs on, the platform it runs on, the data that’s in there, and the types of uses of that data. And yes, we saw this explosion from the early 2000s, where we had Microsoft SQL Server become a thing, Lotus Notes was quite well established and out there – and the number of Lotus Notes databases that crept around the place was quite frightening – and we had the usual incumbents of Oracle and DB2 really starting to take hold. Some of the brands like [inaudible] were starting to fade out. But we were still really just doing traditional database administration right up to that point, round about that 2006 era where, if I go back to that image of the cluster, we had what we called Beowulf clusters become a thing, where we could take off-the-shelf PCs and glue them together and make, in effect, supercomputers.

But from about that point onwards, we crossed a tipping point past which human beings were no longer able to do old-school database administration, and – as I say, in my view – the scale became very, very big, very, very quickly. It’s almost as though we had this big bang event in technology that drove the adoption of data technology and data management technology, and in particular the databases around them, because we were in effect building high-performance compute-style clusters to host data in different forms. And to punctuate that point, here’s a snapshot of the landscape as of 2016 of database technologies that are available to us: ranging from open source in the bottom right-hand corner, all the way to infrastructure in the top left-hand corner; application solutions in the top right-hand corner; and in the bottom left-hand corner, a mixture of the infrastructure and performance engines that do analytics, and so forth. And in the middle there are devices like our smartphones, of course, which do actually run very small versions of databases to do things like manage our contacts, our call logs and other things that we have.

And so in my mind there was this explosion, kind of like a Cambrian explosion, where the amount of technology development that took place in that very short period from about 2006 to 2016 – effectively a decade, as it were – has been enormous. We’ve now seen graph databases become a big thing, in-memory databases become a big thing, and NewSQL databases coming along. There was the move to different computing models: Hadoop came about, we had the MapReduce model, now we have Spark and streaming analytics and streaming compute, resilient distributed datasets, frameworks that people have to develop against to get to the scales that we need. And when we think about that journey, we’ve gone from the relational database management systems with the usual suspects – Oracle, PostgreS, Sybase, IBM DB2, MySQL and the Microsoft SQL Server platform – and we’ve seen some new kids come on the block now: Clustrix, Xeround, NuoDB, MemSQL, and there are dozens and dozens more, as you saw on that slide before. If you can imagine the challenge of having to know all these platforms, know how to run them and get the single-pane-of-glass view that you require to be a DBA, the challenge is far from trivial. And then all of a sudden along came the NoSQL engines, which are a whole new breed of fun challenge.

And so the final slide I have here is sort of the ultimate one-two-three knockout punch, and that is that we’ve taken some of these technologies now and created a service capability for them. We’ve put them into cloud models and they are now available as a utility, as a service – you can basically get database as a service. The usual brands that we see there, Amazon Web Services and Google’s Cloud Platform and Microsoft Azure, are the ones that come to people’s minds, but there are actually dozens and dozens of cloud platforms now. In Australia, for example, there are something like one hundred and twelve companies that are bona fide large-scale public clouds offering database services in various forms.

To think about the challenge that the average DBA has to get out of bed, go to work and cope with now is quite mind-boggling. I’m very much of the view now that, like many things in life, we’ve scaled both horizontally and vertically: the infrastructure has scaled in a very horizontal, near-linear growth model, and the complexity of the stack in a vertical sense – the number of database platforms, the number of application frameworks and models we have to deal with – has gotten well beyond what humans can cope with in a single-pane-of-glass view. We’re at the point now where database administrators need a whole new set of tools to be able to talk to all these platforms, manage them, administer them and support them, and I believe that’s the entire topic of our conversation this morning, or this afternoon your time. With that in mind, I’m going to hand over to our guest, who will talk a lot about their product and how it’s going to address the challenge.

Eric Kavanagh: Alright Scott, I’m going to hand—

Scott Walz: Thank you very much, alright, thank you. Thanks Dez, thanks Robin, and thanks to all for joining and having me on the call today. I want to thank Robin and Dez for taking me on a walk down memory lane; having been in the space since the early nineties, you brought back a lot of good memories. The memory that I didn’t see on any of those slides and pictures was the punch cards. That was the very first thing introduced to me when I started my first job out of university: my coworker in the cube next to me told me not to touch his punch cards. So, yes, absolutely, it has indeed been a challenge, and a challenge that we have been helping our customers address since the mid-nineties, and that’s the product I want to talk about today. Let’s take a look at multi-platform management – and this is only a subset. I chose a graph, but as Dez put up—

Eric Kavanagh: You’ve got to share your screen.

Scott Walz: Oh, I sure do, thank you.

Eric Kavanagh: No worries. And folks, don’t be shy, ask questions, we’ve got three smarty pants on the call today, so send them the hard questions. You can use the Q&A component of your webcast console or you can tweet with the hashtag of BriefR. Okay, Scott, take it away.

Scott Walz: There we go, thank you. I grabbed this slide, and this image. The image from Dez really blew me away, because that’s really the world we’re living in today, and the world that DBAs are performing in. And as they mentioned, you really struggle to be able to do this with just brute force any longer. You really need the tools, and that’s where we come into play. We’re seeing that whole switch, the momentum change: early on things were very siloed, as you mentioned, and then we went to working with multiple database platforms – that was our first foray into the tools – and then it swung back after the year 2000, when things constricted a little and organizations wanted to consolidate. But then it came back, and it just really blew up when all those new platforms were introduced. And now, instead of being pigeonholed into a specific platform or a specific technology, those organizations are finding out what’s best. What is the best application database, what’s the best platform to use? With that said, I want to walk you through a little bit of what we do with DBArtisan. DBArtisan has been our flagship product, managing, as it says, cross-platform environments for over 20 years, and this is where we live, where we like to emphasize working with our customers and giving them the tools to make them productive and performant.

Let’s go ahead and I’m going to hop right in. I’d rather show the product than go through slides, and I think you’d probably prefer that too. For those of you who have not seen DBArtisan before, we’re looking at the console, and I think Dez used the term “single pane of glass,” and that is something we pride ourselves on: giving the DBA a single look into all of their platforms. No need to open up any other application; we’re going to connect, get you in there and start working with the platform. Looking at the database explorer on the left, we can arrange this as we see fit and organize it however we like. You’ll see I have a mix: I have some of my Oracle servers, I have MySQL, I have PostgreS in here, and I also have one labeled production servers that includes some of my SQL Server environment. Again, we can see right there that we’ve got a good fit. If I look at registering a new database, you’ll see the platforms we support – SQL Server, support for Teradata, Apache, PostgreS – and here are the generic connections we support.

If we have a JDBC or ODBC driver to any of the platforms, we’re able to connect, give you a connection and let you work with the platform right from within DBArtisan – again, letting you focus on the job at hand, not on how you’re going to get it done. I won’t walk through all of that, but I do want to show a few things about the product. In that case, let’s open up and deal with Oracle, for example. This is just my little landing page here, but I want to go and take a look at some of the schemas that I work with. We’re going to pull in one of the larger schemas, so again, we’ll bring back the list of tables. In this case, I’m going to open up a table; we’ll just select it, and it’s going to come up in our object editor.

Now, Oracle is something that I’ve worked with for years, and what I’m going to show you is probably an easy statement for you. But if Oracle is the platform, or PostgreS is the platform, or Teradata is the platform that you’ve just been given and you need to come up to speed on, say the task at hand is to add a column. Or maybe the task at hand is to delete a column. But you don’t want to have to worry about the syntax, right? We want to just type what we need, set it up, and let DBArtisan generate the code. Here, we’re going to press “Alter.” It’s going to generate the script for us. Again, a very simple example, but the point is that it’s going to do the work for us to generate and place this column into the table.
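The script generated for a change like that ultimately boils down to a single ALTER statement. As a minimal sketch of the kind of DDL such a tool emits, here applied through Python’s sqlite3 (the table and column names are invented for illustration, and the exact syntax varies per platform, which is precisely why having the tool generate it helps):

```python
import sqlite3

# A throwaway table standing in for the schema shown in the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# The kind of DDL a tool generates when you add a column through its editor.
conn.execute("ALTER TABLE employees ADD COLUMN hire_date TEXT")

cols = [row[1] for row in conn.execute("PRAGMA table_info(employees)")]
print(cols)  # ['id', 'name', 'hire_date']
```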

What we can also do, though, is move columns around in the table. If you’ve ever tried to do that by hand, it’s a little bit more complicated than just a single line of code like this. But again, DBArtisan is going to work behind the scenes, generate the code for you, and again produce the SQL. We’ll close out of here. Before I do, notice all the tabs across the top; again, the user interface is very intuitive. If I come into the explorer and hop down to PostgreS, right? If I go into my schema mode there and look at a table – very similar look and feel, right? We’ll open this up, and again we’ll see the information here: the properties, the ancestors, the columns. We’re specific to each platform; we’re going to give you the user interface to be able to display this and work with the objects. You’re going to know what you need to do, and it’ll enable you to do it in an efficient and timely manner, so you don’t need to worry about exactly what clause needs to go there in order to provide that option. We’ll take care of that for you.

Also, I’m going to pop over to SQL Server now and talk a little bit about some of the other features. We all need to monitor the database. So again, start it up; let’s see all the sessions that are occurring, the sessions that are running. How are we going to see what statements are being executed and have control over that? Do we need to stop a session? Do we need to see any locks that could be on the database – any blocking locks? Again, we have all that information right here at our fingertips, in order for us to quickly react, take corrective actions if needed, and turn it around. We’ll come back over to our explorer. This is the driving point, where I always come back to; it’s where I personally like to get things started and work from. While I’m connected to a SQL Server database, let’s look at the utilities. Because we’re cross-platform, we can start looking at extractions and migrations. If we need to migrate objects from one platform to another, we can do that, provided those objects exist on the different platforms. Extract the schemas, publish to reports, load and unload data, and back up databases.

Again, all of that from within the UI. And coming over here to the tools, you can see a complete set of tools that we can operate from, right? With “Find in Files” we can do a complete database search, where we look inside the system tables to find the string you’re searching for. With “Script and File Execution,” if you have a standard statement that can be executed against multiple platforms, multiple data sources, we can set that up right from within DBArtisan, pointed at the targets we want it to execute against. Press “Go” and it will run and bring back the results from all of those target data sources – again, letting you work from that single pane of glass.
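The script-execution idea described above – run one standard statement against many registered targets and collect the results per source – can be sketched in a few lines. In this illustration the “data sources” are three throwaway sqlite3 connections rather than real servers, and the names are invented:

```python
import sqlite3

# Stand-ins for the registered data sources; real targets would be server connections.
targets = {name: sqlite3.connect(":memory:") for name in ("dev", "test", "prod")}
for name, conn in targets.items():
    conn.execute("CREATE TABLE orders (id INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(len(name))])

# One standard statement, executed against every target, results gathered per source.
script = "SELECT COUNT(*) FROM orders"
results = {name: conn.execute(script).fetchone()[0] for name, conn in targets.items()}
print(results)  # {'dev': 3, 'test': 4, 'prod': 4}
```

The value of the single pane of glass is exactly this loop: the DBA writes the statement once and the tool handles fanning it out and gathering the answers.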

And the “Analyst Series” – again, those are more in-depth. Those are geared more towards relational databases; as we start getting into more of the newer platforms, you’ll start seeing us expand this functionality into those arenas as well. And in general, there are just a lot of user interface enhancements, features geared specifically for the DBA – items such as the script library. Those SQL scripts that you execute often against multiple platforms, you save here; as soon as we get a new ISQL window set up, we can just drag the script in, and we’ve got the script ready to go. Again, having that at your fingertips to be able to work and manage. You’ll notice that we deliver with scripts already defined for some of the platforms, and we can go ahead and create as many as we need at any time.

A nice thing that I like, and a lot of our customers do – and I get this question a lot: “How do I do that? That’s pretty cool. How does DBArtisan do that?” There’s a little feature right here, “Logfile.” You can log all the SQL statements that we execute, so if you want to know how we populate that explorer, or how we populate the editor for a PostgreSQL table or a Teradata table, log the SQL and we will record everything that DBArtisan executes against the database. You can come back, look at that SQL and have everything that we ran. Maybe you want to incorporate that as part of one of your scripts. Absolutely. Totally fine.
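That kind of statement log is easy to picture: most database drivers expose a trace hook that captures every statement a tool runs. As a sketch of the idea only – not how DBArtisan itself is implemented – Python’s sqlite3 offers Connection.set_trace_callback, which plays the role of that logfile here:

```python
import sqlite3

logfile = []  # in a real tool this would be an actual file on disk

# Autocommit mode keeps the trace limited to the statements we issue ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.set_trace_callback(logfile.append)  # record every statement as it runs

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")

for stmt in logfile:  # the captured SQL can be replayed or folded into your own scripts
    print(stmt)
```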

We like to be very transparent about what we’re doing and what we’re executing against the database, hence we’ll let you save and record anything that we apply to the database. We have configuration options as well. You’ll notice I have it set up to organize by object owner; I can also set it up by object type. If I came into my PostgreSQL environment again and went into the schema, instead of just the tables belonging to that schema I’d see all of the tables, regardless of the schema names. Again, different ways to organize things that really customize it for your own workflow and how you’d like to see it.

And the last thing I want to talk about is the ability to set bookmarks. If I drill in – if I’m working in one of my platforms and I want to focus on just my tables node – I can add a bookmark. I know, a very simple feature, but so nice to have, especially when you’re working with as many data sources and as many platforms as today’s DBA is. To be able to come into the system, start up DBArtisan and let the bookmark manager take you right to the spot in the tree where you need to be. And from here I could create a new table – again, on the platforms that we support that you saw earlier – and we’re going to walk you through a wizard to let you drive and develop and create the table. We’ll generate all the syntax needed behind the scenes and then present it to you at the end in a preview pane. You get to validate it, see exactly what we’re going to generate. You can hit the “Execute” button, then the “Finish” button, and let it execute. Or you can save it, or push it off to another ISQL window – because, again, maybe it needs to be part of a larger script that you want to save and deploy during your batch window hours.

That is an overview of DBArtisan. Again, it is a product that supports a lot of platforms, with a great user experience and great feedback from our customers as well. If you’re interested, I’m listed as one of the panelists, and if you need to find anything IDERA-related or DBArtisan-related, feel free to reach out; you can certainly find me at my email address.

Eric Kavanagh: Alright, I guess I’ll throw it open to Robin for questions and then Dez and then I’ll be monitoring the Q&A from the attendees. Robin, take it away.

Robin Bloor: Okay, well, I’ve actually been familiar with DBArtisan for quite a while, so I’m kind of aware of its capabilities. What I’d be interested in you addressing is its future path from here. The last time I looked at it must have been a long time ago, and I see that you’re supporting at least three databases that I didn’t realize you supported before. What is the forward path for DBArtisan? Is it likely that you’re just going to add more and more databases, or is it a feature-extension thing? Where are you intending to go with it?

Scott Walz: That’s a great question, and I’d like to say all of the above. We certainly are going to continue to build out, because the traditional RDBMS platforms aren’t sitting still, right? They’re continuing to build out, and we will continue to follow that path. And then you’ll see us start looking and going in the direction of supporting net-new platforms. We recognize that even though the traditional RDBMS platforms are continuing to grow, there are certain situations where the new platforms are the right platforms for customers to go with. We really are keeping a close eye on that market, on that segment, and trying to make the right decisions on which platforms to go with. They seem to be changing every day, practically.

Robin Bloor: Well, as both I and Dez were saying, it’s a very lively market, is possibly one way of looking at it. Another thing I’d be interested in – obviously you’re not going to be able to answer this question in precise detail, but I’ve come across sites in my time where there were a thousand instances of Oracle being deployed, and Oracle wasn’t the only database being used, you know. And when I actually talked to them about how on earth you manage that many instances, they said, “Well, you know, there are only about five or six big instances and we’ve got about three DBAs we spread across that.” I’m kind of interested, in terms of using DBArtisan, because you can do an awful lot with it: how many databases does it sit over, let’s say typically, or even, what are the largest examples of how many instances it can manage at once?

Scott Walz: Well, I’ve seen situations – and again, that question is a little bit complicated, because DBArtisan allows me to have multiple connections or multiple data sources defined to a single instance. Maybe I want to do a sys login and then a lower-permissions login. But I’ve dealt with customers where, with everything collapsed, it runs to multiple screens. Now when I asked one of them the question that you’ve asked me, “How do you manage that many?” he said, “I don’t. I manage what I can, but I need access to everything.” I’m yet to see anything that stops them; the upper limit of what people can manage is really the upper limit of what that individual can handle. But, as I mentioned, the people I’ve challenged with that question openly admit that they have all those connections but there’s no way they can manage them all. They rely on their team. As I’m sure you’ve experienced, yeah.

Robin Bloor: Well, I’ve actually been a DBA myself, although I didn’t do it for very long. And the one thing that I remember, above and beyond anything else in relational databases, is that you can do a massive amount of things with SQL. Often more than you think you could. Which in one way or another explains some of the functionality that DBArtisan’s got, because it just translates directly into SQL. But I’m sure you do other things. Is it all SQL scripting, or are there other special routines that have been written for esoteric situations?

Scott Walz: Yeah, the bulk of it is SQL; that’s just the nature of it. But we do write routines that can be run from a command line using the vendor’s tools, the vendor’s front ends. We’ll put front ends onto, for example, the data-load utilities in the platforms, right? Those aren’t SQL scripts; those are command-line jobs. DBArtisan will generate those and be able to give them to the DBA, who can then execute them. So yeah, we’ll do a little bit of both, but the majority of it is SQL scripts.

Robin Bloor: Obviously you must, in one way or another, take a look at the developments that are going on that I regard as fairly new. One of the things that I find interesting is that Spark is obviously taking off like a rocket, but Spark SQL has gone from being horribly immature to starting to look a bit more mature, with a bit more SQL capability. Do you look at things like that and wonder whether you’re going to start managing those with DBArtisan?

Scott Walz: Certainly, and I do. That’s always there. I know our product management team is always looking at where to go, and absolutely, everything’s on the table for us with regards to what we’re looking at in the future.

Robin Bloor: Okay, Dez, do you want to pile in?

Dez Blanchfield: Yeah, actually, there’s a bunch of great things you opened the door to for me there, Robin. Thank you very much. I’m keen to explore some of the things that jump out at me when I look at products like this, and I get very excited. When I double-checked my homework – because, as Dr. Robin Bloor mentioned before, he’s, as have I, been tracking this for some time – I remember looking at your spec requirements the other day and thinking that this thing runs very lean for what it actually does. And I think from memory – correct me if I’m wrong – even as little as a laptop’s performance would comfortably run DBArtisan, and yet it was capable of running against some pretty significant database back ends. I was quite interested to see you have Firebird as well now, and Greenplum. And I was quite impressed with the hardware requirement: it could quite literally run on a gig of RAM on a one-gigahertz CPU. That was pretty impressive.

But the use cases are something that I want to delve into just a little bit. Are you seeing the uptake of the product being a case of need, because of existing environments that have just gotten out of control, or are you seeing people now being a little bit more proactive and saying, you know, we’re building something very big and it is complex? And I’m thinking about mergers and acquisitions, for example, where an organization might buy a bunch of firms – small, medium, large, whatever – and end up inheriting all these environments and having to build a new DB capability. What are the typical use cases for this, as far as the type of organization and the type of application? Is it predominantly people who’ve got existing environments and have to just clean them up and get control of them, or are people being a bit more proactive, thinking about the complexity they’re about to build and getting you on board early?

Scott Walz: We’re seeing more of getting on early, for the very reason you mentioned: the consolidation. With the breadth of platform support that we have, it’s not total future-proofing, right, but it’s putting you and your DBAs in a really good situation, in that when they do look at a potential acquisition target, the thought of what platforms we could be inheriting is a little bit less of a worry, right? Though it’s important, the concern there is a little bit less about what it’s going to mean to our DBAs. The DBAs have a product now that they know they can connect with, and if they’re familiar with using the product, they’re going to be familiar with connecting to that platform that they’ve just acquired. So that’s certainly an area that we’re seeing. And again, there are the long-time customers with that mash-up of all those platforms, right? How am I going to get my hands around this? And they’ve tried it, because the thought process is, each of the platforms has a tool, right? We can use each platform’s own tool. But it eventually comes back that, you know what, yes you can, but not only am I going to have to learn each of the platforms, now I’m learning each one of the tools that go with each one of the platforms, and so you’ve just compounded the job of a DBA. So we’re also seeing the situation where they’re coming back to us and saying, “You know, we need to get our hands around this. Let’s get one tool for the DBA, because I’ve got more important things for the DBA to do than to learn the UI of a new tool. Or different tools.”

Dez Blanchfield: Yeah, definitely. And when I looked yesterday, just to double-check I wasn’t wrong, I remembered you’d supported Sybase, for example, so this thing’s been around for a little while. There’s another question I had for you, actually – it’s great to have Greenplum and Firebird on your list, but your Sybase support dates it a little; that shows that it’s been around for a while and done a good job.

Clusters. So, one of the biggest headaches for a DBA is that they will point at essentially what looks like an IP address and a bunch of APIs, whether it’s JDBC or ODBC or whatever we might be talking to, but behind that there’s a cluster. There are two parts to the question, maybe. What can, or does, DBArtisan know about what’s behind door number one, as it were? When I plug into the database back end, do I get to see all the environments behind there? The cluster, for example: you support IBM DB2 and Microsoft SQL Server and MySQL and PostgreSQL and Oracle and some of those traditional RDBMSs, and invariably we run a master-slave or master-master environment for redundancy and high availability, and also performance. Does DBArtisan know that there’s something behind door number one that’s not just one database per se, but a cluster, and if so, what does it know about that? And to flow into that quickly so you can answer it as the same question: behind the clusters in some of the scenarios you’ve got, how are people coping with the mix between production environments and disaster recovery environments, as far as DBArtisan’s use goes?

Scott Walz: Great questions. I’ll grant you that it’s going to be contingent on the specific platforms, because as much as we try, we’re going to have different levels of support for some of those in-depth, deeper-down features. For Oracle, for example, and their RAC environment, Real Application Clusters: you can connect to the primary node in that cluster, but going through the database monitor that I showed, we’re going to let you see the SQL running and we’re actually going to tell you what node of the cluster it’s running on, right? That lets you see exactly, say for a slow-running query we want to keep an eye on, what node is it running on? Because inevitably the whole point of the cluster is that the end user doesn’t care where it’s being executed, but as DBAs we need to keep track of that type of information. We’re able to go down to that level of detail in Oracle, for example. For the other platforms that we have connectivity to, there’s probably not as much detail as we have for Oracle.

With regards to the production and the development environment, that’s a good question. We’re giving the same level of support. The primary way that we’re going to assist is that the connectivity layer’s going to be there, right? We’re going to be able to connect and do all the features. I have customers that are utilizing some of the features in DBArtisan to categorize their data sources, right? And again, this might be a little bit off from the exact question you’re asking, but we’re going to enable them to graphically denote data sources as they’re working. Because that’s one of the things about DBArtisan: I can quickly change between data sources. And the next thing you know, I’m getting ready to run a truncate statement and I’m looking to see what I’m connected to – did I just run this against production or development? So we provide some features within DBArtisan to help the DBAs manage that and keep them out of trouble, if you will, with some of the DBA activities.

Dez Blanchfield: With that in mind, on the long list of platforms you currently support – and I’m sure that will explode very soon for obvious reasons – you support the likes of, say, DB2 on z/OS on mainframe, and then obviously you support what we used to call midrange but are now just UNIX systems, and more modern platforms, you know, Linux, and then eventually it’ll get ported to the likes of Bluemix and Cloud Foundry, so you’ll end up with DB2 running on Cloud Foundry on Bluemix, with IBM and the cloud on soft [inaudible]. Are people using it not just for the management and monitoring, but also, as you mentioned before, for the ability to migrate and move data around? Are you seeing people jump in bed with DBArtisan and say, “You know what, we’ve got a bunch of stuff on the old mainframes that we just need to get off, and it was a real hassle to do that. If I can point, click and drag from here to there, I can actually move and migrate my data and my schema”? Is that a thing that people are doing?

Scott Walz: They are indeed moving, right? They’re moving the data off, and they’re using DBArtisan as a tool for that. Is it doing everything for them? No. The drag and drop is not exactly there, but we’re enabling them to generate some scripts, because ideally you don’t want this job to be running on your client, on your laptop, for the very reason that you mentioned: we run on a very low footprint. We’re helping them generate scripts, and then they can deliver that script over and have it run on the server, right? And get the power, the horsepower, behind the server to do that. We’re helping them generate some of their jobs to do some of that work.

Dez Blanchfield: Right. A couple of last ones for you, and then we might circle back. The thing that’s really struck me, just going through your addendum, which is fantastic – and in fact, I wish we had another hour to go into more detail – is that a really big challenge for DBAs is basic compliance: overall governance of the infrastructure, the audits, reporting on current state, and prepping for things like general growth of the environment. It strikes me that even though at its core your product seems to just make life easy – that single pane of glass, single view of the world, where I can essentially click and point and drag, and I love the fact that I could train somebody to do this very quickly now, they don’t have to read the manual, as it were – the tool also gives me the ability to do a whole bunch of things around governance and compliance and audits. I’m wondering whether people have actually woken up to that; I’m sure they have.

But are you seeing folk now look at it and have this eureka, a-ha moment, going, “Hey, you know what, this makes the DBA’s life really easy from now on, or easier from an operational point of view or a development point of view. But gosh, we could actually just report on all of our databases now, and all the data sets, and all the contentless data, and all the metadata around it. Like, who’s got access, when they’ve got access, why they’ve got access, and what type of access they’ve got,” and then all of a sudden address some of the challenges around compliance? Particularly when we’ve got some really big things happening around data breaches. We’ve got some amazing things like the global financial crises; all these challenges are coming to [inaudible]. But how on earth are we going to measure and monitor and address compliance? Is that a big thing for people yet, or is it still, sort of, early days as far as applying DBArtisan to it?

Scott Walz: I have customers that can’t say enough about DBArtisan. Now those are the ones that have realized that; the light bulb’s gone on. They say, “Wait a minute. I can reply and respond and generate some of the very reports you mentioned, right, all from within one tool. I’ve got it.” Now there are others that have yet to catch on to that, and that could be for various reasons, right? They may not be [inaudible] yet, or maybe it’s being handled by somebody else. But for our customers that we’ve found are using it, that’s an a-ha moment, right? That, not only am I able to create a table, [inaudible] all this stuff. And absolutely, with all the compliance requirements, it’s huge. That’s a job in and of itself.

Dez Blanchfield: Well, indeed. And off the top of my head I’m immediately thinking, you know, if someone comes along and says they want to create a configuration management database, a CMDB, and they’re having to meet everything from Sarbanes-Oxley to COBIT to ITIL, you know, SWIFT compliance in banking, even going down to the likes of the International Standards Organization, ISO 27001, 27002 – it’s all these really big frameworks. One of the challenges is just finding where the data is, who’s managing it, and what format it’s in. And just watching it now, that eureka moment went off for me; it was like, hang on a second, I could throw this at even somebody who isn’t necessarily a DBA, but I could train them up quickly and say, “There’s a compliance tool.” I think it’s great that it does its job in the database administration and management world.

But I’m sitting here thinking, god, you know, the fact that you can manage multiple platforms as one these days, and you can dive right down into, as you said, logging the transactions that you do. Imagine taking this tool into a data breach incident: you’ve got your security team running around trying to find what’s where and why, and who’s seen what, and as they’re moving around, they have to log and track every action they take, because they may become part of the problem if they can’t. Yeah, I think there’s an incredible capability here that you could immediately start to use. Particularly when we look at the challenges of data audits, you know, we have this massive feature creep, as it were, with data sets and data.

And one of the things we’ve talked about in a couple of other shows we’ve done is, you know, how do you go and find your data? Often we talk about the fact that when you start in any organization, you tend to stand up in your cubicle, put your hand in the air, wave and go, “Does anyone know where this database is? How do I get to this data source? Where’s this file?” “Go and ask reception.” Right? Your tool can immediately provide that capability of finding things, discovering them and even reporting on them.

Back to one of the questions just briefly, and then I’ll wrap up and hand back to Eric. It strikes me that scale is going to become a challenge in the next, sort of, 12 months for you. Can you give us some insight, just from a thirty-thousand-foot point of view I guess, into the scale, or the range of scale, that DBArtisan can work at? I can imagine that when I put this on my laptop and I rock up and point it at an environment, I can discover it and start doing things on it. I imagine it goes from, like, a single little open source minuscule database engine with a few rows and tables. What scale would it go up to? You talked about DB2 on mainframes; that’s big. And clusters. What’s the range of scale that we can sort of cope with here? Robin sort of touched on that earlier, but I’d just like to get into it in a little more detail: how big can we get with DBArtisan?

Scott Walz: Sure. There certainly are going to be challenges, because it is a client piece of software. So, again, if I’m working on a mainframe – when I’m working against our test system on the mainframe that we have – I can point it against millions of rows and do a cross-join against millions of rows. All the work is going to be done on the server, right, because we’re passing that command, and then it’s just a matter of DBArtisan handling the result sets. And so that’s the challenge, and that’s the beauty of what we’re doing: most of the heavy lifting is being done on the server, and we’re just handling all the results. Of course, you get into situations where you want to run ten queries simultaneously that are all returning millions of rows – yeah, absolutely, you might find some performance issues there, right? But at no time do I have customers shy away from running big queries through DBArtisan against their database. Again, like I said, mileage varies depending on a lot of factors, but I’m dealing with millions of rows coming back, and as long as it fills up the grid, you know, I’m ready to go. But sometimes, obviously, I have to wait for the results to come back.

Dez Blanchfield: I have a question for you before I wrap up, because I’ve taken too much of your time, and thank you for that. Just tell us a bit more around – you know, I was reading the latest specs yesterday just to make sure I was across it as well as I thought I was – process monitoring and, sort of, alerting and notifications. Capacity planning brings up massive issues for DBAs, all day every day, you know. Is someone going to fill up this table, are they going to fill up the database, are they going to fill up the disk space I’ve got, how do I manage it? Give us a quick rundown on the process monitoring, and particularly monitoring alerts, and then ideally around capacity planning. I think that’s an area where there could be a lot of interest.

Scott Walz: Process monitoring is probably the feature that the majority of our customer base uses, and that’s the database monitor, to be able to show and do that. And we do have some in the analyst pack. Performance Analyst does have some alerts you can set up; when certain thresholds are met, it can alert you. Maybe X number of errors in the log file, you know, it’ll put out an alert for you. A tablespace hits a certain percentage full, you can get another alert. And the beauty of it is, you’re in the same tool, right? It’s a part of DBArtisan, so you just right-click on the error, the alert, and manage it with DBArtisan, and it takes you right to the tablespace editor. You can address the problem right there.
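Threshold alerting of the kind Scott describes, a collected metric crossing a configured limit and raising an alert, reduces to a simple comparison loop. This is a minimal sketch in Python; the metric names and values are made up for illustration and are not Performance Analyst’s actual configuration format.

```python
def check_thresholds(metrics, thresholds):
    """Return an alert message for every metric at or over its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value >= limit:
            alerts.append(f"{name} at {value}, threshold {limit}")
    return alerts

# A tablespace at 92% full trips its 90% threshold; the error count does not.
alerts = check_thresholds(
    {"tablespace_pct_full": 92, "log_errors": 3},
    {"tablespace_pct_full": 90, "log_errors": 10},
)
print(alerts)  # ['tablespace_pct_full at 92, threshold 90']
```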

With regards to capacity, absolutely, that is a hot button, and the Capacity Analyst that we have currently is ported to SQL Server, Oracle, DB2 LUW and Sybase ASE. And that does exactly what you described. Once we get some collections, right, and once we get a sample size – and maybe it’s row size, maybe it’s object count, there are lots of options within the tool – then you can start trending, right? What is it going to look like in six months? What’s it going to look like in twelve months? I can trend to a date, or I can trend to a value, right? In the example you had, I have X amount of disk space; based on that, when am I going to hit that limit? Based on the growth that I have and these collections that I’ve done, when am I going to hit that limit? At least I know I can start planning for that. Is it going to be six months, is it going to be two years? Again, we can use the Capacity Analyst to trend towards that.
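“Trending to a value” is, at its simplest, a linear extrapolation over collected samples. The sketch below illustrates that idea in Python; it is not Capacity Analyst’s actual method, and the sample figures are invented.

```python
# Given periodic size samples (day_number, gigabytes_used), fit a
# least-squares line and estimate the day the trend reaches a limit.
def days_until_limit(samples, limit):
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # not growing; the limit is never reached on this trend
    return (limit - intercept) / slope

# Weekly collections: 10 GB used, growing by about 2 GB per week.
samples = [(0, 10.0), (7, 12.0), (14, 14.0), (21, 16.0)]
print(days_until_limit(samples, 100.0))  # 315.0 -- the day the trend hits 100 GB
```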

Dez Blanchfield: That’s awesome. Fantastic demo. I really enjoyed it. I’m going to pass back to Eric because I know there are a couple of questions that have popped up from our amazing audience today. Thank you so much, it’s been really great to get to know the product well, and I look forward to keeping a very close eye on it.

Eric Kavanagh: Okay, good. We do have a couple of good questions. And we’re going a little bit over time, so we’ll try to wrap up quickly, because I know, Scott, you’ve got a hard stop. Here’s a big question: how about working on old data stores like VSAM, and Model 204, and IMS and IDMS, and those kinds of things? Do you see that very often these days, and how well does it work?

Scott Walz: I don’t want to tell you that you’re stuck. For some of those environments, if they have ODBC or JDBC drivers, and I know some of them are out there, we can connect and you can work with them that way. But for the most part, the green screen is still the way to go.

Dez Blanchfield: I love the green screen.

Eric Kavanagh: Well you know, as Dez pointed out with that one slide, where he had all those different applications and tools that are available today, that is a very daunting reality for anyone who wants to responsibly perform the function of a database administrator. And I’m guessing that over time you guys can build connectors to any one of these tools as and when customers demand, and so forth, right? So that you enable that single pane of glass.

Scott Walz: And that was the big key behind making DBArtisan equipped to be able to handle those JDBC and ODBC connections. We really extended it out now. Now, as long as we have that connection, right, as long as we have that driver, we can connect and work against it.
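The “as long as we have the driver” approach Scott describes mirrors generic database interfaces such as ODBC/JDBC, or Python’s DB-API, where the same client code works against any back end that supplies a conforming driver. Here is a minimal sketch of that idea, using the standard sqlite3 module as the stand-in driver; the function name is invented for the example.

```python
import sqlite3

def run_query(connect, sql):
    """Run a query through any driver that exposes a DB-API-style
    connect(); the caller supplies the driver, not platform code."""
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()
    finally:
        conn.close()

# Swapping the lambda for another driver's connect() is all it takes
# to point the same code at a different database.
rows = run_query(lambda: sqlite3.connect(":memory:"), "SELECT 1 + 1")
print(rows)  # [(2,)]
```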

Eric Kavanagh: That’s good stuff. Well folks, we archive all these for later viewing. I posted a link to the slides, hopefully you can see that, via SlideShare. Thanks so much for all of your efforts, gentlemen. Wonderful webcast today again. A lot of good slides. A lot of good content. I loved that demo. It really is kind of interesting that you guys have targeted a very sweet spot in the marketplace because there is such an explosion of database types these days. And we just need, as managers, some place to handle all of that. Well done, guys. We’ll catch up to you tomorrow for another Hot Technologies. Hopefully you’ve carved out an hour tomorrow. Same time. Same station. We’ll catch up to you next time, folks. Take care. Bye bye.