Bulletproof: How Today’s Business Leaders Stay on Top


Host Eric Kavanagh discusses backup and recovery with IDERA's Tep Chantra in this episode of Hot Technologies.

Eric Kavanagh: OK, ladies and gentlemen, it is Wednesday at 4:00 Eastern, and for those in the enterprise technology space, you know what that means: It's time for Hot Technologies. Yes, indeed. My name is Eric Kavanagh, I'll be your moderator for today's event entitled “Bulletproof: How Today's Business Leaders Stay on Top.” And folks, we'll have a nice, intimate conversation here today; it's going to be Tep Chantra and yours truly hosting this conversation. We're going to talk all about a number of different things, including disaster recovery, backup and restore, but really the term I like to use these days is data resiliency – I heard that from a gentleman just a couple of weeks ago, and it really, it makes a lot of sense. Because it speaks to just how important it is to have a resilient information infrastructure underneath your business.

This is the information economy these days, which means most companies are reliant in some sense or other on information assets, on data. I mean, even retail companies, even hardware companies, really any kind of organization these days is going to have some kind of information backbone, or at least they're going to, if they're in the modern age, if you will. There are some mom and pop shops that still can avoid that stuff, but even there, you're starting to see a lot more proliferation of information systems, many of them cloud-based, frankly, but a lot of them still on premise, for handling customer transactions, keeping on top of things, for knowing what your customers want, for knowing what the inventory is, for knowing what it was, being able to understand the big picture – it's really important stuff these days.

So, data resiliency is a term I like to use; redundancy is another term that comes to mind. But you want to make sure that no matter what happens, your employees and your organization are going to have the information they need to serve your customers. So, I'm going to walk through, just kind of framing the argument, before Tep steps in and explains to us some of the stuff that IDERA has going on. Of course, IDERA's done quite a few webcasts with us in the last year or so. It's a very, very interesting company, they're focused on some of the brass tacks, blocking and tackling, as necessary, to survive in the information economy. We'll kind of dive in.

Bulletproof infrastructure – that's actually an old picture of a mainframe, look at that, it's like early 1960s from Wikipedia. You think about way back then – in the mainframe days there weren't a lot of access points for the mainframes, so security was kind of easy, backup was pretty straightforward, you could understand what had to be done, you just had to go in and do it. Of course, then there weren't that many people who knew what to do, but the ones who did, it was pretty clear what you had to do. And there wasn't too much concern about that. You did have the occasional issue, but it wasn't really all that common.

Back in the day, this stuff was fairly easy – today, not so much. So, here's the picture – that's actually Hercules fighting the Hydra right there. For those of you who are not big into mythology, the Hydra was a very vexing creature in that it had multiple heads, and any time you chopped one off, two more came up in its place, so it kind of speaks to the challenge of dealing with some of the issues that you find in life; specifically in that context, it was really geared around bad guys. You take out a bad guy, two more crop up in their place. And you kind of see this in the hacking world, quite frankly, it's a big industry these days and it's just one of the big challenges that faces us.

So, you think about if you're trying to map out your data resilience strategy, what do you have to worry about? Well, there are lots of things to worry about: disasters, fires, floods. I spent a lot of time in the South and New Orleans of course has some interesting stories regarding hurricanes and flooding and so forth. And a lot of times human error comes into the play, comes into the picture, I should say. And that was the case even in Katrina in New Orleans, because yes, a hurricane came through, that is an act of God, as they say, a force majeure. But nonetheless it was human error leading up to the hurricane that resulted in several of the breaches of the levees. So, there were three of them, in fact, there was one on the industrial canal, and the problem there is a ship had not been moored properly, down river. And the hurricane came in and pushed it off its moorings, and it actually threaded the needle going round the bend, where the river bends in right outside of New Orleans and it just went right down the industrial canal and crashed through one of those walls. So, even though, yes it was a natural disaster, still, it was human error that resulted in that huge problem.

And the same thing occurred on the other side of town, where there was a section of the levee that had never been completed, apparently because the city and the army corps of engineers had never agreed upon who was going to pay for it. Well, it doesn't take a rocket scientist to figure out that if you have one big gaping hole in your levee, that's not a very effective levee. And so, the point is human error really does play into the scenario where disaster strikes. So, even if it's fire, or if it's a flood, or if it's an earthquake, or whatever the case may be, there's likely something someone could have and should have done to prepare for such an event. And of course, that's what we traditionally call disaster recovery. So, yes, disasters occur, but human beings should really see through that stuff, and prepare accordingly. We'll talk a bit about that today with Tep.

So, disgruntled employees – do not underestimate the damage that a disgruntled employee can do – they're out there, they're everywhere. I know people who have told me stories of just really unpleasant things that have happened, where people just do bad things, they intentionally sabotage their own organization, because they're unhappy. Maybe they didn't get a raise, or they got fired, or who knows what happened. But that's something to keep in mind, and it's a very significant component in the case of licensing, too – just as an FYI out there, folks. One of the stats I heard was something like 60 percent of all tips that software companies get for failure to pay license fees come from ex-employees. So, you want to make sure that you bought that software and that you got it fair and square. Corporate sabotage doesn't happen all the time, but it does happen. Privacy issues also come into the mix; you have to be careful about what you're storing and how you're storing it, really think through these things.

And I always try to remind people in terms of regulation, it's really important to have a plan and to execute on that plan, because when push comes to shove or some auditor comes in or a regulator, you want to be able to point to your policy that you have, and then explain how it is that you address that policy, when certain things happen, like a disaster for example, like an issue of being audited or whatever the case may be. You want to know what you were doing, and have a record of that – it's going to go a long way to keep the auditor at bay, and that's just good stuff.

So, hackers, of course – I'm going to talk a couple of minutes about hackers and why they pose such a threat. And of course ransomware – just take this whole case with WannaCry, the WannaCry ransomware, that just covered the planet in very short order, and apparently some clever unfriendly people got hold of a bunch of information from the NSA; there were hacking tools that were used and exposed. So, I remind people of an old fable, Aesop's Fable, that says we often give our enemies the tools of our own destruction. This is something to keep in mind, because again, this technology was cordoned off by the NSA, the National Security Agency. But it was exposed, and got out into the world, and just wreaked havoc. Guess what? A lot of companies had not upgraded their Windows environment, so it was an old version – I think it was Windows XP – that was compromised. So, again, if you are being diligent, if you're staying on top of your patches and your versions of your operating systems, and if you're backing up your data and restoring your data – if you're doing all the things you should be doing, stuff like that is not that big of a problem. You can just tell the attackers, “Hey, guess what? We don't care, shut the system down, reboot it, load up the backups.” And you're off to the races.

So the point is yes, these bad things do happen, but there are things that you can do about it – that's what we're going to talk about on the show today. So, I did some research – actually, it was kind of interesting, if you go to Wikipedia and look up hacking, it goes all the way back to 1903, when a guy hacked a system for telegraphs and was sending rude messages through the telegraph, just to prove that he could hack it, I suppose. I thought that was rather amusing. The point is that hackers are basically good at breaking and entering, this is what they've been doing for years and years and years. They're like the lock pickers of the modern internet world.

And you have to remember that any system can be hacked, it can be hacked from the inside, it can be hacked from the outside. A lot of times, when those hacks occur, they will not show themselves, or the people who hack into your system aren't going to do much for a while. They wait for a while; there's a bit of strategy involved, and partly it's just the business side of their operation, because typically what hackers are doing is they're just doing their one little part of the program. So a lot of guys who are good at penetrating firewalls and penetrating information systems, well, that's the thing that they do best, and once they do penetrate a system, then they turn around and try to sell that access to someone. And that takes time, so often it's the case that someone behind the scenes is just trying to sell access to whatever system they've hacked – your system, potentially, which would not be too much fun – and they try to figure out who will actually pay for access to the system.

So, there is this sort of disjointed network of individuals or organizations out there, who coalesce and collaborate to make use of stolen information. Whether it's identity theft, or just data theft, whether they're making life unpleasant for a company – that's the case with this ransomware, these guys just take hold of your systems and they demand money, and if they get the money, maybe they will or maybe they won't give your stuff back. Of course, that's the real scary thing: Why would you even want to pay that ransom? How do you know that they're going to give it back? They might just ask for double, or triple. So, again, this all speaks to the importance of really thinking through your information strategy, your resiliency for your data.

So, I did some more research, that's an old 386; if you're old like me, you could remember these systems. And they were not that problematic in terms of hacking; there weren't a whole lot of viruses out back then. These days, it's a different game, so of course the internet comes along, and changes everything. Everything is connected now, there's a global audience out there, the first major viruses started to attack, and really the hacking industry started to balloon, quite frankly.

So, we'll talk a little about IoT, we've got a good question already from an audience member: How do I protect IoT devices, from a vulnerability standpoint? That's a big issue – quite frankly, there's a lot of effort being placed into that right now, into how you deal with the potential for IoT devices being hacked. It's a lot of the usual issues that you focus on – password protection, for example, going through the process of setting it up carefully, setting your own password. A lot of times people will just leave a default password in there, and that will in fact result in a vulnerability. So, it's the basic stuff. We just had another show on security earlier this week, on our radio show, with several experts on there and they all said that 80–90 or more percent of hacking problems, whether it's IoT or ransomware, or whatever, would be avoided if you just dealt with the basics, if you just made sure that you had your bases covered, you did all the basic stuff, that you know you're supposed to do, that handles over 80 percent of all the problems out there.

So, the internet of things, OK, IoT. Well, if you think about IoT, it's not all that new. Frankly, there are high-end manufacturers who are doing this kind of thing 20 and 30 years ago, and then about 15, 20 years ago, that's when RFID came in – radio frequency identification tags – which had been extremely useful in helping very large organizations, like retailers, for example, shipping companies, any product company that moves stuff around the country, around the world, it's extremely useful to have all that data, you find out where your stuff goes; if something disappears, you find out.

Of course, it's not a foolproof solution; in fact, I had my laptop – my Apple – absconded with, from the Atlanta airport – Atlanta Hartsfield Airport – someone just took my bag, with my computer. I thought they don't steal bags anymore; they always find bags – wrong. Someone stole the bag and then it appeared about a month later, it woke up, I got a little message from Apple, from iCloud, that it woke up about seven to ten minutes south of Atlanta Hartsfield Airport; someone just decided to go into it. They'd just been sitting on it for about a month and I went through the fairly frustrating process of realizing, well, OK, I know roughly where it is, it may be in this house, that house, the house across the street, it was just there temporarily. What do you do? Like, how is that information useful to you?

So, even though you learn something, sometimes you can't do a whole lot about it. But nonetheless, this IoT-enabled world, I have to say, I think we're not quite ready for it, to be honest. I think we have a case where there's a lot of good technology out there and we may be moving too quickly to take advantage of these things, because the threat is so significant. We just think about the number of devices now that are part of the threatscape, as people talk about it, that's a huge, huge wave of devices coming our way.

Some of the big hacks that have occurred recently, taking down DNS servers, had to do with IoT devices being co-opted and turned against DNS servers, just classic DDoS hacks, distributed denial of service, where literally, these devices are reprogrammed to call on a DNS server at a blistering pace, where you'll get hundreds of thousands of requests coming into this DNS server, and it just chokes and crashes and dies. It's like the story of the great [inaudible] on a not-so-popular website – the servers just crashed; they're just not made for that kind of traffic.

So, IoT is just something to keep in mind, again, if we're dealing with backup and restore, it's just important to remember that any of these attacks can happen at any given point in time. And if you're not prepared for that, then you're going to lose a lot of customers, 'cause you're going to make a lot of people very unhappy. And you'll have that reputation management to deal with. That's one of the new terms that's been floating around there, “reputation management.” It pays to remember and appreciate that reputations can take years to build and minutes or even seconds to squander. So, just kind of keep that in mind as you're planning out your information strategy.

So, then, there's this whole concept of the hybrid cloud. I've got one of my old, favorite movies from childhood, The Island of Dr. Moreau there, where they created these half-human, half-animal creatures – that's kind of like the hybrid cloud. So, the on-premises systems are going to be here for years – make no mistake about it, it's going to take a long time to wind down those on-premise data centers – and even in small businesses you're going to have a lot of customer data in your systems and your drives, and the more complex that situation gets, the harder it's going to be to stay on top. That said, consolidating in one database is always a real challenge as well, especially with a system like MySQL, for example.

Trying to cram everything into one system has never been very easy to do. Typically when it is done, there are problems, you get performance problems. So, again, it's going to be an issue for quite some time now. Legacy infrastructure out there in data centers and in businesses, of course. That was the problem with WannaCry, is you have all these XP systems – Microsoft doesn't support XP anymore. So, it's just kind of amazing how some of these issues that become so severe and so painful monetarily and otherwise could be avoided with basic maintenance and upkeep. Basic stuff.

So, there's going to be a skills gap; these skills gaps are going to grow over time, because again, the cloud is the future – I don't think there's any doubt about that – the cloud is where things are going; there's already a center of gravity in the cloud. And what you're going to see is more and more companies, more and more organizations looking to the cloud. So, that's going to leave some skills gaps on the on-premise side; it's not there yet, but it's coming. And even think about amortization, so a lot of big companies, they can't just move to the cloud – they could, but it wouldn't make a lot of sense, cost-wise, because they're amortizing all those assets over three, to five, to seven years, maybe.

That creates a fairly significant window of time, during which they're going to be migrating away from on-prem and toward the cloud environment. And frankly we've reached the point now, where on-premises is probably less secure than the cloud. Kind of funny, because that was the big knock for a long time: Companies were worried about going to the cloud for security reasons, they were worried about the cloud being susceptible to hacks. Well, it still is, certainly, but really if you look at the big guys: Amazon, Microsoft, even now SAP and Google, all these guys, they're pretty good at that stuff, they're pretty good at securing the cloud itself.

And then, of course, finally on the on-prem side, dated systems: these applications get long in the tooth pretty quickly these days. I heard a joke one time, the definition of legacy software is any software that's in production. (Laughs) I think it's kind of funny. So, over to the cloud systems, I mentioned the major players, they're just growing by the day. AWS is still dominating that space, although Microsoft to their credit has really figured some stuff out and they're focused very intently. So is SAP, the SAP HANA Cloud – it's the HANA Cloud platform they call it – it's a huge area of focus for SAP and for obvious reasons. They know that the cloud now has gravity, they know that the cloud is an excellent marshaling area for technology.

So, what you're seeing is this consolidation around cloud architectures, and you'll have a lot of work in the next two years on cloud-to-cloud migration. Even master data management across clouds is going to become a big issue. And Salesforce – look how big Salesforce has become – it's an absolute force to be reckoned with. Also, marketing systems are in the cloud; there are something like 5,000 marketing technology companies now – 5,000! It's crazy. And you're seeing more effort on this single pane of glass, for being able to manage multi-cloud environments. So, one last slide from me, and then I'll hand it over to Tep to give us some advice on how we can stay ahead of the game, here.

This, we talked about on my radio show earlier this week, the shared responsibility cloud model. So, what they talk about is how AWS is responsible for securing the cloud, so security of the cloud – that means compute, storage, database, networking, etc. But the customer is responsible for data and security in the cloud. Well, it was funny because they use this term “shared responsibility” and what I kind of gathered from the guests on our show is that it's not really shared at all. The idea is, it's your responsibility, because odds are if push comes to shove and someone infects your environment, AWS is probably not going to be held liable, you are.

So, it's kind of a strange world, I think it's a bit of a duplicitous term, “shared responsibility,” 'cause really it's kind of not, it's kind of still your responsibility to stay on top of all that stuff. So, with that, and I know I've talked a bit about IoT – we had one good question about how to secure IoT devices – there is going to be an absolute range of technologies coming out to be able to deal with that. Obviously you've got some software and some firmware on the IoT devices themselves, so that's something to keep in mind; you have to worry about whatever authentication protocol you have to use for that stuff. But like I say, the basics are probably going to get you through most of the trouble that you're going to encounter – just doing password protection, changing passwords and really kind of staying on top of that – monitoring those things, and watching.

A lot of the technologies used for monitoring fraud, for example, or nefarious activity in networks really focuses on outliers, and that's something that machine learning is actually pretty good at, at clustering and watching for outliers, watching for strange patterns of behavior. Like, frankly, what we saw with this recent DDoS attack on DNS servers, where all of the sudden all these devices start sending a callback to a particular handful of servers, well that doesn't look good. And frankly, what I always remind people about with these systems: Any time you have serious automation in those kind of environments, always have the manual override, have the kill switch – you want to have some kind of kill switch programmed in there to shut those things down.
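The outlier-watching idea Eric describes can be made concrete with a small sketch. This is purely illustrative – not any vendor's detection code – and the device names and the factor-of-ten threshold are invented for the example. Using the fleet median rather than the mean keeps a single extreme device from dragging the baseline up.

```python
from statistics import median

def flag_outliers(requests_per_minute, factor=10.0):
    """Flag devices whose request rate is far above the fleet median.

    The median is robust: one compromised device hammering a DNS server
    barely moves it, so that device stands out clearly.
    """
    med = median(requests_per_minute.values())
    return [device for device, rate in requests_per_minute.items()
            if rate > factor * max(med, 1)]

# A mostly quiet fleet of cameras, with one device suddenly flooding a server.
fleet = {"cam-01": 4, "cam-02": 5, "cam-03": 3, "cam-04": 6, "cam-05": 9000}
print(flag_outliers(fleet))  # → ['cam-05']
```

In a real deployment the flagged device would feed the manual override Eric mentions – the kill switch that takes it off the network.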

So, with that, I'm going to push Tep's first slide, he's going to be doing some demos for us. And then I'll go ahead and give you the keys to the WebEx tab. Now, it's coming your way, and take it away.

Tep Chantra: All right, thanks, Eric. My name is Tep Chantra, and I'm the product manager here at IDERA. Today, I wanted to talk about IDERA's enterprise backup solution, namely SQL Safe Backup. For those of you who aren't familiar with SQL Safe Backup, let's take a quick look at some highlights of the product. So, as you may have already guessed from the name, SQL Safe is a SQL Server backup and restore product, and one of its key features is the ability to perform rapid backups. And it's an important feature, given that most backups have to be made, in most cases, very quickly, in a small window of time.

In some environments now, meeting those backup windows can be quite a challenge, especially when you have several large databases that have to be backed up. SQL Safe's ability to complete the backup operations quickly allows end users to be able to meet those backup windows. Speaking of large databases, backing up those big databases obviously means larger backup files. Another feature where SQL Safe shines is the ability to compress backup files. The compression algorithm used can achieve up to like 90–95 percent compression. This means that you can store backups longer, or allow cost savings in terms of storage needs.
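To get a feel for why backup compression pays off so well, here's a rough illustration using generic gzip – this is not SQL Safe's own algorithm, and the simulated row data is made up – but database pages repeat a lot of structure, so savings in the 90-plus percent range Tep mentions are plausible:

```python
import gzip

# Simulated page data: highly repetitive, like many database backup files.
raw = b"customer_row:ACME Corp;status=active;region=south;\n" * 50_000
compressed = gzip.compress(raw)

savings = 1 - len(compressed) / len(raw)
print(f"{len(raw)} bytes -> {len(compressed)} bytes ({savings:.0%} saved)")
```

The more uniform the data, the closer you get to the top of that range; encrypted or already-compressed columns compress far worse.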

On the flip side of the backup operations, you have restore operations. One of the battles that DBAs must fight in restoring databases is that those databases have to be restored as quickly as possible. In cases of large databases a full restore of a backup file can take several hours, which obviously means longer downtime, and possibly loss of revenue. SQL Safe fortunately has this feature called “Instant Restore,” which basically cuts down the time between when you start a restore and when the database can be accessed by end users or even applications.

I remember speaking to a customer once, where he reported the [inaudible] restore of one particular database had taken 14 hours. But with the instant restore feature, he was able to get access to that database within an hour or less. Policy-based management, another highlight of SQL Safe is the ability to create policies and manage your backup operations through those policies. When you configure a policy, you basically define which instances are to be backed up or which databases on those instances are to be backed up, what kind of backup operations are to be performed, and even the schedule at which those backups are to occur.

In addition, you can also configure alert notifications. That way you can be notified on events such as: the backup completed successfully, the backup failed, or maybe the backup completed, but there are some warnings associated with that operation. You'll also be notified if a backup didn't execute as scheduled. That's an important notification, 'cause then you risk having a window of time where a backup does not exist. And receiving such a notification will indicate to you that you need to go out there and make that backup run, and then possibly do some research as to why that backup didn't run as scheduled.
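The policy idea – which databases, what operation, what schedule, and an alert when a scheduled backup never ran – can be sketched in a few lines. This is a generic illustration, not SQL Safe's actual policy format; the field names, database names, and dates are all invented.

```python
from datetime import datetime, timedelta

# A hypothetical policy: nightly full backups of two user databases.
policy = {
    "name": "Nightly user databases",
    "databases": ["Sales", "Inventory"],
    "operation": "full",
    "interval": timedelta(days=1),
}

def missed_backups(policy, last_backup_times, now):
    """Return databases whose last backup is older than the policy interval.

    A database with no recorded backup at all is also flagged.
    """
    return [db for db in policy["databases"]
            if now - last_backup_times.get(db, datetime.min) > policy["interval"]]

now = datetime(2017, 6, 7, 16, 0)
last = {"Sales": now - timedelta(hours=6), "Inventory": now - timedelta(days=3)}
print(missed_backups(policy, last, now))  # → ['Inventory']
```

A check like this, run on a schedule, is what turns "the backup silently didn't happen" into the notification Tep describes.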

Some of the other things, let's see here – fault-tolerant mirroring, that essentially means that we have the ability to create duplicate backup files in more than one location. So, for instance, let's say you have a target destination as your primary – that's your main storage, where all your backup files go. However, you may have the need to have a copy of the same backup file, for instance on the local machine itself, just in case you need to do some additional testing, make sure that that database can be restored, whatever the case may be. SQL Virtual Database – what that essentially is, is that we have another product that was recently integrated into SQL Safe, called SQL Virtual Database.

As I mentioned, it was recently integrated, so it's actually included inside SQL Safe itself. Now, what SQL Virtual Database essentially allows you to do is actually create a virtual database. (Laughs) I hate using the same terms as the definition, but what essentially happens is that we will mount a database based off the backup file. So, SQL Server thinks that the database is actually up and running, whereas it's actually reading data from the backup file, rather than actually creating the actual database itself on the file system.

This is real helpful because it allows you to access the data that's within the backup file without actually consuming additional disk space, so it comes in real handy, especially when you're dealing with huge databases that you just need to take a quick view of, or do some dev work on. Zero-impact encryption – what that essentially means is that when we're performing backups of these databases, we can actually encrypt the backup files, and when we're encrypting these backup files, we're not adding any additional load to the actual performance of the system. So, it's completely negligible. Log shipping is another thing that we can do through our policies, as I mentioned earlier. And in regards to the advantageous licensing – what that essentially means is that our licensing model allows you to move licenses from one instance to another instance, with a few simple clicks of the mouse.

Moving on, let's take a quick look at the architecture of the product itself. So, there are basically four main components to the product. We have, starting from the left, the SQL Safe Management Console and Web Console. Both of these are essentially user interfaces, one is the desktop client and the other is a web application. Both of these user interfaces pull data from the next component, which is the SQL Safe Repository Database. The repository database basically stores all of your operational history, all of the backup and restore operations. Those details are stored here. All of this data that's in the repository is managed by the SQL Safe Management Service, which is the next component. The Management Service is responsible for updating the repository database and sending alert notifications. The data regarding the backup and restore operations are actually coming from the SQL Safe Backup Agent, which is the last component, on the far right.

The SQL Safe Backup Agent is a component which is installed on all of the servers hosting the SQL Server instances that you're trying to manage with SQL Safe. And this is the service that's actually responsible for performing the backups and compressing them. Now, on this slide, there's also a fifth component, which isn't entirely required, but it's a nice-to-have thing. And that's our SQL Server Reporting Services RDL files. What this basically allows you to do is deploy some RDL files to SQL Server Reporting Services so that you can run reports against our repository database. And we have a number of different reports, such as the last time your backup ran, details regarding backup operations, what have you.

And excuse me. Let's go ahead and take a look at SQL Safe itself. Give me a second here. And give me a second to log in. As you can see, what I have loaded right now is the web application, but first, I would actually like to take a look at the desktop application. So, let me fire that up real quick. And this is the SQL Safe desktop application – when it first loads, it takes you to the SQL Safe today view. This essentially lists all the backup operations or restore operations that have happened as of today. It also gives you a quick status of your environment; as you can see here, it states that I have one policy, and it's in an OK state, which is good, 'cause I only have one policy and I'm hoping that it's not [inaudible]. It also gives you a summary of operations that were successful, and any operations that might have failed. Overall, I'm in good shape: Just by taking a quick look, you can see all greens; we're good to go.

On the left here you can see all the servers that you have registered with SQL Safe and the ones that you're basically managing. If you expand it, you get to see the list of databases on that system. If you select a particular database, you can see the operational history for that particular database. There's not much more to explain, other than that you can go ahead and perform ad hoc backups from this window as well, and it's real quick and simple. And let me demonstrate that to you real quick. You just right click on it, and select the operation you want to do. And for this purpose, I'll go ahead and choose backup database. And the SQL Safe Backup Wizard opens up. From here, you select which instance you want to perform the backup against, and which databases you want to back up. In this case, I preselected the HINATA machine, and this Contoso Retail database, because that's what I had highlighted when I chose the option. I'll go ahead and leave that for now, but you do have the option to actually select more databases, so that if you want to back up all your user databases, for example, you can select this radio button and it will preselect all of those. Let me go ahead and just proceed with that.

On to the next page of the wizard. This is where I can select the backup type that I want to perform, and you have a number of different options here – options that I'm sure are found in all backup utilities. For instance, you can perform a full backup, a differential backup, a transaction log backup, or you can actually just simply back up the database file itself. You also have the option of creating a copy-only backup, which basically is used when you don't want to mess around with the LSNs. I'm going to select “no” for now. And you also have the option to verify the backup after the backup's complete – that way you kind of make sure that your backup's good and can be used later on. It's always one of those features that you want to make sure that you have, just to give you a little bit of assurance that the backup is usable.
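The copy-only option matters because a regular full backup resets the base that later differential backups are measured against, while a copy-only full deliberately does not – that's the "not messing with the LSNs" part. A tiny model of that bookkeeping (a generic illustration of the concept, not SQL Server internals; the LSN values are invented):

```python
def differential_base(backup_history):
    """Return the LSN of the full backup the next differential would use.

    backup_history is a list of (lsn, kind) pairs in chronological order.
    A normal "full" backup resets the differential base; a "copy_only"
    full is ignored, so the existing restore chain is undisturbed.
    """
    base = None
    for lsn, kind in backup_history:
        if kind == "full":   # normal full: becomes the new base
            base = lsn
        # "copy_only" and "diff" entries leave the base untouched
    return base

history = [(100, "full"), (150, "diff"), (180, "copy_only"), (210, "diff")]
print(differential_base(history))  # → 100
```

So the ad hoc copy at LSN 180 can be handed to a dev team without breaking the nightly full-plus-differential restore sequence.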

Here, you find the name and description. This is essentially metadata that can help you easily identify what the backup was used for, so I'm going to say “demo purpose” here, and “user database backup for demo.” Next, here we define where we want to save our backup file to, and you have several different options here: you can save it to a single file, you can create striped files, you have the ability to select the target destination, and we also support Data Domain and the Amazon S3 cloud, in case that's where you want to save your information.

I'll proceed with the single file for this demonstration. This “enable network resiliency” option is a really nice feature within SQL Safe, in the sense that if you're backing up to a network location – which is what I'm doing here, you can see from the primary archive – there are chances that you might encounter some network hiccups. In some cases, if a network hiccup is encountered, the backup operation will completely bail out. Well, with the enable network resiliency option, what SQL Safe essentially does is pause the backup, wait for a specific amount of time, and try the network location again. And if it's able to connect, then it will just resume the backup right where it left off. That way you don't spend hours at a time running this backup and then, right when it's getting close to finishing, a network hiccup's encountered – we don't fail the operation right away, we just wait a little bit and try to complete it again.

There are some other options when configuring this. It basically entails the interval at which we retry, so in this sense, if we encounter a network hiccup, it will try to access the network location again in ten seconds. The second option here basically says that if we encounter network hiccups for – it says 300 seconds here, so what, five minutes total – then we'll just completely bail out of the backup operation. And that's five minutes in sequence, so if we retry over and over and within that five minutes we still can't reestablish the network connection, then we'll completely bail out of the operation. This very last option here is basically for the whole duration of the backup, so if you lose the connection for ten seconds, reestablish it, and then lose it again, and that basically repeats for 60 minutes, then the operation's going to bail out. And these are configurable, as you can see, so you can tailor it to your environment.
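The retry behavior described above – retry every ten seconds, give up if a single outage lasts five minutes, give up if outages accumulate past a total limit – can be sketched roughly like this. This is a toy Python model of the idea, not IDERA's code; the function and parameter names are invented for illustration.

```python
import time


def resilient_backup(write_chunk, chunks,
                     retry_interval=10,        # seconds between retries
                     single_outage_limit=300,  # bail if one outage lasts this long
                     total_outage_limit=3600,  # bail if outages sum to this long
                     sleep=time.sleep, clock=time.monotonic):
    """Write chunks to a flaky destination, pausing and resuming on errors.

    `write_chunk` raises IOError on a network hiccup. Returns the number of
    chunks written, or raises IOError once either outage limit is exceeded.
    """
    total_outage = 0.0
    i = 0
    while i < len(chunks):
        outage_start = None
        while True:
            try:
                write_chunk(chunks[i])
                break  # chunk written; resume right where we left off
            except IOError:
                now = clock()
                if outage_start is None:
                    outage_start = now
                if now - outage_start >= single_outage_limit:
                    raise IOError("single outage exceeded limit")
                if total_outage + (now - outage_start) >= total_outage_limit:
                    raise IOError("cumulative outage exceeded limit")
                sleep(retry_interval)
        if outage_start is not None:
            total_outage += clock() - outage_start
        i += 1
    return i
```

Injecting `sleep` and `clock` keeps the sketch testable without real waits; the essential shape is pause, retry, resume, with two separate give-up thresholds.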

This mirror archive option right here is what I was talking about earlier – having fault-tolerant mirroring. This is where you can specify another backup location, should you ever want to. I'm going to leave this unchecked right now, just 'cause I'd like to go ahead and proceed. On these options windows, you can define things such as the type of compression that we want to use for this backup operation and whether or not we want to enable encryption for the backup file. We offer a number of different options for compression, including none, if you choose not to have any compression at all. So, just to quickly go over these options.

High speed basically tries to complete the backup as fast as possible while including some amount of compression. iSize is more focused on including as much compression as possible, but – because we're trying to compress it so much – it may take a little bit longer and will likely use a little bit more CPU. Level 1 essentially means the least amount of compression, all the way to Level 4, the most amount of compression that we can add. So, this is a little bit more detailed: iSpeed typically – what's the word? – ranges between Level 1 and Level 2 compression; it takes a look at your system to see how much CPU and other resources are available and makes a judgment on how much compression it should use, between Level 1 and Level 2.

iSize does the same thing, except with Level 3 and Level 4. There are some other advanced options here, such as how many CPUs we should be using, and here's the option for creating the mapping data for SQL Virtual Database and also our instant restore feature. You can include database logins, and there are some other options some users find very valuable, like generating checksums, so they can check later on and make sure the backup files are good. If we proceed to the next page, this is where you set up your notifications. And you can see the various options we have here: notify if the backup fails, notify if the backup is skipped for whatever reason, if the backup is canceled, or if the backup completes with warnings; and if you so wish, you can be notified if your backup completes cleanly. For environments with a large number of databases, that might not be something you want to enable, just because it's more than likely your backups are going to succeed and you'll be flooded with emails.
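The iSpeed/iSize behavior described here – picking a compression level within a band based on available resources – can be modeled with a toy function. This is purely illustrative Python; the mode names mirror the talk, but the 50-percent threshold and the function itself are invented for the sketch, not IDERA's actual heuristic.

```python
def pick_compression_level(mode, cpu_idle_fraction):
    """Toy model: map spare CPU to a compression level within the mode's band.

    mode: "ispeed" chooses between Levels 1-2; "isize" between Levels 3-4.
    cpu_idle_fraction: 0.0 (machine fully busy) .. 1.0 (machine idle).
    """
    low, high = {"ispeed": (1, 2), "isize": (3, 4)}[mode]
    # More idle CPU means we can afford the heavier compression in the band.
    return high if cpu_idle_fraction >= 0.5 else low
```

The design point is the same as in the product: the adaptive modes never leave their band, they only choose how aggressive to be within it.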

On the next page you can view a summary of what you've defined for this backup operation. And if everything looks good, you can go ahead and click backup, and we kick it off. Before I click backup, let me go ahead and show you this “generate script” button. SQL Safe offers a command-line interface where you can actually kick off a backup or restore operation, what have you, through a command line, a DOS prompt. If you click generate script here, it basically provides you with the actual script that you can use if you wanted to kick the backup off from the command line.

Another neat thing is that we also offer extended stored procedures, and in this case we'll generate a script for you that will execute this exact same backup operation using extended stored procedures – just a little quick tidbit that I wanted to share. So let's go ahead and kick off this backup. And you can see that the backup's already started. This database is a little large, so it may take a little while. You can see that I ran it a few times here previously, and it took anywhere from one minute to three minutes. This is a Level 4, so I'm guessing it's going to be between those two times.

While that runs, let's take a real quick look at policies. As I mentioned previously, policies allow you to configure scheduled backup operations across your enterprise, so I have a policy here, preconfigured already, and rather than creating a new one, let's go ahead and take a look at the details of this one. I do apologize, my VM is running on my personal laptop and it seems to be running the fan pretty hard. (Laughs)

Eric Kavanagh: That's good – you know, I was going to ask you a question while we're watching this here. Does IDERA use much change data capture in terms of backups, or are you doing whole backups every time? How does that work, do you know?

Tep Chantra: Say that one more time, I'm sorry?

Eric Kavanagh: Yes, so do you know if IDERA uses CDC, change data capture technology in order to do smaller backups, or is it doing full backups every time?

Tep Chantra: I don't believe so. I do recall seeing that come up previously in a number of tickets. And if I recall correctly, no, we're not leveraging CDC; to be honest, we're essentially letting SQL Server perform the backup, and we're just capturing the data in between and compressing it, resulting in a backup file being created. So, essentially using that. Yeah.
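The pass-through idea Tep describes – letting the engine produce the backup stream while the tool compresses it in flight – can be sketched with a simple streaming compressor. This is an illustrative Python sketch using zlib, not how SQL Safe actually captures SQL Server's backup stream.

```python
import zlib


def compress_stream(chunks, level=6):
    """Compress a stream of backup chunks on the fly.

    Each incoming chunk is fed to the compressor as it arrives, so the
    full backup never has to sit uncompressed in memory or on disk;
    the final flush() emits whatever the compressor was still buffering.
    """
    comp = zlib.compressobj(level)
    for chunk in chunks:
        piece = comp.compress(chunk)
        if piece:
            yield piece
    yield comp.flush()
```

The result of joining the yielded pieces is a single compressed artifact, analogous to the compressed backup file the product writes out.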

So, now that I have my policy loaded— oh, I'm sorry, did you have another question?

Eric Kavanagh: No, that's it. Go ahead.

Tep Chantra: OK, so now that I have my policy loaded, you can see some quick things here: name, description, and you can set what kind of policy you're going to create – whether it's a policy whose schedule is going to be managed by the SQL Server Agent, or one whose schedule is going to be managed by the SQL Safe Backup Agent. In most cases you're going to want to use the SQL Server Agent, because that's typically something that's running anyway on your system, so you might as well leverage what's available to you. On the membership tab, this is where you specify the instances and the databases that you want to back up. In this case, you can see I've added all my registered instances and I've specified the specific databases that should be backed up. Now, if I wanted to, I could go ahead and edit these and say, “I want to back up all the databases, or just user databases, or even system databases.” The nice thing about this is I can also use wildcards to match certain databases.

I'm not going to make that change here, just because I don't want to make any big changes to my settings. So, let's go back to the options. On the options, this is where you define what kinds of backups you're going to perform, and if you take a look here, I have full backups, differential backups and log backups configured. And for each of these backups, I can define whether I want to use a specific amount of compression or turn encryption on – just like the options you would have found in the ad hoc wizard. And on locations, you can also define the destination of these backup operations. One of the good things about policies is that you can also define whether or not you want to go ahead and delete old backup files, based on X number of days or weeks, what have you.

And that is configurable for each backup type. So, you can see here, I have my full backups set to delete after one week, my differentials to delete after two days, and I want my log backups to delete after one day. This is real nice, 'cause it automates the handling of old backup files, keeping only the ones that you really need, based on time. On the next page you define the schedule, and again, the schedule can be specific for each type of backup operation you're going to perform: for my fulls, I'm running them weekly; my differentials, I'm running every six hours; my logs, I'm running every 30 minutes. On the next page is where you set up notifications, and it's essentially the same types of notifications that you found in the ad hoc backup; the one difference is that you have this new option that can tell you if the backup fails to start as scheduled. This is where you can be alerted in situations where your backups didn't run – real important, especially in cases where you have certain SLAs requiring that you have backups available at the times you need them. And on the next page you can view the summary. If I had made any changes and clicked finish, it would go out and make those changes, saving them to the repository and updating, for instance, the SQL Server Agent jobs.
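The per-type retention scheme from the demo – fulls kept a week, differentials two days, log backups one day – can be sketched as a simple age check. This is illustrative Python only; the file-tuple format and function name are invented for the example, not SQL Safe's internal representation.

```python
import time

# Retention windows in days per backup type, mirroring the policy in the demo.
RETENTION_DAYS = {"full": 7, "diff": 2, "log": 1}


def expired_backups(files, now=None):
    """Return the backup files older than their type's retention window.

    `files` is a list of (path, backup_type, mtime_epoch_seconds) tuples;
    anything returned is a candidate for automated deletion.
    """
    now = time.time() if now is None else now
    expired = []
    for path, btype, mtime in files:
        age_days = (now - mtime) / 86400
        if age_days > RETENTION_DAYS[btype]:
            expired.append(path)
    return expired
```

Running a sweep like this on a schedule is the automation Tep is describing: old files age out per type, and only the backups you still need survive.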

And just to quickly show you, here's a policy and the jobs that were created for that particular policy. And you can see it created three different jobs: one for each backup type. Now, real quick, let me take a quick look at the HUD interface – as I mentioned earlier, virtual database used to be a [inaudible] that we've integrated into SQL Safe. Now, as I mentioned, it basically fools SQL Server into believing that an actual database has been restored, when in actuality we're just reading the backup file. So, let me go ahead and mount one real quick for you guys. Let me take a backup file – let me take this full one right here. The process is completed, and real quick, if I refresh my databases here, you can see that the database is accessible and SQL Server thinks it's live, but in actuality, we're just reading the data out of the backup file.

Some other features that are new to this release include the ability to perform backups using the native backup format. It's real handy for those customers that need to make use of our policy-based management but want to keep the SQL Server file format for whatever reason. Now, I know we're running out of time, so I think I'd like to go ahead and stop this presentation, just so that we can take some questions, or whatnot.

Eric Kavanagh: Yeah, sure. So, I think one of the keys really is in policy management, right? As in thinking about the optimal policy and what do you base that on? Obviously in some cases there are regulations to worry about, but in a business maybe that's not highly regulated; you just need to find the optimal times to be doing your backups and then, I'm guessing you get some reports on how long it took and how expensive it was in terms of computational power and so forth. What goes into defining the optimal policy?

Tep Chantra: That's really case by case; every environment's going to have a different policy in regard to when these backups should run. That can entail the types of backups being run and the schedule at which they run, and it's really also dependent on their recovery needs, I suppose. That's the answer.

Eric Kavanagh: OK, yeah. And you talked about being able to do different kinds of backups and stripes was one of the options. Is that for sort of hot and cold data, or what's the logic behind going stripe, as opposed to some other method?

Tep Chantra: So, I think the best answer I can provide is that with striped files, what we essentially do is write the backup content across a number of different files. I believe the idea of using striped files is that you can possibly write your backup files faster that way. For instance, you could have each file going to a different location. That also serves as a means of security, since you're distributing your backup files to different locations.
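The striping idea – spreading one backup stream across several files, each of which could then go to a different disk or location – can be sketched as a round-robin split and its inverse. Again, this is a toy Python illustration, not SQL Safe's actual on-disk stripe format; the chunk size here is tiny just to keep the example readable.

```python
def stripe_backup(data, n_stripes, chunk=4):
    """Split a backup stream round-robin into n stripe buffers.

    Each stripe could then be written to a different destination in
    parallel; reassembly just interleaves the stripes back in order.
    """
    stripes = [bytearray() for _ in range(n_stripes)]
    for i in range(0, len(data), chunk):
        stripes[(i // chunk) % n_stripes] += data[i:i + chunk]
    return [bytes(s) for s in stripes]


def unstripe(stripes, chunk=4):
    """Inverse: interleave chunk-sized pieces of each stripe back into one stream."""
    out = bytearray()
    offsets = [0] * len(stripes)
    i = 0
    while any(offsets[j] < len(stripes[j]) for j in range(len(stripes))):
        j = i % len(stripes)
        out += stripes[j][offsets[j]:offsets[j] + chunk]
        offsets[j] += chunk
        i += 1
    return bytes(out)
```

The parallel-write speedup and the security angle Tep mentions both fall out of the same property: no single stripe file contains the whole backup.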

Eric Kavanagh: And there's some cool, new things in terms of restore capabilities, right? Because let's say there is some kind of event, whether it's a natural disaster or ransomware, whatever the case may be. You don't have to just have one option for restoring, right? Can you set priorities on what gets restored and what kinds of data? Can you talk about the options there?

Tep Chantra: Well, in terms of restore, I mentioned earlier that we provide the ability to perform instant restores, which essentially gets users to their data faster, right? And just to demonstrate, I did one earlier, so you can see here that, again, this database is not very huge – this is the one running on my laptop, so I think it's maybe like two gigs in size – and the actual restore completed within 37 seconds. So, it would have taken me 37 seconds before I'd be able to access my data, whereas with the instant restore, I was able to access my database within two seconds. So, you can imagine what it would look like if your database was much larger.

Eric Kavanagh: Yeah, good point. And of course, we were talking about this before the show; you've spent a lot of time on the frontlines doing support for people and then moved over to the product management space, so it's a bit of a different challenge, I suppose. But you were on the frontlines – I think it's a pretty good place to learn where people go wrong and what some of the problems are. What do you see as some of the more common pitfalls, that people could avoid if they just kind of thought through this stuff better?

Tep Chantra: Some of the common pitfalls are just – I suppose, as you mentioned earlier – scheduling your backups. There have been times where I've seen people trying to leverage, for example, our policies, [inaudible] policies, [inaudible] policies, where you're performing a lot of backups and basing them off of LSNs. And in some cases I've seen some people also have some other utility performing backups on their databases, which in effect messes up their log shipping policies, because backups are being made essentially outside of SQL Safe and we're not aware of them. It's mainly just planning things ahead; that's where the pitfalls come from.

Eric Kavanagh: Doesn't surprise me. Well, folks, this has been a great review of some of the blocking and tackling that is necessary to keep your enterprise happy, to keep your customers happy. I want to give a big thanks to everybody, to Tep Chantra from IDERA, stepping in here, doing some live demos – that's always interesting, it's always a bit risky to do a live demo, but I think that went pretty well. You know, it's basic stuff, but it's the kind of thing where if you don't do it, you're going to have all kinds of problems. So, this is the important stuff, and companies need to have people doing it.

So, Tep, thank you for your time. Folks, we do archive all these webcasts for later viewing, so usually you can come back within an hour or two and check out the archive. But once again, great stuff here; we're trying to help the enterprise stay on top of things, and we appreciate all your time and attention, folks out there. We'll catch up with you next time. You've been listening to Hot Technologies. Take care, folks. Bye bye.