Into the Future: An On-Ramp for In-Memory Computing

KEY TAKEAWAYS

Host Eric Kavanagh discusses in-memory computing and SAP HANA with guests Dr. Robin Bloor, Dez Blanchfield and IDERA's Bill Ellis.

Eric Kavanagh: Okay, ladies and gentlemen. Hello and welcome back once again. It is four o’clock Eastern Time on a Wednesday and, as it has for the last couple of years, that means it’s time, once again, for Hot Technologies. Yes, indeed, my name is Eric Kavanagh, and I’ll be your host for today’s conversation.

And folks, we’re going to talk about some cool stuff today. We’re going to dive into the world of in-memory, the exact title is “Into the Future: An On-Ramp for In-Memory Computing.” It’s all the rage these days, and with good reason, mostly because in-memory’s so much faster than relying on spinning disks. The challenge, though, is that you have to rewrite a lot of software. Because the software of today, most of it, has been written with disk in mind and that really changes the architecture of the application. If you design the application to wait for a spinning disk, you just do things differently than if you have all that power of in-memory technology.

There’s a slide about yours truly – hit me up on Twitter, @eric_kavanagh. I always try to follow back and also to retweet anytime someone mentions me.

Like I said, we’re talking about in-memory today, and specifically about SAP HANA. Yours truly spent the last year getting to know the SAP community really well, and it’s a fascinating environment, I have to say. Hats off to the folks who run that operation and are on the front lines, because SAP is an incredibly good operation. What they’re really very good at is doing business. They’re also great at technology, of course, and they’ve really put a heavy investment into HANA. In fact, I can remember – it was probably about six or seven years ago – that we were doing some work for the U.S. Air Force, and we got someone from SAP to come in and give us an early look at the world of HANA and what was planned. And to say the least, the folks at the SAP Labs had put a lot of time and effort into understanding how to build out this architecture, which is completely different, once again, from traditional environments, because you have everything in memory. So, they’re talking about doing both transactional and analytical processing on the same data in-memory, as opposed to the traditional way, which is pull it out, put it into a cube, for example, and analyze it there, versus transactional, which happens in a very different way.

This is an interesting space and we’re going to find out from another vendor actually, IDERA, a little bit about how all that stuff is going to work, and what the on-ramp is all about, frankly. So, we’ll be hearing from Dr. Robin Bloor, our very own chief analyst here at The Bloor Group; Dez Blanchfield, our data scientist and then good friend Bill Ellis from IDERA. So, with that, I’m going to hand off the keys to Dr. Robin Bloor, who will take it away.

Dr. Robin Bloor: Yeah, as Eric was saying, the time that we first got briefed on SAP HANA was back many years ago, now. But it was very interesting, that particular time was very interesting. We’d run into one or two companies that were, in one way or another, offering in-memory technology. It was quite clear that in-memory was going to come, but it really didn’t arrive until SAP stood up and suddenly launched HANA. I mean, it was a shock when I saw SAP do that – a shock because I expected it to come from elsewhere. I expected it would be, you know, Microsoft or Oracle or IBM or somebody like that. The idea that SAP was doing it was really very surprising to me. I suppose it shouldn’t have been, because SAP is one of the strategic vendors and pretty much, you know, everything big that happens in the industry comes from one of those.

Anyway, the whole point about in-memory, I mean, we realized, we used to talk about it, that as soon as you actually go in-memory – this isn’t about putting data in memory, this is about committing to the idea that the memory layer is the system record – as soon as you migrate the system record to memory, disk starts to become a handoff medium of one sort and it becomes a different thing. And I thought that was very exciting when that began to happen. So, really, it’s over for spinning disk. Spinning disk will soon exist only in museums. I’m not sure how soon that soon is, but basically, solid-state disk is now on the Moore’s law curve; it’s already ten times faster than spinning rust, as they now call it, and pretty soon it will be faster still, and then that means that the use cases for disk just get fewer and fewer.

And the curious fact is that traditional DBMSs – in actual fact, a lot of traditional software – were built for spinning disk; they assumed spinning disk. They had all sorts of physical-level capabilities that were painstakingly programmed in, in order to exploit spinning disk, making data retrieval as fast as possible. And all of that is being washed away. Just disappearing, you know? And then, there was obviously a very – I don’t know, lucrative, I suppose, it will be in the end – opening for an in-memory database that tried to occupy, in the in-memory space, the position that the big databases – Oracle, Microsoft SQL Server and IBM’s DB2 – occupied, and it was very interesting to watch that come marching forward and do that.

Let’s talk about the memory cascade; it’s just worth mentioning. The reason for mentioning this, the reason I threw this in, really, was just to let everybody know that when I’m talking about memory here, all of these layers I’m talking about are in fact memory. But you suddenly realize when you look at this, this is a hierarchical store, it’s not just memory. And therefore, pretty much everything we learned a long, long time ago about hierarchical store also applies. And it also means that any in-memory database has to navigate its way through this; some just walk through it on RAM itself, you know. And cache has just been getting larger and larger and larger, and it’s now measured in megabytes. But you’ve got L1 cache, which is a hundred times faster than memory, L2 cache 30 times faster than memory and L3 cache at about 10 times faster than memory. So, you know, there’s a lot of technology – well, a fair amount of technology – that has adopted the strategy of using those caches as, kind of, storage space on the way to having things executed, particularly database technology. So, you know, that’s one influence.

Then we’ve got the emergence of 3D XPoint and IBM’s PCM. And it’s almost RAM speeds, is basically what both of these vendors are boasting. The use cases are probably different. The early experimentation with this is yet to be completed. We don’t know how it’s going to impact the use of RAM and the technology of in-memory database for that matter. You’ve then got RAM versus SSD. Currently RAM is about 300 times faster but, of course, that multiple is diminishing. And SSD versus disk which is about 10 times faster, if I understand it. So, that’s the kind of situation you’ve got. It’s hierarchical store. Looking at it another way, in-memory, of course, is completely different. So, the top diagram shows two applications, both of them perhaps accessing a database, but certainly accessing data on spinning rust. And the way you actually make things flow through the network, depending on what dependencies are around, is you have ETL. So, this means that, you know, data goes onto spinning rust and then comes off spinning rust in order to go anywhere, and to get anywhere it goes back onto spinning rust, which is three movements. And bear in mind that memory can be a hundred thousand times faster than spinning disk, and you certainly realize that taking data and putting it in memory makes that whole thing really quite different.

So, you might have thought what would happen would be on what’s on the screen right here, you might have thought that, in one way or another, the ETL would in actual fact just go from data to data in memory. But in actual fact it might not do that; in actual fact you might have the situation on the right here where two applications can actually fire off the same memory. Certainly an in-memory database could give you that capability, as long as you’ve got the locking and everything else orchestrated around it. So, this doesn’t just alter the speeds of things, this alters how actually you configure applications and whole data flows.

So, it’s a huge kind of impact. So, in-memory is disruptive, right? And we should get that from what I said. In-memory processing is currently an accelerator, but it’s going to become the norm. It will be used and applied according to application value, and it’s therefore very, very interesting that SAP will actually come out with a version of their ERP software that’s in-memory. And latency improvements of up to three orders of magnitude are entirely possible, and actually even more than that is possible, depending on how you do it. So, you’re getting huge improvements in speed by going in-memory. And the upshot, SAP HANA’s S/4 – which they’ve released, I think, well, people say it’s still being released, but it was certainly released last year – it’s a game changer given the SAP customer base. I mean, there’s 10,000 companies out there using SAP’s ERP and pretty much all of them are large companies, you know. So, the idea of them all having an incentive to go in-memory with their fundamental applications – because ERP is nearly always the fundamental application that the business is running – it’s just a huge game changer and it’ll be very interesting. But of course, that all sounds very good, but it needs to be configured intelligently and it needs to be well monitored. It’s not as simple as it sounds.

Having said that, I think I’ll pass the ball on to, who’s this guy? Oh, Australian guy, Dez Blanchfield.

Dez Blanchfield: Very funny. Always a tough act to follow, Dr. Robin Bloor. Thanks for having me today. So, big topic, but an exciting one. So, I’ve chosen an image that I often conjure up in my mind when I’m thinking about the modern data lake and enterprise data warehouses, and my little gems of data. So here I’ve got this beautiful lake surrounded by mountains and waves coming out, and the waves are crashing over these rocks. This is, kind of, how I mentally visualize what it looks like inside a large data lake these days. The waves are the batch jobs and real-time analytics being thrown at the data, and the data is the rocks. And when I think about it as a physical lake it kind of brings back a wakeup call to me that, you know, with the scale of the data warehouses that we’re building now, the reason we came up with this coinage, the term “data lake,” is that they are very big and they are very deep, and occasionally you can have storms in them. And when we do, you’ve always got to resolve what’s creating the storm.

So in the theme of this thing, to me it seems that this siren call of in-memory computing is indeed very strong and for good reason. It brings about so many significant commercial and technical gains. That’s a discussion for a couple of hours on another day. But the general shift to in-memory computing, firstly I just want to cover how we got here and what makes this possible because it, sort of, sets the foundation of where some of the challenges can lie first and what we need to be cognizant of and thinking of, in our world of moving away from traditional old spinning disk holding data and being paged on and off disk and into memory and out of memory and into CPUs, to now we’re just removing almost one of those whole layers, being the spinning disk. Because remember, in the very early days of computing, architecturally, we didn’t move for a long time from the mainframe or the midrange world of what we originally thought of as core memory and drum storage, you know.

As Dr. Robin Bloor said, the approach we took to moving data around computer architecture didn’t really change dramatically for some time, for a couple of decades, in fact. If you think about the fact that, you know, modern computing, technically, has been around, if you’ll pardon the pun, for some 60-odd years, you know, six decades and more, at least in the sense that you can buy a box off the shelf, as it were. The shift to new architecture really came about in my mind when we shifted out of the thinking around mainframes and midrange, and core memory and drum storage architectures, to the brave new world of supercomputing, particularly the likes of Seymour Cray, where things like crossbar backplanes became a thing, instead of just having one route to move data across the backplane or the motherboard, as it’s called these days. And inline memory, you know, these days people don’t really think about what it actually means when they say DIMM and SIMM. But SIMM is a single in-line memory module and DIMM is a dual in-line memory module, and we’ve got more complex than that since, and there are dozens of different memory types for different things: some for video, some for just general applications, some built into CPUs.

So, there was this big shift to a new way that data was stored and accessed. We’re about to go through that same shift in another whole generation, but not so much in the hardware itself but in the adoption of the hardware in the business logic and in the data logic layer, and it’s another big paradigm shift in my mind.

But just briefly on how we got here. I mean, hardware technology improved, and improved dramatically. We went from having CPUs and the idea of a core was a fairly modern concept. We take it for granted now that our phones have two or four cores and our computers have two or four, or even eight, cores in the desktop and eight and 12 and more on, you know, the 16 and 32 even in the server platform. But it’s actually a fairly modern thing that cores became a capability inside CPUs and that we went from 32-bit to 64-bit. A couple of big things happened there: we got higher clock speeds on multiple cores so we could do things in parallel and each of those cores could run multiple threads. All of the sudden we could run lots of things on the same data at the same time. Sixty-four-bit address spacing gave us up to two terabytes of RAM, which is a phenomenal concept, but it’s a thing now. These multipath backplane architectures, you know, motherboards, once upon a time, you could only do things in one direction: backwards and forwards. And as with the days with the Cray computing and some of the supercomputer designs of that time, and now in desktop computers and common off-the-shelf, sort of, desktop-grade rack-mount PCs, because really, most of the modern PCs now went through this era of mainframe, midrange, micro desktops and we’ve turned them back into servers.

And a lot of that supercomputer capability, that supercomputer-grade design, got pushed into common off-the-shelf components. You know, these days, the idea of taking very cheap rack-mount PCs and putting them into racks by the hundreds, if not thousands, and running open-source software on them like Linux and deploying the likes of SAP HANA on it, you know, we often take that for granted. But that’s a very new exciting thing and it comes with its complexities.

Software also got better, particularly memory management and data partitioning. I won’t go into a lot of details on that, but if you look at the big shift in the last 15 or so years, or even less, how memory is managed, particularly data in RAM and how data gets partitioned in RAM, so that as Dr. Robin Bloor indicated earlier or alluded to, you know, things can read and write at the same time without impacting each other, rather than having wait times. A lot of very powerful features like compression and encryption on-chip. Encryption’s becoming a more important thing and we don’t have to necessarily do that in software, in RAM, in CPU space, now that actually happens on the chip natively. That speeds things up dramatically. And distributed data storage and processing, again, things that we once assumed were the stuff of supercomputers and parallel processing, we now take that for granted in the space of the likes of SAP HANA and Hadoop and Spark, and so forth.

So, the whole point of that is that these high-performance computing, HPC, capabilities came to the enterprise, and now the enterprise is enjoying the benefits that come with that – performance gains, technical benefits and commercial gains – because, you know, the time to value has dropped dramatically.

But I use this image of a story I read some time ago of a gentleman who built a PC case out of Lego, because it always comes to mind when I think about some of these things. And that is that, it seems like a great idea at the time when you start building it, and then you get halfway through it and you realize that it’s actually really tricky to put all the Lego bits together and make a solid thing, solid enough to put a motherboard and so forth in, that’ll build a case for a personal computer. And eventually you realize that all the little bits aren’t sticking together right and you’ve got to be a little bit careful about which little bits you stick together to make it solid. And it’s a very cute idea, but it’s a wakeup call when you get halfway through and you realize, “Hmm, maybe I just should have bought a $300 PC case, but I’ll finish it now and learn something from it.”

To me that’s a great analogy for what it’s like to build these very complex platforms, because it’s all well and good to build it and end up with an environment where you’ve got routers and switches and servers and racks. And you’ve got CPUs and RAM and operating system clustered together. And you put something like HANA on top of it for the distributed in-memory processing and data storage and data management. You build the SAP stack on top of that, you get the database capabilities and then you load in your data and your business logic and you start applying some reads and writes and queries and so forth to it. You’ve got to keep on top of I/O and you’ve got to schedule things and manage workloads and multitenancy and so forth. This stack becomes very complex, very quickly. That’s a complex stack in itself if it’s just on one machine. Multiply that by 16 or 32 machines, it gets very, very non-trivial. When you multiply up to hundreds and eventually thousands of machines, to go from 100 terabytes to petabyte scale, it’s a frightening concept, and these are the realities we’re dealing with now.

So, you then end up with a couple of things that have also helped change this world, and that is that disk space became ridiculously cheap. You know, once upon a time you’d spend 380 to 400 thousand dollars on a gigabyte of hard disk when it was a massive drum the size of a— something that needed a forklift to pick it up. These days it’s down to, sort of, one or two cents per gigabyte of commodity disk space. And RAM did the same thing. These two J-curves in both these graphs, by the way, are a decade each, so in other words, we’re looking at two blocks of 10 years, 20 years of price reduction. But I broke them into two J-curves because eventually the one on the right just became a dotted line and you couldn’t see the detail of it, so I re-scaled it. A gigabyte of RAM 20 years ago was something in the order of six and a half million dollars. These days if you pay more than three or four dollars for a gigabyte of RAM for commodity hardware you’re being robbed.

This significant tumbling of prices over the last two decades has meant that now we can move beyond disk and straight into RAM, not just at the megabyte level but now at the terabyte level, and treat RAM like it’s disk. The challenge with that, though, was that RAM was natively ephemeral – that means something that lasts for a short period of time – so we’ve had to come up with ways to provide resilience in that space.

And so, my point here is that in-memory computing is not for the faint hearted. Juggling this very large scale in-memory data and the processing around it is an interesting challenge; as I indicated earlier, it’s not for the faint hearted. So, one thing we’ve learned from this experience with large-scale and high density in-memory computing is that the complexity that we build begets risk in a number of areas.

But let’s just look at it from a monitoring and response point of view. When we think of the data, it starts out in disk space, it sits in databases on disks, and we push it up into memory. Once it’s in memory and distributed and there are copies of it, we can use lots of copies of it, and if any changes get made, they can be reflected at the memory level instead of having to go on and off disk and across the backplane at two different levels; it just goes in and out of memory. We’ve ended up with this hyperscale hardware platform that allows us to do this now. When we talk about hyperscaling, it’s hardware at ridiculously dense levels: very high-density memory, very high-density counts of CPUs and cores and threads. We’ve now got very highly complex network topologies to support this, because the data does have to move across the network at some point if it’s going to go between the nodes and the clusters.

So, we end up with device fault redundancy becoming an issue and we’ve got to monitor devices and pieces of it. We’ve got to have resilient data fault redundancy built into that platform and monitor it. We’ve got to have the distributed database resilience built in so we’ve got to monitor the database platform and stack inside that. We have to monitor the distributed processing scheduling, what’s happening inside some of the processes all the way down to polling and query and the path that query takes and the way the query’s being structured and executed. What does it look like, has someone done a SELECT * on “blah” or have they actually done a very smart and well-structured query that’s going to get them the nominal, minimum amount of data coming across the architecture in the backplane? We’ve got multitenancy workloads, multiple users and multiple groups running the same or multiple workloads and batch jobs and real-time scheduling. And we’ve got this blend of batch and real-time processing. Some things just run regularly – hourly, daily, weekly or monthly – other things are on demand. Someone might be sitting there with a tablet wanting to do a real-time report.

And again, we come to that whole point, that the complexity that comes about in these is not just a challenge now, it’s quite frightening. And we have this reality check that a single performance issue, just one performance issue in its own right, can impact the entire ecosystem. And so, we end up with this very fun challenge of finding out, well, where are the impacts? And we have this challenge of, are we being reactive or proactive? Are we watching the thing in real time and seeing something go “bang” and responding to it? Or have we seen some form of trend and realized that we need to proactively get on board with it? Because the key is everyone wants something fast and cheap and easy. But we end up with these scenarios – what I like to refer to with my favorite line, the Donald Rumsfeld conundrum, which in my mind applies in all of these scenarios of high complexity – and that is that we have known knowns, because that’s something we designed and built and it runs as planned. We’ve got known unknowns, in that we don’t know who’s running what, when and where, if it’s on demand. And we’ve got unknown unknowns, and those are the things that we need to be monitoring and checking for. Because the reality is, we all know, you can’t manage something you can’t measure.

So, we need the right tools and the right capability to monitor our CPU scheduling, look for wait times, and find out why things are having to wait in schedule queues in pipelines. What’s happening in memory, what sort of utilization’s being performed, what sort of performance are we getting out of memory? Is stuff being partitioned correctly, is it being distributed, do we have enough nodes holding copies of it to cope with the workloads that are being thrown at it? What’s happening with process execution away from the operating system processes? The jobs themselves running, the individual apps and the daemons supporting them? What’s happening inside those processes, particularly the structuring of queries, and how are those queries being executed and compiled? And the health of those processes all the way up the stack? You know, again, back to wait times: is it scheduling correctly, is it having to wait, where is it waiting, is it waiting for memory reads, I/Os, the CPU, I/O across the network to the end user?
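
As a rough illustration of the kind of host- and service-level numbers being described here, SAP HANA itself exposes monitoring views in its SYS schema that can be queried over plain SQL. The sketch below is not how any particular monitoring product works – it’s a minimal example that assumes the commonly documented views M_HOST_RESOURCE_UTILIZATION and M_SERVICE_MEMORY; column names can vary by HANA revision, so check them against your system before relying on this.

```sql
-- Minimal sketch: host-level memory figures from HANA's SYS monitoring views.
-- View and column names are assumptions based on commonly documented views;
-- verify them against your HANA revision.
SELECT HOST,
       ROUND(USED_PHYSICAL_MEMORY / 1024 / 1024 / 1024, 1) AS USED_PHYSICAL_GB,
       ROUND(FREE_PHYSICAL_MEMORY / 1024 / 1024 / 1024, 1) AS FREE_PHYSICAL_GB
  FROM SYS.M_HOST_RESOURCE_UTILIZATION;

-- Memory used per service (indexserver, nameserver and so on), largest first.
SELECT HOST,
       SERVICE_NAME,
       ROUND(TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 1) AS USED_GB
  FROM SYS.M_SERVICE_MEMORY
 ORDER BY TOTAL_MEMORY_USED_SIZE DESC;
```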

And then back to that point I’ve just mentioned just quickly before I wrap up and that is that, how are we approaching issue resolution and response times to those? Are we watching in real time and reacting to things, which is the least ideal scenario, but even then, it’s better we do that than not know and have the help desk call and say something went wrong and we’ve got to track it down? Or are we doing it proactively and are we looking at what’s coming down the line? So, in other words, are we seeing we’re running short of memory and need to add more nodes? Are we doing trend analysis, are we doing capacity planning? And in all of that, are we monitoring historical execution times and thinking about capacity planning or are we watching it in real time and proactively rescheduling and doing load balancing? And are we aware of the workloads that are running in the first place? Do we know who’s doing what in our cluster and why?

In-memory compute is very powerful, but with that power it’s almost like a loaded gun and you’re playing with live ammo. You can eventually shoot yourself in the foot if you’re not careful. So, that power of in-memory compute just means that we can run lots more, and quickly, across very distributed and discrete data sets. But that then drives higher demand from end users. They get used to that power and they want it. They’re no longer expecting that jobs take weeks to run and reports turn up on plain old paper. And then, underneath all of that, we have the day-to-day maintenance centered around patching, updates and upgrades. And if you think about 24/7 processing with in-memory compute – managing that data, managing the workloads across it, all of it in-memory on a technically ephemeral platform – if we’re going to start applying patches and updates and upgrades in there, that comes with a whole range of other management and monitoring challenges as well. We need to know what we can take offline, when we can upgrade it and when we bring it back online. And that brings me to my final point, and that is that as we get more and more complexity in these systems, it isn’t something that a human can do just by sucking their thumb and pulling their ear anymore. There are no, sort of, gut-feeling approaches anymore. We really do need the appropriate tools to manage and deliver this high level of performance in compute and data management.

And with that in mind I’m going to hand over to our friend from IDERA and hear how they’ve approached this challenge.

Bill Ellis: Thank you very much. I am sharing out my screen and here we go. So, it’s really humbling to just consider all the technology, and all the people who came before us, that made this stuff available in 2017. We’re going to be talking about Workload Analysis for SAP HANA – basically, a database monitoring solution: comprehensive, agentless, it provides real-time views and it builds out a history, so you can see what has happened in the past. SAP S/4 HANA offers the potential of better, faster and cheaper. I’m not saying it’s inexpensive, I’m just saying it’s less expensive. Kind of, traditionally what happened was that you would have a main production instance – probably running on Oracle in a larger shop, potentially SQL Server – and then you would use that ETL process and you would have multiple, kind of, versions of the truth. And this is very expensive because you were paying for hardware, an operating system and an Oracle license for each of these individual environments. And then on top of that you would need to have people to reconcile one version of the truth to the next version of the truth. And so, this multiple-version ETL processing was just slow and very, very cumbersome.

And so, HANA, basically one HANA instance, can potentially replace all of those other instances. So, it’s less expensive because it’s one hardware platform, one operating system, instead of multiples. And so the S/4 HANA, really, it does change everything and you basically are looking at the evolution of SAP from R/2 to R/3, the various enhancement packs. Now, the legacy system is available until 2025, so you have eight years until you’re really forced to migrate. Although we see people, you know, dabbling their toes into this because they know it’s coming and eventually, you know, ECC will be running on HANA and so you’d really need to be prepared for that and understand the technology.

So, one database, no ETL processes, no copies that must be reconciled. So, once again, faster, better and cheaper. HANA is in-memory. SAP supplies the software, you supply the hardware. There are no aggregate tables. One of the things that they, kind of, suggest when you’re thinking about this is that you don’t want to get into the mindset of “we’re just going to buy the very largest server that’s available.” They suggest that you, kind of, right-size your SAP landscape ahead of time, and they basically say, do not migrate 20 years’ worth of data. I think archiving is something that’s underutilized in IT, kind of, across the board, not just in SAP shops. And so the next thing is that SAP has actually spent a lot of time rewriting their native code to not use SELECT *. SELECT * returns all of the columns from the table and it’s particularly expensive in a columnar database. And so, it’s not a good idea for SAP HANA. So, for shops that have a lot of customization, a lot of reports, this is something you’re going to want to look for, and you’re going to want to specify column names as you progress to migrating everything to HANA.
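
To make the SELECT * point concrete, here is a minimal sketch using a hypothetical wide table (the table and column names are invented for illustration, not taken from any SAP schema). In a column store, every column you project has to be materialized, so naming only the columns a report actually needs keeps the engine from touching the rest.

```sql
-- Hypothetical wide sales table; table and column names are illustrative only.
-- SELECT * forces the column store to materialize every column,
-- even though the report only needs a few of them:
SELECT *
  FROM SALES_DOCUMENTS
 WHERE DOC_DATE >= '2017-01-01';

-- Naming the required columns lets the engine scan only those
-- column containers, which is far cheaper in a columnar database:
SELECT DOC_ID, CUSTOMER_ID, NET_AMOUNT
  FROM SALES_DOCUMENTS
 WHERE DOC_DATE >= '2017-01-01';
```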

We like to say that HANA is not a panacea. Like all databases, all technologies, it needs to be monitored, and as mentioned earlier, you need numbers in order to manage it, measurement by measurement. And one of the things that I talk about in the IDERA area is that every business transaction interacts with the system of record, and in this case, it’s going to be HANA. And so, HANA becomes the foundation for the performance of your SAP transactions, the end user experience. And so, it’s vital that it be kept running at top speed. It does become a single point of failure, and in talking to folks, this is something that can crop up where you have an end user who maybe is using that real-time data and they have an ad hoc query that potentially isn’t quite right. Maybe they’re not joining tables properly and they’ve created an outer join, a Cartesian product, and they’re basically consuming a lot of resources. Now, HANA will recognize that eventually and kill that session. And so there’s a crucial part of our architecture that’s going to allow you to actually capture that in the history, so you can see what had happened in the past and recognize those situations.
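
As an illustration of the kind of runaway ad hoc query being described (the table names here are hypothetical), a join that is missing its predicate silently becomes a Cartesian product and can chew through enormous amounts of memory and CPU before the session gets killed:

```sql
-- Hypothetical tables. The join predicate is missing, so every order row
-- is paired with every customer row – a Cartesian product that can consume
-- huge amounts of memory and CPU:
SELECT o.ORDER_ID, c.CUSTOMER_NAME
  FROM ORDERS o, CUSTOMERS c
 WHERE o.ORDER_DATE >= '2017-01-01';

-- The intended query joins on the key, so each order is matched
-- only with its own customer:
SELECT o.ORDER_ID, c.CUSTOMER_NAME
  FROM ORDERS o
  JOIN CUSTOMERS c
    ON c.CUSTOMER_ID = o.CUSTOMER_ID
 WHERE o.ORDER_DATE >= '2017-01-01';
```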

So, let’s take a look at Workload Analysis for SAP HANA. This is Version 1, so we are very much inviting you to join us in the journey, and this is a product from IDERA. It’s comprehensive, yet simple. Real-time with trending. Host health, instance health. We track the wait states, the SQL queries, memory consumers and services. So, this is what the GUI looks like and you can see right off the bat that it’s web enabled. I actually opened up this solution running live on my system. There are some crucial things you want to take a look at. We’ve, kind of, sub-divided it into different workspaces. Kind of the most crucial one is what’s happening at the host level from a CPU utilization and memory utilization [inaudible]. You definitely don’t want to get into a swapping or thrashing situation. And then you basically work your way down into what’s happening in trending, from response time, users, SQL statements – that is, what’s driving the activity on the system.

One of the things with IDERA is that, you know, nothing happens on a database until there’s activity. And that activity is SQL statements that come from the application. So, measuring the SQL statements is absolutely vital to being able to detect root cause. So, let’s go ahead and drill in. So, at the host level, we can actually take a look at memory, tracked over time, and host CPU utilization. Step back, you can look at the COBSQL statements. Now, one of the things that you’re going to see on our architecture side is that this information is stored off of HANA, so if something were to happen to HANA, we’re basically capturing information up to, God forbid, an unavailability situation. We also can capture everything that happens on the system so that you have clear visibility. And one of the things that we’re going to do is we’re going to present the SQL statements in weighted order. So, that’s going to take into account the number of executions, and so this is the aggregated resource consumption.
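
For a sense of what “weighted order” means in practice, here is a minimal sketch against HANA’s own expensive statements trace rather than IDERA’s repository. It assumes the SYS.M_EXPENSIVE_STATEMENTS monitoring view with the trace enabled; that view only holds recent, retained entries – which is exactly the gap a separate, off-HANA history is meant to close – and column names can differ between revisions.

```sql
-- Minimal sketch: statements ranked by aggregate elapsed time (count of
-- executions weighted by duration). Assumes the expensive statements trace
-- is enabled and that SYS.M_EXPENSIVE_STATEMENTS exposes STATEMENT_HASH and
-- DURATION_MICROSEC; check the column names against your HANA revision.
SELECT STATEMENT_HASH,
       COUNT(*)               AS EXECUTIONS,
       SUM(DURATION_MICROSEC) AS TOTAL_DURATION_MICROSEC,
       AVG(DURATION_MICROSEC) AS AVG_DURATION_MICROSEC
  FROM SYS.M_EXPENSIVE_STATEMENTS
 GROUP BY STATEMENT_HASH
 ORDER BY TOTAL_DURATION_MICROSEC DESC
 LIMIT 20;
```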

And so you can get into individual metrics here – when did that SQL statement execute? And then the resource consumption is largely driven by the execution plan, and so we’re able to capture that on an ongoing basis. HANA is in-memory. It’s highly parallel. It does have primary indexes on every table, though some shops choose to build a secondary index to address certain performance issues. And so, kind of, knowing what happened with the execution plan for certain SQL statements can be very valuable. We’ll also look at the services, and memory consumption once again, charted over time. The architecture: so, this is a self-contained solution that you can download from our website, and it’s web-enabled.

You can have multiple users connect to a particular instance. You can monitor local instances of SAP HANA. And we keep a rolling four-week history in our repository, and that’s self-managed. To deploy this, it’s rather simple. You need a Windows Server. You need to download it. Most Windows Servers will have a built-in .NET framework, and it comes bundled with a license. And so you would go to the installation wizard, which is driven by Setup.exe, and it would actually open a screen with the license agreement, and you would simply work down this outline by clicking “Next.” And so, where would you like the product to be installed? Next is database properties, and this is going to be your connection to SAP HANA, so this is agentless monitoring of the HANA instance. And then we’ll basically give a preview – this is the port that we communicate on by default. Click “Install” and it basically starts the product up and you begin building the history. So, just a little bit of the sizing chart information. We can monitor up to 45 HANA instances, and you’ll want to use this, kind of, on a sliding scale to determine the number of cores, memory and disk space that you’ll need. And this assumes that you have a complete four-week rolling history going in.

So, just as a quick recap, we’re looking at server health, instance health, CPU/memory utilization. What are the memory consumers, what are the activity drivers, what are the services? SQL statements are vital – what are the execution states? Show me the execution plans, when did things execute, provide trending? This is going to give you real-time and a history of what had happened. And as I mentioned, because our history is separate from HANA, we’re going to capture stuff that had timed out and had been flushed from HANA’s history. So that you can see the true resource consumption on your system because of the separate history.

So, as I had mentioned, on IDERA’s website, under Products, you can easily find this. If you want to try this out, you’re certainly welcome to. See how it provides information for you, and there’s additional information on the website. So, any interested parties are more than welcome to go into that. Now, in the portfolio of products offered by IDERA, there’s also an SAP ECC transaction monitor, and this is called Precise for SAP. And what it does is – whether you’re using portal or just straight-up ECC – it will actually capture the end user transaction from click to disk, all the way down to the SQL statement, and show you what’s happening.

Now, I’m showing you just one summary screen. There are a couple of takeaways that I want you to have from this summary screen. The Y-axis is response time, the X-axis is time plus the day, and in this transaction view we’ll show you client time, queuing time, ABAP code time and database time. We can capture end user IDs and T-codes, and you can actually filter and show the servers that a particular transaction traversed. And so, many shops run the front end of the landscape under VMware, so you can actually measure what’s happening on each of the servers and get into very detailed analysis. So, this transaction view is for the end user transaction through the entire SAP landscape. And you can find that on our website under Products, APM Tools, and this would be the SAP solution that we have. The installation for this is a little bit more complicated, so it’s not just download and try it, like we have for HANA. This is something where we would work together to design and implement the overall transaction monitoring for you.

So, just a third quick recap, workload analysis for SAP HANA, it’s comprehensive, agentless, real-time, offers a history. We offer the ability to download and try it for your site.

So, with that, I’m going to pass the time back to Eric, Dez and Dr. Bloor.

Eric Kavanagh: Yeah, maybe Robin, any questions from you, and then Dez after Robin?

Dr. Robin Bloor: Okay. I mean, the first thing I’d like to say is I really like the transaction view because it’s exactly what I would want in that situation. I did a lot of work – well, it’s a long time ago right now – doing performance monitoring, and that was the kind of thing; we didn’t have the graphics in those days, but that was the kind of thing I particularly wanted to be able to do. So that you can, in one way or another, inject yourself into wherever the problem is happening.

The first question I have is, you know, most people are implementing S/4 in some way or other out of the box, you know. When you get involved in any given implementation of S/4, did you discover that it’s been implemented well or do you end up, you know, discovering things that might make the customer want to reconfigure? I mean, how does all of that go?

Bill Ellis: Well, every shop is a little bit different. And there are different usage patterns, there are different reports. For sites that have ad hoc reporting, I mean, that’s actually, kind of like, a wildcard on the system. And so, one of the crucial things is to begin measurement and find out what the baseline is, what’s normal for a particular site, and where that particular site, based on its usage patterns, is stressing the system. And then make adjustments from there. Typically the monitoring and optimization is not a one-time thing; it’s really an ongoing practice where you’re monitoring, tuning and honing, making the system better for the end user community to be able to serve the business more effectively.

Dr. Robin Bloor: Okay, so when you implement – I mean, I know this is a difficult question to answer because it’s going to vary depending on size of implementation – but how much resource does the IDERA monitoring capability, how much does it consume? Does it make any difference to anything or is it, just doesn’t kind of interfere? How does that work?

Bill Ellis: Yeah, I’d say that the overhead is approximately 1–3 percent. Many shops are very much willing to sacrifice that, because potentially you’ll be able to buy that back in terms of optimization. It does depend upon usage patterns. If you’re doing a full landscape, it does depend upon the individual technologies that are being monitored. So, kind of, mileage does vary, but like we talked about, it’s definitely better to spend a little bit to know what’s going on than to just run blind. Particularly, you know, here we are in January, and you get into year-end processing and you’re aggregating 12 months’ worth of data. You know, that’s when performance – getting reports out to regulatory organizations, the banks, to shareholders – is absolutely vital and business critical.

Dr. Robin Bloor: Right. And just a quick, from your perspective – because I guess you’re out there involved with a whole series of SAP sites – how big is the movement amongst the SAP customer base towards S/4? I mean, is that something that is being, you know, that there’s a kind of avalanche of enthusiastic customers going for it, or is it just a steady trickle? How do you see that?

Bill Ellis: I think a couple years ago, I would say it was a toe. Now I’d say that people are, kind of, up to their knee. I think that, you know, given the timeline people are going to be really immersed in HANA over the next couple of years. And so the monitoring, the transformation, you know, I think that the majority of customers are, kind of, on the learning curve together. And so I think we’re not quite at the avalanche as you had stated, but I think we’re on the cusp of the major transformation over to HANA.

Dr. Robin Bloor: Okay, so in terms of the sites that you’ve seen that have gone for this, are they also adapting HANA for other applications or are they, in one way or another, kind of, completely consumed at making this stuff work? What’s the picture there?

Bill Ellis: Yeah, oftentimes people will integrate SAP with other systems, depending upon what modules and so forth, so there’s a little bit. I don’t really see people deploying other applications on HANA just yet. That’s certainly possible to do. And so it’s more around the landscape around the SAP infrastructure.

Dr. Robin Bloor: I suppose I’d better hand you on to Dez. I’ve been hogging your time. Dez?

Dez Blanchfield: Thank you. No, that’s all good. Two very quick ones, just to try to set the theme. SAP HANA has been out for a couple of years now and people have had a chance to consider it. If you were to give us a rough estimate of the percentage of folk that are running it – because there are a lot of people running this stuff – what do you think is the percentage of the market that you’re aware of that has currently gone from just traditional SAP implementations to SAP on HANA? Are we looking at 50/50, 30/70? What sort of percentage of the market are you seeing of people who have transitioned and made the move now, versus folk who are just holding back and waiting for things to improve or get better or change or whatever the case may be?

Bill Ellis: Yeah, I’d actually put, from my perspective, I’d put the percentage around 20 percent. SAP tends to be traditional businesses. People tend to be very conservative and so their people will drag their feet. I think it also depends upon, you know, have you been running SAP for a long time, or are you, kind of an SMB that maybe had more recently deployed SAP? And so, there’s kind of a number of factors, but overall I don’t think the percentage is 50/50. I would say 50 percent are at least dabbling and have HANA running somewhere in their data center.

Dez Blanchfield: The interesting takeaway that you gave us earlier on was that this is a fait accompli in a sense and that the clock is physically and literally ticking on the time to transition. In the process of doing that, do you think people have considered that? What’s the general sense of folk understanding that this is a transitional shift in platform, it isn’t just an option, it’s becoming the default?

And from SAP’s point of view, I’m sure they’re pushing that way because there’s a significant competitive advantage in performance, but it’s also, I guess, that they’re wrestling back control of the platform; instead of it going to a third-party database, they’re now bringing it back to their own platform. Do you think companies have actually gotten that message? Do you think people understand that and are now gearing up for it? Or is it still, sort of, an unclear thing, do you think, out in the market?

Bill Ellis: I do not think SAP is shy about communicating, and people who’ve gone to SAPPHIRE have seen HANA everywhere. So, I think people are well aware, but human nature being what it is, you know, some people are, kind of, dragging their feet a little bit.

Dez Blanchfield: Because I think the reason I was asking that question, and you’ll have to forgive me, but it’s that I agree. I think they haven’t been shy about communicating it. I think that the signal’s gone out in many ways. And I agree with you – I don’t know that everyone’s jumped yet. You know, traditional enterprise, very large enterprises that are running this, are still in many ways, not quite dragging their feet, but just trying to grapple with the complexity of the shift. Because I think the one thing that your tool, and certainly your demonstration today has highlighted, and for me, one key takeaway I’d like everyone listening and tuned in today to sit up and pay attention to reflectively is, you’ve got a tool now that’s simplified that process in my mind. I think there’s a bunch of very nervous CIOs and their teams under them who are thinking, “How do I make the transition from traditional RDBMS, relational database management systems, that we’ve known for decades, to a whole new paradigm of compute and storage management in a space that is still relatively brave?” in my mind. But it’s an unknown in many ways, and there are very few people who have made that shift in other areas; it’s not like they’ve got another section of the business that’s already made a move to in-memory compute. So, it’s an all-or-nothing move in their mind.

So, one of the things I’ve taken away from this more than anything – I’m going to hit you with a question in a minute – is that fear now, I think, is allayed in many ways and that prior to today, if I was a CIO listening, I would, sort of, think, “Well, how am I going to make this transition? How am I going to guarantee the same capability that we’ve got in the relational database management platform and years of experience of DBAs, to a new platform that we don’t currently have the skills in?” So, my question with that is, do you think people have understood that the tools are there now with what you’re offering, and that they can, kind of, take a deep breath and sigh of relief that the transition isn’t as scary as it might have been prior to this tool being available? Do you think people have understood that or is it still, kind of, a thing that they’re just grappling with the transition to in-memory compute and in-memory storage versus old-school combinations of NVMe, flash and disk?

Bill Ellis: Yeah, so there’s undoubtedly a lot of technology and tools that can graphically display this, what’s happening and make it very easy to pinpoint top resource consumers. I mean, it does help to simplify things and it does help the technology staff really get a good handle. Hey, they’re going to be able to know what’s going on and be able to understand all of the complexity. So, absolutely, the tools in the marketplace are definitely helpful and so we offer workload analysis for SAP HANA.

Dez Blanchfield: Yeah, I think the great thing about what you’ve shown us today is that, in monitoring the hardware piece, the operating system piece, even monitoring some of the workload moving through, as you said, I mean, the tools have been there for some time. The bit for me, particularly inside the likes of HANA is that we haven’t necessarily had the ability to get a magnifying glass and peek into it and see right down to what your tool does with what’s happening with the queries and how they’re being structured and where that load is.

With the deployments you’ve seen so far, given that you are quite literally the most authoritative in this space in your platform in the world, some of the quick wins that you’ve seen – have you got any anecdotal knowledge you can share with us around some of the eureka moments, the aha moments, where people have deployed the IDERA toolset and found things that they just weren’t aware were in their platforms and the performance issues they’ve had? Have you got any great anecdotal examples of where people have just deployed it, not really knowing what they had, and all of a sudden gone, “Wow, we actually didn’t know that was in there”?

Bill Ellis: Yeah, so a big limitation of the native tools is that if a runaway query is canceled out, it flushes the information and so you basically don’t have the history. Because we store the history offline, for something like a runaway query you’ll have a history, you’ll know what had happened, you’ll be able to see the execution plan and so forth. And so, that allows you to, kind of, help the end user community basically operate better, write reports better, etcetera. And so, the history is something that’s really nice to have. And one of the things that I had meant to show is that you can look at up to four weeks in real time, and then you can easily zoom in on any timeframe of interest and expose the underlying driving activity. Just having that visibility is something that’s very helpful in knowing what bottleneck has arisen.

Dez Blanchfield: You mentioned it’s multi-user, once it’s deployed, and I was quite impressed by the fact that it’s agentless and effectively zero touch in many ways. Is it normal for a single deployment of your tool to then be available to everyone, from the network operations center – the NOC watching the core infrastructure underpinning the cluster – all the way up to the application and development team? Is it the norm that you deploy once and they would share that, or do you anticipate people might have multiple instances looking at different parts of the stack? What does that look like?

Bill Ellis: So, the basis team will typically have a very strong interest in the technology underpinnings of what’s happening in SAP. Obviously there’s multiple teams that will support entire landscapes. The HANA piece is just focused on that. I’m just going to default to the SAP basis team as the primary consumers of the information.

Dez Blanchfield: Right. It strikes me, though, that if I’ve got a development team or not even just at code level, but if I’ve got a team of data scientists or analysts doing analytical work on the data sets in there, particularly given that there’s a significant push to data science being applied to everything inside organizations now, in my mind – and correct me if I’m wrong – it seems to me that this is going to be of great interest to them as well, because in many ways one of the serious things you can do in a [inaudible] data warehouse environment is unleash a data scientist upon it and allow them to just start doing ad hoc queries. Have you had any examples of that kind of thing happening where shops have rung you and said, “We’ve thrown a data science team at the thing, it’s really hurting, what can we do for them versus what we’re doing in just traditional operational monitoring and managing?” Is that even a thing?

Bill Ellis: Well, yeah, I’d kind of turn this around a little bit, and my reply would be that, looking at performance, being performance-aware in development, QA and production – you know, the sooner you start, the fewer problems and fewer surprises you have. So, absolutely.

Dez Blanchfield: Following on from that, a lot of the tools that I’ve had experience with – and I’m sure Robin will agree – a lot of the tools here, if you’ve got a large RDBMS you need really high-skilled, deeply knowledgeable, experienced DBAs. The same goes for some of the infrastructure and platform requirements that come with SAP HANA, because it’s currently supported on particular distributions aligned with particular hardware and so forth, to the best of my knowledge. You know, there are people with decades of experience who are not the same. What I’m seeing, though, is that that isn’t necessarily the requirement with this tool. It seems to me that you can deploy your tool and give it to some fairly new faces and give them the power straight away to find things that aren’t performing well. Is it the case that there’s a pretty short learning curve to get up to speed with this and get some value out of deploying it? You know, my general sense is that you don’t have to have 20 years of experience of driving a tool to see the value immediately. Would you agree that’s the case?

Bill Ellis: Oh absolutely, and to your point, I think that a lot of the success of a deployment really depends upon the planning and architecting of the SAP HANA environment. And then there’s undoubtedly a lot of complexity, a lot of technology it’s built on, but then it just comes down to monitoring the usage patterns of what’s happening. So, although it’s more complex, in a way it’s packaged and somewhat simplified. That’s a very poor [inaudible].

Dez Blanchfield: Yeah, so before I hand back to Eric, because I know he’s got a couple of questions, particularly from some that’s come through Q&A which looked interesting, and I’m keen to hear the answer on. Traditional journey for someone to— you mentioned earlier that you can get it, you can download it and try it. Can you just recap on that quickly for folk listening either today or folk who might replay it later? What are the quick two or three steps to get their hands on a copy and deploy it and try it in their environments before they buy it? What does that look like? What are the steps for that?

Bill Ellis: Yeah. So, IDERA.com and just go to Products and you’ll see Workload Analysis for SAP HANA. There is a download page. I think they’ll ask you for some contact information and the product is just packaged with a license key so you can install it with the Setup.exe and just get rolling, I think, very quickly.

Dez Blanchfield: So, they can go to your website, they can download it. I remember looking at it some time ago and I double checked last night as well, you can request a demo, from memory, where someone on your team will, sort of, walk you through it? But you can actually download it for free and deploy it locally in your own environment, in your own time, can’t you?

Bill Ellis: Yes.

Dez Blanchfield: Excellent. Well I think, more than anything, that’s probably the thing that I would personally advise folk to do, is grab a copy off the website, grab some of the documentation there because I know there’s a lot of good content there to do that with, and just try it. Put it in your environment and see what you find. I suspect that once you have a look under the hood with your SAP HANA environments with the IDERA tool you’re going to find things that you actually weren’t aware were in there.

Look, thank you so much for that and thanks for the time just for the Q&A with Robin and I. Eric, I’m going to pass back to you because I know that some Q&A’s come through from our attendees as well.

Eric Kavanagh: Yeah, just a real quick one here. So, one of the attendees makes a really good comment here, just talking about how things are changing. Saying that in the past, memory was choking, slowed down by frequent paging; currently the CPU is choking with too much in-memory data. You know, there are network problems. It’s always going to be a moving target, right? What do you see as the trajectory these days in terms of where the bottlenecks are going to be and where you’re going to need to focus your attention?

Bill Ellis: Yeah. Until you measure, it’s hard to know. One of the things about the SQL statements is they are going to be the drivers of resource consumption. And so in the circumstance that you were to have, like, a large memory consumption or CPU consumption, you’ll be able to figure out what activity caused that resource consumption. Now, you wouldn’t necessarily want to kill it, but you also want to be aware of it and, kind of, what’s happening, how often does it happen, etcetera. We’re, kind of, still new in terms of addressing the whole set or cookbook of responses to different circumstances. And so, it’s a great question and time will tell. We’ll have more information as time passes.

Eric Kavanagh: That’s it. Well, you guys are in a very interesting space. I think you’re going to see a lot of activity in the coming months and next couple of years, because I do know that SAP, as you suggested in our content call, has provided a nice long on-ramp for folks to make the transition to HANA. But nonetheless, that ramp does have an ending and at a certain point people are going to have to make some serious decisions, so the sooner the better, right?

Bill Ellis: Absolutely.

Eric Kavanagh: Alright folks, we’ve burned through another hour here on Hot Technologies. You can find information online, insideanalysis.com, also techopedia.com. Focus on that site for lots of interesting information, including a list of all of our archives of these past webcasts. But folks, a big thank you to all of you out there, to our friends at IDERA, to Robin and of course, Dez. And we’ll catch up to you next week, folks. Thanks again for your time and attention. Take care. Bye bye.