Eric Kavanagh: Ladies and gentlemen, hello and welcome back once again to Hot Technologies! Yes, indeed! My name is Eric Kavanagh. I will be your moderator for today’s event, and folks, we have some exciting stuff mapped out for you today, I can tell you right now. This is one of the more fascinating areas of IT management in general. The topic is "Keep It Simple: Best Practices for IT Portfolio Management." We’re going to focus largely on the data side of that equation today. In other words, making sure your data is clean or as clean as possible as you try to understand the landscape of devices all over your enterprise.

Of course with this whole new world of BYOD, bring your own device – there is yours truly very quickly – we have very heterogeneous landscapes these days. I mean those of you in large organizations know the stories. There are whole rooms filled with servers. There are applications that have been running for years. There are old IT systems that no one has touched in ten years and everyone is afraid to turn off because you never know what’s going to happen.

So we’re going to talk today with a couple of experts, in fact four experts total, about what to do in this space.

Hot Technologies, the whole purpose of this show is really to dig deep into specific kinds of technology and help our audience understand how things work, why to use these kinds of technologies, what some best practices are, what you should consider. We’ll tell some use cases on occasion. In fact, Dez is going to talk about a little story from his experience in the world of IT asset management. But again, we’re kind of going to focus on the data side because that’s really the expertise of our friends from BDNA. They’re masters at helping organizations really get a handle on what exactly they have in their environment and how to understand where it is, what it does, who’s using it, all that kind of fun stuff.

Here are our panelists. We’ll hear from Dez Blanchfield, our newly invented data scientist. I like to brag that Dez was literally one of the top ten most visited LinkedIn profiles of Australia last year. It’s because he never sleeps. We also have Dr. Robin Bloor, our very own chief analyst. Dr. Bloor, for those of you who don’t know, really kind of started the whole independent IT analyst industry in the U.K. about 25 years ago. These days, there are quite a few. It’s almost, as I say, a cottage industry. There are lots of independent IT analyst firms. We also have Gartner, Forrester, IDC and the big guys. But the nice thing about the independent firms is that frankly we’re a little bit more free to speak candidly about stuff. So ask him the hard questions. Don’t let these guys off easy. You can always ask a question during the show by using the Q&A component of your webcast console. That’s in the lower right-hand corner, or you can chat me. Either way, I try to monitor that chat window all show long.

With that, let’s introduce Dez Blanchfield. Dez, I’m going to hand you the keys of the Webex. There you go. Take it away.

Dez Blanchfield: Thank you, Eric. Great. Boy, fantastic intro.

The topic today is something that I’ve lived with for the better part of thirty years: large IT environments. They grow through an organic process. As Eric said, you start up small and you build these environments, and they grow, and they grow organically in some cases. They might grow through other means, such as large expansions or acquisitions.

I’m going to share an anecdote that touches on all the key things we’re talking about today, and in particular, data – where the data comes from and how you collect the data to do IT asset management. In this case, I’m going to talk about a large piece of work for one of the top three publishers in the world. They’re in radio, TV, magazine, newspaper, print, digital and a range of other publishing spaces. We were given a three-month window to run what was essentially called a cloud readiness assessment, but that ended up being an entire business-wide cloud strategy that we put together. We were given this fundamental challenge from the CIO to reduce the data center footprint by 70 percent within three years. It was pretty obvious that to do this we had to do a whole business-cloud transition. We had three months to do this piece of work. It covered four different regions in five countries. There were six separate business units included and seven different incumbent service providers. As the title says, nothing beats the real-world example.

We came to the conclusion pretty quickly that the business goals were frankly nothing short of a miracle. They wanted to consolidate their own data centers. They wanted to leverage third-party data center environments where necessary, but in general they wanted to move to somebody else’s cloud infrastructure, particularly public cloud or virtual private cloud for the necessary security reasons. In particular, Amazon Web Services and Azure were focused on because they were the most mature at the time. They ran a mixture of the Intel x86 32/64-bit platform, IBM iSeries, AS/400, pSeries and mainframe. They actually had two mainframes, one for production and one for disaster recovery and development. Then there was the whole mix of operating systems – Windows, Linux, AIX, Solaris and various things on laptops and desktops.

Storage was one of the biggest challenges. They had enormous amounts of data because they are a publisher – everything from photographs to videos to editing images to text and content. Across these were big platforms and different storage formats: NetApp, Hitachi, IBM and EMC. So it was an extremely diverse environment to try and capture and map the different types of services that were in there, and to just get a view of what we were taking from on-premises and private data center environments to a cloud environment.

The heart of what we’re talking about today around the IT asset management piece is driven by data, in essence, and here’s a map of what we had to deal with on this particular project that I’m sharing the anecdote about. We had a lot of data inputs. Unfortunately, none were really in very good shape. We had a range of incomplete asset registers. There were five different asset registers being run – configuration management databases, ITSM input forms. We had disparate data sources that ranged up to ninety-odd different types. We had multiple core service models, conflicting service groups, one of the largest communities of stakeholders I have ever dealt with in my career. There were four hundred senior execs who were in charge of these different systems. Invariably, for all intents and purposes, we had completely misaligned business entities – each of them was operating independently, with their own environments and their own infrastructure in some cases. It was quite a challenge.

We discovered within about the second or third day that we were just being <inaudible> with data that almost made no sense, and so it was becoming increasingly obvious that we had to do something slightly different. The initial approach was that we simply threw bodies at it. This is a classic IT approach in my experience: just get more humans and run faster and it will all work out in the end. So we ran lots of workshops in the early days with the domain experts, trying to capture a model – what the business looked like, how the service groups worked, what services were in place, what systems we were dependent on, and the infrastructure and any data around that infrastructure: routers, switches and servers, and apps and data within those apps, and control groups and governance. We started mapping the business requirements, but in the process of doing the application discovery and trying to capture some performance data, validate that data and produce some reports around it, it became very obvious to us that we weren’t going to come even remotely close to meeting this tiny deadline of three months to complete this piece of work.

The "throwing bodies at it" didn’t work. So we decided to build a system and we couldn’t find it in this stage as this was a number of years ago – and we couldn’t find the tools that suited our purpose and we looked long and hard. We ended up building a SharePoint platform with a number of databases feeding it with a series of workloads in different stages. We went back to the fundamentals just to get access to data that made sense so we could validate, so we used a range of tools to map the ecosystems that we’re running. We ran automated audits of the data center in physical and logical infrastructure. We did automated discovery tools, mapping the services running within those data center environments. We did full scans of applications – looking for everything from one application that’s running in their configuration while the port systems are on, while the IP addresses are on.

What we did is we built a new single source of truth, because each of the other databases and information collections they had around their environment and configuration and assets just didn’t ring true, and we couldn’t map reality back to them. So we ended up building a single source of truth. We went from throwing bodies at it to throwing automated tools at it. We started to see some light at the end of this tunnel. So we ended up with a very sophisticated system. It did some enormously clever things, from automated log analysis of the data being thrown at us from various systems, to monitoring security controls, user login and password controls, physical infrastructure auditing and application auditing. We built a series of things inside that to then analyze that data with automated scorecards. We then produced reports around suitability and percentage ranking – whether applications were or were not a good fit for cloud.
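The kind of automated scorecard Dez describes can be sketched in a few lines. This is an illustrative model only – the criteria names, weights and sample applications below are hypothetical, not the actual rules that project used:

```python
# Hypothetical cloud-suitability scorecard: each application record is
# scored against weighted yes/no criteria, then ranked. Weights and
# criteria are illustrative assumptions, not a real assessment model.
WEIGHTS = {
    "os_supported": 0.4,      # OS image exists on the target cloud
    "no_hw_dependency": 0.3,  # no ties to specialty hardware (e.g., AS/400)
    "stateless": 0.2,         # can be redeployed without local state
    "license_portable": 0.1,  # vendor license permits cloud deployment
}

def suitability(app: dict) -> float:
    """Return a 0-100 suitability score for one application record."""
    return round(100 * sum(w for crit, w in WEIGHTS.items() if app.get(crit)), 1)

apps = [
    {"name": "intranet-cms", "os_supported": True, "no_hw_dependency": True,
     "stateless": True, "license_portable": True},
    {"name": "payroll-as400", "os_supported": False, "no_hw_dependency": False,
     "stateless": False, "license_portable": True},
]

# Rank the portfolio from best to worst cloud fit.
ranked = sorted(apps, key=suitability, reverse=True)
for app in ranked:
    print(app["name"], suitability(app))
```

The point of automating this is repeatability: rerun the scoring as discovery data changes and the ranking stays current without manual overrides.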

We then ran a baseline of that scorecard across the Amazon Web Services, Azure and VMware models. We produced a series of reports and financial dashboards on this, and almost never did we allow any manual override. So essentially what we got to was an automated system that was self-maintaining – we really didn’t need to touch this thing, or very rarely did we ever have to override it manually. This thing grew a lot on its own, and we finally had the single source of truth and real data that we could drill down from – to the service groups, to the systems we were running, to the applications, the data they used and the services being delivered.

It was quite exciting because we now had the ability to deliver on the promise of this string of projects. The scale of this project – just to put some context around it – is that, I think, around about $110 million year-on-year was slashed off the bottom line, the operating (inaudible), once we completed this transition of shifting the majority of their infrastructure from their own data centers to the cloud. So it was a very large-scale program.

We got this great outcome for the project. But the real issue we ran into was that we had created a home-baked system, and there was no vendor behind it at this stage. As I said, this was a number of years ago. There was no vendor behind it to continue developing it and provide maintenance support for it. The small team of about 30 people who helped develop it, gather all the data and feed this monster eventually moved on to other projects, and two or three people were left with it. So we ended up in a situation where we didn’t have a maintained IT asset management solution. We had a one-off project, and the business made it very clear they already thought they had configuration management databases and ITSM tools mapping the world, despite the fact that we had stood on top of a very big soap box and screamed at the top of our voices that that data didn’t make any sense.

We demonstrated that by building the tools around the project. The unfortunate outcome of this exciting yet sad-in-the-end story was that the project itself was very, very successful. It was a resounding success. We pulled over a hundred million dollars off their bottom line year-over-year. But what we had done was create this Frankenstein, this really powerful system that could collect data and provide reporting on it, in real time in some cases, but there was no one there to maintain it. The business kind of just let it run for a while until eventually the data wasn’t being used by anyone, and then changes came along and it wasn’t able to collect data that was consistent with those changes. Eventually, this home-baked system was left to die, along with the data that was in it.

We had this scenario where they went back to exactly what they had in the first place: disparate silos and disparate data sets, each looking very, very closely, in a niche form, into a particular area of service or service groups and solving its own problems, but they lost that organization-wide view. They have 74 different services in the group. They lost all that value, and oddly enough, some two or three years later, they realized what they had lost and had to look at how to solve this problem all over again.

The moral of the story is that if there had been a product we could have gotten off the shelf a number of years ago, we wouldn’t have had to build one – and that’s no longer the case. There are products out there, as we are about to see, that can do this, and they can do it in an automated fashion. They can clean up all the data; they can take multiple data sets and merge them and de-dupe them. They can take things that are really obvious to humans looking at spreadsheets – entries marked up as version one dot one, version one dot zero dot one – and just call them Microsoft. At the time we built this tool, that sort of thing wasn’t available; hence we had to build a lot of that capability ourselves. I’m looking forward to the details of what this platform we’re about to hear about today does, because I only wish that we had had it back then. We could have saved ourselves a lot of grief, and we could have saved a lot of time, effort and development by using an off-the-shelf platform maintained by somebody who continues to develop and grow it and makes it available for general consumption.

With that, I’ll hand back to you, Eric.

Eric Kavanagh: Alright. I’m going to hand it over to Dr. Robin Bloor. Robin, take it away.

Robin Bloor: Actually, that’s kind of an interesting story, Dez. I like that. It doesn’t really strike me as particularly unusual. Every time I’ve run into the IT asset management problem, there has always been a company that went and home-grew something because it had to, but you never seem to run into an organization that has the whole thing under control. Yet, as far as I can tell, if you aren’t managing your IT assets, you’re burning money. Since Dez came out with the nitty-gritty story, I thought that I would just do the overview of, well really, what is IT asset management? What does it actually mean? This is the bird’s-eye view, or the eagle’s-eye view.

Consider a factory – especially organizations that run factories with the intention of making a profit. Everything possible is done to make maximum utilization of the expensive assets deployed. It’s just the case. Consider a data center – not so much; in fact, mostly not at all. Then you kind of think, well, how much have they invested in the data center? Well, you know, if you actually work it out, it’s really, really large amounts of money. You add together the historical effort of everybody that built the systems, the licenses paid for the software, the value of the data, the cost of the data center itself and of course all the hardware, and it comes out to tens of millions. It depends on how big the organization is, but easily tens of millions in most organizations. This is a huge investment people make in IT, and certainly in large organizations it’s massive. The idea that you shouldn’t really bother to get maximum value out of it, and that it needn’t be run efficiently, is obviously an absurdity, but as an industry, there are very few places that actually have the discipline to really, truly manage the IT assets.

This is a model I’ve used, I don’t know, quite a few times, I guess. It’s what I call the diagram of everything. If you look at an IT environment, it has users, it has data, it has software, it has hardware. There’s a relationship between all of these fundamental entities that make up an IT environment. Users use specific software, so there’s a relationship; they have access to specific data, so there’s a relationship. They use specific hardware resources, so there’s a relationship there. Software and data are intimately related. The software resides and is executed on specific hardware, and the data resides on specific hardware. So there are all of these relationships. If you want to know where the IT assets are, just put your hand over the users, because apart from the acquired skills in its users, everything else is what you could call an IT asset.

Then you look at that and you ask, well, how many organizations even have an inventory of all the software issued in all of the systems that they employ? How many even have a proper inventory of hardware that includes all of the networking capabilities? How many have any meaningful inventory of the data? The answer is none. Knowing where the stuff is and knowing how one thing relates to another can be very, very important in some instances, particularly in the kind of instance that Dez just described, where you’re going to pick it all up and move it, or pick it up and move most of it. It’s not just a trivial thing; actually knowing what’s there is a big deal, as is actually knowing how one thing relates to another.

Then the other thing is that this diagram applies at the smallest level of granularity you can imagine – the smallest piece of software accessing the smallest amount of data you can imagine, running on a trivial piece of hardware resource – right up to an ERP system with huge amounts of distinct databases and data files, running on multiple pieces of hardware. This diagram generalizes everything, and it applies at every level of granularity, and the arrow of time underneath just indicates that all of this stuff is dynamic. This might look like a still diagram, but it’s not. It’s moving. Everything is changing. Keeping track of that is no trivial thing. I mean, it just isn’t. You can actually expand this diagram and say, forget computers, and just make it even wider. Businesses consist of all that, plus business information that might not be electronically stored, various facilities that are not necessarily computer related, and various business processes that are not necessarily software dependent, or are only partially dependent on software.

Lots of people – not just users of systems but staff, partners, customers and so on – make up the ecosystem of a business, and then you actually have humanity as a whole as well: people. There’s all the information in the world. There’s civilization. All of it is what we call hard stuff and all human activities. This is the diagram of all and everything. That diagram gives you an indication of how interrelated everything is, from the smallest collection of things that do anything to the largest, because in terms of humanity, there’s the whole of the Internet and the billions of computers that make it up and all of the devices and so on and so forth. That’s a vast array of things, and all of it is obviously subject to the arrow of time. That’s the bird’s-eye view.

I just listed this straight off the top of my head without even thinking about it: dimensions of IT asset management. There’s an asset registry – hardware, software, data and networking. There’s asset attribute capture – do you have all the data related to all of that stuff? Asset usage – why does this stuff exist at all? Asset acquisition cost and ownership cost – how much did it cost, therefore how much does ownership cost, and how much would it cost to replace? That brings in the idea of asset depreciation, and I’m not just talking about the hardware. We’re also talking about software and possibly the data as well. A complete asset map, which would be to instantiate the diagram I just discussed. Cloud assets – stuff that isn’t actually on premises but does, in one way or another, belong to the organization by virtue of rental. Service management targets and how they relate to all of these particular possibilities – one of the things Dez was talking about, in his effort moving a collection of systems from one place to another, was how the service management worked out in terms of "did you hit the targets people are expecting of their systems?" and so on. There’s risk and compliance – things that, in one way or another, the shareholders might be concerned about and the government itself might be concerned about, and all of that is an aspect of asset management. There’s the procurement and licensing of all software. There are business performance objectives. There’s the whole of asset governance, in terms of what rules the organization might set for any of these things. We are talking about really complex stuff.

So the question arises and this is how I finish – how much of this can be done? How much of that actually should be done?

Eric Kavanagh: With that, let’s find out what the experts have to say. I’m going to pass it over to Tom Bosch. Stand by, giving you the keys of the Webex. Take it away.

Tom Bosch: The title of the Webex, from our perspective, was about keeping it simple, and obviously best practices for IT portfolio or IT asset management. Anytime you say best practices, it is ultimately an opinion. It’s an approach from our perspective. Ultimately what BDNA wants to do is help the many companies out there that we find are still just getting their feet wet on the IT asset management journey. IT asset management was a hot topic right around Y2K, for some of you that have been in the industry for a while, and the primary reason why is: I need to understand if the software that I have and the systems that I have are even going to get replaced or updated, or will they fail when we hit the new millennium?

I think what we all lived through on that strange evening some sixteen years ago was the fact that actually very little went down in the background. Our power plants stayed alive and the trains kept running. The lights in New York City and Sydney stayed on. Through that process, people began to understand that there was an enormous amount of information that needed to be gathered and brought together. Ultimately, it was the data behind all of that that had to be cleansed, as Dez said earlier, to be able to make the kinds of decisions that people were looking for. That’s the crux of our conversation today. I think every single one of us realizes it every day we walk into our IT departments, every day we walk into our organizations: enterprise information technology is almost out of control. What I mean by that is that there are new servers being brought online and new pieces of software being deployed from department to department across organizations. Whether you’re in the manufacturing business, a services organization or retail, every single one of our organizations today is not only being run by IT but driven by it.

IT is becoming the production engine of many of the organizations that we work in. That becomes nowhere more apparent than by looking at the solutions that are being deployed. If we just focus internally on the complexity of the data inside the IT department – just the applications being utilized to ultimately support IT – we’ve got everything from vendor management systems to IT portfolio management, procurement systems and architecture security systems, and one of the key attributes of all of these is that they need to utilize essentially an inventory of what you’ve got inside your environment to be able to effectively drive solutions in their specific disciplines. So having those assets at hand is critical for almost every discipline inside the IT organization. But one of the things that is quickly found when companies begin to try to bring these different systems together is that they don’t talk the same language, and ultimately it boils down to the data.

As Dez pointed out earlier, bad data was at the root of the project that they started with, and there are some very interesting statistics from Gartner: literally, IT is wasting over 25 percent of the money that it invests on an annual basis because of bad data. It is costing projects tenfold, because ultimately, for most companies, it’s a matter of cleaning up that data manually. Again, as Dez said, it’s really bothersome. Specifically around asset management itself, and in general across IT projects, Gartner basically concluded that over 40 percent of all IT projects fail because of bad data. We know the root of the problem. It’s the data. How do we begin to manage that? One of the things that’s going on is that ITAM is becoming important to organizations for more than just one reason – obviously the one that we just talked about, which is that we need to get systems talking to each other. We need to understand where the systems exist inside our organization so that we can do simple operations like refreshes or upgrades to just the systems that we have in place.

To further compound the problem in today’s environment, many of the software publishers and manufacturers are finding what we call low-hanging fruit by coming in and simply forcing clients into an audit or true-up. Literally, 63 percent of the Fortune 2000 went through at least one audit in 2015, according to an independent research corporation. Those audits are costing companies an enormous amount in internal fees and external true-up costs – anywhere from one hundred thousand to a million dollars – and Gartner essentially came out with another interesting statistic, which is not in my presentation but which I picked up early this morning: they consider the average cost of an audit to be somewhere around half a million dollars for an organization.

When we talk about 25 percent of the dollars being spent in IT being wasted, these are some of the examples of what’s going on. Those are the facts. So what do we do? How do we tackle this? It starts by really understanding what this journey is for most organizations. IT asset management is a series of steps that basically starts with discovering what I’ve got out on my networks. Most people have one or some or many of these discovery tools; probably one of the most common discovery tools in the marketplace is SCCM. Most corporations that have any level of Microsoft and Windows-centric environments utilize SCCM for many purposes, including deploying applications, and it can also be used to serve up inventory data, but that data comes back in a muddy, messy format. We’ll talk about that more in just a minute. There are numerous other tools as well. Most of the ITSM solutions, whether it’s BMC or ServiceNow or Nationale or HP, have very good discovery tools, and those often come into play especially when you’re trying to pull together the information and the interdependencies of your servers and networking devices, because the last thing we need is a situation where the booking system for a large airline goes down in the middle of the day and millions if not billions of dollars of revenue are lost. Understanding how all these things are connected starts, again, by understanding the assets that are associated with them.

The second stage, or the second step, in this process: I’ve got all this data, but what does it mean and how can I begin to work with it? That step is typically referred to as normalization, and it’s one that we will focus on a great deal today, because at its core it’s the simplest and most important step in moving towards a fully optimized or fully mature ITAM journey. As you move through that process of normalization, ultimately what you’re trying to do is pull together all the different discovery sources you have – and some of those may simply be the applications and solutions that we talked about in one of the earlier slides. We want to de-duplicate. We want to reduce all the noise and filter out all of the data that’s not relevant. We’ll talk about that more as we go along.

From there, some of the logical next steps build on the low-hanging fruit. As corporations come together, merge and go out and acquire other organizations, they begin to develop duplication in the applications that they utilize. A very typical step that people take once they understand the landscape of software and hardware that they have is to rationalize – to remove the duplication, the redundant devices and redundant software in their environment. For instance, if you go out and look, you might find that you have as many as twenty or twenty-five different BI tools in use across your environment. Removing not only those that are associated with specific applications but, more importantly, those that have broader reach offers a corporation some tremendous cost savings and potential risk reduction.

What do organizations do? They typically take a look at this in great detail and, as Dez said, a lot of bodies get thrown at it, and they start figuring out what they need to do and how to get to this optimized state – and I’ve watched this happen time and time again. I’ve worked with hundreds of corporations over the better part of the last decade, specifically on their software asset management, and ultimately what stops most of these projects, or what causes most of these projects to fail, is that they try to bite off more than they can chew. They don’t take it back to its core roots; instead, they create projects that require an enormous amount of change management, management authorizations, education programs and governance, affecting an enormous space across their environment.

When you sit down with the program or project in front of a senior executive, oftentimes the question is asked, "Is the problem really this big?" As I have discussed this in more detail with many senior executives, they say, “You know, Tom, it really boils down to three things for me. I want to know what we have. I want to know that we’re using what we purchase. Most importantly, I want to know that what we’re using and what we deploy matches up with what I bought.” In other words, “Am I entitled to what I’m utilizing, or have I got myself into a case of piracy – albeit non-intentional piracy?”

Those three questions can actually be answered very easily by going back and simply cleaning up the data. That’s what we’re going to show you the rest of the way. Let’s take a look at the data specifically and at some of the problems that come out of this discovered data. It’s irrelevant. It’s inaccurate. It’s inconsistent. It’s incomplete. And ultimately, it’s costing corporations well in excess of $14 million annually in poor decision making.

Here’s an example of the type of data that you get coming straight out of a discovery tool such as SCCM. It involves an enormous amount of literally irrelevant data – in fact, 95 percent of the data is irrelevant. It includes things like executables, patches, hotfixes, device firmware, different language packs and knowledge base packs. A good example: go take a look at the inventory on a typical PC inside your environment and look for something from Adobe. Oftentimes there may be one licensable copy of Adobe Acrobat on that PC, yet there may be as many as nine or ten copies or upgrade copies listed. So to the naked eye, you’re not certain if you have liability for nine different copies or just one product.
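Filtering that 95 percent of noise is mostly pattern work: drop the rows that are patches, updates and language packs so only licensable titles remain. The patterns and inventory rows below are illustrative, not an exhaustive or vendor-supplied rule set:

```python
import re

# Sketch of filtering discovered software rows down to licensable
# products. Patches, hotfixes, updates, KB entries and language packs
# are treated as noise; the patterns here are illustrative only.
NOISE = re.compile(r"(hotfix|security update|update for|language pack|KB\d+)", re.I)

def licensable(titles):
    """Keep only titles that don't match a known noise pattern."""
    return [t for t in titles if not NOISE.search(t)]

inventory = [
    "Adobe Acrobat XI Pro",
    "Update for Adobe Acrobat XI (KB123456)",
    "Security Update for Microsoft Office (KB2837618)",
    "Microsoft Office Language Pack 2013",
]

print(licensable(inventory))  # only the one licensable Acrobat title survives
```

This is the step that collapses the "nine or ten copies of Acrobat" problem down to the single entry that actually carries license liability.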

A second area, so to speak, is the inconsistency that takes place. This is just a brief example of how Microsoft can be named so many different ways inside an organization. This is a focus area for BDNA. I think one of the most telling examples we can give is right around the topic of SQL: we have found, across our customer base, 16,000 different variations of how SQL can be named inside an inventory. Consider trying to report on that on a consistent basis. Another area is a basic lack of standards. To what level – database releases, CALs, PVUs for IBM – are we going to manage this data? So this is part of the conundrum, the issue of normalizing all of this raw material, all of this raw data, to a point where it is usable. Along with that, there is an enormous amount of data that’s not discoverable that would also be very valuable to someone in a traditional ITAM environment. We’ll give you some examples of that as we go along as we cover some use cases.
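The naming-inconsistency problem Tom describes can be illustrated with a small normalization sketch. This is not BDNA’s actual matching logic – the variant patterns and canonical names below are hypothetical, and a real catalog like Technopedia would hold millions of such mappings:

```python
import re

# Hypothetical rules mapping raw discovery-string variants to one canonical form.
CANONICAL_RULES = [
    (re.compile(r"micro\s*soft|msft", re.I), "Microsoft"),
    (re.compile(r"sql\s*server|mssql|sql\s*svr", re.I), "SQL Server"),
]

def normalize(raw: str) -> str:
    """Replace every known variant in a raw inventory string with its canonical form."""
    result = raw
    for pattern, canonical in CANONICAL_RULES:
        result = pattern.sub(canonical, result)
    # Collapse any repeated whitespace left over from the substitutions.
    return re.sub(r"\s+", " ", result).strip()

print(normalize("MSFT SQL Svr 2014"))  # -> "Microsoft SQL Server 2014"
```

With thousands of variants per vendor, rule tables like this are what make it possible to report on “SQL” as one title rather than 16,000.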

The one element that is certainly without question is the fact that this data changes daily. If we just take a look at Microsoft alone: in 2015, Microsoft introduced over 3,500 new software titles and upgraded or updated some 9,800 different pieces of software. That’s 14,000 changes at Microsoft alone. BDNA manages this on a daily basis. We’ve got a team of engineers who stay on top of this and literally make upwards of a million changes to our master dictionary and encyclopedia. We’ll cover that here in more detail as we go along. Ultimately, looking back at that environment we saw earlier, the inability of all those different solutions to talk to each other is definitely an issue, and that’s where BDNA comes into play: the BDNA platform and its core component, Technopedia, allow us to create a common data platform.

How that takes place is actually quite simple. We aggregate the data coming from a number of your different discovery sources. Those discovery sources may be some of the ones I mentioned earlier, like SCCM or ADDM or HP UD. It might be a CMDB. It might also be the purchase order data from your procurement systems. We bring that together, look at the core components of how things are listed, and rationalize and normalize that. Again, that’s something that BDNA calls Technopedia. Technopedia is the world’s largest encyclopedia of IT assets. It’s utilized by some twenty other applications across the globe, outside of just BDNA usage, to again create a common language – tools like architectural tools, procurement tools, service management tools – the idea being, “Let’s speak a common language across all of our tools.” We then add to those titles – 1.3 million entries – over 87 million attributes. Those attributes might be something as simple as: What are the hardware specifications of a given server? What are its physical dimensions? What is the energy usage? What is the energy rating? What are the BTUs of heat generated – all things that might be utilized by our architects? That’s just one example of the many different catalog add-ins that are available. We take your data, we aggregate it, we essentially map it out, normalize it against the Technopedia catalog and deliver a normalized set of data that can then be consumed across the rest of your environment.
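The aggregate-then-normalize flow Tom describes can be sketched roughly as follows. This is not BDNA’s actual pipeline – the source records and the toy catalog here are hypothetical stand-ins for discovery feeds and Technopedia:

```python
from collections import defaultdict

# Hypothetical records as they might arrive from two discovery tools.
sccm = [{"host": "PC-01", "title": "msft office 2016"}]
addm = [{"host": "pc-01", "title": "Microsoft Office 2016"},
        {"host": "srv-09", "title": "Oracle DB 12c"}]

# Toy stand-in for a catalog lookup: raw string -> canonical title.
CATALOG = {
    "msft office 2016": "Microsoft Office 2016",
    "microsoft office 2016": "Microsoft Office 2016",
    "oracle db 12c": "Oracle Database 12c",
}

def aggregate(*sources):
    """Merge records from every source, normalize titles via the catalog,
    and de-duplicate on (host, canonical title)."""
    merged = defaultdict(set)
    for source in sources:
        for rec in source:
            canonical = CATALOG.get(rec["title"].lower(), rec["title"])
            merged[rec["host"].upper()].add(canonical)
    return dict(merged)

inventory = aggregate(sccm, addm)
print(inventory["PC-01"])  # a single canonical entry, even though two tools reported it
```

The point of the sketch is the de-duplication: two tools reporting the same install under different spellings collapse to one normalized row.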

We feed that into a data warehouse internally that we’ll show you in just a few minutes, but we also have standard integrations into many CMDB, ITSM and other tools utilized across the IT environment, to help those solutions become more valuable to you. As a simple example of some of the content packs: pricing, hardware specifications, and life cycle and support. The last is probably the most common; it gives you things like end of life, end of support, virtualization compatibility and Windows compatibility, and again, Chris will cover some of that as we move along.

In a recent Dilbert cartoon I picked up, Dilbert had actually been asked by his boss to do this exact same thing: “Dilbert, give me a list of the assets inside our organization.” Dilbert’s response was, “Who’s going to use it if I deliver it?” The use of IT asset management data, as we’ve talked about, really reaches an enormous number of users across your organization. This is just a small sampling of the different disciplines inside an IT organization and how they would utilize it. The reality is it drives value inside the organization, and by taking some of the best authoritative enterprise data, BDNA essentially helps companies drive better business decisions. As you sit down and look for a simplified way to tackle your ITSM solution, what BDNA ultimately does is help you drive simplicity by cleaning up the data and giving you the opportunity to make good business decisions – and we do it fast.

Most of our customers – in fact, almost 50 percent – have told us through independent research that they received a full ROI on their project in less than 30 days, and literally 66 percent received over 200 percent ROI in the first year. Those are the kinds of statistics your CFO and your CIO will certainly want to hear if you’re considering ways to invest in and improve your organization.

What we’re going to do now is turn things over to Chris. We’ve got the better share of thirteen or fifteen minutes, and what we’re going to do is essentially walk through some use cases that are critical, some of which we talked about earlier. Basically: what have I got installed? You’ll have an opportunity to see what’s being used, so you can potentially re-harvest those licenses. Am I compliant with what I have installed? Maybe I want to take a look at which devices are over three years old, because I want to know if I can refresh those devices, and what software is on those devices so I can plan for that refresh process. And if I want to look at security risk specifically: which software components have an end of life that has either passed or is coming sometime in the next thirty days or within the next year? And which might be listed in the National Institute of Standards and Technology’s vulnerability list?

Eric, what I'd like to do now is pass it back to you, and if you would, can you please hand things to Mr. Russick?

Eric Kavanagh: I will do that and, Chris, you should have the floor now. Go ahead and share your screen and take it away.

Chris Russick: Excellent. Thank you, Tom. Thank you, Eric. I appreciate that.

For our demo today, I would like to introduce you to BDNA Analyze. BDNA Analyze is the reporting section of the BDNA products. Let’s start answering some of those questions that Tom brought to the table: What do we have? Are we using our products? What are we entitled to, and are we secure?

The first one – let’s talk about Microsoft products: what do we have installed? For that, I’m going to start by bringing over our software install count. Next, I’m going to come in and filter software manufacturers down to Microsoft. Then, to complete the introduction, I’m going to bring over the software name, and let’s just start with the major version. Again, this is basically the Microsoft inventory position in both licensable and non-licensable products.

Where the rubber meets the road is really going to be licensable products, so let’s filter it down even further to licensable products. We’re going to start by answering what we set out to: take an expensive Microsoft title, see when it was last used and by which system, and try to reclaim some of those licenses by doing a software re-harvest. So next we’re going to come down to last used, in years, and we’ll filter that; I’ll choose 2012 and 2014. I’m also bringing in SCCM’s metered data. What we can do at this point is bring over the software last-used date. Finally, we can come down to the host name, bring that over, and we’ll also bring over the last full user log-in.
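The re-harvest filter Chris is building can be sketched in a few lines – a purely illustrative example, with hypothetical hosts, users and last-used dates:

```python
from datetime import date

# Hypothetical usage records: (host, last full user log-in, last-used date).
installs = [
    ("PC-14", "acme.user", date(2013, 6, 1)),
    ("PC-22", "jane.doe",  date(2016, 2, 9)),
]

def reharvest_candidates(records, cutoff):
    """Return installs not used since the cutoff - candidates for license reclamation."""
    return [r for r in records if r[2] < cutoff]

for host, user, last in reharvest_candidates(installs, date(2015, 1, 1)):
    print(f"{host}: ask {user} - last used {last}")
```

The report in the demo is essentially this filter applied across metered usage data for the whole estate.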

From this report, you can simply go to Mr. Acme User and ask, “Are you going to use this Microsoft product this year? It seems you haven’t used it since 2013.” From this sample report, you can see who’s actually using it, and you’re able to reclaim those licenses. Next up, I’m going to jump over to our software compliance dashboard. I have this one pre-loaded, and it contains, for example, Adobe: which applications we’re compliant with and which we aren’t – the entitlement questions Tom brought up earlier. Based on your purchase order information and on the discovered information we brought in, there are software titles, your entitlement counts, what the cost is, what’s installed, and whether or not you’re under or over. By looking at this report you can answer many of those questions.

Next, I’d like to jump over to the hardware refresh. The intent here is to determine what hardware is out of date – what’s more than three or four years old, whatever your organization deems important. Simply bring over your system count. For this example, we’re going to focus on desktops, so I’m going to come up here to the hardware product information, bring in category and sub-category, and keep only the desktops. From here, we’ll bring over the product, manufacturer and model information. For today’s example, we’re going to focus on the 790s. We already know these are more than three years old, but to show it, we bring over the hardware GA date here. If you wanted to find the GA date generally, you could certainly bring it across for all of the hardware sub-category products.

Finally, if you are going to do an upgrade or refresh on these devices, it’s helpful to find out which devices they are, so again we can come down to host name. Furthermore, it’s helpful to understand what’s installed on them, so we bring in the software install count – and this is where the report gets large. We need to bring over the software manufacturers, software names and, finally, the software major version. We don’t need the hardware category and sub-category anymore, so we can save a little bit of space here. Here’s the list. At this point, we understand that on each of these hosts we’ve got these products that need to be upgraded as part of the hardware refresh. Next, we need to know what’s compatible with the target operating system, so we’re going to bring in a software readiness field – software Windows readiness, 64-bit – since we’re going to a 64-bit environment. At this point, you’ve got truly actionable data: what’s installed on which hosts that you need to upgrade based on the GA data, and furthermore whether each title is compatible, needs a compatibility check, or is simply not compatible. This gives your teams, whoever is going to be doing this refresh, valuable information and saves them time in the long run.

Finally, for security, there are two pieces that are tremendously helpful when speaking of hardware and software assets in production environments. First is the end-of-life data. Certainly you want to have all your patches applied and your end-of-life software products brought up to the latest version, for obvious reasons, so we’ll tackle that first. Again, we’ll start with the software install count, across your entire environment, and bring over the software manufacturer, software name and major version again. Next, we’re going to come down and limit the data by the software end-of-life year and set the scope: we’ll do the current year, the previous two years and the next two years – a five-year scan. The intent here is to answer the questions, “What do we need to upgrade this year? What should we have upgraded in the past two years? And, to stay ahead of the game, what do we need to plan for over the next two years?”
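The five-year end-of-life window can be expressed as a simple filter. This is an illustrative sketch only; the titles, install counts and years below are made up:

```python
# Hypothetical normalized inventory: (title, install count, end-of-life year).
inventory = [
    ("BlackBerry Device Software", 346, 2014),
    ("Citrix Personal vDisk",       25, 2015),
    ("Some Legacy Tool",             3, 2009),
]

def eol_window(records, center_year, span=2):
    """Keep titles whose end-of-life year falls within +/- span of center_year."""
    low, high = center_year - span, center_year + span
    return [(title, count, eol) for title, count, eol in records if low <= eol <= high]

# A 2016-centered scan keeps the 2014 and 2015 titles and drops the 2009 one.
print(eol_window(inventory, 2016))
```

Anything older than the window has presumably been dealt with (or is a separate, more urgent problem); anything inside it is the planning horizon the demo describes.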

We’ll bring this data over and put it across the top with that refresh. Right off the bat, you can see that in 2014, there are 346 installations of what looks like BlackBerry software; for Personal vDisk from Citrix, there are 25; etc. So this is a good report. Again, we won’t go through all the steps, but you could certainly select only the desktop software, or “Keep Only,” and then find out the hosts where it’s installed. You can export this data to a CSV, PDF or Excel, and the CSV can be brought into other products as well if you want to do some upgrades in an automated fashion. From a client perspective, you can see exactly what needs to be done in the future.

Finally, another report I have created in BDNA Analyze is a system report based on specific CVEs from the NIST database – the National Institute of Standards and Technology. What I’ve done here is target Apple iTunes, specifically calling out some CVEs from 2015, and I’ve created a report that looks for the specific versions: how many systems we have, how many systems are affected, and how many installed software components match these CVEs.
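The CVE-to-inventory matching Chris describes can be sketched roughly like this. It is a toy example only: the CVE IDs, versions and data layout are hypothetical, and real NVD feeds encode affected versions as CPE match expressions and ranges, not flat sets:

```python
# Hypothetical CVE feed entries: (cve_id, product, set of affected versions).
cves = [
    ("CVE-2015-0001", "Apple iTunes", {"12.0", "12.1"}),
    ("CVE-2015-0002", "Apple iTunes", {"12.1"}),
]

# Hypothetical normalized inventory: (host, product, installed version).
installed = [
    ("PC-03", "Apple iTunes", "12.1"),
    ("PC-07", "Apple iTunes", "12.3"),
]

def affected_hosts(inventory, cve_feed):
    """Map each CVE to the hosts running an affected product version."""
    hits = {}
    for cve_id, product, versions in cve_feed:
        hosts = [h for h, p, v in inventory if p == product and v in versions]
        if hosts:
            hits[cve_id] = hosts
    return hits

print(affected_hosts(installed, cves))  # PC-03 appears under both CVEs; PC-07 is unaffected
```

The join only works if the inventory side is normalized first – matching “Apple iTunes 12.1” to a CVE fails when the same install is recorded under three different spellings.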

Again, it’s a great tool if you’re trying to get to a (inaudible) remediation point, or simply to help the security department better manage their IT assets and inventory. At this point, I’d like to turn it back over to Tom and Eric for Q&A.

Eric Kavanagh: Let me bring in the analysts first and foremost, Dez and Robin. I’m sure you’ve got some questions. That was a fantastic demo, by the way. I find myself kind of amazed at the amount of visibility you can get into this environment. Let’s face it, in these really heterogeneous ecosystems, that kind of visibility is what you need if you’re going to understand what’s going on out there and if you’re going to face an audit – which of course no one wants to do. Dez, I guess first I’ll turn it over to you for any questions you’ve got.

Dez Blanchfield: Man, I’m going to time-box myself because I could just spend the day talking with you about this. There are a couple of things that have come to me through questions and product messages that I’ll also get to if you don’t mind. This reminds me – the screens you’re showing remind me – of a project that I would love to talk about, where we did a refresh of nineteen-odd thousand machines for a company called Data EDI through their (inaudible) division and other areas, and I can publicly talk about that because it’s an open project. What I found was there were three separate desktop refreshes and SOE refreshes running in parallel for some reason, and I ended up just bringing them all to a halt and starting from scratch with an automated tool.

We’re talking about scale, and I’m going to come back to you with a question in a second. When we did something at that scale, what happened was I got out of the engineering team and out of the CIO’s office, and I walked around the rest of the business and said, “We’re running an audit of everything in this organization, from the desktop down. What would you like to know about it?” – and no one really asked any questions. So then I ran some sessions where I got them into a couple of board rooms and said, “Let me just ask the question again.” To finance: “Let me tell you about every single piece of software we’ve got – you have to report how much we pay for it, when it reaches end of life and when you can write it off. Can you get that into the P&L and GL? Where is your asset management around this, and how do we manage budgeting for software licensing for next year?” Glazed eyeballs. I went through all the other groups the same way, so I’m keen to get some insight into what you’ve seen in these places, where you’ve obviously got a great tool that does an enormous amount of powerful things across asset management and asset discovery.

What’s your reaction been to these sorts of scenarios – where you’ve run a project, or had a client run a project, and all of a sudden finance and engineering and dev ops and security and compliance, and even some shadow IT environments, pop up and say, “We had no idea this was here – how do we get access to the data?” I’d love to hear about any eureka moments organizations have had and what they’ve done about it.

Tom Bosch: I’ll throw in one, Dez. I think what we see time and time again, guys, is that there’s always an entry point, right? There’s a group inside an organization that says, “I need clean data for a use case.” For any solution provider, that’s typically where it comes in, and I would say probably 65 to 75 percent of the time, the entry points for us tend to be around asset management. They tend to be around IT. We’re not an ITAM tool; at the end of the day, what we are is a data management tool. We feed ITAM solutions like the ones inside ServiceNow and other more complex solutions like Sierra and Snow.

At the end of the day, what begins to happen is that once that clean data gets utilized and presented in other IT organizational meetings, people go, “Where did you get that?” “Oh, it came from here.” “Really? Can I take a look at that?” Then, when they find out that you can begin to attach to or enhance the assets with additional content data – and that’s something that’s very, very unique to BDNA – that’s when the “aha” moments begin to open up. One of the reasons we like to show the security use case is that Verizon did a study a couple of years ago and basically came back and said, “99.9 percent of all hacks that go on in the environment are coming in through pieces of software that are out of date, haven’t been patched and/or are end of life.” Most of those are somewhere between three months and a year out of date or out of life.

By having that information in advance, security departments can now be proactive in their approach to prevent any breaches. Chris, do you have anything to present from your travels?

Chris Russick: Absolutely. Tom and I kind of pulled a couple of stories together to talk about what those “aha” moments are. We try to understand where customers are getting their data from, and many don’t realize the breadth of data that’s available out there, whether it’s from SCCM or Casper, or whatever tools you pick. The intent is to be able to get good data from all of your tools. How do you aggregate that without BDNA? So perhaps the first “aha” moment is, “Wow, we can take all of this data that we have and aggregate it together.”

It’s the ability for people to make truly actionable decisions based on the data, rather than trying to find supporting information in the data for decisions they have already made. I had a customer up in the Tennessee area who, once they were able to do this – I think it was about a week after they had it installed – were literally dancing on their desks and cubicles, because they hadn’t known the full breadth of their data, and now they do.

Back to you guys.

Dez Blanchfield: The enrichment piece is interesting to me. Just quickly on that, and then I’ll hand over to Dr. Robin Bloor. I’ve done a lot of work with banks and wealth management firms, and there are a couple of key things they put themselves through on a regular basis in their attempt to stay compliant with a range of challenges, like know your client, or KYC, and anti-money laundering, AML. What I find, though, is that when a lot of these organizations get good at the KYC process, they more often than not look inwardly and treat themselves as a client, and I’m seeing a lot of them now use – not tools with the depth you’ve got here, but very high-level tools – to try to map who their end users are and what they are using, for the reasons you’re talking about. Some people just come in with BYOD, some people have old versions of software. They invariably bring bad things with them to work.

In the journey you’ve had, have you had specific examples of people taking the data you’ve captured and feeding the substance of it into something else? Maybe it’s mapping who is actually using a system in the first place – HR, for example, checking that the people using the system are actually employed and supposed to be in the buildings – or other examples of finding something installed on a machine that shouldn’t be there and recapturing it. Have you got any examples where a different part of the business, one you wouldn’t traditionally think would get value out of the data, has taken a subset of it or gotten access to it and derived some seemingly unrelated value from this work?

Chris Russick: I’d like to jump on this one first. There are a couple of key customers I’m thinking of specifically. One is a hospital in the medical field, and they do exactly that. They take enrichment data against their discovery data by bringing in Active Directory, and from that, they know which assets actually belong on their network. From there they can determine who should and should not be patched, who should and should not even be on their network, and then keep a list for their desk access and whatnot. The second is actually a couple of different customers taking this data into enterprise architecture. I’ve never been in the enterprise architecture world – it’s relatively new to me over the last two years – but there’s an entire use case around taking our end-of-life data, or other asset-enriched data, and pumping it out into enterprise architecture tools that do the enterprise mapping and the things enterprise architects do. Quite frankly, that’s a part of the industry where this data has become very popular, and I’d never seen that before. Tom?

Tom Bosch: To add to that, two use cases that have popped up pretty quickly are both in and around HR. Basically, they help understand what the internal employees of the company are utilizing – and I always find it amazing when clients come back after running probably their first normalization: a good example is they’ll find twelve or fourteen different Xboxes connected to the network, which are typically not sanctioned devices in a business environment unless you work at Microsoft. So, finding devices that shouldn’t be in the environment, finding software that shouldn’t be in the environment. Secondly, I’ve seen HR quickly utilize this to help value the investments they have to make in the on-boarding process for a new employee. They had no idea that the average employee might represent somewhere in the vicinity of 2,500 to 3,000 dollars’ worth of software, and in excess of 5,000 dollars’ worth of IT investment overall.

Dez Blanchfield: This is another use case – not so much a question, just a point to throw out and share. I’ve had scenarios where we have done very, very large audits of an environment and found legacy systems where the people who originally put them in place, and the people maintaining them, had moved on, and they were neither documented nor mapped out. In one case, a multibillion-dollar steel manufacturer here in Australia had an old group of 486 desktop PCs connected to modems that used to do dial-up to the bank every day, and they didn’t realize that these 486 PCs were doing (inaudible) to the banking dial-up every day.

The second one, the more interesting one, was in a train builder’s manufacturing warehouse environment. They had a system that they thought was a simulator for train monitoring. It turned out it was actually the live system, on an old IBM AIX RS/6000 machine, and luckily those things just don’t die, because for nearly a decade none of the staff who had implemented it were supporting it – they had actually left after the department was shut down – and it had just kept running, with the trains driving around the place and this thing talking away and capturing monitoring data. I think these are really interesting use cases: people quite often tend to look forward, but if they start to look backwards, they see some very interesting things as well. With that, I’m going to hand it back to Robin, because I think I’ve taken way too much of your time.

Eric Kavanagh: Robin, take it away.

Robin Bloor: So we’re kind of running out of time. One of the things that interests me is the purchase of a product like this – if you could speak to this: how many people come to you, or come to this product, because they have a very specific problem on their hands? And how many actually come for strategic reasons, because they realize they should have something like this, given that what they’ve actually got is fragmented or useless? That’s part of the question. The second part is: having adopted this for a very specific tactical reason, how many people make it strategic from then on?

Chris Russick: That’s a great question, Robin. I think it’s human nature to be reactive. I would have to say that a good 95 out of 100 times when clients come to us, they’re reacting to a situation that has driven them to acquire a solution. The one that’s absolutely driving companies nuts these days is the auditing process. I have literally heard of customers receiving pre-audit bills from software vendors in excess of a billion dollars, and you can only imagine what a CIO or CFO says when they see that: “How could this have happened, and why don’t we have better control of this?” People become very reactive to that.

Now, I can also tell you that in some of those situations, once they get their hands around what they actually have, it turns out that the vendors were a little aggressive in their assumptions about what was in the environment. In several particular cases, I’ve seen clients go from very, very large pre-audit estimates to not owing the suppliers any money at all. A lot of that has to do with cleaning this data up and doing it in a manner that is systematic and standardized. A lot of companies try to approach this from a manual process, and it’s time-consuming: traditional audits take about a thousand to fifteen hundred man-hours to prep for. So, getting down to the crux of the question: I think the majority of companies come to us with a hot problem. Then, ultimately, as they become more mature in their understanding of what they have and how they can utilize it, it becomes more strategic. That’s one of BDNA’s rules once the client has made the investment: to make sure they understand and leverage that investment across their operation.

Eric Kavanagh: Let me throw one last question over to you, because obviously there are existing tools out there in some organizations, and someone has texted me right now: is there a natural process to migrate from multiple systems already in place to using your BDNA solution as the single source of truth, so to speak? What does that look like? How long does it take? It sounds pretty challenging, but you tell me.

Tom Bosch: Chris, let me make a quick comment and then you can talk about the technical side of it, right? We’ve seen clients with as few as one or two discovery solutions and as many as 25, and bringing them all in and aggregating them – that is what the normalization component of the tool set does. How we do that is really a combination of standardized connectivity, and in some cases we have to build out some custom extractors. Chris, can you maybe elaborate on that and explain how we do it?

Chris Russick: Absolutely, thanks, Tom. We have 54 out-of-the-box extractors that we use to pull that inventory data out of your existing solutions, and we have a myriad of options to bring in home-grown sources, potentially, if you’ve got them in Excel or some other database. That aggregation process really isn’t that long to set up and stand up – typically two to four weeks – and you’re getting data not too far down the road thereafter. What we end up doing, after the aggregation and de-duplication, is normalize that good, clean data against Technopedia and enrich it. Finally, we pump that into a SQL or Oracle data cube, and that data cube is what’s pumped out to wherever else you consume the data, or to BDNA Analyze, like what you saw today. Again, the focus is that we’re not trying to replace where you get the data, and we’re not trying to replace where the data goes – we simply do the de-duplication and enrichment to deliver good quality data. I hope that answers the question. If not, please feel free to ask more.

Eric Kavanagh: That sounds good, folks. We have gone a bit over time here, but we always like to have a complete conversation, and the folks from BDNA just sent me this list. I’ve put the link in the chat window; you can see there’s a pretty comprehensive list of different connectors there.

So folks I have to tell you, we’re going to wrap up here. We do of course archive all these webcasts. You can go to InsideAnalysis.com. It typically goes up the next day. We’ll also pass on some of the detailed questions that the folks sent in to us. We’ll pass that on to the speakers today. Feel free to reach out to them or of course yours truly, you can hit me up on Twitter @eric_kavanagh or of course by email, ek@mobiusmedia.com or eric.kavanagh@bloorgroup.com.

Big thank you to our friends from BDNA. Big thank you to our friends at Marketry for helping us bring you this content, and of course big thanks to Techopedia and to Technopedia – because Techopedia is our media partner, a wonderful, wonderful website, go to Techopedia.com, and Technopedia is the site the folks at BDNA put together. So this is great material, folks. Thank you so much for your time and attention. We have a lot of webcasts coming up over the next couple of weeks. Hopefully, you won’t mind hearing my voice too much.

With that, we’re going to bid you farewell. Thanks again and we’ll talk to you next time. Take care folks. Bye, bye.