Rebecca Jozwiak: Ladies and gentlemen, hello and welcome to Hot Technologies. “Embed Everywhere: Enabling the Citizen Data Scientist” is our topic today. This is Rebecca Jozwiak filling in for your usual host, Eric Kavanagh. Yes, this topic is hot this year. The term “data scientist” in particular has been getting a lot of attention, even though we used to call them boring names like “statistician” or “analytics expert,” tackling pretty much the same type of activities, but it’s got a sexy new name and it’s garnering a lot of attention. They are highly desirable to have in the workplace, beneficial to the organization, and everybody wants one. But they are: 1) expensive, 2) hard to find. You know, it’s been all over the news about the data scientist skill shortage, yes, but still they offer tremendous value to the organization and people are clamoring to figure out how to get that value without having to drop the dime, so to speak.
But the good news is we are seeing tools and software coming out that are compensating for that shortage. We have automation, machine learning, and embedded analytics, which is what we’re going to be learning about today, and it’s given rise to this new term, “the citizen data scientist.” And what does that mean? No, it’s not your trained data scientist; it could be your business user, your BI expert, someone from IT, someone who does have the background but maybe not necessarily the expertise. What these tools and this software do is give more people access to those smart solutions, even though they might not know the deep coding. And it just helps improve performance overall when you give everyone a little more access to that analytical thought. You don’t have to have the training necessarily to have the type of curiosity that can lead to good insights for your company.
Discussing that with us today is our own Robin Bloor, chief analyst at the Bloor Group; Dez Blanchfield, one of those elusive data scientists himself, calling in; and then we have David Sweenor from Dell Statistica, who will be giving us a presentation today. And with that, I’m going to pass it over to Robin Bloor.
Robin Bloor: Okay, thanks for that introduction. I kind of thought about this in a historical context. What we’re actually looking at here is one of Leonardo da Vinci’s designs for a kind of glider that a man could put on his back. I have no idea whether it would actually work. I wouldn’t get into it, I have to say. However, whenever I think about da Vinci, I think of him as one of the most inquisitive and analytical people that’s ever existed. And it’s quite clear, if you just look at that glider, that it’s designed on the basis of a bird’s wing, and he has in one way or another studied the flight of birds in order to build it.
If we take the historical perspective – I actually looked this up – analytics is perhaps the oldest application of mathematics. There are censuses that date back at least to Babylonian times. We know about this because there’s basically some cuneiform tablets that have data like that on them. It’s not known if there was anything that went back any earlier. But the obvious thing is you’ve got yourself a civilization with a large population of people, it actually requires planning and it’s worth knowing what you’re planning for and what the requirements of those people actually are.
And that’s kind of where it began, and it’s also where computing began, because the early computers, the early mechanical computers, were actually, I think the first was the tabulating machine Hollerith created for the census, and his company became IBM, I believe. All of this has moved forward. There’s been some kind of interlude between perhaps the 1970s and the present day, where there were a vast number of other applications and analytics, you could say, took a back seat. Yes, there was analytics going on – it was happening in large organizations, particularly banks and insurance companies, and actually General Electric and telcos and things like that – but it wasn’t generally used throughout business, and now it’s starting to get used generally throughout business. And it’s changed the game, really. The first thing I thought I’d draw attention to is the data pyramid, which I particularly like. This is, I mean, I drew one of these 20 years ago, at least 20 years ago, to try and understand, really, at the time, BI and some of the early data mining that was being done. What I’ve defined here is the idea of data, and the examples are signals, measurements, recordings, events, transactions, calculations, aggregations, individual points of information. You might think of them as molecules of information, but they’re individual points. It becomes information as soon as it gets context. Linked data, structured data, databases, visualization of data, plots, schemas, and ontologies – they all qualify in my mind as information, because what you’ve done is aggregate a lot of variety together and created something much more than a data point, something that actually has a shape, a mathematical shape.
Above that we have knowledge. By examining information, we can learn that there are various patterns, and we can leverage those patterns by formulating rules, policies, guidelines, procedures, and then it takes the form of knowledge. And pretty much all computer programs, whatever they’re doing, are knowledge of a kind, because they’re working against data and applying rules to it. We have these three layers and there’s an increasing refinement that goes on between the layers. And on the left-hand side of this diagram you’re shown new data entering in, so a lot of these things are not static. The data is accumulating, information is accumulating and knowledge is potentially growing. At the top, we have “Understanding,” and I would maintain, although it is a philosophical argument, that understanding resides only in human beings. If I’m wrong about that, then we will all be replaced by computers at some point in time. But rather than have the debate, I’ll go on to the next slide.
When I looked at this, and this is something recent, the interesting thing was to try and figure out what analytics actually was. And eventually, by drawing various diagrams and ending up with one that looked like this, I came to the conclusion that, in actual fact, analytics development is really just software development with an awful lot of mathematical formulae. Analytical exploration is a little different to software development in the sense that you would actually take many, many different models and investigate them in order to generate new knowledge about data. But once you’ve generated it, it gets implemented either in what I think of as passive decision support, which is information just fed up to a user; or interactive decision support, which is things like OLAP, where the user is given a structured set of data which they can investigate and deduce things from for themselves using the various tools available, and a lot of visualization is like that. And then we have automation: if you can just turn some analytical insight that you’ve gathered into a set of rules that can be implemented, you don’t necessarily need a human being to be involved. That’s the way that I looked at it when I did all of that. And various things started to occur to me. Once an area of activity, shall we say, once a domain of data is actually mined, thoroughly mined, thoroughly explored from every possible direction, eventually it just becomes crystallized BI. The knowledge that’s invented starts to become knowledge that informs various users in various ways, and increases their ability, hopefully, to actually do the work that they do.
One of the things that I noticed, and I’ve looked at predictive analytics for about five years, is that predictive analytics is becoming BI, in the sense that it’s just turning into useful information to feed to people. And as I’ve already pointed out, there’s automated BI, reporting BI, explorative BI, very different gradations of it, and predictive analytics is actually going in all three directions. And the analytical process, as I pointed out, is not that different to software development, just done by different people with slightly different skills. I suppose I should emphasize that the skills required to make a really good data scientist take years to acquire. They’re not easily acquired and not a large number of people can do it, but that’s because it involves understanding mathematics at a very sophisticated level in order to know what’s valid and what’s not valid. Analytics development is the discovery of new knowledge; analytics implementation is about making the knowledge operational. That’s the kind of backdrop that I see to the whole of analytics. It’s a huge area and there are many, many dimensions to it, but I think that generalization applies to everything.
Then there’s the business disruption. As I mentioned, there are a number of organizations, pharmaceutical companies are another one, that have analytics in their DNA. But there are many organizations that really don’t have it in their DNA, and now they have the ability, now that the software and the hardware are far less expensive than they used to be, to exploit it. I would say a number of things. The first thing is that analytics is, in many instances, R&D. You might be just applying analytics to a specific area of the organization and it might seem mundane that you are, in one way or another, analyzing the customer orders yet again from various perspectives, joining it with other data. But analytics actually creates the possibility to look at the organization as a whole and to pretty much analyze any particular activity that’s going on within the organization and whole chains of activities. But once you actually move into that area, I would maintain that it’s research and development. And there’s a question that I’ve been asked a couple of times, which is, “How much should a company spend on analytics?” And I think the best way to think about providing an answer to that is to think of analytics as R&D, and just ask, “Well, how much would you spend on R&D in the area of the efficiency of the business?”
And the businesses that are not [inaudible] with analytics, there’s a lot of things that they don’t know. First of all, they don’t know how to do it. Normally, if they’re actually going to in one way or another adopt analytics within the organization, they really pretty much have no option but to go to a consultancy that can assist them in doing that, because it would be impossible, or really very difficult, for most businesses to actually hire a data scientist: finding one, paying for one, and actually trusting them to do what you want them to do. Very difficult. Most businesses don’t know how to hire or educate staff to actually do this work, and the reason for that is simply that it isn’t in their DNA yet, so it isn’t part of their natural business processes. This feeds into the next point: they don’t know how to make it a business process. The best way to do that, by the way, is to copy what pharmaceutical companies and insurance companies do, and some companies in the health care sector; just look at the way that they use analytics and copy it. Because it is a business process. They don’t know how to police it or audit it. That really matters, especially now that an awful lot of software companies have created products that automate an awful lot of analytics. The point about auditing is important. When you have a consultancy or somebody on site that can be trusted to understand what the results of any analytical calculation are, that’s a kind of choice you have to make, but if you put really powerful analytical tools into the hands of people who don’t properly understand analytics, they’re likely to jump to conclusions that might not be correct. And as I said, companies don’t know how to budget for it.
These are flavors of analytics; I’ll just run through them. Statistical analytics and statistical modeling is significantly different to predictive analytics, most of which, by the way, is curve-fitting. Machine learning is different to those things; path analytics and time series, which are basically done on data streams, are different again. Graph analytics is different again, and text analytics and semantic analytics are different again. This is just pointing out that this is a very multi-genre thing. It isn’t that you just start doing analytics; you start by looking at the problems you have and looking for the various tools and various flavors of analytics that will suit those. And finally, the net net: because of hardware and software evolution, in my opinion analytics is in its infancy. There’s much, much more yet to come and we’ll see it unfold in the coming years. I think I can pass the ball to Dez now.
Dez Blanchfield: Yeah, talk about a tough act to follow, Robin. I’m going to visit this topic briefly from one of my favorite angles, which is the angle of the human. There are so many changes taking place in our everyday lives. One of the greatest disruptions in our day-to-day lives, currently in my view, is just everyday work. Turning up to work and trying to do the job you’re hired to do, with the increasing expectation that you are going to go from an everyday person to a superhero, and the amount of information that’s flowing around organizations and being emitted very, very quickly, it’s a significant challenge, and more and more we’re having to provide better and better tools to people to try and cope with the flow of knowledge and information, and so I thought I’d try and come at this from a little bit of a fun angle. But it always strikes me how we’ve got this hive mind or flash mobs and so forth, that are sort of driving us towards what we talk about as analytics, but really what we’re talking about is making information available to people, and allowing them to interact with it and do it in such a way that it’s natural and it feels normal.
And in fact, it reminds me of a YouTube video of a young child, a little baby, sitting on the floor playing with an iPad, and it’s flapping around and pinching and squeezing and moving the images around and playing with the screen, the data on there. And then the parent takes the iPad away and puts a magazine, a printed magazine, on the child’s lap. And this child’s probably no more than two years old. The child starts to try and swipe the “screen” of the magazine, and pinch and squeeze, and the magazine doesn’t respond. The child lifts its finger up and looks at it and thinks, “Hmm, I don’t think my finger’s working,” and it pokes itself in the arm and thinks, “Ah no, my finger’s working, I can feel my arm and that looks good,” and it wriggles the finger, and the finger wriggles and responds. Yes. Then it tries to interact with the magazine again, and lo and behold, it doesn’t pinch and squeeze and scroll. Then they take the magazine away and put the iPad back in its lap, and all of a sudden the thing works. And so here’s a baby who’s come along and been trained to use an analytical tool or a live streaming tool for entertainment, and it can’t work out how a magazine should work and how to flip pages.
And that’s an interesting concept in itself. But when I think about knowledge moving around organizations, and the way data flows and the way that people behave, I often think about this concept of what people have learned to be a flash mob, which is an event where, and social media makes this even easier to do, an idea goes out such as “go to this place at this time and date and do this action,” or “watch this video and learn these dances,” or “wear this colored hat and point north at one o’clock.” And you push this out through your network, and invariably a whole load of people, hundreds of them, turn up in the same place at the same time and do the same thing, and there’s this wow factor, this, “Holy cow, that was really impressive!” But actually it’s a really simple idea, and a simple concept just being pushed out through our networks, and we get this outcome which is a visually stunning and audibly impressive thing. And when you think about an organization, the way we want people to behave and the way we want them to deal with information systems and customers, it’s often that simple, it’s an idea or a concept or a cultural or behavioral trait we try to pass through and empower with tools and information.
And underpinning all that is this mantra that I’ve had for over two and a half decades, and that is, if your staff can’t find what they need to do their job, be it tools or information, invariably they will reinvent the wheel. And so this is an ever-increasing challenge now, where we’ve got lots of knowledge and lots of information and things moving very quickly, that we want to stop people reinventing the wheel. And when we think about our working environment, coming back to the people angle, which is one of my favorites, I was amazed that we were ever surprised that cubicles weren’t a conducive environment for good outcomes, where we lined things up like these horrific pictures here, and it hasn’t changed much, we’ve just lowered the walls and called them open working spaces. But in the middle, with the yellow loop around them, there are two people exchanging knowledge. And yet, if you look at the rest of the room, they’re all sitting there dutifully banging away, putting information into a screen. And more often than not, not really exchanging knowledge and data, and there’s a range of reasons for that. But the interaction in the middle of the floor on the left there, in the yellow circle, there are two people chatting away, swapping knowledge, and probably trying to find something, trying to say, “Do you know where this report is, where I can find this data, what tool do I use to do this thing?” And it probably hasn’t worked, so they’ve got nothing, and wandered across the floor, broken the rule of cubicle office space and done it in person.
And we’ve had similar environments around the office that we jokingly poke fun at, but the reality is they’re quite powerful and effective. And one of my favorites is the mobile or fixed analytics platform called the water cooler, where people get up and chit-chat and swap knowledge, and compare ideas and perform analytics while standing at the water cooler, swapping ideas. They’re very powerful concepts when you think about them. And if you can translate them to your systems and tools, you get an amazing outcome. And we’ve got the all-time favorite, which is essentially the office’s most powerful data distribution hub, otherwise known as the reception desk. And if you can’t find something, where do you go? Well, you walk to the front of the office and you go to reception and say, “Do you know where x, y, z is?” And I dare anybody to tell me that they’ve not done that at least once in a new job, or at one point in time when they just can’t find something. And you’ve got to ask yourself, why is that the case? It should be somewhere on the intranet or some tool or whatever. It should be easy to find.
And so when it comes to data and analytics and the tools we’ve provided our staff to do their job, and the way that humans interact with jobs, I’ve got the view that prior to the recent emergence of analytics tools and big data platforms, or “data processing” as we called it in the old school, reporting and knowledge sharing was far from dynamic or collaborative or open. And when you think about the type of systems we expect people to do their jobs with, we had the classical systems, what people call legacy now, but the reality is that they’re only legacy because they’ve carried on and they’re still here today, and therefore they’re not really legacy. But traditional HR systems and ERP systems – human resource management, enterprise resource planning, enterprise data management, and systems that we use to manage the information to run a company – it’s invariably siloed. And at the top end, simple platforms like departmental intranets, trying to communicate where things are and how to get them and how to interact with the knowledge around the place. We pop that up on our intranet. It’s only as good as the people who make the time and effort to put it up there; otherwise it just gets left in your head. Or you’ve got data sitting all the way at the bottom of the food chain, on the corporate SANs and everything in between, so the storage area networks are full of files and data, but who knows where to find it?
More often than not, we’ve built these closed data platforms or closed systems, and so people have reverted to the likes of spreadsheets and PowerPoints to pass information around the place. But there was an interesting thing that took place recently, in my mind, and that was that mobile devices and the internet in general woke us up to the idea that things could actually be better, and predominantly in the consumer space. And it’s an interesting thing that in everyday life we started to have things like internet banking. We didn’t have to go to an actual bank physically to interact with them; we could do it by phone. Originally that was clunky, but then the internet came around and we had a website. You know, and how many times have you actually been to your bank lately? I actually cannot; I had a conversation about this the other day, and I actually cannot remember the last time I went to my bank, which I was quite shocked by. I thought I must be able to recall this, but it was so long ago I actually can’t remember when I went there. And so we now have these gadgets in our hand in the form of mobiles and phones, tablets and laptops; we’ve got networks and access to tools and systems, and in the consumer space we’ve learned that things can be better. But because that rapid change in the consumer space has been matched by more lethargic and glacial change inside enterprise environments, we haven’t always taken that change to day-to-day working life.
And I love poking fun at the fact that you can’t live stream data to hardcopy. In this image here there’s a person sitting looking at some analytics that’s been performed, and there’s a beautiful graph that’s been produced by somebody who’s probably being paid a lot of money as a statistician or an actuary, and they’re sitting there trying to do analytics on a hardcopy and poking at it. But here’s the frightening thing for me: these people in this meeting room, for example, and I’ll use this as an example, they’re interacting with data that’s now historical. And it’s as old as when that thing was produced and then printed, so maybe it’s a week-old report. Now they’re making decisions on not so much bad data but old data, which invariably can be bad data. They’re making a decision today based on something that’s historical, which is a really bad place to be. We’ve managed to replace that hardcopy with the likes of tablets and phones because we worked out very quickly in the consumer space, and now we’ve worked it out in the enterprise space, that real-time insight is real-time value.
And we’re getting better and better at that. And it brings me to the point that Robin raised earlier, which was the concept of the citizen data scientist and the drive of this concept. To me, a citizen data scientist is just a regular person with the right tools and information on the likes of an iPad. They don’t have to do the maths, they don’t have to know the algorithms, they don’t have to know how to apply the algorithms and rules to data, they just need to know how to use the interface. And that brings me back to my introduction and the concept of the toddler sitting there with an iPad versus a magazine. The toddler can very quickly, intuitively learn how to use the interface of an iPad to dive into information and interact with it, albeit maybe a game or streaming media or a video. But it couldn’t get the same response or interaction from a magazine, beyond just flipping page after page, which is not very engaging, particularly if you’re a toddler that’s grown up with iPads. Invariably, human beings can look and learn very quickly how to drive tools and things if we just provide them, and if we provide them with an interface like mobile devices, and particularly tablets and smartphones with large enough screens, and particularly if you can interact with them by touch, with finger motions, all of a sudden you get this concept of a citizen data scientist.
Someone who can apply data science with the right tools, but without actually having to know how to do it. And in my mind a lot of this, as I said, was driven by consumer influence that moved and transformed into demand in the enterprise. A couple of really quick examples. Many of us started to do things with our blogs and websites, such as putting in little ads or looking at tracking and movement; we used tools like Google Analytics and we were woken up to the fact that in our blogs and little websites, we could put little bits of code in there and Google would give us real-time insights into who’s visiting the website, when and where and how. And in real time we could actually see people hit the website, go through the pages and then vanish. And it was quite astonishing. I still love doing that; when I try to explain real-time analytics to people I dumb it down to just showing them a website with Google Analytics plugged in, actually seeing the live interaction with people hitting websites, and I ask them, “Imagine if you had those kinds of insights into your business in real time.”
Take a retail example, maybe a pharmacy, I think you call it a drug store in America, where you walk in and buy everything from headache tablets through to sun cream and hats. Trying to run that organization without real-time information is a scary concept now that we know what we know. For example, you can measure foot traffic. You can put devices around the store with a smiley face on one side of the screen if you’re happy, an unhappy red face on the far right, and some different shades in the middle. There’s a platform called “Happy or Not” these days, where you walk into a shop and you can bang a happy face or a sad face, depending on your live customer sentiment feedback. And that can be interactive in real time. You can get live demand-driven pricing. If there are lots of people in there, you can drive the prices up a little bit, and you can do stock availability and tell people; airlines, for example, will tell people how many seats are available now on the website when you’re booking a flight, so you don’t just randomly dial in and hope you can turn up and get a flight. Live HR data: you can tell when people are clocking on and clocking off. Procurement: if you’re in procurement and you’ve got live data, you could do things like wait for an hour and hedge against the price of the U.S. dollar to buy your next load of stock, and have a truckload of things turn up.
When I show people Google Analytics and I relay that kind of anecdote, this eureka moment, this “a-ha!” moment, this lightbulb goes off in their mind: “Hmm, I can see lots of places where I could do that. If only I had the tools and if only I had access to that knowledge.” And we’re seeing this now in social media. Anyone that’s a savvy social media user, other than just showing pictures of their breakfast, tends to look at how many likes they’re getting and how much traffic they’re getting and how many friends they’re getting, and they do that with the likes of, say, Twitter as an analytics tool. You can go to Twitter.com to use the tool, or you can type “Twitter analytics” into Google, or click on the top-right button and pull down the menu and do it there, and you get these pretty, live graphs that tell you how many tweets you’re sending and how many interactions you’re getting with them. And that’s real-time analytics just on your personal social media. Imagine if we had the likes of Google Analytics and Facebook and LinkedIn and Twitter and eBay stats coming at you, but in your work environment.
Now that we’ve got the live sort of web and mobile at our fingertips, it becomes a powerful concept. And so that draws me to my conclusion, and that is that invariably I’ve found that organizations that leverage tools and technology early gain such a significant advantage over their competitors that competitors may actually never catch up. And we’re seeing that now with the concept of the citizen data scientist. If we can take people with the skills and the knowledge that we hired them for, and we can give them the right tools, particularly the ability to see real-time data and discover data and know where it’s at, without having to walk around the cubicles and ask questions out loud, without having to go and stand at the water cooler to do some comparative analytics with people or go and ask reception where the index is. If they can do that at their fingertips and they can take it to their meetings with them and sit in a boardroom flicking through screens in real time rather than hardcopy, all of a sudden we’ve empowered our staff, who don’t need to be actual data scientists, to actually use data science and drive amazing outcomes for organizations. And I think we’ve actually passed this tipping point now where the consumer drive has moved into the enterprise; the challenge is how we provide that in the enterprise, and that’s the theme, I guess, of today’s discussion. And with that, I’m going to wrap up my piece and hand over to hear how we might solve that. David, over to you.
David Sweenor: All right, well, thank you so much guys, and thank you Robin. You know, Robin, I agree with your original assessment. The analytic process is really no different than software development. I think the challenge within an organization is just, you know, maybe things aren’t as well defined, perhaps there’s an exploratory component to it, and a creative component to it. And Dez, you know, I agree with you, there is a lot of reinventing the wheel, and there’s not an organization that I go into today where you don’t question, well, why are you doing it this way? Why does the business run this way? And it’s easy to question, and a lot of times when you’re within an organization, it’s hard to change. I love the analogy, the consumerization of things. And so now when I go to the airport and want to change my seat, I do it on my cellphone. I don’t have to go up to the agent at the booth and watch that agent type something in on a monochrome monitor for 15 minutes to change my seat assignment. I just prefer to do it on my phone, and so it’s an interesting development.
Today, we’re going to talk a little bit about collective intelligence. For those who aren’t aware, Statistica is a leading-edge analytics platform that’s been around for over 30 years. If you look at any of the publications out there in the analyst industry, it always comes out on top as one of the most intuitive and easy-to-use advanced analytics software packages. So we’ve spent the past few years working on a concept called collective intelligence, and we’re taking it to the next level. I wanted to start this conversation with: how does work get done in your organization?
And there’s two images here. The one on the left is an image from the 1960s, and I did not start my career in the 1960s, but the image on the right is – that’s a semiconductor factory where I started working. And I worked up in that black building, black rooftop up in the upper left. But they made semiconductor stuff. This is a recent picture from Google Images. But when you go back to the 1960s image on the left, it’s very interesting. You have these people sitting in a line, and they’re making, you know, integrated circuits and semiconductors. But there is a standardization, there is a standard way to do things, and there was a well-defined process. You know, perhaps since these people are all sitting in an open environment, maybe there was some collaboration. I think that we’ve lost a little bit of that within the knowledge workforce.
When I sat in that building in the upper left, if I wanted to collaborate with somebody, it wasn’t open. There were these offices, perhaps some of the team was remote, or perhaps I had to trek across the campus; it was a 25-minute walk, and I’d have to go talk to somebody in the building on the far right. I think we lost something along the way. And so, you know, I had that same thought: why do people – how many people – keep reinventing the wheel within your organization? I think organizations as a whole did a good job in the 1990s and 2000s with CRM and data warehousing, and to an extent BI. For some reason, analytics has lagged a little bit. There were significant investments in data warehousing, and standardizing and normalizing your data, and all this, and CRM, but analytics has lagged for some reason. And I’m wondering why. Maybe there’s a creative element – maybe your process is not well-defined, maybe you don’t know what decision or lever you’re trying to turn, you know, in your business to change things. When we go into organizations today, there are a lot of people doing things very manually in spreadsheets.
And you know, I looked at a stat this morning; I think it said 80 to 90 percent of spreadsheets have errors, and some of these can be very significant. Like the “London Whale,” where JPMorgan Chase lost billions and billions of dollars, partly due to spreadsheet errors. So I have the premise that there has to be a better way to get things done. And as we mentioned, we have these data scientists. These guys are expensive, and they’re hard to find. And sometimes they’re a bit of an odd duck. But I think, you know, if I had to sum up what a data scientist is, it’s probably someone who understands the data. I think it’s someone who understands the math, someone who understands the problem. And really, someone who can communicate the outcomes. And if you are a data scientist, you’re very lucky these days, because your salary has probably doubled in the past few years.
But truth be told, a lot of organizations don’t have these data scientists, but your organization does have smart people. You have an organization with a lot of smart people, and they use spreadsheets. You know, statistics and mathematics is not their primary job, but they use data to drive the business forward. Really, the challenge that we’re addressing is: if you’re lucky enough to have a data scientist or a statistician or two, how can you take them, and how can you improve the collaboration between those folks and the other individuals within your organization? If we take a look at how our organization is structured, I’m going to start, and I’m going to go from right to left. And I know this is backwards, but we have this line of business users.
This is the bulk of your knowledge worker population, and for these folks, you need to embed analytics in your line of business applications. Perhaps they’re seeing analytic output on a call center screen or something, and it’s telling them the next best offer to give to a customer. Maybe it’s a consumer or supplier on a web portal, and it instantly gives them credit, or things like that. But the idea is, they’re consuming analytics. If we go to the middle, these are these knowledge workers. These are the people that are doing things with the spreadsheets today, but spreadsheets are error-prone and at some point they run out of gas. These citizen data scientists, as we call them, you know, what we’re trying to do for them is really increase the level of automation.
And you hear with analytics that 80 to 90 percent of the work is in the data prep piece; it’s not the actual mathematics, it’s the data prep. We’re trying to automate that, however you do that, and we have wizards and templates and reusable things, and you don’t really have to have knowledge of the underlying infrastructure within your environment. And then if we look at the far left, we have these data scientists. And like I mentioned, they are in short supply. And what we’re trying to do to make them more productive is allow them to create things that these citizen data scientists can use. Think of it like a Lego block, so these data scientists can create a reusable asset that a citizen data scientist can use. Build it once, so we don’t have to keep reinventing the wheel.
And then also, these guys may be worried about if we can do things in database, and leverage the existing technology investments that your company has made. You know, it doesn’t make sense in this day and age to shuffle data to and fro all throughout the world. So if we look at Statistica, like I mentioned, it’s a platform that’s been around for quite a long time. And it is a very innovative product. Data blending, there hasn’t been a data source that we can’t access. We have all the data discovery and visualization things that you would expect; we can do it in real time. And it probably has – I think there’s over 16,000 analytical functions within the software tool, so that’s more math than I ever could use or understand, but it’s there if you need it.
We have the ability to combine both business rules and analytic workflows to really make a business decision. You’re going beyond just “here’s an algorithm, here’s a workflow”; you have business rules that you always have to deal with. We’re very secure in governance. We are used by a lot of pharmaceutical clients, and the FDA trusts us. You know, it’s just proof in the pudding that we have the controls and auditability to be accepted by them. And then lastly, you know, we are open and flexible and extensible, and you need to create a platform that is that: you want your data scientists to be productive, you want your citizen data scientists to be productive, and you want to be able to deploy this analytic output to the workers within your organization.
If we take a look at it, here’s an example of some of the visualizations, showing how you can distribute your analytic output to line-of-business users. The first example, on the left, is a network analytic diagram. And perhaps you’re a fraud investigator, and you don’t know how these connections are made, and these can be people, these can be entities, these can be contracts, anything really. But you can manipulate this with your mouse, and interact with it to really understand – if you’re a fraud investigator, to understand a prioritized list of who to go investigate, right, because you can’t talk to everybody, so you have to prioritize.
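To make that prioritization idea concrete, here is a minimal Python sketch using the networkx library; the accounts and connections are entirely hypothetical, and it only illustrates the general technique of ranking entities by network centrality, not the workings of any particular fraud product.

```python
import networkx as nx

# Hypothetical link data: which accounts (or people, entities, contracts) interact
edges = [("acct_A", "acct_B"), ("acct_B", "acct_C"), ("acct_B", "acct_D"),
         ("acct_D", "acct_E"), ("acct_C", "acct_E"), ("acct_F", "acct_A")]
G = nx.Graph(edges)

# Rank entities by centrality; the most connected nodes float to the top
# of the investigator's list, since you can't talk to everybody.
centrality = nx.pagerank(G)
priority_list = sorted(centrality, key=centrality.get, reverse=True)
print(priority_list)  # e.g. investigate the hub account before the peripheral ones
```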
If we look at the image on the right side there, for a predictive maintenance dashboard, this is a really interesting problem. Perhaps you’re an owner of an airport, and you have these body scanners in there. These body scanners, if you go to an airport, there are some components in there that have about a nine-month shelf life. And these things are really, really expensive. If I have multiple entry points, multiple scanners in my airport, number one I want to make sure I’m staffed appropriately at each of the gates, and for the parts that are in the scanners, I don’t want to order them too early, and I do want to have them before it breaks down. We have capability, maybe if you own an airport, to be able to predict when these things will break and predict staffing levels.
If we look at the lower right, this is if you’re in a manufacturing environment; this is just a graphical representation of the manufacturing flow. And it’s slightly hard to see, but there are red and green traffic lights on these various process sectors, and so if I’m an engineer, there’s very sophisticated mathematics going on in there, but I can drill down into that particular process sector and look at the parameters and inputs that may be causing it to be out of control. If we look at our citizen data scientist, our goal really is to make it easy for the citizen data scientist. We have wizards and templates, and one thing I think is really interesting is we have this automated data health check node. And really what this does is, it has built-in smarts.
I mentioned data prep – it takes a significant amount of time, that’s both in data aggregation and preparing it. But let’s assume I have my data, I can run it through this data health check node, and it checks for invariance, and sparseness, and outliers, and all these things, it fills in missing values and it does a lot of math I don’t understand, so I can either accept the defaults, or if I’m a little more clever, I can change them. But the point is, we want to automate that process. This thing does about 15 different checks and outcomes on a cleansed data set. What we are doing is making this easier for people to create these workflows.
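As a rough sketch of what an automated health check like that might do, here is a short Python example using pandas; the checks, thresholds, and column handling are simplified assumptions for illustration, not a description of Statistica’s actual node.

```python
import pandas as pd

def data_health_check(df: pd.DataFrame, outlier_z: float = 3.0) -> pd.DataFrame:
    """Illustrative health check: flag invariant columns, sparseness (missing
    values) and outliers, then fill gaps with simple defaults."""
    report = {}
    for col in df.columns:
        series = df[col]
        report[col] = {
            "missing_pct": round(series.isna().mean() * 100, 1),  # sparseness
            "is_invariant": series.nunique(dropna=True) <= 1,     # no variation
        }
        if pd.api.types.is_numeric_dtype(series):
            std = series.std(ddof=0)
            if std:  # z-score based outlier count
                z = (series - series.mean()) / std
                report[col]["outliers"] = int((z.abs() > outlier_z).sum())
            df[col] = series.fillna(series.median())              # impute numerics
        else:
            mode = series.mode()
            df[col] = series.fillna(mode.iloc[0] if not mode.empty else "missing")
    print(pd.DataFrame(report).T)  # summary the user can accept or override
    return df

# Hypothetical usage: clean = data_health_check(pd.read_csv("orders.csv"))
```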
This is where we’re talking about collaboration between the data scientists and citizen data scientists. If we look at these images on the right, we see this data prep workflow. And maybe this is very sophisticated, maybe this is your company’s secret sauce, I don’t know, but we know somebody within your organization can access one or more of these data silos that we have. We need a way to, number one, grab them and stitch them together, and number two, maybe there’s special processing we want to do that goes beyond our data health check, and that’s your company’s secret sauce. I can create this workflow within our organization, and it collapses into a node. You see the arrow pointing down; it’s just a node, and we can have a hundred of these things within an organization. The idea is, we have people that know something about a certain space, they can create a workflow, and someone else can reuse it. We’re trying to minimize the reinvention of the wheel.
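To show what “collapsing a workflow into a reusable node” might look like at its simplest, here is a small Python sketch; the file names, join key, and “secret sauce” step are made-up stand-ins for whatever sources and processing your own team would wrap up.

```python
import pandas as pd

def customer_prep_node(orders_path: str, crm_path: str) -> pd.DataFrame:
    """Hypothetical reusable prep 'node': stitches two data silos together and
    applies the team's special processing in one place, so colleagues can call
    it without knowing anything about the underlying sources."""
    orders = pd.read_csv(orders_path)                     # silo 1
    crm = pd.read_csv(crm_path)                           # silo 2
    df = orders.merge(crm, on="customer_id", how="left")  # stitch them together
    # the "secret sauce" step lives here once, instead of in many spreadsheets
    df["order_value_band"] = pd.cut(df["order_value"],
                                    bins=[0, 50, 200, float("inf")],
                                    labels=["low", "mid", "high"])
    return df

# Anyone in the organization reuses it as a single step:
# features = customer_prep_node("orders.csv", "crm_extract.csv")
```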
And we can do the same thing with analytic modeling workflows. In this case on the right, this workflow, maybe there’s 15 different algorithms, and I want to pick the best one for the task. And I don’t have to understand as a citizen data scientist what’s going on in that spider web mess up there, but it just collapses into a node, and maybe that node simply says, “calculate credit risk score.” “Calculate the chance of a surgical site infection,” what have you. “Calculate the probability of something being a fraudulent transaction.” As a citizen data scientist, I can use this very sophisticated mathematics that someone else has built, maybe one of these data scientists has built within my organization.
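Here is a minimal Python sketch, using scikit-learn, of the kind of thing such a node hides: a data scientist compares a few candidate algorithms and hands back one “calculate credit risk score” function. The candidate models, scoring metric, and data assumptions are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def build_credit_risk_node(X, y):
    """Try several algorithms, keep the best by cross-validated AUC, and
    return a single scoring function a citizen data scientist can call."""
    candidates = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=200, random_state=0),
        GradientBoostingClassifier(random_state=0),
    ]
    scores = [cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
              for m in candidates]
    best = candidates[int(np.argmax(scores))]
    best.fit(X, y)

    def calculate_credit_risk_score(new_X):
        # probability of the positive class (e.g. default) for each new row
        return best.predict_proba(new_X)[:, 1]

    return calculate_credit_risk_score

# Hypothetical usage:
# score = build_credit_risk_node(X_train, y_train)
# risk = score(X_new)
```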
From a data science perspective, you know, I’ve talked to data scientists who love to write code, and I’ve talked to data scientists who hate to write code. And that’s fine, so we have a very visual, graphical user interface. We can grab our data, we can do our automated data health check, and maybe I want to write code. I like Python, I like R, but the idea is, these data scientists are in short supply, and they like to code in a particular language. We don’t particularly have a preference for what language you want to code in, so if you want to do R, do R; if you want to do Python, do Python. That’s great. If you want to burst your analytics to Azure, burst your analytics to the cloud. And so the goal here is really to offer flexibility and options to make your data scientists as productive as they can be.
Now data scientists, they’re pretty smart people, but maybe they’re not a specialist in everything, and maybe there’s some gaps in what they can do. And if you look out within the industry, there’s a lot of different analytic marketplaces that exist out there. This is an example of, maybe I need to do image recognition and I don’t have that skill, well maybe I go out to Algorithmia and get an image recognition algorithm. Maybe I go out to Apervita and get a very special healthcare algorithm. Maybe I want to use something in the Azure machine learning library. Maybe I want to use something in the native Statistica platform.
Again, the idea here is we want to leverage the global analytics community, because you’re not going to have all the skills within your four walls. So how can we create software – and this is what we’re doing – that allows your data scientists to use algorithms from a variety of marketplaces? We’ve been doing it with R and Python for a long time, but this is extending that to these app marketplaces that exist out there. And the same as you see here at the top of this, we’re using H2O on Spark, so there are a lot of analytic algorithms there. You don’t have to focus on creating these from scratch; let’s reuse the ones that live in the open source community, and we want these people to be as productive as possible.
The next step, after we have our citizen data scientists and our data scientists, is really: how do you promote and distribute these best practices? We have technology within our software that allows you to distribute analytics anywhere. And this is more of a model management view, but no longer am I bound by the four walls or a specific installation within Tulsa or Taiwan or California, or what have you. This is a global platform, and we have many, many customers where it’s deployed and in use by multiple sites.
And so really, the key things are, if you’re doing something in Taiwan and you want to replicate it in Brazil, that’s great. Go in there, grab the reusable templates, grab the workflows that you want. This is trying to create those standards, and the common way of doing things, so we’re not doing things completely different everywhere. And the other key component of this, is really we want to take the math to where the data lives. You don’t have to shuffle data between, you know, California and Tulsa and Taiwan and Brazil. We have technology that allows us to take the math to the data, and we’re going to have another Hot Technology webcast on that subject.
But we call this architecture, and here’s a sneak peek, Native Distributed Analytics Architecture. The key idea behind this is we have a platform, Statistica, and I can export an analytic workflow as an atom. And I could do a model, or an entire workflow, so that doesn’t matter. But I can create this, and export it in a language appropriate to the target platform. On the left side of this, a lot of people do this, but they do scoring in the source system. That’s fine, we can do scoring and we can do model building in database, so that’s interesting.
And then on the right side, we have Boomi. This is a companion technology; we work with all of these. But we can also take these workflows and essentially transport them anywhere in the world, to anything that has an IP address. And I don’t have to have Statistica installed on the public or private cloud. On anything that can run a JVM, we can run these analytic workflows, data prep workflows, or just models, on any of these target platforms. Whether it’s in my public or private cloud, whether it’s in my tractor, my car, my home, my lightbulb, my internet of things, we have technology that allows you to transport those workflows anywhere in the world.
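As a loose analogy for that “build once, score anywhere the data lives” idea, here is a small Python sketch that trains a workflow on one machine and scores it on another; it uses joblib serialization purely for illustration and is not how Statistica’s exported atoms or the Boomi transport actually work. File and column names are hypothetical.

```python
import joblib
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier

# --- Build side: train once and export the whole workflow as one artifact ---
train = pd.read_csv("training_events.csv")  # hypothetical numeric features + "label" column
pipeline = make_pipeline(SimpleImputer(strategy="median"),
                         RandomForestClassifier(n_estimators=100, random_state=0))
pipeline.fit(train.drop(columns=["label"]), train["label"])
joblib.dump(pipeline, "churn_model.pkl")

# --- Target side: any machine with a Python runtime can score locally, so the
# math travels to where the data lives instead of shuffling the data around ---
model = joblib.load("churn_model.pkl")
new_events = pd.read_csv("incoming_events.csv")
print(model.predict(new_events))
```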
Let’s review. You know, we have line-of-business users, and for these folks, we have technology that allows them to consume output in a format that they’re comfortable with. We have citizen data scientists, and what we’re trying to do is improve the collaboration, make them part of a team, right? And so we want people to stop reinventing the wheel. And we have these data scientists; there could be a skill gap there, but they can code in a language they want, and they can go to the analytic marketplaces and use algorithms there. And so with this, how could you not think that everything is awesome? This is perfect, this is what we’re doing. We’re building reusable workflows, we’re giving people instructions, we’re giving them the Lego blocks so they can build these mighty castles and whatever they want to do. To sum it up, we have a platform that does empower line-of-business users, citizen data scientists, and programmer data scientists; we can address any sort of IoT edge analytics use case; and we are enabling this notion of collective intelligence. With that, I think we’ll probably open it up for questions.
Robin Bloor: Well, okay. I think the first – I mean, to be honest, I’ve been briefed by Dell Statistica before, and to be honest I’m actually quite surprised at the things that I didn’t know that you brought up in the presentation. And I have to say that the one thing that’s been a bugbear for me with the adoption of analytics is that getting the tools isn’t the problem, you know? There’s an awful lot of tools out there, there are open source tools, and so on and so forth, and there are various, what I’d call, semi-platforms. But I think the difference that you have, I was particularly impressed with some of the workflow.
But the difference is you seem to provide end to end. It’s like analytics is a sophisticated business process that begins with the acquisition of data and then goes through a whole series of steps, depending upon how flaky the data is, and then it can branch out into a whole series of different mathematical attacks on the data. And then results emerge in one way or another, and those need to be actioned. There’s a tremendous amount of analytics I’ve come across where a lot of great work was done but there was no putting it into action. And you seem to have an awful lot of what’s required. I don’t know how comprehensive it is, but it’s way more comprehensive than I expected. I’m incredibly impressed with that.
I would like you to comment on spreadsheets. You’ve already said something, but one of the things that I’ve noted over the years, and it’s just become more and more apparent, is that there’s an awful lot of spreadsheets which are shadow systems. And really, I think the spreadsheet, I mean, it was a wonderful tool when it was introduced and it’s been wonderful ever since in lots of different ways, but it’s a generalized tool; it isn’t really fit for purpose. It certainly isn’t very good in the BI context and I think it’s awful in the analytics context. And I wondered if you had some comment to make about, let’s say, examples where Statistica has flushed out excessive spreadsheet use, or any comment you’d like to make about that?
David Sweenor: Yeah, I think that, you know, you can go look up famous spreadsheet mistakes. Google or whatever search engine you’re using will come back with a litany of results. I don’t think, you know, we will ever replace spreadsheets. That’s not our intention, but at a lot of organizations that I go to, there are a couple of these spreadsheet wizards or ninjas or whatever you want to call them, and they have these very sophisticated spreadsheets, and you have to think, what happens when these people win the lotto and they don’t come back? And so what we’re trying to do is, we know spreadsheets will exist, so we can ingest those, but I think what we’re trying to do is develop a visual representation of your workflow so it can be understood and shared with other people. Spreadsheets are pretty hard, pretty hard to share. And as soon as you pass your spreadsheet to me, I’ve changed it, and now we’re out of sync and we’re getting different answers. What we’re trying to do is put some guardrails around this and make things a bit more efficient. And spreadsheets are really terrible at combining multiple data sets together, you know? They fall down there. But we’re not going to replace them; we ingest them, and we do have people that are starting to shift, because if we have a node that says “calculate risk,” that’s what the person using the spreadsheet is trying to do, so those spreadsheets are gone.
Robin Bloor: Yeah, I mean, I would say that, you know, from one of the perspectives that I look at things, I’d say that spreadsheets are great for creating information. They’re even great for creating islands of knowledge, but they’re really bad for sharing knowledge. They have no mechanism for doing that whatsoever, and if you pass a spreadsheet on to someone, it’s not like you can read it like it’s an article that explained exactly what they’re doing. It’s just not there. I think, you know, the thing that impressed me most about the presentation and about Statistica’s capabilities, it does seem to be incredibly agnostic. But it’s got this thread running through it of workflow. Am I right in assuming that you could look at an end-to-end workflow right across, you know, from data acquisition all the way through to embedding results in particular BI applications or even running applications?
David Sweenor: Yeah, absolutely. And it does have that end-to-end capability, and some organizations use that wholly, and I’m under no illusion that any company these days buys everything from one vendor. We have a mix. Some people use Statistica for everything, some people use it for the modeling workflows, some people use it for the data prep workflows. Some people use it to distribute hundreds of engineering reports to engineers. And so we have everything in between. And it is really end-to-end, and it is, you know, an agnostic platform, in that if there are algorithms that you want to use in R or Python, Azure, Apervita, whatever, you know, use those. That’s great, be productive, use what you know, use what you’re comfortable with, and we have mechanisms to make sure those are controlled and auditable and all that sort of thing.
Robin Bloor: I particularly like that aspect of it. I mean, I don’t know if you can speak beyond what you’ve said to the wealth of what’s out there. I mean, I’ve looked at this but I haven’t looked at it in a comprehensive way, and certainly there’s a vast number of Python libraries and R libraries, but is there anything you can add to that picture? Because I think that’s a very interesting thing, you know, the idea that you would have components that were trustworthy, because you knew various people who’d created them and various people that were using them, that you could download. You know, can you enrich what you’ve already said about that?
David Sweenor: Yeah, I think some of the app marketplaces, you know, the algorithm marketplaces that are out there. For example, you know, Dr. John Cromwell at the University of Iowa has developed a model, used in real time while you’re being operated on, that will give you a score for whether you’re going to get a surgical site infection. And if that score’s high enough, they will take an intervention right in the operating room. That’s very interesting. So perhaps there’s another hospital that’s not as big. Well, Apervita is a health app marketplace for analytics. You can go find one in a lot of these app marketplaces and re-use it, and the transaction is between you and whoever owns that, or you can say, “Here’s what I need.” I think it’s harnessing that global community, because everybody is a specialist these days, and you can’t know everything. I think R and Python are one thing, but there’s this idea of, “I want to do this function; put a spec out there on one of these app marketplaces and have someone develop it for you.” And they can monetize that. I think that’s very interesting and very different than the purely open source model.
Robin Bloor: All right. Anyway, I’ll pass the ball to Dez. Would you like to dive in, Dez?
Dez Blanchfield: Absolutely, and I’d like to stay on the spreadsheet thing just for a moment, because I think it has captured the right gist of a lot of what we’re talking about here. And you made a comment, Robin, with regards to the transition from the old spreadsheets in their physical form to electronic form. We had an interesting thing take place where, you know, when spreadsheets were originally a thing they were just sheets of paper with rows and columns and you’d manually write things down, then you’d power through and calculate them, either by doing it off the top of your head or with some other device. But we still had the opportunity to have errors slip in with handwriting mistakes or dyslexia, and now we’ve replaced that with typos. The risk is that with spreadsheets the risk profile moves faster and gets larger, but I think tools like Statistica invert the risk pyramid.
I often draw this picture on a whiteboard of a human stick figure at the top, as one person, and then a collection of them down the bottom, let’s say ten of them at the bottom of that whiteboard, and I draw a pyramid where the point of the pyramid is at the single person and the foot of the pyramid is the collection of people. And I use this to visualize the idea that if one person at the top does a spreadsheet and makes a mistake and shares it with ten people, now we’ve got ten copies of the error. Be very careful with your macros and be very careful with your Visual Basic if you’re going to move to that. Because when we build electronic tools like spreadsheets, it’s very powerful, but it’s also powerful in a good and a bad way.
I think tools like Statistica bring about the ability to invert that risk profile, in that lots of tools are now available to the individual person at the top of the pyramid, while at the point of the now-inverted pyramid sits the actual tool, built by a team of people creating those tools and algorithms. The data scientist doesn’t need to be a specialist in regression analytics on their data. They might just use the tool, while five or six statisticians, an actuary, a data scientist and some mathematicians have worked on that tool, that module, that algorithm, that plug-in. In spreadsheet parlance, imagine that every published spreadsheet you could use was actually written by specialists who tested the macros, tested the Visual Basic and made sure the algorithms worked, so when you got it you could just pop data into it but you couldn’t actually break it, and therefore it’s better controlled.
I think a lot of the analytics tools are doing that. I guess the point of that is, are you seeing it out in the field now? Are you seeing the transition from spreadsheets that could potentially push out errors, mistakes and risk, to the tools you’re building on your platforms now, where the data discovery is accurate in real time and the people building the modules and algorithms are removing or reducing that risk profile? Are customers seeing that in a real sense, or do you think it’s just happening and they don’t realize it?
David Sweenor: You know, I think there are a couple of ways to answer this. What we’re seeing is, in any organization – and I mentioned that analytics has maybe lagged, from a corporate investment perspective, behind what we did with data warehousing and CRM – it takes a lot to change an organization, to get over that organizational inertia. But what we are seeing is people taking their spreadsheets, taking their workflows, and, as I mentioned with security and governance, saying, “Well, I have a spreadsheet; I can lock this down and I can version control it.” We see a lot of organizations that maybe just start there. If it’s changed, there’s a workflow, and I know, number one, who changed it, why they changed it and when they changed it. I can also set up a workflow such that I’m not going to put this new spreadsheet into production unless it’s validated and verified by one, two, three, however many parties you want to define in your workflow. I think people and organizations are starting to take baby steps there, but I would suggest we have a long way to go.
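To make that concrete, here is a minimal sketch in Python of the kind of promotion gate Dave describes. It is purely hypothetical and not Statistica’s actual API; the class and field names are invented, but it shows the idea of recording who changed a workflow, why and when, and refusing to promote it to production until the required reviewers have validated it.

# Hypothetical sketch only -- not Statistica's API.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ChangeRecord:
    author: str                      # who changed it
    reason: str                      # why they changed it
    timestamp: datetime              # when they changed it
    approvals: set = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        self.approvals.add(reviewer)

    def can_promote(self, required_reviewers: set) -> bool:
        # Only promote once every required party has signed off.
        return required_reviewers.issubset(self.approvals)

change = ChangeRecord("j.smith", "updated risk coefficients", datetime.now())
change.approve("qa.lead")
change.approve("risk.officer")
print(change.can_promote({"qa.lead", "risk.officer"}))   # True: safe to promote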
Dez Blanchfield: Indeed, and I think given you’re building both the security controls and the governance in there, the workflow can automatically map that in, all the way up to the chief risk officer, which is now a thing. You can start to control how those tools and systems are being accessed and who’s doing what with them, so that’s very powerful. I think the other thing that comes into this is that the types of tools you provide, for me, lend themselves to human behavior more than the traditional spreadsheets we’re talking about, in that if I’ve got a room full of people with the same dashboard and access to the same data, they can each get a different view and, as a result, slightly different insights from the same information, suited to their needs, so they can collaborate. We then have a more human view and interaction with the business and the decision-making process, as opposed to everyone going to the same meeting with the same PowerPoint, the same spreadsheets printed out, all the same fixed data.
Do you see a transition in behavior and culture in organizations that take up your tools now, where it’s not five people in the room looking at the same spreadsheet, trying to verbalize it and make notes on it, but instead actually interacting with the dashboards and the tools in real time, with visualization and analytics at their fingertips, and getting a completely different flow to the conversation and the interaction, not just in meetings but in general collaboration around the organization? Because they can do it in real time, because they can ask the questions and get a real answer. Is that a trend you’re seeing at the moment, or has that not quite happened yet?
David Sweenor: No, I think it’s definitely started down that path, and I think the very interesting thing is, if we take the example of a factory, maybe someone who owns a particular process sector within that factory wants to look at and interact with this data in a certain way, whereas I, overseeing all the processes, may want to look at it across everything. I think what we’re seeing is, number one, people are starting to use a common set of visualizations, or standard visualizations, within their organizations, but they’re also tailored to the role they’re in. If I’m a process engineer, maybe that’s a very different view from someone who’s looking at it from a supply chain perspective, and I think that’s great, because it has to be tailored and it has to be looked at through the lens that you need to get your job done.
Dez Blanchfield: I guess the speed at which you can actually make smart and accurate decisions increases rapidly too, doesn’t it? Because if you’ve got real-time analytics and real-time dashboards, if you’ve got the Statistica tools at your fingertips, you don’t have to run across the floor to go and ask somebody about something or chase it down in hard copy. You can collaborate, interact and actually make decisions on the fly and get that outcome immediately. Which I think some companies really haven’t grasped yet, but when they do, it’s going to be this eureka moment: yes, we can still stay in our cubicles and work at home, but we can interact and collaborate, and the decisions we make as we collaborate turn into outcomes instantly. Look, I think it was fantastic to hear what you’ve had to say so far and I’m really looking forward to seeing where it goes. I know we’ve got a lot of questions in the Q&A, so I’m going to hand back to Rebecca to run through some of those so we can get to them as quickly as we can. Thank you very much.
Rebecca Jozwiak: Thanks, Dez, and yes, Dave, we do have quite a few questions from the audience. And thanks, Dez and Robin, for your insights, too. I know this particular participant had to drop off right at the top of the hour, but she’s asking, do you see information systems departments putting more priority on sophisticated data controls rather than being comfortable providing tools to the knowledge workers? I mean, is that – go ahead.
David Sweenor: Yeah, I think it depends on the organization. A bank or an insurance company maybe has different priorities and ways of doing things than a marketing organization. I guess I would have to say it just depends on the industry and function you’re looking at. Different industries have different focuses and emphases.
Rebecca Jozwiak: Okay good, that makes sense. And then another attendee wanted to know, what is the engine behind Statistica? Is it C++ or your own stuff?
David Sweenor: Well, I don’t know if I can get that specific, in that this has been around for 30 years and it was developed before my time, but there’s a core library of analytic algorithms, Statistica algorithms, that run. And you saw here that we can also run R, we can run Python, we can burst to Azure, we can run on Spark and H2O, so I guess I would have to answer that question by saying it’s a variety of engines. Depending on which algorithm you pick, if it’s a Statistica one it runs like this; if you pick one on H2O and Spark, it uses that; so it’s a variety of them.
Rebecca Jozwiak: Okay, good. Another attendee asked specifically, pointing to this slide, how does the citizen data scientist know which reusable templates to use? And I guess I’ll make a broader question out of that: what are you seeing when line-of-business users or business analysts come in and want to use these tools? How easy is it for them to pick it up and get running?
David Sweenor: I guess I would answer that by saying that if you’re familiar with Windows, this is a Windows-based platform, so I cut off the top of these screenshots, but it’s got the Windows ribbon. How do they know which workflow to use? It looks like Windows Explorer, so there’s a tree structure and you can configure it and set it up however your organization wants. You would just have these folders and you’d put the reusable templates within them. There’s probably a nomenclature your company could adopt, say, here’s the “calculate risk profile,” here’s the “get data from these sources,” and you name them whatever you want. It’s just a folder tree, and you drag the nodes right out onto your canvas. So, pretty easy.
Rebecca Jozwiak: Okay, good. Maybe a demo next time. Another attendee brings up what you and Robin and Dez were talking about as far as inaccuracies, especially in a spreadsheet, and garbage in/garbage out, which he sees as being even more critical when it comes to analytics. He mentions that misuse of data can really lead to some unfortunate decisions, and he’s wondering what your views are on the development of more failsafe algorithms to guard against what he calls “overzealous” use of analytics. You know, somebody comes in, they get really excited, they want to do these advanced analytics, they want to run these advanced algorithms, but maybe they’re not quite sure what they’re doing. So what do you do to safeguard against that?
David Sweenor: Yeah, I’ll answer this as best I can, but I think everything comes down to people, process and technology. We have technology that helps enable people and helps enable whatever process you want to put in place within your organization. In the example of sending a coupon to somebody, maybe that’s not as critical, and if it’s digital there’s really no cost, so maybe there’s one level of security controls and maybe we don’t care. If I’m predicting surgical site infections, I want to be a little more careful about that. Or if I’m predicting drug quality and safety and things like that, I want to be a little more careful about that too. You’re right, garbage in/garbage out, so what we try to do is provide a platform that allows you to tailor it to whatever process your organization wants to adopt.
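As a rough illustration of that tiering, here is a hypothetical sketch in Python; the criticality labels and sign-off counts are invented, not anything from the Statistica platform, but it shows one way the number of required validations might scale with how critical a use case is.

# Hypothetical sketch only: labels and thresholds are invented for illustration.
REQUIRED_SIGNOFFS = {
    "low": 0,      # e.g. sending a digital coupon
    "medium": 1,
    "high": 3,     # e.g. predicting surgical site infections or drug safety
}

def ready_for_production(criticality: str, signoffs_obtained: int) -> bool:
    # A model only goes live once enough parties have validated it.
    return signoffs_obtained >= REQUIRED_SIGNOFFS[criticality]

print(ready_for_production("low", 0))    # True: little review needed
print(ready_for_production("high", 1))   # False: needs more validation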
Rebecca Jozwiak: Okay, good. I do have a few more questions, but I know we’ve gone quite a bit past the hour and I just want to tell our presenters, that was awesome. We want to thank Dave Sweenor from Dell Statistica so much, and of course Dr. Robin Bloor and Dez Blanchfield, thank you for being the analysts today. We are going to have another webcast next month with Dell Statistica. I know Dave kind of hinted at the topic. It will be about analytics at the edge, another fascinating topic, and I know that some very compelling use cases are going to be discussed on that webcast. If you liked what you saw today, come back for more next month. And with that, folks, I bid you farewell. Thanks so much. Bye bye.