Eric Kavanagh: Okay, ladies and gentlemen. Welcome back once again. It's Wednesday at 4:00 EST. That means it's time for Hot Technologies. Yes, indeed. My name is Eric Kavanagh, I will be your host.

For today's topic, it's an oldie but a goodie. It is getting better every day because it's shaping our data management world, “Data Modeling in an Agile Environment.” There's a slide about yours truly, hit me up on Twitter @eric_kavanagh. We should really put it on that slide. I'll have to get on that.

So, yeah, it's hot. Data modeling has been around forever. It's really been at the heart and soul of the information management business, designing data models, trying to understand business models and align them to your data models. That's really what you're trying to do, right?

A data model represents the business in a fundamental way, so how are all these new data sources changing the game? We're going to find out about that. We're going to find out how you can stay on top of things in an agile way. And of course, that is the word of the year.

Robin Bloor's with us, our chief analyst, Dez Blanchfield calling in from Sydney, Australia and Ron Huizenga, Senior Product Manager from IDERA – longtime friend of mine, excellent speaker in this space, knows his stuff, so don't be shy, ask him the hard questions, folks, the hard ones. With that, I’m going to make Robin the presenter, and take it away.

Dr. Robin Bloor: Okay. Well thank you for that, Eric. I have to say about modeling that I think I was actually in the world of IT before it existed, in the sense that I remember in the insurance company that I worked for, we had a guy come in and give us all a kind of workshop on how to model data. So we're looking at about 30 years – is it 30 years? Maybe even longer than that, maybe 35 years ago. Modeling has actually been a part of the industry for a long, long time and of course it has got nothing to do with ladies on catwalks.

The thing that I wanted to say – because what we normally do is that Dez and I talk about different things – is that I just thought I'd give a general overview of modeling, but there is a reality to this that's now becoming apparent.

We have, you know, the big data reality: we have more data and more data sources, we've got data streams that have entered the equation in the last three or four years and are starting to become a bigger part of it, and there is a greater need to understand the data, along with an increase in the rate of change – more data being added and also more data structures being used.

It's a difficult world. Here's a picture of it, which is actually something we drew about three years ago, but basically, once you include streaming into the mix and you get this idea of data refinery, data hub, data lake or whatever, you see that there is data that's genuinely at rest, in the sense that it's not moving about much. And then there's the data in the streams, and you've got all of the transactional applications, plus nowadays you've got events, event dataflows that happen in applications and may need to be processed, and nowadays, with the lambda architectures everybody's talking about, these are genuinely having an impact on the whole field of data.

And nowadays we think in terms of there being a data layer. The data layer exists in a kind of virtual way, in the sense that a good piece of it could be in the cloud and it can be spread across data centers, it can exist on workstations. The data layer is, to some extent, everywhere and in that sense, there are processes everywhere that are attempting in one way or another to process the data and move the data about. But knowing what the data is when you're moving it about is also a big deal.

If we look at data modeling in the most general sense, at the bottom of this kind of stack you have files and databases. You have data elements, which have keys, element definitions, aliases, synonyms, specific physical formats and then we have this metadata layer.

The interesting thing about metadata is that metadata is entirely how data gets its meaning. If you don't actually have metadata, then at best you can guess the meaning of the data, but you're going to have an awful lot of difficulties. Metadata needs to be there, but meaning has structure. I don't want to go into the philosophy of meaning, but even in the way we deal with data, there is a lot of sophistication in human thought and human language which doesn't easily express itself in data. But even in terms of the data that we actually process in the world, metadata has meaning, and the structure of the metadata – one piece of data in relation to another, what that means when they're put together and what that means when they're joined with other data – demands that we model it. It's not good enough to just attach metadata tags to things; you actually have to record meaning per structure and the relationships between the structures.

Then at the top layer we have the business definitions, which is normally a layer that attempts to transfer meaning between the metadata – a form of data definition that accommodates the way that data is organized on the computer – and human meaning. So you have business terms, definitions, relationships, entity-level concepts that exist in that layer. And if we're going to have coherence between these layers, then we have to have data modeling. It's not really optional. The more you can actually do it in terms of automating it, the better. But because it's to do with meaning, it's really difficult to automate. It's easy enough to capture the metadata within a record and be able to get a series of meanings from it, but it doesn't tell you the structure of the records, or what the records mean, or the context of the record.
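To make the layering concrete, here is a minimal sketch – with hypothetical, insurance-flavored names and PostgreSQL-style comment syntax, none of which comes from the webcast itself – of how one data element might appear at the physical layer, the metadata layer and the business-definition layer.

```sql
-- Physical layer: the column, its key and its storage format.
CREATE TABLE policy (
    policy_no    CHAR(10)      NOT NULL,  -- key, fixed physical format
    inception_dt DATE          NOT NULL,
    premium_amt  DECIMAL(12,2),
    CONSTRAINT pk_policy PRIMARY KEY (policy_no)
);

-- Metadata layer: structural meaning recorded against the element
-- (COMMENT ON is PostgreSQL/Oracle-style; other platforms use extended properties).
COMMENT ON COLUMN policy.premium_amt IS
    'Annual gross premium in the policy currency; identified by policy_no';

-- Business-definition layer: a glossary entry mapping the physical element
-- to a business term, e.g. "Gross Written Premium" -> policy.premium_amt.
```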

So, this is what data modeling, in my opinion, is about. Points to note: the more complex the data universe becomes, the more you need to model it. In other words, it's a bit like we're adding not just more instances of things to the world, which would correspond to data records, but we're actually adding more meaning to the world by capturing data on more and more things. It's becoming a more and more complex field that we have to understand.

In theory there's a data universe and we need a view of it. In practice, the actual metadata is part of the data universe. So, it's not a simple situation. To begin with, modeling is top-down and bottom-up. You need to build in both directions and the reason for that is, data has meaning to the computer and the processes that have to process it, but it has meaning in its own right. So, you need a bottom-up meaning, which satisfies the software that needs to access the data, and you need the top-down meaning so that human beings can understand it. The building of metadata models is not and never can be a project; it's an ongoing activity – it should be an ongoing activity in every environment where data exists. Unfortunately, there are a lot of environments where that actually isn't the case and things spin out of control accordingly.

Going forward, modeling increases in importance as technology moves forward. That's my opinion. But if you look at the IoT, we can understand mobile more than we used to, although it's introduced new dimensions: the dimension of location with mobile. Once you get to the IoT, we're looking at extraordinary data problems that we never actually had to deal with before and we need, one way or another, to properly understand exactly what we've got, exactly how we can aggregate it, what we can do in terms of getting meaning from aggregation, and of course, what we can do with it when we've processed it.

I think that's me having said enough. I'm going to pass on to Dez Blanchfield, who'll say something else entirely.

Dez Blanchfield: Thank you. Always a tough act to follow, but this is a topic we agreed on and spoke about briefly in the preshow banter, and if you dialed in early, you'd probably have caught a whole bunch of great gems. One of the takeaways, and I don't want to steal the thunder of this particular one, but one of the takeaways from our preshow banter that I want to share, in case you didn't catch it, was just around the topic of the journey of data, and it struck me to actually write it down, thinking about the journey that data takes in a different context, around generational lifetime – years, months, weeks, days, hours, minutes, seconds – and the context the data is positioned within. Whether I'm a developer running code, or whether I'm a data specialist and I'm thinking about the structure and the format and the metadata around each of the elements, or the way that the systems and the business interact with it.

It's an interesting little takeaway just to note, but anyway, let me dive in. Data design, in particular, is a phrase that I use to talk about all things data and specifically the development of either applications or database infrastructure. I think data design is a term that just captures it all very well in my mind. These days when we talk about data design, we talk about modern agile data design, and my view is that it wasn't so long ago that developers and data experts worked alone; they were in their own silos and pieces of design went from one silo to another. But I'm very much of the view these days that not only has that changed, but it has to change; it's sort of a necessity. Application developers, anyone doing development that deals with data, and the designers who do the relevant design elements – schemas and fields and records, location, database systems and infrastructures, modeling and the whole management challenge around that – it's a team sport now, and hence my picture of a bunch of people jumping out of an airplane acting as a team, to play out that visually interesting image of people falling through the sky.

So, what's happened to bring this about? Well, there's an article from 1986 written by a couple of gentlemen whose names I tried desperately to do justice to, Hirotaka Takeuchi and Ikujiro Nonaka – I think that's how they're pronounced – describing the idea of moving the scrum downfield. They introduced this idea of a methodology for winning a game of rugby, going from this scrum activity where everyone gets around in one place and two teams essentially lock heads in something called a scrum to try and get control of the ball and play it down the field to get to the try line, touch the ground with the ball and score a point, called a try, and you repeat this process and you get more points for the team.

This article was published in 1986 in the Harvard Business Review, and curiously it actually got a lot of attention. It got a lot of attention because it introduced amazing new concepts, and here is a screenshot of the front of it. So they took this concept of scrum out of the game of rugby and brought it into business, particularly into the game of design and project delivery, specifically project delivery.

What scrum did was give us a new methodology in comparison to the likes of PRINCE2 or PMBOK that we had previously used in what we called the waterfall methodology – you know, do this thing and this thing and this thing and follow them in sequence and connect all the dots, where each part depends on what came before, or don't do part two until you've done part one because it depended on part one. What it gave us is a new methodology to be a little bit more agile, which is where the term comes from, about how we deliver things, and specifically around design and development of grassroots project delivery.

Some of the key tenets – just so I get on with this – the key tenets of scrum. It introduced the idea of built-in instability: effectively, if you think about the theory of chaos, the world exists in a state of chaos, but the planet formed, which is interesting. So built-in instability, the ability to bounce around a little bit and still actually make things work; self-organizing project teams; overlapping development phases; different types of learning and control through the journey of the project delivery; and the organizational transfer of learning. So how do we take information from one part of the business and transfer it to another – from the people who have an idea but don't develop code or don't develop databases and infrastructures, to the people who do? And specifically time-boxed outcomes. In other words, let's do this for a period of time, either a day as in 24 hours, or a week or a couple of weeks, and see what we can do and then step back and look at it.

And so, if you pardon the pun, this is really a new game in project delivery, and the three core components to it will make sense as we get a little further along here. There are the product owners: all these people who have the idea and a need to get something done, and the stories that surround them. There are the developers, who operate in the agile model of getting their stories and, through daily standups using the scrum methodology, discussing them and understanding what they need to do, and then just going and getting on and doing it. Then there are the people we've heard of as scrum masters, who oversee this whole thing and understand the methodology well enough to drive it. We've all seen these images that I've got on the right-hand side here of walls and whiteboards full of Post-It notes, which served as Kanban walls. If you don't know what Kanban is, I invite you to Google the Kanban method and why it changed the way we move things from one side of a wall to the other, literally, but also in a project.

At a glance, the scrum workflow does this: it takes a list of things that an organization wants to do and runs them through a series of things we call sprints, broken up into periods anywhere from 24 hours to a month long, and we get this incremental series of outputs. It's a significant change to the way projects were delivered up to that stage, because part of that flow was like the U.S. Army, which had a great part in developing something called PMBOK – the idea that you don't take the tank into the field until you put bullets into the thing, because if a tank in the field doesn't have bullets, it's useless. So therefore part one is put bullets in the tank, part two is put the tank in the field. Unfortunately, though, what happened is that developers in the development world somehow got hold of this agile methodology and ran with it flat out – if you pardon the pun, at a sprint.

Invariably what happened is, when we think of agile we usually think of developers and not databases and anything to do with the world of databases. It was an unfortunate outcome because the reality is that agile is not limited to developers. In fact, the term agile in my view is often wrongly associated exclusively with software developers and not database designers and architects. Invariably the same challenges that you face in software and application development are faced in all things to do with the design and development and operation and maintenance of data infrastructure and particularly databases. The actors in this particular data cast include the likes of data architects, modelers, the administrators, managers of the database infrastructures and the actual databases themselves, all the way through to business and systems analysts and architects – people who sit and think about how the systems and the business operate and how we get data to flow through them.

It's a topic that I regularly bring up because it's a constant frustration of mine, in that I'm very much of the view that data specialists must – not should – must now be intimately involved in every component of project delivery, really, particularly development. If we're not, then we're really not giving ourselves the best chance of a good outcome. We often have to circle back and have another think about these things, because there's a scenario where we get to an application being built and we discover the developers aren't always data experts. Working with databases requires very specialized skills, particularly around data, and that builds with experience. You don't just instantly become a database guru or data knowledge expert overnight; this is often something that comes from a lifetime of experience, and certainly with the likes of Dr. Robin Bloor on the call today, who quite literally wrote the book.

In many cases – and it's unfortunate but it's a reality – there are two parts to this coin: software developers have a blind spot of their own when it comes to database specialists, and the skills you need in database design and modeling, with model development being fundamental to the engineering of how data comes in, the journey it takes through the organization, and what it should or shouldn't look like as it's ingested and understood, are usually not a native skill set for software developers. And some of the common challenges we face, just to put that in context, include the likes of just basic creation and maintenance and management of the core database design itself; documenting the data and the database infrastructure and then reusing those data assets; schema designs, schema generation, administration and maintenance of schemas and their use; sharing the knowledge around why a schema is designed in a particular way and the strengths and weaknesses that come with that over time, because data changes over time; data modeling and the types of models we apply to the systems and the data we flow through them; database code generation, and it goes on to integration and then modeling data around that; and then access control and security around the data, and the integrity of the data – as we're moving the data around, are we retaining its integrity, is there enough metadata around it, should sales see all the records in the table or should they only see the address, the first name, the last name that sends you stuff in the post? And then of course the greatest challenge of all is modeling database platforms, which is entirely a different conversation in itself.

I'm very much of the view, with all of this in mind, that to make any of this nirvana possible it's absolutely critical that both the data specialists and developers have the appropriate tools and that those tools be capable of team-focused project delivery, design, development and ongoing operational maintenance. You know, things like collaborating across projects between data experts and software developers, a single point of truth, or single source of truth, for all things around documentation of the databases themselves, the data, the schemas, where the records come from, the owners of those records. I think in this day and age, if we're going to get to this nirvana of data being king, it's absolutely critical that the right tools be in place, because the challenge is too big now for us to do it manually, and as people move in and out of an organization, it's too easy for us not to follow the same good process or methodology that one person might have set up, and to not transfer those skills and capabilities going forward.

With that in mind, I’m going to head over to our good friend at IDERA and hear about that tool and how it addresses these very things.

Ron Huizenga: Thank you so much and thanks to both Robin and Dez for really setting the stage well, and you're going to see a little bit of overlap in a couple of things that I've talked about. But they've really set a very solid foundation for some of the concepts that I'm going to be talking about from a data modeling perspective. And a lot of the things that they have said echo my own experience when I was a consultant working in data modeling and data architecture, along with teams – both waterfall in the early days, and evolving into more modern practices with projects where we were using agile methodologies to deliver solutions.

So what I'm going to talk about today is based on those experiences, as well as a view of the tools and some of the capabilities in the tools that we utilize to help us along that journey. What I'm going to cover very briefly is – I'm not going to go into scrum in a lot of detail; we just had a really good overview of what that is – I'm going to talk about it in terms of, what is a data model and what does it really mean to us? And how do we enable the concept of the agile data modeler in our organizations, in terms of, how do we engage the data modelers, what's the participation of modelers and architects during the sprint, what are the types of activities they should be engaged in, and, as a backdrop to that, what are a few of the important modeling tool capabilities that we utilize to really help make that job easier? Then I'm going to go into a bit of a wrap-up and just talk a little bit about some of the business values and benefits of having a data modeler involved – or, the way I'm actually going to tell the story, the problems of not having a data modeler fully engaged in the projects – and I'll show you that based on experience and a defect chart, a before-and-after image of an actual project that I was involved with many years ago. And then we'll summarize a few more points and then have questions and answers in addition to that.

Very briefly, ER Studio is a very powerful suite that has a lot of different components to it. There's the Data Architect, which is where the data modelers and architects spend most of their time doing their data modeling. There are other components as well that we're not going to talk about at all today, such as the Business Architect, where we do process modeling, and the Software Architect, for some of the UML modeling. Then there's the Repository, where we check in and share the models, allow the teams to collaborate on those, and publish them out to the team server so that multiple stakeholder audiences that are engaged in a project can actually see the artifacts that we're creating from a data perspective, as well as the other things that we're doing in the project delivery itself.

What I'm going to be focusing on today is a few things that we're going to see out of Data Architect, and why it's really important that we have the collaboration that the Repository-based aspects of that give us – particularly when we start talking about concepts like change management, which is imperative to not only agile development projects but any type of development going forward.

So let's talk about the Agile Data Modeler for a moment. As we've, kind of, alluded to earlier on in the presentation, it's imperative that we have data modelers and/or architects fully engaged in the agile development processes. Now, what's happened historically is, yes, we have really thought about agile from a development perspective, and there are a couple of things that have gone on that really have caused that to come about. Part of it was due to just the nature of the way the development itself unfolded. As agile development started and we started with this concept of self-organizing teams, if you drank the Kool-Aid a little bit too pure and you were on the extreme programming side of things, there was a very literal interpretation of things like the self-organizing teams, which a lot of people interpreted to mean all we need is a group of developers that can build an entire solution. Whether that meant developing the code, the databases or the datastores behind it, everything was relegated to the developers. But what happens with that is you lose out on the special abilities that people have. I've found that the strongest teams are those that are composed of people from different backgrounds, such as a combination of strong software developers, data architects, data modelers, business analysts and business stakeholders, all collaborating together to drive out an end solution.

What I'm also going to do today is talk about this in the context of a development project where we're developing an application that obviously is going to have the data component associated with it as well. We do need to take a step backwards before we do that, though, because we need to realize that there are very few greenfield development projects out there where we have total focus on the creation and consumption of data that's limited only to that development project itself. We need to take a step backwards and look at the overall organizational point of view, from a data perspective and a process perspective. Because what we find out is the information that we're utilizing may already exist somewhere in the organization. As the modelers and architects, we bring that to light so we know where to source that information from in the projects themselves. We also know the data structures that are involved because we have design patterns, just like developers have design patterns for their code. And we also need to take that overall organizational perspective. We can't just look at data in the context of the application that we're building. We need to model the data and make sure that we document it, because it lives long beyond the applications themselves. Those applications come and go, but we need to be able to look at the data and make sure it's robust and well-structured, not only for [inaudible] application, but also for decision support activities, BI reporting and integration with other applications, internal and external to our organizations as well. So we need to look at that whole big picture of the data and what the life cycle of that data is, and understand the journey of pieces of information throughout the organization from cradle to grave.

Now, back to the actual teams themselves and how we actually need to work: the waterfall methodology was perceived as being too slow to deliver results because, as pointed out with the tank example, it was one step after another and it often took too long to deliver a workable end result. What we need now is an iterative work style where we're incrementally developing components and elaborating them through time, where we're producing usable code, or usable artifacts I'm going to say, for every sprint. The important thing is collaboration amongst the technical stakeholders in the team and the business stakeholders, as we're collaborating to drive out those user stories into an implementable vision of code and the data that supports that code as well. And the Agile Data Modeler itself will often find that we don't have enough modelers in organizations, so one data modeler or architect may simultaneously be supporting multiple teams.

And the other aspect of that is, even if we do have multiple modelers, we need to make sure that we have a tool set that we're utilizing that allows collaboration on multiple projects that are in flight at the same time, and sharing of those data artifacts with check-in and check-out capabilities. I'm going to go over this very quickly because we already covered it in the previous section. The real premise of agile is that you're basing things off of a backlog of stories or requirements. Within the iterations we're collaborating as a group. Typically a two-week or a one-month sprint, depending on the organization, is very common. And also daily review and standup meetings, so that we're eliminating blockers and making sure that we're moving all aspects forward without getting halted in different areas as we go through. And in those sprints we want to make sure that we're producing usable deliverables as a portion of every sprint.

Just a slightly different take on that, expanding it further: scrum is the methodology I'm going to talk about more specifically here, and we've just basically augmented that previous picture with a few other facets. Typically there's a product backlog and then there's a sprint backlog. So we have an overall backlog that, at the beginning of every sprint iteration, we pare down to say, “What are we going to be building out this sprint?” and that's done in a sprint planning meeting. Then we break up the tasks that are associated with that and we execute in those one- to four-week sprints with those daily reviews. As we're doing that, we're tracking our progress through burn-up charts and burn-down charts to track basically what's left to build versus what we're building, to establish things like what's our development velocity, are we going to make our schedule, all those types of things. All those are elaborated continuously during the sprint, rather than going a few months down the road and finding out that you're going to come up short and you need to extend the project schedule. And very importantly, as part of it, with the entire team, there's a sprint review at the end and a sprint retrospective, so before you kick off the next iteration you're reviewing what you did and you're looking for ways that you can improve on the next time through.

In terms of deliverables, this is basically a slide that summarizes the typical types of things that go on in sprints. And it's very development-centric, so a lot of the things that we see here, such as functional designs and use cases, design, code and test activities – when we look at these boxes here, and I'm not going to go through them in any level of detail, they're very development-oriented. And buried underneath here is the fact that we also need to have those data deliverables that go hand in hand with this to support this effort. So every time we see things like the backlogs, the requirements and user stories, as we're going through we need to look at what are the development pieces we have to do, what are the analysis pieces we need to do, how about the data design or the data model, what about things like the business glossaries so we can associate business meaning to all of the artifacts that we're producing? Because we need to be producing those usable deliverables in every sprint.

Some people will say we need to produce usable code at the end of every sprint. That's not necessarily the case; that is a purist development perspective, but quite often – especially at the beginning – we may have something like sprint zero, where we are focused purely on standing things up, doing things like getting our test strategies in place, a high-level design to get it started before we start to fill out the details, and making sure that we have a clean set of starting stories or requirements before we start engaging other audiences and then building forward as a team as we go forward. There's always a little bit of prep time, so quite often we will have a sprint zero or even a sprint zero and one. It might be a bit of a startup phase before we hit full flight in delivering the solution.

Let's talk about data models in this context very briefly. When people think of data models, they often think of a data model as being a picture of how the different pieces of information tie together – that is just the tip of the iceberg. To fully embody the spirit of how you really want to approach data modeling – whether it's in agile development or other things – you need to realize that the data model, if done correctly, becomes your full specification for what that data means in the organization and how it's deployed in the back-end databases. When I say databases, I mean not only the relational databases that we may be using but, in today's architectures, the big data or NoSQL platforms, as I prefer to call them – also those big data stores – because we may be combining a lot of different data stores in terms of consuming information and bringing it into our solutions, as well as how we persist or save that information out of our solutions.

We may be working with multiple databases or data sources simultaneously in the context of a given application. What is very important is we want to be able to have a full specification: a logical specification of what this means from an organizational perspective, and the physical constructs in terms of how we actually define the data, the relationships between it in your databases, your referential integrity constraints, check constraints, all of those validation pieces that you typically think about. The descriptive metadata is extremely important. How do you know how to utilize the data in your applications unless you can define it and know what it means, or know where it came from, to make sure you are consuming the correct data in those applications? That means making sure that we have correct naming conventions, full definitions – which means a full data dictionary for not only the tables but the columns that comprise those tables – and detailed deployment notes about how we utilize it, because we need to build up this knowledge base. Even when this application is done, this information will be used for other initiatives, so we need to make sure that we have all that documented for future implementations.
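As a hedged illustration of what "the model as full specification" might generate – all names are hypothetical and the comment syntax is PostgreSQL-style, not something taken from ER Studio itself – a single table carrying the naming conventions, keys, referential integrity, value restrictions and data dictionary entries Ron describes could look like this:

```sql
-- Hypothetical parent table, needed so the foreign key below has a target.
CREATE TABLE customer (
    customer_id INTEGER      NOT NULL,
    customer_nm VARCHAR(100) NOT NULL,
    CONSTRAINT pk_customer PRIMARY KEY (customer_id)
);

CREATE TABLE sales_order (
    sales_order_id  INTEGER NOT NULL,
    customer_id     INTEGER NOT NULL,
    order_status_cd CHAR(1) NOT NULL,
    order_dt        DATE    NOT NULL,
    CONSTRAINT pk_sales_order PRIMARY KEY (sales_order_id),
    CONSTRAINT fk_sales_order_customer            -- relationship carries a business rule
        FOREIGN KEY (customer_id) REFERENCES customer (customer_id),
    CONSTRAINT ck_sales_order_status              -- value restriction
        CHECK (order_status_cd IN ('N', 'A', 'S', 'C'))
);

-- Data dictionary entries kept with the structure, not in a separate document.
COMMENT ON TABLE sales_order IS
    'One row per customer order; system of record for order lifecycle reporting';
COMMENT ON COLUMN sales_order.order_status_cd IS
    'N = new, A = approved, S = shipped, C = cancelled';
```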

Again, we get down to things like data types, keys, indexes; the data model itself embodies a lot of the business rules that come into play. The relationships are not just constraints between different tables; they often help us to describe what the true business rules are around how that data behaves and how it works together as a cohesive unit. And of course, value restrictions are very important. Now, one of the things we are constantly dealing with, and it's becoming more and more prevalent, is things like data governance. So from a data governance perspective, we also need to be looking at what we are defining here. We want to define things like security classifications. What types of data are we dealing with? What's going to be considered master data management? What are the transactional stores that we are creating? What reference data are we utilizing in these applications? We need to make sure that is properly captured in our models. And also data quality considerations: there are certain pieces of information that are more important to an organization than others.

I've been involved in projects where we were replacing over a dozen legacy systems with new business processes and designing new applications and data stores to replace them. We needed to know where the information was coming from. For the most important pieces of information, from a business perspective – if you look at this particular data model slide that I've got here, you will see the bottom boxes in these particular entities, which are just a small subset – I've actually been able to capture the business value, whether high, medium or low, for these different constructs within the organization. And I've also captured things like the master data classes: whether they are master tables, whether they are reference, whether they are transactional. So we can extend our metadata in our models to give us a lot of other characteristics outside of the data itself, which really helped us with other initiatives outside the original projects, and carry it forward. Now, that was a lot in one slide; I'm going to go through the rest of these fairly quickly.
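ER Studio has its own mechanisms for extending model metadata; purely as a hedged sketch of the kind of classification Ron describes – with hypothetical names and values, not the tool's actual mechanism – the same information could even be persisted as data alongside the schema so that downstream governance processes can query it:

```sql
CREATE TABLE model_entity_classification (
    entity_nm         VARCHAR(128) NOT NULL,   -- e.g. 'customer', 'sales_order'
    business_value    VARCHAR(6)   NOT NULL,   -- captured per entity on the slide
    master_data_class VARCHAR(12)  NOT NULL,   -- master, reference or transactional
    security_class    VARCHAR(12),             -- governance classification
    CONSTRAINT pk_model_entity_classification PRIMARY KEY (entity_nm),
    CONSTRAINT ck_mec_business_value
        CHECK (business_value IN ('HIGH', 'MEDIUM', 'LOW')),
    CONSTRAINT ck_mec_master_class
        CHECK (master_data_class IN ('MASTER', 'REFERENCE', 'TRANSACTION'))
);

INSERT INTO model_entity_classification VALUES
    ('customer',    'HIGH',   'MASTER',      'PII'),
    ('sales_order', 'MEDIUM', 'TRANSACTION', 'INTERNAL');
```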

I am now going to talk very quickly about what a data modeler does as we are going through these different sprints. First of all, they are a full participant in the sprint planning sessions, where we are taking the user stories, committing to what we are going to deliver in that sprint, and figuring out how we are going to structure it and deliver it. What I am also doing as a data modeler is I know I am going to be working in separate areas with different developers or with different people. So one of the important characteristics that we can have is, when we are doing a data model, we can divide that data model into different views, whether you call them subject areas or sub-models – sub-models is our terminology. So as we are building up the model we are also showing it in these different sub-model perspectives, so the different audiences only see what is relevant to them and they can concentrate on what they are developing and putting forward. So I might have somebody working on a scheduling part of an application, I might have somebody else working on order entry, where we are doing all these things in a single sprint, but I can give them the viewpoints through those sub-models that only apply to the area that they are working on. And then those roll up to the overall model and the whole structure of sub-models to give different audiences views of what they need to see.

Fundamentals from a data modeling perspective: we always want to have a baseline that we can go back to, because one of the things we need to be able to do, whether it is at the end of a sprint or at the end of several sprints, is know where we started and always have a baseline to know what was the delta, or the difference, of what we produced in a given sprint. We also need to make sure that we can have a quick turnaround. If you come into it as a data modeler but in the traditional gatekeeper role of saying “No, no, you cannot do that, we have to do all this stuff first,” you are going to be excluded from the team when you really need to be an active participant in all those agile development teams. That means some things fall off the wagon during a given sprint and you pick them up in later sprints.

As an example, you may focus on the data structures just to get the development going for, say, that order entry piece that I was talking about. In a later sprint, you may come back and fill in things like some of the documentation for the data dictionary around some of those artifacts that you have created. You are not going to complete that definition all in one sprint; you are going to keep going at your deliverables incrementally, because there will be times that you can fill in that information working with business analysts when the developers are busy building the applications and the persistence around those data stores. You want to facilitate and not be the bottleneck. There are different ways that we work with developers. For some things we have design patterns, so we are a full participant up front: we may have a design pattern where we will say we will put it into the model, we will push it out to the developers' sandbox databases and then they can start to work with it and request changes to that.
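One hypothetical example of the kind of design pattern a modeler might put into the model and push out to a sandbox up front – here, a standard surrogate key plus audit columns applied uniformly, with the business columns still to be elaborated in later sprints (the names are illustrative only, not from the webcast):

```sql
CREATE TABLE order_entry (
    order_entry_id INTEGER     NOT NULL,   -- surrogate key pattern
    -- ...business columns to be elaborated in later sprints...
    created_by     VARCHAR(64) NOT NULL,   -- audit-column pattern applied
    created_dttm   TIMESTAMP   NOT NULL,   --   uniformly across tables
    updated_by     VARCHAR(64),
    updated_dttm   TIMESTAMP,
    CONSTRAINT pk_order_entry PRIMARY KEY (order_entry_id)
);
```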

There may be other areas that developers have been working on; they have got something they are working on and they are prototyping some things, so they try some things out in their own development environment. We take that database that they have been working with, bring it into our modeling tool, compare it to the models that we have, and then resolve and push the changes back out to them so they can refactor their code and follow the proper data structures that we need. Because they may have created some things that we already had elsewhere, we make sure they are working with the right data sources. We just keep iterating all the way through this in our sprints so that we get the full data deliverables, full documentation and the definition of all those data structures that we are producing.

The most successful agile projects that I have been involved with, in terms of very good deliveries, are ones where we had a philosophy of modeling all changes to the full physical database specification. In essence, the data model becomes the deployed databases that you are working with for anything new that we are creating, and it has full references to the other data stores if we are consuming from other outside databases. As part of that, we are producing incremental scripts versus doing a full generation every time. And we are utilizing our design patterns to give us that quick lift in terms of getting things going in sprints with the different development teams that we are working with.

In the sprint activities as well, there is again that baseline for compare/merge, so let us take the idea of modeling each change. Every time we do a change, what we want to do is model the change, and what is very important – what has been missing from data modeling until recently, in fact, until we reintroduced it – is the ability to associate the modeling tasks and your deliverables with the user stories and tasks that actually cause those changes to occur. We want to be able to check in our model changes, the same way developers check in their code, referencing those user stories that we have, so we know why we made changes in the first place; that is something we do. When we do that, we generate our incremental DDL scripts and post them so that they can be picked up with the other development deliverables and checked into our build solution. Again, we may have one model or be working with multiple teams. And as I have talked about, some things are originated by the data modeler, other things are originated by the developers, and we meet in the middle to come up with the overall best design, push it forward and make sure it is properly designed in our overall data structures. We have to maintain the discipline of ensuring that we have all of the proper constructs in our data model as we go forward, including things like null and not null values, referential constraints, basically check constraints, all of those things we would typically think about.
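As a minimal, hypothetical sketch of what such an incremental DDL script might look like – the story identifier, table and column names are invented for illustration, the table is assumed to already exist, and the syntax shown is PostgreSQL-style:

```sql
-- User story US-1234 (hypothetical): "Capture the requested delivery date on an order."
ALTER TABLE sales_order
    ADD COLUMN requested_delivery_dt DATE;        -- nullable until existing rows are backfilled

ALTER TABLE sales_order
    ADD CONSTRAINT ck_sales_order_delivery_dt     -- keep the business rule in the database
        CHECK (requested_delivery_dt IS NULL OR requested_delivery_dt >= order_dt);
```

Checked in against the same story as the application code, a script like this explains itself months later: the change record, the DDL and the reason for the change all point back to the one user story.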

Let us talk now about just a few screenshots of some of the tools that help us do this. What I think is important is having that collaborative repository, so what we can do as data modelers – and this is a snippet of part of a data model in the background – is, when we are working on things, make sure that we can work on just the objects that we need to be able to change, make the modifications, and generate our DDL scripts for the changes that we have made as we check things back in. So what we can do, in ER Studio as an example, is check out objects or groups of objects to work on; we don't have to check out a whole model or sub-model, we can check out just those things that are of interest to us. What we want to do after that, at either check-out or check-in time – we do it both ways because different development teams work in different ways – is make sure that we associate that with the user story or task that is driving the requirements for this, and that will be the same user story or task that the developers will be developing and checking their code in for.

So here is a very quick snippet of a couple of screens of one of our change management centers. What this does – I am not going to go through it in great detail here – but what you are seeing is the user story or task, and indented underneath each one of those you are seeing the actual change records. We create an automated change record when we do the check-in and check-out, and we can put more description on that change record as well. It is associated with the task, and we can have multiple changes per task, like you would expect. And when we go into that change record we can look at it and, more importantly, see what we actually changed. For this particular one, the highlighted story there, I had one type of change that was made, and when I looked at the actual change record itself, it identified the individual pieces in the model that have changed. I changed a couple of attributes here and resequenced them, and it brought along for the ride the views that needed to be changed, that were dependent on those, as well, so they would be generated in the incremental DDL. It is not only modeling the base objects; a high-powered modeling tool like this also detects the changes that have to be rippled through to the dependent objects in the database or the data model as well.
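A hedged sketch of the kind of incremental DDL that could come out of such a change record when a column change ripples into a dependent view – the object names are hypothetical, the table is assumed to exist already, and the ALTER COLUMN syntax shown is PostgreSQL-style (other platforms differ):

```sql
-- The dependent view is dropped and re-created around the column change.
DROP VIEW IF EXISTS v_open_orders;

ALTER TABLE sales_order
    ALTER COLUMN order_status_cd TYPE CHAR(2);    -- the attribute change itself

CREATE VIEW v_open_orders AS                      -- dependent object brought along for the ride
SELECT sales_order_id,
       customer_id,
       order_status_cd,
       order_dt
  FROM sales_order
 WHERE order_status_cd NOT IN ('S', 'C');
```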

If we are working with developers, and we do this in a couple of different situations – say they are doing something in their sandbox and we want to compare and see where the differences are – we use the compare/merge capabilities, with our side and theirs shown on the left and the right. We can say, “Here is our model on the left side, here is their database on the right side, show me the differences.” We can then pick and choose how we resolve those differences, whether we push things into the database or, if there are some things they have in the database, we bring those back into the model. We can go bidirectional, so we can go in both directions, simultaneously updating both source and target, and then produce the incremental DDL scripts to deploy those changes out to the database environment itself, which is extremely important. What we can also do is use this compare and merge capability at any given time: if we are taking snapshots on the way through, we can always compare the start of one sprint to the start or end of another sprint, so we can see the full incremental change of what was done in a given development sprint or over a series of sprints.

This is a very quick example of an alter script; any of you that have been working with databases will have seen this type of thing. This is what we can push out of the tool as an alter script so that we are making sure that we retain things here. What I pulled out of here, just to reduce clutter, is that what we also do with these alter scripts is assume there is data in those tables as well, so we will also generate the DML, which will pull the information out of the temporary tables and push it back into the new data structures, so we are looking at not only the structures but the data that we may already have contained in those structures as well.
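A simplified, hypothetical sketch of that data-preserving pattern – real generated scripts are longer, dialect-specific and handle constraints and dependent objects, all of which are omitted here:

```sql
-- Park the existing rows in a temporary structure.
CREATE TABLE tmp_sales_order AS
SELECT * FROM sales_order;

DROP TABLE sales_order;

-- Re-create the table with the new structure.
CREATE TABLE sales_order (
    sales_order_id        INTEGER NOT NULL,
    customer_id           INTEGER NOT NULL,
    order_status_cd       CHAR(2) NOT NULL,   -- widened column
    order_dt              DATE    NOT NULL,
    requested_delivery_dt DATE,               -- new column
    CONSTRAINT pk_sales_order PRIMARY KEY (sales_order_id)
);

-- DML generated alongside the DDL: push the parked data back into the new structure.
INSERT INTO sales_order (sales_order_id, customer_id, order_status_cd, order_dt)
SELECT sales_order_id, customer_id, order_status_cd, order_dt
  FROM tmp_sales_order;

DROP TABLE tmp_sales_order;
```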

I am going to talk very quickly about automated build systems, because when we are doing an agile project, quite often we are working with automated build systems where we need to check in the different deliverables together to make sure that we don't break our builds. What that means is we synchronize the deliverables: those change scripts that I spoke about, the DDL scripts, need to be checked in, and the corresponding application code needs to be checked in at the same time. And a lot of development now, of course, is not being done with direct SQL against the databases and that type of thing; quite often we are using persistence frameworks or building data services. We need to make sure that the changes for those frameworks or services are checked in at exactly the same time. They go into an automated build system in some organizations, and if the build breaks, in an agile methodology, it is all hands on deck fixing that build before we move forward, so that we know we have a working solution before we go further. On one of the projects that I was involved with, we took this to an extreme: we actually had red flashing lights, just like the top of police cars, attached to a number of the computers in our area where we were colocated with the business users. And if the build broke, those red flashing lights started to go off and we knew it was all hands on deck: fix the build and then proceed with what we were doing.

I want to talk about something else, and this is a capability unique to ER Studio; it really helps when we are trying to build these artifacts for developers around persistence boundaries. We have a concept called business data objects, and what that allows us to do – if you look at this very simplistic data model as an example – is encapsulate entities or groups of entities at the persistence boundaries. Where we as data modelers may think of something like a purchase order header, the order lines and other detail tables that would tie into it in the way we build it out, our data services developers need to know how things persist to those different data structures. Our developers are thinking of things like the purchase order as an object overall, and what their contract is for how they create those particular objects. We can expose that technical detail so that the people building the data services can see what is underneath it, and we can shield the other audiences from the complexities so they just see the different higher-level objects, which also works very well for communicating with business analysts and business stakeholders when we are talking about the interaction of different business concepts as well.
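The business data object itself is a modeling construct in ER Studio; as a hedged, hypothetical illustration of the underlying idea, the tables inside the boundary and one possible "shielded" presentation of the object might look like this (all names invented):

```sql
-- Inside the persistence boundary: header and line detail, modeled explicitly.
CREATE TABLE purchase_order (
    purchase_order_id INTEGER NOT NULL,
    supplier_id       INTEGER NOT NULL,
    order_dt          DATE    NOT NULL,
    CONSTRAINT pk_purchase_order PRIMARY KEY (purchase_order_id)
);

CREATE TABLE purchase_order_line (
    purchase_order_id INTEGER       NOT NULL,
    line_no           INTEGER       NOT NULL,
    item_cd           VARCHAR(20)   NOT NULL,
    order_qty         DECIMAL(12,3) NOT NULL,
    CONSTRAINT pk_purchase_order_line PRIMARY KEY (purchase_order_id, line_no),
    CONSTRAINT fk_pol_purchase_order
        FOREIGN KEY (purchase_order_id) REFERENCES purchase_order (purchase_order_id)
);

-- Outside the boundary: one flattened presentation of "the purchase order"
-- that other audiences can consume without seeing the underlying structures.
CREATE VIEW v_purchase_order AS
SELECT h.purchase_order_id, h.supplier_id, h.order_dt,
       l.line_no, l.item_cd, l.order_qty
  FROM purchase_order h
  JOIN purchase_order_line l
    ON l.purchase_order_id = h.purchase_order_id;
```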

The nice thing about that as well is we can expand and collapse these, so we can maintain the relationships between the higher-order objects even though they originate from constructs that are contained within those business data objects themselves. Now, as a modeler, when I get to the end of the sprint, at the sprint wrap-up, I have a lot of things that I need to do, which I call my housekeeping for the next sprint. Every sprint I create what I call the Named Release – that gives me my baseline for what I now have at the end of the release. So that is my baseline going forward, and all these baselines or Named Releases that I create and save in my repository I can use to do a compare/merge, so I can always compare any given end of sprint to any other sprint, which is very important for knowing what the changes to your data model were on the way through its journey.

I also create a delta DDL script, using the compare/merge again, from the start to the end of the sprint. I may have checked in a whole bunch of incremental scripts, but if I need it, I now have a script that I can deploy to stand up other sandboxes, so I can just say this is what we had at the beginning of the one sprint, push it through, and build a database as a sandbox to start with for the next sprint. We can also use those things to do things like stand up QA instances, and ultimately of course we want to be pushing our changes out to production, so we have multiple things going on at the same time. Again, we fully participate in the sprint planning and retrospectives; the retrospectives are really the lessons learned and that is extremely important, because you can get going very quickly during agile and you need to stop and celebrate the successes as well. Figure out what went wrong, make it better the next time around, but also celebrate the things that went right and build on them as you keep moving forward into the next sprints.

I am now going to very quickly talk about business value. There was a project that I got involved with many years ago that started as an agile project, and it was an extreme project, so it was a pure self-organizing team where it was just developers that were doing everything. To make a long story short, this project was stalling, and they were finding they were spending more and more time on remediating and fixing the defects that were identified than they were on pushing out more functionality; in fact, when they looked at it based on the burn-down charts, they were going to have to extend the project six months at a huge cost. And when we looked at it, the way to remediate the problem was to utilize a proper data modeling tool with a skilled data modeler involved on the project itself.

If you look at the vertical bar on this particular chart, it is showing cumulative defects versus cumulative objects – and I am talking about data objects or constructs that were created, such as the tables with the constraints and those types of things. If you look at it before the data modeler was introduced, the number of defects was actually exceeding, and starting to build a bit of a gap over, the actual number of objects that were produced up until that point in time. After week 21, that is when the data modeler came in, refactored the data model based on what was there to fix a number of things, and then started modeling, as part of the project team going forward, the changes as that project was being pushed forward. And you saw a very quick turnaround: within about a sprint and a half, we saw a huge uptick in the number of objects and data constructs that were being generated and constructed, because we were generating out of a data modeling tool rather than a developer stick-building them in an environment, and they were correct because they had the proper referential integrity and the other constructs they should have. The level of defects against those almost flatlined. By taking that appropriate action and making sure that the data modeling was fully engaged, the project was delivered on time with a much higher level of quality, and in fact, it would not have delivered at all if those steps had not taken place. There are a lot of agile failures out there; there are also a lot of agile successes if you get the right people in the right roles involved. I'm a huge proponent of agile as an operational discipline, but you need to make sure that you have the skills of all the right groups involved in your project teams as you go forward on an agile type of endeavor.

To summarize, data architects and modelers must be involved in all development projects; they really are the glue that holds everything together, because as data modelers and architects we understand not only the data constructs of the given development project, but also where the data exists in the organization, where we can source that data from, and how it is going to be used and utilized outside the particular application itself that we are working on. We understand the complex data relationships, and it is paramount to be able to move forward, and also, from a governance perspective, to map, document and understand what your full data landscape looks like.

It is like manufacturing; I came from a manufacturing background. You cannot inspect quality into something at the end – you need to build quality into your design upfront and on your way through, and data modeling is a way of building that quality into the design in an efficient and cost-effective manner all the way through. And again, something to remember – and this is not to be trite, but it is the truth – applications come and go, data is the vital corporate asset and it transcends all those application boundaries. Every time you are putting in an application you are probably being asked to preserve the data out of other applications that came before, so we just need to remember that it is a vital corporate asset that we keep maintaining over time.

And that’s it! From here we will take more questions.

Eric Kavanagh: Alright, good, let me throw it over to Robin first. And then, Dez, I’m sure you have a couple of questions. Take it away, Robin.

Dr. Robin Bloor: Okay. To be honest, I have never had any problem with agile development methods, and it seems to me what you're doing here makes eminent sense. I remember looking at something in the 1980s which indicated, really, that the problem that you actually run into, in terms of a project spinning out of control, is normally when you let a mistake persist beyond a particular stage. It just becomes more and more difficult to fix if you don't get that stage right. So one of the things that you're doing here – and I think this is the slide – one of the things that you're doing here in sprint zero, in my opinion, is absolutely important, because you're really trying to get the deliverables pinned down there. And if you don't get deliverables pinned down, then deliverables change shape.

That's, kind of, my opinion. It's also my opinion – I really don't have any argument with the idea that you've got to get the data modeling right to a certain level of detail before you go through. What I'd like you to try and do, because I didn't get a complete sense of it, is just describe one of these projects in terms of its size, in terms of how it flowed, in terms of, you know, where the problems cropped up, and were they resolved? Because I think this slide is pretty much the heart of it, and if you could elaborate a little bit more on that, I'd be very grateful.

Ron Huizenga: Sure, and I'll use a couple of example projects. The one that, kind of, went off the rails was brought back on by actually getting the right people involved and doing the data modeling; everything was really a way of making sure that the design was understood better, and we obviously had a better implementation design on the way through by modeling it. Because when you model it, you know, you can generate your DDL and everything out of the back end of the tool, rather than having to stick-build this like people might typically do by going straight into a database environment. And typical things that will happen with developers is they'll go in there and they'll say, okay, I need these tables. Let's say we're doing order entry. So they might create the order header and the order detail tables, and those types of things. But they'll quite often forget or neglect to make sure that the constraints are there to represent the foreign key relationships. They might not have the keys correct. The naming conventions may be suspect as well. I don't know how many times I've gone into an environment, for instance, where you see a bunch of different tables with different names, but then the column names in those tables are like ID, Name, or whatever, so they've really lost the context, without the table, of exactly what that is.
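A hypothetical before-and-after, purely to illustrate the kind of thing Ron is describing – the "before" is the stick-built version with generic names and no declared keys, the "after" is what modeled, generated DDL might look like (all names invented):

```sql
-- Before: generic column names, no keys, no declared relationships.
CREATE TABLE order_header (
    id     INTEGER,
    name   VARCHAR(100),
    status CHAR(1)
);
CREATE TABLE order_detail (
    id        INTEGER,
    header_id INTEGER,
    qty       DECIMAL(12,3)
);

-- After: drop the stick-built versions and generate context-carrying names,
-- keys and referential integrity from the model.
DROP TABLE order_detail;
DROP TABLE order_header;

CREATE TABLE order_header (
    order_header_id INTEGER      NOT NULL,
    customer_nm     VARCHAR(100) NOT NULL,
    order_status_cd CHAR(1)      NOT NULL,
    CONSTRAINT pk_order_header PRIMARY KEY (order_header_id)
);
CREATE TABLE order_detail (
    order_header_id INTEGER       NOT NULL,
    line_no         INTEGER       NOT NULL,
    order_qty       DECIMAL(12,3) NOT NULL,
    CONSTRAINT pk_order_detail PRIMARY KEY (order_header_id, line_no),
    CONSTRAINT fk_order_detail_header
        FOREIGN KEY (order_header_id) REFERENCES order_header (order_header_id)
);
```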

So, typically when we’re data modeling we’ll make sure that we’re applying proper naming conventions to all the artifacts that get generated out in the DDL as well. But to be more specific about the nature of the projects themselves: generally speaking, I’m talking about fairly large initiatives. One of them was a $150 million business transformation project where we replaced over a dozen legacy systems. We had five different agile teams going simultaneously. I had a full data architecture team, so I had data modelers from my team embedded in every one of the other application area teams, and we were working with a combination of in-house business experts who knew the subject matter and who were doing the user stories for the requirements themselves. We had business analysts in each of those teams who were actually modeling the business process, with the activity diagrams or business process diagrams, helping to flesh out the user stories more with the users before they got consumed by the remainder of the team as well.

And then, of course, the developers that were building the application code on top of that. We were also working with, I think it was, four different systems integration vendors building different parts of the application as well: one team was building the data services, another was building application logic in one area, and another that had expertise in a different business area was building the application logic in that area. So we had a whole collaboration of people working on this project. On that one in particular we had 150 people onshore on the team and another 150 resources offshore, collaborating in two-week sprints to drive this thing out. And to do that you need to make sure you’re firing on all cylinders, everybody is well synchronized in terms of what their deliverables are, and you have those frequent resets to make sure you’re completing your deliveries of all the necessary artifacts at the end of every sprint.

Dr. Robin Bloor: Well that’s impressive. And just for a little more detail on that – did you end up with a complete, what I would call, MDM map of the whole data area at the end of that project?

Ron Huizenga: We had a complete data model that was broken down with the decomposition among all the different business areas. The data dictionary itself, in terms of full definitions, fell a little bit short. We had most of the tables defined; we had most of the columns defined as to exactly what they meant. There were some that weren’t there and, interestingly enough, a lot of those were pieces of information that came from the legacy systems where, after the end of the project scope itself, that was still being documented as a carry-forward set of artifacts outside of the project, because it was something that needed to be sustained by the organization going forward. At the same time the organization took a much greater view of the importance of data governance, because we saw a lot of shortcomings in those legacy systems and those legacy data sources that we were trying to consume, because they weren’t documented. In a lot of instances we only had databases that we had to reverse engineer to try to figure out what was there and what the information was for.

Dr. Robin Bloor: It doesn’t surprise me, that particular aspect of it. Data governance is, let’s call it, a very modern concern and I think, really, there’s a lot of work that, let’s say, should have been done historically on data governance. It never was because you could, kind of, get away with not doing it. But as the data resource just grew and grew, eventually you couldn’t.

Anyway, I’ll pass over to Dez because I think I’ve had my allotted time. Dez?

Dez Blanchfield: Yes, thank you. Through this whole thing I’m watching and thinking to myself that we’re talking about seeing agile used in anger, in many ways. Although that’s got negative connotations, I mean it in a positive way. Could you maybe just give us a scenario? I mean, there are two places I can see this being a perfect fit: one is new projects that just need to be done from day one, but I think invariably, in my experience, it’s often the case that when projects get large enough that this becomes necessary, there’s an interesting challenge in gluing the two worlds together, right? Can you give us any sort of insight into some success stories where you’ve gone into an organization, it’s become clear that they’ve got a slight clash of the two worlds, and you’ve been able to successfully put this in place and bring large projects together where they might have otherwise gone off the rails? I know it’s a very broad question but I’m just wondering if there’s a particular case study you can point to where you said, you know, we put this all in place and it brought all of the development team together with the data team and we addressed something that might have otherwise sunk the boat?

Ron Huizenga: Sure, and in fact the one project that happened to be a pipeline project was the one I alluded to, where I showed that chart with the defects before and after the data modeler was involved. Quite often there are preconceived notions, particularly if things are spun up from a purely development perspective, where it’s just developers involved in these agile projects to deliver the applications. So what happened there, of course, is they did get off the rails, and their data artifacts in particular, or the data deliverables they were producing, fell short of the mark in terms of quality and really addressing things overall. And there’s quite often this misconception that data modelers will slow projects down, and they will if the data modeler doesn’t have the right attitude. Like I say, you have to lose that – sometimes there are data modelers that have that traditional gatekeeper attitude where, “We’re here to control what the data structures look like,” and that mentality has to disappear. Anybody who’s involved in agile development, and particularly the data modelers, has to take on the role of a facilitator to really help the teams move forward. And the best way to illustrate that is to very quickly show teams how productive they can be by modeling the changes first. And again, that’s why I talked about the collaboration.

There are some things that we can model first and generate the DDL to push out to the developers. We also want to make sure that they don’t feel like they’re being restricted. So, if there are things they’re working on, let them keep working in their development sandboxes, because that’s where developers are working on their own desktops or other databases to make some changes where they’re testing things out. And collaborate with them and say, “Okay, work with that. We’ll bring it into the tool, we’ll resolve it and then we’ll push it forward and give you the scripts that you can deploy to update your databases, to upgrade them to what the actual sanctioned production view is going to be as we continue to move forward.” And you can turn that around in a very quick fashion. I found that my days were filled with just going back and forth iterating with different development teams, looking at changes, comparing, generating scripts and getting them going, and I was able to keep up with four development teams rather easily once we achieved momentum.
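A minimal sketch of that compare-and-generate round trip, assuming schemas can be represented as simple dictionaries of table to column types; this is only an illustration of the idea, not how any particular tool does it, and the schemas are invented.

```python
# Diff a sanctioned model against a developer sandbox and emit an incremental
# script the developers can deploy. Both schemas here are hypothetical.
sanctioned_model = {
    "order_header": {"order_header_id": "INTEGER", "customer_name": "TEXT",
                     "order_date": "TEXT", "order_status_code": "TEXT"},
}

sandbox_schema = {
    "order_header": {"order_header_id": "INTEGER", "customer_name": "TEXT",
                     "order_date": "TEXT"},
}

def incremental_ddl(model, sandbox):
    """Yield ALTER statements for columns in the model but missing from the sandbox."""
    for table, columns in model.items():
        existing = sandbox.get(table, {})
        for column, column_type in columns.items():
            if column not in existing:
                yield f"ALTER TABLE {table} ADD COLUMN {column} {column_type};"

for statement in incremental_ddl(sanctioned_model, sandbox_schema):
    print(statement)  # hand these to the development teams to run in their sandboxes
```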

Dez Blanchfield: One of the things that comes to mind out of that is that a lot of the conversations I’m having on a daily basis are about this freight train coming at us of, sort of, machine-to-machine and IoT. We think we’ve got a lot of data now in our current enterprise environments; if we set the unicorns aside for a moment, where we know that the Googles and the Facebooks and the Ubers have petabytes of data, in a traditional enterprise we’re still talking about hundreds of terabytes, and that’s a lot of data. But there’s this freight train coming at organizations, in my view, and Dr. Robin Bloor alluded to it earlier: the IoT. We’ve got a lot of web traffic, we’ve got social traffic, we’ve now got mobility and mobile devices, the cloud has, sort of, exploded, but now we’ve got smart infrastructure and smart cities, and there’s this whole world of data that’s just exploded.

For an everyday organization, a medium to large organization that’s sitting there seeing this world of pain come at them without an immediate plan in mind, what are some of the takeaways, in just a couple of sentences, that you’d put to them as to when and where they need to start thinking about putting some of these methodologies in place? How early do they need to start planning, to sit up and pay attention and say this is the right time to get some tools in place, get the team trained up and get a shared vocabulary going around this challenge? How late in the story is too late, or when is too early? What does that look like for some of the organizations you’re seeing?

Ron Huizenga: I would say for most organizations, if they haven’t already adopted data modeling and data architecture with powerful tools like this, the time they need to do it is yesterday. Because it’s interesting that, even today, we have so much data in our organizations and, generally speaking, based on some surveys that we’ve seen, we’re using less than five percent of that data effectively when we look across organizations. And with IoT, or even NoSQL and big data – even if it’s not just IoT, but just big data in general – where we’re now starting to consume even more information that originates from outside our organizations, that challenge is becoming larger and larger all the time. And the only way we have a chance of tackling it is to help ourselves understand what that data is about.

So, the use case is a little bit different. What we find ourselves doing when we look at that data is capturing it, reverse engineering it, seeing what’s in those sources, whether they’re in our data lakes or even in our in-house databases, synthesizing out what the data is, and applying meanings and definitions to it so we can understand what it is. Because until we understand what it is, we cannot ensure that we’re using it correctly or adequately. So we really need to get a handle on what that data is. And the other part of that is: don’t consume all this external data just because you can; make sure that you have a use case that supports consuming it. Focus on the things that you need rather than just trying to pull in things that you might need later on. Focus on the important things first and, as you work your way through, then you’ll get to consuming and trying to understand the other information from outside.
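As a hedged sketch of that reverse-engineering step, the fragment below inspects a database it did not build, lists the tables and columns, and starts a rough data dictionary that still needs business definitions filled in. The cryptic legacy table is invented for the example, and SQLite stands in for whatever source you actually have.

```python
# Reverse engineer an unknown database into the beginnings of a data dictionary.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cust (id INTEGER, nm TEXT, dt TEXT)")  # a cryptic legacy table

dictionary = []
tables = conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()

for (table_name,) in tables:
    for row in conn.execute(f"PRAGMA table_info({table_name})"):
        dictionary.append({
            "table": table_name,
            "column": row[1],   # column name
            "type": row[2],     # declared type
            "definition": "TODO: agree a business definition with the subject-matter experts",
        })

for entry in dictionary:
    print(entry)
conn.close()
```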

A perfect example of that is, I know we’re talking IoT and sensors, but the same type of problem has actually been in many organizations for many years, even before IoT. Anybody who has a production control system, whether they’re a pipeline company, a manufacturer, or any process-based company doing a lot of automation with controls and using [inaudible] data streams and things like that, has these firehoses of data that they’re trying to drink out of to figure out what events have occurred in their production equipment, what happened and when. And amongst this huge stream of data there are only specific pieces of information or tags that they’re interested in, that they need to sift out, synthesize, model and understand. They can ignore the rest of it until it comes time to really understand it, and then they can expand their scope to pull more and more of it in, if that makes sense.
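A tiny sketch of that sifting idea, with hypothetical tag names and an in-memory list standing in for the real firehose; a production system would read from a historian or message bus instead.

```python
# Sift only the tags we have modeled and understood out of a stream of readings.
TAGS_OF_INTEREST = {"pump_7_pressure", "valve_3_position"}  # hypothetical tags

def interesting_readings(stream):
    """Yield only the readings whose tag is in the set we care about."""
    for reading in stream:
        if reading["tag"] in TAGS_OF_INTEREST:
            yield reading

firehose = [
    {"tag": "pump_7_pressure", "value": 101.3, "timestamp": "2016-06-01T10:00:00"},
    {"tag": "ambient_temp_4", "value": 21.5, "timestamp": "2016-06-01T10:00:00"},
    {"tag": "valve_3_position", "value": 0.75, "timestamp": "2016-06-01T10:00:01"},
]

for reading in interesting_readings(firehose):
    print(reading)
```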

Dez Blanchfield: It does, indeed. There’s one question that I’m going to lead into, which came from a gentleman called Eric, and we’ve been chatting about it privately. I’ve just asked his permission, which he’s given, to ask it of you. Because it leads in nicely to this, just to wrap up, because we’re going a little bit over time now, and I’ll hand back to Eric. But the question from another Eric was, is it reasonable to assume that the owners of a startup will be familiar with and understand the unique challenges around modeling terminology and so on, or should it be handed to somebody else for interpretation? So, in other words, should a startup be capable and ready and willing and able to focus on and deliver on this? Or is it something they should probably shop out and bring experts on board for?

Ron Huizenga: I guess the short answer is it really depends. If it’s a startup that doesn’t have somebody in-house who is a data architect or modeler that really understands the database, then the quickest way to start is bringing somebody with a consulting background that is very well versed in this space and can get them going. Because what you’ll find – and in fact, I did this on a lot of engagements that I did before I came over to the dark side in product management – is I would go into organizations as a consultant, lead their data architecture teams, so that they could, kind of, refocus themselves and train their people on how to do these types of things so that they sustain it and carry the mission going forward. And then I would go on to my next engagement, if that makes sense. There are a lot of people out there that do that, that have very good data experience that can get them going.

Dez Blanchfield: That’s a great takeaway point and I totally agree with it, and I’m sure Dr. Robin Bloor would as well. Particularly in a startup, you’re focused on being an SME on the particular value proposition you’re looking to build as part of your startup business itself, and you probably shouldn’t need to be an expert on everything, so that’s great advice. But thank you very much, a fantastic presentation. Really great answers and questions. Eric, I’m going to hand back to you because I know we’ve gone probably ten minutes over time and I know you like to stick close to our time windows.

Eric Kavanagh: That’s okay. We have at least a couple of good questions. Let me throw one over to you. I think you’ve answered some of the others. But a very interesting observation and question from one attendee who writes: sometimes in agile projects the data modeler doesn’t have the entire long-term picture, so they wind up designing something in sprint one and then having to redesign it in sprint three or four. Doesn’t this seem counterproductive? How can you avoid that kind of thing?

Ron Huizenga: It’s just the nature of agile that you’re not going to get everything absolutely right in a given sprint. That’s actually part of the spirit of agile: you work with it. You’re going to be doing prototyping, where you’re working on code in a given sprint, and you’re going to make refinements to it. Part of that process is that as you’re delivering things, the end user sees them and says, “Yeah, that’s close, but I really need to have it do this little bit extra as well.” So that not only impacts the functional design of the code itself, but quite often we need to modify or add more data structure underneath to deliver what the user wants. That’s all fair game, and that’s why you really want to use the high-powered tools: you can very quickly model and make that change in a modeling tool and then generate the DDL for the database that the developers can then work against to deliver that change even more quickly. You’re saving them from having to hand code the data structures, as it were, and letting them concentrate on the programming or application logic that they’re most proficient at.

Eric Kavanagh: That makes complete sense. We had a couple of other people asking specific questions around how this all ties back to the tool. I know you spent some time going through examples and you’ve shown some screenshots about how you actually roll some of this stuff out. In terms of this whole sprint process, how often do you see that in play in organizations, versus how often do you see more traditional processes where things just, kind of, plod along and take more time? How prevalent is the sprint-style approach from your perspective?

Ron Huizenga: I think we’re seeing it more and more. I would say, probably in the last 15 years in particular, I’ve seen much more adoption by people recognizing that they really need to embrace quicker delivery. So I’ve seen more and more organizations jump on the agile bandwagon. Not necessarily entirely; they may start out with a couple of pilot projects to prove that it works, but there are some that are still very traditional and stick with the waterfall method. Now, the good news is, of course, that the tools work just fine in those organizations as well for those types of methodologies, but we have the adaptability in the tool so that those who do jump on board have the tools in the toolbox at their fingertips. Things like compare and merge, things like the reverse-engineering capabilities, so they can see what the existing data sources are and can actually compare and generate the incremental DDL scripts very quickly. And as they start to embrace that and see that they can have the productivity, their inclination to embrace agile increases even more.
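To illustrate the compare side of that in the other direction, here is a small sketch that reads the actual schema back out of a database and flags drift against the model, the mirror image of generating incremental DDL from the model side. The tables and the stray column are invented, and SQLite again stands in for whatever platform is really in use.

```python
# Compare the live schema against the modeled schema and report drift in both directions.
import sqlite3

model_columns = {"order_header": {"order_header_id", "customer_name", "order_date"}}

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE order_header ("
    "order_header_id INTEGER, customer_name TEXT, order_date TEXT, temp_flag TEXT)"
)  # temp_flag was hand-added in the database and never modeled

for table, modeled in model_columns.items():
    actual = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    for column in sorted(actual - modeled):
        print(f"{table}.{column} exists in the database but not in the model")
    for column in sorted(modeled - actual):
        print(f"{table}.{column} is in the model but missing from the database")
conn.close()
```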

Eric Kavanagh: Well, this is great stuff, folks. I just posted a link to the slides there in the chat window, so check that out; it’s a little bit of a Bitly in there for you. We do have all these webcasts for later viewing. Feel free to share them with your friends and colleagues. And Ron, thank you very much for your time today, you’re always pleasant to have on the show – a real expert in the field and it’s obvious that you know your stuff. So, thanks to you and thanks to IDERA and, of course, to Dez and our very own Robin Bloor.

And with that we’re going to bid you farewell, folks. Thanks again for your time and attention. We appreciate you sticking around for 75 minutes, that’s a pretty good sign. Good show guys, we’ll talk to you next time. Bye bye.