Digital.ai Interview: ‘We’re All Going to Become Prompt Engineers’


Software development is one of the first industries to feel the impact of artificial intelligence (AI), which is rapidly reshaping workflows and employee roles.

While AI may promise increased productivity and faster delivery, the tech industry needs to be careful about how quickly it integrates AI — something we dig into with Adam Kentosh, Field CTO for North America at the digital transformation firm Digital.ai.

Techopedia spoke with Kentosh about the practical steps companies need to take to incorporate AI into software development, how developer roles will change in the coming years, and why we are all going to be prompt engineers.

Key Takeaways

  • Adam Kentosh tells Techopedia that generative AI is boosting software development — but needs careful integration.
  • ‘Shift Left Testing’ and productivity gains are key reasons companies are adopting AI.
  • Security frameworks like SAST and DAST help prevent AI-generated code risks.
  • Developers must embrace roles as prompt engineers in the AI era — and we are all going to become one.
  • Keeping explainability and governance at the forefront of AI integration is essential in building trust.

About Adam Kentosh

Adam Kentosh, Digital.ai (Supplied)

Adam Kentosh, Field CTO for North America at Digital.ai, has extensive experience in the technology industry. Before taking on his current role, he was Digital.ai's Senior Director of Sales Engineering, and he previously served as Head of Solutions Architecture at Spot by NetApp and Regional Manager of Solutions Engineering at HashiCorp.

Kentosh has also worked as a Senior Solutions Architect at Red Hat, a DevOps Consulting Manager at Rolta AdvizeX, and a Senior System Engineer at CAS.

How Should Companies Incorporate AI into Software Development?

Q: How can companies use AI effectively in their software development and balance that with human input?


A: The good news is we’re not seeing developers become extinct. If anything, everybody’s focused on the productivity gains, and that’s a good thing.

There are three paths for companies to bring in artificial intelligence or machine learning:

  • They can write their own machine learning models, which is very technically intensive and probably not something that a lot of us need to be doing.
  • They can pair their existing data with machine learning models to start driving some strategic outcomes for the business.
  • They can use generative AI for the productivity gain.

It’s clear we’re seeing a maturity curve here, where most companies are starting with generative AI. It’s a fairly easy return on investment.

We can hopefully measure the impact that GenAI is having on software development specifically and how it's enabling developers to go faster — or how it's enabling teams to move from manual to automated testing, for instance.

There’s a lot we can do in that space, and that’s why we’re seeing such a drive towards productivity gains through generative AI.

Now, there’s a lot to be gained by the second tier, which is how we apply our engineering intelligence data to a machine learning model and start to get some good information from that. That’s where we’ll start to see a lot of focus over the next two years.

Gartner, back in March, coined the term “software engineering intelligence platform”. I’ve had no fewer than five or six conversations about this in the past three or four weeks, where companies are starting to realize they’re generating a lot of data and asking what they can do with it.

It’s important as companies make these investments to use machine learning and artificial intelligence to the best of our abilities. However, if we haven’t gone through the effort to baseline where we’re at today, it’s going to be hard to determine if what we’re doing is helping move the needle, especially in terms of development practices.

Another angle to think about here is the push toward Shift Left Testing. Surveys from GitHub and others now show that less than 30% of developers’ time is spent actually coding.

If that’s the case, we’ve got to make up productivity somewhere. Ideally, we’ll be able to augment some of the processes that are taking up developers’ time, whether it’s code reviews, security reviews, or writing tasks for the code they’ve written.

While we are asking things to shift left for good reasons, we can leverage machine learning and generative AI to augment the productivity of the developers we’re putting so much mental strain and stress on.

Q: What are some of the metrics or ways that companies can measure the effectiveness of the AI they use?

A: We’ve seen it start with DORA [DevOps Research and Assessment] metrics. From a DORA metric standpoint, we’re looking at the health of an organization’s delivery — deployment frequency, change lead time, time to restore, and change failure rate.

Those give us an indication of whether we’re deploying in a meaningful way.

What they did right with DORA was to take four or five simple metrics and roll them into an index. By doing that, the cool thing is we can start to look at different teams’ performance across that index and implement best practices.
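As an illustration only, here is a minimal Python sketch of how the four DORA metrics might be rolled into a single per-team index. The weights, thresholds, and normalization are assumptions for the sake of the example, not DORA's official scoring method.

```python
# Minimal sketch (not DORA's official scoring): roll the four DORA metrics
# into a single 0-100 index per team. Thresholds and weights are assumptions.
from dataclasses import dataclass

@dataclass
class TeamMetrics:
    deploys_per_week: float       # deployment frequency
    lead_time_hours: float        # change lead time
    restore_time_hours: float     # time to restore service
    change_failure_rate: float    # fraction of changes causing failure (0-1)

def dora_index(m: TeamMetrics) -> float:
    """Combine the four metrics into one score; higher means healthier delivery."""
    frequency = min(m.deploys_per_week / 10, 1.0)       # cap credit at 10 deploys/week
    lead_time = max(0.0, 1 - m.lead_time_hours / 168)   # target: within one week
    restore = max(0.0, 1 - m.restore_time_hours / 24)   # target: within one day
    stability = 1 - m.change_failure_rate
    return round(100 * (frequency + lead_time + restore + stability) / 4, 1)

teams = {
    "payments": TeamMetrics(8, 20, 2, 0.05),
    "mobile": TeamMetrics(2, 96, 12, 0.20),
}
for name, metrics in teams.items():
    print(name, dora_index(metrics))   # compare teams on the same index
```

An index like this only becomes meaningful once a baseline has been captured, which is why measuring before introducing new tooling matters.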

For instance, we can measure developer experience to make sure it’s good — keep developers engaged and prevent top developers from getting burned out.

Once we know the priorities, we can say what businesses should be measuring and help advise them on what those measurements are. Then, we can set the KPIs and aggregate the data to help give them that visibility.

Once we come up with this index and understand how it’s attributing back to the business goals that they’re after, we can then start to measure change based on the technologies they’re introducing.

That could be copilot technology or technology that helps shift from manual to automated testing. We can start to introduce them, and if they show measurable benefits, we can roll them out across the rest of the organization.

Managing AI Risk

Q: How can companies establish a framework to ensure compliance and avoid the security risks associated with using AI to generate code?

A: We’re seeing things happen in real time where SAST and DAST [static and dynamic application security testing] scanning is important, and those are things we can do to protect ourselves from ourselves. We can introduce the ability to find out if we have a vulnerable package in an application that we need to fix.

We have to be more proactive, as threats are coming from all over the place. First, we need to standardize what governance means for the organization. That could be enforcing SAST and DAST, cloud security scanning, or enforcing that mobile and web applications use obfuscation and these new techniques that allow us to keep our applications more secure.

We can get to a place where releases are templatized — not just taking an application and putting it on some endpoint, but having the release go through the development lifecycle in a templatized format that an organization can use as a best practice.
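For illustration, a templatized release can be as simple as a checklist of required gates that every release must pass before promotion. The hedged Python sketch below assumes hypothetical gate names and a made-up report format; it is not a description of any specific Digital.ai feature.

```python
# Hedged sketch: validate a release against an organization-wide template of
# required gates. Gate names and the report format are hypothetical.
REQUIRED_GATES = [
    "sast_scan",          # static application security testing
    "dast_scan",          # dynamic application security testing
    "dependency_scan",    # vulnerable-package check
    "obfuscation",        # code hardening for mobile/web builds
    "change_approval",    # human sign-off
]

def failed_gates(release_report: dict[str, bool]) -> list[str]:
    """Return required gates that are missing or did not pass."""
    return [gate for gate in REQUIRED_GATES if not release_report.get(gate, False)]

report = {"sast_scan": True, "dast_scan": True, "dependency_scan": False, "obfuscation": True}
blocked = failed_gates(report)
if blocked:
    print("Release blocked, failing gates:", blocked)
else:
    print("Release may proceed")
```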

From there, we also need to think about the level of security that applications need to have as they make it into the wild versus internal applications.

As soon as you publish an application on the web or in the App Store, your code is effectively in the hands of a potential threat actor. I can go to the App Store, download an application, unpack it with a tool, and then look at the code.

As apps have expanded into the mobile space, we now have a new layer of security to think about, like the outer shell of a jawbreaker: obfuscation and anti-tampering ingrained into the code itself.

Then if I were to download the application and try to run it through a decompiler, all I’m going to see is garbage text — it’s not human readable anymore, and that’s a great deterrent.
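As a toy analogy only (real mobile obfuscators work on compiled Swift, Kotlin, or native binaries, not Python source), the sketch below renames identifiers so that the same logic becomes much harder to interpret, which is roughly the effect a would-be reverse engineer sees after obfuscation.

```python
# Toy analogy of name obfuscation: the program still runs identically, but the
# identifiers no longer reveal intent. Real obfuscation and anti-tampering for
# mobile apps happen at the binary level and go much further than this.
import ast
import itertools

class RenameIdentifiers(ast.NodeTransformer):
    def __init__(self):
        self._aliases: dict[str, str] = {}
        self._counter = itertools.count()

    def _alias(self, name: str) -> str:
        if name not in self._aliases:
            self._aliases[name] = f"_x{next(self._counter)}"
        return self._aliases[name]

    def visit_FunctionDef(self, node):
        node.name = self._alias(node.name)   # rename the function itself
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        node.arg = self._alias(node.arg)     # rename parameters
        return node

    def visit_Name(self, node):
        node.id = self._alias(node.id)       # rename local variables
        return node

source = """
def check_pin(entered_pin):
    stored_pin = 1234
    return entered_pin == stored_pin
"""

print(ast.unparse(RenameIdentifiers().visit(ast.parse(source))))
# def _x0(_x1):
#     _x2 = 1234
#     return _x1 == _x2
```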

We can also do things that are even more deeply tied to how the application interacts with the operating system. On a jailbroken or rooted device, you can prevent the application from starting up because it recognizes that it’s in an unsafe environment.

We can program as much or as little as we want to react to the threats that are actively coming in. This type of security is also useful from an AI standpoint.

AI has to train itself on something, and if it’s training itself on code that has vulnerabilities in it, then guess what: the code it writes may also have vulnerabilities in it.

So now we’re back to this holistic approach. We need SAST and DAST to protect ourselves from ourselves and make sure that we’re not introducing any vulnerabilities. But we also need this other layer of security.

We’re protecting others, and it comes full circle when it also protects us from whatever AI might be doing inside our application that we weren’t aware of.
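To make the "protect ourselves from ourselves" point concrete, here is a toy, hedged sketch of a SAST-style rule that flags risky patterns an AI assistant might emit, such as eval() on input or SQL built by string concatenation. Real scanners are vastly more thorough; the patterns and messages here are illustrative assumptions.

```python
# Toy SAST-style rule (illustrative only): flag eval()/exec() calls and SQL
# strings built by concatenation, e.g. in code suggested by an AI assistant.
import ast

RISKY_CALLS = {"eval", "exec"}

def scan(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(f"line {node.lineno}: call to {node.func.id}()")
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
            left = node.left
            if (isinstance(left, ast.Constant) and isinstance(left.value, str)
                    and left.value.lstrip().upper().startswith("SELECT")):
                findings.append(f"line {node.lineno}: SQL query built by string concatenation")
    return findings

suggested_code = """
query = "SELECT * FROM users WHERE name = '" + user_name + "'"
result = eval(user_input)
"""
for warning in scan(suggested_code):
    print(warning)
```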

AI and the Role of Human Oversight

Q: What can companies do to ensure they have the right balance of human oversight to check that the code is accurate and reliable?

A: From a human standpoint, there are a couple of things that we need to do.

There’s always going to be a concern that it’s going to take over our jobs — but that’s not going to happen.

It’s important to explain to people the intention of introducing this technology, the productivity gains, and the specific value you’re looking to get out of it. Being open and honest in that conversation helps allay some of those fears.

Beyond that, there has to be clear ownership over the technologies. A lot of the generative AI models and copilots are being independently validated as well.

However, for large organizations, having stakeholders who are responsible for accuracy, cleansing of data, and validation, as well as understanding their responsibility for building trust for the AI they’re starting to use, is extremely important.

When we get into leveraging existing data with AI and machine learning models, we also need to think about explainability to help build trust.

For instance, we’ve worked with companies before where we run their data through our models, and what comes back is this almost forceful response: “That can’t be my result — there’s no way that’s what’s happening with my team”.

So it’s really important to be able to explain to them the key factors that contributed to this data model and their team’s results.

Having explainability, having a clear owner, and then anything that we can do to reduce machine bias and handle some of the data privacy issues that we need to be worried about — those are still all works in progress.

We’re still figuring out the regulatory compliance around machine learning and artificial intelligence, which is also why it’s been a challenge for businesses to decide whether they can use it.

Training Developers for the AI Era

Q: When it comes to upskilling and training, what can help employees and developers understand the capabilities and limitations of AI?

A: A recent GitHub survey showed that 90% of the developers they polled are using generative AI today, whether that’s inside or outside of work.

What that really meant to me was that 90% of developers are using generative AI.

It doesn’t really matter if it’s in or outside of work. If they’re doing it outside of work, it’s probably making its way inside of work.

Everybody wants to be engaged with AI. So even if you are in a position where you’re not entirely comfortable with it, there’s value in introducing pilot projects that allow your leads to get engaged and to start thinking about how to use generative AI.

Get them into a safe, regulated environment that is controlled for your purposes, and let them get their hands on it.

There’s so much out there now, too, in terms of training that people can do depending on their interest. Is it a situation where they want to become a machine learning specialist and be able to create their own models?

Or is it a situation where they just want to be a better developer, and they want to see how this augmentation of development can help their productivity?

That’s where those controlled pilots become helpful, as well as surveys to gauge interest and start a conversation about how to leverage AI inside of the organization.

Being proactive, helping them think about how they would like to use it day-to-day, and letting them take some ownership of it will always go far in giving them the enablement they want and making them feel heard.

New Era, New Skills

Q: What kind of new skill sets or roles do you see that developers might need?

A: We’re seeing engineers having to learn to write effective prompts for generative AI. That’s probably a natural role that every developer will have to fit into. It’s probably a skill that we as humans will have to get used to.

In reality, we’re all going to be prompt engineers to some extent, just like we’re all Google experts.
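As a small, hedged illustration of what writing effective prompts can look like in practice, the sketch below assembles a prompt with an explicit task, target language, and constraints. The structure and field names are assumptions for the example, not a format required by any particular model or vendor.

```python
# Minimal sketch of a structured prompt for code generation. The fields and
# wording are illustrative assumptions, not a vendor-specific format.
def build_prompt(task: str, language: str, constraints: list[str]) -> str:
    lines = [
        f"You are generating {language} code.",
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Return only the code, followed by a short explanation of the approach.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task="Write a function that validates an email address.",
    language="Python",
    constraints=[
        "Use only the standard library",
        "Include type hints and a docstring",
        "Add unit tests",
    ],
)
print(prompt)
```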

We’re also seeing model trainers, a role that takes deeper knowledge of TensorFlow or other frameworks to help fine-tune a model and then get some level of statistics out of it.

We’re seeing an influx of generative AI applications, not just ChatGPT and Copilot, but new products come to market every day, and we will continue to see that.

Not only does it open up the opportunity to become a model trainer or a prompt engineer, but you could even create your own new product.

There are, of course, considerations around ethics. In addition to making sure that the models aren’t biased, we need to think about how we can make sure the data that is generated is accurate.

When it comes to data engineering, we’re all very open with letting ChatGPT scan the Internet, but at some point — and we already see this with many enterprises today — they don’t want their data being in a public model, for good reason.

So, when we need to synthesize data that the model can train itself on, data engineering is going to continue to be important as well.
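As a hedged sketch of that idea, synthetic records like the ones generated below can stand in for real customer data when training or fine-tuning a model in-house. The field names and distributions are illustrative assumptions, not a recommended schema.

```python
# Minimal sketch: generate synthetic user records so real customer data never
# has to leave the organization. Fields and distributions are assumptions.
import random
import string

def synthetic_user(rng: random.Random) -> dict:
    username = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "user_id": rng.randint(100_000, 999_999),
        "username": username,                      # no link to a real person
        "signup_days_ago": rng.randint(0, 5 * 365),
        "monthly_sessions": max(0, round(rng.gauss(12, 6))),
    }

rng = random.Random(42)                            # seeded for reproducibility
dataset = [synthetic_user(rng) for _ in range(1_000)]
print(dataset[0])
```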


Nicole Willing
Technology Journalist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries. She has developed expertise in covering commodity, equity, and cryptocurrency markets, as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.