GitLab Users: AI Is Necessary for Developers — but We’re Also Worried

While organizations are optimistic about adopting generative artificial intelligence (AI), they are still concerned that AI tools will access sensitive corporate data and intellectual property, according to a recent GitLab survey.

The report, titled “The State of AI in Software Development,” offers insights from 1,001 global senior technology executives, developers, and security and operations professionals about their challenges, successes, and priorities for adopting AI.

According to the GitLab developer survey, 83% of respondents say that implementing AI in their software development processes is critical to allow them to remain competitive; however, 79% express concerns about AI tools having access to intellectual property or private information.

And 95% of senior technology executives prioritize protecting privacy and intellectual property when they select AI tools, according to the survey.

In addition, 32% of respondents were “very” or “extremely” concerned about introducing AI into the software development lifecycle.

Of those, 39% say they are concerned that AI-generated code may introduce security vulnerabilities, and 48% worry that AI-generated code may not be subject to the same copyright protection as human-generated code.


Complex Relationship Between Adopting AI and Privacy, Security Concerns

The relationship between adopting AI and the concerns surrounding cybersecurity and privacy is complex and multifaceted, says Sergey Medved, vice president of product management and marketing at Quest Software.

“It’s interesting that only 32% of the respondents to GitLab’s survey expressed reservations about incorporating AI into their software development lifecycle,” he says. “But it makes a certain kind of sense, since [nearly] half [40%] of the respondents work at [small and midsize businesses] or startups with 250 or fewer employees.”

For smaller or younger organizations, the allure of AI comes from its potential to bolster efficiency and competitiveness with fewer resources, which might outweigh its perceived cybersecurity risks, according to Medved.

In contrast, larger enterprises, particularly those developing software for critical infrastructure, earmark a greater portion of their IT budgets for security, including code security and supply chain risk management, he adds. And an increase in developer productivity may not be worth the heightened security or legal risks.

“This research shows that while there are absolutely cybersecurity concerns around AI for developers, we can’t apply a one-size-fits-all approach to mitigate them,” Medved says.

Increased Workloads for Security Pros

While 40% of those surveyed cite improved security as a key benefit of AI, the same proportion of security professionals worry that AI-powered code generation will increase their workloads.

“The transformational opportunity with AI goes way beyond creating code,” says David DeSanto, chief product officer of GitLab, in a statement. “According to the GitLab Global DevSecOps Report, only 25% of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60% of developers’ day-to-day work.”

The survey also notes that increased developer productivity may widen the existing gap between developers and security professionals.

The reason, as noted above, is that security professionals worry AI-generated code will introduce more security vulnerabilities, adding to their workload. The survey results bear that out: developers report spending just 7% of their time identifying and mitigating security vulnerabilities.

“I believe this is a valid concern given the hallucinations, potential for bias, and lack of explainability given by large language models,” says Tony Lee, chief technology officer at Hyperscience, a provider of enterprise AI solutions.

However, a well-trained model should be able to generate secure code just as well as a professionally trained engineer does, he adds.

“The important thing for companies to remember is that code review, code analysis, and testing are critical to ensure the code is secure before going to production,” Lee says.
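The kind of pre-production check Lee describes can be automated. Below is a minimal sketch of a pre-review scan that flags a few well-known risky constructs in AI-generated Python code; the pattern list, function name, and sample snippet are illustrative, not a substitute for a real static-analysis tool.

```python
import re

# Illustrative patterns only: a real scanner (e.g., a dedicated SAST tool)
# covers far more cases. Each entry maps a regex to a human-readable finding.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval()",
    r"\bexec\(": "use of exec()",
    r"shell\s*=\s*True": "subprocess call with shell=True",
    r"pickle\.loads\(": "unpickling potentially untrusted data",
}

def scan_generated_code(source: str) -> list[str]:
    """Return human-readable findings for risky constructs, with line numbers."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {description}")
    return findings

# Hypothetical AI-generated snippet being checked before review.
snippet = "import subprocess\nsubprocess.run(cmd, shell=True)\nresult = eval(user_input)\n"
for finding in scan_generated_code(snippet):
    print(finding)
```

A check like this would sit alongside, not replace, the human code review and testing that Lee calls critical.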

Additionally, 48% of developers compared to 38% of security professionals identify faster cycle times as a benefit of AI, according to the GitLab survey. But overall, 51% of those surveyed identify productivity as a key benefit of AI implementation.

How Organizations Can Mitigate Their Concerns About AI

This latest report from GitLab is another example of how major security concerns linger for organizations as sensitive and personally identifiable information is input into ChatGPT and other large language models, such as Google Bard, says Ron Reiter, co-founder and chief technology officer at Sentra, a cloud data security company.

“As the survey states, 79% of respondents noted concerns about AI tools having access to private information or intellectual property,” he says. “As AI seemingly becomes ubiquitous with office work, we can expect that number to rise dramatically and as a result, AI-related data theft will become a new threat.”

To mitigate these concerns, organizations should closely analyze their use of large language models (LLMs), Reiter adds. Specifically, they should realize that while there is no question that AI will play a vital role in the advancement of technology, they must take proactive steps to define the boundaries of acceptable AI behavior.

“One way of doing so is being aware of the rise of threat vectors propagated by ‘copy and paste’ prompts,” Reiter explains. “If security teams can educate employees about the risks of prompting LLMs, they can capitalize on the tool’s benefits while also protecting sensitive data in the same breath. Being smart about how to integrate AI means creating guardrails to ensure the ethical and responsible use of AI.”
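One concrete form such a guardrail can take is redacting likely-sensitive substrings before a prompt ever leaves the organization. The sketch below assumes a simple regex-based policy; the patterns, placeholder tokens, and function name are illustrative assumptions, not a complete data-loss-prevention solution.

```python
import re

# Illustrative redaction rules: each regex maps likely-sensitive text to a
# placeholder. A production guardrail would use a vetted PII/secrets detector.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API_KEY]"),
]

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt is sent to an LLM."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com, api_key=sk-123abc"))
```

Routing every outbound prompt through a filter like this is one way to enforce the "boundaries of acceptable AI behavior" Reiter describes, while still letting employees benefit from the tools.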

What Companies Should Consider to Adopt AI Successfully

Data from S&P Global Market Intelligence’s “Voice of the Enterprise: AI and Machine Learning, Infrastructure 2023” report suggests a disconnect between the AI ambitions of organizations and their infrastructural realities, says Alexander Johnson, research analyst at S&P Global Market Intelligence and a member of the data, AI, and analytics research group.

“This is further heightened by enthusiasm surrounding generative AI,” he says. “By this I mean only around a third of organizations are able to meet the full scale of existing internal AI workload demand, and the average organization loses 38% of their projects before they enter production — with infrastructure performance and data quality the biggest drivers of that failure.”

There is a lot of focus on the availability of AI accelerators, in particular GPUs, but bottlenecks are much broader, according to Johnson. Many businesses see a need for higher performance networking and storage to improve the performance of their AI workloads, for example.

“Organizations with ambitions to invest in AI will need to pair that intent with a meaningful strategy around AI infrastructure and partnerships,” he adds.

There are three steps every organization should take to ensure their AI implementations are successful, says Lee.

“They should consider total cost of ownership when looking for a new solution — think beyond the initial install cost and look at the entire lifespan of the software,” he says. “They should also understand what data the models were trained on as well as the potential biases that may exist. And they should provide guardrails to protect their models from hallucinations, bias, and poor quality.”

The Bottom Line

Organizations should be cautious about introducing AI into the software development lifecycle, but robust reviewing and testing processes can help mitigate risks, according to Johnson.

“That said, it is important organizations guard against early overextension,” he says. “The risk may come less from experienced developers and more from enthusiastic business-line users experimenting with these tools, as they may sit outside of strategies surrounding the design and implementation of controls.”

Executives should also remain aware of legal and privacy implications, Johnson adds.

“Particularly if code generation tools are cloud-based or use external application programming interfaces, data handling processes need to be assessed and relevant security staff brought into the tool selection process,” he says. “In addition, ensure any code used to tune code generation tools meets licensing requirements.”

Simply put, it’s wise for companies to begin thinking about deploying AI to generate code as they would think about hiring a new engineer, Lee says.

“Organizations need to build trust in the data AI generates and shouldn’t expect perfection right away,” he adds.


Linda Rosencrance
Technology journalist

Linda Rosencrance is a freelance writer and editor based in the Boston area, with expertise ranging from AI and machine learning to cybersecurity and DevOps. She has been covering IT topics since 1999 as an investigative reporter working for several newspapers in the Boston metro area. Before joining Techopedia in 2022, she wrote for TechTarget, TechBeacon, IoT World Today, Computerworld, CIO magazine, and many other publications. She also writes white papers, case studies, ebooks, and blog posts for many corporate clients, interviewing key players, including CIOs, CISOs, and other C-suite execs.