Password manager 1Password today released a report showing that nine out of ten (92%) security professionals have security concerns about generative AI.
Some of the concerns listed by the research include employees entering sensitive data into AI tools (48%), using AI systems trained with incorrect or malicious data (44%), and falling for AI-enhanced phishing attempts (42%).
The study surveyed 1,500 North American workers, including 500 IT security professionals, to evaluate the state of enterprise security.
In the process, the researchers revealed that security practitioners are deeply concerned over their ability to control AI risks in the workplace, both due to employee usage and phishing scams powered by large language models (LLMs).
Why Is Generative AI Causing Security Teams So Much Anxiety?
One of the main reasons generative AI is causing security teams anxiety is the widespread adoption of remote and hybrid working.
Jeff Shiner, CEO of 1Password, said in the announcement press release: “Since the pandemic, employees have gained unprecedented flexibility in where and how they work, and that flexibility often extends to the apps and devices they use.
“Productivity has become paramount, leaving significant security challenges for IT and security leaders—who often feel like they don’t have bandwidth or budget to keep employees secure.”
In a short space of time, security practitioners have not only had to develop policies to secure remote working environments post-COVID-19 but have also had to keep up with the wave of AI-generated threats following the release of ChatGPT in November 2022.
On one end of the spectrum, teams must grapple with securing remote working environments and protecting devices and identities located offsite in employees’ homes and in public spaces.
On the other end, they have needed to implement controls that enable the use of generative AI without exposing proprietary data to leakage if an employee accidentally enters company secrets, information that breaks privacy laws, or personally identifiable information (PII) into a ChatGPT prompt.
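As a rough illustration of that kind of control, the hypothetical Python sketch below screens prompts for obvious PII and secret-like patterns before they are forwarded to an external LLM. The pattern list, blocking behavior, and submit_prompt function are assumptions made for the example, not details from the 1Password report.

```python
# Illustrative sketch only: a naive pre-submission screen that blocks prompts
# containing obvious PII or secret-like patterns before they reach an external
# LLM API. The patterns are far from exhaustive and are assumptions, not
# details from the report.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_hint": re.compile(r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*\S+"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    """Block or forward a prompt depending on what the screen finds."""
    findings = screen_prompt(prompt)
    if findings:
        # In practice this might log to a SIEM and notify the user instead.
        return f"Blocked: prompt appears to contain {', '.join(findings)}"
    return "Forwarded to the approved LLM endpoint"  # placeholder for a real API call

if __name__ == "__main__":
    print(submit_prompt("Summarise this contract for client jane.doe@example.com"))
    print(submit_prompt("Draft a polite out-of-office reply"))
```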
Factors like remote working and generative AI have divided the attention of security teams, with research finding that more than two-thirds of security pros (69%) admit they’re at least partly reactive when it comes to security. The main reason cited was that they were being pulled in too many conflicting directions (61%).
We recently reported on the rise of Chief Information Security Officers (CISOs) considering leaving their jobs, as well as the hardest cybersecurity jobs to fill in 2024.
Remote Working and the Door to Shadow AI
Remote working may have given employees the freedom to work where they’re most productive or comfortable, but it has also introduced some serious security complications.
At a glance, organizations have no way of knowing whether remote employees are following cybersecurity best practices or acting negligently. Simple actions such as visiting restricted sites, using an unauthorized personal device, or failing to update software can introduce vulnerabilities the organization doesn’t know exist.
A common risk identified in the report was the use of Shadow IT. More specifically, the study found that 34% of employees use unapproved apps and tools, typically five each, which aren’t maintained by the security team and are prone to data leakage.
Ashley Leonard, CEO of Syxsense, told Techopedia:
“Companies should be considering the increased risk of employees using genAI and entering sensitive data into non-approved systems.
“One way to approach this is to encourage the use of genAI but within specific boundaries and tools. We’ve seen employees leverage non-approved IT tools to get their jobs done, and companies still battle with Shadow IT today. By enabling the use of this new technology – within limits – you can reduce the risk.”
Examining the Risks of Generative AI in the Workplace
Generative AI introduces some significant risks in the workplace and requires users to be highly knowledgeable about the limitations of the technology.
Above all, employees need to be aware that information entered into prompts can be used to train the vendor’s models, meaning proprietary data, PII, and other sensitive information should never be entered.
While some solutions like ChatGPT Enterprise give assurances that prompts aren’t used to train models, these guarantees can’t be relied on due to the lack of transparency over a third-party organization’s data handling and training practices.
There is also the risk of AI systems being trained on copyrighted materials, incorrect data, or prejudiced content, which can result in harmful outputs.
It’s also worth noting that security teams can’t necessarily rely on employees to use AI solutions responsibly, with 22% of employees admitting to knowingly violating company rules on the use of generative AI.
If this weren’t bad enough, a further one in four employees (26%) stated they don’t understand the security concerns around using AI tools at work.
When considering these factors alongside the risk of phishing due to AI-generated content, it is no surprise that security teams are concerned about the impact that generative AI has had on the threat landscape.
Where We Go Next
Grappling with the security challenges of generative AI in decentralized working environments is a difficult process, but for many organizations, the answer lies in investment in AI and AI knowledge.
In 2023, 40% of security teams implemented AI-specific security tools, and 42% of teams hired IT or security workers with AI-specific expertise.
Through the use of AI, enterprises can automatically identify anomalous access to user accounts and block unauthorized individuals from accessing privileged data. This significantly reduces the amount of pressure on human security professionals to react at speed.
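As a simplified illustration of that idea (and not a description of any specific vendor’s tooling), the hypothetical Python sketch below flags a sign-in as anomalous when it arrives from a country and device the user has never been seen on before; a real system would combine far more signals and respond with step-up authentication or a block.

```python
# Minimal, hypothetical sketch: flag account access that deviates from a
# user's historical sign-in pattern. Event fields and logic are illustrative
# assumptions, not a real product's detection rules.
from collections import defaultdict

# Historical sign-ins per user: a set of (country, device_id) pairs already seen.
baseline: dict[str, set[tuple[str, str]]] = defaultdict(set)

def record_signin(user: str, country: str, device_id: str) -> None:
    """Add a successful, verified sign-in to the user's baseline."""
    baseline[user].add((country, device_id))

def is_anomalous(user: str, country: str, device_id: str) -> bool:
    """Treat a sign-in from an unseen country/device combination as anomalous."""
    return (country, device_id) not in baseline[user]

if __name__ == "__main__":
    record_signin("alice", "US", "laptop-01")
    print(is_anomalous("alice", "US", "laptop-01"))   # False: matches baseline
    print(is_anomalous("alice", "RO", "unknown-99"))  # True: would trigger a review or block
```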
Similarly, investing time and money into teaching employees how LLMs and other AI-driven tools work can help reinforce security best practices and reduce the risk of negligent use.
The Bottom Line
Generative AI unlocks new opportunities for employee productivity, but organizations need to prioritize investing time and money into educating employees about the risks of misuse to lower the potential for data leakage.
Other measures that organizations are adopting to decrease risk in remote working environments include antivirus software, virtual private networks (VPNs), security information and event management (SIEM), biometrics, multi-factor authentication, and passkeys.
The key challenge now is to stop using these tools as a disparate patchwork and instead invest in centralized solutions that let security teams manage each component as part of a unified whole.