World’s First Cybersecurity & AI Guidelines: Experts Weigh in

KEY TAKEAWAYS

Cybersecurity and AI guidelines led by the UK and the US and endorsed by 18 countries signal a commitment to secure AI innovation. What do industry leaders think?

In a groundbreaking move, the United Kingdom has taken the lead in strengthening cybersecurity for artificial intelligence (AI) systems by releasing the world’s first comprehensive guidelines for AI development.

Developed by the National Cyber Security Centre (NCSC) in collaboration with the US Cybersecurity and Infrastructure Security Agency (CISA) and more than 20 international partners, these guidelines have garnered endorsements from 18 countries, underscoring a collective commitment to secure AI innovation.

We explored the guidelines in depth here, including their key features and impacts. Now we turn to cybersecurity and AI researchers to find out what the experts think.

Speaking to Techopedia, Nic Chavez, Field Chief Information Officer at DataStax, noted that one of the important takeaways is the cautious and collaborative approach the UK employed to develop the guidelines.

“I think it’s important to recognize the caution and collaboration with which NCSC approached this endeavor. By seeking feedback from the international community, including other NATO nations, NCSC was able to triangulate recommendations that were reasonable, swiftly actionable and strong.”

In his reaction, Jeff Schwartzentruber, Senior Machine Learning Scientist at eSentire and Industry Research Fellow at Toronto Metropolitan University, told Techopedia that releasing these AI guidelines is a step in the right direction as it will help to expand international cooperation and accelerate commitments on the regulation and appropriate use of AI technologies.

“I see this as a positive step forward in terms of expanding the international cooperation and discourse on the regulation and appropriate use of AI technologies. As such, this initial document really speaks to the geopolitical landscape of AI advancement and its effect on national security intelligence.”

According to Gharib Gharibi, Director of Applied Research and head of AI and Privacy at TripleBlind, the absence of China might limit the guidelines' adoption. He also fears that they lack a technical implementation blueprint, leaving room for varied interpretations and applications.


In his words:

“The broad nature of the guidelines, while comprehensive and defines a clear scope of AI security, might lead to varied interpretations and applications across different organizations and countries without deeper technical details. This, along with the absence of major AI stakeholders, like China, might limit the universal applicability and effectiveness of these guidelines.”

Could there be Risks for the UK’s AI Development?

While this move might put guardrails across the AI development ecosystem, some experts believe it carries some implications for the future advancement of AI.

With the UK at the forefront of these new guidelines, some fear that tighter regulations on developing and deploying AI systems across the country will soon follow.

CEO and quantum expert at Entanglement Inc., Jason Turner, argues that, ideally, this development should not affect the UK. However, he noted that:

“If the guidelines are developed based on fear, it is sometimes overreaching. If that is the case, then this could definitely impede the much-needed innovation in this sector and affect the UK’s global competitiveness.”

Schwartzentruber argues that while regulations may be inevitable following the release of these guidelines, over-regulation might hamper AI development in the UK.

“There is a valid risk considering that over-regulation can significantly stall or halt AI advancement, especially within the commercial enterprise. However, I consider this risk minimal to the potential upside of such partnerships, where increased research collaborations, knowledge sharing and thought leadership will act as a catalyst that further advances AI for all participating nations.”

While that stance may be popular among some experts, Chavez holds a contrary opinion. He believes the guidelines will not, in any way, harm the UK's drive to become one of the world's leading hubs for AI development.

“These guidelines absolutely do not put the UK at risk. Much to the contrary, security is becoming an increasingly critical selling point, and by prioritizing it, the UK could attract partnerships and collaborations seeking secure and reliable AI solutions. The guidelines could position the UK as a hub for secure AI innovation, where security is not a constraint but a competitive advantage.”

During the launch event in London, attended by key industry, government, and international partners, NCSC CEO Lindy Cameron emphasized the need for security.

She called for “concerted international actions across governments and agencies” to keep pace with AI development.

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyberspace will help us all to safely and confidently realize this technology’s wonderful opportunities.”

The Bottom Line

There has been a resounding cry for governments and professional bodies to look into the safety of AI development. These guidelines appear to be the first step in that direction.

While they can be a good reference point for implementing AI security, Martin Rand, Co-Founder and CEO at Pactum AI, argues that “future versions of these guidelines could do more to address the other abstract and complex problems associated with AI, like the ethical use of AI, misinformation, bias in AI models, and the impact on democracy and social systems.”

Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. His writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.