After a Turbulent Week for OpenAI, We Need to Pursue AI Safety Without Destroying Innovation


Following Sam Altman's removal from OpenAI and his subsequent move to Microsoft, we explore how governments and other authorities will need to work together to tackle the new problems that generative AI poses in a way that still allows innovators to realize its potential.

Generative AI has the potential to transform industries, but it also poses risks around bias, discrimination, the spread of misinformation, and the unpredictability of the algorithms as their capacities scale up.

The ouster of OpenAI co-founder and chief executive officer (CEO) Sam Altman may have, in part, been driven by concerns among the non-profit organization’s senior leadership about the pace of development and an emphasis on speed over safety.

The White House’s recent Executive Order aims to address some of the issues around safety by directing US government agencies to set guidelines for using artificial intelligence (AI) algorithms.

The order stated:

“As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.”

The UK government recently hosted an AI Safety Summit that brought together representatives from 28 countries, as the UK aims to become a technological leader in the sector.

“Clear, effective government policy should strike a balance between safety and cultivating innovation and growth in the UK’s technology sector.

“Lack of clarity and insufficient government incentives could hamper the development of AI technologies and discourage overseas investors. This would mean less growth for the UK economy,” stated RSM, an audit, tax, and consulting advisory firm.

RSM’s latest Real Economy Report on AI showed that 50% of middle market business leaders want government policy to strike a balance between innovation and safety, while 37% would urge the government to prioritize safety over innovation.

So, where is the balance? How can governments approach regulating a technology that is evolving rapidly in ways that humans cannot necessarily foresee?

How Should Regulators Approach AI?

“With new emerging technologies and capabilities, we do tend to either let them run until we see problems and then react and regulate,” Mike Connell, AI scientist and chief operating officer (COO) at Enthought, told Techopedia.

“That’s often not great because then the regulations are more heavy-handed. If people self-regulate, that’s better. Self-regulation, though, is not as thorough.

“On the other hand, I see regulation suffocating innovation… there should be the minimum amount of regulation needed to make things work effectively and safely.

“It’s easiest to play it at one extreme or the other, no regulation or all regulation, and there’s so much value that’s lost or so much risk that’s unleashed at those two extremes. There are better places in the middle, but they’re very hard to define and then to implement, but it is valuable.”

Some existing principles can be applied to AI, such as those in the EU’s General Data Protection Regulation (GDPR), which aims to protect consumers’ privacy by giving them the right to opt out of businesses collecting or using their personal data.

Connell added: “There are principles that will carry over, but how they get implemented obviously needs to be completely rethought.”

The public should know when they are interacting with an AI or a bot, and should know the provenance of information, whether it arrives via social media, email, or government communications.

For example, verification processes that identify humans using personal data could enable people to turn off interactions with AI, ensuring they only interact with humans.
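Such an opt-out only works if provenance is reliably labeled upstream, but the filtering itself is simple once a label exists. A minimal sketch, assuming a hypothetical `ai_generated` flag that platforms would set and verify (the field and function names here are illustrative, not any real standard):

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    body: str
    ai_generated: bool  # hypothetical provenance flag, set and verified by the platform

def human_only(feed):
    """Honor a user's opt-out by dropping messages flagged as AI-generated."""
    return [m for m in feed if not m.ai_generated]

feed = [
    Message("alice", "Lunch at noon?", ai_generated=False),
    Message("support-bot", "Your ticket was updated.", ai_generated=True),
]

print([m.sender for m in human_only(feed)])  # ['alice']
```

The hard part, as Connell notes, is not the filter but making the flag trustworthy in the first place.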

However, the implementation will require entirely new frameworks and ways of thinking.

Connell said: “How you operationalize such regulations needs to be completely rethought from the ground up. I don’t think we’ve seen anything like this before; the verifiability, the auditability—the old frameworks don’t work for that.”

While under GDPR, personal data records can be deleted from a database, there is no way to remove data from a generative AI large language model (LLM) without having to retrain it entirely – which is prohibitively expensive. A regulation that would require the removal of data would stifle innovation and result in inadequate data sets.

“We need a higher level of transparency than we’ve had in the past. Regulation should be pushing for that,” Connell said.

“We have to figure out how to make technologies that embed generative AI safe and be perceived as safe for people to want to participate. Giving that power to people to say I don’t want AI in my feed, or I don’t want to be interacting with an AI — at least until we figure this out better — can help.”

“We Need to Re-Think Copyright”

The generative AI models that have been created so far have been trained on Internet data, meaning that anyone who has ever used the Internet has participated without providing consent.

“That changes the game, and we need to think very hard about that… We need to start rethinking outmoded frameworks like IP, especially copyright,” Connell said. “Copyright is completely broken when it comes to generative AI” because neural networks learn from data like artists learn from studying the works of those who came before them.

“People are thinking about how we regulate this thing so it doesn’t put actors and writers out of business or so it doesn’t violate copyright, but let’s look beyond that incremental issue for the larger set of issues,” Connell said.

“All intellectual property needs to be rethought because there’s also human-AI collaboration. We need to think about building these regulatory frameworks and their implementation not based on a narrow view of trying to protect what we have today — but thinking about how the affordances fundamentally need to change how we operate.”

AI technologies and applications are complex to categorize, and creating legislation around categories inevitably leaves gray areas.

That can inhibit innovation if those at the forefront of development become concerned about the uncertainty, or find themselves subsumed into inappropriate categories and held to requirements that are not relevant.

The Role of Public-Private Partnerships

“Generative AI is different; it’s not really a technology in the sense that we’re used to thinking about it. It’s incredibly powerful, and we’re just getting started with it, but we’ve crossed a threshold where we’re going to start seeing even more powerful things that it can be used to scaffold,” Connell noted.

“It’s more like an artifact that you discover, and you have to study it to see what it does, how it behaves, what it’s capable of — and then ideally, we can figure out how to put it into engineered systems where it behaves predictably to do what we want.

“We need to think about it at the level of the individual—rights, data, privacy—but also at the systems level—public-private relationships and how industries will be affected.”

The question of whether generative AI is a public good or a publicly constructed artifact has a bearing on how public and private entities could work together to share the expense and responsibility of providing the data to train algorithms.

The startup costs of building a generative AI model are too high for most companies that could benefit from it, creating barriers to entry, even though these models use curated data sets of public information.

“Everyone wants to hoard their data, but nobody has enough data, so if we can build a culture and a set of frameworks to make it both possible and rewarding for people and organizations to pool their data and to share the expense, you could combine the curated datasets with open-source technologies and people could create their own LLMs,” Connell said.

“I’m not super excited to have the government do that, but I think open-source could be one funded by organizations including the government and for-profit companies and benefactors. This is the sort of thing we need to rethink regarding how the regulations must differ both in theory and implementation.”

The Potential for Blockchain Auditability

The misuse of generative AI to proliferate misinformation is a critical concern, and it increases the urgency of developing traceability systems that have long been needed.

“The state of social media and what it’s doing to our societies is linked to the fact that there’s no chain of custody identifying who generated this information and what happened to it before it got to me.

“Generative AI with chatbots and deep fakes just ramps up the importance of us figuring that out,” Connell said.

There is a role for blockchain technology and other forms of cryptography in encoding data to create auditability, Connell said, so that the identity of a source of information can be verified and the chain of custody can be recorded transparently. This would help to identify whether a piece of information was tampered with between the time it was created and the point at which it reaches an individual.
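The chain-of-custody idea Connell describes can be illustrated without a full blockchain: a simple hash chain, in which each record commits to the hash of its predecessor, already makes later tampering detectable. A minimal Python sketch using only the standard library (the record fields and function names are illustrative, not any real provenance standard):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_record(chain: list, source: str, content: str) -> list:
    """Append a custody record that commits to the previous record's hash."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"source": source, "content": content, "prev_hash": prev})
    return chain

def verify_chain(chain: list) -> bool:
    """Return True only if every record still matches its successor's commitment."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

chain = []
append_record(chain, "newsroom", "Original article text")
append_record(chain, "aggregator", "Republished unchanged")
assert verify_chain(chain)

# Altering an earlier record breaks every later commitment.
chain[0]["content"] = "Altered article text"
assert not verify_chain(chain)
```

A production system would add digital signatures so the source identity itself is verifiable, which is where the blockchain and cryptography tooling Connell mentions comes in; the hash chain alone only proves that the records have not changed since they were linked.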

For economic and security reasons, democratizing access should be a key consideration, albeit with the acknowledgment that it is a double-edged sword that also gives bad actors access.

The Importance of Global Cooperation

The White House noted in the executive order that it will work with allies abroad to develop a robust international framework to govern the development and use of AI and has already consulted with several other countries in recent months.

Given the global nature of the Internet and the accessibility of generative AI tools, regulation is likely to be more effective if applied across borders.

“I’m not a huge fan of heavy-handed regulation, but in this case, to the extent that we can come up with global principles and implement them, the broader that we can have enforced standards, the better off we will be,” Connell said.

“It’s just a question of whether it’s the Tower of Babel problem — given the different values and economic structures and the implications of any given principle — anything with local scope could behave differently.”

Connell noted that there is also likely to be pushback on any comprehensive regulation—with good reason. “We should have a healthy discussion here about whether we are regulating too early or too much.”

If some countries implement more aggressive regulation while others are more lax, there is a risk that the more permissive jurisdictions will advance faster and become dominant in creating technologies and services, which the more regulated areas will end up importing rather than developing and profiting from their own.

The Bottom Line

The potential benefits and risks of generative AI call for a new approach to regulatory frameworks, one that builds on the foundations already in place.

Governments and other authorities will need to work together to tackle the new problems and issues that generative AI poses in a way that still allows innovators to realize its potential.



Nicole Willing
Technology Journalist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries. She has developed expertise in covering commodity, equity, and cryptocurrency markets, as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.