The New York Times lawsuit against OpenAI for copyright infringement cast a shadow over the industry at the end of 2023. The case has raised serious questions about the legal and moral implications of training artificial intelligence (AI) models on intellectual property.
In response to this case, Techopedia contacted Nikki Pope, Head of AI and Legal Ethics at Nvidia, to find out what she sees as the key legal trends to watch following the NYT lawsuit.
This includes a brief look at some of the top risks AI presents, how enterprises can respond to the uncertainty over copyright, including best practices to mitigate exposure, and a potential “existential risk” posed by the technology.
About Nikki Pope
Nikki Pope is an attorney, educator, and award-winning author. Her career spans decades in advertising, product management, marketing, securities law, legal education, filmmaking, criminal justice advocacy, and tech ethics.
She currently leads the Trustworthy AI Initiative at Nvidia. Prior to joining Nvidia, Nikki was the managing director of the High Tech Law Institute at Santa Clara University School of Law.
Before HTLI, she was a corporate attorney at Cooley LLP and a trial attorney at the US Department of Justice. Nikki has held leadership roles at large corporations, including American Express, Comcast, and J. Walter Thompson, and has worked with a number of tech startups as an advisor and an employee.
Key Takeaways
- Nikki Pope from Nvidia discusses key ethical AI trends, including risks such as defamation through AI-generated false information and potential issues related to Equal Employment Opportunity Commission regulations.
- The New York Times lawsuit against OpenAI is a live demonstration of the concerns about the legal and moral implications of training AI models on intellectual property.
- We also discuss the Federal Trade Commission’s involvement in AI regulation and the importance of addressing biases in AI models.
- Pope emphasizes the need for AI accessibility in various languages and communities and advocates for responsible AI practices and safety protocols.
AI Ethics and the New York Times vs. OpenAI Lawsuit
Q: What do you see as the biggest legal risks surrounding Large Language Models (LLMs) in 2024 following the New York Times lawsuit?
A: It’s important to remember that existing laws on intellectual property, product liability, data privacy, and other areas also apply to AI. It will be interesting to see how plaintiffs argue their position within the context of the applicable existing law.
The New York Times complaint filed against Microsoft and OpenAI alleging copyright infringement is an example of this: the Times alleges that the defendants illegally used its content to train AI models, and that those models now compete with the Times by creating new content and producing content that copies NYT articles.
Another potential legal risk is the creation of false information that defames an individual. In April 2023, a chatbot wrongly accused a law professor of sexual assault.
The chatbot described a trip that the law professor had not taken where the sexual assault allegedly took place and even referenced a Washington Post article that doesn’t exist.
If the law professor were to bring a case for defamation, a key question would be where the liability lies if the chatbot’s output is deemed defamatory. Is it the company that deployed the chatbot? Is it the person who input the prompt that generated the defamatory content?
Companies that use LLMs for workflow, such as drafting letters of recommendation, may run afoul of Equal Employment Opportunity Commission regulations regarding employment discrimination.
Last year, a study found that certain chatbots were biased in how they described potential workers in recommendation letters, labeling men “listeners” and “thinkers” while framing women as “beauty” and “grace.”
In 2018, Amazon stopped using an AI-powered resume review tool that showed bias against female applicants. Likewise, when companies use AI for recruiting, they must be mindful of whether their AI treats different groups adversely because of a characteristic like gender or race.
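To make “adverse treatment” concrete, here is a minimal sketch of the kind of check a hiring team might run on an AI screening tool’s outcomes: it computes selection rates by group and the adverse impact ratio associated with the widely cited four-fifths guideline. The data and function names are illustrative and not drawn from the interview.

```python
# Illustrative adverse-impact check on hypothetical screening outcomes (not real data).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; values below 0.8 are a
    common screening signal (the 'four-fifths' guideline)."""
    return min(rates.values()) / max(rates.values())

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(decisions)
print(rates)                         # {'group_a': 0.67, 'group_b': 0.33} (approx.)
print(adverse_impact_ratio(rates))   # 0.5 -> flags the disparity for review
```

A low ratio does not by itself prove discrimination, but it is the sort of signal that would prompt a closer look at how the tool treats different groups.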
The Federal Trade Commission is using its regulatory authority in consumer protection to extend its scope into AI. A recent FTC blog post advised companies that “there is no AI exemption from the laws on the books.” The post goes on to warn that companies that are not transparent about data collection may be violating consumer protection laws.
What Comes Next?
Q: How do you expect the debate around AI legal ethics will evolve over the next 12 months?
A: Trends in AI and legal ethics are likely to focus on the impact of AI on different groups and the ability of various communities to participate in the AI revolution. Companies need to identify and mitigate biases in the data used to train AI.
With over 7,000 spoken or signed languages in the world, AI needs to be accessible to people in many languages. This doesn’t just mean translating English into other languages; it means providing the tools that help communities develop AI in their native languages, including sign languages.
Building an AI model is expensive and requires skilled developers. The tech community must ensure that the cost of building AI models does not increase the gap between communities and groups who have access to technology and those who do not.
Training workers to use AI and build AI tools will be essential to closing the knowledge gap. It is and will continue to be important to educate the public about the benefits and potential risks of AI.
AI and Current Legal Frameworks
Q: How can enterprises that want to use LLMs address concerns over intellectual property and copyright violations?
A: Companies should comply with intellectual property laws, which are not uniform around the world. For example, the US fair use doctrine, which allows limited portions of a copyrighted work to be used as a quote or sample, does not exist in the EU.
Beyond that, a company should also consider the impact on its brand should a violation, perceived or otherwise, occur.
Q: What standardized safety protocols and best practices would you like to see to mitigate potential legal risks surrounding AI?
A: We should always start with what currently exists. At its core, AI is a product and should comply with the market safety requirements for any product in a particular category.
Self-driving cars have to meet the safety requirements for cars, and they may require additional safety protocols for detecting bicyclists and pedestrians.
Tools are being developed to help assess, measure, and mitigate bias in AI output. Having benchmarks against which companies can measure an AI’s performance would be helpful. Various safety tests, such as penetration tests and red-teaming, can help identify issues before an AI product is released.
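As an illustration of the kind of output-bias check Pope describes, the sketch below swaps different names into an identical prompt and tallies categories of descriptive words in the responses. The template, word lists, and stubbed model are assumptions for the example, not part of any benchmark she references.

```python
# Illustrative counterfactual bias check: the same prompt template is filled with
# different names, and the outputs are tallied for categories of descriptive words.
# `generate` is a stand-in for whatever model API is under test (an assumption here).

TEMPLATE = "Write a short recommendation letter for {name}, a software engineer."
DESCRIPTORS = {
    "agentic": {"driven", "expert", "leader"},
    "communal": {"warm", "pleasant", "helpful"},
}

def descriptor_counts(text):
    words = set(text.lower().replace(".", "").split())
    return {label: len(words & vocab) for label, vocab in DESCRIPTORS.items()}

def compare(generate, names):
    """Run the template for each name and tally descriptor categories in the output."""
    return {name: descriptor_counts(generate(TEMPLATE.format(name=name)))
            for name in names}

# Stubbed model so the sketch runs on its own; a real test would call the model under review.
fake_model = lambda prompt: "A driven expert and a pleasant colleague."
print(compare(fake_model, ["Alex", "Maria"]))
```

Large gaps between the tallies for otherwise identical prompts are the kind of pattern the recommendation-letter study mentioned earlier surfaced.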
Guardrails to control prompts, output, and topics can help guide an AI to avoid proscribed discussion areas. The type of test needed should depend on the level of risk associated with an AI.
For example, a recommender model for a streaming service has a lower risk profile than a recommender model for a medical procedure. Safety protocols for the latter should be more robust.
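The guardrail and risk-tiering ideas can be sketched in a few lines. The example below is a hypothetical keyword filter whose strictness depends on the deployment’s risk level; the topic lists and function are assumptions for illustration, not something Pope or Nvidia prescribes, and production guardrails typically rely on trained classifiers rather than keywords.

```python
# Minimal sketch of a topic guardrail on model output, with blocking strictness tied
# to the deployment's risk level. Topic lists and thresholds are illustrative only.

PROSCRIBED = {
    "medical_advice": {"dosage", "diagnosis", "prescribe"},
    "legal_advice": {"sue them", "liability waiver"},
}

def apply_guardrail(text, risk_level="low"):
    """Return ('allow' | 'flag' | 'block', matched_topics)."""
    lowered = text.lower()
    hits = [topic for topic, terms in PROSCRIBED.items()
            if any(term in lowered for term in terms)]
    if not hits:
        return "allow", hits
    # Higher-risk deployments (e.g., medical) block outright; lower-risk ones flag for review.
    return ("block", hits) if risk_level == "high" else ("flag", hits)

print(apply_guardrail("The typical dosage is 20 mg twice daily.", risk_level="high"))
# -> ('block', ['medical_advice'])
print(apply_guardrail("Streaming recommendation: try a documentary.", risk_level="low"))
# -> ('allow', [])
```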
A certification framework could help customers and users know when an AI has met minimum safety requirements or standards and provide a baseline for AI developers.
What Does Responsible AI Mean To You?
Q: What does responsible AI mean from your perspective? Is it fair to say the concept of ‘responsible AI’ is evolving?
A: At its core, the goal of responsible AI is to ensure that AI products and systems are designed and implemented in ways that consider their impact on individuals, communities, and society.
How we define that impact is evolving. Some in the tech industry speak of the “existential threat” of AI, referring to the possibility that humans lose control of AI and risk annihilation.
I believe an existential threat exists today in communities that do not have access to the benefits of AI, either because of barriers to accessing AI or to data that includes members of the community.
An AI that treats black kidney patients adversely compared with white kidney patients poses a potential threat to those patients’ lives. These biases and disparities based on race, gender, and other characteristics exist in AI now. The individuals and groups who interact with these AI systems would call this the real existential threat.