As artificial intelligence (AI) continues to reshape businesses, Techopedia speaks to Francesca Rossi, an IBM Fellow and a prominent advocate for Ethical AI, for a thought-provoking perspective on integrating human values at the core of AI systems.
Rossi delves into IBM’s approach to promoting AI and their strategies for enhancing intelligence, safeguarding data ownership rights, and ensuring transparency in AI technology.
Learn more about Rossi and IBM’s endeavors to address biases in AI and how ethical practices play a crucial role in fostering innovation.
About Francesca Rossi
Francesca Rossi is an IBM Fellow and AI Ethics Global Leader. She is also a GPAI expert and member of its Steering Committee, a member of the Executive Committee of the IEEE AI Ethics initiative, and co-chair of the WEF Global Future Council on AI for Humanity. She is the co-author of ACM’s TechBrief: Generative Artificial Intelligence.
Her research projects are aimed at embedding human values in AI systems.
Rossi emphasizes the significance of collaborative efforts as we move toward a future where AI is developed and — hopefully — utilized responsibly.
Q: How is IBM approaching developing and implementing ethical AI principles in its operations?
A: We have centralized internal governance, which means that even though IBM is a global company with offices everywhere in the world, our governance translates our AI ethics principles into concrete actions that are the same wherever the company operates.
We evaluate the risks of the specific AI-based solutions we deliver to our clients to understand whether they align with our principles. This is the first step in every AI ethics journey an organization can go through.
It’s a good starting point that sets the scenario about where you want to go.
For example, our principles say that AI should augment human intelligence and not replace it. That doesn’t mean no task should be automated, but we should use this technology to augment our capabilities, intelligence, creativity, and problem-solving ability.
We also believe that data belongs to whoever created and generated it rather than to those who use it. Since IBM is a B2B company, our clients are not individuals. Our clients are other companies. So when we do something for a client, we don’t reuse that client’s data, or the insights derived from it, for another client. That data remains there.
Another principle is that the technology, including AI, should be explainable and transparent. So, starting from these principles, we focused on transparency, explainability, robustness, and privacy, and now, with generative AI, responsible content generation and its societal impact.
One of the essential things we learned over the years is that when there is a problem with technology, a tech company tends to say that the problem can only be solved with more technology.
So, people say: “There is a problem with AI”, and then they say we should use AI to solve it. But the most complex part is changing the frame of mind of the people building the technology.
Until a few years ago, every programmer was used to thinking only about the technology, not the societal impact.
Teams did not know what it meant to be biased or to create discrimination through bias in the data or in the development process. So we needed to help them understand. We did this via education and design thinking sessions, helping them see what all those things mean in their everyday job.
Potential Biases in AI
Q: How is IBM addressing potential biases in AI systems, and what strategies are in place to ensure fairness and inclusivity in AI models?
A: In traditional machine learning approaches, biases can enter the pipeline and must be mitigated, because they can lead to machine learning solutions that discriminate between different groups of people. Training data may contain correlations with variables that we as human beings do not see, but that the machine learning training steps can find and use when making decisions.
Bias can enter at every step the developers take in building a system, from training to testing and everything in between, through to the final solution.
However, these machines may be biased because we humans are biased. And we don’t even know about it. We are unconsciously biased. We have so many different cognitive biases when we make decisions. So that’s why we find these biases in the training set; we might find them in every decision these developers make.
One important thing to do is to ensure teams are as diverse as possible, with different backgrounds, genders, and knowledge, so that they can discover each other’s biases when making team decisions.
Q: Can you share examples of how IBM’s ethical AI practices have influenced the AI industry and even set new standards for others?
A: IBM built AI Fairness 360, which has also been donated to the Linux Foundation. It is open source, contains many tools for detecting and mitigating bias, and is available to everybody who wants to build ethical AI. We have also released other toolkits openly, for explainability, fairness, robustness, and privacy.
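To make the fairness-metric idea concrete, here is a minimal sketch of one metric that toolkits such as AI Fairness 360 compute: disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. The toy loan-approval data and the group labels below are invented for illustration, and the 0.8 threshold mentioned in the comment is the widely used "four-fifths rule" convention, not an IBM-specific figure.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates (1 = favorable) between two groups.

    A value near 1.0 suggests parity; values below roughly 0.8
    (the common "four-fifths rule") are often flagged as potential
    discrimination worth investigating.
    """
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return favorable_rate(unprivileged) / favorable_rate(privileged)


# Toy loan-approval data: 1 = approved, 0 = denied (invented example).
outcomes = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
groups   = ["B", "B", "A", "A", "B", "A", "A", "B", "B", "A"]

ratio = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
print(f"disparate impact: {ratio:.2f}")  # group B approved at half the rate of group A
```

In this toy data, group A is approved 80% of the time and group B only 40%, giving a ratio of 0.50, well below the four-fifths threshold; real toolkits compute many such metrics across the pipeline and pair them with mitigation algorithms.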
There is a tendency now to release open-source solutions to address some of the risks technology creates. But it also ensures that tech is more accessible to everybody, even academia, which has fewer resources than companies.
It is essential to avoid a power imbalance between those who have the resources to build and use these tools and those who do not. Greater accessibility both accelerates the advancement of AI capabilities and accelerates our understanding and mitigation of its risks.
Societal Challenges of AI
Q: Are there any other ethical or societal challenges you foresee becoming more prominent in the next few years?
A: We know that generative AI systems struggle to recognize what is false and what is true, and so they can generate content that is not true.
These so-called hallucinations can also generate content not aligned with specific values. The fact that they can create misinformation or information that is not true is certainly something to be taken care of very carefully.
Another risk is deepfakes. Of course, these systems are very good at generating images or videos that are almost indistinguishable from the real ones. So, we must understand how to tackle those situations with policies.
The risk with AI is that it’s usable for many different purposes. Some may be very high risk, but others may be very mundane.
A low-risk use could be deciding which recommended movie to watch on Netflix; recommending which therapy to use for a patient is a very different level of risk. But both can be built from the same technique and the same foundation model.
This tells me that regulation should focus on the uses rather than upstream technology. So there are these additional challenges related to what the technology is capable of, but also the limitations and policy challenges that make this new wave of AI more complex.
Q: What should businesses do to ensure responsible advancements in deploying generative AI?
A: One thing that IBM recently released is a new platform called watsonx, which helps our clients build AI solutions. This platform has three components.
The first is called watsonx.data, which focuses on data, essential to every machine-learning approach.
The second component is called watsonx.ai, where you build the machine learning approach and the solution.
Finally, the third component is called watsonx.governance, where we put everything we have learned about AI ethics.
The governance must be as important as the other two pieces. It should not be an additional thing to be done that is “nice to have”. It must be an integral part, just like the data and just like building the AI solution.
Another thing that businesses should consider is that, sometimes, they see AI ethics as slowing them down from making money. But this is very short-sighted. It should not be slowing down but accelerating the path to the right kind of innovation.
Innovation has to be responsible. Otherwise, it’s going to generate all sorts of chaos and damage to many actors in society.
So, AI ethics does not go in a different direction from profit but goes in the same direction to get profit from the right kind of innovation.
We have seen from recent studies that companies that embrace that attitude are more successful than those that insist on shortcuts.
Q: How should organizations and individuals stay informed about the ethical implications of rapidly evolving AI technologies?
A: While we develop many educational materials internally at IBM, we also work externally. We work with Coursera to develop modules for educating users about everything from AI ethics to explainability.
Recently, in watsonx, we included a risk atlas. This information is transformed into an online table, covering the risks related to content generation, that clients can search on the web.
It’s essential to raise awareness because many things go poorly when AI is not used appropriately or without awareness of its current limitations and risks.
Q: What makes you hopeful about the future?
A: We have to work together. In the last ten years, I’ve seen how we have gone from a small group of people thinking about what AI ethics means to a place where it’s everywhere. But for the future, we need to work together, and it needs to be intentional.
In a passive approach, we wait for the problems and then make some patches. But we must be proactive, intentional, and collaborative.
This open-source tendency, this open innovation ecosystem, is significant for everybody. Everyone, not just those with enough resources to participate in the AI revolution, should be able to bring their talent to advance AI and its responsible use.
Companies have an essential role to play, but also governments, standard bodies, academia, and educators for the next generation that will build AI systems and use them.
But also, each of us, in our own life, needs to think carefully about using technology most responsibly and think about the implications of technology in our lives.