4 Principles of Responsible Artificial Intelligence Systems

KEY TAKEAWAYS

The need for Responsible AI is undisputed, but implementing it is challenging.

As AI becomes all-pervading, AI systems need to be more transparent about how they arrive at their decisions. Without a standard governance framework, however, the task of supporting explainable AI is not easy. (Also read: Why Does Explainable AI Matter Anyway?)

Recently, Techopedia brought together the following leaders to discuss how and why organizations are adopting Responsible AI as a governance framework:

Anthony Habayeb, co-founder and CEO of Monitaur.
Andrew Pery, AI ethics evangelist.

The panel discussion produced some great talking points that you can use to inspire discussions about AI governance in your organization. They include the ideas that:

  • Stakeholders should stop treating Responsible AI and ethical AI as synonyms.
  • Responsible AI systems should be developed around a standardized framework.
  • Stakeholders should not expect the same Responsible AI framework to address the needs of multiple industries.
  • Organizations will need to balance competing priorities to support both corporate governance policies and Responsible AI principles.

Here is a discussion of each of these talking points in more depth:

1. Define the Scope of Responsible and Ethical AI

One concern that came up at Techopedia’s recent webinar is that Responsible AI and ethical AI are often treated as if they were the same thing. They are not, and conflating the two terms can create misunderstandings among project stakeholders.

So, what’s the difference?

According to our panelists, ethical AI focuses on aspirational values such as producing fair outcomes and recognizing the human right to keep one’s personally identifiable information (PII) private.

In contrast, Responsible AI focuses on the technological and organizational measures that enable organizations to achieve those aspirational objectives. Taken together, these two approaches are often referred to as trustworthy AI.

2. Expect to Balance Responsible AI With Established Corporate Governance Policies

Next, our experts touched on how organizations need to balance the interests of the company’s shareholders, customers, community, financiers, suppliers, government and management. This can make implementing Responsible AI systems difficult, because such a broad mix of stakeholders often has competing priorities.

That’s why it’s important for organizations to align the principles of Responsible AI with their corporate governance policies to provide the following:

  • Alignment of an AI system with the organization’s values.
  • Strategies for conflict resolution when stakeholder priorities compete.
  • Clarity and transparency for an AI model’s decision-making processes.
  • Accountability for an AI model’s decisions (a minimal audit-logging sketch follows this list).
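
To make accountability concrete, here is a minimal Python sketch of decision audit logging. It is illustrative only: the score_application placeholder and the model version label are hypothetical, and a production system would write to durable, access-controlled storage rather than a local file.

    # Minimal decision audit log: every model decision is recorded with its
    # inputs, model version and timestamp so it can be reviewed later.
    import json
    import uuid
    from datetime import datetime, timezone

    MODEL_VERSION = "credit-risk-v1.2"  # hypothetical version identifier

    def score_application(features: dict) -> float:
        # Placeholder: a real system would call the deployed model here.
        return 0.5

    def audited_decision(features: dict, log_path: str = "decision_log.jsonl") -> float:
        """Score an application and append an audit-trail record."""
        score = score_application(features)
        record = {
            "decision_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": MODEL_VERSION,
            "inputs": features,
            "score": score,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return score

    # Usage: each decision is now reconstructable after the fact.
    audited_decision({"income": 52000, "loan_amount": 15000})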

A Responsible AI system must be equipped to handle conflicts of interest between shareholders and customers. The Volkswagen incident our experts discussed is an instructive case study: when corporate leadership chose to reward shareholders at their customers’ expense, the reputational and financial fallout was severe.

It’s important that AI systems be transparent about conflicts of interest in both the corporate and government sectors. (Also read: Explainable AI Isn’t Enough; We Need Understandable AI.)

3. Debate the Ethical Issues That Affect AI Systems

An AI system, irrespective of the industry, must accommodate disparate stakeholders. An organization’s reputation and public perception can suffer when black-box AI systems are not explainable.

For example, it’s important that the AI systems used to automate loan approvals be transparent and free of demographic or socio-economic bias. Many fintech institutions use AI to evaluate applications for loans or mortgages. However, when an AI system is trained only on historical data, it can end up disproportionately rejecting applicants from demographic groups whose Fair Isaac Corporation (FICO) credit scores have been low in the past.
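
One lightweight way to surface this kind of bias is to compare approval rates across demographic groups, a demographic-parity check. The Python sketch below uses synthetic, purely illustrative decision records; a real review would run against the institution’s actual decision logs and apply more rigorous fairness metrics.

    # Demographic-parity check over (synthetic) historical loan decisions.
    from collections import defaultdict

    decisions = [  # illustrative records only
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += d["approved"]  # True counts as 1

    rates = {g: approvals[g] / totals[g] for g in totals}
    print("Approval rates by group:", rates)

    # A large gap is a signal (not proof) that the model may be
    # reproducing historical bias and warrants closer investigation.
    gap = max(rates.values()) - min(rates.values())
    print(f"Parity gap: {gap:.2f}")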

The ecological and environmental impact of AI systems must also be discussed. Some research shows that training a single AI system can emit as much as 150,000 pounds of carbon dioxide. When choosing a governance framework for Responsible AI, it’s important for organizations to balance AI development against its impact on the environment.

Lastly, don’t forget security! Corporate deep neural networks are often trained with proprietary data as well as huge volumes of data scraped from the internet. The proprietary data can be a goldmine for hackers, so it’s important to discuss how your AI system will be protected from malicious actors. (Also read: AI in Cybersecurity: The Future of Hacking is Here.)

4. Follow a Mature Framework for Responsible AI

Organizations such as the Institute of Electrical and Electronics Engineers (IEEE), the European Commission and the Partnership on AI have already developed frameworks for building and maintaining Responsible AI systems. These frameworks are based on principles such as the following:

  • Objective and quantifiable parameters: For example, an AI medical system should be able to accurately diagnose patients’ conditions and recommend appropriate treatments without regard to billing considerations.
  • Fairness: AI systems must apply the same evaluation, assessment and judgment parameters regardless of the scenario or the person involved. For example, applicant tracking systems that use AI to evaluate employment applications must apply the same criteria to all applicants, irrespective of race, gender or age.
  • Privacy and safety: AI systems must stringently safeguard confidential data. For example, medical AI systems must protect patient data to prevent patients from falling victim to scams (a minimal redaction sketch follows this list).
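
As one illustration of the privacy principle, here is a minimal Python sketch that masks obvious PII fields in a patient record before it reaches an AI system. The field names are hypothetical; real systems would combine field-level rules with pattern-based detection, encryption and strict access controls.

    # Redact known PII fields from a record before model ingestion.
    PII_FIELDS = {"name", "ssn", "phone", "email", "address", "date_of_birth"}

    def redact_record(record: dict) -> dict:
        """Return a copy of the record with PII fields masked."""
        return {
            key: "[REDACTED]" if key in PII_FIELDS else value
            for key, value in record.items()
        }

    patient = {  # hypothetical record
        "name": "Jane Doe",
        "ssn": "123-45-6789",
        "diagnosis_code": "E11.9",
        "lab_glucose_mg_dl": 182,
    }
    print(redact_record(patient))
    # -> {'name': '[REDACTED]', 'ssn': '[REDACTED]',
    #     'diagnosis_code': 'E11.9', 'lab_glucose_mg_dl': 182}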

Conclusion

The importance of Responsible AI is beyond debate, but ensuring that all AI systems are transparent and explainable is not an easy task. The more complex a deep learning model becomes, the harder it is to understand how its decisions are made.

Responsible AI frameworks are still nascent, but they are developing quickly in response to real-world problems. Our experts predict that AI frameworks for ensuring confidentiality, fairness and transparency will soon be common across every industry. (Also read: Experts Share 5 AI Predictions for 2023.)


Kaushik Pal
Technology Specialist

Kaushik is a Technical Architect and Software Consultant with over 23 years of experience in software analysis, development, architecture, design, testing and training. He has an interest in new technologies and areas of innovation. He focuses on web architecture, web technologies, Java/J2EE, open-source software, WebRTC, big data and semantic technologies. He has demonstrated expertise in requirements analysis, architecture design and implementation, technical use cases and software development. His experience spans industries including insurance, banking, airlines, shipping, document management and product development. He has worked on a wide range of technologies ranging from large scale (IBM S/390),…