This forecast has broad socio-economic implications because, for businesses, AI is transformative: according to a recent McKinsey study, organizations implementing AI-based applications are expected to increase cash flow by 120% by 2030.
But implementing AI comes with unique challenges. For consumers, for example, AI can amplify and perpetuate pre-existing biases, and do so at scale. Cathy O’Neil, a leading advocate for algorithmic fairness, highlighted three adverse impacts of AI on consumers:
- Opacity. AI is a black box to many consumers: Most lack insight into how it works.
- Scale. AI often produces biased outcomes that may be replicated across a wider class of protected groups.
- Damage. Consumers harmed by AI’s biased outcomes still lack a reliably effective remedy for seeking damages.
In fact, a Pew Research Center survey found that 58% of Americans believe AI programs amplify some level of bias, revealing an undercurrent of skepticism about AI’s trustworthiness. Concerns about AI fairness cut across facial recognition, criminal justice, hiring practices and loan approvals, where AI algorithms have been shown to produce adverse outcomes that disproportionately impact marginalized groups.
But what can be deemed fair? Fairness, after all, is the foundation of trustworthy AI. For businesses, that is the million-dollar question.
Defining AI Fairness
AI’s rapid growth highlights the importance of balancing its utility against the fairness of its outcomes, thereby creating a culture of trustworthy AI.
Intuitively, fairness seems like a simple concept: it is closely related to fair play, where everybody is treated in a similar way. In practice, however, fairness has several dimensions, such as trade-offs between algorithmic accuracy and human values, demographic parity versus policy outcomes, and fundamental, power-focused questions such as who gets to decide what is fair.
There are five challenges associated with contextualizing and applying fairness in AI systems:
1. Fairness may be influenced by cultural, sociological, economic and legal boundaries.
In other words, what may be considered “fair” in one culture may be perceived as “unfair” in another.
For instance, in the legal context, fairness means due process and the rule of law by which disputes are resolved with a degree of certainty. Fairness, in this context, is not necessarily about decision outcomes—but about the process by which decision-makers reach those outcomes (and how closely that process adheres to accepted legal standards).
There are, however, other instances where “corrective fairness” is necessary. For example, to remedy discriminatory practices in lending, housing, education and employment, fairness is less about treating everyone equally and more about affirmative action. Even assembling the team that deploys an AI system can raise questions of fairness and diversity. (Also read: 5 Crucial Skills That Are Needed For Successful AI Deployments.)
2. Fairness and equality aren’t necessarily the same thing.
Equality is considered a fundamental human right: no one should be discriminated against on the basis of race, gender, nationality, disability or sexual orientation. While the law protects against disparate treatment (when individuals in a protected class are intentionally treated differently), AI algorithms may still produce disparate impact: variables that appear neutral on their face can cause unintentional discrimination.
To illustrate how disparate impact occurs, consider Amazon’s same-day delivery service. It is based on an AI algorithm that uses attributes such as distance to the nearest fulfillment center, local demand in designated ZIP code areas and the frequency distribution of Prime members to determine profitable locations for free same-day delivery. The service was found to be biased against people of color even though race was not a factor in the algorithm. How? The algorithm was less likely to deem ZIP codes predominantly occupied by people of color as advantageous locations to offer the service. (Also read: Can AI Have Biases?)
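A common way to test for disparate impact of this kind is the “four-fifths rule” used in U.S. employment law: if one group’s favorable-outcome rate is less than 80% of another’s, the outcome is commonly treated as evidence of disparate impact. The sketch below applies that test to fabricated numbers loosely modeled on the delivery example; the function names and data are illustrative, not Amazon’s actual figures.

```python
# Hypothetical check for disparate impact using the four-fifths rule.
# All numbers below are fabricated for illustration.

def selection_rate(outcomes):
    """Fraction of cases that received the favorable outcome (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly read as evidence of disparate impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Whether same-day delivery was offered per ZIP code, split by the
# predominant demographic of the area (made-up counts).
majority_white_zips = [True] * 90 + [False] * 10      # 90% offered
majority_minority_zips = [True] * 45 + [False] * 55   # 45% offered

ratio = disparate_impact_ratio(majority_white_zips, majority_minority_zips)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.45 / 0.90 = 0.50
if ratio < 0.8:
    print("Fails the four-fifths rule: investigate the model's inputs.")
```

Note that race never appears in the computation; the disparity emerges entirely from facially neutral attributes, which is exactly what makes disparate impact hard to spot without an explicit audit.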
3. Group fairness and individual fairness call for different strategies.
Group fairness aims to ensure AI algorithmic outcomes do not discriminate against members of protected groups based on attributes such as demographics, gender or race. For example, in the context of credit applications, everyone ought to have an equal probability of being assigned a good credit score regardless of demographic variables, a condition known as demographic (statistical) parity.
On the other hand, AI algorithms focused on individual fairness strive to create outcomes which are consistent for individuals with similar attributes. Put differently, the model ought to treat similar cases in a similar way.
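The two notions can be checked with different measurements, as this sketch on a fabricated set of credit decisions shows; the group labels, attributes and decisions are all hypothetical.

```python
# Contrasting group fairness and individual fairness on made-up
# credit-scoring decisions. Each applicant: (group, income_band, approved).
applicants = [
    ("group_a", "high", True),  ("group_a", "high", True),
    ("group_a", "low",  False), ("group_a", "low",  True),
    ("group_b", "high", True),  ("group_b", "high", False),
    ("group_b", "low",  False), ("group_b", "low",  False),
]

# Group fairness (demographic parity): approval rates should match across groups.
def approval_rate(group):
    decisions = [approved for g, _, approved in applicants if g == group]
    return sum(decisions) / len(decisions)

parity_gap = abs(approval_rate("group_a") - approval_rate("group_b"))
print(f"Demographic parity gap: {parity_gap:.2f}")  # 0.75 - 0.25 = 0.50

# Individual fairness: applicants with the same attributes (here, income band)
# should receive the same decision, regardless of group.
def consistent(income_band):
    decisions = {approved for _, band, approved in applicants if band == income_band}
    return len(decisions) == 1  # True only if all similar applicants were treated alike

for band in ("high", "low"):
    print(f"Consistent decisions for {band}-income applicants:", consistent(band))
```

The example illustrates why the two call for different strategies: this model fails both tests, but closing the parity gap (a group-level adjustment) would not by itself make similar individuals receive similar decisions.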
4. Statistical parity must be balanced with fairness outcomes.
In this context, fairness encompasses policy and legal considerations and leads us to ask, “What exactly is fair?”
For example, in the context of hiring practices, what ought to be a fair percentage of women in management positions? In other words, what percentage should AI algorithms incorporate as thresholds to promote gender parity? (Also read: How Technology Is Helping Companies Achieve Their DEI Goals in 2022.)
5. Fairness implicates issues of power.
Before we can decide what is fair, we need to decide who gets to decide that. And, as it stands, the definition of fairness is simply what those already in power need it to be to maintain that power.
Responsible Data Science and Trustworthy AI
Because there are many interpretations of fairness, data scientists need to incorporate fairness constraints in the context of specific use cases and desired outcomes. Responsible Data Science (RDS) is a discipline that shapes best practices for trustworthy AI and facilitates AI fairness.
RDS delivers a robust framework for the ethical design of AI systems that addresses the following key areas:
- Unbiased outcomes, through the application of appropriate fairness constraints to the training data.
- Algorithmic outcomes interpreted in a manner that is meaningful to end users.
- Resilience, so AI systems deliver accurate results and respond to changes in inputs.
- Accountability for the system’s outcomes.
- Confidentiality of training data, safeguarded through privacy-enhancing measures.
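To make the first item concrete, one well-known way to apply a fairness constraint to training data is “reweighing” (Kamiran and Calders): each (group, label) combination is weighted so that group membership becomes statistically independent of the label before a model is trained. The sketch below is one illustrative technique among several, not a method prescribed by RDS, and the training set is fabricated.

```python
# Reweighing: weight = P(group) * P(label) / P(group, label), so that a
# weight-aware learner sees group and label as independent.
from collections import Counter

def reweighing_weights(rows):
    """rows: list of (group, label) pairs. Returns {(group, label): weight}."""
    n = len(rows)
    group_counts = Counter(g for g, _ in rows)
    label_counts = Counter(y for _, y in rows)
    joint_counts = Counter(rows)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

# A skewed training set: group_a carries the favorable label far more often.
training = ([("group_a", 1)] * 40 + [("group_a", 0)] * 10
            + [("group_b", 1)] * 10 + [("group_b", 0)] * 40)

weights = reweighing_weights(training)
for key, w in sorted(weights.items()):
    print(key, round(w, 3))
# Under-represented combinations such as ("group_b", 1) get weights above 1,
# so the learner treats them as more important during training.
```

The appeal of this approach is that it changes only sample weights, not labels or features, which keeps the intervention auditable: the weights themselves document exactly how the training distribution was adjusted.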
Trust-Aware Process Mining to Ensure AI Fairness
While RDS provides the foundation for ethical AI design, organizations must also examine how such complex fairness considerations are implemented and, when necessary, remedied. Doing so will help them mitigate potential compliance and reputational risks, particularly as momentum for AI regulation accelerates.
Conformance obligations under AI regulatory frameworks are inherently fragmented, spanning data governance, conformance testing, quality assurance of AI model behavior, transparency, accountability and confidentiality processes. These processes involve multiple steps across disparate systems, hand-offs, re-work and human-in-the-loop oversight among multiple stakeholders: IT, legal, compliance, security and customer service teams.
Process mining is a rapidly growing data science discipline that provides a data-driven approach to discovering how existing AI compliance processes actually work across diverse participants and disparate systems of record. It supports in-depth analysis of current processes, identifies variances and bottlenecks, and surfaces areas for process optimization.
Who has a stake in complying with AI regulations?
- R&D teams, who are responsible for the development, integration, deployment and support of AI systems, including data governance and the implementation of appropriate algorithmic fairness constraints.
- Legal and compliance teams, who are responsible for instituting best practices and processes to ensure adherence to AI accountability and transparency provisions.
- Customer-facing functions, who provide clarity for customers and consumers regarding expected AI system inputs and outputs.
How does trust-aware process mining help organizations fulfill AI compliance processes and mitigate risks?
- By visualizing compliance process execution tasks relating to AI training data, such as gathering, labeling, applying fairness constraints and data governance.
- By discovering record-keeping and documentation steps associated with data governance and identifying potential root causes of improper AI system execution.
- By analyzing AI transparency processes to ensure they accurately interpret AI system outputs and give users clear enough information to trust the results.
- By examining human-in-the-loop interactions and the actions taken when anomalies occur in AI systems’ performance.
- By monitoring in real time to identify processes deviating from requirements and trigger alerts on non-compliant process tasks or condition changes.
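At its core, process mining reconstructs each case’s trace from an event log, groups identical traces into variants, and checks them against required steps. The sketch below shows that core loop on a tiny fabricated compliance log; the case IDs, activity names and the required fairness-constraint step are all hypothetical.

```python
# Minimal process-mining-style analysis of a compliance event log.
# Event names and case IDs are hypothetical.
from collections import Counter

# Event log entries: (case_id, timestamp, activity)
event_log = [
    ("case-1", 1, "gather_data"), ("case-1", 2, "label_data"),
    ("case-1", 3, "apply_fairness_constraints"), ("case-1", 4, "deploy"),
    ("case-2", 1, "gather_data"), ("case-2", 2, "label_data"),
    ("case-2", 3, "deploy"),  # skipped the fairness-constraint step
    ("case-3", 1, "gather_data"), ("case-3", 2, "label_data"),
    ("case-3", 3, "apply_fairness_constraints"), ("case-3", 4, "deploy"),
]

# Reconstruct each case's trace by ordering its events on timestamp.
traces = {}
for case_id, ts, activity in sorted(event_log, key=lambda e: (e[0], e[1])):
    traces.setdefault(case_id, []).append(activity)

# Discover variants: distinct activity orderings and their frequencies.
variants = Counter(tuple(trace) for trace in traces.values())
for variant, count in variants.most_common():
    print(count, "->".join(variant))

# Conformance check: flag cases that never applied fairness constraints.
non_compliant = [c for c, t in traces.items()
                 if "apply_fairness_constraints" not in t]
print("Non-compliant cases:", non_compliant)  # ['case-2']
```

Real deployments run this kind of discovery and conformance checking continuously over logs drawn from many systems of record, but the variant-plus-conformance pattern is the same.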
Trust-aware process mining can be an important tool to support the development of rigorous AI compliance best practices that guard against unfair AI outcomes.
That’s important—because AI adoption will largely depend on developing a culture of trustworthy AI. A Capgemini Research Institute study reinforces the importance of establishing consumer confidence in AI: Nearly 50% of survey respondents have experienced what they perceive as unfair outcomes relating to the use of AI systems, 73% expect improved transparency and 76% believe in the importance of AI regulation.
At the same time, effective AI governance increases brand loyalty and repeat business. Instituting trustworthy AI best practices and governance is good business: it engenders confidence and sustainable competitive advantage.
Author and trust expert Rachel Botsman said it best when she described trust as, “the remarkable force that pulls you over that gap between certainty and uncertainty; the bridge between the known and the unknown.”