AI & Understanding — Part 6: “Fairness Is Not Neutral: Who Decides What ‘Fair’ Means?”

We often ask whether AI systems are fair.


But fairness is not a technical setting.


It is a decision.


And behind every definition of fairness is a set of values — often unspoken, often embedded quietly into systems that appear objective.


In the Age of Understanding, the question is no longer: Is this system fair?


It is: Fair according to whom?

The Illusion of Objective Fairness


In everyday language, fairness feels intuitive.


We assume it means:


• Equal treatment
• Equal opportunity
• Equal outcomes


But in practice, these are not the same.


An AI system can be:


• Fair in accuracy
• Unfair in outcomes
• Neutral in design
• Biased in impact


And often — it cannot satisfy all definitions at once.


Fairness is not a single destination.


It is a set of competing priorities.

When Fairness Conflicts With Itself


In machine learning, there are multiple formal definitions of fairness:


• Equal accuracy across groups
• Equal false positive rates across groups
• Equal opportunity (qualified candidates have the same chance of a positive prediction, i.e. equal true positive rates)
• Demographic parity (equal selection rates across groups)


Here is the problem:


Many of these definitions are mathematically incompatible.


You cannot optimize all of them simultaneously.


So every system makes a choice — explicitly or implicitly.


And that choice reflects values.
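To make the incompatibility concrete, here is a toy sketch: one hypothetical classifier, two groups, four metrics. All the labels and predictions below are invented for illustration; the point is only that a single system can be equal on some metrics and unequal on others at the same time.

```python
# A minimal sketch, with invented data: compute four common fairness
# metrics for one hypothetical classifier across two groups.

def rates(y_true, y_pred):
    """Return (accuracy, false positive rate, true positive rate, selection rate)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    n = len(y_true)
    acc = (tp + tn) / n
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    sel = (tp + fp) / n
    return acc, fpr, tpr, sel

# Hypothetical labels (1 = true positive outcome) and predictions.
y_true_a = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred_a = [1, 1, 0, 0, 1, 0, 0, 0]
y_true_b = [1, 0, 0, 0, 1, 0, 0, 0]
y_pred_b = [1, 0, 0, 0, 0, 0, 0, 0]

for name, yt, yp in [("A", y_true_a, y_pred_a), ("B", y_true_b, y_pred_b)]:
    acc, fpr, tpr, sel = rates(yt, yp)
    print(f"group {name}: accuracy={acc:.2f} FPR={fpr:.2f} TPR={tpr:.2f} selection={sel:.2f}")
```

Here both groups get identical accuracy (0.88) and identical false positive rates (0.00), yet group B has a lower true positive rate and a lower selection rate. The system is "fair" by two definitions and "unfair" by two others, on the same predictions.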

A Simple Example (That Isn’t Simple)


Imagine an AI tool used to screen job applicants.


It predicts who is most likely to succeed in a role.


Now consider two fairness goals:


1. Equal accuracy across all groups
2. Equal hiring rates across all groups


If historical opportunity has been unequal, these goals may conflict.


• Optimizing for accuracy may reinforce past patterns
• Optimizing for equal outcomes may require adjusting predictions


So what should the system do?


There is no purely technical answer.


This is a moral decision disguised as a mathematical one.
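The tension can be sketched in a few lines, with made-up scores. If historical opportunity has skewed one group's predicted-success scores lower, a single shared cutoff (seemingly neutral) yields unequal hiring rates, and equalizing the rates requires group-specific thresholds, which is exactly the "adjusting predictions" choice above.

```python
# A minimal sketch, with invented scores. Group B's scores skew lower,
# standing in for unequal historical opportunity.

group_a = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
group_b = [0.7, 0.6, 0.5, 0.4, 0.3, 0.2]

def hire_rate(scores, threshold):
    """Fraction of applicants at or above the cutoff."""
    return sum(s >= threshold for s in scores) / len(scores)

# One shared cutoff: neutral in design, unequal in outcome.
shared = 0.65
print(hire_rate(group_a, shared))  # 0.5  (3 of 6 hired)
print(hire_rate(group_b, shared))  # ~0.17 (1 of 6 hired)

# Equal hiring rates require different cutoffs per group,
# i.e. explicitly adjusting predictions.
print(hire_rate(group_a, 0.65), hire_rate(group_b, 0.45))  # 0.5 and 0.5
```

Neither option is value-free: the shared cutoff preserves the historical pattern; the per-group cutoffs change who is predicted to succeed.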

The Hidden Power of Defaults


Most systems do not openly declare their fairness definition.
They encode it through:


• Default thresholds
• Training data
• Optimization targets
• Business objectives


Fairness becomes invisible — not because it is absent, but because it is assumed.


And what is assumed is rarely questioned.
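One way to see how an optimization target quietly encodes a fairness definition: in the invented example below, the same scores produce two different "best" thresholds depending on whether the objective is overall accuracy or the true positive rate of the worst-off group. Whoever set the objective chose a fairness definition, whether or not they named it.

```python
# A minimal sketch with invented data: same model scores, two
# optimization targets, two different "best" default thresholds.

applicants = [
    # (group, predicted score, actually succeeded)
    ("A", 0.90, 1), ("A", 0.85, 1), ("A", 0.80, 1), ("A", 0.75, 1),
    ("A", 0.30, 0), ("A", 0.25, 0), ("A", 0.20, 0), ("A", 0.15, 0),
    ("B", 0.50, 1), ("B", 0.45, 1),
    ("B", 0.60, 0), ("B", 0.55, 0), ("B", 0.52, 0), ("B", 0.10, 0),
]

def accuracy(t):
    """Overall fraction of correct predictions at threshold t."""
    return sum((s >= t) == bool(y) for _, s, y in applicants) / len(applicants)

def min_group_tpr(t):
    """True positive rate of the worst-off group at threshold t."""
    tprs = []
    for g in ("A", "B"):
        pos = [s >= t for grp, s, y in applicants if grp == g and y == 1]
        tprs.append(sum(pos) / len(pos))
    return min(tprs)

thresholds = [0.40, 0.48, 0.70]
print(max(thresholds, key=accuracy))       # 0.70: best overall accuracy
print(max(thresholds, key=min_group_tpr))  # 0.40: best for the worst-off group
```

The "default threshold" that ships is whichever one the chosen objective selected, and most users will never see the objective.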

Who Gets to Decide?


Fairness as Governance, Not Just Design


Global AI frameworks increasingly recognize this.


The OECD AI Principles emphasize fairness, accountability, and human-centered values.


The European Union Artificial Intelligence Act requires risk assessments and human oversight for high-risk AI systems.


But even with regulation, a tension remains unresolved:


Regulation can require fairness.


It cannot define it universally.


The Risk of “Technically Fair, Socially Unjust”


A system can meet formal fairness metrics and still produce outcomes that feel unjust.


Why?


Because metrics simplify reality.


They measure what is visible.


But they cannot fully capture:


• Historical inequality
• Structural barriers
• Human context
• Lived experience


Fairness, when reduced to metrics alone, risks becoming performative.

Toward Participatory Fairness
If fairness cannot be purely technical, it must be relational.


This means shifting from: Designed fairness → Participatory fairness


Where:


• Affected communities are included in system design
• Trade-offs are made visible
• Decisions are explained, not hidden
• Feedback loops are real, not symbolic


Fairness becomes something we negotiate — not something we assume.


A More Honest Question
Instead of asking:


“Is this system fair?”


We should ask:


• What definition of fairness is being used?
• What trade-offs were made?
• Who benefits from this definition?
• Who might be disadvantaged?
• Can this system be challenged or changed?


These questions move us from passive trust to active understanding.


Closing Reflection


In the Age of Information, fairness was often assumed.


In the Age of Understanding, it must be examined.


Because fairness is not neutral.


It is shaped.


And what is shaped can be reshaped.
