  • AI & Understanding — Part 6: “Fairness Is Not Neutral: Who Decides What ‘Fair’ Means?”

    We often ask whether AI systems are fair.


    But fairness is not a technical setting.


    It is a decision.


    And behind every definition of fairness is a set of values — often unspoken, often embedded quietly into systems that appear objective.


    In the Age of Understanding, the question is no longer: Is this system fair?


    It is: Fair according to whom?

    The Illusion of Objective Fairness


    In everyday language, fairness feels intuitive.


    We assume it means:


    • Equal treatment
    • Equal opportunity
    • Equal outcomes


    But in practice, these are not the same.


    An AI system can be:


    • Fair in accuracy
    • Unfair in outcomes
    • Neutral in design
    • Biased in impact


    And often — it cannot satisfy all definitions at once.


    Fairness is not a single destination.


    It is a set of competing priorities.

    When Fairness Conflicts With Itself


    In machine learning, there are multiple formal definitions of fairness:


    • Equal accuracy across groups
    • Equal false positive rates across groups
    • Equal opportunity (qualified candidates are identified at the same rate)
    • Demographic parity (equal selection rates across groups)


    Here is the problem:


    Unless groups have identical base rates, or prediction is perfect, several of these definitions are mathematically incompatible.


    You cannot optimize all of them simultaneously.
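
    To see the conflict concretely, here is a minimal sketch in Python (all numbers are invented for illustration, not any real system's data). The two groups below receive identical accuracy from the same classifier, yet their false positive rates, true positive rates, and selection rates all differ:

    ```python
    # Toy fairness audit: one classifier, evaluated per group.
    # With different base rates, equal accuracy does not imply
    # equal error rates or equal selection rates.

    def metrics(y_true, y_pred):
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        n = len(y_true)
        return {
            "accuracy": (tp + tn) / n,
            "false positive rate": fp / (fp + tn),
            "equal opportunity (TPR)": tp / (tp + fn),
            "demographic parity (selection rate)": (tp + fp) / n,
        }

    # Group A: higher historical base rate of the positive label.
    group_a = metrics([1, 1, 1, 0, 0, 1, 1, 0], [1, 1, 0, 0, 1, 1, 1, 0])

    # Group B: lower base rate, scored by the same classifier.
    group_b = metrics([0, 0, 1, 0, 0, 1, 0, 0], [0, 1, 1, 0, 0, 0, 0, 0])

    for name, m in [("A", group_a), ("B", group_b)]:
        print(f"Group {name}: " + ", ".join(f"{k}={v:.2f}" for k, v in m.items()))

    # Accuracy is 0.75 for both groups, but FPR is 0.33 vs 0.17,
    # TPR is 0.80 vs 0.50, and the selection rate is 0.62 vs 0.25.
    ```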


    So every system makes a choice — explicitly or implicitly.


    And that choice reflects values.

    A Simple Example (That Isn’t Simple)


    Imagine an AI tool used to screen job applicants.


    It predicts who is most likely to succeed in a role.


    Now consider two fairness goals:


    1. Equal accuracy across all groups
    2. Equal hiring rates across all groups


    If historical opportunity has been unequal, these goals may conflict.


    • Optimizing for accuracy may reinforce past patterns
    • Optimizing for equal outcomes may require adjusting predictions (a small sketch of this tension follows below)
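
    Here is that tension in miniature, assuming hypothetical screening scores (every number is invented). A single shared cutoff produces unequal hiring rates; equalizing hiring rates requires group-specific cutoffs, which means overriding the raw predictions:

    ```python
    # Hypothetical screening scores: group A's historical data yields
    # higher predicted scores on average.
    scores_a = [0.81, 0.74, 0.66, 0.58, 0.52, 0.41]
    scores_b = [0.69, 0.60, 0.48, 0.44, 0.37, 0.30]

    def hire_rate(scores, cutoff):
        return sum(s >= cutoff for s in scores) / len(scores)

    # Goal 1: one shared cutoff, unequal hiring rates.
    print(hire_rate(scores_a, 0.55), hire_rate(scores_b, 0.55))  # ~0.67 vs ~0.33

    # Goal 2: equal hiring rates, but only via group-specific cutoffs,
    # i.e. by adjusting predictions rather than taking them at face value.
    print(hire_rate(scores_a, 0.55), hire_rate(scores_b, 0.43))  # ~0.67 vs ~0.67
    ```

    Neither goal is value-free; the code merely makes the choice visible.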


    So what should the system do?


    There is no purely technical answer.


    This is a moral decision disguised as a mathematical one.

    The Hidden Power of Defaults


    Most systems do not openly declare their fairness definition.
    They encode it through:


    • Default thresholds
    • Training data
    • Optimization targets
    • Business objectives


    Fairness becomes invisible — not because it is absent, but because it is assumed.
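
    As a tiny, hypothetical illustration (no real system's code), an entire fairness stance can hide inside one undocumented constant:

    ```python
    # This default is a policy decision in disguise: it fixes the balance
    # between false positives and false negatives, and therefore decides
    # who bears which kind of error.
    DECISION_THRESHOLD = 0.5  # chosen by whom, and on what grounds?

    def approve(score: float) -> bool:
        return score >= DECISION_THRESHOLD
    ```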


    And what is assumed is rarely questioned.

    Who Gets to Decide?


    Fairness as Governance, Not Just Design


    Global AI frameworks increasingly recognize this.


    The OECD AI Principles emphasize fairness, accountability, and human-centered values.


    The European Union Artificial Intelligence Act requires risk assessments, transparency, and human oversight for high-risk systems.


    But even with regulation, one question remains unresolved: who decides what counts as fair?


    Regulation can require fairness.


    It cannot define it universally.


    The Risk of “Technically Fair, Socially Unjust”


    A system can meet formal fairness metrics and still produce outcomes that feel unjust.


    Why?


    Because metrics simplify reality.


    They measure what is visible.


    But they cannot fully capture:


    • Historical inequality
    • Structural barriers
    • Human context
    • Lived experience


    Fairness, when reduced to metrics alone, risks becoming performative.

    Toward Participatory Fairness


    If fairness cannot be purely technical, it must be relational.


    This means shifting from: Designed fairness → Participatory fairness


    Where:


    • Affected communities are included in system design
    • Trade-offs are made visible
    • Decisions are explained, not hidden
    • Feedback loops are real, not symbolic


    Fairness becomes something we negotiate — not something we assume.


    A More Honest Question


    Instead of asking:


    “Is this system fair?”


    We should ask:


    • What definition of fairness is being used?
    • What trade-offs were made?
    • Who benefits from this definition?
    • Who might be disadvantaged?
    • Can this system be challenged or changed?


    These questions move us from passive trust to active understanding.


    Closing Reflection


    In the Age of Information, fairness was often assumed.


    In the Age of Understanding, it must be examined.


    Because fairness is not neutral.


    It is shaped.


    And what is shaped can be reshaped.

  • AI and the Illusion of Objectivity

    Why Algorithmic Decisions Feel Neutral — Even When They’re Not

    We tend to trust numbers.


    A score feels neutral.
    A ranking feels fair.
    An algorithm feels unbiased.


    After all, machines don’t have opinions.


    Or do they?


    This article is part of AI & Understanding — an ongoing exploration of how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.


    The Seduction of Data


    Artificial intelligence systems are often described as “data-driven.” That phrase carries weight. Data implies measurement. Measurement implies precision. Precision implies fairness.


    But data does not emerge from nowhere.


    It is collected by humans.
    Labeled by humans.
    Selected by humans.
    Interpreted by humans.


    Large language models and predictive systems — whether deployed in hiring, lending, healthcare, or criminal justice — are built on historical information. And history is not neutral.


    When we say an algorithm is objective, what we often mean is that its reasoning is hidden.


    Opacity is not neutrality.


    When Bias Scales


    In 2016, investigative reporting by ProPublica revealed racial disparities in the COMPAS risk assessment tool used in U.S. courts. The algorithm, designed to predict recidivism, disproportionately flagged Black defendants as higher risk: Black defendants who did not reoffend were nearly twice as likely as white defendants to be labeled high risk (Angwin et al., 2016).


    The system did not “intend” bias.


    It reflected patterns in historical data and institutional practices.


    Similarly, researchers Joy Buolamwini and Timnit Gebru demonstrated in 2018 that commercial facial analysis systems misclassified darker-skinned women at error rates of up to 34.7 percent, compared with less than 1 percent for lighter-skinned men (Buolamwini & Gebru, 2018).


    Again, the models were trained on skewed datasets.


    Bias did not disappear in automation.
    It scaled.


    When human decisions are imperfect, harm is localized.
    When algorithmic decisions are imperfect, harm replicates.


    The Psychological Comfort of Automation


    Part of the illusion of objectivity comes from us.


    Psychologists refer to “automation bias” — the tendency to over-trust automated systems, even when they are flawed. When a decision is delivered by a machine, it can feel less emotional, less political, less personal.


    It feels clean.


    Nobel laureate Daniel Kahneman, in Thinking, Fast and Slow, describes how cognitive ease shapes judgment: fluent, well-structured information feels more reliable than it may be. Clear outputs reduce cognitive strain. Reduced strain increases perceived credibility.


    In other words:


    If it looks systematic, we assume it is fair.
    But structured output is not the same as a just outcome.


    Objectivity vs. Optimization


    Artificial intelligence systems do not pursue fairness. They pursue objectives defined in their training and design.


    They optimize for:


    Prediction accuracy
    Engagement
    Efficiency
    Risk minimization
    Profit


    Those objectives are chosen by organizations.


    Even large language models like GPT-4 are trained to generate statistically probable responses, not verified truths. As acknowledged in OpenAI’s technical documentation, these systems are probabilistic — they predict patterns in language rather than confirm reality.
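
    What "statistically probable" means here can be shown with a deliberately tiny toy (a bigram counter, nothing like a real model in scale, but the principle is the same): the system continues with whatever followed most often in its data, whether or not that is true.

    ```python
    from collections import Counter

    # A miniature "language model": continue a word with whatever most
    # often followed it in the training text. Frequency, not truth,
    # drives the prediction.
    text = "the earth is round the earth is round the earth is flat".split()

    follows = {}
    for word, nxt in zip(text, text[1:]):
        follows.setdefault(word, Counter())[nxt] += 1

    def predict_next(word):
        return follows[word].most_common(1)[0][0]

    print(predict_next("is"))  # "round": the frequent pattern, not a verified fact
    ```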


    An AI model cannot be more neutral than the goal it is given.


    If the optimization target embeds bias, the output will reflect it.
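
    One way to see this is a hedged sketch (the costs below are invented; every real deployment picks its own): the "optimal" decision threshold is fully determined by how the designer prices the two kinds of error.

    ```python
    # The "best" threshold depends on how errors are priced.
    # These prices are a design choice, not a fact about the world.
    COST_FALSE_POSITIVE = 1.0  # e.g., wrongly flagging someone
    COST_FALSE_NEGATIVE = 5.0  # e.g., missing a genuine case

    def should_flag(p):
        """Flag when the expected cost of flagging beats not flagging."""
        return (1 - p) * COST_FALSE_POSITIVE < p * COST_FALSE_NEGATIVE

    # With misses priced at five times false alarms, flagging starts
    # near p = 1/6, not at 0.5.
    print(should_flag(0.20))  # True
    print(should_flag(0.10))  # False
    ```

    Change the two constants and the same model "decides" differently. The values were never in the math; they were in the prices.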


    Governance Is a Human Question


    Recognizing this, global institutions have begun emphasizing accountability.


    The OECD AI Principles call for transparency, robustness, and human oversight. The World Economic Forum has identified algorithmic bias and AI-driven misinformation as emerging global risks.


    These are not fringe concerns.


    They are governance concerns.


    When an algorithm influences:
    Who gets hired
    Who receives credit
    Who is flagged for risk
    Who receives medical prioritization


    The question is no longer technical.


    It is ethical.


    And ethical systems require accountability.


    The Deeper Issue


    The illusion of objectivity is powerful because it relieves us of discomfort.


    If the algorithm decided, no one had to.


    Responsibility diffuses.


    But AI does not eliminate judgment.


    It relocates it:


    Into training data
    Into system design
    Into objective functions
    Into deployment decisions


    Human judgment never disappears.


    It simply becomes less visible.


    A Personal Practice


    When I encounter AI-generated analysis, scores, or summaries, I now ask:


    What data trained this?
    Who defined the objective?
    What might be missing?
    Who benefits from this output?
    Who might be harmed by it?


    These questions do not reject AI.


    They contextualize it.


    The Age of Understanding requires more than technological literacy.


    It requires structural literacy.


    Closing Thought


    An algorithm can calculate.


    It cannot deliberate.


    It can predict.


    It cannot weigh justice.


    Objectivity is not achieved by removing humans from systems.


    It is achieved by making human responsibility explicit.


    Selected References


    Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica.
    Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 77–91.
    Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
    OECD. (2019). OECD AI Principles.
    OpenAI. (2023). GPT-4 Technical Report.
    World Economic Forum. (2024). Global Risks Report 2024.