Tag: age of understanding

  • AI & Understanding — Part 6: “Fairness Is Not Neutral: Who Decides What ‘Fair’ Means?”

    We often ask whether AI systems are fair.


    But fairness is not a technical setting.


    It is a decision.


    And behind every definition of fairness is a set of values — often unspoken, often embedded quietly into systems that appear objective.


    In the Age of Understanding, the question is no longer: Is this system fair?


    It is: Fair according to whom?

    The Illusion of Objective Fairness


    In everyday language, fairness feels intuitive.


    We assume it means:


    • Equal treatment
    • Equal opportunity
    • Equal outcomes


    But in practice, these are not the same.


    An AI system can be:


    • Fair in accuracy
    • Unfair in outcomes
    • Neutral in design
    • Biased in impact


    And often — it cannot satisfy all definitions at once.


    Fairness is not a single destination.


    It is a set of competing priorities.

    When Fairness Conflicts With Itself


    In machine learning, there are multiple formal definitions of fairness:


    • Equal accuracy across groups
    • Equal false positive rates across groups
    • Equal opportunity (qualified candidates have the same chance of selection)
    • Demographic parity (equal selection rates across groups)


    Here is the problem:


    Many of these definitions are mathematically incompatible: when groups differ in their underlying base rates, satisfying one metric can make it impossible to satisfy the others.


    You cannot optimize all of them simultaneously.


    So every system makes a choice — explicitly or implicitly.


    And that choice reflects values.
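    To make the conflict concrete, here is a small Python sketch using invented toy data (the labels and predictions are purely illustrative, not from any real system). The two groups end up with identical accuracy, yet different false positive rates and different selection rates: the same predictions look fair under one definition and unfair under another.

```python
def rates(y_true, y_pred):
    """Return (accuracy, false positive rate, selection rate) for one group."""
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    fpr = sum(negatives) / len(negatives) if negatives else 0.0
    selection = sum(y_pred) / n
    return acc, fpr, selection

# Hypothetical group A: higher historical base rate of the "success" label.
a_true = [1, 1, 1, 0, 0, 1, 1, 0]
a_pred = [1, 1, 1, 0, 1, 1, 0, 0]

# Hypothetical group B: lower base rate.
b_true = [1, 0, 0, 0, 1, 0, 0, 0]
b_pred = [1, 0, 1, 0, 0, 0, 0, 0]

for name, yt, yp in [("A", a_true, a_pred), ("B", b_true, b_pred)]:
    acc, fpr, sel = rates(yt, yp)
    print(f"Group {name}: accuracy={acc:.2f}  FPR={fpr:.2f}  selection rate={sel:.2f}")

# Both groups score accuracy 0.75, yet group A has twice the false positive
# rate of group B and is selected far more often: "fair in accuracy,
# unfair in outcomes" in one small table of numbers.
```

    Which of those three columns counts as "the" fairness metric is exactly the value judgment the essay describes.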

    A Simple Example (That Isn’t Simple)


    Imagine an AI tool used to screen job applicants.


    It predicts who is most likely to succeed in a role.


    Now consider two fairness goals:


    1. Equal accuracy across all groups
    2. Equal hiring rates across all groups


    If historical opportunity has been unequal, these goals may conflict.


    • Optimizing for accuracy may reinforce past patterns
    • Optimizing for equal outcomes may require adjusting predictions


    So what should the system do?


    There is no purely technical answer.


    This is a moral decision disguised as a mathematical one.
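    The hiring trade-off above can be sketched in a few lines of Python. The scores are hypothetical and invented for illustration; the point is only that a single "neutral" cutoff yields unequal hiring rates when score distributions differ, while equal hiring rates force group-specific cutoffs.

```python
# Hypothetical screening scores. Group B's scores skew lower, e.g. because
# the model was trained on data reflecting unequal historical opportunity.
group_a = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
group_b = [0.7, 0.6, 0.5, 0.4, 0.3, 0.2]

def hire_rate(scores, threshold):
    """Fraction of applicants at or above the cutoff."""
    return sum(s >= threshold for s in scores) / len(scores)

# Goal 1: one shared threshold, "neutral in design".
shared = 0.55
print(hire_rate(group_a, shared))  # 4 of 6 hired
print(hire_rate(group_b, shared))  # 2 of 6 hired

# Goal 2: equal hiring rates, which requires different thresholds per group.
print(hire_rate(group_a, 0.55))    # 4 of 6 hired
print(hire_rate(group_b, 0.35))    # also 4 of 6 hired
```

    Neither threshold policy is "the correct one"; choosing between them is the moral decision the paragraph describes, expressed as a number.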

    The Hidden Power of Defaults


    Most systems do not openly declare their fairness definition.
    They encode it through:


    • Default thresholds
    • Training data
    • Optimization targets
    • Business objectives


    Fairness becomes invisible — not because it is absent, but because it is assumed.


    And what is assumed is rarely questioned.

    Who Gets to Decide?


    Fairness as Governance, Not Just Design


    Global AI frameworks increasingly recognize this.


    The OECD AI Principles emphasize fairness, accountability, and human-centered values.


    The European Union’s Artificial Intelligence Act requires risk assessments and human oversight for high-risk systems.


    But even with regulation, a deeper tension remains unresolved:


    Regulation can require fairness.


    It cannot define it universally.


    The Risk of “Technically Fair, Socially Unjust”


    A system can meet formal fairness metrics and still produce outcomes that feel unjust.


    Why?


    Because metrics simplify reality.


    They measure what is visible.


    But they cannot fully capture:


    • Historical inequality
    • Structural barriers
    • Human context
    • Lived experience


    Fairness, when reduced to metrics alone, risks becoming performative.

    Toward Participatory Fairness


    If fairness cannot be purely technical, it must be relational.


    This means shifting from: Designed fairness → Participatory fairness


    Where:


    • Affected communities are included in system design
    • Trade-offs are made visible
    • Decisions are explained, not hidden
    • Feedback loops are real, not symbolic


    Fairness becomes something we negotiate — not something we assume.


    A More Honest Question
    Instead of asking:


    “Is this system fair?”


    We should ask:


    • What definition of fairness is being used?
    • What trade-offs were made?
    • Who benefits from this definition?
    • Who might be disadvantaged?
    • Can this system be challenged or changed?


    These questions move us from passive trust to active understanding.


    Closing Reflection


    In the Age of Information, fairness was often assumed.


    In the Age of Understanding, it must be examined.


    Because fairness is not neutral.


    It is shaped.


    And what is shaped can be reshaped.

  • The Word I Would Ban

    If you could permanently ban a word from general usage, which one would it be? Why?

    If I could permanently ban one word from everyday language, it wouldn’t be a swear word.


    It wouldn’t be slang.


    It wouldn’t even be something obviously harmful.


    It would be:


    Should.


    Not because it’s dramatic.
    Not because it’s offensive.
    But because it is quietly corrosive.


    The Problem With “Should”


    “Should” sounds responsible.


    Mature.


    Productive.


    Adult.


    But listen closely to how it shows up:


    I should exercise more.
    I should be further along by now.
    I should have known better.
    They should act differently.
    You should be grateful.


    It sounds motivational.
    It rarely is.


    Underneath “should” is often something much less noble:


    Shame
    Comparison
    Unrealistic timelines
    Moral superiority
    Regret dressed up as logic


    “Should” doesn’t ask questions.
    It delivers verdicts.


    And verdicts rarely invite growth.


    The Quiet Pressure of a Small Word
    In cognitive psychology, “should statements” are considered a cognitive distortion. They create rigid expectations about how we and others must behave.


    Rigid expectations feel structured.
    But they are brittle.


    When we don’t meet them, we don’t gain clarity.


    We gain guilt.


    When others don’t meet them, we don’t gain curiosity.


    We gain resentment.


    “Should” sounds like discipline.


    Often, it’s just pressure wearing a respectable outfit.


    A Small Experiment
    Try this shift:


    Instead of:
    I should be better at this.

    Try:
    I want to improve at this.
    I’m disappointed in my progress.
    This matters to me.


    Notice the difference?


    One is accusation.
    The other is information.


    One tightens the chest.
    The other opens the door.


    Language shapes thought.
    Thought shapes emotion.
    Emotion shapes behavior.


    Sometimes the smallest word carries the heaviest weight.


    If “Should” Actually Worked…
    If “should” were an effective motivational tool, none of us would need alarm clocks.


    “I should wake up early.”
    “I should stop scrolling.”
    “I should drink more water.”


    And yet… here we are.


    If “should” burned calories, we’d all be Olympic athletes.


    What I’d Replace It With
    Not lower standards.
    Not apathy.
    Not indifference.


    I’d replace “should” with awareness.
    Instead of:
    I should be further ahead.
    Ask:
    According to whom?


    Instead of:
    I should handle this better.
    Ask:
    What would handling this well actually look like?


    The goal isn’t to remove responsibility.
    It’s to remove unnecessary self-punishment disguised as productivity.


    If I could ban one word, it wouldn’t be to control language.


    It would be to invite clarity.


    Because growth rarely begins with accusation.


    It begins with honesty.


    And sometimes the most powerful change you can make isn’t in your schedule…


    It’s in your vocabulary.