Tag: #Technology and Society

  • AI and the Illusion of Objectivity

    Why Algorithmic Decisions Feel Neutral — Even When They’re Not

    We tend to trust numbers.


    A score feels neutral.
    A ranking feels fair.
    An algorithm feels unbiased.


    After all, machines don’t have opinions.


    Or do they?


    This article is part of AI & Understanding — an ongoing exploration of how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.


    The Seduction of Data


    Artificial intelligence systems are often described as “data-driven.” That phrase carries weight. Data implies measurement. Measurement implies precision. Precision implies fairness.


    But data does not emerge from nowhere.


    It is collected by humans.
    Labeled by humans.
    Selected by humans.
    Interpreted by humans.


    Large language models and predictive systems — whether deployed in hiring, lending, healthcare, or criminal justice — are built on historical information. And history is not neutral.


    When we say an algorithm is objective, what we often mean is that its reasoning is hidden.


    Opacity is not neutrality.


    When Bias Scales


    In 2016, investigative reporting by ProPublica revealed racial disparities in the COMPAS risk assessment tool used in U.S. courts. The algorithm, designed to predict recidivism, disproportionately flagged Black defendants as higher risk than white defendants.


    The system did not “intend” bias.


    It reflected patterns in historical data and institutional practices.


    Similarly, researchers Joy Buolamwini and Timnit Gebru demonstrated in 2018 that commercial facial recognition systems had significantly higher error rates for darker-skinned women than for lighter-skinned men (Buolamwini & Gebru, 2018).


    Again, the models were trained on skewed datasets.


    Bias did not disappear in automation.
    It scaled.


    When human decisions are imperfect, harm is localized.
    When algorithmic decisions are imperfect, harm replicates.


    The Psychological Comfort of Automation


    Part of the illusion of objectivity comes from us.


    Psychologists refer to “automation bias” — the tendency to over-trust automated systems, even when they are flawed. When a decision is delivered by a machine, it can feel less emotional, less political, less personal.


    It feels clean.


    Nobel laureate Daniel Kahneman explains in Thinking, Fast and Slow that humans equate structured reasoning with reliability. Clear outputs reduce cognitive strain. Reduced strain increases perceived credibility.


    In other words:


    If it looks systematic, we assume it is fair.
    But structured output is not the same as just outcome.


    Objectivity vs. Optimization


    Artificial intelligence systems do not pursue fairness. They pursue objectives defined in their training and design.


    They optimize for:


    Prediction accuracy
    Engagement
    Efficiency
    Risk minimization
    Profit


    Those objectives are chosen by organizations.


    Even large language models like GPT-4 are trained to generate statistically probable responses, not verified truths. As acknowledged in OpenAI’s technical documentation, these systems are probabilistic — they predict patterns in language rather than confirm reality.


    An AI model cannot be more neutral than the goal it is given.


    If the optimization target embeds bias, the output will reflect it.
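    The point can be made with a deliberately tiny sketch. Everything here is invented for illustration (the groups, labels, and "model" are hypothetical, not any real system): a rule that optimizes only for matching historical labels will faithfully reproduce whatever skew those labels contain, because accuracy on the past is its entire objective.

```python
# Toy illustration: a "model" trained purely to match historical
# labels reproduces the skew in those labels.

def train_majority_rule(records):
    """Learn the label most often attached to each group in the data."""
    counts = {}
    for group, label in records:
        counts.setdefault(group, []).append(label)
    # For each group, predict its most frequent historical label.
    return {g: max(set(ls), key=ls.count) for g, ls in counts.items()}

# Hypothetical historical data: group B was flagged "high risk"
# more often, for reasons that live outside the data itself.
history = [("A", "low"), ("A", "low"), ("A", "high"),
           ("B", "high"), ("B", "high"), ("B", "low")]

model = train_majority_rule(history)
print(model)  # the learned rule mirrors the historical skew
```

    Nothing in this sketch "intends" bias; the objective (agree with history) simply makes the bias the optimal answer.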


    Governance Is a Human Question


    Recognizing this, global institutions have begun emphasizing accountability.


    The OECD AI Principles call for transparency, robustness, and human oversight. The World Economic Forum has identified algorithmic bias and AI-driven misinformation as emerging global risks.


    These are not fringe concerns.


    They are governance concerns.


    When an algorithm influences:
    Who gets hired
    Who receives credit
    Who is flagged for risk
    Who receives medical prioritization


    The question is no longer technical.


    It is ethical.


    And ethical systems require accountability.


    The Deeper Issue


    The illusion of objectivity is powerful because it relieves us of discomfort.


    If the algorithm decided, no one had to.


    Responsibility diffuses.


    But AI does not eliminate judgment.


    It relocates it:


    Into training data
    Into system design
    Into objective functions
    Into deployment decisions


    Human judgment never disappears.


    It simply becomes less visible.


    A Personal Practice


    When I encounter AI-generated analysis, scores, or summaries, I now ask:


    What data trained this?
    Who defined the objective?
    What might be missing?
    Who benefits from this output?
    Who might be harmed by it?


    These questions do not reject AI.


    They contextualize it.


    The Age of Understanding requires more than technological literacy.


    It requires structural literacy.


    Closing Thought


    An algorithm can calculate.


    It cannot deliberate.


    It can predict.


    It cannot weigh justice.


    Objectivity is not achieved by removing humans from systems.


    It is achieved by making human responsibility explicit.


    Selected References
    Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research.
    Angwin, J., et al. (2016). Machine Bias. ProPublica.
    Kahneman, D. (2011). Thinking, Fast and Slow.
    OpenAI. (2023). GPT-4 Technical Report. arXiv:2303.08774.
    OECD. (2019). OECD AI Principles.
    World Economic Forum. (2024). Global Risks Report.

  • When AI Is Confidently Wrong

    The Illusion of Competence

    Why We Must Ask for Sources in the Age of Large Language Models

    We are living in a time when answers arrive instantly.


    Type a question.
    Receive a paragraph.
    Polished. Structured. Persuasive.


    Tools like ChatGPT and other large language models don’t hesitate. They don’t appear uncertain.

    They rarely say, “I don’t know.”
    And that is precisely the problem.


    The Illusion of Competence


    Large language models such as GPT-4 are trained on vast datasets and designed to predict the most statistically probable next word in a sequence.

    They generate language that sounds coherent and authoritative.


    But they do not “know” facts.
    They do not verify claims.
    They do not distinguish between truth and probability.


    They generate what is likely — not what is confirmed.
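    A minimal sketch makes the mechanism concrete. This toy bigram "model" (invented here purely for illustration; real LLMs are vastly more sophisticated) emits whatever word most often followed the previous word in its training text, with no regard for whether the result is true:

```python
# Toy bigram "language model": predicts the statistically most
# likely next word, not the factually correct one.
from collections import Counter, defaultdict

corpus = ("the moon is bright . the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

sentence = ["the", "moon", "is", "made", "of"]
sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # → the moon is made of cheese
```

    The falsehood wins here simply because it appeared more often in the training text. Probability, not truth, decides the output.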


    Even OpenAI acknowledges this. In its GPT-4 Technical Report (2023), the organization notes that the model can produce incorrect information and fabricate details while presenting them fluently.


    When these systems are wrong, they are often wrong beautifully.


    Fluent error is more dangerous than obvious error.


    A typo invites skepticism.
    A polished paragraph invites trust.


    What “Confidently Wrong” Looks Like


    Researchers studying natural language generation have documented the phenomenon known as AI “hallucination” — instances where models generate plausible but false information (Ji et al., 2023).


    It can look like:
    • Fabricated academic citations
    • Incorrect statistics stated precisely
    • Invented quotes attributed to real people
    • Outdated research presented as current
    • Logical explanations built on false premises


    The tone does not change.
    The formatting does not falter.
    The confidence remains intact.


    And that creates a new cognitive risk.
    We begin outsourcing discernment.


    A Real-World Consequence


    In 2023, attorneys submitted a legal brief containing case citations generated by ChatGPT that did not exist. The case, Mata v. Avianca, Inc., resulted in sanctions from a federal judge after the fabricated cases were discovered.


    The AI had produced authoritative-sounding legal precedent.


    It simply wasn’t real.


    The risk is not theoretical.


    Why We Believe Fluent Language


    Psychologist Daniel Kahneman explains in Thinking, Fast and Slow that humans are deeply influenced by cognitive ease. Information that is clear, well-structured, and easy to process feels more true.


    Research by Reber and Schwarz (1999) further demonstrates that statements presented fluently are more likely to be judged as accurate — regardless of their factual correctness.


    We are wired to trust clarity.


    In the past, misinformation often looked chaotic.


    Now, it looks professional.


    And that changes everything.


    The Responsibility Shift


    The rise of tools like Claude, Gemini, and Copilot has democratized content production.


    But verification has not been automated.


    In fact, the responsibility has shifted:


    From publisher → to user.


    Organizations such as the OECD emphasize transparency, accountability, and human oversight in their AI principles. The World Economic Forum has identified AI-generated misinformation as a growing global risk.


    The message is consistent:


    AI is powerful.
    Human judgment remains essential.


    If you use AI:


    • Ask for sources.
    • Confirm publication dates.
    • Verify statistics through primary references.
    • Be cautious with medical, legal, or financial claims.
    • Treat outputs as drafts, not declarations.


    AI can accelerate thinking.


    It cannot replace due diligence.


    This Isn’t an Anti-AI Argument


    This is a pro-literacy argument.


    Large language models are extraordinary tools. They help synthesize ideas, structure thoughts, and explore complex themes quickly.


    But they are not epistemic authorities.


    They are probability engines.


    The Age of Understanding requires something new from us:


    Disciplined curiosity.


    Not paranoia.
    Not fear.
    Active verification.


    A Personal Practice


    Before I share anything publicly that originated from AI, I ask:
    • Where did this come from?
    • Can I find the original study?
    • Is this current?
    • Does it align with reputable institutions?


    In a world where answers are instant, credibility must be intentional.


    Selected References


    OpenAI. (2023). GPT-4 Technical Report. arXiv:2303.08774.


    Ji, Z., et al. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys.


    Kahneman, D. (2011). Thinking, Fast and Slow.


    Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth. Consciousness and Cognition.


    Mata v. Avianca, Inc. (S.D.N.Y. 2023).


    Series Note
    This article is part of AI & Understanding — an ongoing exploration of how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.

  • Why I’m Choosing to Study AI at This Stage of My Life

    I could choose comfort right now.


    I have lived through enough seasons to justify slowing down. I have raised children. I am helping raise grandchildren. I have navigated adversity, health challenges, caregiving, and the kind of life experiences that reshape a person quietly over time.


    I have earned the right to rest.


    And yet, I find myself waking up early to study artificial intelligence.


    Not because I want to become a technologist.
    Not because I am chasing relevance.
    But because I believe we are living through a pivotal shift — and I don’t want to stand at the edge of it uninformed.


    Some mornings I am reading about machine learning models while helping my grandchildren find their shoes.


    It’s an odd pairing — future systems and spilled cereal.


    But maybe that’s exactly the point.


    The future is not abstract.
    It is unfolding in ordinary kitchens.


    AI is not just another wave of technology. It is an acceleration engine. It expands capability — in productivity, in healthcare, in education, in decision-making.


    Used well, it can enhance human capacity in extraordinary ways. It can reduce friction. It can assist learning. It can support complex analysis. It can help solve problems at scale.


    But tools amplify what already exists.


    If we are thoughtful, they scale thoughtfulness.
    If we are impatient, they scale impatience.
    If our systems contain bias, they can scale that too.


    That realization is what keeps me curious — and cautious.


    I am not interested in hype or fear. I am interested in stewardship.


    Artificial intelligence can process context and relationships between ideas at remarkable speed. But it does not carry conscience. It does not wrestle with moral tension. It does not absorb decades of lived experience and turn it into wisdom.


    Humans do.


    And that difference matters.


    I believe we have been given a conscience — an internal moral compass that can be strengthened or dulled by our choices. Self-awareness sharpens it. Empathy strengthens it. Contribution to something larger than oneself refines it.


    Ego erodes it.
    Narcissism erodes it.
    Power without reflection erodes it.


    Technology does not remove moral responsibility. It increases it.


    If we are going to build tools this powerful, then we must grow our moral maturity alongside our technical capability.


    This is why I am studying AI now.


    Not to compete with it.
    Not to fear it.
    But to understand it.


    To understand how it can expand human capability without eroding conscience.


    To model adaptability for the next generation.
    To remain intellectually alive.
    To participate responsibly in the world my grandchildren will inherit.


    We are no longer merely in the Age of Information.


    Information is abundant.
    Understanding is scarce.


    Understanding requires integration — of knowledge, experience, and moral clarity.


    AI can assist with knowledge.
    It cannot replace wisdom.


    That remains our responsibility.


    I choose curiosity over fear.
    I choose responsibility over passivity.
    And I choose to keep learning — because expansion is inevitable.


    Erosion is optional.