Author: Betty Jean

  • Why Has It Taken So Long to Go Back to the Moon?


    Age of Understanding Series


    It’s a question that feels almost absurd when you first ask it.


    We went to the Moon in 1969.
    We walked on its surface.
    We returned multiple times.


    And then… we stopped.


    More than fifty years later, we’re only now preparing to go back.


    So what happened?


    Did we lose the technology?
    Did we lose the will?
    Or did something else change?

    We Didn’t Stop Because We Couldn’t


    The common assumption is that returning to the Moon should be easy.


    After all, we’ve already done it.


    But the truth is more complex—and more revealing.


    The Apollo Program was never just about exploration.


    It was about urgency.


    It was about proving something—during the Space Race—on a global stage.


    And when that goal was achieved, something quietly shifted.


    The urgency disappeared.

    When the Reason Fades, So Does the Momentum


    The Apollo missions were fueled by an extraordinary level of investment—financial, political, and cultural.
    But once the objective was met, the question became:


    Why continue?


    The answer, at the time, wasn’t compelling enough.


    Funding shifted. Priorities changed.


    Attention moved toward:


    Low Earth orbit
    The International Space Station
    Robotic exploration


    The Moon, for a time, became something we had already done.

    We Didn’t Just Pause—We Moved On


    Exploration didn’t stop.


    It expanded.


    We sent rovers to Mars.
    We built telescopes that could see back in time.
    We began to understand our own planet from space in ways we never had before.


    The Moon wasn’t abandoned.


    It was… deprioritized.

    And Then, Something Subtle Happened


    When we decided to return, we didn’t decide to repeat the past.


    We changed the question.


    Apollo asked:
    Can we get there?


    Today, through the Artemis program, we are asking something far more difficult:


    Can we stay?

    This Is Where Things Get Complicated


    Going to the Moon once is a technical achievement.


    Building a sustained presence there is something else entirely.


    It requires:
    New systems
    New infrastructure
    New ways of thinking about risk, safety, and sustainability


    Even the tools we once used no longer exist in their original form.


    The Saturn V is gone.


    The supply chains are gone.


    Much of what we are doing now is not continuation.


    It is reconstruction—at a higher standard.

    Why the Moon—Again?


    If the goal is long-term exploration, why not go straight to Mars?


    Why return to a place we’ve already been?


    Because distance changes everything.


    The Moon is three days away.

    Mars is months.


    On the Moon, we can:
    Test systems
    Adapt quickly
    Learn from failure


    On Mars, we cannot.


    And so the Moon becomes something unexpected:


    Not the destination.
    But the proving ground.

    The Shift from Exploration to Infrastructure

    This is where the story changes.


    We are no longer planning missions.


    We are planning systems.

    At the Moon’s south pole, there is water ice—locked in shadow, preserved over time.


    That water can be transformed.


    Not just into something we use…
    But into something we build with.


    Fuel.
    Air.
    Sustainability.


    The Moon begins to look less like a place we visit…
    and more like a place we use to go further.

    The Gateway We Didn’t Expect


    If we can produce fuel on the Moon, something fundamental changes.


    Space travel no longer begins and ends on Earth.


    It becomes layered.


    Connected.


    Possible in ways it never was before.


    The Moon becomes:


    A refueling point
    A testing ground
    A foundation


    And in that role, it quietly becomes more important than Mars—at least for now.


    The Deeper Question


    This is where the surface-level question reveals something more.


    “Why has it taken so long to go back to the Moon?”


    Because we are no longer trying to go back.


    We are trying to go forward—differently.


    More deliberately.
    More sustainably.
    With a deeper understanding of what it actually means to leave Earth.


    A Final Thought


    We rushed to the Moon once—because we needed to prove that we could.


    We are returning now—because we need to understand what comes next.


    Not just how to arrive.
    But how to remain.


    And perhaps that is the real shift of our time:


    From capability…
    to responsibility.

  • AI & Understanding — Part 6: “Fairness Is Not Neutral: Who Decides What ‘Fair’ Means?”

    We often ask whether AI systems are fair.


    But fairness is not a technical setting.


    It is a decision.


    And behind every definition of fairness is a set of values — often unspoken, often embedded quietly into systems that appear objective.


    In the Age of Understanding, the question is no longer: Is this system fair?


    It is: Fair according to whom?

    The Illusion of Objective Fairness


    In everyday language, fairness feels intuitive.


    We assume it means:


    • Equal treatment
    • Equal opportunity
    • Equal outcomes


    But in practice, these are not the same.


    An AI system can be:


    • Fair in accuracy
    • Unfair in outcomes
    • Neutral in design
    • Biased in impact


    And often — it cannot satisfy all definitions at once.


    Fairness is not a single destination.


    It is a set of competing priorities.

    When Fairness Conflicts With Itself


    In machine learning, there are multiple formal definitions of fairness:


    • Equal accuracy across groups
    • Equal false positive rates
    • Equal opportunity (same chance of success)
    • Demographic parity (equal outcomes across groups)


    Here is the problem:


    Many of these definitions are mathematically incompatible.


    You cannot optimize all of them simultaneously.


    So every system makes a choice — explicitly or implicitly.


    And that choice reflects values.
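
    To make these definitions concrete, here is a minimal sketch in Python; the two groups, the labels, and the predictions are invented for illustration, not drawn from any real system.

    def rates(y_true, y_pred):
        # Confusion-matrix counts for one group.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        n = len(y_pred)
        return {
            "selection_rate": (tp + fp) / n,        # demographic parity compares this
            "true_positive_rate": tp / (tp + fn),   # equal opportunity compares this
            "false_positive_rate": fp / (fp + tn),  # equal false positive rates compare this
            "accuracy": (tp + tn) / n,              # equal accuracy compares this
        }

    # Invented labels (1 = would have succeeded) and model predictions for two groups.
    group_a = rates(y_true=[1, 1, 1, 0, 0, 0], y_pred=[1, 1, 0, 0, 0, 0])
    group_b = rates(y_true=[1, 0, 0, 0, 0, 0], y_pred=[1, 1, 0, 0, 0, 0])

    for metric in group_a:
        print(f"{metric}: group A = {group_a[metric]:.2f}, group B = {group_b[metric]:.2f}")

    In this toy data the selection rates and accuracies happen to match across the two groups, while the true and false positive rates do not: equalizing one family of metrics typically leaves another unequal.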

    A Simple Example (That Isn’t Simple)


    Imagine an AI tool used to screen job applicants.


    It predicts who is most likely to succeed in a role.


    Now consider two fairness goals:


    1. Equal accuracy across all groups
    2. Equal hiring rates across all groups


    If historical opportunity has been unequal, these goals may conflict.


    • Optimizing for accuracy may reinforce past patterns
    • Optimizing for equal outcomes may require adjusting predictions


    So what should the system do?


    There is no purely technical answer.


    This is a moral decision disguised as a mathematical one.
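
    As a sketch of that tension (every score and threshold below is hypothetical), the snippet shows why enforcing equal hiring rates across groups with different score distributions means choosing different thresholds per group, which is the “adjusting predictions” mentioned above.

    # Hypothetical screening scores for two applicant groups (invented numbers).
    # Group B scores lower on average, e.g. because the training data reflects
    # unequal historical opportunity.
    scores_a = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
    scores_b = [0.7, 0.6, 0.5, 0.4, 0.3, 0.2]

    def hire_rate(scores, threshold):
        return sum(s >= threshold for s in scores) / len(scores)

    # One shared threshold: unequal hiring rates (0.50 vs 0.17).
    print(hire_rate(scores_a, 0.65), hire_rate(scores_b, 0.65))

    # Equal hiring rates require group-specific thresholds (0.50 vs 0.50),
    # a value judgment rather than a purely technical fix.
    print(hire_rate(scores_a, 0.65), hire_rate(scores_b, 0.45))

    Neither choice is wrong by definition; the point is that whoever picks the threshold is also picking the fairness definition.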

    The Hidden Power of Defaults


    Most systems do not openly declare their fairness definition.
    They encode it through:


    • Default thresholds
    • Training data
    • Optimization targets
    • Business objectives


    Fairness becomes invisible — not because it is absent, but because it is assumed.


    And what is assumed is rarely questioned.
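
    One way to see where these assumptions live is to look at a deployment configuration. The example below is entirely hypothetical, but each default in it quietly answers a fairness question.

    # Hypothetical configuration for a screening model.
    # Nothing here mentions fairness, yet each default encodes a fairness choice.
    SCREENING_CONFIG = {
        "decision_threshold": 0.5,          # where the line between accept and reject sits
        "optimization_target": "accuracy",  # which kinds of error the model is trained to avoid
        "training_window_years": 10,        # whose history the model learns from
        "per_group_reporting": False,       # whether disparities are measured at all
    }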

    Who Gets to Decide?


    Fairness as Governance, Not Just Design


    Global AI frameworks increasingly recognize this.


    The OECD AI Principles emphasize fairness, accountability, and human-centered values.


    The European Union Artificial Intelligence Act requires risk management and human oversight for high-risk systems.


    But even with regulation, one tension remains unresolved:


    Regulation can require fairness.


    It cannot define it universally.


    The Risk of “Technically Fair, Socially Unjust”


    A system can meet formal fairness metrics and still produce outcomes that feel unjust.


    Why?


    Because metrics simplify reality.


    They measure what is visible.


    But they cannot fully capture:


    • Historical inequality
    • Structural barriers
    • Human context
    • Lived experience


    Fairness, when reduced to metrics alone, risks becoming performative.

    Toward Participatory Fairness
    If fairness cannot be purely technical, it must be relational.


    This means shifting from: Designed fairness → Participatory fairness


    Where:


    • Affected communities are included in system design
    • Trade-offs are made visible
    • Decisions are explained, not hidden
    • Feedback loops are real, not symbolic


    Fairness becomes something we negotiate — not something we assume.


    A More Honest Question
    Instead of asking:


    “Is this system fair?”


    We should ask:


    • What definition of fairness is being used?
    • What trade-offs were made?
    • Who benefits from this definition?
    • Who might be disadvantaged?
    • Can this system be challenged or changed?


    These questions move us from passive trust to active understanding.


    Closing Reflection


    In the Age of Information, fairness was often assumed.


    In the Age of Understanding, it must be examined.


    Because fairness is not neutral.


    It is shaped.


    And what is shaped can be reshaped.

  • AI & Understanding — Part 5: When Efficiency Replaces Expertise

    The Quiet Automation of Human Judgment

    Efficiency has always been a virtue in modern systems.


    We celebrate faster workflows.
    Quicker decisions.
    Reduced friction.


    Artificial intelligence accelerates this trend dramatically. It organizes information, detects patterns, summarizes complexity, and produces recommendations in seconds.


    In many contexts, this is an extraordinary achievement.


    But speed has a quiet side effect.


    When efficiency increases, something else can begin to fade.


    Expertise.


    This article is part of AI & Understanding — an ongoing exploration of how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.

    What Expertise Actually Is


    We often imagine expertise as knowledge.


    But expertise is more than accumulated information.


    It is pattern recognition shaped by experience.
    It is the ability to notice subtle signals others overlook.
    It is the discipline to pause when something feels inconsistent.


    Experts do not simply process data.


    They interpret it.
    They question it.


    They recognize when something does not fit the expected pattern.


    This kind of judgment develops slowly — through years of practice, mistakes, and reflection.


    Artificial intelligence does not erase expertise.


    But it can quietly change how it is used.

    The Drift Toward Automation


    When a system produces rapid answers, people naturally adapt their behavior.


    Instead of asking:
    What do I think?


    We begin asking:
    What does the system suggest?


    This shift is subtle. It rarely feels like surrender.


    It feels like assistance.
    Over time, however, reliance on automated recommendations can reshape professional habits. Studies in aviation, medicine, and decision science show that heavy automation can lead to reduced monitoring, skill erosion, and increased dependence on automated guidance.


    The system becomes the first voice in the room.


    Human judgment becomes the second.

    When Expertise Moves to the Background


    This shift does not happen because people stop caring about quality.


    It happens because systems reward efficiency.


    If a recommendation appears quickly, clearly, and confidently, questioning it introduces friction.


    And friction slows the process.


    In many organizations, slowing the process feels like inefficiency.


    So expertise becomes quieter.


    Not eliminated.


    Just less frequently exercised.


    The expert remains in the room, but their role changes.


    Instead of interpreting information, they validate the system’s output.

    The Risk of Passive Expertise


    This transformation carries an unexpected risk.


    When expertise becomes passive, it weakens.


    Skills sharpen through use.


    They dull through inactivity.


    In aviation research, pilots who rely heavily on autopilot systems sometimes experience decreased situational awareness. In healthcare, studies have shown that diagnostic support systems can influence clinical decisions — sometimes even when the algorithmic recommendation is incorrect.


    None of this suggests that automation is harmful.


    It suggests that expertise must remain active.


    Automation works best when it assists judgment, not when it replaces the habit of exercising it.

    A Question for the Age of AI
    Artificial intelligence can process more data than any human.


    But expertise is not only about processing.


    It is about interpretation.


    It is about context.


    It is about recognizing when a pattern is misleading.


    Machines can accelerate analysis.


    They cannot accumulate lived experience.


    That remains a human capability.

    A Personal Reflection


    When I use AI tools, I notice something interesting.


    The answers arrive so quickly that it becomes tempting to move forward immediately.


    The pace invites momentum.


    But sometimes the most valuable question is the simplest one:


    Would I have reached the same conclusion without the tool?


    That question does not reject technology.


    It protects judgment.

    Closing Thought


    Artificial intelligence will continue to make systems faster.


    That is inevitable.


    But speed should not quietly displace expertise.


    Tools should expand human capability.


    Not shrink the space in which human judgment operates.


    In the Age of Understanding, the goal is not to compete with machines.


    It is to remain fully human while using them.

  • When Humans Stop Questioning the Machine

    The Quiet Rise of Automation Bias

    When a system makes a recommendation,
    something in us exhales.


    A number appears.
    A score is calculated.
    A ranking is delivered.


    The uncertainty narrows.

    The burden lightens.


    And sometimes… so does our vigilance.

    This article is part of AI & Understanding — an ongoing exploration of how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.

    The Comfort of Structure


    Human beings are not only seekers of truth.
    We are seekers of certainty.


    When an algorithm presents a structured answer — clean, formatted, confident — it reduces ambiguity. And ambiguity is cognitively expensive.

    Psychologist Daniel Kahneman describes how the mind favors cognitive ease. Information that is clear and coherent feels more reliable. It reduces mental strain. It gives us relief.


    Artificial intelligence excels at this.


    It delivers outputs that look:


    • Organized
    • Measured
    • Quantified
    • Decisive


    It feels authoritative.


    Not because it possesses wisdom.


    Because it possesses format.

    What Automation Bias Really Is


    Automation bias is the tendency to over-trust automated systems — even when they are wrong.


    It does not arise from ignorance.


    It arises from subtle psychological shifts.


    At first, we double-check.


    Then we confirm occasionally.


    Then we notice the system is “usually right.”


    Then we begin to defer.


    The drift is gradual.


    No one announces it.


    There is no dramatic surrender.


    Just a quiet redistribution of attention.


    Eventually, a sentence appears in meeting rooms and decision logs:


    “We just followed the system.”


    That sentence dissolves something.


    • Agency.
    • Ownership.
    • Moral friction.

    Friction Is Where Judgment Lives


    Friction slows us down.


    It forces pause.


    Pause invites evaluation.


    Evaluation invites responsibility.


    Artificial intelligence removes friction.


    It reduces the time between question and answer.
    Between uncertainty and resolution.
    Between doubt and direction.


    Efficiency increases.


    But when friction disappears, so does the moment in which we wrestle.


    We are not anti-efficiency.


    We are pro-awareness.


    When decisions become easier, we interrogate them less.


    And interrogation is where discernment lives.

    The Subtle Relief of Delegation


    There is something emotionally appealing about delegation.
    If the model ranked the candidates,
    if the system flagged the anomaly,
    if the tool predicted the risk —


    then the weight feels shared.


    Or sometimes, removed.


    But responsibility does not disappear.
    It relocates.


    When humans stop questioning automated outputs, bias does not vanish. It embeds more deeply. Errors do not evaporate. They replicate quietly.


    And the most concerning part?


    Automation bias does not feel unethical.


    It feels modern.
    Efficient.
    Rational.


    It feels like progress.

    A Personal Observation


    When I use AI tools, I notice the temptation to accept the first answer.


    Not because I am careless.


    Because it is easier.
    Because it is fast.
    Because it sounds coherent.


    Ease is seductive.


    But discernment requires a second look.


    A pause.
    A question.
    Where did this come from?
    What might be missing?
    Does this align with what I know to be true?


    These are small interruptions.


    But they keep judgment active.

    The Test of This Era


    Artificial intelligence does not remove human judgment.


    It tests whether we are willing to exercise it.


    The more seamless the system becomes,
    the more intentional our attention must be.


    The quieter the machine grows,
    the louder our discernment must remain.


    In the Age of Understanding, the question is not whether machines will become more capable.


    They will.


    The question is whether we will remain engaged.


    Because when humans stop questioning the machine,
    the machine does not gain wisdom.


    It simply gains silence.

    Selected References


    Kahneman, D. (2011). Thinking, Fast and Slow.


    Skitka, L. J., Mosier, K., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies.


    NIST. (2023). AI Risk Management Framework.


    Research on AI-assisted clinical decision-making, JAMA (2020–2023).

  • If Everyone Is Responsible, No One Is

    The Accountability Gap in AI Decisions

    If an AI system rejects a qualified job applicant, who made that decision?


    If an automated tool flags someone as “high risk,” who answers for what happens next?


    “The algorithm” is not a person.
    And yet the consequences land on people.


    This is Part 3 of AI & Understanding — a series exploring how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.

    The Accountability Gap Isn’t a Mystery. It’s a Design Outcome

    AI decisions often move through a pipeline:


    Data → Model → Product → Workflow → Human action → Human impact


    By the time harm occurs, responsibility has been fragmented across teams, vendors, and processes. Everyone touched it. No one owns it.


    Researchers who study algorithmic auditing describe this as an end-to-end accountability problem: accountability must be designed across the lifecycle, not retroactively assigned when something goes wrong.

    The Accountability Stack
    A practical way to name “who owns what”

    Here is the simplest way I’ve found to make responsibility visible again:


    1) Data Owners — What went in
    Accountability question:
    Who owns the quality, representativeness, and provenance of the data?


    Non-negotiables:
    Document where data came from


    Track known gaps and skews


    Define what “good enough” means for the context


    If your inputs reflect inequality, your outputs will inherit it—no matter how clean the dashboard looks.

    2) Model Builders / Providers — What was built
    Accountability question:
    Who can explain the model’s intended use, limitations, and failure modes?


    Non-negotiables:
    Clear documentation (what it can and can’t do)


    Evaluation against known risks


    Ongoing monitoring expectations


    Governance frameworks increasingly emphasize lifecycle risk management—especially the need to “govern, map, measure, and manage” risks in real deployments.

    3) Deployers — Where it’s used
    This is the layer most organizations underestimate.


    Accountability question:
    Who is responsible for how the system behaves inside your workflow?


    Because even a “good” tool can become harmful when:
    It’s used beyond its intended purpose


    Staff are pressured to follow it


    Overrides aren’t supported


    Errors are treated as “exceptions” instead of signals


    The EU AI Act’s approach to “high-risk” systems puts explicit duties on deployers, including assigning competent human oversight and monitoring use.

    4) Decision Owners — Who acts on it
    This is the easiest layer to miss, because it feels like a formality:


    “Humans are in the loop.”


    But “human in the loop” can mean:


    a real decision-maker with authority
    or
    a checkbox at the end of a pipeline


    Accountability question:
    Who has the authority to disagree with the model—without punishment?


    If a human cannot realistically override the system, then the system is the decision-maker.

    5) Appeals, Audits, and Aftercare — What happens when it harms
    This is where accountability becomes real.


    Accountability question:
    If the AI is wrong, how does a person correct it—and how fast?


    Non-negotiables:
    A clear appeal path (not buried, not vague)


    A timeline (days, not months)


    A way to contest inputs and outputs


    Logging and traceability (so issues can be investigated)


    This is also where internal algorithmic audits matter most—because they don’t just ask “does it work?” but “does it work fairly and safely in practice?”

    The 3-Question Accountability Test


    (Use this on any AI tool before you trust it)


    If an organization can’t answer these, it’s not ready to deploy:


    1. Who is accountable for outcomes?
    Name a role. Not a department. Not “the vendor.”


    2. Where can people appeal or correct it?
    Make it simple. Make it visible. Make it fast.


    3. How is it audited over time?
    Because models drift. Workflows change. Incentives distort use.


    This is why credible frameworks emphasize governance as a cross-cutting function—accountability is not a one-time checkbox.

    A Simple RACI Map


    (What accountability looks like in practice)


    If you want AI to be “responsible,” you need a responsible structure.


    Responsible: Product owner / Ops lead (day-to-day performance and monitoring)


    Accountable: Executive sponsor (owns outcomes and risk acceptance)


    Consulted: Legal, privacy, domain experts, frontline staff, impacted users


    Informed: Everyone affected by decisions—especially when rights, access, or employment are involved


    When accountability is named, systems behave differently.
    When it isn’t, harm becomes “nobody’s fault.”
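
    As one way to make that concrete, the sketch below records such a structure for a single hypothetical system; the roles, names, and values are placeholders, not a prescription.

    # A minimal, hypothetical accountability record for one AI system.
    ACCOUNTABILITY_MAP = {
        "system": "applicant-screening-model",      # placeholder name
        "responsible": "product owner / ops lead",  # day-to-day performance and monitoring
        "accountable": "executive sponsor",         # owns outcomes and risk acceptance
        "consulted": ["legal", "privacy", "domain experts", "frontline staff", "impacted users"],
        "informed": ["everyone affected by decisions"],
        "appeal_path": "documented, visible, resolved in days",
        "audit_cadence_months": 6,                  # placeholder cadence
    }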

    The Point Isn’t to Slow AI Down


    It’s to stop pretending it carries moral weight.


    AI can calculate.
    It can predict.
    It can recommend.


    But it cannot absorb responsibility.


    Accountability is not the enemy of innovation.
    It is the scaffolding that prevents innovation from becoming careless power.


    Closing Thought


    When responsibility is distributed, harm becomes invisible.


    And when harm becomes invisible, it becomes repeatable.


    If everyone is responsible, no one is.


    So we name it.
    We design for it.
    We keep it human.

    Selected References
    • Raji, I. D., et al. (2020). Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. ACM FAccT.
    • NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
    • OECD. (2019; updated 2024). OECD AI Principles.
    • European Commission. EU Artificial Intelligence Act: human oversight and deployer obligations for high-risk AI systems.

  • AI and the Illusion of Objectivity

    Why Algorithmic Decisions Feel Neutral — Even When They’re Not

    We tend to trust numbers.


    A score feels neutral.
    A ranking feels fair.
    An algorithm feels unbiased.


    After all, machines don’t have opinions.


    Or do they?


    This article is part of AI & Understanding — an ongoing exploration of how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.


    The Seduction of Data


    Artificial intelligence systems are often described as “data-driven.” That phrase carries weight. Data implies measurement. Measurement implies precision. Precision implies fairness.


    But data does not emerge from nowhere.


    It is collected by humans.
    Labeled by humans.
    Selected by humans.
    Interpreted by humans.


    Large language models and predictive systems — whether deployed in hiring, lending, healthcare, or criminal justice — are built on historical information. And history is not neutral.


    When we say an algorithm is objective, what we often mean is that its reasoning is hidden.


    Opacity is not neutrality.


    When Bias Scales


    In 2016, investigative reporting by ProPublica revealed racial disparities in the COMPAS risk assessment tool used in U.S. courts. The algorithm, designed to predict recidivism, disproportionately flagged Black defendants as higher risk compared to white defendants.


    The system did not “intend” bias.


    It reflected patterns in historical data and institutional practices.


    Similarly, researchers Joy Buolamwini and Timnit Gebru demonstrated in 2018 that commercial facial recognition systems had significantly higher error rates for darker-skinned women than for lighter-skinned men (Buolamwini & Gebru, 2018).


    Again, the models were trained on skewed datasets.


    Bias did not disappear in automation.
    It scaled.


    When human decisions are imperfect, harm is localized.
    When algorithmic decisions are imperfect, harm replicates.


    The Psychological Comfort of Automation


    Part of the illusion of objectivity comes from us.


    Psychologists refer to “automation bias” — the tendency to over-trust automated systems, even when they are flawed. When a decision is delivered by a machine, it can feel less emotional, less political, less personal.


    It feels clean.


    Nobel laureate Daniel Kahneman explains in Thinking, Fast and Slow that humans equate structured reasoning with reliability. Clear outputs reduce cognitive strain. Reduced strain increases perceived credibility.


    In other words:


    If it looks systematic, we assume it is fair.
    But structured output is not the same as just outcome.


    Objectivity vs. Optimization
    Artificial intelligence systems do not pursue fairness. They pursue objectives defined in their training and design.


    They optimize for:


    Prediction accuracy
    Engagement
    Efficiency
    Risk minimization
    Profit


    Those objectives are chosen by organizations.


    Even large language models like GPT-4 are trained to generate statistically probable responses, not verified truths. As acknowledged in OpenAI’s technical documentation, these systems are probabilistic — they predict patterns in language rather than confirm reality.


    An AI model cannot be more neutral than the goal it is given.


    If the optimization target embeds bias, the output will reflect it.


    Governance Is a Human Question


    Recognizing this, global institutions have begun emphasizing accountability.


    The OECD AI Principles call for transparency, robustness, and human oversight. The World Economic Forum has identified algorithmic bias and AI-driven misinformation as emerging global risks.


    These are not fringe concerns.


    They are governance concerns.


    When an algorithm influences:
    Who gets hired
    Who receives credit
    Who is flagged for risk
    Who receives medical prioritization


    The question is no longer technical.


    It is ethical.


    And ethical systems require accountability.


    The Deeper Issue


    The illusion of objectivity is powerful because it relieves us of discomfort.


    If the algorithm decided, no one had to.


    Responsibility diffuses.


    But AI does not eliminate judgment.


    It relocates it:


    Into training data
    Into system design
    Into objective functions
    Into deployment decisions


    Human judgment never disappears.


    It simply becomes less visible.


    A Personal Practice


    When I encounter AI-generated analysis, scores, or summaries, I now ask:


    What data trained this?
    Who defined the objective?
    What might be missing?
    Who benefits from this output?
    Who might be harmed by it?


    These questions do not reject AI.


    They contextualize it.


    The Age of Understanding requires more than technological literacy.


    It requires structural literacy.


    Closing Thought


    An algorithm can calculate.


    It cannot deliberate.


    It can predict.


    It cannot weigh justice.


    Objectivity is not achieved by removing humans from systems.


    It is achieved by making human responsibility explicit.


    Selected References
    Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research.
    Angwin, J., et al. (2016). Machine Bias. ProPublica.
    Kahneman, D. (2011). Thinking, Fast and Slow.
    OpenAI. (2023). GPT-4 Technical Report.
    OECD. (2019). OECD AI Principles.
    World Economic Forum. (2024). Global Risks Report.

  • The Word I Would Ban

    If you could permanently ban a word from general usage, which one would it be? Why?

    If I could permanently ban one word from everyday language, it wouldn’t be a swear word.


    It wouldn’t be slang.


    It wouldn’t even be something obviously harmful.


    It would be:


    Should.


    Not because it’s dramatic.
    Not because it’s offensive.
    But because it is quietly corrosive.


    The Problem With “Should”


    “Should” sounds responsible.


    Mature.


    Productive.


    Adult.


    But listen closely to how it shows up:


    I should exercise more.
    I should be further along by now.
    I should have known better.
    They should act differently.
    You should be grateful.


    It sounds motivational.
    It rarely is.


    Underneath “should” is often something much less noble:


    Shame
    Comparison
    Unrealistic timelines
    Moral superiority
    Regret dressed up as logic


    “Should” doesn’t ask questions.
    It delivers verdicts.


    And verdicts rarely invite growth.


    The Quiet Pressure of a Small Word
    In cognitive psychology, “should statements” are considered a cognitive distortion. They create rigid expectations about how we and others must behave.


    Rigid expectations feel structured.
    But they are brittle.


    When we don’t meet them, we don’t gain clarity.


    We gain guilt.


    When others don’t meet them, we don’t gain curiosity.


    We gain resentment.


    “Should” sounds like discipline.


    Often, it’s just pressure wearing a respectable outfit.


    A Small Experiment
    Try this shift:


    Instead of:
    I should be better at this.

    Try:
    I want to improve at this.
    I’m disappointed in my progress.
    This matters to me.


    Notice the difference?


    One is accusation.
    The other is information.


    One tightens the chest.
    The other opens the door.


    Language shapes thought.
    Thought shapes emotion.
    Emotion shapes behavior.


    Sometimes the smallest word carries the heaviest weight.


    If “Should” Actually Worked…
    If “should” were an effective motivational tool, none of us would need alarm clocks.


    “I should wake up early.”
    “I should stop scrolling.”
    “I should drink more water.”


    And yet… here we are.


    If “should” burned calories, we’d all be Olympic athletes.


    What I’d Replace It With
    Not lower standards.
    Not apathy.
    Not indifference.


    I’d replace “should” with awareness.
    Instead of:
    I should be further ahead.
    Ask:
    According to whom?


    Instead of:
    I should handle this better.
    Ask:
    What would handling this well actually look like?


    The goal isn’t to remove responsibility.
    It’s to remove unnecessary self-punishment disguised as productivity.


    If I could ban one word, it wouldn’t be to control language.


    It would be to invite clarity.


    Because growth rarely begins with accusation.


    It begins with honesty.


    And sometimes the most powerful change you can make isn’t in your schedule…


    It’s in your vocabulary.

  • When AI Is Confidently Wrong

    The Illusion of Competence

    Why We Must Ask for Sources in the Age of Large Language Models

    We are living in a time when answers arrive instantly.


    Type a question.
    Receive a paragraph.
    Polished. Structured. Persuasive.


    Tools like ChatGPT and other large language models don’t hesitate. They don’t appear uncertain.

    They rarely say, “I don’t know.”
    And that is precisely the problem.


    The Illusion of Competence


    Large language models such as GPT-4 are trained on vast datasets and designed to predict the most statistically probable next word in a sequence.

    They generate language that sounds coherent and authoritative.


    But they do not “know” facts.
    They do not verify claims.
    They do not distinguish between truth and probability.


    They generate what is likely — not what is confirmed.
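
    As a toy illustration of that distinction (the vocabulary and probabilities below are invented, and real models operate over tens of thousands of tokens rather than three candidate words), greedy decoding simply returns whichever continuation scores highest:

    # Toy next-token prediction with invented probabilities; not a real model.
    # The procedure maximizes likelihood and never checks the claim against reality.
    next_token_probs = {
        "Sydney": 0.55,     # a common association in hypothetical training text
        "Canberra": 0.40,   # the factually correct continuation
        "Melbourne": 0.05,
    }

    prompt = "The capital of Australia is"
    best_token = max(next_token_probs, key=next_token_probs.get)
    print(f"{prompt} {best_token}")  # fluent, confident, and in this toy case wrong

    If the training distribution over-represents an association, the most probable continuation and the true one can differ, and the output gives no signal of which case applies.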


    Even OpenAI acknowledges this. In its GPT-4 Technical Report (2023), the organization notes that the model can produce incorrect information and fabricate details while presenting them fluently.


    When these systems are wrong, they are often wrong beautifully.


    Fluent error is more dangerous than obvious error.


    A typo invites skepticism.
    A polished paragraph invites trust.


    What “Confidently Wrong” Looks Like


    Researchers have documented the phenomenon known as AI “hallucination” — instances where models generate plausible but false information (Ji et al., 2023).


    It can look like:
    • Fabricated academic citations
    • Incorrect statistics stated precisely
    • Invented quotes attributed to real people
    • Outdated research presented as current
    • Logical explanations built on false premises


    The tone does not change.
    The formatting does not falter.
    The confidence remains intact.


    And that creates a new cognitive risk.
    We begin outsourcing discernment.


    A Real-World Consequence
    In 2023, attorneys submitted a legal brief containing case citations generated by ChatGPT that did not exist. The case, Mata v. Avianca, Inc., resulted in sanctions from a federal judge after the fabricated cases were discovered.


    The AI had produced authoritative-sounding legal precedent.


    It simply wasn’t real.


    The risk is not theoretical.


    Why We Believe Fluent Language


    Psychologist Daniel Kahneman explains in Thinking, Fast and Slow that humans are deeply influenced by cognitive ease. Information that is clear, well-structured, and easy to process feels more true.


    Research by Reber and Schwarz (1999) further demonstrates that statements presented fluently are more likely to be judged as accurate — regardless of their factual correctness.


    We are wired to trust clarity.


    In the past, misinformation often looked chaotic.


    Now, it looks professional.


    And that changes everything.


    The Responsibility Shift


    The rise of tools like Claude, Gemini, and Copilot has democratized content production.


    But verification has not been automated.


    In fact, the responsibility has shifted:


    From publisher → to user.


    Organizations such as the OECD emphasize transparency, accountability, and human oversight in their AI principles. The World Economic Forum has identified AI-generated misinformation as a growing global risk.


    The message is consistent:


    AI is powerful.
    Human judgment remains essential.


    If you use AI:


    • Ask for sources.
    • Confirm publication dates.
    • Verify statistics through primary references.
    • Be cautious with medical, legal, or financial claims.
    • Treat outputs as drafts, not declarations.


    AI can accelerate thinking.


    It cannot replace due diligence.


    This Isn’t an Anti-AI Argument


    This is a pro-literacy argument.


    Large language models are extraordinary tools. They help synthesize ideas, structure thoughts, and explore complex themes quickly.


    But they are not epistemic authorities.


    They are probability engines.


    The Age of Understanding requires something new from us:


    Disciplined curiosity.


    Not paranoia.
    Not fear.
    Active verification.


    A Personal Practice
    Before I share anything publicly that originated from AI, I ask:
    • Where did this come from?
    • Can I find the original study?
    • Is this current?
    • Does it align with reputable institutions?


    In a world where answers are instant, credibility must be intentional.


    Selected References


    OpenAI. (2023). GPT-4 Technical Report. arXiv:2303.08774.


    Ji, Z., et al. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys.


    Kahneman, D. (2011). Thinking, Fast and Slow.


    Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth. Consciousness and Cognition.


    Mata v. Avianca, Inc. (S.D.N.Y. 2023).


    Series Note
    This article is part of AI & Understanding — an ongoing exploration of how artificial intelligence intersects with human judgment, bias, ethics, and responsibility in the Age of Understanding.

  • To the Girl Who Navigated Alone

    What I Would Tell My Teenage Self

    I wouldn’t start with advice.
    I would start with an apology.


    I’m sorry you felt alone in rooms full of people.


    I’m sorry you thought being “low maintenance” made you easier to love.
    I’m sorry you learned so early to swallow what hurt.


    You were just a girl.
    You were not dramatic.
    You were not weak.
    You were not asking for too much.


    You were asking to be seen.


    I’m sorry you had to grow up without a map.


    Other girls had parents guiding them — helping them choose, correcting them, protecting them, reassuring them when they doubted themselves.

    You were navigating without direction.


    You learned to make decisions alone.
    To carry responsibility quietly.
    To pretend you weren’t scared when you were.


    No one told you it was okay not to know what you were doing.


    So you acted like you did.


    That wasn’t maturity.
    That was survival.


    I know how heavy it felt sometimes.


    The quiet sadness you didn’t always have words for.
    The way you froze instead of fought.
    The way you convinced yourself it didn’t matter — when it did.


    You thought strength meant not needing anyone.


    It doesn’t.


    Strength is letting yourself feel without shaming yourself for it.


    I would tell you this gently, because I know you wouldn’t believe it right away:


    You are worthy of love.


    Not because you are useful.
    Not because you are responsible.
    Not because you hold everything together.


    Worthy.
    As you are.


    Before you achieve anything.
    Before you prove anything.
    Before you fix anything.


    There will be seasons of pain.


    There will be loss.
    There will be moments when your body feels like it has betrayed you.
    There will be days when you question your value.


    But listen carefully:


    None of it is a verdict on your worth.


    You survive more than you think you will.
    You grow softer, not harder.
    You learn to speak — not loudly, but clearly.


    And one day you will realize the sadness didn’t ruin you.


    It deepened you.


    You will build a life that feels steadier.


    You will wake up some mornings and feel peace — not because everything is perfect, but because you are no longer fighting yourself.


    You will laugh more than you expect to.
    You will forgive more than you thought you could.
    You will choose differently.


    And you will stop chasing love.


    Because you will finally understand:


    You deserved it all along.

  • AI as Amplifier: What It Expands Depends on Us

    Some tools make life easier.
    Some tools make us faster.
    And some tools quietly change us.


    Artificial intelligence belongs to the third category.


    It answers quickly.

    It organizes information effortlessly.

    It drafts, summarizes, analyzes, predicts.

    It reduces friction in ways that feel almost invisible.


    Used well, it expands human capability.


    But tools do not create character.
    They amplify it.


    A calculator doesn’t make someone good at math — it enhances what they already understand. Social media didn’t create division — it scaled it. Wealth does not create generosity — it reveals it.


    AI is no different.


    It amplifies productivity.
    It amplifies efficiency.
    It amplifies access to knowledge.


    It can also amplify bias.
    It can amplify misinformation.
    It can amplify ego.
    It can amplify overconfidence.


    Artificial intelligence can be confidently wrong.


    It processes context and patterns at extraordinary speed, but it does not pause to question its own assumptions. It does not wrestle with conscience. It does not feel the weight of consequence.


    That responsibility remains human.


    We often speak about AI alignment — how to align machines with human values.


    But perhaps the deeper question is whether we are aligned ourselves.


    When capability accelerates faster than maturity, systems destabilize.


    History shows us this repeatedly.

    Power without restraint creates imbalance. Innovation without wisdom creates unintended consequences.

    Expansion without reflection creates erosion.


    AI expands what we are already bringing to it.


    If we approach it thoughtfully, it scales thoughtfulness.
    If we approach it carelessly, it scales carelessness.
    If we use it to serve something larger than ourselves, it can magnify contribution.
    If we use it to serve ego, it will magnify that too.


    This is not an argument against AI.


    It is a reminder about us.


    Conscience is not automatic. It is shaped by choice.

    Self-awareness strengthens it.

    Empathy sharpens it. Contribution refines it. Narcissism erodes it. Power without accountability dulls it.


    Technology does not erode conscience on its own.


    But it does remove friction.


    And when friction disappears, character becomes visible.


    We are no longer simply asking what AI can do.


    We must ask who we are becoming while using it.


    The question is not whether artificial intelligence will grow more powerful.

    It will.


    The question is whether our moral maturity will grow alongside it.


    Expansion is inevitable.


    Alignment is a choice.