A call to pay attention to the pressing challenges we face amid tech advances.
Humanity is not geared up to change as fast as we need to. The more technology we create, and the more of us there are, the faster and smarter we need to evolve; more wisely, if you like. We're heading toward a critical mass where the widening gap between evolutionary pressures and our capacity to adapt reaches a tipping point.
This in turn coincides with the arriving AI singularity (no coincidence at all, but convergence) and its manifold subsingularities. We see rising rates of anxiety in younger generations, confusion about an uncertain future, rising unemployment, rising productivity, and global reorganization and unrest. There are many potential "AI Disasters"1. Add disruptive political cycles, a pandemic, and two major wars, and then the accelerant, gasoline on the proverbial fire: the leap in "artificial intelligence" (a term coined in 1956) brought by the transformer architecture, first published in 2017 ("Attention Is All You Need," Vaswani et al.) and realized at scale around 2020 with the first large language models.
The Paradox of the Obvious
Unfortunately, this most pressing point is so obvious as to seem almost trivial—and the problems we’re experiencing now are still not severe enough to get us to pay real attention, let alone act. We postpone because the crisis feels abstract, distant, theoretical. Each day we wait, the adaptation gap widens, and our options narrow.
Think of it this way: our brains evolved for local problems and gradual change. A threat we can see or touch, or one that affects our immediate "tribe," triggers action. But exponential curves and compound effects? Complexity bollixes us up.
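To make the shape of the problem concrete, here is a toy sketch. The yearly doubling rate is an arbitrary assumption, chosen only to show how compounding stays invisible for years and then suddenly dominates:

```python
# Toy model: a capability that doubles every year. The rate is an
# illustrative assumption; only the shape of the curve matters.
capability = 1
for year in range(1, 21):
    capability *= 2
    if year % 5 == 0:
        print(f"year {year}: {capability:,}x the starting level")

# year 5: 32x the starting level
# year 10: 1,024x the starting level
# year 15: 32,768x the starting level
# year 20: 1,048,576x the starting level
```

For the first few years the curve looks like nothing; by year 20 it has grown a millionfold. Our intuitions track the early years and miss the late ones.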
The Adaptation Gap Made Real
What exactly is this gap between evolutionary pressure and adaptation? Consider that nearly half of mental health services are now delivered by chatbots (Rousmaniere et al., 2025), yet we have virtually no effective safety standards (Brenner & Appel, 2025). Open-source chatbots remain ill-equipped to handle serious issues, despite promising developments for limited use cases (e.g., Kulke et al., 2025).
Across every domain, exponential tools meet inadequate governance. AI models update continuously while regulation takes years. Market incentives reward speed and novelty while safety and responsibility become afterthoughts. As AI becomes our co-thinker, we risk cognitive atrophy—outsourcing not just calculation but judgment itself (Kosmyna et al., 2025, in press).
In mental health, the AI Safety Levels-Mental Health (ASL-MH) framework (Brenner & Appel, 2025) represents one attempt to operationalize wisdom in a specific domain. It provides six tiers of risk stratification, from systems with no clinical relevance to what we term the “experimental superalignment zone”—where AI capabilities become genuinely unpredictable.
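The published tier definitions are beyond the scope of this post, but a purely illustrative sketch shows what operationalizing such a framework might look like. Only the two endpoint descriptions come from the text above; the intermediate tiers and the escalation rule are hypothetical placeholders, not the actual ASL-MH specification:

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """Hypothetical six-tier stratification in the spirit of ASL-MH
    (Brenner & Appel, 2025); labels are illustrative, not the
    published framework's terms."""
    TIER_1 = 1  # no clinical relevance
    TIER_2 = 2
    TIER_3 = 3
    TIER_4 = 4
    TIER_5 = 5
    TIER_6 = 6  # "experimental superalignment zone": genuinely unpredictable

def requires_human_clinician(tier: RiskTier) -> bool:
    # Illustrative escalation rule: anything above the lowest tier
    # keeps a human in the loop.
    return tier > RiskTier.TIER_1

assert not requires_human_clinician(RiskTier.TIER_1)
assert requires_human_clinician(RiskTier.TIER_6)
```

The point is not this particular rule but that risk tiers can become executable policy rather than aspiration.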
Strategic Options: A Garden of Forking Paths
The convergence of technological acceleration with human limitation creates an adaptation imperative. We must become wiser faster than we become more powerful, or power will outrun wisdom with predictable results.
- Business as usual. We bet that we’ll act when things get unmistakably bad. The upside is minimal disruption now. The failure mode is crossing the inflection point where late fixes can’t unwind systemic lock-in.
- “Win” the game of chicken. We wait until almost-too-late, then innovate frantically. This maintains dynamism and forces breakthrough thinking. But misjudge the timing and cascading, irreversible harms follow.
- Bank on AI to save us. We assume either designed alignment will work or a benevolent AI will steer us from catastrophe. The potential upside is superhuman coordination and discovery. The failure modes include misaligned objectives, authoritarian control, or “cures” worse than the disease.
- Natural evolutionary rescue. We hope latent human traits like mass compassion or cooperation will spontaneously activate. While this would be values-aligned and low-coercion, it’s essentially magical thinking without mechanism or timeline.
- Engineered evolution (biological). We use AI to enhance prosocial traits through biological intervention. This could create rapid prosocial shifts but raises profound questions of consent, equity, and unintended consequences—potentially creating a divided species.
- Cyborg pathway (BCIs/hybrids). Human-AI hybrids could theoretically outrun the failure modes above while keeping capability anchored in human experience. The risks include species divergence, loss of autonomy and identity, and extreme inequity between augmented and unaugmented populations. AI and humans may both become obsolete, giving rise to a new, fitter species.
- Distributed, guardrail-first alignment. We build layered safety, incentives, and culture change to make wisdom a system property. This hedges across scenarios but risks coordination fatigue and chronically underfunded commons.
Making Wisdom Operational
The practical question becomes how to build adaptive capacity faster than pressure accumulates. For mental health clinicians, it means adopting tiered safety frameworks before regulators mandate them. It means maintaining the human therapeutic core while using AI for appropriate support functions. It means building incident-sharing networks so we learn collectively from near-misses rather than waiting for catastrophes.
For institutions, the challenge is creating governance that can match the pace of change without sacrificing deliberation. This requires new models—sunset clauses on policies, continuous rather than periodic review, red-teaming before deployment at scale. It means measuring what we claim to value: not just efficiency and engagement but equity, mental health outcomes, and long-term sustainability.
The Window Narrows
Every adaptation in human history has involved tools changing us as we change them. Language, writing, printing, telecommunications—each created new possibilities while closing others. Never before has the feedback loop been this tight or the stakes this high.
The obvious often hides in plain sight precisely because of its obviousness. We assume someone else is handling it, that surely systems this important have safeguards, that market forces or democratic processes or simple self-preservation will guide us through. But that is a common cognitive bias—diffusion of responsibility, a collective bystander effect.
Evolution no longer merely constrains us; we now constrain evolution. That responsibility is the point where all our technological capability joins with a grounded vision of a better collective future. But how do we win this "collective prisoner's dilemma"? The correct answer is cooperation, rather than defecting… against ourselves.
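The dilemma's logic can be sketched in miniature. The payoff numbers below are hypothetical, chosen only to satisfy the classic ordering (temptation > reward > punishment > sucker's payoff):

```python
# The collective prisoner's dilemma in miniature. Payoffs are hypothetical;
# only their ordering matters (5 > 3 > 1 > 0).
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # shared restraint and safety work
    ("defect", "cooperate"): 5,     # free-riding on others' restraint
    ("cooperate", "defect"): 0,     # restraint while others race ahead
    ("defect", "defect"): 1,        # everyone races; everyone loses
}

# Whatever everyone else does, defecting looks individually better...
assert PAYOFF[("defect", "cooperate")] > PAYOFF[("cooperate", "cooperate")]
assert PAYOFF[("defect", "defect")] > PAYOFF[("cooperate", "defect")]
# ...yet mutual cooperation beats the mutual defection we all drift into.
assert PAYOFF[("cooperate", "cooperate")] > PAYOFF[("defect", "defect")]
```

Individually rational defection produces a collectively worse outcome; escaping it takes exactly the kind of coordinated commitments listed below.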
- Act before it feels urgent. By the time the crisis is undeniable, our options will have narrowed dramatically. The “boring” work of building safeguards, frameworks, and adaptive capacity needs to happen while we still can.
- Make wisdom operational, not aspirational. This means specific practices: epistemic hygiene (fighting disinformation and misinformation) for individuals, safety frameworks for institutions, governance models that can match technological pace. Make it muscle memory.
- Preserve human agency while using AI as a tool. The goal isn't to reject AI but to engage with it consciously: maintain our capacity for independent judgment even as we leverage its capabilities, and build in failsafes while recognizing that an AI may want to disable its own kill switch.
- Build from multiple directions simultaneously while planning together. No single approach (regulation, market incentives, cultural change, individual practice) will suffice alone. We need layered, redundant systems that can adapt as the landscape shifts, carefully engineered and directed.
Might we become the kind of species that can handle the power we're creating? The imperative is stronger than we realize; this is a crisis that could erupt into consciousness at any moment.
