PhD  ·  Narxoz Business School, Almaty

Agency or abdication:
AI, empowerment, and
the futures we are choosing.

I work on one organizing question: when AI is adopted, who gains agency and who loses it? The research spans education, organizational management, labor markets, knowledge production, and strategic foresight — unified by a single analytical commitment: distinguishing genuine empowerment from dependency dressed up as adoption.

I am based at Narxoz Business School in Almaty, where I lead research on AI governance, institutional futures, and the application of these frameworks in a Central Asian context. I publish through working papers — a deliberate choice that reflects one of my own central arguments: that conventional academic publishing moves too slowly for the world it is trying to describe.

Top 3%
SSRN all-time authors
Top 10
SSRN recent downloads
#1
Business researcher Central Asia
32+
Working papers 2025–26
Ewan Simpson
About →

Research

Working papers published open access on Zenodo and SSRN. Where multiple versions exist, the latest is listed.

Foundational Frameworks
2026  ·  Framework

The AI Matrix as Diagnostic: Access, Agency, and Adoption

The foundational framework separating access to AI tools from agency in their use. Broad access without agency produces passive dependency — polished-looking outputs, weakened judgment. Defines the target condition for institutions and societies: high access and high agency, not merely adoption.

2026  ·  Framework

Decomposing the Capability Overhang: Access, Agency, and the Geography of AI Adoption

Extends the AI Matrix framework into a geographic and organizational analysis of why AI capability accumulates unevenly. The capability overhang — the gap between what AI can do and what organizations actually do with it — is explained by the access-agency distinction, not by technology availability alone.

2026  ·  Governance

Managing AI Like It Matters: The Artificial Intelligence Operating System (AI-OS)

AI-OS is a governance-centered architecture that treats AI adoption as an operating-model question rather than a tool rollout. It works at the task level, assigning each task a permitted mode of use based on stakes, ambiguity, reversibility, and sensitivity, so the human-AI boundary becomes visible, auditable, and adjustable by evidence.

2026  ·  Assessment

Beyond Detection: FARABI and the Assessment Credibility Shock in Higher Education

FARABI (Framework for AI-Resilient Assessment and Balanced Integrity) reframes assessment integrity as an evidence design problem. The primary problem is not misconduct but validity: if AI can satisfy an assessment without the student demonstrating the targeted reasoning, the assessment was already weak evidence. FARABI provides a portfolio-level triage method for restoring defensible inference.

2026  ·  Knowledge Work

Orchestrated Intelligence: Rethinking Knowledge Work in the Age of AI

The defining capacity of an AI-era leader is the ability to design, sequence, and stage-manage complex human-AI workflows. Orchestrated intelligence is a teachable, assessable competence — decomposing problems, running accountable iteration loops, and making reasoning visible.

2026  ·  Epistemology

Flow Acceleration and the IHACC Model: Human-AI Co-Creation in Epistemology

IHACC (Iterative Human-AI Co-Creation) argues that AI changes the structure of knowledge production, not just its speed. Acceleration without proof standards produces noise. Human judgment, verification, and epistemic standards must remain explicit throughout AI-assisted inquiry.

2025  ·  Credentialing

The AI Passport: Towards a New Conceptual Framework for Global Skills Certification

Proposes a portable, renewable credential structure linked to program-level proof standards. Designed to make a degree's capability claims legible to employers and to help people navigate AI-driven labor market transitions rather than being stranded by them.

2026  ·  Research Methods

Too Slow for the AI Age? Building a Dynamic Research Continuum

Academic publishing is structurally misaligned with the pace of AI change. The Dynamic Research Continuum proposes a versioned, continuously updated pipeline that maintains quality standards while closing the gap between frontier developments and peer-reviewed knowledge.

Education, Credentials & Institutional Futures
2026  ·  Manifesto

Business School 2030: A Manifesto for the AI Operating Environment

Business schools must redesign themselves as capability-and-proof institutions. Integrates the AI Matrix, FARABI, AI-OS, Orchestrated Intelligence, the AI Passport, and the Dynamic Research Continuum into a single operating model for schools that need to remain credible when AI assistance is everywhere.

2026  ·  Manifesto

HEI 2030: A Manifesto for the Higher Education AI Operating Environment

Extends the business school manifesto to higher education institutions more broadly — addressing the structural challenges facing universities as AI weakens the evidentiary link between student outputs and the learning claims that underwrite degrees.

2026  ·  Scenario Planning  ·  Co-authored with A. Wachtel

The Proof and Trust Shock: Generative AI and the Future of Mass Higher Education

Maps six futures for mass higher education — from utopia through muddling along to legitimacy crisis. Current evidence places the highest probability on the more disruptive scenarios. The system faces four mutually reinforcing walls: a proof wall, a jobs wall, a cost wall, and a legitimacy wall.

2026  ·  Evidence Synthesis

General-Purpose Versus Learning-Oriented AI: A Structured Cross-Source Synthesis

A systematic synthesis of evidence on AI and learning, distinguishing between general-purpose AI tools and those specifically designed to support learning. Examines what the evidence actually supports about AI's role in education — and where the inferential gaps remain.

2026  ·  Creativity

A Generation at Risk? Creativity, PISA 2022, and the Demands of an AI Economy

Uses PISA 2022 creative thinking results as a warning about capability readiness for an AI-saturated world. Creativity must be treated as a measurable, teachable competence — not rhetorical aspiration — built into curriculum, assessment, and proof routines.

Labor Markets & the Jobs Wall
2026  ·  Research Agenda

Is AI Part of the Recruitment Recession? A Research Agenda

The labor market is quietly closing off the entry-level routes that have historically justified mass higher education. The "Jobs Wall" captures the risk that AI tightens junior hiring pathways even while headline employment figures remain stable.

2026  ·  Data Analysis

The Proof and Trust Shock: What the BLS 2024–34 Projections Suggest About the Jobs Wall

Uses US Bureau of Labor Statistics projections to provide empirical grounding for the Jobs Wall concept, showing how graduate employment routes are narrowing in ways that standard labor market analysis tends to understate.

2026  ·  Commentary

Beyond Adoption: Intensity and Integration as the Missing Link in Firm-Level AI Impact

A response to Yotzov et al.'s firm-level AI impact research, arguing that adoption rates are the wrong unit of analysis. What matters is intensity of use and depth of integration — the same distinction between access and agency that runs through the broader research agenda.

Foresight, Strategy & Geopolitics
2026  ·  Foresight Framework

Using AI in Scenario Planning: Letting It Rip or Doing the Right Thing?

Introduces the Grey Swan / Archimedes framework for governed, AI-assisted strategic foresight. A grey swan is a plausible, consequential disruption with visible signposts already present in current data — one that goes unaddressed not because it is unknowable but because it is uncomfortable. Results are currently sobering.

2026  ·  Geopolitics

The AI Triad: Power, Infrastructure, and Agency in US-EU-China Strategy

Applies the access-agency framework to the geopolitics of AI — examining how the three major blocs are positioning on infrastructure, governance, and the distribution of AI-derived agency across their populations and institutions. Updated in a subsequent paper.

Evidence, Commentary & Strategic Reading
2026  ·  Commentary

Speed Is Not the Whole Story: Anthropic’s Claude Study and Orchestrated Intelligence

A close reading of Anthropic's research on Claude's impact on knowledge work. Speed gains are real but secondary. What the data supports is the importance of orchestration — structured human-AI workflows — over unmanaged acceleration.

2026  ·  Commentary

The LLM Usage Gap: Evidence from Anthropic, Microsoft, and OpenAI

Examines the gap between reported AI adoption rates and actual intensity of use. Widespread nominal adoption coexists with shallow, unmanaged use in most organizations — supporting the access-agency distinction at organizational scale.

2026  ·  Strategic Toolkit

Strategic Toolkits: Reading the Major AI Usage Reports

Practical frameworks for cutting through vendor framing in the Microsoft Copilot and OpenAI enterprise usage reports — identifying what the data actually supports and where the inferential gaps are.


Grey Swan / Archimedes

A governed, AI-assisted foresight framework that converts public data into auditable probabilities for two global scenarios and four outcomes. Named after Taleb’s concept: a grey swan is consequential and visible in today’s data, but uncomfortable enough to ignore. This model refuses to look away. The framework runs on a six-month cadence against a live public evidence base, with results updated each run. They are currently sobering.

Spring 2026  ·  Intelligence Brief
The Grey Swan Scenario Framework: The Spring 2026 state of play
Trade fragmentation and health surveillance degradation have driven deterioration since the October 2025 baseline. A third risk — economic stress — is approaching threshold under the revised v11.9 architecture. Two flags are active. The Wealth-Diffusion Gate remains closed.
Working Paper  ·  WP11
The framework: methodology and first-run results
Full documentation of the Grey Swan / Archimedes foresight model — the evidence tier architecture, the six Archimedes Levers, the two scenarios, the four outcomes, and the governance rules that keep the model honest. Published open access on Zenodo.

The Virtuous City

A parallel project: recovering the civic framework of Abu Nasr al-Farabi (c. 872–950), philosopher from what is now Kazakhstan, and asking what it still demands of modern cities. Al-Farabi built the most systematic account of civic purpose in the medieval world. His questions — what is a city for, how does governance fail, how does institutional wisdom survive its founders — are more rigorous than most current answers. This work sits alongside the AI research agenda. The name FARABI is not a coincidence.

Essay
The Second Teacher
What a tenth-century philosopher from Kazakhstan knew about cities that we keep forgetting. A public essay on Al-Farabi’s civic framework and the tradition it represents.
Self-assessment
The Virtuous City Scorecard
A governance diagnostic based on Al-Farabi’s framework. Eighteen statements across six dimensions of civic governance, with a diagnostic drawn from his failure typology.

Contact

Research enquiries, collaboration, speaking, and executive education.

I am open to conversations about research collaboration, speaking engagements, consultancy on AI governance and assessment design, and executive education for organizations navigating the shift to AI-mediated work. Based in Almaty; working globally.