I work on one organizing question: when AI is adopted, who gains agency and who loses it? The research spans education, organizational management, labor markets, knowledge production, and strategic foresight — unified by a single analytical commitment: distinguishing genuine empowerment from dependency dressed up as adoption.
I am based at Narxoz Business School in Almaty, where I lead research on AI governance, institutional futures, and the application of these frameworks in a Central Asian context. I publish through working papers — a deliberate choice that reflects one of my own central arguments: that conventional academic publishing moves too slowly for the world it is trying to describe.
Working papers are published open access on Zenodo and SSRN. Where multiple versions exist, the latest version is listed.
The foundational framework separating access to AI tools from agency in their use. Broad access without agency produces passive dependency: polished-looking outputs and weakened judgment. Defines the target condition for institutions and societies: high access and high agency, not merely adoption.
Extends the AI Matrix framework into a geographic and organizational analysis of why AI capability accumulates unevenly. The capability overhang — the gap between what AI can do and what organizations actually do with it — is explained by the access-agency distinction, not by technology availability alone.
AI-OS is a governance-centered architecture treating AI adoption as an operating model question rather than a tool rollout. Works at the task level — assigning each task a permitted mode of use based on stakes, ambiguity, reversibility, and sensitivity — making the human-AI boundary visible, auditable, and adjustable by evidence.
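The task-level assignment described above can be illustrated with a minimal sketch. This is not the AI-OS implementation: the four dimensions (stakes, ambiguity, reversibility, sensitivity) come from the description, but the mode names, the 0–3 scoring scale, and the max-of-dimensions rule are illustrative assumptions.

```python
# Illustrative sketch of task-level mode assignment in the spirit of AI-OS.
# The four dimensions are from the framework; scales, mode names, and the
# max-of-dimensions rule are assumptions made for this example only.
from dataclasses import dataclass

# Ordered from most to least AI autonomy permitted (hypothetical labels).
MODES = [
    "autonomous",                # AI acts without review
    "ai-drafts-human-reviews",   # AI produces, human approves
    "human-leads-ai-assists",    # human drives, AI supports
    "human-only",                # AI use not permitted
]

@dataclass
class Task:
    name: str
    stakes: int         # 0 (low) .. 3 (high)
    ambiguity: int      # 0 (well-specified) .. 3 (open-ended)
    reversibility: int  # 0 (easily undone) .. 3 (irreversible)
    sensitivity: int    # 0 (public) .. 3 (highly sensitive)

def permitted_mode(t: Task) -> str:
    """Map a task's risk profile to its permitted mode: the highest-risk
    dimension governs, so one irreversible or sensitive aspect is enough
    to restrict AI use for the whole task."""
    risk = max(t.stakes, t.ambiguity, t.reversibility, t.sensitivity)
    return MODES[risk]

print(permitted_mode(Task("draft meeting notes", 0, 1, 0, 0)))  # → ai-drafts-human-reviews
print(permitted_mode(Task("finalize exam grades", 3, 1, 2, 3)))  # → human-only
```

Because each assignment is an explicit function of recorded scores, the human-AI boundary stays visible and auditable, and adjusting a threshold or score in response to evidence changes the permitted mode in a traceable way.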
FARABI (Framework for AI-Resilient Assessment and Balanced Integrity) reframes assessment integrity as an evidence design problem. The primary problem is not misconduct but validity: if AI can satisfy an assessment without the student demonstrating the targeted reasoning, the assessment was already weak evidence. FARABI provides a portfolio-level triage method for restoring defensible inference.
The defining capacity of an AI-era leader is the ability to design, sequence, and stage-manage complex human-AI workflows. Orchestrated intelligence is a teachable, assessable competence — decomposing problems, running accountable iteration loops, and making reasoning visible.
IHACC (Iterative Human-AI Co-Creation) argues that AI changes the structure of knowledge production, not just its speed. Acceleration without proof standards produces noise. Human judgment, verification, and epistemic standards must remain explicit throughout AI-assisted inquiry.
Proposes a portable, renewable credential structure linked to program-level proof standards. Designed to make a degree's capability claims legible to employers and to help people navigate AI-driven labor market transitions rather than being stranded by them.
Academic publishing is structurally misaligned with the pace of AI change. The Dynamic Research Continuum proposes a versioned, continuously updated pipeline that maintains quality standards while closing the gap between frontier developments and peer-reviewed knowledge.
Business schools must redesign themselves as capability-and-proof institutions. Integrates the AI Matrix, FARABI, AI-OS, Orchestrated Intelligence, the AI Passport, and the Dynamic Research Continuum into a single operating model for schools that need to remain credible when AI assistance is everywhere.
Extends the business school manifesto to higher education institutions more broadly — addressing the structural challenges facing universities as AI weakens the evidentiary link between student outputs and the learning claims that underwrite degrees.
Maps six futures for mass higher education — from utopia through muddling along to legitimacy crisis. Current evidence places the highest probability on the more disruptive scenarios. The system faces four interacting walls: a proof wall, a jobs wall, a cost wall, and a legitimacy wall that reinforce each other.
A systematic synthesis of evidence on AI and learning, distinguishing between general-purpose AI tools and those specifically designed to support learning. Examines what the evidence actually supports about AI's role in education — and where the inferential gaps remain.
Uses PISA 2022 creative thinking results as a warning about capability readiness for an AI-saturated world. Creativity must be treated as a measurable, teachable competence — not rhetorical aspiration — built into curriculum, assessment, and proof routines.
The labor market is quietly closing off the entry-level routes that have historically justified mass higher education. The "Jobs Wall" captures the risk that AI tightens junior hiring pathways even while headline employment figures remain stable.
Using US Bureau of Labor Statistics projections, provides empirical grounding for the Jobs Wall concept — showing how graduate employment routes are narrowing in ways that standard labor market analysis tends to understate.
A response to Yotzov et al.'s firm-level AI impact research, arguing that adoption rates are the wrong unit of analysis. What matters is intensity of use and depth of integration — the same distinction between access and agency that runs through the broader research agenda.
Introduces the Grey Swan / Archimedes framework for governed, AI-assisted strategic foresight. A grey swan is a plausible, consequential disruption with visible signposts already present in current data — one that goes unaddressed not because it is unknowable but because it is uncomfortable. Results are currently sobering.
Applies the access-agency framework to the geopolitics of AI — examining how the three major blocs are positioning on infrastructure, governance, and the distribution of AI-derived agency across their populations and institutions. Updated in a subsequent paper.
A close reading of Anthropic's research on Claude's impact on knowledge work. Speed gains are real but secondary. What the data supports is the importance of orchestration — structured human-AI workflows — over unmanaged acceleration.
Examines the gap between reported AI adoption rates and actual intensity of use. Widespread nominal adoption coexists with shallow, unmanaged use in most organizations — supporting the access-agency distinction at organizational scale.
Practical frameworks for cutting through vendor framing in the Microsoft Copilot and OpenAI enterprise usage reports — identifying what the data actually supports and where the inferential gaps are.
A governed, AI-assisted foresight framework that converts public data into auditable probabilities for two global scenarios and four outcomes. Named after Taleb's concept: a grey swan is consequential and visible in today's data, but uncomfortable enough to ignore. This model refuses to look away. The framework runs on a six-month cadence against a live public evidence base, with results updated each run; they are currently sobering.
A parallel project: recovering the civic framework of Abu Nasr al-Farabi (c. 872–950), philosopher from what is now Kazakhstan, and asking what it still demands of modern cities. Al-Farabi built the most systematic account of civic purpose in the medieval world. His questions — what is a city for, how does governance fail, how does institutional wisdom survive its founders — are more rigorous than most current answers. This work sits alongside the AI research agenda. The name FARABI is not a coincidence.
Research inquiries, collaboration, speaking, and executive education.
I am open to conversations about research collaboration, speaking engagements, consultancy on AI governance and assessment design, and executive education for organizations navigating the shift to AI-mediated work. Based in Almaty; working globally.
ewan@ewansimpson.org