I work on one organizing question: when AI is adopted, who gains agency and who loses it? The research spans education, organizational management, labor markets, knowledge production, and strategic foresight — unified by a single analytical commitment: distinguishing genuine empowerment from dependency dressed up as adoption.
I am based at Narxoz Business School in Almaty, where I lead research on AI governance, institutional futures, and the application of these frameworks in a Central Asian context. I publish through working papers rather than traditional journals — a deliberate choice that reflects one of my own central arguments: that conventional academic publishing moves too slowly for the world it is trying to describe.
Working papers are published open access on Zenodo and SSRN. Where multiple versions exist, the latest version is listed.
The foundational framework separating access to AI tools from agency in their use. Broad access without agency produces passive dependency: polished-looking outputs and weakened judgment. The matrix defines the target condition for institutions and societies: high access and high agency, not merely adoption. This 2026 version extends the original into a practical diagnostic tool.
AI-OS is a governance-centered architecture treating AI adoption as an operating model question rather than a tool rollout. It works at the task level — assigning each task a permitted mode of use based on stakes, ambiguity, reversibility, and sensitivity — making the human-AI boundary visible, auditable, and adjustable by evidence rather than informal drift.
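The task-level logic can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the mode names, the 1-to-5 scoring, and the thresholds are all assumptions introduced here; the paper's contribution is the governance principle that each task gets an explicit, auditable mode based on the four dimensions.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    """Permitted modes of AI use (labels are illustrative, not from the paper)."""
    AUTONOMOUS = "AI may complete the task; human spot-checks outputs"
    ASSISTED = "AI drafts; a human reviews and signs off"
    HUMAN_ONLY = "no AI use; human judgment is the evidence"

@dataclass
class Task:
    name: str
    stakes: int         # 1 (low) .. 5 (high)
    ambiguity: int      # 1 (well-specified) .. 5 (open-ended)
    reversibility: int  # 1 (easily undone) .. 5 (irreversible)
    sensitivity: int    # 1 (public) .. 5 (highly sensitive)

def permitted_mode(task: Task) -> Mode:
    """Assign a permitted mode from the four risk dimensions.
    A single high score is enough to escalate, so we take the max.
    Thresholds are hypothetical placeholders."""
    risk = max(task.stakes, task.ambiguity, task.reversibility, task.sensitivity)
    if risk >= 4:
        return Mode.HUMAN_ONLY
    if risk >= 3:
        return Mode.ASSISTED
    return Mode.AUTONOMOUS
```

Because the rule is explicit, the human-AI boundary can be audited and re-tuned as evidence accumulates, rather than drifting informally: `permitted_mode(Task("draft internal FAQ", 1, 2, 1, 1))` yields `Mode.AUTONOMOUS`, while raising any one dimension to 4 escalates the task to `Mode.HUMAN_ONLY`.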
FARABI (Framework for AI-Resilient Assessment and Balanced Integrity) reframes assessment integrity as an evidence design problem. The primary problem is not misconduct but validity: if AI can satisfy an assessment without the student demonstrating the targeted reasoning, the assessment was already weak evidence — AI has simply made that visible. FARABI provides a portfolio-level triage method for restoring defensible inference.
The defining capacity of an AI-era leader is the ability to design, sequence, and stage-manage complex human-AI workflows. Orchestrated intelligence is a teachable, assessable competence — decomposing problems, running accountable iteration loops, and making reasoning visible — not a technical skill but a strategic and cognitive one.
IHACC (Iterative Human-AI Co-Creation) argues that AI changes the structure of knowledge production, not just its speed. Acceleration without proof standards produces noise. Human judgment, verification, and epistemic standards must remain explicit throughout AI-assisted inquiry, not assumed.
Proposes a portable, renewable credential structure linked to program-level proof standards rather than badge inflation. Designed to make a degree's capability claims legible to employers — and to help people navigate AI-driven labor market transitions rather than being stranded by them.
Academic publishing is structurally misaligned with the pace of AI change. The Dynamic Research Continuum proposes a versioned, continuously updated pipeline that maintains quality standards while closing the gap between frontier developments and peer-reviewed knowledge.
The central argument is that business schools must redesign themselves as capability-and-proof institutions. Integrates the AI Matrix, FARABI, AI-OS, Orchestrated Intelligence, the AI Passport, and the Dynamic Research Continuum into a single operating model for schools that need to remain credible when AI assistance is everywhere.
Extends the business school manifesto to higher education institutions more broadly — addressing the structural challenges facing universities as AI weakens the evidentiary link between student outputs and the learning claims that underwrite degrees.
Maps six futures for mass higher education — from utopia through muddling along to legitimacy crisis and institutional collapse. Current evidence places the highest probability on the more disruptive scenarios. The system faces four interacting walls: a proof wall, a jobs wall, a cost wall, and a legitimacy wall. These are not independent risks — they interact and reinforce each other.
Uses PISA 2022 creative thinking results as a warning about capability readiness for an AI-saturated world. Argues that creativity must be treated as a measurable, teachable competence — not rhetorical aspiration — and that business schools need to build it into curriculum, assessment, and proof routines as a matter of urgency.
The labor market is quietly closing off the entry-level routes that have historically justified mass higher education. The "Jobs Wall" captures the risk that AI tightens junior hiring pathways even while headline employment figures remain stable, leaving both graduates and institutions exposed.
Using US Bureau of Labor Statistics projections, this paper provides empirical grounding for the Jobs Wall concept — showing how graduate employment routes are narrowing in ways that standard labor market analysis tends to understate.
A response to Yotzov et al.'s firm-level AI impact research, arguing that adoption rates are the wrong unit of analysis. What matters is intensity of use and depth of integration — the same distinction between access and agency that runs through the broader research agenda.
Introduces the Grey Swan / Archimedes framework for governed, AI-assisted strategic foresight. A grey swan is a plausible, consequential disruption with visible signposts already present in current data — one that goes unaddressed not because it is unknowable but because it is uncomfortable. The framework produces quarterly, auditable probability estimates across two postures, with results that are currently sobering.
Applies the access-agency framework to the geopolitics of AI — examining how the three major blocs are positioning on infrastructure, governance, and the distribution of AI-derived agency across their populations and institutions.
A close reading of Anthropic's research on Claude's impact on knowledge work, arguing that speed gains are real but secondary. What the data actually supports is the importance of orchestration — structured human-AI workflows — over unmanaged acceleration. Updated in a subsequent research note.
Examines the gap between reported AI adoption rates and actual intensity of use, drawing on usage data from three major providers. The findings support the access-agency distinction: widespread nominal adoption coexists with shallow, unmanaged use in most organizations.
A pair of practical frameworks for strategists and researchers working with the Microsoft Copilot and OpenAI enterprise usage reports — cutting through vendor framing to identify what the data actually supports and where the inferential gaps are.
Research enquiries, collaboration, speaking, and executive education.
I am open to conversations about research collaboration, speaking engagements, consultancy on AI governance and assessment design, and executive education for organizations navigating the shift to AI-mediated work. Based in Almaty; working globally.
[email protected]