
Engagement

AI Curriculum Design

Apprenticeship providers, universities, and corporate L&D teams keep running into the same problem: AI curriculum that's technically accurate but doesn't produce learners who can actually apply what they've learned. Programmes that look good on paper, satisfy the awarding body, and then collapse under delivery — because the design wasn't built backwards from competency, the tool-specific content wasn't separated from the underlying principles, and the documentation wasn't usable by anyone other than the original author. The fix is structured learning design with clear architecture: tool-agnostic principles as the foundation, tool-specific execution layered cleanly on top, and documentation rigorous enough that a second SME could pick it up and build from it without a verbal briefing.

Book a scoping call

15 minutes. No sales pitch.

Recent work

What a curriculum engagement looks like in practice

Anonymised at the client's request. Further engagement summaries will be published as they're approved for release.

Engagement structure

How curriculum design works

Most curriculum projects run through four phases: discovery and standards mapping, detailed learning design, verification and review, and handover. The first two are where the architecture is set; the third is where the rigour shows up; the fourth is where the engagement makes itself unnecessary. Where internal facilitator enablement is part of the picture, the work tends to dovetail with the formats from hackathons and workshops; where the curriculum is intended to land in ongoing operational practice, it pairs naturally with an adoption programme downstream.

  1. Phase 1: Discovery and Standards Mapping

    Understand the qualification framework, the target learners, the assessment requirements, and the tools the programme will teach. Map learning objectives to KSBs or competency standards. Identify which content is platform-agnostic — the transferable principles that hold whether the learner ends up using Google, Microsoft, or something that doesn't exist yet — and which is tool-specific: the swappable execution layer that needs maintaining as products evolve. The architecture decisions made here determine whether the curriculum ages well or has to be rebuilt every twelve months.

    Deliverable: Curriculum architecture document with module structure, KSB mapping, and assessment alignment.

  2. Phase 2: Learning Design

    Build the detailed design documentation for each unit — what every page teaches, the non-negotiable concepts, the frameworks, the knowledge checks, the hands-on activities. The discipline here is to design for the content production pipeline, not just the learner. The documentation needs to be usable by a second SME, a content generator, or a QA team without a verbal briefing. If the only person who can interpret the spec is the person who wrote it, the curriculum is fragile by construction.

    Deliverable: Complete learning design documentation per unit, ready for content production.

  3. Phase 3: Verification and Review

    Every tool reference verified against the live product. Every KSB mapping cross-referenced against the marking rubric and assessment plan. Quality review against three criteria: could a second SME build from this, could a coach understand the learning journey, and could a newcomer maintain it without a briefing. Where references are time-sensitive — a feature that may be renamed or deprecated within a year — they're tagged for volatility so future maintainers know exactly which sections need monitoring.

    Deliverable: Verified, reviewed documentation with volatility tags for maintenance.

  4. Phase 4: Handover and Enablement

    Transfer the documentation to the client's content team or delivery team. Walk through the design decisions, the quality patterns, and the maintenance requirements so the choices are legible, not just the artefacts. If the client wants their internal facilitators to deliver, co-deliver the first cohort and debrief afterwards. The end state carries no ongoing dependency on me.

    Deliverable: The client owns the curriculum and can maintain, deliver, and evolve it independently.

A view on curriculum

"The goal of every engagement is to make Twisthand unnecessary. If you still need me after six months, I've failed."

Curriculum is, more than any other engagement type, an exercise in capability transfer. The deliverable isn't a stack of documentation — it's a self-sustaining capability inside the client organisation: a learning team that understands the design choices, can maintain the volatile sections as products evolve, and can extend the curriculum into new units without restarting from scratch. Consultants who build dependency are consultants who haven't done the job. The longer version of why this matters sits in how we think.

Common questions

What people usually ask before booking

Can you design curriculum that our own facilitators deliver?

Both models work. Some clients want me to deliver directly. Others want me to design the programme and enable their internal team to run it. The second model is more sustainable, and it's where I think most organisations should aim. I build the content, train your facilitators, co-deliver the first cohort, and hand over. You own the capability.

How do you measure learner outcomes beyond completion rates?

Completion tells you nothing about adoption. I design assessment that tests whether learners can apply what they've learned — reasoning and judgment, not recall. The real measure is whether learners are using AI workflows independently three months later. I build that measurement into the design from day one, not as an afterthought.

What's your experience with apprenticeship standards and accredited programmes?

I'm currently designing the Google tooling track for a national Level 4 AI & Automation Practitioner curriculum, including full learning design documentation mapped to KSBs, EPA requirements, and marking rubrics. I understand IfATE standards, backwards design methodology, and how to build content that satisfies the awarding body while being genuinely practical for the learner.

What engagements look like

Curriculum projects are scoped by the number of modules, the assessment framework, and whether you need design only or design plus delivery. Typically measured in weeks, not months.

If you're building an AI programme to standard

The scoping call is where we work out what shape the engagement should take — full architecture and learning design for a new programme, a verification-and-review pass on existing content, or facilitator enablement on a curriculum already in flight.

Book a scoping call

15 minutes. No slides. Just a conversation.