
Staff AI Skills Training in Higher Education: What Actually Works in 2026

Here’s a pattern that keeps repeating across UK universities right now.

The institution rolls out Microsoft Copilot. An AI policy gets drafted – usually by a small working group, usually under time pressure. A communication goes out to all staff. And then… not much happens. People carry on more or less as before, occasionally opening a prompt window out of curiosity, mostly ignoring the whole thing.

This isn’t apathy. It’s what happens when tool deployment runs ahead of capability building. And it’s costing institutions more than they realise – not just in underused licences, but in staff anxiety, inconsistent practice, and the slow erosion of trust that comes when people feel like transformation is being done to them.

Staff AI skills training, done properly, is how you close that gap. But “done properly” is doing a lot of work in that sentence. So let’s get into what that actually means.

The problem with most AI training in HE

The standard playbook looks something like this: a lunchtime session, a shared folder of resources, a Teams message saying “have a go and let us know how you get on.” Attendance is optional. Follow-up is nonexistent. Three months later, nothing has changed – and when you ask staff about it, they either didn’t know the session happened or were too busy to attend.

The failure here isn’t a lack of enthusiasm. Most university staff are curious about AI, even if some are anxious about it. The failure is one of design.

Generic AI training assumes a uniform starting point. It assumes that a finance officer and a research administrator have roughly the same relationship with these tools, the same risk exposure, the same daily workflows that AI might touch. They don’t. Not even close. And when training treats them as if they do, both groups disengage – one because it’s too basic, the other because it doesn’t speak to their actual work.

Three things separate training that produces real behaviour change from training that produces completed attendance registers.

The first is meeting people where they are. Some staff are already using AI tools every day – building prompts, automating tasks, pushing at the edges of what’s possible. Others have never typed a prompt in their lives and aren’t sure they want to. Any training programme that doesn’t account for that range is going to struggle from the start. A proper readiness assessment before you design anything isn’t a luxury – it’s the only way to avoid wasting everyone’s time.

The second is specificity. Library services teams have different use cases, different data sensitivities, and different efficiency opportunities than HR teams or student experience teams. Training that speaks directly to the work people actually do – with examples they recognise, from a context they understand – lands completely differently from a generic slide deck about AI in the abstract. This is the piece that’s hardest to get from off-the-shelf providers, and it’s the piece that matters most.

The third is addressing anxiety before addressing skills. A meaningful proportion of HE staff are worried, at some level, that AI is coming for their jobs. You can disagree with that worry. You can have data that complicates it. But if you skip past it and go straight into prompt engineering, you’ve lost the room before you’ve started. The emotional dimension of AI adoption isn’t a detour – it’s part of the curriculum.

A framework for thinking about AI competency

When we developed the WorkSmart-AI Staff Competency Framework, we were trying to answer a simple question: what does it actually mean for a member of university staff to be genuinely competent with AI tools? Not just comfortable – genuinely competent.

We landed on five dimensions.

The first is foundational understanding – knowing what AI can and can’t do, including where it goes wrong. Hallucination, bias, data leakage risks: these aren’t edge cases. Staff need to understand them before they start relying on outputs.

The second is practical prompting – being able to construct prompts that produce useful, reliable results rather than vague outputs that require more work to fix than they saved.

The third is responsible use – understanding the institutional policy, the GDPR implications, and the specific data governance considerations that apply to their role. This looks different for someone handling student records than for someone writing marketing copy.

The fourth is workflow integration – knowing when AI is genuinely the right tool, and when it isn’t. This is underrated. A lot of early AI adoption creates new inefficiencies by applying tools to tasks they’re not suited for.

The fifth is critical evaluation – the ability to look at an AI output and make a sound judgement about whether it’s accurate, appropriate, and fit for purpose before doing anything with it.

Most off-the-shelf training touches the first dimension partially and gestures at the second. That’s not enough.

Progression, not events

One of the most useful shifts you can make in how you think about staff AI skills training is moving from “has this person been trained?” to “where is this person on their adoption journey?”

We map individual progression across six levels – from exploratory, conversational use at Level 1 through to workflow automation and multi-agent systems at Level 6. Most university staff are sitting at Levels 1 or 2. Most institutional ambitions require them to reach Level 3 or 4 within the next year or two.

That gap is bridgeable. But not with a single training event followed by nothing. The institutions that are seeing genuine productivity gains from AI adoption are the ones treating capability development as an ongoing programme – something with structure, check-ins, and a clear sense of direction – rather than a box to tick before moving on.

The role-specific challenge

Universities are peculiar organisations in the best possible way. A single institution might employ several thousand people across academic roles, professional services, student experience, finance, estates, IT, marketing, research support, and senior leadership. Those groups don’t just have different job titles – they have fundamentally different relationships with information, different workflows, and different risk profiles when it comes to AI.

Our curriculum maps use cases across 14 distinct role types that are common across UK HE. That level of specificity isn’t us being precious about our methodology. It’s the thing that determines whether training produces changed behaviour or just filled seats.

Making the case internally

Training and development teams in HE know the challenge: making the case for investment upwards. “Staff feel more confident about AI” is real, and it matters, but it’s not a number you can put in a business case.

The good news is that AI productivity gains are measurable if you build the measurement in from the start rather than bolting it on later. Time saved per task type, reduction in administrative load, improvement in output turnaround – these translate into figures that finance committees and governance boards can work with. Training designed with outcome measurement baked in gives L&D and HR leads something to stand behind, not just a story to tell.

Questions worth asking any provider

If you’re evaluating external providers, there are a few things worth pushing on before you commit.

Does the curriculum address HE specifically, or is it a corporate package with a few sector references added? Is there a readiness assessment before training design begins, or does everyone get the same programme regardless of where they start? How are outcomes measured beyond satisfaction surveys? Does responsible use training engage with UK HE policy and GDPR in concrete terms, or just generic AI ethics? And is there any provision for ongoing support, or does the relationship end when the workshop does?

These aren’t trick questions. They’re just the things that separate training that produces lasting change from training that produces a line on a completion report.

Where to start

If you’re early in the process, the single most useful thing you can do before commissioning any training is understand where your staff actually are. A structured readiness assessment gives you a baseline – and, perhaps more valuably, it starts a different kind of conversation. Not “here is the training you’re going to do” but “here is what we found, here is where the opportunities are, here is where the anxiety is concentrated, and here is what we think would help.”

That diagnostic stage is often where the most honest and productive conversations happen. And it’s the foundation that makes everything that follows more likely to work.


WorkSmart-AI designs and delivers staff AI skills training programmes built specifically for UK higher education. Our curriculum maps across 14 staff role types, is grounded in a five-dimension competency framework, and is designed to produce outcomes you can measure – not just confidence scores.

Thinking about AI capability development for your institution? We’d welcome a conversation.