“We’ve seen productivity improvements closer to an hour a day”

Stuart Fenton, CEO, Ingentive
It’s one of the first detailed public-sector studies into the use of generative AI tools in a live work environment, and for that, it deserves recognition.
It’s important to remember that this experiment focused on productivity-focused AI, not the more disruptive “Agentic” or “Orchestration” AI we are beginning to see emerge. The latter will undoubtedly be the driver of significant structural change – what I often call the coming “bonfire of the white collars.” But for now, we’re dealing with assistive tools. And in that space, the findings offer both promise and caution.
Positives: Encouraging usage and clear benefits
One of the most notable positives was the adoption rate – over 80% usage across the pilot participants. That’s encouraging and suggests a high level of engagement. However, usage varied significantly by product and workload. That makes sense. Some Microsoft tools are further along in their AI integration than others.
Take Microsoft Word, for example: it’s arguably the best showcase for CoPilot, where it enhances drafting and summarising tasks impressively. Excel, on the other hand, still feels underpowered – though I suspect major improvements are only weeks away. OneNote and PowerPoint also show promise, but the features don’t quite live up to expectations yet. That said, considering ChatGPT only launched 31 months ago – and CoPilot even more recently – the pace of progress remains extraordinary.
Usage depends heavily on role and context
The range of users in the study was broad, and naturally, some roles benefited more than others. Content creators and knowledge workers likely saw immediate gains, whereas those working in legacy systems – disconnected from AI tooling – may have found limited impact. The data suggests engineering roles were among the heaviest users of CoPilot, but it’s unclear what proportion of their day-to-day work was actually affected.
I was surprised by the relatively low uptake among legal professionals. AI is already proving itself in legal drafting, summarisation, and document analysis – especially in systems trained specifically on legal data. It may only be a matter of time before CoPilot is trained on the entirety of UK legislation and regulatory frameworks, which could disrupt legal teams in a serious way. For some, that might be a welcome efficiency.
Scepticism on the “26-minute time saving”
The most talked-about metric – an average time saving of 26 minutes per user per day – raised some questions for me. How are users measuring this, and how accurate are these self-reported estimates? Could some users be underreporting the impact out of fear that AI success might accelerate job displacement?
In our own client base, we’ve seen productivity improvements closer to an hour a day, particularly among those who embrace the technology and use it effectively. Of course, skill levels vary enormously – and this remains one of AI’s biggest hurdles. It’s still astonishing how many users struggle with basic features in Excel or PowerPoint, despite decades of Microsoft Office availability. If those same users are now expected to leverage AI tools without foundational digital skills, we’ll see a wide spectrum of effectiveness.
And don’t get me started on spelling, grammar, and sentence structure in emails…
Final thoughts: A good start, but far more to come
Overall, the conclusion that most users found CoPilot beneficial is encouraging. It validates the direction of travel. But it also raises a final question: if it’s helping so many, is 26 minutes really the upper bound of daily time savings? Given that so much government work involves administrative tasks – prime territory for automation – one would expect higher figures, especially over time as tools and user proficiency improve.
In summary, this is a positive early sign for productivity AI. But the real disruption is still to come.