Mastering AI, New Use Cases, and What ChatGPT Voice Still Gets Wrong
January 2026 brought a stronger Advanced Voice Mode, underwhelming memory improvements, new General Purpose use cases, and the launch of our Mastering AI conversation series.

March 2026 brought one genuinely meaningful ChatGPT update, plus a cluster of signals about where AI adoption is going next.
The headline product change was GPT-5.4 Thinking. Around it sat a few other developments worth noting: OpenAI's early work on Skills, growing interest in agentic workflows, and continued demand for practical AI training that focuses on how teams actually work.
OpenAI launched GPT-5.4 Thinking at the start of March 2026. According to OpenAI's own testing, individual claims are 33% less likely to be false, and full responses are 18% less likely to contain any errors, than with GPT-5.2.
Those numbers matter because accuracy is still one of the main constraints on real business use. Most organisations do not need a model that sounds more impressive. They need one that makes fewer mistakes when working through documents, analysis, or research-heavy tasks.
The improvements appear strongest on exactly that kind of work: document review, analysis, and research-heavy tasks. The context window has also expanded significantly, which makes the model more practical for teams working with long contracts, regulatory material, board packs, or multiple reference documents in a single workflow.
The simplest advice is this: if your team previously wrote off a task as "too error-prone for ChatGPT", it is worth testing again with GPT-5.4 Thinking.
Skills entered beta for Business, Enterprise, and Edu plans in March. The idea is straightforward but important: take the best repeatable workflows in your organisation and turn them into reusable automations.
If OpenAI executes well, Skills could become one of the bridges between one-person prompting and team-wide operational use. It would let organisations standardise high-value AI workflows instead of relying on scattered individual habits.
That is why we are paying attention to it even before general release. The more AI becomes embedded in repeatable work, the less useful it is to think in terms of isolated prompts. The real question becomes: which workflows should be systematised, shared, and improved over time?
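One way to picture what "systematising a workflow" means in practice is to capture a team's best prompt as a reusable template with declared inputs, rather than an ad-hoc message someone retypes each time. The sketch below is purely illustrative: it is not the Skills API (which is still in beta and whose interface we have not seen), and the `Workflow` class and field names are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """A repeatable AI workflow captured as a shared, reusable template.

    Hypothetical illustration only; not the OpenAI Skills interface.
    """
    name: str
    prompt_template: str
    required_inputs: list = field(default_factory=list)

    def render(self, **inputs) -> str:
        # Refuse to run with missing inputs, so the shared workflow
        # fails loudly instead of producing an underspecified prompt.
        missing = [k for k in self.required_inputs if k not in inputs]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        return self.prompt_template.format(**inputs)

# A team-wide workflow instead of a scattered individual habit
contract_summary = Workflow(
    name="contract-summary",
    prompt_template=(
        "Summarise the key obligations in this contract for {audience}. "
        "Flag any clause that mentions {risk_area}.\n\n{contract_text}"
    ),
    required_inputs=["audience", "risk_area", "contract_text"],
)

prompt = contract_summary.render(
    audience="the finance team",
    risk_area="early termination",
    contract_text="(contract text goes here)",
)
print(prompt.splitlines()[0])
```

The useful property is that the template, not the individual, holds the accumulated judgement: it can be versioned, reviewed, and improved over time, which is exactly the shift from one-person prompting to team-wide operational use.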
Around the same time, our Chief AI Officer, Tom Hewitson, wrote in The Guardian about what organisations get wrong when they approach AI.
The core point was simple: most people struggle with AI because they misunderstand what it is. If you treat it as a shortcut, results are inconsistent. If you treat it as a skill, outcomes improve quickly.
That distinction matters. Teams that succeed with AI do not just buy access to tools. They develop judgement about where AI is useful, where it is risky, and how to work with it deliberately.
We also shared our latest conversation with Access Holdings, one of the world's leading private equity firms.
The most useful lesson from that discussion was that Access did not wait for perfect data conditions before moving. They advanced data strategy and AI adoption in parallel.
That is an important pattern for anyone leading rollout. Too many organisations create a false choice between "fix all the foundations first" and "start experimenting now". In practice, the better route is often to improve data and adoption together, as long as the work is tightly scoped and operationally grounded.
One of the clearer signals in March was that demand for practical AI training is still rising alongside model capability.
That is not surprising. Better models help, but they do not remove the need for people to understand how to use them well. In most organisations the real bottleneck is no longer access to AI. It is the ability to apply it reliably inside actual workflows.
That is why structured training still matters. Teams need more than awareness. They need practice, examples, and a shared sense of where AI helps, where it fails, and how to build stronger ways of working around it.
March also gave us the first public outing for our Agent Builder Workshop.
In the session, participants configured separate AI subagents with distinct research briefs and had them screen a live deal in parallel before producing a structured investment committee memo. The interesting part was not that the agents produced output. It was that different teams got different results because they encoded different investment logic.
That is the point of agentic AI in a business setting. The value is not simply automation. It is the ability to turn a team's logic, judgement, and process into something that can be run, tested, and refined.
March's theme was clear: the AI tools are getting better, but the bigger opportunity still sits with organisations that know how to apply them well.
GPT-5.4 Thinking looks like a worthwhile capability improvement. Skills could become a meaningful operational layer. Agent workflows are becoming more tangible. And teams that build practical fluency now will be far better positioned than teams that wait for the technology to become magically self-explanatory.
Where to next
If this topic is relevant to your team, these pages are the best next stop.
Explore how we train editorial, communications, and creative teams to use AI without flattening quality.
See how we help brand, retail, and customer-facing teams move faster without losing control of voice or quality.
See a real client example of bespoke AI training inside a fast-moving consumer business.