Mastering AI in Customer Experience: Full Transcript
A conversation between Tom Hewitson (General Purpose) and Rebecca Brown (Think Wow), recorded February 2026.
Last updated on Friday 20 February 2026
What does AI adoption actually look like inside customer experience teams – and why are so many organisations still stuck at the “dabbling” stage? In this conversation, Tom Hewitson, Chief AI Officer at General Purpose, and Rebecca Brown, co-founder of CX agency Think Wow, work through the practical realities: where AI is helping behind the scenes, why starting with the customer journey matters more than choosing a tool, what good prompting looks like, and where AI should never be deployed without human oversight. This is the full, lightly edited transcript.
What this conversation covers
Why most organisations are still “dabbling” with AI – and what structured adoption looks like
The disconnect between leadership strategy and frontline reality
Why CX transformation has to start with employees, not customers
The lingering damage of bad chatbot experiences – and what’s changed
How Harrods and Octopus Energy are using AI behind the scenes
Why journey mapping matters more than tool selection
Prompting as the most accessible first step for any team
How hallucinations work and when to trust AI output
The “fifteen minutes a day” approach to staying current
Where AI should never be used: vulnerable customers and high-stakes decisions
Transcript
Tom Hewitson: Hello, and welcome. I’m very pleased to be joined today by Rebecca, the co-founder of Think Wow, a CX agency working with organisations like Save the Children, Forestry and Land Scotland, and Mobile Mini UK. Before that, she led branding and experience functions at Purplebricks, RICS, and Ordnance Survey. She’s been named a top global CX influencer three years running by CXM, who are the arbiters of such things in the CX industry. And she’s delivered training and keynotes to over 5,000 people. If anyone knows where AI should be touching CX, it’s going to be Rebecca. Welcome, Rebecca.
Rebecca Brown: Thanks so much, Tom. I always sit there blushing while people introduce me.
I’d like to introduce you in return. This is Tom Hewitson. Tom is Chief AI Officer at General Purpose, the UK’s leading AI training company. General Purpose have helped thousands of people to get value from AI at work, working with companies like BT, Hilton, and Network Rail. Prior to General Purpose, Tom founded Webby Award-winning conversational AI studio labworks.io. He’s worked with huge brands like Meta, Government Digital Service, and BBC Worldwide. Tom literally lives and breathes AI and how organisations can adopt it. If anyone can answer your burning AI questions, it’s going to be Tom. No pressure there, Tom! But back over to you.
TH: What an intro. I feel like we’ve really set ourselves up there, haven’t we? We’ve said that we could answer any questions about CX or AI, which hopefully means that this will be a really good webinar. But also, we’re blatantly going to get stumped. Someone’s going to ask a really difficult question. We can answer any question you throw at us. It’s just that the answer might be, “I don’t know.” In the training that we run, there’s no such thing as a silly question, just silly answers from us.
I want to talk at the beginning about CX and how AI is starting to play a role. Rebecca, with your experience, you obviously work with CX teams the whole time. When you walk in on Day One with a new team, what’s the typical state of AI adoption? Is it good? Is it non-existent? Where are people on this journey?
RB: There’s no one-size-fits-all answer. But most organisations, I wouldn’t describe them as AI mature. They’re early on in their journey. They’re perhaps having a bit of an experimental phase. When we talk to people, one of the questions we ask is, “What level of readiness are you for this change? Where are you at in terms of your training, your knowledge?” And often teams are in the process of being sent on AI courses. Perhaps they’ve got some pilot schemes set up. Quite confidently approaching AI, wanting to embrace it, but not at the point yet where AI is interwoven as an operational layer within their business.
TH: We definitely see a lot of firms describe themselves as “dabbling” when we meet them. They’ve tried a few bits here and there. Maybe some people have tried a few use cases. But there’s no systematic approach. There’s no proper governance in place. It’s a challenge that a lot of firms are facing. And because CX touches almost everything the business does, that lack of structure creates some real potential issues.
Do you see a difference in the way that CX teams are thinking about AI versus the rest of the business? We see leaders having very different conceptions of how the business is using AI compared to the people on the frontline. I’m assuming that in CX that might be the case as well.
RB: To give you a bit of context, we work alongside teams at every level when we walk into an organisation. We talk to really entry-level team members all the way through to executive leadership. And I’ve actually spent time sat as part of the SLT, around the board table, having these strategic discussions and thinking, “We’ve got a real sense of direction here. We know what we’re doing next. We have a plan.” And then you watch as everyone goes back out to the rest of the organisation, and it becomes a little bit like Chinese Whispers: “We had this really strong, clear messaging ... and now all of a sudden, someone’s explained it slightly differently in one team versus another.” And it’s so easy for the cultural change to become really fragmented almost overnight.
We definitely see a difference between what senior leadership teams think is happening in an organisation and what’s actually happening on the ground. There’s also often a difference in what CX teams think is happening, because when you’re in a CX team, it can feel a little bit lonely. The organisation almost sees you as the person who’s going to bash them on the head for doing things wrong. And you can get that reputation for being on the customer’s side over being on the employee’s side.
That’s obviously not the way it needs to work. It needs to be that we help the customer by helping employees, by enabling them, empowering them, by making sure that they have everything that they possibly need to be able to deliver.
But you have to get everybody on-side first, and the critical thing is that customer experience isn’t the job of a customer experience person or a customer experience function alone. It has to be everybody’s job. Everybody has to understand their role in the customer experience. And that’s really the first step – making sure that you’re all speaking the same language.
TH: That’s interesting because when we work with organisations, we very much look at the organisation through the lens of the employees and staff. It’s very much around, how do we get the staff excited and enabled? How do we drive culture change within the staff? Whereas CX is really centred around the customer. And that’s exciting because it’s trying to achieve the same things but from a different perspective.
RB: I would say that we’re not actually different. People think we’re different, but we always approach it as employees first. Any customer experience work that we do, we have to get the teams on side first. We start right at the beginning with understanding, where are the teams now? What are they concerned about? What are the things getting in the way of them being able to do their job confidently? And we get their ideas. We bring them along with us on that journey from the very beginning.
The employee experience and the customer experience should be absolutely interlinked. And that’s really when you see success. When you treat it as “whatever the customer wants, we deliver” – that doesn’t work. All it ever really leaves people feeling is a little bit browbeaten and unrealistic about what they can actually achieve.
TH: What I was reflecting on there – when I used to work in conversational AI, we used to talk about building “better than human experiences,” BTHX. The idea that we all hear of chatbots and we all think they’re terrible. But if you can build a chatbot that allows somebody to do whatever task they want to do at two in the morning – because that’s when they’ve lost their credit card or whatever – rather than having to ring the call centre the next day, that is a better-than-human experience. And there’s something really interesting about how the customer centricity of CX allows you to think about AI adoption as an organisation in a slightly different way.
RB: Absolutely.
TH: What would you say is the biggest misconception about AI and CX?
RB: It basically follows on from exactly what you were just saying. The biggest misconception is that AI is still the terrible chatbots from fifteen years ago. The ones that were so clunky, and you could absolutely tell it was a robot. It’s interesting because it’s simultaneously a misconception, but it’s also still a reality in a lot of places. That’s the sad thing – there are still those terrible chatbots in existence that still give AI a horrible name. And obviously, they need to catch up quickly. Otherwise, they’re going to lose out.
But there are two very different types of AI that I see. I’m not an AI expert, so Tom, please correct me if I’m wrong. But you’ve got your front-of-house AI – your chatbots, your on-screen instructions. And then you’ve also got your behind-the-scenes AI that can really help empower teams, give them the knowledge that they need when they need it, help pull together conversation threads across multiple agents. There are so many uses for AI.
There are still organisations who were burned really badly back in the day by terrible AI chatbots – poor implementation, no support around it, no training for their teams. And it went down like a ton of bricks, not just with their budgets and their return on investment, but also with their teams. When they now say, “Hey, let’s try AI,” there are still people who remember that. There are still teams who really shrink and think, “Oh no. Not again. We can’t go there again.” Actually, it doesn’t have to be like that now. Things are so much better. That’s probably the biggest misconception we see – there’s still lingering mistrust, for the wrong reasons.
TH: We’ve talked a little bit about where things have gone wrong. Can you give some examples of where things are going right? What is working with AI and CX?
RB: There have been some really powerful examples over the last few years, specifically where AI has been used more behind the scenes.
I’m going to talk about Harrods, who are a complicated brand in terms of wider public perception with some of the stuff that’s been ongoing over recent years. I’m not necessarily holding them up as a moral benchmark, but what we can say is that they have really focused on joining up customer data across touchpoints to build more complete views of behaviour, looking at that behaviour to drive analytics, using machine learning to turn that into smarter loyalty and personalisation decisions. That’s where we can see AI used really well in customer experience. And we know that Harrods’ approach is working, because not only have they publicly talked about the uplift in their ability to understand customers and where their customers are going and what they want, but they’ve also won an award. They won the AI Trailblazer category at the CX Excellence Awards back in 2024. And that shows us the huge power that can be had from deploying AI not necessarily front of house, but actually behind the scenes.
Another example – and people might roll their eyes about this depending on how many of these webinars they’ve been on recently – is Octopus Energy. The reason people might roll their eyes is because Octopus are so good that they get talked about all the time. People are like, “Oh, okay, it’s Octopus again. Give us something new.” But actually, we do need to talk about them. They aren’t perfect. They’ve had billing difficulties in recent times as well. But Octopus Energy use AI behind the scenes with their front-of-house agents so that they can pull together conversation threads and surface customer history in real time, in a way that makes it much quicker for them to respond. They use it in email and chat responses – I saw a figure of around 40%. That’s a huge chunk of their customer communications supported by AI, which in turn frees up their human agents for the conversations that really do make a big difference to their customers.
I’m actually a customer of Octopus myself, and it might surprise you to hear that I am a hard customer to have. As someone who likes to get customer experience right and knows what good looks like, you don’t really want me as a customer. But Octopus – I’ve actually been pretty happy with. Those are some really good examples of where it can go right and is at the moment going right.
TH: If you’re a company just starting out on this journey and you’re thinking, “That sounds really great, the work Harrods or Octopus are doing, where they’re managing to get data about the total customer and empower their staff” – what tools are out there to do it? Are you building stuff yourself? Are there good things that you would recommend at different stages in the process? What can people do as that first step?
RB: We do get asked that a lot. I don’t recommend tools specifically for people because each organisation looks slightly different. They’ll have different platforms already integrated. Most organisations that we work with already have way too many platforms that they’re using, and they’re probably in the process of consolidating down, trying to organise what their tech stack looks like.
We don’t necessarily lead with “this is the tool you need.” What we lead with is, what are you trying to achieve in terms of outcomes? What are you aiming for? Because where we come in is to help them organise their approach around tools, and we can help identify the right tools once we know what their values are, what their technical ability is within their teams, what they need to integrate with in terms of CRMs. We wouldn’t just say blanket, “This is the best tool.” We would say we need to look at it a bit deeper – as I’m sure you guys probably find as well.
And it’s more around, what are the systems and processes around the tools that you’re putting in so that you’re not just looking at AI as an isolated delivery mechanism, but as a very substantial operational layer within your strategy? Looking at the end-to-end customer journey.
If you’re going back to “where do they start?” – that’s where we would normally start. We would say, let’s look at your customer journey. Let’s understand what all the steps your customers go through are, from the very first moment they hear about you all the way through to the last time they have any dealings with you. Let’s look to see where that journey is already quite complicated, where some AI could help simplify it, where perhaps it’s a bit of a bottleneck behind the scenes, where you have team members who are sat waiting for days on other team members to reply to them, and we can say, “Look, AI could help here.”
It’s really about looking at the experience end to end first, where it can be supported, and working back from there – so that you don’t just end up launching a tool, hoping it will solve everything, and then thinking, “Oh, actually, it hasn’t.”
I worked with an organisation back in the day when terrible chatbots were the norm. They launched their chatbot on the same day they took their phone number off their website, because they thought, “Well, we don’t need that anymore because they’ve got the chatbot now.” Not only did people get no help from the chatbot, but they had no way of finding out how to phone through. That’s a really good example of what can happen if you don’t look at the holistic picture.
TH: It’s terrible, isn’t it? It’s something that we see the whole time as well – firms want the tool to be the solution. They want to just be able to buy a piece of software and have it give them a silver bullet that makes their business efficient and everything work. And that’s not the reality.
We always say to clients when we get asked what tool they should be adopting: the tools are all fairly similar. Maybe one is better one month and then the next month something different’s better. What matters is whether people are using it effectively. You get far more bang for your buck by selecting any tool and making sure that people use it properly and effectively, and that it’s connected to your data in the right way, than you do worrying about which specific features a particular system has. Because you can usually do what you’re trying to do in most things.
I’m just selfishly asking for myself what a company like General Purpose might want to use, because to date, I’ve actually just been vibe coding and building almost like a CRM but for our learners, just so that we can track them on an end-to-end journey in our learner platform. But I’ve been building it myself because I don’t know what else out there might be good. And I’m interested in your reflection both on what I should be looking at, but also what you think about the role that vibe coding is playing in allowing firms to create bespoke software. It’s something that we’re starting to see a lot – companies using AI to create software to enable specific processes that are unique to them – and whether that might have an impact on CX.
RB: It will have an impact on CX. All of the AI that we’re talking about will impact CX. That’s not one of my strongest areas, and you probably know more about it than I do. What I would say is that we are potentially seeing more people approach areas of non-expertise with a degree of confidence that they’ve perhaps borrowed from AI, thinking, “Oh, well, actually, I can do this myself now.”
We are seeing a reduction in outsourced work in areas like copywriting, for example. That’s been a pretty quick one to fall by the wayside – people think, “Well, AI can just do it. I’ll tell it what my tone of voice is and say, write me something compelling.” And all of a sudden, someone who was a genuine expert at converting visitors on a landing page no longer gets hired to do that job.
That’s probably going to be the same with the coding side as well. If you have someone who understands enough of it – that wouldn’t be me, coding is not my bag – but if you have someone who understands enough of it, they’re probably going to give things a go that historically we’d have hired experts for. Sometimes that will work. And sometimes it will backfire horribly. It’s probably a bit soon to say what that looks like in reality.
TH: We’ve certainly seen a couple of horror stories where people have built internal apps to try and make processes more efficient, but without enough technical knowledge, and they’re leaking all sorts of user data and staff data. We spend a lot of time trying to stamp on that sort of stuff. Making sure people know how to use these vibe coding tools properly and safely is actually very difficult, and it requires quite a lot of intensive training because there’s so much that you can do with them.
If I was to ask you for one thing that you would recommend people who work in CX and maybe have a ChatGPT account could do to improve their CX approach using AI – something easily accessible – what would that be?
RB: The thing that’s probably most easily accessible to most organisations, regardless of size, regardless of budget, is to really hone your prompting skills. Almost all of us could learn to do that better. If you do have access to ChatGPT or another platform – whatever you’re trusted to use – try to be really precise with your prompts, so that you’re not relying on the AI to do all the hard work of figuring out what your brand might be or what you’re trying to aim for.
Give some context around a prompt. What are you trying to achieve with what you’re asking it to do? Make sure you’ve given it things like your brand voice, any brand guidelines. Do you talk a little bit more informally? Do you want it to say “I’m” instead of “I am”? Otherwise it’s going to sound like a robot.
But critically, genuinely verify what it gives you back before you use it. There’s a misconception among people who use AI that it can be trusted to be a hundred per cent accurate all of the time. Actually, AI is still learning, and it’s learning from a huge variety of sources. Sometimes it’s going to give you the truth, and sometimes it’s going to give you what someone else thought was the truth. And all of a sudden, you’ve started spouting something you think is fact, and someone says, “That’s interesting – that’s not actually the truth.”
Get really specific with your prompting. Practise it. You can actually ask it to coach you on that as well. You don’t have to know how to be specific with a prompt. You can say to it, “Can you tell me what I need to give you to get the right answer here?” If I say, “I want you to act as an expert in GDPR” – that’s another scary subject that we all want to try and hide from, isn’t it? – you can say, “Act as an expert in GDPR. Tell me what I need to know to be accurate in this,” and then you can have a look through. You can even get it to sense-check itself. You can say, “Can you verify this against reputable sources? Can you tell me where you’ve got this information from? Is it from a UK-based government website that I can trust, or is it actually from someone down the road who’s decided to write a magazine on GDPR?”
Prompting is accessible to absolutely everybody even if you haven’t got a great budget. You can do it at home. A lot of us are actually probably braver at home with AI, using it in our personal lives, than we are at work, because we haven’t got the fear of being slapped on the wrist for sharing too much data or anything.
And then just pairing that with: don’t ever do anything in isolation. Look at your wider customer journey alongside anything you’re asking AI to do and ask, what’s the impact there? Where could we be damaging or enhancing things?
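A minimal sketch of the structured prompt Rebecca describes – brand context, a clear task, then a self-verification pass – in Python, using the OpenAI SDK as one assumed platform. The brand, the guideline text, and the model name are illustrative placeholders, not recommendations:

```python
# A minimal sketch of a structured prompt: brand context first, then the task,
# then a second pass where the model sense-checks its own output.
# Assumes the OpenAI Python SDK; the brand, guidelines, and model name
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND_CONTEXT = """You are writing for Acme Retail (a hypothetical brand).
Voice: warm and informal. Use contractions ("I'm", not "I am").
Audience: existing customers asking about delayed deliveries."""

task = "Draft a short email apologising for a delayed order and explaining next steps."

draft = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model your organisation allows
    messages=[
        {"role": "system", "content": BRAND_CONTEXT},
        {"role": "user", "content": task},
    ],
)
email_text = draft.choices[0].message.content

# Verification pass, as Rebecca suggests: ask the model to check its own work,
# then have a human review anything it flags before the email is sent.
review = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Review the email below against these brand voice rules: warm, "
            "informal, uses contractions. List anything that breaks the rules, "
            "and flag any factual claims a human should verify.\n\n" + email_text
        ),
    }],
)
print(email_text)
print(review.choices[0].message.content)
```

The same structure – context, task, then a verification step – works just as well typed straight into a chat window; the code only makes the steps explicit.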
TH: When we work with clients, the first thing we do is ask people how they’re using AI at home in their personal life, because it correlates really closely with how proficient they are at work. And they’re much more honest about their AI usage at home than they are about their AI usage at work.
On the hallucination side – three years ago, every client request was, “I just want you to get my team using ChatGPT.” Now every single client we work with is saying, “We want to make sure that they are thinking critically about how they use AI. We want people to remain aware that the human is responsible for the output. We want them to still take ownership over what the AI is creating and make sure that they’ve signed it off and made sure it’s good.”
Too few people understand how these systems work well enough to know that the fundamental nature of an LLM is to hallucinate, because it is just predicting the next most likely word. If there is a lot of factual evidence for something in the training data – i.e. out on the internet – the next most likely word is probably going to be factually correct. But if there is no signal, it’s going to predict whatever it thinks comes next, which might just be, “The next word I need is a verb.” It’s always going to predict the next most likely word, and it’s going to do that on whatever signal it has available.
If there is not enough signal available, one of the first things that we teach in the classes is: when you are asking a question, you need to ask yourself, is it reasonable to expect this information to be available on the internet? And if the answer is no, then it’s going to hallucinate the answer when it gives it to you. There’s not really any point in asking it.
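Tom’s “next most likely word” point can be shown with a toy calculation. This self-contained sketch uses an invented three-word vocabulary and made-up scores purely for illustration – real models work over tens of thousands of tokens, but the failure mode is the same:

```python
# Toy illustration of next-token prediction. The model always produces a
# "most likely next word", whether or not the underlying signal is strong.
# The vocabulary and scores here are invented purely for illustration.
import math

def softmax(scores):
    """Turn raw scores into a probability distribution over words."""
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Strong signal: the training data overwhelmingly supports one continuation.
strong = softmax({"Paris": 9.0, "Lyon": 2.0, "Berlin": 1.0})

# Weak signal: nothing clearly favours any continuation, but the model still
# returns a confident-looking word. This is where hallucination comes from.
weak = softmax({"Paris": 1.1, "Lyon": 1.0, "Berlin": 0.9})

for label, dist in [("strong signal", strong), ("weak signal", weak)]:
    best = max(dist, key=dist.get)
    print(f"{label}: picks {best!r} with p={dist[best]:.3f}")

# strong signal: picks 'Paris' with p=0.999 -> likely to be factually right
# weak signal:   picks 'Paris' with p=0.367 -> a guess dressed up as an answer
```

Either way the model answers; only in the first case is the answer anchored to anything. That is why the question “is it reasonable to expect this information to be on the internet?” is such a useful filter.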
Audience question: What are the key use cases that you think are the lowest-hanging fruit for CX across sectors? And what are the most common mistakes that you have seen?
RB: The key use cases are probably similar to the ones I’ve highlighted already with Octopus and Harrods. It’s delivering joined-up insights from customer feedback coming into the organisation. If you say, “We want to be more customer-centric,” the answer to how you get more customer-centric hasn’t changed over the fifteen years that I’ve worked in CX. It’s still: you listen. And if you can listen in a way that isn’t missing anything, then that’s even better.
That’s one of the things AI is really good at – pulling disparate threads of information around customers together into one single source of truth. If ten agents spoke to twenty customers over here, and then someone actually just went to a conference and heard a couple of comments, and we enter it all in, we can pull it all together and say, “What do customers actually want?” Clean up your data to make sure that what you’ve got behind the scenes actually makes sense before you start making strategic customer experience decisions.
And then specifically, giving agents support when people are live and talking to customers – getting them the information they need as fast as they can in a way that they can easily digest. That’s really important.
Those things aren’t necessarily easy, though. The question I would ask all of you is: how clean is your data right now? Do you trust the data your company has? Do you think you have accurate, up-to-date customer information across all of your systems? Do you trust that your agents are taking notes after every call? Or, actually, is that something that AI could do for you? There are things we can look at, but we can’t look at AI in isolation. It has to be looked at as part of the wider picture.
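As a rough sketch of that “single source of truth” idea, here is what consolidating feedback from several channels before asking an LLM to surface themes might look like. The channel names, record fields, and model are assumptions for illustration, not a recommendation of any particular stack:

```python
# Rough sketch: normalise customer feedback from several channels into one
# structure, then ask an LLM to group it into themes. Channel names, fields,
# and the model are illustrative assumptions, not a recommended stack.
from dataclasses import dataclass
from openai import OpenAI

@dataclass
class Feedback:
    channel: str    # e.g. "call_notes", "chat", "conference"
    customer: str   # "unknown" where we can't attribute it
    text: str

# In practice these records would come from your CRM, chat platform,
# survey tool, agents' call notes, and so on.
records = [
    Feedback("call_notes", "cust-014", "Waited three days for a reply about my refund."),
    Feedback("chat", "cust-201", "The chatbot couldn't find my order and there was no phone number."),
    Feedback("conference", "unknown", "Overheard: 'their self-service portal is confusing.'"),
]

# One normalised view: the 'single source of truth' for this exercise.
corpus = "\n".join(f"[{r.channel}] {r.text}" for r in records)

client = OpenAI()
summary = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{
        "role": "user",
        "content": (
            "Group the customer feedback below into themes, note which journey "
            "steps they point at, and quote the evidence for each theme rather "
            "than inventing detail.\n\n" + corpus
        ),
    }],
)
print(summary.choices[0].message.content)
```

Note the instruction to quote evidence rather than invent detail – that is the same verification habit Rebecca described earlier, applied to analysis rather than copywriting.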
The common mistakes – I’m keen to hear what you see actually, Tom, because I’m sure you can probably spot some of these quicker than I can. But the two biggest ones: the first is thinking that AI can replace people, because it still can’t. As you said, we need to be able to critically question what’s spat back at us. And the second is thinking that it will solve everything, because it can’t. It might well make things a bit easier in one place.
Oh, and while I’m thinking about it – one of the other big ones is using AI that’s so good you can’t really tell it’s AI, and then not telling people it’s AI. Because if they then find out it was AI all along, that really damages trust. If you’re using AI for parts of your customer-facing operations, even if it’s really good and it’s delivering better empathy than its human counterparts, still let people know. “Hey, you’re talking to one of our AI systems today. If you need to talk to a human, here’s how you do it.”
TH: Completely agree on the transparency thing. The common mistakes that we see are not really people’s fault. They are a byproduct of how quickly the space is moving.
We still see a lot of people who saw a prompt framework on LinkedIn eighteen months ago – “Act as this person. Do this as a seven-step prompt.” All of that’s completely irrelevant today. It doesn’t do anything to benefit the answer that you get, and half the time it’s actively detrimental.
We also see people who tried AI eighteen months ago, found it couldn’t do something, and formed their opinion based on that. The technology has moved on a lot since then. The Claude Opus 4.5 model that came out late last year, the more recent 4.6, and the latest GPT Codex model – they are qualitatively better at all sorts of tasks. Tasks that twelve months ago were completely impossible, like building a good, reliable set of reports or dashboards from a spreadsheet. That didn’t work then. Now it’s as simple as asking. And a lot of people don’t realise that, because it didn’t work when they tried, and there’s no easy way for them to find out – apart from being a General Purpose customer and getting an email from us. Apart from that, there is no good way to discover, “Oh wait, this thing that I really wanted to do that it couldn’t do, it now does.” Because nobody really tells you.
One of the things that we try to encourage people in the classes to do is to look at AI usage as something they need to continually invest in. We recommend that people spend fifteen minutes a day – just fifteen minutes a day – either playing with AI or reading about it, just enough to keep them on top of what’s going on and to keep their skills building, without it becoming really onerous. What we don’t want is a situation where people feel like, “Oh god, I’ve got to spend half my day playing with AI.” Because most people have actual jobs to do, and we want them to actually be able to do those jobs. It’s trying to find enough time to improve your AI ability or stay current whilst not cutting into your work too much. And fifteen minutes is the sweet spot.
The nice thing with that is that it becomes a positive cycle. For every fifteen minutes you spend, you learn something that will save you more time in the future. You free up more of your time, and then that commitment becomes easier.
RB: We say a similar thing with customer experience – for everything that you improve, the time will come back to you. And then all of a sudden, you’ve got all this extra time to think strategically that you didn’t have twelve months ago.
TH: Exactly. We recommend people make a cup of tea sometime mid-morning and have a little play with AI, or a little read about the latest things that have come out.
I know that we’re getting on towards time now, so I’m going to ask you a couple of wrap-up questions if that’s okay. I know from the people who’ve introduced themselves that there are some CX leaders in this conversation. If they have a limited budget – they might work at a big organisation, but often when you’re starting out you have a limited budget and a sceptical team – what’s the one thing that you would tell them to prioritise in terms of adopting AI into their CX?
RB: Make sure that you are highlighting the right places in your journey to implement it. Journey mapping is fairly time-consuming, but it’s not hugely expensive, as long as you’ve got a team member who can do it – or you can bring in people like me and Dan to do some journey mapping for you. But ultimately, it’s about looking at the step-by-step things your customer is going through and where they’re getting stuck, and prioritising those places first.
With one caveat. You need to be really critically aware of anywhere that you might have vulnerable customers. I would always say: don’t implement AI where there is either vulnerability or where being given a wrong answer could materially affect something quite substantial in someone’s life – around their finances, around any legal advice, anything like that.
Identifying through journey mapping: where are our crunch points? Where could we make life easier for our customers? Whether that’s front of house, getting them the right self-service information quicker – chatbots, clever routing on the website – or whether it’s actually behind the scenes in terms of what we’re giving our teams, how we’re empowering them to be able to help people better. But do it strategically by identifying where you’re at biggest risk in that customer journey first.
TH: I completely agree with that. When I worked at GDS, we always used to talk about the importance of giving people escape hatches – that if they were going through a process and it wasn’t working for them for whatever reason, there was an alternate way to solve their problem.
And the thing that’s really nice about your conclusion there is that a lot of the things that still matter are the human element – that empathy, that understanding of how people want to interact with your service, how people want to experience your business. And that is something that AI is never going to take away, because AI doesn’t have emotions. It doesn’t have feelings. And so it can never do that job effectively.
That’s probably a good point to wrap up, unless there are some questions. If anyone has any final questions, now is your chance to put them in the chat. I’ll give you a minute to post them in. And Rebecca, if you have any questions for me that you’d like to ask while we’re waiting?
RB: My biggest burning question would be: given how much better AI is now – we talked about how awful it used to be, we know it’s much better – why do you think good AI still fails?
TH: Fundamentally, as I said earlier, AI is just predicting the next most likely word, and it’s got very good at doing that – to the extent that it can produce a pretty accurate facsimile of intelligence a lot of the time. But there are all sorts of other things that go into human intelligence that are not captured in a large language model, including emotions. It’s not really possible for the AI to be good at that, because there’s no training data for it to learn from. And so there will always be things that AI cannot really do, because there’s no way for it to easily learn them. That’s always going to be the purview of people. That would be my answer for why AI is not going to replace us entirely.
RB: I love it. And it’s a nice hopeful note to end on, isn’t it? Don’t worry, everyone. We’re still necessary.
TH: All right. Well, that looks like no more questions. Thank you so much for attending, and thank you so much, Rebecca. I really enjoyed this chat, and I hope you did too. The recording didn’t work, so the live session was very much an exclusive, one-time thing – you were either there or you weren’t – but there will be a blog post afterwards. Please look out for that. Thanks, everyone.
RB: Thank you very much.
Join our next events
Enjoyed this conversation? We run regular webinars and workshops on putting AI to work.
Beyond Digital Optimisation – Learn from One of the World's Leading Private Equity Firms (16 March 2026) – A fireside chat with business leaders from Access Holdings on the value AI is unlocking across their portfolio -> generalpurpose.com/events/mastering-ai-private-equity-access-holdings
ChatGPT Essentials: Half-Day Online Workshop (30 March 2026) – Hands-on training for anyone who wants to use AI confidently in their daily work -> generalpurpose.com/events/chatgpt-essentials-march-2026
Explore more
See the results our AI training delivers -> generalpurpose.com/results
Visit Think Wow -> thinkwow.co.uk
About the speakers
Tom Hewitson
Tom is Chief AI Officer at General Purpose, the UK’s leading AI training company. He previously founded Webby Award-winning conversational AI studio labworks.io and has led AI projects for Meta, Government Digital Service, and BBC Worldwide. He works with organisations of all sizes to embed AI into their teams through practical, hands-on training.
Rebecca Brown
Rebecca is co-founder of Think Wow, a customer experience consultancy working with organisations including Save the Children, Forestry and Land Scotland, and Mobile Mini UK. She has led CX and branding functions at Purplebricks, RICS, and Ordnance Survey, and has been named a top global CX influencer three years running by CXM. She has delivered training and keynotes to over 5,000 people.