How to Adopt AI Without Abandoning Your Principles
Aisling Connaughton, founder of sustainability consultancy Cyd Connects, in conversation with Tom Hewitson, Chief AI Officer at General Purpose, in October's AI Ethics Webinar
Organisations adopting AI face a practical problem. The technology offers genuine productivity gains, but it raises legitimate questions about environmental impact, job displacement, and who actually benefits from the innovation.
Most people are stuck between two bad options: waiting for perfect ethical frameworks that aren't coming, or moving fast and ignoring the consequences. The answer is to adopt deliberately, with clear principles that protect your values whilst capturing value from the technology.
A Framework That Works in Practice
The reality is that ethical AI adoption isn't about solving every philosophical question before you start. It's about making deliberate choices as you go. And I think there are some pretty clear principles you can follow.
Get everyone using AI in their daily work first. This is the foundation of ethical adoption, and it's the opposite of what most organisations do. They try to set all the policies and guardrails before anyone touches the technology. But here's the thing: when everyone in your organisation is actually using AI day-to-day, you're far more likely to spot biased outputs, problematic patterns, or unintended consequences.
More importantly, you're building the skills and confidence people need to articulate concerns about ethical usage. Someone who's never used ChatGPT can't really engage in a meaningful conversation about when it should or shouldn't be used. But someone who's been using it for three months? They've got opinions. They've seen where it works and where it falls down. They can contribute to the ethical framework because they understand the technology.
This means proper, ongoing training for everyone. Not just the technical team. Not a two-hour introduction. Real upskilling that gives people confidence and capability to use these tools thoughtfully.
Be transparent about what AI actually does. Once people are using AI, be clear about how it works and what decisions it's making in your business. If AI is processing customer data, tell your customers. If it's influencing hiring or performance reviews, document exactly how and who has final authority.
And the thing is, this isn't just good ethics. When something goes wrong (and eventually, something will), prior transparency is the difference between a manageable incident and a trust crisis that takes months to repair.
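To make that concrete, here's a minimal sketch of what a decision record could look like. This is illustrative Python, not a standard or a recommendation of any particular tool - the field names, the model name, and the example values are all my own assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable entry per AI-influenced decision (illustrative only)."""
    decision: str            # e.g. "CV shortlisting for role #1234"
    system: str              # which tool or model produced the output
    ai_recommendation: str   # what the AI actually suggested
    final_authority: str     # the named human who signed off
    overridden: bool         # did the human depart from the AI's suggestion?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical example: a hiring decision where a human reviewed
# the AI's suggestion and overrode it.
record = AIDecisionRecord(
    decision="CV shortlisting for role #1234",
    system="internal-screening-model-v2",
    ai_recommendation="reject",
    final_authority="j.smith@example.com",
    overridden=True,
)
print(record)
```

The point is simply that "who had final authority" becomes a recorded fact, not something you reconstruct after an incident.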
Keep humans in the decision loop. AI tools should be assisting human judgement, not replacing it. Build processes that require human sign-off for anything that materially affects people - hiring decisions, dismissals, customer complaints, contract terms.
The technology will push you towards full automation because it's cheaper and faster. But I think you need to push back on that. The most significant ethical failures in AI happen when organisations remove human oversight to save costs, then discover too late what their systems were actually doing.
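If you want to see what "human sign-off" looks like as an actual guardrail rather than a policy document, here's a minimal sketch of the pattern. Everything in it - the category names, the approver address - is a made-up illustration, not a prescription.

```python
from typing import Optional

# Decisions that materially affect people must never be auto-applied.
# The category names here are illustrative assumptions.
REQUIRES_HUMAN_SIGNOFF = {"hiring", "dismissal", "customer_complaint", "contract_terms"}

def apply_decision(category: str, ai_recommendation: str,
                   human_approver: Optional[str] = None) -> str:
    """Return the final decision, refusing to auto-apply sensitive ones."""
    if category in REQUIRES_HUMAN_SIGNOFF:
        if human_approver is None:
            raise PermissionError(
                f"'{category}' decisions require human sign-off; "
                "AI output is advisory only."
            )
        # Record who carried final authority, per the transparency principle
        print(f"{category}: '{ai_recommendation}' approved by {human_approver}")
    return ai_recommendation

# Fails fast if someone tries to fully automate a dismissal:
# apply_decision("dismissal", "terminate contract")  # raises PermissionError
apply_decision("dismissal", "terminate contract",
               human_approver="hr.lead@example.com")
```

The design choice worth copying is that the system fails loudly when oversight is missing, rather than quietly proceeding.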
Understand your actual environmental impact. Okay, so let's talk about the carbon footprint question, because there's a lot of confusion here and I think it's worth getting the facts straight.
You've probably heard claims that using ChatGPT is like boiling a kettle or buying a latte. The latte comparison? It's nonsense. A single ChatGPT query emits about 2-4 grams of CO2. A latte emits around 840 grams. That makes the latte somewhere between 200 and 400 times more carbon-intensive.
The more useful comparison is flights. You'd need to make something like 125,000 to 150,000 ChatGPT queries to equal the carbon footprint of one economy seat on a transatlantic flight from London to New York.
Now, does this mean AI has no environmental impact? Of course not. Data centres use significant energy, and as AI scales, that impact grows. But for most businesses, the carbon footprint of your AI usage is probably smaller than your office heating, your staff commute, or your business travel.
The point isn't to ignore the environmental question. It's to understand it accurately. Track your usage. Know your footprint. Make informed trade-offs rather than either dismissing the issue entirely or treating every query as an environmental crisis.
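Here's a rough back-of-the-envelope version of that tracking, using the figures quoted above. The team size and query volume are invented for illustration, and the per-query emissions figure is an estimate - substitute your own provider's numbers where you have them.

```python
# Back-of-the-envelope footprint check using the figures quoted above.
# The per-query figure (2-4 g CO2) is an estimate, not a measured value.

GRAMS_PER_QUERY = 3.0          # midpoint of the 2-4 g range above
GRAMS_PER_LATTE = 840.0
GRAMS_PER_TRANSATLANTIC_SEAT = 500_000.0  # ~0.5 t, one economy seat, London-New York

# Hypothetical: a 50-person team averaging ~20 queries per working day each
monthly_queries = 20_000

footprint_kg = monthly_queries * GRAMS_PER_QUERY / 1000
print(f"Monthly AI footprint: ~{footprint_kg:.0f} kg CO2")
print(f"Equivalent lattes:    ~{monthly_queries * GRAMS_PER_QUERY / GRAMS_PER_LATTE:.0f}")
print(f"Share of one flight:  ~{monthly_queries * GRAMS_PER_QUERY / GRAMS_PER_TRANSATLANTIC_SEAT:.1%}")
```

Even with generous assumptions, this kind of estimate tends to confirm the point above: for most businesses, AI usage sits well below travel and heating on the emissions list.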
Why This Matters Strategically
Here's what I think most people miss about AI ethics. This isn't just about doing the right thing (though obviously that matters). It's about power and leverage.
The AI industry is consolidating really fast. Training sophisticated models requires hundreds of millions in capital and access to scarce technical expertise. The number of organisations capable of building foundational AI systems is small and shrinking. We're looking at a natural monopoly situation.
And for mid-sized businesses, this creates a real problem. If you don't develop internal capability now, you're going to be buying expensive tools from monopolistic providers later, with no influence over how they work or what values they embed.
The organisations that establish ethical AI practices today will have leverage tomorrow. They'll be able to demand transparency, fair pricing, decent terms from providers. They'll have the expertise to know when they're being sold something that doesn't actually serve their interests.
The organisations that wait? They'll just take what they're given.
There's also a broader question that individual businesses can't solve alone, but that I think they need to start pushing on. These AI systems are being built using publicly funded research and infrastructure. But the returns are flowing primarily to a small group of shareholders. And whether AI becomes a tool for broad economic benefit or just another efficient wealth-concentration mechanism depends largely on regulatory choices being made right now.
Governments struggle with this. They lack technical expertise. They're terrified of driving AI companies overseas. They're caught between innovation concerns and genuine risks.
But you have leverage as customers. You can demand transparency, interoperability, and fair pricing from providers. Industry groups can push for regulation that prevents monopolistic abuse without killing innovation. Businesses that adopt AI ethically can show what responsible practice actually looks like, creating pressure on others to follow.
And I think the collective action point really matters here, because individual ethical choices, whilst important, aren't going to prevent monopolisation. That requires coordinated pressure. But that coordination only works if enough organisations develop the internal expertise to understand what they should be demanding in the first place.
Where to Actually Start
Start by getting people using AI in their daily work. Not in a single controlled pilot, but across the organisation for the tasks they're already doing - writing emails, analysing data, preparing presentations, researching competitors.
Give everyone access. Provide proper training. Create space for people to experiment and learn. Encourage them to share what works and what doesn't. This is how you build the institutional knowledge to make good ethical decisions about AI.
Once people have real experience with the technology, then you can have meaningful conversations about where to deploy it more formally, what guardrails you need, and what your ethical principles should actually look like in practice.
The goal isn't to solve every ethical question upfront. It's to develop the institutional muscle to make thoughtful choices as you scale. The organisations building this capacity now will shape how AI gets used in their sector. The ones that don't? Those decisions get made for them by a handful of tech companies in California.
And here's the uncomfortable bit. The technology isn't going away. The window for shaping how it works in your industry is open, but it's not going to stay open forever.
So, choose deliberately. Start soon. Because the alternative isn't avoiding AI's downsides. It's accepting someone else's ethics by default.
Want to go deeper? Watch our full Ethical AI Webinar, where General Purpose’s Tom Hewitson and Cyd Connects’ Aisling Connaughton discuss how to adopt AI responsibly — covering environmental impact, jobs, governance, and inclusion. Watch the video → https://www.generalpurpose.com/updates/can-ai-use-ever-be-ethical.