Everyone would agree that taking an ethically principled approach to using AI is essential – but in practice, it is the governance and assurance mechanisms for implementing AI responsibly that really matter
By Tess Buckley, TechUK
Published: 14 Nov 2024
The conversation around digital ethics has reached a critical juncture. While we are inundated with frameworks and guidelines that tell us what responsible artificial intelligence (AI) should look like, organisations face a pressing question – how do we actually get there?
The answer may lie not in more ethical principles, but in the practical tools and standards that are already helping organisations transform ethical aspirations into operational reality.
The UK’s approach to AI regulation, centred on five core principles – safety, transparency, fairness, accountability, and contestability – provides a solid foundation. But principles alone aren’t enough.
What has emerged is a practical array of standards and assurance mechanisms that organisations can use to implement these principles effectively.
Standards and assurance
Consider how this works in practice.
When a healthcare provider deploys AI for patient diagnosis, they don’t just need to know that the system should be fair – they need concrete ways to measure and ensure that fairness.
This is where technical standards like ISO/IEC TR 24027:2021 come into play, providing specific guidelines for detecting and addressing bias in AI systems. Similarly, organisations can employ and communicate assurance mechanisms such as fairness metrics and regular bias audits to monitor their systems’ performance across different demographic groups.
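To make this concrete, here is a minimal sketch of one such fairness metric: the demographic parity gap, the difference in positive-outcome rates between demographic groups. The function, sample data and tolerance threshold are illustrative assumptions for this article, not drawn from ISO/IEC TR 24027:2021 itself.

```python
# A minimal sketch of one common fairness check: the demographic
# parity gap, i.e. the difference in positive-prediction rates
# between demographic groups. Names and thresholds are illustrative.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-outcome rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: flag the system for review if the gap exceeds a tolerance.
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
if gap > 0.2:  # the tolerance would be set by organisational policy
    print(f"Potential disparity detected: {rates}")
```

A regular bias audit would run checks like this across each protected characteristic, with the acceptable tolerance set by policy rather than hard-coded.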
The role of assurance tools is equally crucial. Model cards, for instance, help organisations demonstrate the ethical principle of transparency by providing standardised ways to document AI systems’ capabilities, limitations and intended uses. System cards go further, capturing the broader context in which AI operates. These aren’t just bureaucratic exercises; they’re practical tools that help organisations understand and communicate how their AI systems work.
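As an illustration of the kind of information a model card captures, here is a hedged sketch in Python. The field names and the hypothetical healthcare model are assumptions for demonstration, not an official model card schema.

```python
# An illustrative (not official) shape for a model card record,
# covering the kinds of fields transparency documentation suggests.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    evaluation_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)

# A hypothetical card for the healthcare example above.
card = ModelCard(
    model_name="triage-risk-v2",
    intended_use="Prioritise referrals for clinician review",
    out_of_scope_uses=["Fully automated diagnosis"],
    training_data_summary="De-identified referrals, 2019-2023",
    evaluation_metrics={"auroc": 0.87, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for paediatric patients"],
)
```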
Accountability and governance
We’re seeing particularly innovative approaches to accountability and governance. Organisations are moving beyond traditional oversight models to implement specialised AI ethics boards and comprehensive impact assessment frameworks. These structures ensure a proactive approach, making certain that ethical considerations aren’t just an afterthought but are embedded throughout the AI development lifecycle.
The implementation of contestability mechanisms represents another significant advance. Progressive organisations are establishing clear pathways for individuals to challenge AI-driven decisions. This isn’t just about having an appeals process – it’s about creating systems that are genuinely accountable to the people they affect.
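As a sketch of what such a pathway might record, consider the following minimal appeal log. All field names and the routing step are hypothetical; the point is that contestability implies capturing the decision being challenged, the grounds, and a human-reviewable outcome.

```python
# A minimal, hypothetical sketch of a contestability record: enough
# information to route an appeal to a human reviewer and audit the
# outcome. Field names are illustrative, not from any standard.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Appeal:
    decision_id: str        # the AI-driven decision being challenged
    subject_id: str         # the affected individual
    grounds: str            # the person's stated reason for appeal
    received_at: datetime
    reviewer: Optional[str] = None
    outcome: Optional[str] = None  # e.g. "upheld", "overturned"

def open_appeal(decision_id: str, subject_id: str, grounds: str) -> Appeal:
    """Log a new appeal and queue it for human review."""
    appeal = Appeal(decision_id, subject_id, grounds,
                    received_at=datetime.now(timezone.utc))
    # In practice this would be persisted and routed to a review board.
    return appeal
```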
But perhaps most encouraging is how these tools work together. A robust AI governance framework might combine technical standards for safety and security with assurance mechanisms for transparency, supported by clear processes for monitoring and redress. This comprehensive approach helps organisations address multiple ethical principles simultaneously.
The implications for industry are significant. Rather than viewing ethical AI as an abstract goal, organisations are approaching it as a practical engineering challenge, with concrete tools and measurable outcomes. This shift from theoretical frameworks to practical implementation is crucial for making responsible innovation achievable for organisations of all sizes.
Three priorities
However, challenges remain. The rapidly evolving nature of AI technology means that standards and assurance mechanisms must continually adapt. Smaller organisations may struggle with resource constraints, and the complexity of AI supply chains can make it difficult to maintain consistency in ethical practices.
In our recent TechUK report, we explored three priorities that emerge as we look ahead.
First, we need to continue developing and refining practical tools that make ethical AI implementation more accessible, particularly for smaller organisations.
Second, we must ensure better coordination between different standards and assurance mechanisms to create more coherent implementation pathways.
Third, we need to foster greater sharing of best practices across industries to accelerate learning and adoption.
As technology continues to advance, our ability to implement ethical principles must keep pace. The tools and standards we’ve discussed provide a practical framework for doing just that.
The challenge now is to make these tools more widely available and easier to implement, ensuring that responsible AI becomes a practical reality for organisations of all sizes.
Tess Buckley is programme manager for digital ethics and AI safety at TechUK.