Industry insight
Safe use of AI on the edge
By Dominic Lenton
A new IET and Dstl publication provides expert advice for executives and senior managers on how to approach the implementation of AI in safety-related systems.
The public may not have a deep understanding of how AI and machine learning work, but they are increasingly aware of the technology’s ability to enhance the functionality of computer-based systems. Chatbots, text and image generation tools and sophisticated search engines are just a few of the AI-enabled tools now firmly established in everyday life.
With more significant applications emerging in fields like healthcare diagnostics, security and finance, the extent to which decisions made by AI can affect people’s lives is attracting greater attention. People are naturally concerned about issues such as bias, explainability, fairness, legislative compliance, ethics and trust.
This interest – and the need for developers to take it into account – is particularly significant in safety-related applications, where it’s becoming clear that closer scrutiny is required to ensure that the use of AI can be justified, and the risks understood and mitigated.
If AI is adopted as rapidly as many forecasts predict, it will become hard to avoid situations where it is critical to ensuring safety. The plane or train you’re travelling in may rely on it for a host of features. Even if your car doesn’t offer autonomous driving, its anti-lock braking, electronic stability control and airbag systems could all use it. Attend a hospital appointment and it might control medical devices, analyse test results, or be used to track patient data through electronic health record and health monitoring systems. Workers in a factory or power plant may be protected by AI that operates emergency stop and machine-guarding systems.
Techniques used to develop the algorithms behind this kind of decision-making are very different to those that underpin established safety standards. It has become clear that alternative safety arguments, developed with AI in mind, are needed to bridge the gap and demonstrate compliance with conventional good practice.
At the same time, implementing AI requires consideration of the risks of adoption, of governance and ethics, and of the implications of collecting and processing data and then using it in machine-learning applications. Knowing how a system is constructed isn’t enough; developers need to understand its behaviour so that there is confidence in its output.

Alignment with human factors is important too. Because systems are often a pairing of machine and user, questions of where responsibility for maintenance and operation lies need to be clearly articulated and understood. A system should only be used in a way that is safe, and only in environments and contexts for which it has been validated.
Although there is no shortage of guidance available about how AI can be used to increase productivity in business, senior managers responsible for safety-related systems can find it hard to locate authoritative advice on their specific needs. Our Engineering Safety Policy Panel has gone some way towards remedying that with ‘The Application of Artificial Intelligence in Functional Safety’, a new publication written by a team of experts with support from the Defence Science and Technology Laboratory (Dstl).

Aimed at non-specialist senior management, this high-level work is designed to provide information that is vital to decision-making. At its heart are 10 key ‘pillars of assurance’ that need to be considered when assessing the risks of using automated systems in safety-related environments. With the objective of ensuring the safety and wellbeing of individuals, as well as protecting assets and the environment, it outlines the additional considerations required in engineering processes and provides advice on building assurance cases.
Ten ‘key pillars’ highlight points that should be considered when implementing AI elements within safety-related systems.
IET Fellow Dr Alec Banks, senior principal scientist in the Advanced & Dependable Autonomy Team at Dstl, contributed to the publication. As he explains, it is not an attempt to examine AI technologies in detail, or to provide a formal ‘rule book’ or an approved method of software development. “Our aim was to highlight the risks associated with the use of AI in safety-related systems, adopting recognised definitions where they are already established and addressing the fundamental differences between traditional and AI-based software throughout the engineering life cycle.”
Importantly, the working group was drawn from volunteers across industry and government, he adds. “This provided a broad range of perspectives and avoided fixation on any specific sector’s needs. We believe it will be of value to senior-level decision-makers in understanding the high-level considerations that should be made around the adoption of AI in functional safety.”
Join our ethical AI webinar
Hit the child who has run into the road, or swerve and collide with other pedestrians or vehicles? The scenario in which an autonomous vehicle has to quickly choose between different courses of action, either of which has potentially fatal outcomes, is a well-known example of how legal and ethical issues come into play when we trust AI to make decisions.
A set of values, principles and techniques based on accepted standards of right and wrong is emerging to guide the development and use of AI technologies. Known as AI ethics, this sub-field of applied ethics addresses the harms that misuse, abuse, poor design or negative unintended consequences can cause.
As IET president Dr Gopichand Katragadda emphasises in his article, engineers have a responsibility to consider all the implications for individuals and society when they use the power of AI in products and systems. ‘Navigating the AI landscape – Laws, ethics, and responsibility’, a webinar organised by our AI Technical Network (TN) to be held on 18 September 2024, provides an opportunity to get up to date with current thinking and contribute to the debate about the direction it needs to take in the future.

“As the world of AI gains traction and we innovate more than ever, we always need to remember our obligations to ethics, law and safety,” says Kirsten McCormick, a systems engineer and AI lead at General Dynamics Mission Systems UK, who chairs the AI TN and is among the expert speakers lined up for the event.
“As we navigate this AI landscape, we encounter new challenges in the forms of data complexities, model intricacies and ethical dilemmas. It is crucial for us to be responsible and accountable for our solutions, ensuring that we put in place all processes necessary to develop ethically for the good of society.”