
Artificial intelligence and functional safety

Our paper provides a top-level summary to support decision-making on the use of artificial intelligence (AI) in safety-related systems. We aim to highlight the behaviours and risks associated with AI and to consider the techniques and measures applied during the engineering lifecycle.

What is AI safety? How can AI affect safety culture? We define AI as software used to solve problems it was not specifically programmed to solve. Current technologies have achieved only relatively low levels of narrow AI. AI is an enabling technology for autonomous systems, and its use in safety-critical product development is increasing significantly and delivering benefits for users.

We have focused on 10 key pillars: data; legal and ethical considerations; learning; verification and validation; security; algorithmic behaviours; human factors; dynamic hazards and safety arguments; maintenance and operation; and specification.

This is the first in a series of IET outputs on this topic. A more detailed document is currently being developed and will be published shortly.
