Artificial intelligence and functional safety
Our paper provides a top-level summary to support decision-making on the use of artificial intelligence (AI) in safety-related systems. We aim to highlight the behaviours and risks associated with AI and to consider the techniques and measures that can be applied across the engineering lifecycle.
What is AI safety? How can AI impact safety culture? We define AI as software used to solve problems that it was not specifically programmed for. Current technologies have achieved only relatively low levels of narrow AI. AI is an enabling technology for autonomous systems, and its use in safety-critical product development is increasing significantly and delivering benefits for users.
We have focused on ten key pillars: data; legal and ethical considerations; learning; verification and validation; security; algorithmic behaviours; human factors; dynamic hazards and safety arguments; maintenance and operation; and specification.
This is the first in a series of IET outputs on this topic. A more detailed document is currently being developed and will be published shortly.