A new report, Advancing safety in transport through automation, looks at four modes of transport: road, air, maritime and rail.
It reviews current approaches to safety and risk before covering the expected impact of autonomy in the sector, with the view that improvements in safety standards, cost and knowledge sharing could in turn accelerate the introduction of autonomy.
In one major recommendation, the report calls for the Department for Transport (DfT) to establish a road crash investigations branch, mirroring current practice in rail, air and maritime, in order to contribute to safety improvements as the automotive industry moves towards autonomy.
Other recommendations to improve safety cover the use and storage of data collected by artificial intelligence (AI), the review and implementation of robust cyber security standards, and further investment in the public perception of autonomous vehicles.
This in turn will help build trust in inclusive systems, increase take-up and address concerns about negative impacts.
Report author and chair of the IET’s Transport Policy Panel Safety in Autonomy Working Group, Lambert Dopping-Hepenstal, said:
“The development of autonomous transport systems brings new challenges. Each mode of transport takes its own approach to measuring and mitigating risk, and we must move to a model whereby there is cross-modal learning and development. The aim is that each transport mode must achieve the same, if not significantly improved, safety standards with autonomy.
“For this to happen there must be assurance of safety throughout the design and operation of the systems.
This work is also crucial to boosting public trust and the perception of autonomy if we are to realistically see people using and operating autonomous modes of transport in the near future.”
Whilst autonomy has been well established in the transport industry for many years, recent advances in AI and Data Science (DS) bring with them the need to greatly improve regulations for safety-critical technologies and cyber-based systems.
An example is data bias – where the behaviour of a software element depends on the data sets used to create it, which could affect a system’s actions and therefore its safety.
Lambert concludes: “We need to establish standards and regulations for safety-critical AI systems, particularly those that have a real-time role.
These include ongoing data usage, the behavioural aspects of AI-to-AI connectivity and the risk of cyber-attacks. There also needs to be a national resource for the collection, dissemination and use of data drawn from all four sectors.
“This work is now crucial if we are to move into a position where autonomy is able to transform how we move people and goods.”
The full report, Advancing safety in transport through automation, is available on the IET’s website.