The European Safety and Reliability Conference (ESREL) will have Special Sessions and a Panel dedicated to IWASS.
ESREL is the key annual European event for meetings and knowledge exchange in risk assessment, risk management, and the optimization of the performance of socio-technological systems, and it is among the most important such events internationally.
Tuesday 09h30 - 10h50, Session 9F
Safety Analysis of eVTOL Landing in Urban Centers
Sarah F. S. Borges (Aeronautics Institute of Technology, Brazil)
Moacyr M. Cardoso Jr. (Aeronautics Institute of Technology, Brazil)
Diogo S. Castilho (Flight Test and Research, Brazil)
The operation of electric vertical take-off and landing (eVTOL) aircraft is scheduled to begin in 2024. Most aircraft will be flown by at least one pilot, but there are also fully autonomous models such as Wisk's Cora, the CityAirbus, and the EH216. Billions of dollars have already been invested, and studies and projects are underway to make this revolution in aviation happen. In these scenarios, new safety issues cannot be overlooked. This research article aims to identify hazards and causal scenarios that could lead to losses from bird strikes during eVTOL landings in urban centers. The System-Theoretic Process Analysis (STPA) method is used to identify hazards, losses, loss scenarios, and safety requirements. To better understand the scenarios, Rasmussen's Skills-Rules-Knowledge framework and Endsley and Kaber's hierarchy of automation levels are considered. The STPA focuses on the approach for landing on helipads on top of buildings. The level of automation considered was Shared Control, in which monitoring, generation of alternatives, and implementation can be performed by either the pilot or a computer, while the selection of the landing alternative can only be carried out by the pilot. Helicopter pilots participated in the refinement of the analysis. Three Unsafe Control Actions are presented along with the related loss scenarios and mitigation requirements, revealing the importance of in-depth studies before actual operations begin.
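To make the STPA artifacts concrete, here is a minimal sketch (in Python) of how an Unsafe Control Action can be recorded together with its linked hazards, loss scenarios, and requirements. The structure and the example entry are hypothetical illustrations of the STPA format, not the paper's actual findings.

```python
from dataclasses import dataclass

@dataclass
class UnsafeControlAction:
    """One STPA Unsafe Control Action (UCA) with its traceability links."""
    uca_id: str
    control_action: str        # the control action analyzed
    uca_type: str              # e.g. "provided in unsafe context", "not provided"
    hazards: list[str]         # system-level hazards the UCA can lead to
    loss_scenarios: list[str]  # causal scenarios that explain the UCA
    requirements: list[str]    # mitigation / safety requirements derived from it

# Hypothetical entry, illustrative only (not the paper's results):
uca1 = UnsafeControlAction(
    uca_id="UCA-1",
    control_action="continue final approach to rooftop helipad",
    uca_type="provided in unsafe context",
    hazards=["H-1: aircraft on a collision course with birds near the helipad"],
    loss_scenarios=["Pilot does not detect the flock and the alert is delayed"],
    requirements=["R-1: provide timely bird-detection alerts during approach"],
)
print(uca1.uca_id, "->", uca1.hazards[0])
```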
Inertial Measurement Unit Sensor Fault Detection for Unmanned Aerial Vehicles Using Machine Learning
Sheng Ding (University of Stuttgart, Germany)
Niloy Chakraborty (University of Stuttgart, Germany)
Andrey Morozov (University of Stuttgart, Germany)
Unmanned Aerial Vehicles (UAVs) are playing indispensable roles in many facets of engineering. The increasing demand for UAVs makes it necessary to address potential safety issues. In this research, a generic non-linear dynamic model of a UAV is adapted. The Inertial Measurement Unit (IMU) is an essential UAV sensor and one of the most safety-critical UAV components: a few seconds of IMU malfunction can lead to the crash of the drone. To address this, a unified representation of sensor-level faults, such as stuck-at, packet drop, bias/offset, and noise, is presented using Simulink-based Fault Injection (FI) blocks. The IMU's three-axis accelerometer and three-axis gyroscope signals, together with control commands, are selected as the data source. The model is repeatedly simulated to collect data while maintaining data quality as well as a balanced ratio between healthy and faulty data. The paper presents the results of Random Forest classification of accelerometer and gyroscope faults. The results of the experiments are based on extensive training and a comparative analysis of test performance between the implemented algorithms. The study reports promising test accuracy and F1 scores for fault classification on both the accelerometer and the gyroscope sensors.
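As a rough illustration of the classification step described above, the sketch below trains a scikit-learn Random Forest on synthetic sensor windows covering the four fault types named in the abstract (stuck-at, packet drop, bias/offset, noise) plus a healthy class. The signal generator and fault models are simplified stand-ins for the Simulink-based fault injection used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
WIN = 50  # samples per window

def make_window(fault: int) -> np.ndarray:
    """Return one window of a gyroscope-like signal with a given fault label."""
    x = np.sin(np.linspace(0, 4 * np.pi, WIN)) + 0.05 * rng.standard_normal(WIN)
    if fault == 1:            # stuck-at: signal frozen at one value
        x[:] = x[0]
    elif fault == 2:          # packet drop: samples replaced by zeros
        x[rng.random(WIN) < 0.3] = 0.0
    elif fault == 3:          # bias/offset: constant shift added
        x += 0.5
    elif fault == 4:          # excess noise
        x += 0.5 * rng.standard_normal(WIN)
    return x                  # fault == 0: healthy

# Balanced dataset of healthy and faulty windows, as the abstract describes.
labels = np.repeat(np.arange(5), 400)
X = np.array([make_window(y) for y in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("macro F1:", f1_score(y_te, pred, average="macro"))
```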
Towards a Harmonized Framework for Vessel Inspection via Remote Techniques
Aspasia Pastra (World Maritime University, Sweden)
Tafsir Johansson (World Maritime University, Sweden)
Remote inspection techniques (RIT) for inspecting the steel structures of ships and floating offshore installations are changing the landscape of ship inspection and hull cleaning. Unmanned Aerial Vehicles perform global visual inspections, ultrasonic thickness measurements, and close-up surveys for ships undergoing intermediate and renewal surveys. Magnetic Crawlers can conduct ultrasonic thickness measurements and perform hull cleaning, whereas Remotely Operated Vehicles can perform underwater surveys. Moving forward, efforts to maintain good environmental stewardship, especially at the EU level, will require not only the seamless integration of RIT, but also a guarantee that all techno-regulatory elements vital to the semi-autonomous platform are streamlined into policy through multi-stakeholder cooperation. The aim of this extended abstract is to present some of the findings of the research conducted by the World Maritime University - Sasakawa Global Ocean Institute within the framework of the European Union H2020 BugWright2 project (www.bugwright2.eu/). The project aims to change the European landscape of robotics for infrastructure inspection and maintenance. The findings concern the main elements that need to be considered for semi-autonomous platforms to form a harmonized regulatory blueprint.
Trust case for autonomous vehicles
Tor Stålhane (Norwegian University of Science and Technology, Norway)
Thor Myklebust (SINTEF ICT, Norway)
The TrustMe project is developing a safety case for autonomous vehicles. A safety case is mostly based on information from the developers and refers to one or more relevant safety standards. However, trust is not the same as reliability or safety. While reliability is based on data analysis and statistics, trust is a person-to-person or person-to-thing relationship. Thus, we also need a trust case. In order to make self-driving buses a success, they need to be considered trustworthy. We started out with a rather simple relationship model to explain trust. This model was mainly based on the Technology Acceptance Model (TAM), a literature review, and the results from two focus groups done in cooperation with the local bus service provider (see [1]).
However, two new focus groups and a new survey of 54 persons showed that this model was too simple. The main new issue that surfaced was the users' need for information from the vehicle in order to feel safe: situational awareness. Although [2] claims that situational awareness "can guide reliance without necessarily impacting trust", we have chosen to stick to the definitions we made in [1] and claim that "trust is confidence in or reliance on some person, organization or quality". Thus, situational awareness is a factor that influences trust.
The paper will discuss the following issues:
• Our first focus groups and the resulting trust case
• How two new focus groups and a survey changed our understanding of people's trust in self-driving vehicles
• Our improved trust case, based on a survey, two new focus groups, and the work of Hoff and Bashir
• The new trust case's influence on how we may create public trust in self-driving vehicles
• Our view on how the interaction between model building and data collection creates research goals and new models
The concluding chapter will describe where we stand research-wise, where we want to go, and what we hope to achieve.
______________________________________________________
Tuesday 14h00 - 15h20, Session 11F
The Ethics of AI in Autonomous Transport
Claire Blackett (Institute for Energy Technology, Norway)
In recent years we have seen an enormous uptake in the use of artificial intelligence (AI) in society. There is no doubt that AI can have positive effects in, for example, advancing healthcare through the detection of diseases, or making everyday life easier through the provision of virtual assistants and recommendation systems. However, there are an increasing number of examples of widespread misuse and/or failure of AI technologies that give rise to questions about ethics and responsibility. For example, in 2018 it was disclosed that the consulting firm Cambridge Analytica used machine learning algorithms to harvest personal data from approximately 87 million Facebook users without their knowledge or consent and used this data to provide analytical assistance in the 2016 USA presidential election (Andreotta et al., 2021). Facebook was fined $5bn for violating its users' privacy in this incident. In 2020, it was revealed that the facial recognition firm Clearview AI had used machine learning to scrape approximately 10 billion images of people from social media websites, again without the users' knowledge or consent, and sold this technology to law enforcement agencies for identification and surveillance purposes (Rezende, 2020). Clearview AI has been ordered to destroy all images belonging to individuals living in countries such as Australia and the UK, and investigations of the incident are ongoing. These are but two of several recent examples that have highlighted how AI can be misused in ways that raise ethical concerns about privacy, surveillance, bias, discrimination, and attempts to influence human behaviour. One could argue, in the Facebook and Clearview AI cases, that by using social media, or other publicly available technologies, users must expect and accept that personal data is being collected about them. However, this argument ignores the users' right to privacy and the fundamental principle of informed consent, i.e., that a user should have sufficient information to be able to make their own decision about whether to participate in, or opt out of, the data gathering exercise. Although the informed consent principle originated in healthcare to manage medical ethics and law, there is an increasing need for an equivalent principle to deal with the ethical and legal challenges of AI deployment in society (Pattinson et al., 2020). The issue of informed consent regarding use of AI technologies becomes even more complex when the potential impact of technology misuse or failure extends beyond the immediate user, and especially in the transportation sector, where the potential for physical harm to others may be greatest. Consider the spate of road accidents and fatalities involving the Tesla Autopilot driver assistance system, which raises serious doubts about the maturity of the AI and its readiness for deployment on public roads. Again, one could argue that by sitting behind the wheel of a "self-driving" car, the driver implicitly consents to the potential consequences of failure or misuse of the AI. However, if the car is driven on a public road and something goes wrong, there is a high likelihood that it will involve the occupants of another vehicle or other road users who did not consent to participation in the use of the AI technology.
By allowing the deployment of AI technologies in public situations without sufficient evidence of the safety and reliability of the technologies, we unwittingly participate in mass experimental testing of this new technology, often without knowledge that the technology is being used or of its potential impact on us if there is a failure. This appears to be both irresponsible and unethical. In this paper, I will explore the issue of responsible and ethical deployment of AI in society in more detail, using examples from real-life transport accidents to illustrate what can happen when this goes wrong. I will argue that the misuse and/or misunderstanding of AI technology is seemingly a direct result of the technology developer/manufacturer's failure to adequately inform users about the presence, capabilities, and limitations of the technology. I will challenge the commonly used Levels of Automation (LOA) model and describe how it fails to consider human factors aspects, which is becoming a critical issue as the potential impacts of AI misuse or failure continue to spread beyond the immediate user. Finally, I will consider ways in which organisations could adjust and change their behaviours to enable more responsible and ethical AI technology development practices in the future.
Modeling Fleet Operations of Autonomous Driving Systems in Mobility as a Service for Safety Risk Analysis
Camila Correa-Jullian (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
John McCullough (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Marilia Ramos (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Jiaqi Ma (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Enrique Lopez Droguett (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Ali Mosleh (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
System risk analysis and safety assessments of Autonomous Driving Systems (ADS) have mostly focused on aspects of the vehicle's functionality, performance, and interactions with other road users under various driving scenarios. However, as the deployment of ADS becomes more common, addressing the risks arising from fleet management operations becomes critical, including the role of fleet operators in the context of Mobility as a Service (MaaS). In this work, we present a system breakdown of ADS remote operations and discuss the role and participation of fleet operators as entities separate from ADS developers. Selected high-level accident scenarios are analyzed, focusing on collision events caused by the ADS vehicle operating in unsafe conditions and on failed interventions by remote fleet management centers. In particular, key roles identified for the fleet operator include periodically performing inspection and maintenance procedures and acting as a safety barrier against the limitations of Minimal Risk Condition (MRC) mechanisms.
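To illustrate the kind of scenario logic the abstract describes, here is a minimal event-sequence sketch: an ADS vehicle enters an unsafe condition, the remote fleet management center may detect and correct it, and otherwise the vehicle's MRC mechanism may still avert a collision. All probabilities are invented placeholders, not results from the paper.

```python
# Hypothetical event-sequence sketch for one high-level accident scenario.
P_UNSAFE = 1e-3      # ADS vehicle enters an unsafe condition (per trip)
P_DETECT = 0.95      # fleet management center detects the condition
P_INTERVENE = 0.90   # remote intervention succeeds, given detection
P_MRC_OK = 0.80      # vehicle reaches a minimal risk condition on its own

# Collision occurs if the unsafe condition is neither corrected remotely
# nor resolved by the vehicle's own MRC mechanism.
p_no_remote_fix = 1 - P_DETECT * P_INTERVENE
p_collision = P_UNSAFE * p_no_remote_fix * (1 - P_MRC_OK)
print(f"P(collision per trip) ~ {p_collision:.2e}")
```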
A System-Theoretic Process Analysis for autonomous ferry operation: a case study of Sundbåten
Hyungju Kim (University of South-Eastern Norway, Norway)
Henning Mathias Methlie (University of South-Eastern Norway, Norway)
The System-Theoretic Process Analysis (STPA) is a relatively new hazard analysis method that was developed to analyse modern, complex, socio-technical, and software-intensive control systems. The main objective of this study is to apply STPA to autonomous ferry operation and discuss the advantages and limitations of the method. For this purpose, we investigated the hazard analysis methods required by current autonomous ship safety guidelines in Norway and discussed their limitations. We then conducted an STPA for autonomous ferry operation with a case study of the Sundbåten project, establishing a control structure for the fire hazard of the autonomous ferry that includes two human operators and two autonomous systems. The results showed that the complex interactions between the human operators and the autonomous systems can lead to serious consequences even when there are no component failures. Based on the analysis results, we finally discussed the advantages of STPA for comprehensive hazard analysis of autonomous ferry operation.
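As a hedged sketch of what the analysis inputs can look like, the snippet below encodes a small STPA-style control structure as labeled directed edges and enumerates the control actions, each of which would be examined for Unsafe Control Actions. The controllers and actions shown are assumptions chosen to mirror the abstract's setup (two human operators, two autonomous systems, a fire hazard), not the study's actual control structure.

```python
# Hypothetical STPA control structure for an autonomous ferry fire hazard.
control_structure = {
    # (controller, controlled process): control actions issued
    ("remote_operator", "autonomous_navigation_system"): ["override", "set route"],
    ("autonomous_navigation_system", "ferry_propulsion"): ["thrust", "steering"],
    ("onboard_operator", "fire_suppression_system"): ["activate", "reset"],
    ("fire_suppression_system", "engine_room"): ["release extinguishing agent"],
}
feedback = {
    # (source, receiver): feedback signals returned
    ("ferry_propulsion", "autonomous_navigation_system"): ["speed", "heading"],
    ("fire_suppression_system", "onboard_operator"): ["fire alarm", "agent level"],
}

# Enumerate control actions; each is a starting point for identifying UCAs.
for (controller, process), actions in control_structure.items():
    for action in actions:
        print(f"{controller} -> {process}: '{action}'")
```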
Real-time risk analysis for autonomous ferry operation: A case study of Sundbåten
Hyungju Kim (University of South-Eastern Norway, Norway)
Deepen Prakash Falari (University of South-Eastern Norway, Norway)
One of the key elements of successful autonomous ferry operation is safety, because there will be untrained passengers onboard. While there are a couple of safety guidelines for autonomous ships in Norway, they have some limitations, and we may need to introduce a new safety approach to ensure the safe operation of autonomous ferries. The main objective of this study is to emphasize the necessity of real-time risk analysis for autonomous ferry operation, and to demonstrate it with a case study of Sundbåten. For this purpose, we first investigated the safety guidelines for autonomous ships in Norway and compared their limitations with the advantages of real-time risk analysis. A preliminary real-time risk analysis model was then established by combining two different methods: Bayesian Networks and Fuzzy Logic. The model was built around three risk themes, and one of the risk themes was further developed with eleven risk influencing factors. The preliminary model successfully demonstrated the changing risk of the autonomous ferry, and the remaining future work is suggested at the end of the study.
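A minimal sketch of the Fuzzy Logic / Bayesian Network combination is given below, assuming one hypothetical risk influencing factor (visibility) that fuzzily drives one risk theme in a two-node network. The membership function, network structure, and all probabilities are illustrative placeholders rather than the paper's model.

```python
def fuzzy_poor_visibility(visibility_km: float) -> float:
    """Fuzzy membership of 'poor visibility' (hypothetical influencing factor).

    A simple decreasing ramp: full membership below 0.5 km, zero above 4 km.
    """
    return max(0.0, min(1.0, (4.0 - visibility_km) / 3.5))

# Two-node Bayesian network: RiskTheme -> Accident. The fuzzy degree sets
# P(RiskTheme = high), so the network can be re-evaluated continuously as
# sensor inputs change during operation.
P_ACCIDENT_GIVEN_THEME = {"high": 1e-3, "low": 1e-5}  # illustrative CPT

def accident_probability(visibility_km: float) -> float:
    p_high = fuzzy_poor_visibility(visibility_km)
    return (p_high * P_ACCIDENT_GIVEN_THEME["high"]
            + (1.0 - p_high) * P_ACCIDENT_GIVEN_THEME["low"])

for vis in (0.3, 2.0, 5.0):
    print(f"visibility {vis} km -> P(accident) = {accident_probability(vis):.2e}")
```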