SYSTRA Scott Lister hosts successful Human Factors and Automation Event
SYSTRA Scott Lister’s NSW office recently held an evening on Human Factors and Automation, structured around four topics.
Trust in automation - Remote operation of a safety critical function
The first presentation described a rail project that SSL are supporting, which is developing a solution to automate and remotely operate a safety-critical function previously performed by personnel in the field. One of the implications is the transfer of responsibility from the field to the control room: in future (in normal operation) the safety functions will be owned solely by the control room operators, who will depend on automation to assure them of safe system status. The discussion focused on trust in automation and how to build trust among operators so that they accept and use the system.
Vigilance, attention and regaining manual control - Self driving cars
The second presentation outlined the events of a fatal accident involving a self-driving car. The driver was using the advanced driver assistance features (“autopilot”). As the vehicle approached the area dividing the main travel lanes, it moved to the left, hitting the crash attenuator and then colliding with two other vehicles. It is unclear from the report why the vehicle moved to the left or why the driver did not take action, but the event raises an interesting question for Human Factors around vigilance, attention and regaining manual control. As Lisanne Bainbridge highlights in her paper Ironies of Automation, when manual take-over is needed there is likely to be something wrong, so “the operator needs to be more rather than less skilled and less rather than more loaded than average”. Discussion focused on whether, in an “autopilot” mode, the driver is encouraged to maintain vigilance and supervisory control, or whether they are effectively a passenger in the vehicle. The driver’s ability to take over at short notice requires them to build up situation awareness, and examples were provided where this has been found to take 35–40 seconds.
Design in automation – Airplane crashes
The third presentation described two recent airplane crashes and provided details from news reports which suggest that the automation design contributed to the events. The discussion focused on increasing levels of automation and how to ensure that systems are designed to provide the information that operators need, since this is not always considered sufficiently in design. The failure modes need to be considered in order to identify the information that the operator requires. This can present a particular challenge in the design of complex systems, where failure modes can be difficult to anticipate and simulate in advance.
Machine bias - Ethics in AI and machine learning
The final presentation focused on AI and machine learning, citing research by the AI Now Institute that identified examples where AI has had unintended or undesirable consequences. The issue is summed up by a quote from a podcast on machine learning:
“I want a machine-learning algorithm to learn what tumours looked like in the past, and I want it to become biased toward selecting those kind of tumours in the future, but I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”
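The mechanism behind the quote can be illustrated with a deliberately small sketch. The data and names below are entirely hypothetical; the point is only that a model which learns "what successful candidates looked like in the past" by frequency counting will replay that historical pattern when ranking new candidates, whether or not the pattern is relevant:

```python
# Toy illustration (hypothetical data) of how learning from past
# selections reproduces historical bias when ranking new candidates.
from collections import Counter

# Historical hiring records: (school attended, hired?).
# The past data happens to favour candidates from "school_x".
history = [
    ("school_x", True), ("school_x", True), ("school_x", True),
    ("school_y", False), ("school_y", False), ("school_y", True),
]

def train(records):
    """Learn P(hired | school) by simple frequency counting."""
    hired, total = Counter(), Counter()
    for school, was_hired in records:
        total[school] += 1
        if was_hired:
            hired[school] += 1
    return {school: hired[school] / total[school] for school in total}

model = train(history)

# Rank two otherwise equally qualified new candidates: the model
# simply replays the historical pattern it was trained on.
candidates = ["school_y", "school_x"]
ranked = sorted(candidates, key=lambda school: model[school], reverse=True)
print(ranked)  # the school_x candidate is ranked first
```

For tumour detection, as the quote notes, exactly this tendency to favour what past positives looked like is the desired behaviour; applied to people, the same mechanism encodes historical inequity into future decisions.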
A theme that ran through the discussions, and a key challenge for human factors, is how to accurately predict operator information needs and task load in degraded modes early enough to inform design decisions. Participants also reflected that the current suite of human factors tools and techniques is well placed to identify and manage the risks of increased levels of automation; however, achieving acceptance and adoption of these techniques will continue to challenge the profession, albeit in exciting new forms.