
Human 2.0

#amomentwithmaghie #discussionzone

Can we re-create the “human experience”? Is it possible to remove the human factor from dynamic thinking and integrate it into machine systems?

Based on current knowledge, the resounding answer is “no,” yet the quest for automated systems that resemble human experiences is nothing new: from flight to farming, humans and machines have been evolving alongside one another for centuries. 

Limitations exist on both sides of the spectrum: machines cannot create independently, while human thinking depends on trial-and-error idea generation (Beck, 2016). Automation bias is one of the most notable issues facing the progression of the human-machine relationship:

“Studies in aviation dating back to the 1980s and 1990s and analysis of incident reports recorded in the Aviation Safety Reporting System found that pilots frequently failed to monitor important flight indicators or did not disengage the autopilot and automated flight management systems in the cockpit in case of malfunction” (Sujan et al., 2019).

Accidents caused by over-reliance on automated systems can be extremely costly, yet many could be prevented with proper human-machine interfacing. This “automation bias” develops when automation is used consistently without close monitoring: the human comes to rely on the system to perform its duties so that they can attend to tasks only a person can do.

“Automation bias can lead to omission errors, where people do not take a required action because the automation failed to alert them, and it can lead to errors of commission, where people follow the inappropriate advice of an automated system” (Sujan et al., 2019).
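To make the distinction concrete, here is a minimal Python sketch (my own illustration, not taken from the cited studies) of an operator who acts only when an imperfect alerting aid tells them to; tallying the outcomes separates omission errors from commission errors. The fault rate, miss rate, and false-alarm rate are assumed values chosen purely for illustration.

```python
# A minimal sketch illustrating omission vs. commission errors when a
# biased operator defers entirely to an imperfect alerting aid.
import random

random.seed(42)

def automated_alert(fault_present: bool, miss_rate=0.2, false_alarm_rate=0.1) -> bool:
    """Imperfect aid: sometimes misses real faults, sometimes raises false alarms."""
    if fault_present:
        return random.random() > miss_rate      # may fail to alert
    return random.random() < false_alarm_rate   # may alert with no fault

def biased_operator(alert: bool) -> bool:
    """An operator with strong automation bias acts only when the aid says so."""
    return alert

omissions = commissions = 0
for _ in range(10_000):
    fault = random.random() < 0.05              # assumed 5% fault rate
    alert = automated_alert(fault)
    acted = biased_operator(alert)
    if fault and not acted:
        omissions += 1      # required action skipped because the aid stayed silent
    if not fault and acted:
        commissions += 1    # unnecessary action taken on inappropriate advice

print(f"omission errors: {omissions}, commission errors: {commissions}")
```

Running the sketch shows both error types arising from the same behavior, which is the point of the quote: deference to the aid produces failures in two opposite directions.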

Errors on both sides of the human-machine relationship point to the need for a more balanced approach to integration. More engagement during the flight may have prevented the China Airlines Boeing 747 SP-9 accident of 1985 referenced in the Sujan et al. (2019) study.

This risk of over-reliance also appears in daily life with the Internet and personal devices. Because of automation bias, people risk exposing sensitive information every day, which can enable fraud, trafficking, and other criminal activity. Easy access to information and heavy reliance on the answers automated systems return should be challenged, as the “echo chamber” (Beck, 2016) limits the productive role of human error in generating innovative ideas.

Automation bias can be a positive by-product of HMI (human-machine interaction) when it creates space for the human mind to reach new levels; however, the need for perfect machines does not outweigh the need for human creativity to flourish. If reliable systems are developed based on human learning, the interface becomes seamless and reduces the dangers associated with reliance on the technology, freeing the human factor for higher-level cognitive functions.

To frame a study of this aspect of cognition, the specific research question is:

“How can we use technology to serve us rather than replace us?” 

Looking at the technological developments of the past decade, more automated systems appear daily (Business News Daily, 2022). With automation driving so much development, the usability and experience of these systems must be weighed if they are to integrate successfully into a human-centric environment. Grounding development in human-centered design lessens the dangers associated with automation bias and enhances the overall human experience.

In aviation, there is an underlying fear of drones replacing pilots. While the actual flying may one day be automated, the human factor brings an element of security: the ability to react to situations that were never programmed, such as an engine failure. The development of flight is the result of human curiosity to understand our surroundings and to interact with the world using creations of the mind. It is this ability to generate new thoughts and ideas that makes us human (Beck, 2016). Interfaces that lessen automation bias and stimulate the human mind to improve learning and reaction would therefore allow us to use technology to serve us rather than replace us; one possible shape such an interface could take is sketched below.
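One interface pattern sometimes suggested as a mitigator in the automation-bias literature is to record the operator's independent judgment before revealing the automated recommendation, so that disagreement is surfaced rather than silently resolved in the aid's favor. The sketch below is a hypothetical illustration of that idea only; the names and the “escalate” behavior are my assumptions, not a specification from the cited sources.

```python
# Hypothetical sketch: capture the human's judgment first, then compare it
# with the automated recommendation and flag any disagreement for review.
from dataclasses import dataclass

@dataclass
class Decision:
    human_first: str      # judgment recorded before seeing the aid
    automated: str        # the aid's recommendation
    final: str            # what the interface records as the outcome

    @property
    def needs_review(self) -> bool:
        # Disagreement is surfaced rather than quietly deferring to the aid.
        return self.human_first != self.automated

def decide(human_first: str, automated: str) -> Decision:
    # The human's initial judgment stays visible, so the final outcome is an
    # explicit reconciliation instead of a reflexive acceptance of the aid.
    final = human_first if human_first == automated else "escalate"
    return Decision(human_first, automated, final)

d = decide(human_first="continue climb", automated="level off")
print(d.final, "| review needed:", d.needs_review)
```

The design choice here is that the automation never pre-empts the human's own reasoning; it only confirms or challenges it, which keeps the operator engaged rather than complacent.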

To develop these interfaces, a look at everyday life brings learning to the forefront. Presenting information through automated systems such as videos, games, and online classrooms still faces challenges: social learning must be re-created through interfaces that have not yet been developed, leaving a gap in learning. Even so, reliance on automated systems is now so great that many people could not get through the day without access to them. Automation bias also causes frustration in offices when programs malfunction or co-workers run into issues with email correspondence, and complacency has likewise been linked to automation bias (Goddard et al., 2012). Ergonomics would benefit from better automated integrations, with more positive HMI yielding greater productivity.

Conclusion 

The need for balance between automation and human input has become apparent in the 21st century: user experience must be improved at every level and grounded in cognition and learning. We must shift the focus from re-creating the human experience to improving the experience of being human by reducing automation bias.

References 

Beck, H. (2016). What is a thought? How the brain creates new ideas [TEDxHHL talk]. Retrieved October 22, 2022.

Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association: JAMIA, 19(1), 121–127. https://doi.org/10.1136/amiajnl-2011-000089 

Business News Daily. (2022). How automation is changing workplaces everywhere. Retrieved October 22, 2022, from https://www.businessnewsdaily.com/9835-automation-tech-workforce.html

Sujan, M., Furniss, D., Grundy, K., Grundy, H., Nelson, D., Elliott, M., White, S., Habli, I., & Reynolds, N. (2019). Human factors challenges for the safe use of artificial intelligence in patient care. BMJ Health & Care Informatics, 26(1), e100081. https://doi.org/10.1136/bmjhci-2019-100081
