Many people express concern about where the development of artificial intelligence will take human society. This thought piece comes from the Human Factors and Ergonomics Society with some suggested guardrails. Certainly worth thinking about.
Artificial intelligence (AI) is being rapidly developed for a wide variety of applications including:
a) Recommendation systems for consumers;
b) Generative AI systems that create images, audio, video, and text;
c) Systems that take on tasks, or portions of tasks, in safety-critical domains such as aviation, air traffic control, automated process control, drilling, transportation systems, and healthcare; and
d) Natural language systems that support conversations.
Although the use of AI poses a number of challenges, including potential long-term effects on employment, wealth disparity, loss of skills, and the development and testing of robust, trustworthy AI software, a critical and immediate concern is the effect of AI on the people who interact with it.
AI has the potential to augment human capabilities and to improve overall human well-being. However, human decision-making and performance can also be negatively affected by AI systems, resulting in inappropriate decision biasing, reduced awareness and understanding of situational information, and poor performance. Guardrails are needed to protect people from significant harm from these systems and to support people's ability to engage with AI usefully and effectively.
A number of organizations have begun to develop principles for the use of AI and the regulations that should govern it. For example, the European Union has introduced the AI Act, establishing initial regulations for the use of AI. The U.S. Department of Defense has stated that AI should be responsible, equitable, traceable, reliable, and governable. The White House Office of Science and Technology Policy (OSTP) has issued a Blueprint for an AI Bill of Rights stating that AI should (1) be safe and effective, (2) protect people from algorithmic discrimination, (3) have built-in data privacy protections, (4) provide notice that it is an AI and explanations for its actions, and (5) allow for human alternatives, consideration, and fallback, permitting people to opt out of the AI. These are all sound and appropriate principles, but they need to be further detailed to allow for meaningful and actionable implementation in different settings.
Human factors researchers are in a unique position to provide scientifically based input on how to avoid negative outcomes from AI systems while retaining the positive effects that AI can provide. Over the past 40 years, the profession has built a substantial body of research exploring the effects of AI and automation on human performance and identifying the key characteristics of the technology that lead to both good and bad outcomes. In particular, this extensive research base provides guidance on supporting human safety and effective decision-making when working with AI systems. People must be able to know when to trust the outputs of AI systems, to understand those systems' capabilities and deficiencies, and to override AI actions, particularly in safety-critical situations. These abilities are essential for achieving safe, effective use, avoiding the negative effects of bias or inaccuracy, and enabling informed decision-making in interactions with AI systems.
With over 3,000 members, the Human Factors and Ergonomics Society (HFES) is the world's largest nonprofit association for human factors and ergonomics professionals. HFES members include psychologists, engineers, and other professionals who share a common interest in developing safe, effective, and practical human use of technology, particularly in challenging settings.
Based on research on human performance when working with automation and AI, HFES offers the following recommended guardrails for AI to support the safe and effective use of these systems.
- AI Shall Provide Explicit Labeling
- AI Shall Not Be Used to Commit or Promote Fraud
- AI Shall Avoid and Expose Bias
- Developers of AI Systems Must Be Liable for Their Products
- AI Shall Be Explainable
- AI Shall Be Transparent
- AI Systems Shall Be Tested with Human Users
- AI Shall Provide Safety Alerts
- AI Shall Be Fail-Safe
- Autonomous AI Systems Shall Be Validated and Certified