Study proposes risk management for artificial intelligence in organizations
The adoption of artificial intelligence by organizations can generate unpredictable impacts, such as discrimination, according to an article by two researchers at Fundação Getulio Vargas’ São Paulo School of Business Administration (FGV EAESP), Carlos Eduardo Brandão and João Luiz Becker, published in the school’s journal, GV-Executive.
For this reason, ethical principles such as responsibility are fundamental when dealing with this technology. Accordingly, organizations need audit mechanisms, transparency about systems’ capabilities and limitations, and human supervision and intervention.
The authors carried out a systematic review of 173 scientific papers on this subject, retrieved from databases such as Web of Science and Scopus. From the literature, they concluded that the use of artificial intelligence algorithms by organizations requires a review of risk management models to ensure reliable, fair and responsible systems.
Artificial intelligence involves adapting technological resources and functions to interactions with humans so that systems respond ever more precisely to users’ demands, the article explains. However, high levels of automation pose higher risks, such as the reproduction of biases and discriminatory practices when facial recognition misidentifies crime suspects, accidents caused by vehicles with autonomous driving systems, and interference in human learning processes.
The risk management process proposed by the authors is cyclical and aligned with the objectives of artificial intelligence in organizations. It calls for identifying, categorizing and evaluating mapped risks, weighing the likelihood that each will occur in the organization. Impacts must then be recorded and communicated to stakeholders and regulatory bodies, together with mitigation efforts and constant monitoring of results.
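The cyclical process described above can be sketched in code. This is an illustrative sketch only: the article presents the cycle in prose, so the class names, fields (`category`, `likelihood`, `mitigations`) and risk examples below are assumptions, not the authors’ specification.

```python
from dataclasses import dataclass, field
from enum import Enum


class Likelihood(Enum):
    """Assumed three-level scale for how likely a risk is to occur."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    """One entry in a hypothetical organizational risk register."""
    description: str
    category: str                      # e.g. "bias", "safety"
    likelihood: Likelihood
    mitigations: list = field(default_factory=list)


def run_cycle(register):
    """One pass of the cycle: evaluate mapped risks by likelihood and
    build a report to communicate to stakeholders and regulators.
    The register is then monitored and the cycle repeated."""
    report = []
    # Evaluate highest-likelihood risks first.
    for risk in sorted(register, key=lambda r: r.likelihood.value, reverse=True):
        report.append({
            "risk": risk.description,
            "category": risk.category,
            "likelihood": risk.likelihood.name,
            "mitigations": risk.mitigations,
        })
    return report


# Example risks drawn from the article's discussion (entries are illustrative).
register = [
    Risk("Autonomous-vehicle accident", "safety", Likelihood.MEDIUM,
         mitigations=["driver override", "scenario testing"]),
    Risk("Facial-recognition misidentification of suspects", "bias",
         Likelihood.HIGH, mitigations=["human review of matches"]),
]
report = run_cycle(register)
```

In this sketch, the returned report is what would be recorded and shared with stakeholders; rerunning `run_cycle` on an updated register represents the constant monitoring the authors propose.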
The risk management structures suggested by the article include the creation of a risk committee in organizations to supervise the use of artificial intelligence systems and to guide measures to mitigate impacts. “Throughout their journey, organizations should take appropriate risks, in line with their objectives and goals, avoiding overexposure or underexposure, which could potentially be harmful to their business,” the authors write.