Organizations urged to embrace responsible AI programs to mitigate growing risks

A recent report from MIT Sloan Management Review and Boston Consulting Group reveals that over 70% of organizations are struggling to effectively address the risks associated with the use of artificial intelligence (AI) tools. To keep pace with a rapidly evolving landscape, the report advises these organizations to adopt Responsible AI (RAI) programs.

The study highlights the emergence of significant risks, particularly in relation to third-party AI tools, which account for 55% of all AI-related failures. These risks include financial losses, reputational damage, erosion of customer trust, regulatory penalties, compliance burdens, and litigation.

Elizabeth Renieris, a co-author of the report and guest editor of MIT Sloan Management Review, emphasizes how substantially the AI landscape has changed since the previous report was published. She notes that the sudden, widespread adoption of generative AI tools has thrust AI into everyday conversations, even as the fundamentals remain largely unchanged. The current research reinforces the urgent need for organizations to invest in and expand their RAI programs to address the growing use of AI and the risks that accompany it.

The report draws on a global survey conducted among 1,240 respondents representing organizations from 59 industries and 87 countries, each with annual revenues exceeding $100 million. The majority of organizations (78%) heavily rely on third-party AI tools, with 53% relying exclusively on such tools. This heavy dependence exposes these organizations to a range of risks, including some that may go unnoticed or misunderstood by leaders.

Alarmingly, the report reveals that one-fifth of organizations utilizing third-party AI tools fail to assess the associated risks altogether. To address this gap, the authors stress the importance of proper evaluation of third-party tools, preparedness for emerging regulations, CEO engagement in RAI initiatives, and the swift maturation of RAI programs.

The most prepared organizations employ multiple approaches to evaluate third-party tools and mitigate risks. Companies that use seven different evaluation methods are more than twice as likely to uncover vulnerabilities as those using only three (51% versus 24%). These methods include vendor pre-certification and audits, internal product-level reviews, contractual obligations mandating adherence to RAI principles, and compliance with AI-related regulations and industry standards.

The report also highlights the significance of CEO engagement in RAI discussions. Organizations benefit when CEOs take a hands-on role by participating in RAI-related hiring decisions, engaging in product-level discussions, and setting performance targets. Organizations with such actively involved CEOs report 58% more business benefits than those whose CEOs are less engaged.

Steven Mills, another co-author of the report and Chief AI Ethics Officer at BCG, emphasizes the need for organizations to double down and invest in robust RAI programs. Despite the feeling that technology is outpacing the capabilities of RAI programs, the solution lies in increasing commitment to RAI rather than scaling back. Organizations must demonstrate leadership and allocate resources to deliver business value while effectively managing risks.

As awareness of AI risks grows, state and federal officials are contemplating regulations to monitor and track the use of automated tools in the workplace. The White House has announced plans to evaluate technologies used for “surveilling, monitoring, evaluating, and managing” workers, and nearly 160 AI-related bills or regulations are pending across 34 state legislatures.

The U.S. Equal Employment Opportunity Commission (EEOC) identifies employment discrimination as a critical risk, particularly where AI-based platforms are involved in hiring and firing decisions. In response, cities and states are considering legislation that would regulate automated employment decision tools and require transparency for job seekers about when such tools are used.