Today, it’s nearly impossible to avoid the furor over artificial intelligence. What was once confined to the pages of speculative fiction is now part and parcel of everyday life, as workers unearth new AI-driven efficiencies and students can barely remember the educational landscape before ChatGPT. Whether you’re excited about the possibilities AI brings, worried about its far-reaching implications, or some combination of the two, the fact remains that supply chain and risk management leaders need to mobilize new governance frameworks to safeguard third-party ecosystems against its risks.
Risk vs. reward
When it comes to AI, one of the most promising benefits is improved employee productivity. One joint Stanford and MIT study found a 14 percent boost in the productivity of customer support agents using a generative AI assistant, and that figure is likely just the tip of the iceberg as we learn how best to leverage the still-developing technology. However, the tantalizing productivity gains of AI come with significant risk for companies that allow employees to use tools like ChatGPT.
In April 2023, Samsung suffered a reputational setback upon discovering that employees had accidentally leaked sensitive data, including internal source code and confidential meeting notes, through ChatGPT. The concern stems from the fact that data entered into ChatGPT is sent to external servers, where access and removal are difficult to control and accidental exposure is difficult to prevent. Uploaded data may then be used to train ChatGPT itself, causing significant headaches for the intellectual property owner.
As a result, Samsung employees are currently barred from using generative AI tools until the company creates and codifies procedures around their use. Other large companies, including Apple, have adopted similar policies out of concern over uncontrolled AI access. Bans are particularly prevalent in industries that handle large volumes of sensitive information, such as healthcare and banking: Bank of America, Citigroup, and JPMorgan Chase all restrict employee access to AI tools for work.
Privacy and data hygiene
Of all the risks AI poses, data privacy is among the most serious. Without adequate policies, training, and monitoring in place, it is far too easy for employees to feed sensitive information into AI tools. Even when done without malicious intent, these mistakes may carry significant consequences down the line as the legal landscape evolves to address AI.
Another risk stems from bad actors manipulating the AI tool itself. Because AI platforms use data to perform analyses, learn, and make decisions, an attacker who can tamper with that data can wreak havoc on the AI’s output. In situations where human well-being is at stake, the danger is hard to overstate. In one confounding example, the National Eating Disorders Association fired all of its human helpline staff in favor of an AI chatbot. Users soon reported that the chatbot was offering dieting advice that could easily exacerbate eating disorders rather than providing support and healthy tools for recovery. In that case, even with no bad actor involved, the pivot to AI produced starkly poor outcomes.
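To make the data-manipulation threat concrete, here is a minimal sketch of one well-known attack, training-data poisoning, in which an attacker who can tamper with even a fraction of the training data measurably degrades a model. The dataset, model, and flip rates below are hypothetical illustrations; only the mechanism is the point.

```python
# Hypothetical sketch of training-data poisoning: an attacker flips labels
# in a fraction of the training set, and test accuracy degrades accordingly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a clean synthetic dataset and hold out an untouched test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip labels on a fraction of training rows, retrain, and score."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flipped = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flipped, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker's tampering
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} of labels flipped -> test accuracy "
          f"{accuracy_after_poisoning(frac):.2f}")
```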
Reframing governance for risk mitigation
At this point, every company should be seriously considering its AI policies and determining how to move ahead alongside this rapidly evolving technology. However, one factor many leaders fail to consider is the impact of AI on their supply chain and third-party network. In any industry, suppliers will be formulating their own strategies for AI use, and some will naturally be more carefully constructed than others.
Third-party risk managers must work with company leadership to determine what AI-related requirements, if any, are prudent to adopt for third-party relationships, both new and existing. Should you require potential partners to have AI policies in place for their employees? If so, how restrictive do those policies need to be to pass muster? Leaders should also consider breaking requirements into tiers.
For example, a supplier that provides generic metal fittings will likely require far less scrutiny of AI use than a partner with whom your company collaborates on the design of a proprietary part. If sensitive information is being analyzed or exchanged at any point, risk managers need to know whether AI is in use and what safeguards exist to mitigate leaks and other potential negative outcomes.
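To make the idea concrete, here is a minimal sketch of how tiered AI requirements might be encoded in a vendor-risk workflow. The tier names, criteria, and controls below are hypothetical examples, not an established framework.

```python
# Hypothetical sketch: assign suppliers to AI-governance tiers based on
# what they can see or touch, then look up the controls each tier requires.
from dataclasses import dataclass

# Illustrative controls per tier; real requirements would come from
# legal, security, and procurement teams.
REQUIREMENTS_BY_TIER = {
    "low": ["acknowledge an acceptable-use policy for AI tools"],
    "medium": ["documented employee AI policy",
               "no confidential data entered into public AI tools"],
    "high": ["documented employee AI policy",
             "contractual ban on uploading shared IP to external AI services",
             "annual attestation and audit rights"],
}

@dataclass
class Supplier:
    name: str
    handles_proprietary_designs: bool
    handles_sensitive_data: bool

def ai_risk_tier(s: Supplier) -> str:
    """Map a supplier's exposure to a governance tier."""
    if s.handles_proprietary_designs:
        return "high"
    if s.handles_sensitive_data:
        return "medium"
    return "low"

for s in (Supplier("generic fittings vendor", False, False),
          Supplier("co-design partner", True, True)):
    tier = ai_risk_tier(s)
    print(s.name, "->", tier, REQUIREMENTS_BY_TIER[tier])
```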
Collaborating for future defense
Just as no company wants to deal with the fallout from a partner’s use of AI, no third party wants its AI use to affect its ability to secure partnerships. As the technology continues to advance and change, with no regulation on the immediate horizon, companies looking to minimize risk must work together to set ground rules and elevate standards around AI. That means opening two-way channels of communication, offering clarity about present and future expectations of conduct, and establishing agreed-upon consequences for breaches of relevant policies. Governing a technology still in development is a tricky challenge, but a collaborative approach is the path to more stable relationships and to success with minimal risk in an uncertain future.