
The challenges and opportunities of AI in third-party risk management and due diligence

Earlier this month, attendees at the inaugural Ethixbase360 Third Party Risk Management & Compliance Summit were abuzz following a panel featuring Ted Datta of Moody’s, Michael Gillet of Microsoft, Chris O’Brien of Advania UK, Anne Vogdt of FREENOW, and Tristan Atkins of Ethixbase360. The panel explored the challenges and opportunities of AI in third-party risk management and due diligence, with the discussion covering a range of topics, from how AI can free compliance teams for more analytical work to the responsible implementation of AI solutions.

In one of the most popular sessions of the day, the panelists gave attendees food for thought, not just on the challenges of effective AI governance but also on the opportunities they see for third-party risk management professionals. “The biggest impact I see in the use of AI in third-party risk management is in alleviating compliance teams of administrative burden,” said Anne Vogdt, “freeing up more time to do the more strategic work that compliance officers really like to do.”

Michael Gillet added, “The important part about responsible AI in regards to due diligence is making sure you have accountability and transparency about how AI is being used, so that customers and third parties can trust outcomes.” Gillet also insisted that AI should be used as a companion for decision-making, not to make decisions itself.

A recent market study by Moody’s* on AI in compliance predicts widespread adoption within five years, yet only 30 percent of the 550 risk and compliance professionals surveyed are currently using or testing AI. Fragmented data is often the main barrier to successful implementation, but once companies overcome this hurdle, AI is a game changer: it helps reduce false positives, shortens the time to onboard third parties, and informs reviews and decision-making, all while allowing compliance teams to focus on more strategic work.

Despite these benefits, many warn that AI in compliance and due diligence should be approached with caution, as it cannot replace the human element needed to comprehend and assess risk. The power of AI lies in information gathering, but humans are still needed to validate and assess the AI’s findings—essentially conducting “due diligence” on the AI itself.

While due diligence, onboarding, and particularly false positive remediation were key themes, Chris O’Brien of Advania added Due Diligence Questionnaire submissions to the list of potential applications, where AI can improve submission quality and completion rates while reducing turnaround times. “Here the task of the human now shifts from having to answer all the same questions again to perhaps just reviewing for accuracy,” said O’Brien, highlighting the benefits not just for in-house risk and compliance professionals but also for their third parties in direct engagement.

Tristan Atkins of Ethixbase360 highlighted lessons from the company’s own AI journey, which has been deliberately AI-assisted, not AI-driven, ensuring that critical decision-making remains human-led.

The company is using AI to develop tools trained by Ethixbase360 subject matter experts that improve data output and client delivery, leading to reduced turnaround times and increased operational efficiency. New AI capabilities in development will focus on analytics, enhanced due diligence, and sentiment analysis to further refine Ethixbase360’s risk management solutions.

Ultimately, AI can serve as a companion that makes third-party risk management and compliance processes more efficient, giving risk and compliance professionals more time to conduct value-adding analysis. It cannot replace human intelligence, or even a “gut” feeling when something is not quite right. Strong governance is needed to ensure that AI is used appropriately and that effective oversight exists to maintain trust and confidence in the process.

Please note that while this session was conducted under the Chatham House Rule, each speaker was kind enough to provide video testimonials after the session, some of which can be seen on this page. Any attributed quotes have been taken from these publicly available videos or materials, not from in-session contributions, in keeping with the Chatham House Rule.
