Elevating Due Diligence and Risk Management in Advancement: Harnessing automation and AI to strengthen traditional processes and transform insight
Overview
In an era of increasing complexity and scrutiny, philanthropic decisions demand more than instinct: they require confidence, clarity, and trust. This session explores how a contemporary risk management framework, strengthened by AI-enabled agents, can transform how advancement teams assess risk, navigate uncertainty, and build sustainable philanthropic relationships.
Drawing on a real-world UQ case study, attendees will gain practical insight into designing and operationalising a scalable, tiered approach to due diligence, one that balances risk appetite with philanthropic impact. The session will unpack how AI can enhance efficiency and consistency, while human judgment remains central to ethical decision-making.
Learning Outcomes
By the end of this session, participants will be able to:
- Understand the structured, tiered due diligence and risk management framework aligned to ACE’s strategy and risk appetite.
- Understand how AI agents can be deployed across the due diligence lifecycle to automate research, flag emerging risks, and support continuous monitoring.
- Evaluate where AI adds the most value and where human expertise and relationship management must remain central.
- Apply practical governance, controls, and ethical considerations when integrating AI and AI agents into advancement workflows.
What is the key takeaway from this session?
- Due diligence is a strategic enabler, not a compliance exercise: effective risk management supports confident philanthropic decision-making, strengthens trust, and protects institutional integrity.
- A tiered, structured framework creates clarity and consistency: applying proportionate levels of due diligence based on risk and impact enables teams to focus effort where it matters most, without slowing momentum.
- AI and AI agents enhance efficiency and insight: AI agents can automate routine research, synthesise complex data sources, flag emerging risk signals, and support continuous monitoring across the donor lifecycle.
- Human judgment remains essential: AI augments, rather than replaces, professional expertise. Ethical decision-making, contextual interpretation, and relationship stewardship must remain human-led.
- Governance and ethics are non-negotiable: clear controls, accountability, and transparency are critical to ensuring AI strengthens trust rather than introducing new risk.
