Training Overview
Frontline professionals and faculty are increasingly fielding questions about artificial intelligence, yet there is little consistent guidance on how to evaluate its use responsibly in child protection and human services settings.
This training focuses on where AI can intersect with child protection work, without assuming uniform adoption or maturity across agencies. Rather than promoting specific tools or claiming insight into current practices, the session equips participants with a clear decision-making lens for assessing potential AI use cases based on their organization’s context, capacity, and risk tolerance.
Participants will explore realistic, high-level examples—such as administrative support, documentation assistance, training preparation, and non-identifiable data analysis—while clearly distinguishing between what may be appropriate to explore, what requires strong guardrails, and what should remain strictly off-limits in sensitive work involving children and survivors.
The emphasis is on judgment over tools and decision-making over adoption, giving faculty and professionals shared language, practical frameworks, and ethical clarity they can apply regardless of where their organization currently falls on the AI adoption spectrum.
Learning Objectives
By the end of this session, participants will understand:
- Where AI can responsibly support child protection and human services work
- How to evaluate potential AI use cases without assuming or endorsing adoption
- Common myths and misconceptions that create unnecessary fear or false confidence
- Real risks professionals should care about, including data privacy, over-reliance, and accuracy
- A simple decision-making framework faculty and professionals can teach, model, and adapt
