AI tools are becoming ubiquitous in K–12 education, from tutoring systems and writing assistants to hiring platforms and data dashboards. These tools can enhance efficiency and personalization, but without careful implementation they can also amplify bias, harm marginalized students, and entrench inequity.
In this session, participants will examine how bias enters AI systems, from skewed training data and language models to inequitable access and assumptions baked into algorithm design. Attendees will learn practical strategies for evaluating AI tools through an equity lens.
Participants will engage in an interactive "bias spotting" activity, discuss how to create safe, equitable conditions for student use of generative AI, and leave with a set of guiding questions and protocols for more inclusive AI integration. Whether you're just beginning to explore AI or already using it across classrooms and departments, this session offers concrete tools to ensure innovation supports every learner.