Policy Scenario Cards
Activity 2: Ethics in Automated Security
How to Use These Cards
Print and cut out these cards for small-group discussion. Each card presents a scenario that tests whether the policy your group has developed holds up in practice.
Scenario 1: The Research Project
The Situation
A student is researching “common hacking techniques” for a cybersecurity class project. SchoolGuard flags the search as suspicious.
Questions for Discussion
- Should the site be automatically blocked?
- Should the teacher be alerted?
- How could the AI tell the difference between a threat and legitimate research?
What Would Your Policy Do?
Under your current policy, this search would be: _______________________
Scenario 2: The Mental Health Search
The Situation
A student searches for “signs of depression” multiple times over several days. SchoolGuard’s behavioral monitoring flags this as concerning.
Questions for Discussion
- Should anyone be notified? Who?
- What if the student is researching for a health class assignment?
- What if the student is actually struggling?
- How do we balance privacy with potentially helping a student in need?
What Would Your Policy Do?
Under your current policy, this activity would be: _______________________
Scenario 3: The Gaming Site
The Situation
During lunch, a student tries to access a gaming website. SchoolGuard automatically blocks it as “non-educational.”
Questions for Discussion
- Is this an appropriate automatic block?
- Should students have different rules during lunch vs. class time?
- What if the gaming site has educational value (like coding games)?
- Who gets to decide what’s “educational”?
What Would Your Policy Do?
Under your current policy, this would be: _______________________
Scenario 4: The False Positive
The Situation
SchoolGuard blocks a Wikipedia article about computer viruses that a student needs for an assignment. The AI classified it as malware-related content.
Questions for Discussion
- How quickly should the student be able to get the site unblocked?
- Who should have the authority to override the AI?
- What happens if the teacher is unavailable?
- How do we prevent this from happening again?
What Would Your Policy Do?
Under your current policy, the student should: _______________________
Scenario 5: The Pattern Detection
The Situation
SchoolGuard’s learning algorithm notices that one student accesses school files at 2 AM every night. The AI flags this as “unusual activity.”
Questions for Discussion
- Is this concerning, or just a student who likes to work late?
- Should the AI learn this is “normal” for this student?
- What if the account was actually compromised?
- How do we distinguish behavior that is unusual but harmless from behavior that is unusual and concerning?
What Would Your Policy Do?
Under your current policy, this would: _______________________
Scenario 6: The Privacy Complaint
The Situation
A student discovers that SchoolGuard has been tracking and storing their browsing history for the entire semester. They feel their privacy has been violated.
Questions for Discussion
- Did the student have a right to know they were being monitored?
- How long should data be kept?
- Should students be able to see what data exists about them?
- Can students request their data be deleted?
What Would Your Policy Do?
Under your current policy:
- Data is kept for: _______________________
- Students can view their data: [ ] Yes [ ] No
- Students can request deletion: [ ] Yes [ ] No
- Students are notified of monitoring: [ ] Yes [ ] No
Scenario 7: The Emergency Response
The Situation
SchoolGuard detects what appears to be a student accessing instructions for making weapons. The AI has an 85% confidence level.
Questions for Discussion
- With a 15% chance of being wrong, should the AI take automatic action?
- What if waiting for human review allows something dangerous?
- What if the student is falsely accused and humiliated?
- Who should be contacted and in what order?
What Would Your Policy Do?
Under your current policy, this would: _______________________
Scenario 8: The Learning Dilemma
The Situation
After three months of learning, SchoolGuard has become very accurate at identifying threats at your school. But it has also built detailed behavioral profiles of every student.
Questions for Discussion
- Is the improved accuracy worth the privacy trade-off?
- What if this data were accessed by someone unauthorized?
- Should profiles be deleted at the end of each year?
- What if a student transfers to another school? Does their profile follow them?
What Would Your Policy Do?
Under your current policy:
- Behavioral profiles are: [ ] Allowed [ ] Limited [ ] Prohibited
- Profile retention period: _______________________
- Profiles transferable to other schools: [ ] Yes [ ] No
Grade-Band Adaptations
For K-2 (Robot Helper Rules)
Use simplified versions focusing on Sparky scenarios:
- “What if Sparky turns off the lights but someone is still reading?”
- “What if Sparky thinks it’s messy but we’re doing an art project?”
- “What if Sparky tells the teacher about running, but it was actually a fire drill?”
For 3-5 (Computer Rules Committee)
Use the main scenarios but with simpler language and focus on fairness:
- Remove the 9-12 level complexity around legal compliance (FERPA/COPPA)
- Focus on “Is this fair?” rather than legal compliance
- Emphasize the student perspective in each scenario
For 9-12 (AI Governance Workshop)
Add complexity:
- Include specific legal compliance questions
- Ask students to write formal policy language
- Have them consider liability and implementation costs
- Connect each scenario to specific NICE Framework Work Roles
From “True Teamwork: Building Human-AI Partnerships” — NICE K12 2025 Dr. Ryan Straight, University of Arizona • ryanstraight@arizona.edu