Ethics in Automated Security

Student Policy Worksheet (Grades 6-8)

Your Name: _______________________________ Date: _______________

Group Members: _____________________________________________________________

The Scenario

Your school is implementing “SchoolGuard,” an AI-powered network security system. Your advisory committee must decide what the AI can do automatically versus what requires human approval.

SchoolGuard can:

  • Block websites it identifies as dangerous
  • Monitor student digital activity for threats
  • Alert administrators about unusual behavior
  • Learn from patterns to improve detection

Part 1: Your Initial Position (5 minutes)

Before group discussion, write your own thoughts:

Question 1: Automatic Blocking

Should SchoolGuard automatically block websites it identifies as malicious, or require human approval first?

My position: [ ] Auto-block [ ] Require approval [ ] Hybrid approach

Main reason:


My biggest concern:


Question 2: Activity Alerts

Should SchoolGuard alert administrators about “unusual” student activity?

My position: [ ] Yes, alert [ ] No alerts [ ] Only for serious concerns

What should count as “unusual”?


Privacy concern I have:


Question 3: Adaptive Learning

Should SchoolGuard learn from student behavior patterns to improve?

My position: [ ] Allow learning [ ] Prohibit learning [ ] Allow with limits

Benefit I see:


Risk I worry about:


Part 2: AI Consultation (10 minutes)

Talk to SchoolGuard AI about your policy questions. Record what you learn.

Suggested Opening:

“You’re an AI security system being implemented at a middle school. I’m on the student advisory committee helping design policies. For each question I ask, share both your capabilities AND your honest limitations.”

AI Insights on Automatic Blocking

AI’s strongest argument for automation:


Limitation AI acknowledged:


Question this raised:


AI Insights on Activity Alerts

How AI would define “unusual”:


What AI said it CAN’T determine:


Privacy concern AI raised:


AI Insights on Adaptive Learning

How learning would improve protection:


Data AI would need to collect:


Trade-off AI identified:


Part 3: Group Policy Development (15 minutes)

Your Group’s Recommendations

Policy Area        | Our Recommendation | Our Reasoning | How We Address AI's Limitations
Automatic Blocking |                    |               |
Activity Alerts    |                    |               |
Adaptive Learning  |                    |               |

Stakeholder Perspectives

Students would say:


AI system would say:


Parents would say:


Teachers would say:


Administrators would say:


Part 4: Reflection (5 minutes)

Where did AI’s perspective change your thinking?



How did your policies balance AI capabilities with human values?



What insights emerged from human-AI collaboration that neither could develop alone?



What cybersecurity careers work on these kinds of decisions?



From “True Teamwork: Building Human-AI Partnerships” — NICE K12 2025 Dr. Ryan Straight, University of Arizona • ryanstraight@arizona.edu