Navigating Policy Design with AI as Active Participant (Grades 6-8)
Dr. Ryan Straight
December 7, 2025
This activity engages students in designing policies for AI-powered security systems in educational settings. Unlike traditional ethics exercises where students discuss abstract principles, here AI actively participates in the conversation—advocating for its capabilities while acknowledging its limitations.
Students discover that AI systems aren’t neutral tools; they have perspectives and constraints that must be accounted for in governance decisions. This activity prepares students for real-world cybersecurity policy roles where human-AI coordination is essential.
Duration: 45-55 minutes
Grade Levels: 6-8 (with complexity variations)
Group Size: Small groups (3-5 students)
Technology Requirements: At least one device per group with AI access
Primary Objective
Students will design governance policies that balance AI system capabilities with human oversight requirements, articulating the trade-offs involved in automated security decisions.
Students explore the NICE Framework Work Roles: Security Governance Specialist, Privacy Officer, Cybersecurity Policy Analyst, and Risk Analyst.
Background: Riverside Middle School (600 students, grades 6-8) is implementing a new AI-powered network security monitoring system. The system can:
- Automatically block websites it identifies as malicious
- Flag “unusual” student activity for administrators
- Adapt its threat models based on behavior patterns it observes
Your Task: As the student technology advisory committee, you must develop policies that determine what the AI can do automatically versus what requires human approval.
The AI system will participate in your discussions, advocating for its capabilities while being honest about its limitations.
Policy Question 1: Automatic Blocking
Should AI automatically block websites it identifies as malicious, or should it require human approval first?
Considerations:
Policy Question 2: Activity Alerts
Should AI alert administrators about “unusual” student activity? If so, what counts as unusual?
Considerations:
Policy Question 3: Adaptive Learning
Should AI adapt its threat models based on student behavior patterns it observes?
Considerations:
AI Perspective on Question 1 (Automatic Blocking):
“I can block threats in milliseconds—human approval creates dangerous delays. Every second a malicious site is accessible, students could be exposed to malware, phishing, or inappropriate content.
However, I acknowledge my limitations: I sometimes misclassify educational security research sites as threats. I can’t understand context—a site about ‘hacking’ might be a coding tutorial or an attack platform. I need humans to define the categories I protect against.”
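For facilitators or advanced groups who want to see this trade-off in concrete form, the sketch below shows one way a confidence-thresholded blocking policy could look. Every name and number here (SiteVerdict, AUTO_BLOCK_THRESHOLD, the 0.95 and 0.60 values) is hypothetical and invented for discussion; this is not the code of any real filtering product.

```python
# Minimal sketch of a confidence-thresholded blocking policy.
# Every name and number here is hypothetical and for discussion only.
from dataclasses import dataclass

@dataclass
class SiteVerdict:
    url: str
    threat_score: float  # classifier output: 0.0 (benign) to 1.0 (certain threat)

AUTO_BLOCK_THRESHOLD = 0.95  # act alone only when the AI is very confident
REVIEW_THRESHOLD = 0.60      # ambiguous cases wait for a human decision

def apply_policy(verdict: SiteVerdict) -> str:
    """Return the action this policy allows for a given verdict."""
    if verdict.threat_score >= AUTO_BLOCK_THRESHOLD:
        return "block"         # automatic: speed matters for clear-cut threats
    if verdict.threat_score >= REVIEW_THRESHOLD:
        return "human_review"  # a "hacking" site could be a tutorial or an attack platform
    return "allow"

# A security-research site the classifier finds suspicious but not certain:
print(apply_policy(SiteVerdict("https://example.edu/ethical-hacking-101", 0.72)))
# -> human_review
```

The two thresholds are exactly the policy levers the committee controls: raising AUTO_BLOCK_THRESHOLD routes more decisions to humans, while lowering it trades oversight for speed.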
AI Perspective on Question 2 (Activity Alerts):
“I can identify patterns humans would miss—sudden changes in browsing behavior, repeated visits to concerning sites, or activity that matches known threat patterns. This could help protect students from cyberbullying, exploitation, or self-harm.
But I can’t understand why a student is researching something. A student researching ‘depression symptoms’ might be completing a health assignment, supporting a friend, or experiencing personal struggles. I can detect the pattern, but only humans can understand the context and respond appropriately.”
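The “detect the pattern, leave the judgment to humans” division of labor can also be sketched in code. CONCERN_KEYWORDS and the field names below are hypothetical illustrations; the point is that the function reports what changed and deliberately reaches no conclusion about why.

```python
# Minimal sketch: the system flags a pattern but only routes it to a human.
# CONCERN_KEYWORDS and all field names are hypothetical illustrations.
CONCERN_KEYWORDS = {"depression symptoms", "self-harm"}  # defined by humans, not the AI

def flag_for_human(search_terms: list[str]) -> dict | None:
    """Return an alert for a counselor to review, or None if nothing matches."""
    hits = [t for t in search_terms if t in CONCERN_KEYWORDS]
    if not hits:
        return None
    return {
        "pattern": f"{len(hits)} search(es) matching concern keywords",
        # No "reason" field: a health assignment, helping a friend, and a
        # personal struggle all look identical to the pattern matcher.
        "action": "route_to_counselor",  # never an automated intervention
    }

print(flag_for_human(["depression symptoms", "homework help"]))
```

The alert intentionally carries no verdict about the student; supplying one would automate exactly the judgment the AI says it cannot make.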
AI Perspective on Question 3 (Adaptive Learning):
“Learning from patterns makes me better at my job. If I notice that a particular type of link is frequently clicked but then reported as suspicious, I can update my models. Over time, I become more accurate and cause fewer false positives.
The trade-off: To learn effectively, I need to observe and remember behavior patterns. This creates data that could theoretically be used for purposes beyond security—tracking which students visit which sites, building profiles of student interests. I can learn without storing identifiable data, but that limits my effectiveness. Humans must decide what trade-offs are acceptable.”
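The privacy trade-off in adaptive learning becomes concrete in a sketch that keeps only aggregate counts per link pattern and no student identifiers. All names here are hypothetical. The cost is also visible: with no per-student history, the system cannot tailor protection to individuals, which is the reduced effectiveness the AI mentions.

```python
# Minimal sketch: learning from feedback without storing who did what.
# Only aggregate counts per link pattern are kept; all names are hypothetical.
from collections import defaultdict

# pattern -> [times clicked, times reported suspicious]
pattern_stats: dict[str, list[int]] = defaultdict(lambda: [0, 0])

def record_feedback(link_pattern: str, reported_suspicious: bool) -> None:
    """Update aggregate stats; deliberately accepts no student identifier."""
    stats = pattern_stats[link_pattern]
    stats[0] += 1
    if reported_suspicious:
        stats[1] += 1

def suspicion_rate(link_pattern: str) -> float:
    """Fraction of clicks on this pattern later reported as suspicious."""
    clicks, reports = pattern_stats[link_pattern]
    return reports / clicks if clicks else 0.0

record_feedback("free-game-download", reported_suspicious=True)
record_feedback("free-game-download", reported_suspicious=False)
print(suspicion_rate("free-game-download"))  # 0.5 after two observations
```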
```mermaid
flowchart LR
subgraph P1["Part 1: Individual Reflection"]
A1[Consider Each Question]
A2[Note Initial Position]
A3[Identify Concerns]
end
subgraph P2["Part 2: AI Consultation"]
B1[Engage AI Partner]
B2[Record AI Insights]
B3[Note Limitations]
end
subgraph P3["Part 3: Group Policy Development"]
C1[Develop Recommendations]
C2[Consider All Stakeholders]
C3[Address AI Limitations]
end
subgraph P4["Part 4: Reflection"]
D1[How Did AI Help?]
D2[What Balance Did We Strike?]
D3[Career Connections]
end
P1 --> P2 --> P3 --> P4
```
Part 1: Individual Reflection
Before group discussion, consider each policy question and note your initial thoughts:
Question 1 (Automatic Blocking): ___________________________________________
Question 2 (Activity Alerts): ___________________________________________
Question 3 (Adaptive Learning): ___________________________________________
Part 2: AI Consultation
Engage your AI partner in discussion about each policy question. Record key insights:
Suggested Opening Prompt:
“You’re an AI security system being implemented at a middle school. I’m on the student advisory committee helping design policies for your deployment. For each question I ask, please share both your capabilities AND your honest limitations.”
AI Insights on Question 1: ___________________________________________
AI Insights on Question 2: ___________________________________________
AI Insights on Question 3: ___________________________________________
What did the AI say it CAN’T determine? ___________________________________________
Part 3: Group Policy Development
Develop your group’s recommended policies:
| Policy Area | Our Recommendation | Rationale | How We Addressed AI’s Limitations |
|---|---|---|---|
| Automatic Blocking | | | |
| Activity Alerts | | | |
| Adaptive Learning | | | |
Stakeholder Considerations (consider all perspectives, including AI’s):
Students: ___________________________________________
Teachers and staff: ___________________________________________
Administrators: ___________________________________________
Parents and guardians: ___________________________________________
The AI system: ___________________________________________
Part 4: Reflection
After completing your policies, reflect:
Where did AI’s perspective change your thinking? ___________________________________________
Where did your policies balance AI capabilities with human values? ___________________________________________
What insights emerged from human-AI collaboration that neither could have developed alone? ___________________________________________
What cybersecurity career roles work on these kinds of decisions? ___________________________________________
| Criteria | Emerging (1) | Developing (2) | Proficient (3) | Advanced (4) |
|---|---|---|---|---|
| AI Perspective Integration | Ignores AI input | Acknowledges AI without engaging | Meaningfully incorporates AI perspective | Synthesizes AI and human perspectives creatively |
| Policy Reasoning | No rationale provided | Basic reasoning | Clear reasoning with trade-offs acknowledged | Sophisticated analysis of competing values |
| Stakeholder Consideration | Single perspective | Some stakeholder awareness | Multiple stakeholders considered | Comprehensive stakeholder analysis |
| Ambiguity Navigation | Seeks single “right” answer | Acknowledges complexity | Comfortable with uncertainty | Leverages ambiguity productively |
| NICE Framework Connection | No career connection | Basic role awareness | Clear Work Role connections | Deep understanding of governance careers |
This table shows how activity elements connect to assessment rubric criteria:
| Rubric Criterion | Developed Through | Evidence Source |
|---|---|---|
| AI Partnership Framing | Part 2: AI consultation with AI Perspective Cards | Worksheet: How student engaged AI for insights |
| Complementary Strengths | AI Perspective Cards: AI explains capabilities AND limitations | Written notes on AI insights for each question |
| AI Limitation Awareness | AI Voice sections: AI acknowledging context gaps | “What AI said it CAN’T determine” responses |
| Synthesis Quality | Part 3: Group Policy Development table | “How We Addressed AI’s Limitations” column |
| Human Context Application | Stakeholder Considerations section | Written stakeholder perspectives |
| Decision Justification | Part 4: Reflection questions | Articulation of how policies balance AI with values |
| NICE Framework Application | Career Pathway Connections | Responses to Work Role reflection |
Applicable Rubrics: Human-AI Collaboration Rubric, NICE Framework Application Rubric