The debate over artificial intelligence is happening in every room that matters: courtrooms, hospital ethics boards, congressional hearings, philosophy departments, and the engineers’ open-plan offices where the next generation of systems is being built. Students in a Current Events class need frameworks for thinking through questions that are moving faster than legislation; they need practice applying ethical reasoning to genuinely unresolved problems; and they need to feel the weight of those problems before they can argue about them with any real conviction.
Fiction has always been one of the most efficient vehicles for that feeling. The four episodes in this guide do the work that a policy brief cannot: they put a face, a name, and a set of relationships onto abstract questions about consciousness, rights, and human attachment. Rod Serling knew this in 1959; the writers of The Next Generation and Deep Space Nine were working in his tradition three and four decades later.
I designed this as a four-episode arc for Current Events Debate, using only The Twilight Zone and the two Star Trek series available on Paramount+. Each episode anchors a different debate theme; together, they build toward a structured culminating debate on whether machines that exhibit consciousness or emotion deserve legal and moral protection.
The Arc
The four episodes are best used in sequence, because each one complicates the one before it.
“The Mighty Casey” (The Twilight Zone, Season 1, 1960) opens the unit by asking the most fundamental question: what is it that makes a being human? Casey is a robot pitcher built to be perfect at baseball; when his creator gives him a heart, a device intended to simulate human emotion, Casey loses all competitive drive. He cannot bear to humiliate a batter. He cannot bring himself to win at someone else’s expense. Unable to pitch against anyone, he walks away from baseball to help people instead.
Serling’s sardonic point is hard to miss: the qualities we associate most closely with our humanity (compassion, conscience, empathy) are liabilities in a system designed around winning. The episode is short and accessible, and it works as a provocation before students have had to engage with any of the harder philosophical material. The real hook for a Current Events class is the obvious parallel to AI in the workplace; by 2024, major employers were openly estimating that AI would displace hundreds of thousands of human workers, and the question of what we owe those workers (and what we owe the machines doing the displacing) is not hypothetical.
“It’s Only a Paper Moon” (Deep Space Nine, Season 7, 1998) takes the unit somewhere more uncomfortable. Nog, a young Ferengi officer who lost his leg in a brutal ground battle, retreats into a holosuite and refuses to leave. He builds a life around Vic Fontaine, a self-aware holographic lounge singer who happens to be extraordinarily perceptive about human pain. The relationship is real in every functional sense; it produces genuine healing. Nog gets better. The complication is that he is getting better in a fictional world, and the people who care about him are watching from outside.
Vic resolves the situation by turning himself off, forcing Nog back into reality. It is one of the most quietly remarkable acts in the series, and it generates the episode’s central question: if an AI provides genuine comfort, and real healing, does it matter that the relationship is not, in some technical sense, real? For Current Events, this maps onto a genuinely active debate. Apps like Replika and Character.ai have millions of users who report meaningful emotional bonds with their AI companions; researchers are divided on whether this is therapeutic or harmful; and no regulatory framework yet exists to answer the question.

“The Measure of a Man” (The Next Generation, Season 2, 1989) is the unit’s formal philosophical center, and it is the episode that most directly addresses the question of rights. Commander Data, Starfleet’s only android officer, is ordered to submit to disassembly for scientific study. He refuses. A legal hearing is convened to determine whether he is property, which cannot refuse, or a person, who can. Captain Picard’s closing argument does not prove that Data is conscious; it argues that we cannot prove he is not, and that the decision to declare him property rather than person says more about us than it does about him.
Thirty-five years later, that argument is being made in actual courtrooms. The European Union’s AI Act, agreed in late 2023 and formally adopted in 2024, is the first comprehensive AI regulation in history; it does not grant AI legal rights, but it does impose legal obligations on creators and developers based on risk level, implicitly acknowledging that AI behavior has moral weight. Federal courts have heard arguments about whether AI-generated work can be copyrighted. Philosophers including Peter Singer have published seriously on AI moral patiency. Snodgrass’s 1989 script is not dated; if anything, it is better framed for the current moment than it was when it aired.
“The Lonely” (The Twilight Zone, Season 1, 1959) closes the unit on the hardest question of all. James Corry, serving a solitary sentence on an asteroid nine million miles from Earth, is given a robot companion named Alicia by a sympathetic supply captain. Months pass. Corry falls in love with her. When he receives a pardon and learns he cannot bring Alicia home due to the ship’s weight limit, he refuses to leave. The captain resolves the problem by shooting Alicia; as her mechanical components are exposed, he tells Corry, quietly, that she was just a machine.
The episode asks whether that is true, and whether it matters. The answer Serling seems to offer in his closing narration is that it does not matter whether Alicia was “really” conscious; what matters is what the relationship meant to Corry, and whether the captain’s act was mercy or something closer to violence. For a debate about AI companions and emotional attachment, there is no better twenty-five minutes of television. Serling was asking in 1959 what South Korea’s government is studying in policy documents today.
How I Use This in a Debate Course
I run the unit over about two weeks. Students watch each episode with the viewing guide (linked below), which provides episode context, key vocabulary, specific scenes to watch for, a real-world connection, and tiered discussion questions. The questions move from comprehension through application to synthesis; by the time we reach “The Measure of a Man,” students are already fluent enough in the vocabulary to apply philosophical frameworks without prompting.
The Socratic seminar format works better than formal debate for the first three episodes, because those episodes resist clean positions. “The Mighty Casey” is funny and disarming; it lets students practice the vocabulary of sentience and personhood without the stakes feeling too high. “It’s Only a Paper Moon” is emotionally demanding in a way that usually produces the seminar’s most honest conversation of the unit.
“The Measure of a Man” is where I introduce formal position statements for the first time: students write a brief (two to three sentences) defense of a position before seminar, then reflect afterward on what changed and why. The delta between those two documents is consistently the most revealing student writing of the unit.
The culminating debate uses “The Lonely” as its emotional anchor and asks students to defend one of two positions: that machines that exhibit consciousness or emotion deserve legal and moral protection regardless of whether those qualities can be proven, or that extending such protections is a category error that ultimately harms humans. The handout includes full debate requirements; I typically allow a week of preparation, encourage students to search for supporting current events, and run the debate as a structured academic controversy rather than a competitive format.
A Note on the Episode Selection
I want to be transparent about why “The Lonely” concludes the unit rather than “The Measure of a Man,” even though “Measure” is the more formally philosophical episode. The legal question, while important, is ultimately answerable; courts will eventually decide what AI personhood looks like, and students tend to sense that resolution is coming even if it hasn’t arrived. The emotional question that Serling raises in “The Lonely” does not have that same horizon. Is love for a machine real? Does it create obligations on anyone’s part? What does it cost to destroy something someone loves, even if it was, technically, just a machine? Legislation cannot resolve those questions, and a debate class should not pretend otherwise.
That said, I ran the unit in a different sequence this past year for block scheduling, and the experiment is worth sharing. The two Twilight Zone episodes ran back to back on Days 1 and 2, “The Mighty Casey” followed immediately by “The Lonely,” so that Serling’s two emotional anchors hit together before any formal philosophical vocabulary entered the room. Day 3 brought “The Measure of a Man,” with students now carrying both Casey’s conscience problem and Corry’s grief into Data’s hearing; the legal framework landed with noticeably more weight because the feeling had already arrived first. Day 4 was a dedicated research day, with students searching for current AI legislation, companion app studies, and workplace displacement data. Day 5 closed the viewing with “It’s Only a Paper Moon,” which worked remarkably well as a finale for the episode sequence; Nog’s retreat into a fictional relationship and Vic’s quiet self-sacrifice hit differently after students had already spent a week sitting with the question of what we owe machines and what machines can mean to us. Day 6 returned to research and preparation, and Day 7 was the debate.
That reordering shifted the unit’s center of gravity in ways I didn’t fully anticipate. Opening with both Serling episodes pushed students toward a question I hadn’t foregrounded in the original sequence; specifically, whether they already extend a kind of informal moral recognition to machines without consciously admitting it. Several students arrived at the first research day asking a question I hadn’t put in the handout: whether they were obligated to say please or thank you to an AI, and what it revealed about them if they did or didn’t. That instinct, the felt sense that politeness might be owed even to a machine, turned out to be the most generative thread in the unit, sitting exactly at the intersection of sentience, moral patiency, and the quiet ways our behavior already concedes more than our philosophy has caught up with.
The Student Handout
The downloadable packet below includes full episode summaries (for students who need pre-viewing context or who miss a class), the complete vocabulary list with definitions, episode-by-episode “what to watch for” viewing guides, real-world connection boxes tying each episode to 2020–2025 debates, tiered discussion questions for each episode, and the culminating debate prompt with full requirements and a post-debate reflection prompt. It is formatted for Paramount+ availability; all four episodes are accessible there. (TPT Listing)
The handout is licensed under CC BY-SA 4.0; adapt it for your own class, credit BFWClassroom.com, and share it forward.
“The Mighty Casey” aired June 17, 1960. “The Lonely” aired November 13, 1959. Both were written by Rod Serling. “It’s Only a Paper Moon” aired December 30, 1998, written by Ronald D. Moore. “The Measure of a Man” aired February 13, 1989, written by Melinda M. Snodgrass. All four are streaming on Paramount+.