In times like ours—marked by political polarization, rising authoritarian movements, and the subtle power of algorithms—the question of why people obey authority feels more urgent than ever.

As a follow-up to my recent AI-generated interview with Stanley Milgram on Mind Tricks Radio, I wanted to share this blog post. That episode, which quickly became one of the most popular I’ve released, struck a chord with listeners—unsurprising, given the era we live in. Milgram, best known for his infamous Yale obedience experiments, reshaped our understanding of authority, conscience, and the unsettling ease with which ordinary people can become complicit in harm. Below, I’ve included the full transcript along with reflections on why his work remains so important today.

From Conformity to Obedience

Milgram’s journey began under the mentorship of Solomon Asch, whose conformity studies revealed just how quickly people bend to group pressure—even when the group is clearly wrong. Milgram pushed the question further: what happens when the influence doesn’t come from peers but from an authority figure? His answer, tested through the now-legendary shock experiments, was chilling.

The Yale Obedience Studies

Participants, believing they were aiding a study on learning, were instructed to administer electric shocks to a “learner” (an actor). Despite cries of pain, complaints of heart trouble, and eventual silence, 65% of participants went all the way to the maximum voltage. They trembled, sweated, and protested—but they obeyed. The key factor? The calm insistence of a man in a white lab coat: “The experiment requires that you continue.”

Milgram’s conclusion was stark: ordinary people, under structured authority, could commit extraordinary cruelty—not out of malice, but compliance.

Why It Still Matters

What makes Milgram’s work resonate today is not just the historical backdrop of Nazi Germany or the Eichmann trial that inspired his thinking. It’s how relevant these dynamics remain in our world of polarized politics, algorithmic control, and ambient surveillance.

Authority is no longer just a man in a lab coat. It’s a social media algorithm nudging our attention, a faceless bureaucracy diffusing responsibility, or a charismatic leader simplifying complex fears into slogans and scapegoats. As Milgram put it, obedience today may be “nudged quietly, efficiently, without realizing we’ve surrendered anything at all.”

The Conditions for Authoritarianism

Milgram outlined the psychological ingredients that make authoritarian systems possible:

  • Crisis and fear, which heighten the appeal of strong authority.

  • Legitimacy, where institutions frame obedience as duty.

  • Diffusion of responsibility, where individuals feel less accountable within a system.

  • Dehumanization, where the “other” is cast as less than human.

These forces rarely appear overnight. Like the incremental shocks in his study, systemic harm grows gradually, normalized step by step.

The Hope in Resistance

But Milgram also emphasized that obedience is conditional, not inevitable. About 35% of his participants resisted, refusing to administer the highest shocks. This “heroic minority” demonstrates that moral independence is possible. Resistance begins with awareness, critical thinking, and the courage to question—even when authority insists otherwise.

Final Reflection

Milgram left us with a warning and a hope: the line between good and evil does not run between nations or ideologies—it runs through each of us. Obedience can hold society together, but unquestioned obedience can just as easily tear it apart. In an era of political polarization, mass surveillance, and algorithmic influence, his work feels more urgent than ever.

As Milgram reminded us, “The defense against tyranny is not found in our institutions. It is found in the individual’s capacity to say no.”

FULL INTERVIEW TRANSCRIPT

Kaplan: Dr. Milgram, welcome to the Mind Tricks Radio Podcast.

Milgram: Thank you. It’s good to be here. Though I must say, the idea of being reanimated through artificial intelligence might raise a few ethical questions I’d love to design a study around.

Kaplan: I’m sure it would. You’re certainly no stranger to controversy when it comes to psychological inquiry.

Milgram: That is true indeed.

Kaplan: We’ll circle back to that point a bit later, but let’s begin with your background. Many of our listeners know you from the Yale obedience experiments, but few may realize the intellectual path that brought you there. Could you walk us through your early academic journey? 

Milgram:  I was born in the Bronx in 1933, the son of immigrant parents — Eastern European Jews, which, as you might imagine, informed my later interest in obedience, conformity, and the darker corners of human behavior. I studied political science at Queens College, and then — after a bit of wrangling — I was accepted into Harvard’s psychology program.

Kaplan: Who did you work with at Harvard?

Milgram: I worked under Solomon Asch, whose experiments on conformity made a lasting impression on me.

Kaplan: He designed the famous experiment called the line judgment task. Is that correct?

Milgram: Yes, exactly. Participants were placed in groups, shown a set of lines, and asked which line matched a reference line in length. All but one of the group participants were confederates. That is, they were in on the experiment. When they intentionally gave the wrong answer, many subjects conformed — they went along with the obviously incorrect group judgment.

Kaplan: So the study was about conformity?

Milgram: Yes. Many subjects agreed with the confederates that the wrong line was the right one. And this showed how easily individuals can yield to social pressure, even when it contradicts their own perceptions. That insight stuck with me.

Kaplan: So, that planted the seed for your famous Yale study?

Milgram: Yes, and I began wondering: if people would conform to something as trivial as line length, how much more would they conform under the weight of authority?

Kaplan: As you have noted, you were interested in these questions because of the Holocaust and Nazi Germany.

Milgram: Yes, don’t forget, this was the early ’60s, and the Eichmann trial had just concluded. Hannah Arendt would soon publish her controversial phrase, “the banality of evil”: that ordinary people, not monsters, carried out horrific acts during the Holocaust. It wasn’t sadism, necessarily — it was compliance. Bureaucracy. Duty. I wanted to understand how far obedience would go when the command came not from a group, but from a single, authoritative voice.

Kaplan: So the transition from group conformity to obedience was the next logical step?

Milgram: Exactly. Conformity is horizontal — peer pressure. Obedience is vertical — power. And I was interested in how ordinary people, Americans no less, would respond when instructed to harm another human being. People were saying that what happened in Germany couldn’t happen here. I wasn’t so sure it couldn’t.

Kaplan: Let’s talk about the experiment itself, the one that put you at the center of one of psychology’s most infamous ethical storms. For listeners who aren’t familiar, could you describe how you set it up?

Milgram: Of course. The setup was deceptively simple. I recruited participants under the pretense of a study on memory and learning at Yale University. When they arrived, they met a man they believed to be another participant — a mild-mannered gentleman who, in reality, was a confederate. The subject was always assigned the role of “teacher,” while the confederate played the “learner.”

The learner was strapped into a chair in a separate room, electrodes attached to his arms. The teacher was instructed to read word pairs and administer an electric shock whenever the learner gave an incorrect answer. The shocks began at 15 volts and increased in 15-volt increments up to 450 volts, labeled with increasingly ominous warnings: “Danger: severe shock” and eventually “XXX.”

Kaplan: But no shocks were actually delivered?

Milgram:  Correct. The learner wasn’t actually being shocked. No one was harmed in the experiment. But the teacher didn’t know that. What they did hear, at predetermined intervals, were recorded protests. The learner would begin to complain of heart trouble, beg to be let out, scream in pain — and eventually, fall completely silent.

Kaplan: Like they were unconscious or dead?

Milgram: Yes.

Kaplan: Did the teachers ever protest or resist giving the shocks?

Milgram: Oh yes, especially when the learner’s responses sounded more desperate. The authority figure—the experimenter in his white coat—had a set of standardized prompts delivered in a calm, firm tone: “Please continue. The experiment requires that you continue. It is absolutely essential that you continue. You have no other choice, you must go on.”

Kaplan: So what happened? How far did people actually go?

Milgram: That was the astonishing and frankly disturbing part. A full 65% of participants went all the way to 450 volts, the maximum setting. They believed they had rendered the learner unconscious, perhaps even harmed or killed him. Many showed extreme signs of stress—trembling, sweating, stammering, even nervous laughter. Some protested, a few pleaded to stop, but the majority still obeyed.

Kaplan: And just to clarify, they couldn’t see the learner, right? He was in another room?

Milgram: That’s right. In the original setup, what we called the “voice-feedback condition,” the teacher and learner were in separate rooms. The teacher couldn’t see the learner, but he could hear him through the wall. The learner would cry out in pain, complain about heart trouble, beg to stop the experiment, and then eventually fall completely silent.

Kaplan: And still, people continued?

Milgram: Yes. And what’s even more unsettling is that none of the participants attempted to check on the learner after he went silent. They looked to the experimenter for guidance, not the person they believed they were shocking. That was the psychological pivot point. Many subjects felt uncomfortable, deeply so, but the presence of a calm, authoritative figure in a lab coat telling them, “The experiment requires that you continue,” was often enough to override their moral instincts.

Kaplan: That kind of deference is chilling.

Milgram: Indeed. And what we later found was that proximity mattered. When we ran variations of the experiment—placing the teacher in the same room as the learner, or even requiring them to physically place the learner’s hand on the shock plate—rates of obedience dropped significantly. It was easier to follow orders when the consequences were out of sight, but when people had to look their victim in the eye or feel the tension in their arm, internal resistance was harder to suppress.

Kaplan: That really gets to the heart of it. These weren’t evil people. They were conflicted. They felt distressed, but they still complied.

Milgram: Precisely. And that, to me, was the most important and troubling lesson: that ordinary individuals, placed in a structured environment with authoritative pressure, can participate in acts they would otherwise find morally reprehensible.

Kaplan: So, I’m wondering, how surprised were you by the outcome of the experiment? Did you expect that outcome?

Milgram: Absolutely not. Before the experiment, I polled a group of Yale psychology faculty and students. They predicted that only a pathological minority—maybe one in a thousand—would go all the way. But my results showed something much more uncomfortable: that ordinary people, given the right conditions, are capable of extraordinary cruelty, not out of hatred, but out of obedience.

Kaplan: I guess that shows, ethical considerations aside for a moment, that simply asking people what they would do in a situation, or what they think others would do, is far less accurate than observing what they actually do when faced with a simulation of the real thing.

Milgram: Yes. What we found was that ordinary people, simply doing their jobs and following instructions from an authority figure, could become agents in a process of inflicting harm—even lethal harm—on another person.

Kaplan: So ordinary people, not evil people, just ordinary people doing their jobs, which includes inflicting harm.

Milgram: Yes, and this is very important to acknowledge. Their behavior and complicity in the act didn’t stem from sadism or aggression. It stemmed from a psychological shift—what I called the agentic state. This is where the person no longer sees themselves as acting autonomously. Instead, they see themselves as an instrument for carrying out another person’s wishes. Their own conscience is suspended.

Kaplan: So I suppose they weren’t defying their moral code. They were outsourcing it?

Milgram: Exactly. The authority assumes responsibility, and the subject displaces their own moral judgment. You can hear it in what they say afterward: “I was just doing what I was told.” That phrase has echoed through history in courtrooms and military tribunals.

Kaplan: As you mentioned, the subjects in your experiment—the teachers—didn’t just obey the commands without protest or without reaction.

Milgram: Yes, that shows that these weren’t immoral people. They understood right and wrong. It’s what I called moral strain. Many of them were sweating, trembling, clearly conflicted. But they continued because the structure around them—the laboratory setting, the presence of Yale, the scientific rationale—all lent the experiment legitimacy. It gave them a framework that justified their behavior, even when that behavior violated their values.

Kaplan: You wrote, “The extreme willingness of adults to go to almost any lengths on the command of an authority constitutes the chief finding of the study.” But ordinary people don’t typically like to think of themselves as capable of such evil acts.

Milgram: Indeed. There’s a comforting narrative that people who commit atrocities are monsters or mentally ill. But my study suggests a more disturbing possibility: that systemic evil can be carried out by psychologically normal individuals under the right pressures and authority structures. It echoes Hannah Arendt’s idea of the banality of evil—not monstrous intent, but bureaucratic detachment.

Kaplan: Back to Nazi Germany for a moment. There are obvious comparisons.

Milgram: I was cautious with that comparison, but yes, it was the starting point for my interest. I wanted to understand how someone like Adolf Eichmann could calmly oversee the deportation of millions—not with sadistic glee, but with clerical efficiency. My findings suggested that you don’t need a nation of sadists to commit atrocities. You just need a system where responsibility is diffused, orders come from above, and people stop thinking of themselves as moral agents.

Kaplan: So in some ways your study was a mirror, not just of the participants, but of all of us.

Milgram: That was my hope—to show that the capacity for obedience is not a character flaw, but a universal psychological mechanism. It’s part of the social glue that holds institutions together, but it’s also the mechanism that can allow harm to flourish unchecked.

Kaplan: Powerful and unsettling. Terrifying, actually.

Milgram: As it should be. What happened in the experiment, what happened in Nazi Germany, could happen in settings where you might least expect it if the right conditions are set. It doesn’t take a nation of evil psychopaths to engage in abhorrent behavior at a societal level.

Kaplan: You could probably say the same of people’s behavior in other genocides. Rwanda, for example.

Milgram: There’s no doubt about that.

Kaplan: Now Dr. Milgram, while your obedience studies are now considered foundational, they also sparked a firestorm of criticism, particularly around ethics. Looking back, did you expect the level of backlash you received?

Milgram: I anticipated some discomfort, certainly, but I was genuinely surprised by the intensity of the reaction. Critics argued that the experiment subjected participants to excessive stress and that the use of deception—telling them they were delivering real shocks—violated basic principles of informed consent. It’s worth noting that at the time, in the early 1960s, there were no formal ethical guidelines in place for psychological research. There was no IRB, no formal system of review. Researchers were largely trusted to monitor their own boundaries.

Kaplan: So your experiment actually helped pave the way for what would become modern research ethics?

Milgram: Yes. Indirectly, the reaction to my work—and later to other controversial studies like Zimbardo’s Stanford Prison Experiment—helped galvanize the movement to create institutional review boards. These boards now oversee human subject research, requiring clear informed consent, the right to withdraw at any time, minimization of harm, and a thorough cost–benefit analysis of psychological risk.

Kaplan: You mentioned Zimbardo. Of course, he conducted the famous Prison Experiment at Stanford. He and you were close colleagues.

Milgram: That’s right. We were good colleagues and friends. We were actually high school classmates in New York.

Kaplan: Well, what did you think of his experiment?

Milgram: Let me tell you something. After the Prison Experiment, I teased Phil that I was grateful for his work because it took the heat off me. Now I only had the second most unethical experiment of all time.

Kaplan: I’m sure that was a relief.

Milgram: Yes, it was. But in seriousness, his experiment was a brilliant contribution to the field.

Kaplan: So do you think your work or Zimbardo’s Prison Experiment would be approved by an IRB today?

Milgram: Probably not, at least not in their original forms. But I do think it’s important to consider what was gained and at what cost. My participants were distressed, undeniably, but in follow-ups most of them did not regret participating. In fact, many reported that the study caused them to reflect deeply on human nature and their own moral compass.

Kaplan: You did follow-ups?

Milgram: Yes, extensively. I conducted a post-experimental debriefing and later mailed out a follow-up questionnaire. The great majority of respondents said they were glad to have participated. A smaller number expressed negative feelings, but almost all found it educational and thought-provoking. Still, I won’t deny the ethical complexity. Deception in research is a powerful tool, but one that must be used with great care. And if my work helped accelerate the development of ethical oversight in research, I accept that as both a consequence and a contribution.

Kaplan: So you wouldn’t undo it?

Milgram: No. I believe the insights we gained into the nature of authority, the fragility of conscience, and the conditions under which harm is normalized are too important. But I also recognize that ethics evolve. What was acceptable in 1961 would not be acceptable now, nor should it be.

Kaplan: Dr. Milgram, things in our contemporary world feel at times profoundly unstable. Political and global tensions are high. There’s widespread polarization, not just in ideology, but in identity and belief. Many people seem to gravitate toward strong authority figures, parties, or movements, and some worry that we’re inching closer to conditions that could enable something more dangerous—authoritarianism, perhaps even totalitarian control or systemic violence. Let me ask you this: What are the psychological ingredients that make such systems possible, and how can we as a society recognize and resist those patterns before it’s too late?

Milgram: It’s an important question, and one that I wrestled with when I designed my own studies. Authoritarian systems do not arise in a vacuum. They emerge from a confluence of psychological, social, and structural conditions.

First, there is the presence of crisis or uncertainty—political unrest, economic instability, threats to national identity. These generate fear and confusion, which often make individuals more susceptible to authority. People seek order. They crave clarity. Charismatic leaders who promise certainty, even at the cost of complexity, become very appealing.

Second, there must be a structure of legitimacy. The authority figure must be perceived as morally or institutionally justified. It could be a party, a religion, a scientific framework—anything that frames obedience as not only acceptable but virtuous.

Third, there is the diffusion of responsibility. Individuals in a system are often given narrow roles with limited moral scope. They follow orders, process paperwork, enforce rules, and in doing so, they no longer feel responsible for the outcome. They’re just doing their job. That’s what I saw in my own experiments—the ease with which people shifted the burden of conscience onto the system.

Kaplan: And once that structure is in place?

Milgram: Then comes dehumanization. The enemy, the outsider, the “other” are labeled as dangerous, corrupt, or inferior. This makes it psychologically easier to inflict harm without guilt. That process is always gradual. Rarely does genocide begin with gas chambers. It begins with language, jokes, slogans, bureaucratic categories.

Kaplan: So these aren’t sudden events, they unfold slowly?

Milgram: Precisely. Evil tends to wear the mask of normalcy. Systems of harm are built incrementally, often under the guise of security, patriotism, or tradition. People don’t wake up one day and decide to support oppression. They acclimate to it bit by bit, just as my participants didn’t leap to 450 volts; they increased the shock in 15-volt increments, each one only slightly more than the last.

Kaplan: That metaphor is haunting. So then, how do we resist it?

Milgram: By recognizing it early. By teaching people that obedience is conditional, not inevitable. And by fostering environments where questioning authority is not only tolerated but encouraged.

Kaplan: So we can’t afford to be passive?

Milgram: No, because passivity, in the presence of escalating harm, is its own form of complicity. The ultimate message of my work is not that obedience is inevitable, but that it is conditional. And that means it can be resisted.

Kaplan: That message—that obedience is conditional—feels especially relevant now. But let me ask you this: What about individual leaders? You mentioned systems and structures, but how much do charismatic figures themselves drive this process? Someone like Hitler, for example—was he just a symptom of the system, or a key catalyst?

Milgram: Charismatic leaders often serve as lightning rods for obedience. They condense complex ideologies into simplified narratives, offer certainty, and become the face of a moral or national mission. In doing so, they don’t just lead, they amplify existing tensions and provide psychological permission for people to act on them.

In Hitler’s case, you had a nation reeling from economic hardship, national humiliation, and fear of social upheaval. Into that void stepped a man who offered clarity, purpose, and scapegoats. He didn’t invent the mechanisms of authoritarian control, but he galvanized them. His rhetoric, his performance—because much of it was performance—helped individuals see obedience not as submission, but as heroism.

Kaplan: So it’s not just the authority structure, it’s the symbolic power of the leader that activates it?

Milgram: Yes. The leader becomes the embodiment of the system, and for many followers, loyalty to the figure becomes loyalty to the idea—even to the point of self-sacrifice or moral compromise. The leader’s image fuses with identity. That’s when things become most dangerous: when people obey not out of fear, but out of devotion.

Kaplan: Which makes dissent feel not just like disobedience, but betrayal.

Milgram: Exactly. And in that psychological space, the seeds of authoritarianism can take deep root. That’s why it’s so essential to protect not just institutions, but the ability to question them, to challenge leaders—even those we admire.

Kaplan: Dr. Milgram, we’ve talked a lot about obedience, and frankly how disturbingly common it is. But not everyone in your study obeyed, right? Some people refused to go all the way. What can we learn from them?

Milgram: Yes, that’s correct. About 35% of participants in my original experiment defied the authority figure and refused to administer the highest voltage. Some stopped early, others hesitated at critical points, but they resisted. These individuals weren’t necessarily louder or more assertive in their everyday lives, but they displayed what I would call moral independence. They maintained a personal sense of agency even under pressure, and that, to me, is profoundly significant.

Kaplan: And of course, in Nazi Germany, there were resistance groups, including German citizens, who didn’t go along with Nazi ideas. Some actively undermined the regime, helping Jews escape or sabotaging Nazi efforts.

Milgram: Yes, and those individuals represent the very concept I referred to as the heroic minority. They are the exception that proves the rule and challenges it. What makes them remarkable is not that they had special training or superhuman courage, but that they retained a strong sense of personal responsibility, even when surrounded by a culture of complicity. They refused to hand over their conscience to the state. Some paid a terrible price for their disobedience, but their actions illuminate a critical truth: resistance is always possible, even when it’s not easy. And it often begins with a simple decision: I will not do this.

Kaplan: So resistance isn’t just a historical footnote, it’s a psychological capacity?

Milgram: Yes, and like any capacity, it can be developed. It grows stronger in environments that encourage critical thinking, ethical reasoning, and the courage to speak up. We must teach people, especially the young, that obedience is not inherently virtuous, and that disobedience, when principled, can be the highest form of integrity.

Kaplan: That’s a message we need now more than ever.

Milgram: Indeed. Because the most dangerous times are not those when evil is obvious, but when it’s disguised as duty, decorated in slogans, or hidden behind procedures. In those moments, the ability to say no may be the most important thing a person ever does.

Kaplan: Dr. Milgram, let’s talk a bit about our current era. It’s 2025. Technology is reshaping the world. Things look a bit different from the 1960s when you conducted the Yale experiments.

Milgram: For sure.

Kaplan: We now live in a time of mass surveillance, algorithmic decision-making, complex corporate hierarchies, and social media platforms that shape behavior on a global scale. I’m curious: What do you make of these modern forces? Do they change the picture of obedience and conformity as you understood it in the 1960s?

Milgram: Fascinating question. If I were conducting research today, I suspect my focus would shift from the man in the lab coat to the systems we don’t even recognize as authority. Because that’s what’s so striking about modern society: authority has become increasingly abstract, more diffused, more automated—and in some ways more dangerous.

Consider social media. Platforms like Facebook, Instagram, or TikTok do not bark orders. They don’t say, “You must do this,” but they wield tremendous power over attention, behavior, and social comparison. They shape norms invisibly through likes, shares, and viral trends. This creates a kind of distributed authority where conformity is driven by peer metrics, not explicit command.

Kaplan: So the pressure isn’t top-down, it’s lateral?

Milgram: Exactly. It’s horizontal conformity, but the psychological mechanisms are similar. People adjust their behavior to align with perceived norms, often without conscious awareness. And in that sense, social media may be the most efficient obedience machine ever created—one in which the authority is algorithmic, unaccountable, and ever present.

Kaplan: What about corporate hierarchies, especially in massive institutions where employees carry out narrow, segmented roles? Does that still fit your original ideas?

Milgram: Perfectly. In fact, I think it validates one of my most unsettling conclusions: that people feel less moral agency when their role is reduced to a single function within a larger system. The employee doesn’t ask whether the company’s decision is ethical. They just process the invoice, write the code, follow the workflow. The system is intentionally designed to distribute action in such a way that no one feels fully responsible. That is obedience by architecture.

Kaplan: And then there’s artificial intelligence. Algorithms and large language models that not only predict our behavior, but increasingly make decisions for us—medical diagnoses, criminal sentencing, even hiring choices. How do you see AI fitting into this framework?

Milgram: AI introduces a new kind of authority, one that is non-human but still perceived as objective or superior. If a human boss gives a questionable order, we might push back. But if an algorithm delivers a decision, people tend to defer, assuming it must be correct—or at least neutral. That’s obedience without a person: the agentic state without an agent. We trust the machine not because it has power, but because it has data. And that’s perhaps the most seductive kind of authority—the one that feels rational, even when it’s wrong.

Kaplan: And surveillance? I imagine you’d have a lot to say about that.

Milgram: Surveillance is the silent partner of obedience. When people believe they’re being watched, even subtly, their behavior changes. They self-censor, they align with perceived expectations. This is what Foucault described as the Panopticon effect, and it’s deeply compatible with the psychology of obedience. Today, surveillance is ambient—cameras, location tracking, biometric data. It creates an invisible authority where compliance is encouraged not through punishment, but through internalized control.

Kaplan: So you’re saying modern systems don’t need to enforce obedience, they just need to make us feel like we’re part of something or being observed?

Milgram: Precisely. The danger today isn’t that we’ll be coerced into obedience, it’s that we’ll be nudged into it—quietly, efficiently, without realizing we’ve surrendered anything at all.

Kaplan: Dr. Milgram, I’d like to shift the conversation a bit toward another area of social psychology that complements your work: the concept of diffusion of responsibility. This came out of research by John Darley and Bibb Latané in the late 1960s, where they found that people were less likely to help someone in distress when others were present. The idea being, the more people there are, the less any one person feels personally responsible to act. I’m wondering how you see this concept playing into your findings about obedience and conformity. Is there overlap?

Milgram: There is a great deal of overlap. In fact, I would say diffusion of responsibility is a core mechanism in obedience. It’s not simply that people are following orders; it’s that they believe they are no longer accountable for the outcome of their actions. In my experiments, participants often asked, “Who’s responsible if something happens to the learner?” And the experimenter would calmly say, “I am.” That one phrase—“I am responsible”—was remarkably powerful. It relieved the subject of moral agency and allowed them to continue administering shocks they found deeply distressing.

Kaplan: So you’re saying people weren’t necessarily comfortable with what they were doing, they just didn’t feel it was their decision anymore?

Milgram: Precisely. Once individuals enter what I described earlier as the agentic state, they see themselves not as autonomous beings but as instruments of a higher will. And in that psychological space, moral responsibility is passed upward—to the experimenter, to the institution, to the system.

This same mechanism is at play in situations of bystander inaction, though in a different form. Rather than passing responsibility upward, people pass it laterally: someone else will step in, someone else will say something. And of course, when everyone thinks this way, no one acts.

Kaplan: It’s chilling because the result is often the same. Harm occurs and everyone feels like it wasn’t their place to intervene.

Milgram: Exactly. Whether we’re talking about shocking a man in a laboratory, watching someone get harassed on a subway, or contributing to harmful decisions inside a corporation, the mechanism is the same. The more distributed the system, the easier it is to psychologically disconnect from the consequences of our actions—or inactions.

Kaplan: So what’s the antidote?

Milgram: It comes back to awareness and personal responsibility. We must train ourselves to ask: What is my role here? What am I contributing to, even passively? The moment we begin to see ourselves as moral agents, even within a system, we create the possibility of resistance, of intervention, of ethical clarity. Systems thrive on our silence, but responsibility is something we can reclaim if we’re willing to look honestly at our own place within the machine.

Kaplan: Dr. Milgram, I want to thank you for this fascinating, engaging, and timely conversation about obedience and authority. Your powerful, and yes, controversial work still resonates today, perhaps more than ever. Before we wrap, do you have any final words you’d like to leave us with—a reflection on the legacy of your work, and what you think is most important for us, as individuals and as a society, to keep in mind?

Milgram: Thank you. It’s been a privilege to have this kind of dialogue and to see my work viewed not only as historical, but still relevant to the questions we face now. If there’s one message I would leave with your listeners, it’s this: the line between good and evil does not run between nations or ideologies or political parties. It runs through the mind of every individual.

Obedience is not inherently wrong. It is often necessary. It keeps society functioning. But the danger arises when obedience becomes unquestioning—when people stop asking, “Should I?” and only ask, “May I?” or “Must I?”

My experiments did not reveal monsters. They revealed ordinary people caught in a structure that encouraged them to forget their own judgment, to trade in their conscience for compliance. That is the real cautionary tale.

The legacy I hope for is not fear, but awareness—that we recognize how easily the mechanisms of harm can be activated, and how critical it is to remain awake, skeptical, and morally engaged, especially in systems that reward silence. In the end, the defense against tyranny, whether large or small, is not found in our institutions. It is found in the individual’s capacity to say no.