Controversial Medical Procedures That Shocked the World

Kalterina - April 1, 2026

Medicine has always walked a razor’s edge between miracle and madness. Throughout history, doctors, surgeons, and scientists have pushed the boundaries of what’s possible — sometimes saving lives, sometimes causing harm, and almost always sparking fierce debate. These are the 50 most controversial medical procedures that left the world speechless, furious, or completely divided.

Lobotomy

Few medical procedures have earned as much infamy as the lobotomy — a surgery that involved severing connections in the brain’s prefrontal cortex. First developed in the 1930s by Portuguese neurologist Egas Moniz — and later adapted by American neurologist Walter Freeman into a transorbital version performed with an ice-pick-like instrument inserted through the eye socket — it was hailed as a revolutionary treatment for mental illness and even earned Moniz a Nobel Prize in 1949. At its peak, tens of thousands of lobotomies were performed across the United States alone, with patients as young as children subjected to the procedure.

The reality, however, was devastating. Instead of curing mental illness, lobotomies frequently left patients emotionally blunted, unable to function, incontinent, or in a permanent vegetative state. Rosemary Kennedy, sister of President John F. Kennedy, was lobotomized at age 23 and spent the rest of her life in an institution. By the 1960s, the rise of antipsychotic drugs made lobotomies obsolete, and they are now widely regarded as one of medicine’s darkest chapters — a cautionary tale about the dangers of medical hubris.

Female Genital Mutilation (FGM)

Female genital mutilation, or FGM, refers to the partial or total removal of external female genitalia for non-medical reasons. Practiced for centuries in parts of Africa, the Middle East, and Asia, it is deeply embedded in cultural, social, and sometimes religious traditions, viewed by practitioners as a rite of passage, a mark of purity, or a prerequisite for marriage. Estimates suggest that over 200 million women and girls alive today have undergone some form of FGM.

The medical consequences are severe and lifelong. FGM causes intense pain, severe bleeding, infection, complications during childbirth, and profound psychological trauma. The World Health Organization, the United Nations, and virtually every major medical body have condemned the practice as a fundamental violation of human rights. Yet debates continue about how to address it without demonizing the cultures in which it exists — making it one of the most ethically charged controversies in global health today.

Electroconvulsive Therapy (ECT)

Electroconvulsive therapy — sending electrical currents through the brain to trigger controlled seizures — sounds like something ripped from a horror movie, and its early history was indeed terrifying. In the 1940s and 50s, ECT was used without anesthesia or muscle relaxants, causing violent convulsions that frequently broke patients’ bones. It was also misused as punishment in psychiatric institutions, traumatizing countless patients. Its depiction in films like One Flew Over the Cuckoo’s Nest cemented its reputation as an instrument of torture.

What makes ECT so controversial today is that modern medicine has largely rehabilitated it. Contemporary ECT is performed under general anesthesia, is carefully controlled, and has proven remarkably effective for severe, treatment-resistant depression — particularly in patients who have not responded to any medication. Many psychiatrists argue it is one of the most effective treatments in all of medicine. Yet public stigma remains enormous, patient advocacy groups remain deeply divided, and the procedure continues to generate fierce ethical debate about consent, side effects such as memory loss, and its troubled past.

Thalidomide Prescription

Thalidomide was introduced in the late 1950s as a seemingly miraculous sedative and anti-nausea drug, aggressively marketed to pregnant women across Europe, Australia, Canada, and beyond as a safe treatment for morning sickness. The pharmaceutical company that produced it assured the public — and regulators — that it had been thoroughly tested. It hadn’t. By the early 1960s, a global catastrophe unfolded: an estimated 10,000 to 20,000 babies were born with severe birth defects, most notably phocomelia — the absence or severe shortening of limbs.

The thalidomide scandal fundamentally transformed how drugs are approved and regulated worldwide. In the United States, FDA pharmacologist Frances Oldham Kelsey refused to approve the drug, and her resistance — widely celebrated in hindsight — directly led to sweeping reforms in drug testing requirements. Thalidomide itself, however, refused to die quietly. It was later found to be effective against certain cancers and leprosy complications, and is still used today under strict protocols — a drug that caused one of history’s worst pharmaceutical disasters now quietly saving lives, wrapped in layers of contradiction.

Frontal Lobe Surgery for Homosexuality

For much of the 20th century, homosexuality was classified as a mental illness, and doctors across the Western world subjected gay men and women to a horrifying array of “treatments” aimed at conversion. Among the most extreme was surgical intervention — including lobotomies and hypothalamic lesioning — performed on the grounds that homosexuality was a neurological defect that could be excised or corrected. In Nazi Germany, gay men in concentration camps were subjected to experimental hormonal implants and castration in attempts to “cure” them.

The legacy of these procedures is one of profound shame for the medical profession. Not until 1973 did the American Psychiatric Association remove homosexuality from its Diagnostic and Statistical Manual of Mental Disorders. The surgeries left survivors with permanent neurological damage, psychological trauma, and shattered lives. Today, conversion therapy in any form — surgical or otherwise — is condemned by every major medical and psychological organization in the world, and has been banned in a growing number of countries and U.S. states, though it continues to be practiced in some parts of the world.

Radical Mastectomy

For much of the 20th century, the Halsted radical mastectomy was the standard treatment for breast cancer — a devastatingly aggressive surgery that removed not just the breast, but also the underlying chest muscles, lymph nodes, and surrounding tissue. Pioneered by surgeon William Stewart Halsted in the 1890s and rooted in the belief that the more tissue removed the better, the procedure left survivors with significant disfigurement, chronic pain, and long-term physical disability. For decades, questioning it was tantamount to medical heresy.

The shift came slowly and painfully. Clinical trials in the 1970s and 80s finally demonstrated that less radical surgeries — lumpectomy combined with radiation — produced survival outcomes equal to or better than radical mastectomy for many patients, with dramatically fewer side effects. The transition exposed how long patients had been subjected to unnecessarily mutilating surgery based on assumption rather than evidence. Today, radical mastectomies are performed only in specific, limited circumstances, but the episode remains a sobering reminder of how surgical tradition can persist long after the science no longer supports it.

Hydrotherapy and Prolonged Baths

In the early 20th century, psychiatric institutions enthusiastically adopted hydrotherapy — submerging patients in baths of cold or hot water for hours, sometimes days, restrained in canvas sheets to prevent escape. The stated goal was to calm agitation, treat mania, and manage psychosis. At its most extreme, “continuous bath therapy” kept patients immersed for up to a week at a time. It was presented as humane and scientific, a rational alternative to physical restraints and the chaos of overcrowded wards.

In practice, hydrotherapy was often a tool of control dressed up in medical language. Patients suffered hypothermia, skin damage, and severe psychological distress. There was never credible scientific evidence that it treated any mental illness effectively. By the mid-20th century, the practice was largely abandoned, swept aside by medication and evolving standards of psychiatric care. Yet it stands as a striking example of how the language of medicine can be used to legitimize what is, in effect, institutionalized abuse — a problem that did not begin or end with hydrotherapy.

Insulin Coma Therapy

Insulin coma therapy — inducing repeated hypoglycemic comas by injecting massive doses of insulin into psychiatric patients — was one of the most widely used treatments for schizophrenia from the 1930s through the 1950s. Developed by Austrian physician Manfred Sakel, it involved plunging patients into unconsciousness for hours before bringing them back with glucose. It was exhausting, dangerous, and frequently lethal: death rates from the procedure ranged between 1 and 2 percent. Patients who survived underwent dozens of comas over the course of months.

What makes the story especially troubling is how long the therapy persisted without solid evidence it actually worked. A landmark 1957 study finally demonstrated that insulin coma therapy produced outcomes no better than those achieved by simple rest and supportive care — meaning patients had been subjected to a life-threatening procedure for nothing. The therapy was quietly abandoned, but not before it had been administered to hundreds of thousands of patients worldwide. It remains one of the starkest examples in psychiatric history of a harmful treatment that survived on reputation rather than results.

Electroconvulsive Therapy on Children

While adult ECT has found renewed medical legitimacy, its use on children and adolescents remains one of the most contested areas in modern psychiatry. Some clinicians argue that severe, treatment-resistant depression or catatonia in young patients warrants ECT when nothing else has worked, citing cases where it has been genuinely life-saving. A small but growing body of research suggests it can be safe and effective in adolescents under the right circumstances and with proper consent protocols.

The opposition, however, is fierce and emotionally charged. Critics argue that the developing brain is uniquely vulnerable to the effects of induced seizures, that long-term outcomes in children are poorly understood, and that alternatives are not always fully exhausted before ECT is recommended. Stories of teenagers subjected to ECT against their wishes — sometimes at the behest of parents and institutions — have fueled advocacy campaigns demanding outright bans. The procedure exists in a legal and ethical gray zone in many countries, and the debate shows no signs of resolution.

Prefrontal Leucotomy

The prefrontal leucotomy — a surgical procedure that severed nerve fibers connecting the prefrontal cortex to the rest of the brain — was the precursor to the more notorious transorbital lobotomy and was itself performed on a vast scale. Developed in Portugal by Egas Moniz in 1935 and rapidly adopted worldwide, it was seen as a breakthrough for patients with severe psychiatric conditions. Hospitals in the United States, the United Kingdom, and continental Europe performed thousands of the operations, and Moniz won the Nobel Prize partly on the strength of optimistic (and poorly documented) early results.

What followed was one of medicine’s most sobering reckonings. Outcomes varied wildly and were rarely as positive as initially claimed. Many patients were left with permanent personality changes, cognitive impairment, and loss of initiative — their humanity, in effect, surgically diminished. A number of patients, including some of Moniz’s own cases, showed little to no improvement. In 1939, one of his patients shot him, leaving him partially paralyzed — a consequence many have grimly noted as darkly poetic. The leucotomy’s legacy has been the subject of intense historical and ethical scrutiny ever since.

Forced Sterilization

Forced sterilization programs — carried out in the name of eugenics — represent one of the most monstrous intersections of medicine and state power in modern history. From the early 1900s through the 1970s, governments in the United States, Canada, Sweden, Japan, Nazi Germany, and dozens of other nations systematically sterilized people deemed “unfit” to reproduce: the mentally ill, the intellectually disabled, the poor, prisoners, immigrants, and racial minorities. In the U.S. alone, an estimated 60,000 people were forcibly sterilized under state laws upheld by the Supreme Court in Buck v. Bell in 1927.

The horror extends far beyond the historical. Forced sterilizations of Indigenous women in Canada and the United States were documented as recently as 2018 and 2019. Reports of sterilizations of Uyghur women in China and migrant women in U.S. immigration detention facilities have emerged in recent years, sparking international outrage. That a practice so clearly violating bodily autonomy and human rights continues — in various forms, under various justifications — makes forced sterilization not merely a dark historical footnote but an ongoing human rights emergency.

Trepanation

Trepanation — drilling or scraping a hole into the skull — is the oldest surgical procedure known to humanity, with evidence dating back 8,000 years. Ancient skulls discovered on nearly every inhabited continent show signs of trepanation, many with bone regrowth indicating the patients survived. Historically, it was used to treat head injuries, epilepsy, mental illness, and what were presumably supernatural afflictions. The remarkable thing is that in many trauma cases — relieving pressure from a skull fracture, for instance — it was genuinely effective.

What makes it controversial in modern times is its revival by fringe practitioners who claim that trepanation increases blood flow to the brain and permanently elevates consciousness. In 1965, Dutch activist Bart Huges famously drilled a hole in his own skull with a dentist’s drill, claiming it gave him a sustained high akin to the expansive consciousness of childhood. A small number of adherents have followed. The medical establishment is unanimous: self-trepanation is extraordinarily dangerous, has no proven benefit beyond legitimate trauma treatment, and has killed and maimed its DIY practitioners. Yet the cult of trepanation as consciousness expansion refuses to fully die.

Human Experimentation in Nazi Germany

The medical experiments conducted by Nazi physicians on concentration camp prisoners between 1939 and 1945 rank among the most monstrous atrocities in the history of science. Prisoners — predominantly Jewish, but also Roma, Soviet POWs, and people with disabilities — were subjected without consent to hypothermia experiments, high-altitude pressure tests, infectious disease injections, surgical amputations without anesthesia, bone transplantation, sterilization experiments, and dozens of other procedures designed to serve military and ideological purposes. Josef Mengele, the “Angel of Death,” performed his experiments on twins at Auschwitz with a particular obsession and sadism.

The Nuremberg Doctors’ Trial of 1946-47 tried 23 defendants, most of them physicians; 16 were convicted, and seven were executed. More enduringly, it produced the Nuremberg Code — the first international standard for ethical human experimentation, establishing that voluntary consent is “absolutely essential.” The data from these experiments remains deeply controversial: some scientists have argued it should be used if it could save lives, while the overwhelming consensus holds that using data obtained through torture and murder is a moral obscenity that legitimizes the crimes from which it was extracted.

The Tuskegee Syphilis Study

Beginning in 1932, the United States Public Health Service enrolled 399 Black men with syphilis in Macon County, Alabama, into a study that was, from the start, predicated on deception. The men were told they were being treated for “bad blood” — a local term for various ailments — but were in fact deliberately left untreated so researchers could observe the natural progression of syphilis. When penicillin was established as an effective cure in the 1940s, it was withheld from the participants. The study continued for 40 years until a whistleblower leaked it to the press in 1972.

The Tuskegee study became a defining atrocity in American public health history — one whose shadow is still felt today in the deep distrust many Black Americans have toward the medical establishment. By the time it was exposed, 28 men had died directly of syphilis, 100 had died of related complications, 40 wives had been infected, and 19 children had been born with congenital syphilis. President Clinton formally apologized on behalf of the U.S. government in 1997. The study fundamentally reshaped research ethics and informed consent law in America — but its legacy of trauma and distrust is impossible to fully repair.

Chelation Therapy for Autism

Chelation therapy — using chemicals to remove heavy metals from the body — is a legitimate, FDA-approved treatment for genuine heavy metal poisoning such as lead or mercury toxicity. It becomes deeply controversial when aggressively promoted by certain alternative medicine practitioners as a treatment for autism. The theory, rooted in the thoroughly debunked claim that childhood vaccines containing thimerosal cause autism, holds that removing mercury from the body will reduce or eliminate autistic symptoms.

There is no credible scientific evidence that chelation therapy benefits autistic children in any way. There is, however, substantial evidence that it is dangerous: chelation removes not just heavy metals but also essential minerals like calcium and zinc, and can cause kidney failure, cardiac arrest, and death. In 2005, a five-year-old autistic boy named Abubakar Tariq Nadama died during chelation therapy in Pennsylvania. Multiple children have died or suffered serious harm from the procedure. Yet it continues to be offered by practitioners, purchased by desperate parents, and fiercely defended by anti-vaccine communities — a tragic collision of pseudoscience, parental grief, and medical exploitation.

Stem Cell Tourism

Stem cell therapy holds genuine promise in medicine — for certain conditions, with rigorously tested protocols, under close clinical supervision. What has exploded alongside legitimate research, however, is a global industry of unproven stem cell treatments offered at clinics across China, Mexico, Germany, and Panama, targeting patients with conditions like ALS, multiple sclerosis, cerebral palsy, spinal cord injuries, and Parkinson’s disease — people for whom conventional medicine has offered little hope. These clinics charge tens or hundreds of thousands of dollars for procedures that have not been proven safe or effective.

The results have frequently been catastrophic. Patients have developed tumors at injection sites, suffered life-threatening infections, experienced strokes, and died. In one documented case, a child treated with fetal neural stem cells developed multiple tumors in his spine and brain. Yet regulatory agencies have struggled to contain the industry because clinics operate across borders, exploit desperate patients, and wrap dangerous practices in the language of cutting-edge science. The stem cell tourism industry is perhaps the starkest contemporary example of how medical hope can be weaponized against the most vulnerable patients.

Radical Prostatectomy Overuse

Prostate cancer is one of the most common cancers in men, but it is also frequently one of the slowest-growing and least likely to cause death. Many prostate cancers discovered through PSA screening will never cause harm during a patient’s natural lifetime — a phenomenon doctors call “overdiagnosis.” The controversy centers on whether men diagnosed with low-risk prostate cancer are being pushed toward radical prostatectomy — surgical removal of the prostate — far more often than is medically justified, resulting in unnecessary incontinence and erectile dysfunction for thousands of men.

Decades of data have complicated the picture significantly. Multiple major studies have found that for low-risk prostate cancer, active surveillance — careful monitoring without immediate treatment — produces survival outcomes equivalent to immediate surgery or radiation, while sparing patients the side effects. Yet rates of radical prostatectomy remain high, driven by a combination of patient anxiety about any cancer diagnosis, physician training, institutional financial incentives, and the deeply human instinct to “do something.” The debate over prostate cancer treatment has become a centerpiece of broader discussions about overdiagnosis and overtreatment in modern medicine.

Vagotomy for Peptic Ulcers

For much of the 20th century, peptic ulcers were believed to be caused by stress, diet, and excess stomach acid — and surgical treatment, including vagotomy (cutting the vagus nerve to reduce acid production) and partial gastrectomy (removing part of the stomach), was standard care for severe cases. Hundreds of thousands of patients underwent these major operations, with significant risks of complications and lasting digestive problems. It was considered sound, evidence-based medicine.

Then in 1982, Australian physicians Barry Marshall and Robin Warren discovered that most peptic ulcers are caused by the bacterium Helicobacter pylori — a finding so radical that Marshall famously drank a solution of the bacteria himself to prove it. Their discovery eventually earned them the 2005 Nobel Prize in Medicine, and it meant that a condition requiring dangerous surgery could be cured with a simple course of antibiotics. The vagotomy story is one of medicine’s most instructive: a generation of patients underwent unnecessary operations because the field was solving the wrong problem, with enormous confidence and the best of intentions.

Routine Tonsillectomy

For much of the 20th century, tonsillectomy — surgical removal of the tonsils — was one of the most commonly performed surgeries in the Western world, carried out almost reflexively on children with recurrent sore throats. At its peak, roughly a million tonsillectomies were performed annually in the United States alone. The surgery was viewed as a benign, routine preventive measure: remove the tonsils, prevent future infections, done. It was performed so routinely that generations of children simply expected it as part of childhood.

The evidence, however, was always far shakier than the practice suggested. Research over the past few decades has consistently found that tonsillectomy offers only modest benefits for most children — reducing sore throat episodes somewhat, but not eliminating them, and at the cost of a surgical procedure that carries real risks of bleeding, anesthesia complications, and prolonged recovery. Guidelines have tightened considerably, and tonsillectomy rates have fallen dramatically. The episode is now frequently cited in debates about healthcare system incentives: tonsillectomies generated significant revenue, and some researchers have noted that rates correlated with reimbursement structures rather than clinical need.

Heart Transplantation (Early Era)

When South African surgeon Christiaan Barnard performed the world’s first human heart transplant on December 3, 1967, it was simultaneously one of medicine’s greatest triumphs and most controversial moments. The patient, Louis Washkansky, survived only 18 days before dying of pneumonia. Barnard’s second transplant patient survived 19 months. Dozens of cardiac centers around the world rushed to replicate the procedure in what became a publicity frenzy, but early survival rates were dismal and the ethics of transplantation — including questions about when exactly a donor was “dead” enough for their heart to be taken — sparked fierce philosophical and legal debate.

The controversy also exposed profound inequities: who received hearts, who donated them, and whether the excitement of surgical innovation was driving decision-making that was not in patients’ best interests. Heart transplantation eventually succeeded — the development of cyclosporine in the 1970s, which revolutionized rejection management, made it genuinely life-saving — but the early era is still studied as a cautionary tale about the pressure to innovate at the bedside before the science is ready, and about the ethical complexities of defining death in the age of transplantation.

DIY Gender-Affirming Surgery

Access to gender-affirming surgery remains severely limited for transgender individuals in many parts of the world — restricted by cost, insurance denial, long waiting lists, geographic barriers, and legal prohibitions. In this vacuum of access, some transgender individuals have attempted to perform surgical or quasi-surgical procedures on themselves: injecting black market silicone or cooking oil into breasts or buttocks, self-administering hormones without medical supervision, or attempting self-castration. The results have ranged from severe infection and disfigurement to death.

This phenomenon sits at a devastating intersection of medical gatekeeping, economic inequality, and social marginalization. Physicians and public health researchers who have documented these cases largely agree that the solution is not to further restrict gender-affirming care — which has extensive evidence supporting its efficacy in reducing gender dysphoria and suicide risk — but to make it accessible. Critics of gender-affirming care argue that the procedures themselves are harmful, particularly for minors, creating a political and ethical battle that has increasingly spilled into legislatures, courtrooms, and children’s hospitals. The debate is among the most divisive in contemporary medicine.

Fetal Surgery

Operating on a fetus still inside the womb sounds like science fiction, yet fetal surgery has been performed since the 1980s and is now used to treat a range of serious conditions including spina bifida, twin-to-twin transfusion syndrome, and congenital diaphragmatic hernia. In some cases, the fetus is partially delivered, operated on, and returned to the uterus. Successful fetal surgeries have saved lives and prevented severe disabilities that would otherwise have been inevitable. They represent a genuine medical breakthrough.

The controversies are multiple and deep. Fetal surgery exposes the mother to significant surgical risk for the benefit of the fetus — a fundamental ethical tension in which one patient’s welfare is weighed against another’s. Outcomes are highly variable, and the difference between a life-saving intervention and an iatrogenic catastrophe can be razor-thin. The field also opens profound questions about fetal personhood: if a fetus can be a surgical patient, what are the legal and ethical implications for abortion rights? Anti-abortion advocates have explicitly cited fetal surgery as evidence that a fetus is a patient with independent rights — a use of medical progress that has made some surgeons deeply uncomfortable.

Deep Brain Stimulation

Deep brain stimulation, or DBS, involves surgically implanting electrodes in specific regions of the brain and connecting them to a pulse generator implanted in the chest — essentially placing a pacemaker in someone’s brain. Originally developed for Parkinson’s disease, it has proven genuinely effective at reducing motor symptoms when medication fails. More recently, DBS has been explored as a treatment for severe depression, OCD, Tourette syndrome, epilepsy, and even anorexia nervosa — conditions where the electrical modulation of neural circuits shows promise but where evidence remains early and contested.

The ethical concerns are substantial. Unlike most medical treatments, deep brain stimulation can alter personality, emotional responses, and core aspects of identity in ways patients may not anticipate or desire. Some patients report feeling that their sense of self has been changed in disturbing ways. Questions arise about autonomy: if the brain is stimulated differently, is the consent given before surgery still valid for the person who exists after it? The prospect of DBS being used for psychiatric conditions — particularly in vulnerable populations — has generated intense debate among neuroscientists, bioethicists, and patient advocacy groups about the boundaries of neurological intervention.

Conversion Therapy

Conversion therapy — encompassing any attempt to change a person’s sexual orientation or gender identity through psychological or medical intervention — has been practiced by clinicians, religious organizations, and counselors for well over a century. Methods have ranged from talk therapy and prayer to aversion conditioning (pairing same-sex attraction with electric shocks or nausea-inducing drugs), hormonal treatment, and in the most extreme cases, surgical intervention. It has been applied to both adults and children, sometimes with family coercion and sometimes with institutional force.

The medical consensus today is unambiguous: sexual orientation cannot be changed, conversion therapy does not work, and it causes serious psychological harm — including elevated rates of depression, anxiety, self-harm, and suicide among survivors. More than 20 countries and over 20 U.S. states have enacted legislative bans on conversion therapy for minors. Yet an estimated 700,000 LGBTQ+ Americans have undergone some form of it, and it continues to be practiced in many parts of the world, defended by religious communities on grounds of free speech and religious liberty. The collision between medical ethics and religious freedom makes this one of the most politically combustible controversies in modern health policy.

Experimental Gene Therapy Deaths

Gene therapy — the idea of correcting genetic diseases by delivering functional genes into patients — was one of the most thrilling frontiers of late 20th-century medicine. It was also the setting for one of its most sobering disasters. In 1999, 18-year-old Jesse Gelsinger enrolled in a University of Pennsylvania gene therapy trial for a rare metabolic disorder and died four days after receiving the experimental treatment — the first acknowledged death from gene therapy. An investigation revealed serious ethical violations: adverse events had been underreported, and Gelsinger may never have been told about the deaths of monkeys in earlier trials.

The fallout was catastrophic for the field. Clinical trials were suspended across the United States, regulations were overhauled, and gene therapy entered a decade-long reputational crisis. Further disasters followed: children treated for a rare immune disorder developed leukemia from the viral vector used to deliver the therapeutic gene. Only in the 2010s did the field begin to recover its credibility, producing genuinely effective treatments for conditions like hemophilia, certain cancers, and inherited blindness. Today, gene therapy is resurging — but the shadow of Jesse Gelsinger remains a permanent reminder of how catastrophically wrong experimental medicine can go when financial pressure and scientific enthusiasm outpace rigor.

Electroshock Aversion Therapy

At the Judge Rotenberg Educational Center in Canton, Massachusetts, a device called the Graduated Electronic Decelerator has been used to deliver painful electric shocks to disabled students — many with autism or intellectual disabilities — as a behavior modification tool. Shocks have been administered for behaviors ranging from self-injury to failing to follow instructions. A 2020 FDA ban on the device for self-injurious or aggressive behavior was overturned in 2021 by a federal appeals court, which ruled that the FDA lacked the authority to ban an approved device for particular uses, and as of the early 2020s the center continued to use it on some residents.

The practice has been condemned as torture by the United Nations Special Rapporteur on Torture, the American Association on Intellectual and Developmental Disabilities, and disability rights organizations worldwide. Survivors and former residents have given harrowing testimonies about the psychological terror of anticipating shocks and the trauma of receiving them. Defenders of the center argue that for a small subset of patients with extreme self-injurious behavior, the shocks prevent greater harm. The case has become a flashpoint in the broader disability rights movement’s fight against the use of aversive interventions and the dehumanization of disabled people in institutional settings.

Bilateral Adrenalectomy for Breast Cancer

In the 1950s and 1960s, before effective anti-hormonal drugs were developed, some oncologists proposed and performed bilateral adrenalectomy — surgical removal of both adrenal glands — in women with advanced breast cancer. The rationale was that adrenal glands produce hormones, including estrogen precursors, that could fuel hormone-receptor-positive tumors. Remove the glands, reduce the hormones, potentially slow the cancer. The surgery was performed in women with metastatic disease who had already exhausted other options.

The consequences were profound and permanent. The adrenal glands are essential to life, producing cortisol and other hormones without which the body cannot regulate stress response, blood pressure, electrolytes, or immune function. Women who underwent bilateral adrenalectomy required lifelong hormone replacement therapy. The benefits in terms of cancer control were modest and inconsistent. The procedure was largely abandoned once effective drugs like tamoxifen became available — but not before many women with terminal cancer spent their remaining months recovering from a surgery that dramatically diminished their quality of life while offering uncertain benefit.

Penile Inversion Vaginoplasty

Penile inversion vaginoplasty is the most common surgical technique for gender-affirming bottom surgery in transgender women — using penile and scrotal skin to construct a vagina, clitoris, and vaginal opening. When performed by experienced surgeons, outcomes are generally positive, and research consistently finds high patient satisfaction and significant improvement in psychological wellbeing and gender dysphoria. It is considered medically necessary by every major medical organization for transgender women who meet established criteria.

The controversy is almost entirely political. Anti-transgender advocacy groups have weaponized descriptions of the surgery to inflame public opinion, frequently presenting it without medical context as inherently mutilating or abusive — particularly when discussing its applicability to minors (though bottom surgery for minors is extremely rare and not recommended by medical guidelines). State legislatures across the United States have proposed or passed laws restricting gender-affirming care, sometimes explicitly targeting surgical procedures. The result is a medical procedure supported by decades of outcomes data and professional consensus, now at the center of a culture war that has driven some providers to stop offering it entirely out of fear of legal liability.

Heroin as Medicine

In the early 20th century, heroin — diacetylmorphine — was freely sold as a non-addictive cough suppressant and pain reliever. The Bayer pharmaceutical company, which coined the trade name “Heroin” in 1898, marketed it as a wonder drug, even promoting it as a cure for morphine addiction. It was available in pharmacies without prescription and was given to children for coughs. Its catastrophic addictive properties, hidden in plain sight, took decades to be officially acknowledged and acted upon.

What makes heroin even more controversial today is its continued medical use in some countries. In the United Kingdom, Switzerland, Canada, and the Netherlands, pharmaceutical-grade heroin (diamorphine) is legally prescribed under strict protocols to people with severe opioid addiction who have not responded to methadone or buprenorphine maintenance. Clinical trials have found that heroin-assisted treatment significantly reduces illicit drug use, crime, and social marginalization in this population. The results are robust, yet the idea of prescribing heroin remains viscerally controversial — particularly in the United States, where it remains a Schedule I substance with no recognized medical use, a distinction researchers argue is more political than scientific.

Gastric Bypass for Teenagers

Bariatric surgery — including gastric bypass and sleeve gastrectomy — has increasingly been offered to severely obese adolescents in recent years, as rates of childhood obesity and obesity-related conditions like type 2 diabetes have climbed. For morbidly obese teenagers who have not responded to intensive lifestyle intervention, surgery can produce dramatic weight loss and resolution of life-threatening comorbidities. Some research suggests that outcomes in adolescents may actually be better than in adults, with more years of life in which to benefit from sustained weight loss.

Critics, however, raise urgent concerns about operating on still-developing bodies and minds. Adolescents cannot fully comprehend the lifelong implications of permanently altering their digestive anatomy — including the need for lifelong nutritional supplementation, potential complications decades later, and the psychological dimensions of radical body change during an already turbulent developmental period. There are also questions about whether surgery addresses the root causes of adolescent obesity or simply treats a symptom, and about whether social pressure to conform to body norms is driving families to accept surgical risks prematurely. The debate reflects larger tensions about autonomy, consent, and the appropriate treatment of obesity at any age.

Opioid Prescription Epidemic

In the 1990s, pharmaceutical company Purdue Pharma aggressively marketed OxyContin — a long-acting oxycodone formula — as a safe, non-addictive pain reliever, backed by a sales campaign that included payments to physicians and misleading claims about the drug’s abuse potential. Doctors, responding to a genuine public health push to treat pain more aggressively, prescribed opioids in quantities that had no precedent. Other pharmaceutical companies followed Purdue’s lead, and within a decade, the United States had flooded its communities with billions of opioid pills.

The result was the deadliest drug crisis in American history. By 2021, opioids were responsible for over 80,000 deaths annually in the United States. Purdue Pharma pleaded guilty to federal criminal charges. Members of the Sackler family, which owned Purdue, paid billions in settlements — while largely avoiding personal criminal accountability, generating enormous outrage. The opioid epidemic exposed systemic failures across the pharmaceutical industry, regulatory agencies, prescribing culture, and pain management training. It also sparked a brutal overcorrection: in the rush to restrict opioid prescriptions, millions of legitimate chronic pain patients found their medications cut off, forcing some into withdrawal or illicit drug markets.

Lobotomy of Children

While the lobotomy is already one of medicine’s darkest chapters, its application to children makes it still more disturbing. Walter Freeman, the American neurologist who popularized and aggressively promoted the transorbital “ice pick” lobotomy, performed the procedure on patients as young as four years old. Children with epilepsy, behavioral problems, childhood schizophrenia, and intellectual disabilities were subjected to a procedure whose effects on the developing brain were entirely unknown and never systematically studied. Freeman treated children with the same casual confidence he brought to adults, often performing multiple lobotomies in a single day.

The outcomes were predictably devastating. Children who survived lobotomies frequently showed severe cognitive impairment, emotional blunting, and arrested development — emerging from surgery as shadows of their former selves. Unlike adult patients, who had lived and developed before the procedure, lobotomized children never had the opportunity to reach their developmental potential. One of Freeman’s youngest patients, Howard Dully, received a transorbital lobotomy at age 12 at the request of his stepmother. Dully survived and, decades later, wrote a memoir documenting his lifelong search to understand what was done to him. His story became a symbol of the lobotomy’s most unconscionable chapter.

Human Guinea Pig Experiments at Willowbrook

Willowbrook State School in Staten Island, New York, was a residential institution for children with intellectual disabilities that became notorious in the 1960s and 70s for its horrific overcrowding and abuse. What made Willowbrook internationally infamous in medical ethics circles, however, was the deliberate infection of children with hepatitis — with the consent of their parents — by researchers led by Dr. Saul Krugman. The stated purpose was to study the disease and develop a vaccine. The research produced scientifically valuable results, including the distinction between hepatitis A and B.

But the ethics were deeply troubling. The children who were enrolled were a captive population with limited ability to advocate for themselves. Informed consent was questionable at best: parents who refused to enroll their children in the study were reportedly told that their children would not be admitted to the school — effectively coercing participation. Critics argued that deliberately infecting disabled children with a dangerous virus, however valuable the scientific knowledge gained, was a fundamental violation of research ethics. The Willowbrook experiments became one of the key cases driving the development of modern bioethics and the reforms in research oversight that followed in the 1970s.

Stomach Stapling Without Psychological Evaluation

In the 1990s and early 2000s, as bariatric surgery expanded rapidly, many patients underwent stomach-reducing procedures — stapling, banding, and eventually gastric bypass — with minimal or no psychological evaluation. The focus was almost entirely on the physical mechanics of weight loss, with little attention to the psychological conditions — depression, anxiety, trauma, binge-eating disorder, body dysmorphia — that frequently underlie severe obesity and that do not disappear when the stomach is smaller. Patients who had not adequately processed these issues before surgery were poorly prepared for the psychological demands of life afterward.

The consequences became increasingly well-documented: patients who regained weight after initially losing it, often through “transfer addiction” — substituting food with alcohol or other substances. Rates of alcoholism, substance abuse, depression, and suicide were found to be significantly elevated in post-bariatric patients compared to the general population. Some studies found that bariatric surgery was associated with an increased risk of suicide, not a decreased one. The field has since made psychological evaluation a standard part of the preoperative workup, but the history of performing the surgery without it raised urgent questions about what it means to treat obesity — and whether medicine understood it as a physical disease, a behavioral disorder, or both.

Electroconvulsive Therapy Without Anesthesia

Early electroconvulsive therapy, developed in Italy by Ugo Cerletti and Lucio Bini in 1938, was performed without any anesthesia or muscle relaxants. The patient was fully conscious when the electrical current was delivered, experienced a grand mal seizure — a full-body convulsion — and then lost consciousness. The convulsions were so violent that patients frequently broke their own bones: vertebral fractures were common complications. In its earliest form, ECT was less a treatment than a controlled trauma, inflicted on some of the most vulnerable people in society.

What makes this particularly troubling is not merely that the early method was dangerous — medicine often advances through trial and error — but that fully conscious ECT was performed in some institutions well into the 1950s, even after it was clear that anesthesia and muscle relaxants made the procedure dramatically safer and more humane. The adoption of safer technique was slow and uneven, particularly in underfunded psychiatric institutions. The history of ECT without anesthesia is a case study in how new medical technologies are adopted by institutions at radically different rates, and how the patients who suffer the consequences of slow adoption are disproportionately society’s most powerless.

Surgical Treatment of Masturbation

Few episodes in medical history are as bizarre and disturbing as the late 19th century’s obsession with masturbation as a disease requiring surgical cure. Physicians across the United States and Europe genuinely believed that masturbation caused epilepsy, blindness, mental illness, and a host of other maladies — a theory they labeled “masturbatory insanity.” The treatments they devised are almost incomprehensible: clitoral cauterization, clitoridectomy, circumcision, and even castration were performed on both male and female patients — including children — as cures for the supposed pathology.

John Harvey Kellogg — yes, the cereal inventor — was an enthusiastic advocate of circumcision as a cure for masturbation and recommended performing it on boys without anesthesia so that the pain would serve as a deterrent. Female circumcision was performed into the 1930s in the United States for masturbation and “excessive” sexual desire. The episode reveals how thoroughly medical authority can be captured by the prevailing moral anxieties of an era — and how patients, particularly children and women, pay the price when it is. It also raises uncomfortable questions about practices still performed today that future generations may view with similar horror.

Dialysis Rationing

When kidney dialysis was first developed in the early 1960s, it was scarce, expensive, and could save the lives of people who would otherwise die from kidney failure. But there weren’t enough machines for everyone who needed them. In Seattle, Washington, the world’s first dialysis center convened a committee of laypeople — nicknamed the “God Committee” — to decide which patients would receive the life-saving treatment and which would be left to die. Criteria included age, social worth, employment, family status, and community contribution. Those deemed less socially valuable were denied dialysis.

The God Committee’s decisions have been scrutinized by ethicists ever since as a case study in explicit healthcare rationing and its inevitable entanglement with social bias. People with disabilities, the elderly, the unemployed, and those with fewer social connections were systematically less likely to receive dialysis. Congress eventually passed legislation in 1972 extending Medicare coverage to virtually all Americans with end-stage renal disease — but the philosophical questions the God Committee raised never went away. Every healthcare system rations care in some form, usually implicitly rather than explicitly. The dialysis story forces the question of whether it is more honest and ethical to ration openly, with visible criteria, than to do so quietly through cost, access, and insurance coverage.

Experimental Antidepressants in the 1950s

The discovery of antidepressants in the late 1950s was largely accidental. Iproniazid, originally developed as a tuberculosis treatment, was noticed to have mood-elevating effects in TB patients. Imipramine, the first tricyclic antidepressant, was originally investigated as a potential antipsychotic. These drugs were rapidly deployed on a massive scale in psychiatric institutions, often on patients who could not meaningfully consent and without long-term safety data — because long-term safety data did not exist.

The side effect profiles of early antidepressants were significant and sometimes deadly: tricyclics in overdose were (and remain) extremely dangerous, and MAO inhibitors required strict dietary restrictions to avoid fatal hypertensive crises. Yet they were prescribed broadly, sometimes carelessly, to populations who were institutionalized and vulnerable. The legacy is complex: antidepressants genuinely transformed psychiatric treatment and have reduced enormous suffering. But the early era of their deployment also represents a massive uncontrolled experiment on a captive population — a pattern that would repeat itself throughout the history of psychiatric pharmacology.

Uterus Transplantation

Uterus transplantation — transplanting a donor uterus into a woman who was born without one or who lost hers to disease — is one of the most technically demanding and ethically contested surgical frontiers of the 21st century. The first successful uterus transplant births occurred in Sweden in 2014, and since then dozens of babies have been born worldwide to women who received donor uteruses. For women with absolute uterine infertility, it offers the possibility of carrying and delivering their own child — an experience many describe as profoundly meaningful.

The controversies are significant. Uterus transplantation is performed not to save the recipient’s life but to enable reproduction — making it elective in a way that other organ transplants are not, and requiring the recipient to undertake the risks of major surgery, immunosuppression, and subsequent pregnancy. Living donors, usually relatives, have given their uteruses — a procedure that removes a healthy organ from a healthy person and carries significant surgical risk. There are questions about whether medical resources devoted to uterus transplantation are well-allocated when adoption and surrogacy remain options. And there are disability rights perspectives that challenge whether the desire for biological motherhood constitutes sufficient justification for the risks involved. The field is advancing rapidly, and the ethical debate has not kept pace.

Prescription Amphetamines to Children

The prescription of stimulant medications — primarily amphetamine salts and methylphenidate — to children diagnosed with ADHD is one of the most widespread and contested practices in modern pediatric medicine. In the United States, roughly one in ten children has been diagnosed with ADHD, most of whom are prescribed stimulants, with rates in some regions significantly higher. Proponents argue that, for children with genuine ADHD, stimulants are among the most effective medications in all of psychiatry — reliably improving attention, impulse control, and academic performance, with a well-established safety profile.

Critics argue that ADHD is dramatically overdiagnosed, that normal childhood behavior is being pathologized and medicated, and that the long-term effects of stimulant use on the developing brain are insufficiently understood. They point to evidence that ADHD diagnosis rates correlate with state-level variation in insurance coverage and school discipline policies rather than any underlying difference in biological rates of the condition. Some researchers have raised concern that children born in the months before school enrollment cutoff dates are systematically more likely to be diagnosed with ADHD than their slightly older classmates — suggesting that relative immaturity, not a brain disorder, is driving many diagnoses. The debate sits at the uncomfortable intersection of neuroscience, education policy, pharmaceutical marketing, and parental anxiety.

Foreskin Restoration and Circumcision Ethics

Male circumcision — surgical removal of the foreskin — is one of the world’s most commonly performed surgical procedures, carried out for religious, cultural, and purported medical reasons on millions of boys annually. In the United States, it is routinely performed on newborns without their consent, a practice defended by many as medically beneficial and culturally normative. Major medical organizations, including the American Academy of Pediatrics, have concluded that the health benefits — modest reductions in UTI risk, HIV transmission, and penile cancer — outweigh the risks, though they stop short of recommending universal circumcision.

The intactivist movement — advocates for genital autonomy who oppose non-therapeutic circumcision of infants — argues passionately that circumcision removes sensitive tissue without consent, constitutes a human rights violation, and that the claimed medical benefits do not justify a permanent surgical alteration on a non-consenting minor. They draw explicit parallels with female genital mutilation, a comparison that defenders of male circumcision vigorously reject. The ethics of infant circumcision has become increasingly heated in Western countries, particularly as cultural norms shift and second-generation immigrants from non-circumcising cultures question the practice. It is perhaps the most common surgical controversy in modern medicine, normalized to the point that many people have never considered it as a controversy at all.

Forced Psychiatric Hospitalization

Involuntary psychiatric hospitalization — committing a person to a psychiatric facility against their will — is legal in virtually every country under certain circumstances, typically when a person is deemed an imminent danger to themselves or others and is unable to make rational decisions about their own care. It can be genuinely life-saving: people in acute psychotic episodes or at severe suicidal risk may be unable to access help voluntarily, and involuntary hospitalization can provide a bridge to treatment. Many people who have been involuntarily committed describe it, in retrospect, as the intervention that saved their lives.

The history and present reality of involuntary commitment, however, are deeply troubled. Psychiatric hospitals have been used throughout history to confine political dissidents, LGBTQ+ individuals, women deemed hysterical or inconvenient, and others who posed no genuine danger. The Soviet Union notoriously institutionalized political dissidents as “mentally ill.” Even in democratic countries, Black patients are significantly more likely to be involuntarily committed than white patients with equivalent presentations. Survivors of involuntary hospitalization frequently describe it as traumatic, retraumatizing, and counterproductive to recovery. The tension between genuine protection and coercive control is unresolvable in the abstract — and deeply consequential in practice.

Xenotransplantation

Xenotransplantation — transplanting organs from animals into humans — has been a medical dream for over a century, driven by the chronic shortage of human donor organs. In 2022, surgeons at the University of Maryland performed the first transplant of a genetically modified pig heart into a living human patient, David Bennett Sr., who survived for 60 days. The achievement was hailed as a landmark moment. Subsequent pig kidney transplants into human patients in 2023 and 2024 extended the field further, with researchers from NYU Langone reporting that a pig kidney functioned in a brain-dead human for over 30 days.

The controversies are both scientific and ethical. On the scientific side, the risk that animal viruses — particularly porcine endogenous retroviruses — could infect human recipients and potentially spread into the human population raises legitimate biosecurity concerns, though genetic engineering is aimed at mitigating this risk. Ethically, questions arise about the welfare of the pigs bred specifically for organ harvesting and about whether the desperation created by organ shortages — shortages that could be reduced by improved donor registration and opt-out systems — is pushing xenotransplantation faster than safety data justifies. Religiously observant Jews and Muslims also face specific concerns about receiving porcine organs. The field is advancing at extraordinary speed, and the ethical frameworks have not yet caught up.

Chemotherapy in Terminal Patients

Chemotherapy — the use of cytotoxic drugs to kill cancer cells — has extended and saved millions of lives. It has also, in many cases, been administered to terminal patients in the final weeks or months of their lives in ways that research suggests cause more harm than benefit. Studies have consistently found that a significant proportion of cancer patients receive chemotherapy in the last weeks of life, with associated increased rates of hospitalization, intensive care admission, and death outside the home — outcomes that cancer patients themselves, when surveyed, consistently say they want to avoid.

The drivers of this pattern are complex and uncomfortable. Oncologists are trained to treat cancer, not to stop treating it. Clinical trials historically excluded patients with poor prognoses, creating a literature biased toward those most likely to respond. Healthcare systems reimburse active treatment more generously than palliative or hospice care. Patients and families often struggle to accept that cure is no longer possible. And some oncologists, facing genuine uncertainty about prognosis, continue chemotherapy rather than transition to comfort care. The landmark 2010 Temel study found that metastatic lung cancer patients who received early palliative care alongside — rather than after — standard treatment actually lived longer and had better quality of life than those who received standard care alone. The implications are profound, but practice has been slow to change.

MK-Ultra

From 1953 to 1973, the CIA conducted a covert program of mind-control experiments known as MK-Ultra, in which unsuspecting American and Canadian citizens — including psychiatric patients, prisoners, sex workers, and members of the public — were subjected to LSD and other psychoactive drugs, hypnosis, electroconvulsive shock, sleep deprivation, and psychological torture by researchers funded by the U.S. government. The goal was to develop techniques for interrogation and covert influence. Subjects were never told they were part of an experiment. Some were dosed repeatedly over extended periods. At least one person died: Frank Olson, a government scientist who fell from a hotel window days after being secretly dosed with LSD.

Most MK-Ultra records were destroyed in 1973 on the orders of CIA Director Richard Helms, but enough survived to be exposed by congressional investigations in 1975 and 1977. The program revealed the extent to which U.S. government agencies were willing to experiment on their own citizens without consent, and highlighted the complete inadequacy of ethical oversight in classified government research. It also helped inspire major reforms in research ethics law. MK-Ultra has since become a touchstone of conspiracy culture, which has paradoxically made it harder to discuss its real, documented abuses without being dismissed as conspiratorial — one of history’s more unfortunate ironies.

Aversion Therapy with Electric Shocks for Addiction

Aversion therapy — pairing undesirable behaviors with painful or unpleasant stimuli to extinguish them — was widely practiced as a treatment for addiction and various behavioral conditions throughout the mid-20th century. For alcohol use disorder, patients were given alcohol while simultaneously administered electric shocks or emetic drugs to induce vomiting, with the goal of creating a conditioned aversion. For compulsive behaviors, shocks were delivered in response to the triggering stimulus. It was applied to addiction, homosexuality, pedophilia, and other behaviors that clinicians at the time classified as disorders requiring modification.

The effectiveness of aversion therapy has never been convincingly established for most conditions, and its ethical status is deeply contested. While some patients report that it helped them, the coercive conditions under which it was frequently applied — often in institutional settings, sometimes as a condition of release — make meaningful consent nearly impossible to assess. The psychological trauma of deliberately induced pain and nausea as medical treatment has been documented extensively by survivors. Today, aversion therapy is not recommended by any major addiction treatment guideline in the United States, though it persists in some forms in other countries and in some private treatment programs, where oversight is minimal.

Blood Doping in Sports Medicine

Blood doping — manipulating the body’s blood volume or oxygen-carrying capacity to enhance athletic performance — has a long and murky relationship with sports medicine. Techniques include autologous transfusion (withdrawing and later reinfusing one’s own blood), homologous transfusion (using another person’s blood), and administration of erythropoietin (EPO) to stimulate red blood cell production. Though banned by sporting organizations, these techniques were actively facilitated or quietly overlooked by team physicians from the 1970s through the 2000s, particularly in cycling, track and field, and Nordic skiing.

The Lance Armstrong scandal — which revealed systematic, team-organized doping carried out under the guidance of sports medicine practitioners — illustrated just how deeply blood doping had infiltrated elite sport. But the controversy goes deeper than cheating: EPO abuse in particular carries serious health risks, including a dramatically increased risk of blood clots, stroke, and cardiac events, especially when combined with the extreme physiological stress of endurance sports. A number of young cyclists died during this era of cardiac causes that researchers attributed to EPO abuse. The physicians who facilitated or ignored these practices violated the most fundamental ethical obligation of medicine — to do no harm — in service of athletic performance.

Antipsychotics in Dementia Patients

Antipsychotic medications — developed for schizophrenia and other serious psychotic disorders — have been widely prescribed to elderly patients with dementia to manage behavioral symptoms like agitation, aggression, and wandering. The practice became so common that, at its peak, roughly a third of nursing home residents with dementia in the United States were on antipsychotics, the vast majority of whom had no psychotic disorder at all. The drugs functioned as a chemical restraint — a way to make difficult, distressed patients easier to manage in understaffed facilities.

The evidence of harm is substantial and chilling. Studies have consistently found that antipsychotics significantly increase the risk of stroke, falls, aspiration pneumonia, and death in elderly dementia patients. The FDA issued a black box warning about this risk in 2005, and Medicare implemented quality measures aimed at reducing antipsychotic prescribing in nursing homes. Progress has been made, but prescribing rates in some facilities remain high, driven by inadequate staffing ratios, insufficient training in non-pharmacological behavioral management, and the intense practical pressures of managing dementia in institutional settings. It is a case study in how the needs of institutions can override the medical interests of the most vulnerable patients.

Electroconvulsive Therapy for Homosexuality

Homosexuality was classified as a mental disorder by the American Psychiatric Association until 1973, and in the preceding decades, ECT was among the “treatments” used to attempt to suppress or eliminate same-sex attraction. Gay men and women — some voluntarily seeking help, many coerced by families, courts, or institutions — were subjected to electroconvulsive shocks, sometimes paired with erotic images in aversion protocols, in attempts to rewire their sexual orientation. The procedures were performed by mainstream psychiatrists, at accredited hospitals, as accepted medical practice.

None of it worked. Sexual orientation did not change. What changed was that patients emerged traumatized, many with ECT-related memory impairment on top of the psychological damage of being told their fundamental selves were diseased. The stories of survivors paint a picture of profound institutional harm — harm inflicted not by rogue practitioners but by medicine operating within its official consensus. The history of ECT for homosexuality is one of the primary reasons why LGBTQ+ communities have maintained profound distrust of psychiatric institutions even as those institutions have dramatically reformed. Trust, once broken so completely, is not restored easily.

Human Head Transplantation

In 2015, Italian neurosurgeon Sergio Canavero announced plans to perform the world’s first human head transplant — technically a whole-body transplant, since the head containing the brain and identity of the patient would be attached to a donor body — with the operation initially slated for 2017. His proposed patient was Valery Spiridonov, a Russian man with a degenerative muscle disease. Canavero claimed he could reconnect a severed spinal cord using a chemical called polyethylene glycol and restore motor function. The announcement generated massive media coverage and immediate, near-universal condemnation from the neuroscience and surgical communities.

The scientific consensus is that human head transplantation — as proposed — is not currently possible. The human spinal cord cannot be reconnected with existing technology, and there is no credible evidence that polyethylene glycol can achieve functional reconnection. Even if the surgery could be performed, the recipient would almost certainly be left permanently quadriplegic, would face overwhelming immunological challenges, and would be subjected to unimaginable psychological trauma. A rehearsal procedure that Canavero claimed to have performed on a corpse in China in 2017 was widely dismissed as meaningless, and the operation itself never took place; Spiridonov eventually withdrew. Critics — including virtually the entire field of neurology — accused Canavero of scientific theater and of exploiting a desperately ill patient. The episode stands as a monument to the dark side of medical ambition: the point where innovation becomes spectacle and the suffering of real people becomes a platform.
