Nautilus
The Electron Is So Round That It’s Ruling Out Potential New Particles
Fri, 14 Apr 2023 17:02:15 +0000
If the electron’s charge weren’t perfectly round, it could reveal the existence of hidden particles. A new measurement approaches perfection.

The post The Electron Is So Round That It’s Ruling Out Potential New Particles appeared first on Nautilus.

Imagine an electron as a spherical cloud of negative charge. If that ball were ever so slightly less round, it could help explain fundamental gaps in our understanding of physics, including why the universe contains something rather than nothing.

Given the stakes, a small community of physicists has been doggedly hunting for any asymmetry in the shape of the electron for the past few decades. The experiments are now so sensitive that if an electron were the size of Earth, they could detect a bump on the North Pole the height of a single sugar molecule.

The latest results are in: The electron is rounder than that.

The updated measurement disappoints anyone hoping for signs of new physics. But it still helps theorists to constrain their models for what unknown particles and forces may be missing from the current picture.

“I’m sure it’s hard to be the experimentalist measuring zero all the time, [but] even a null result in this experiment is really valuable and really teaches us something,” said Peter Graham, a theoretical physicist at Stanford University. The new study is “a technological tour de force and also very important for new physics.”

Poaching Elephants

The Standard Model of Particle Physics is our best roster of all the particles that exist in the universe’s zoo. The theory has held up exceptionally well in experimental tests over the past few decades, but it leaves some serious “elephants in the room,” said Dmitry Budker, a physicist at the University of California, Berkeley.

For one thing, our mere existence is proof that the Standard Model is incomplete, since according to the theory, the Big Bang should have produced equal parts matter and antimatter that would have annihilated each other.

In 1967, the Soviet physicist Andrei Sakharov proposed a possible solution to this particular conundrum. He conjectured that there must be some microscopic process in nature that looks different in reverse; that way, matter could grow to dominate over antimatter. A few years before, physicists had discovered such a scenario in the decay of the kaon particle. But that alone wasn’t enough to explain the asymmetry.

Ever since then, physicists have been on a hunt to find hints of new particles that could further tip the scale. Some do so directly, using the Large Hadron Collider—often touted as the most complicated machine ever built. But over the past several decades, a comparatively low-budget alternative has emerged: looking at how hypothetical particles would alter properties of known particles. “You see footprints [of new physics], but you don’t actually see the thing that made them,” said Michael Ramsey-Musolf, a theoretical physicist at the University of Massachusetts, Amherst.

Our mere existence is proof that the Standard Model is incomplete.

One such potential footprint could appear in the roundness of the electron. Quantum mechanics dictates that inside the electron’s cloud of negative charge, other particles are constantly flickering in and out of existence. The presence of certain “virtual” particles beyond the Standard Model—the kind that could help explain the primordial supremacy of matter—would make the electron’s cloud look slightly more egg-shaped. One tip would have a bit more positive charge, the other a bit more negative, like the ends of a bar magnet. This charge separation is referred to as the electric dipole moment (EDM).

The Standard Model predicts a vanishingly tiny EDM for the electron—nearly a million times smaller than what current techniques can probe. So if researchers were to detect an oblong shape using today’s experiments, that would reveal definitive traces of new physics and point toward what the Standard Model might be missing.

To search for the electron’s EDM, scientists look for a change in the particle’s spin, an intrinsic property that defines its orientation. The electron’s spin can be readily rotated by magnetic fields, with its magnetic moment serving as a sort of handle. The goal of these tabletop experiments is to try to rotate the spin using electric fields instead, with the EDM as an electric handle.

“If the electron’s perfectly spherical, it’s got no handles to grab onto to exert a torque,” said Amar Vutha, a physicist at the University of Toronto. But if there’s a sizable EDM, the electric field will use it to tug on the electron’s spin.
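The size of the effect these experiments chase can be put in rough numbers. In a minimal back-of-the-envelope sketch (the EDM value and the effective electric field below are illustrative assumptions, not figures from the new measurement), a dipole moment d sitting in a field E splits the spin states by an energy 2dE, which appears as a tiny shift in the spin's precession frequency:

```python
# Back-of-the-envelope sketch of the EDM "handle": an electric dipole
# moment d in an electric field E shifts the electron's spin-precession
# frequency by roughly f = 2 * d * E / h.
# The numbers below are illustrative assumptions, not values from the study.

H_PLANCK = 6.626e-34      # Planck constant, J*s
E_CHARGE = 1.602e-19      # elementary charge, C

d_edm_e_cm = 4.1e-30      # an EDM near current sensitivity, in e*cm (assumed)
e_field_v_per_m = 2.3e12  # effective intramolecular field, ~23 GV/cm (assumed)

d_si = d_edm_e_cm * E_CHARGE * 1e-2          # convert e*cm -> C*m
freq_shift_hz = 2 * d_si * e_field_v_per_m / H_PLANCK

print(f"{freq_shift_hz:.1e} Hz")  # prints 4.6e-05 Hz
```

Even with the enormous effective fields inside heavy molecules, the signal is a frequency shift of only tens of microhertz, which hints at why these measurements demand such extreme precision.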

The dream is that these EDM experiments will be the first to detect signs of new physics.

In 2011, researchers at Imperial College London showed that they could amplify this handle effect by anchoring the electron to a heavy molecule. Since then, two main teams have been leapfrogging one another every few years with increasingly precise measurements.

One experiment, now at Northwestern University, goes by the name of Advanced Cold Molecule Electron EDM, or ACME (a backronym inspired by the old Road Runner cartoons). Another is based at the University of Colorado’s JILA institute. The competing teams’ measurements have jumped in sensitivity by a factor of 200 in the last decade—still with no EDM to be seen.

“It is sort of a race, except we have no idea where the finish line is, or whether there is a finish line, even,” said David DeMille, a physicist at the University of Chicago and one of the leaders of the ACME group.

A Race to the Unknown

To keep trekking ahead, researchers want two things: more measurements and a longer measurement time. The two teams take opposite approaches.

The ACME group, which set the previous record in 2018, prioritizes quantity of measurements. They shoot a beam of neutral molecules across the lab, probing tens of millions of them every second, but only for a few milliseconds each. The JILA group measures fewer molecules, but for longer: They trap a few hundred molecules at a time, then measure them for up to three seconds.

The ion-trapping technique, first developed by Eric Cornell, a physicist at the University of Colorado, Boulder who directs the JILA group, was “a big conceptual breakthrough,” DeMille said. “Many people in the field thought this was nuts. Seeing it come to fruition is really exciting.”

Having two distinct experimental setups that can cross-check one another is “absolutely crucial,” Budker said. “I don’t have words to express my admiration of this cleverness and persistence. It’s just the best science there is.”

Cornell’s technique was first showcased in 2017 with hafnium fluoride molecules. Since then, technical improvements have allowed the group to surpass ACME’s record by a factor of 2.4, as described in a recent preprint led by Cornell’s former graduate student Tanya Roussy. The team declined to comment while their paper is under review at Science.

Probing the electron’s roundness with increased precision equates to looking for new physics at higher energy scales, or looking for signs of heavier particles. This new bound is sensitive to energies above roughly 10¹³ electron-volts—more than an order of magnitude beyond what the LHC can currently test. A few decades ago, most theorists expected that hints of new particles would be discovered significantly below this scale. Each time the bar rises, some ideas are discredited.

“We have to keep wrestling with what these limits imply,” Ramsey-Musolf said. “Nothing’s killed yet, but it’s turning up the heat.”

Meanwhile, the electron EDM community forges ahead. In future experimental iterations, the dueling groups aim to meet somewhere in the middle: The JILA team plans to make a beam full of ions to increase their count, and the ACME team wants to extend the length of their beam to increase their measurement time. Vutha is even working on “some totally crazy” approaches, like freezing molecules in blocks of ice, in the hope of jumping several orders of magnitude in sensitivity.

The dream is that these EDM experiments will be the first to detect signs of new physics, prompting a wave of follow-up investigations from other precision measurement experiments and larger particle colliders.

The shape of the electron is “something that teaches us about totally new and different pieces of the fundamental laws of nature,” Graham said. “There’s a huge discovery waiting to happen. I’m optimistic that we’ll get there.”

This article was originally published on Quanta Magazine’s Abstractions blog.

Lead image: If an electron were the size of Earth, the experiment could detect a bump the size of a sugar molecule. Credit: Kristina Armitage/Quanta Magazine.

What the Webb Telescope Really Showed Us About the Cosmos’ Beginning
Thu, 13 Apr 2023 22:25:46 +0000
And how the family business first took me there.

The post What the Webb Telescope Really Showed Us About the Cosmos’ Beginning appeared first on Nautilus.

On my 10th birthday, I convinced a flock of cousins to travel to the end of the universe with me. I had my reasons. Cosmology was a family affair. My father, Solomon Zeldovich, was working on detecting gravitational waves long before the proper detecting equipment even existed. His uncle, Russian-Jewish scientist Yakov Zeldovich, was one of the leading physicists and cosmologists who contributed to the Big Bang theory. Growing up, I learned to stay away from black holes before I learned to cross the street. Other kids’ bedtime stories featured gnomes and fairies, but mine revolved around collapsing neutron stars, supernovas, and fusion reactions inside our sun. “Once upon a time almost 14 billion years ago there was a big boom that created our universe,” my father had told me. “And ever since this Big Bang, the universe keeps expanding, so fast that even the rockets we sent to the moon can’t catch up with it. But if you got to that constantly expanding end, you’d see the universe as it was when it was first created.” That sounded like going back in time—I just had to get to the end of the universe.

My cousins were game. We boarded an overturned play table placed on my bed, huddled together between its four legs sticking up in the air and jumped up and down in unison to make the journey appropriately bumpy. Using the table’s loose leg as a throttle, we took off, propelled by our screams. Halfway into our journey, my father poked his head in and yelled at us to stop breaking the table. “We can’t stop,” I yelled back. “We’re at the end of the universe 14 billion years ago, but the black hole is sucking us in!” He growled something about having to fix the damn leg later and withdrew.

Perhaps we don’t fully understand the physics behind the formation of stars.

Little did I know that seeing the end of the universe would become possible a few decades later, thanks to the massive telescopes NASA had been launching—first the Hubble and then the more recent James Webb. The telescopes gather light from distant stars and galaxies, and because that light takes time to reach us, the telescopes are essentially staring back in time, explains Erica Nelson, an astrophysicist at the University of Colorado, Boulder. “If you’re looking at something that’s really far away, you’re looking at light that’s been traveling for a long time, so you’re seeing that object as it was in the past,” says Nelson, who studies the images that the Webb and Hubble gather. “So the telescopes act as time machines.” Nelson realized that fact when, as a 10-year-old, she wrote a report about Hubble. She was a more practical child than me—she decided to study cosmology rather than invent games about it.

Webb, in fact, is a particularly potent “time machine.” It can see very far back in time, to when galaxies were young, because it was built to optimally detect light in the infrared spectrum. “The light from the stars and galaxies located far away from us reaches us not as the visible light, but as the infrared,” explains Nancy Levenson, director of the Space Telescope Science Institute in Maryland, which operates the telescopes for NASA. “In astronomy we call this phenomenon redshifting.”

Visible light has a shorter wavelength, while infrared has a longer one. As light travels through our still-expanding universe, its wavelength gets stretched out, shifting from the shorter, visible range to the longer infrared. Chewing gum that kids pull apart with their fingers makes a good visual metaphor: The gum stretches out, and so does the light—kind of.
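The stretch can be put in numbers with a one-line relation: the observed wavelength equals the emitted wavelength times (1 + z), where z is the redshift. A minimal sketch, with an illustrative redshift value rather than one measured for any particular galaxy:

```python
# Cosmological redshift in one line: light emitted at a given wavelength
# arrives stretched by a factor of (1 + z). Values below are illustrative.

def observed_wavelength_nm(emitted_nm: float, z: float) -> float:
    """Wavelength an observer measures for light emitted at `emitted_nm`
    by a source at redshift z."""
    return emitted_nm * (1.0 + z)

# Green light (500 nm) from a source at redshift z = 7 (roughly the
# early-universe era discussed here) arrives at 4000 nm, well into the
# infrared, which is why Webb's detectors can catch what Hubble's cannot.
print(observed_wavelength_nm(500.0, 7.0))  # prints 4000.0
```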

The Webb—built with 18 mirror segments, gold-plated to maximally reflect and gather infrared light—might be able to see all the way to the beginning of time, to the edge of the universe I had once tried to reach on my overturned play table. Only months after its launch, the Webb not only gathered striking infrared images but also caused a few snags in the fabric of the Big Bang theory, questioning some facts we thought were well established, such as the timelines of when certain things happened.

The snags started in summer of 2022, when Nelson peered into the first images that arrived from the Webb—and saw some striking red spots. The spots never appeared on the Hubble images because they were clusters of infrared light, which Hubble isn’t as good at detecting. “The first thing I noticed was that these bright red objects were present in the James Webb image, but not in the Hubble,” she says. She looped her colleagues in. “We knew immediately that it was a big deal.”

“We’re going to find out that we were wrong about a number of things. That’s exciting!”

After digging deeper, the team deemed the glowing spots to be six huge, previously unknown galaxies from a very early cosmic time, about 500 to 700 million years after the Big Bang occurred. But according to the current Big Bang theory, which includes estimates of how long it takes to form objects of a certain mass, this isn’t enough time to create six massive galaxies with millions of stars. “The reason why it flies in the face of our current understanding of cosmology is because our cosmological theory predicts that it should take much longer for galaxies of this mass to be able to form,” Nelson says. “We think that it should take about 2 billion years, at least, to form these large galaxies with so many stars.” Her team published their groundbreaking discoveries in Nature.

Does this mean that the Webb just threw a wrench into the Big Bang theory? It remains to be seen, scientists say. A new study published in Nature Astronomy explores the idea further. For starters, researchers must confirm that the six glowing objects are indeed galaxies—something they will keep working on. Unlike certain scientific fields where things can be physically touched and measured, astronomy is driven by theoretical models of how far away or how big objects are, and how they function. A more detailed analysis of more pictures, and more calculations, may reveal that the galaxies are smaller or closer, and thus aren’t as old. But if their observations are correct, the current cosmological models may indeed need some tweaking. Perhaps early on the galaxies could form super-fast because the universe was denser before it expanded. Or perhaps we don’t fully understand the physics behind the formation of stars.

Whichever way it goes, astronomers are enthused. “They are definitely seeing something interesting,” Levenson says of her colleagues. “So my reaction is that of cautious excitement.” But that’s the point, she notes. The reason we sent the huge, gold-plated, $10 billion science machine that took 30 years to build into the cosmic void is to learn what we don’t know yet. “We’re going to find out that we were wrong about a number of things,” Levenson says. “But that’s not bad. That’s exciting!”

I’m excited too. To my father’s dismay, I did not follow him into the vaunted field of physics. But I’ve been watching the developments from the sidelines and writing stories. I can’t wait to learn what Webb will see next in that faraway enigmatic place I once traveled to on an overturned play table with a crew of cousins in tow.

Lina Zeldovich grew up in a family of Russian scientists, listening to bedtime stories about volcanoes, black holes, and intrepid explorers. She has written for The New York Times, Scientific American, Reader’s Digest, and Audubon Magazine, among other publications, and won four awards for covering the science of poop. Her book, The Other Dark Matter: The Science and Business of Turning Waste into Wealth, was published in 2021 by Chicago Review Press. You can find her @LinaZeldovich.

Lead image: Dotted Yeti / Shutterstock

Animal Sex Determination Is Weirder Than You Think
Wed, 12 Apr 2023 22:09:20 +0000
Parasites, weather, and luck can play a role in determining whether some animals are male or female.

The post Animal Sex Determination Is Weirder Than You Think appeared first on Nautilus.

The once-unfathomable octopus has revealed some of its most intimate details to science—its brain, its genome, its secret cities. But scientists are still in the dark about a supremely foundational aspect of this animal’s existence: its sex. What causes an octopus to be female or male?

No one knows.

Octopuses, for starters, seem to be missing sex chromosomes in any form as we know them. In humans and many other animals, two X chromosomes make an egg-producing female, while one X and one Y make a sperm-producing male. (Biologists use the word “female” to describe organs or organisms that produce eggs, and “male” for those that produce sperm; animals do not have socially constructed genders.) Octopuses possess no such familiar, tidy determinants.

Sex determinants can mix and change, even within a single species or a single individual. 

Is this simply another example of octopuses being oddballs? Not at all. Across the animal kingdom, chromosomes are only one of more than a dozen ways that sex is determined, and scientists are continuing to find more, expanding the notion of how—and why and when—animals produce one sort of sex cell over the other.

The effort to understand these dynamics goes beyond mere curiosity. Unraveling these unexpectedly complex patterns is helping scientists sharpen their understanding of evolution itself, by illustrating how conflicts between genes or between parasites and hosts can lead to new traits.

This research also helps scientists peer into the future. It’s no exaggeration to say that animal life on Earth depends on eggs and sperm. (A few fascinating species can reproduce with eggs alone, but everyone else, from earthworms to elephants, requires both types of sex cell to build the next generation.) Climate change and pollution can seriously impact the sex ratios of many animals. Thus, understanding how and why sex determination happens could also help us safeguard the future of many species on this rapidly changing planet.

A WRINKLE IN GENES: An unassuming amphibian, the Japanese wrinkled frog (Glandirana rugosa), pictured above, has developed two entirely different chromosome patterns for sex determination and reproduction. Some frogs have XY chromosomes like we do, and others have ZW chromosomes. Photo by Alpsdake / Wikimedia Commons.

Eggs and sperm go way back, but, interestingly, not as far back as sex itself. That began perhaps as much as 2 billion years ago, when single-celled organisms, which had been reproducing by making copies of themselves, began swapping genes in order to create genetically novel offspring. Some eventually facilitated the exchange by dividing themselves into sex cells that could fuse with other similarly sized sex cells. Two, four, or more “mating types” determined compatibility between cells.

Today, single-celled life, such as amoebas, as well as most fungi, still do their sexual reproduction with equal-sized sex cells and mating types. However, the ancestors of animals—as well as those of plants and of several kinds of algae—all evolved sex cells of two distinct sizes: large eggs packed with resources and small speedy sperm. Scientists don’t know why this happened in some types of life and not others, but they do know that this system of reproduction has since become entrenched.

For animals to produce the essential sex cell types, however, they need some deciding factor, some switch, determining which individuals will make which kind of cell. And herein lies the puzzle. “If that division [of egg and sperm] is ancient, why would you ever change the mechanism underpinning which of those you make?” asks Judith Mank, a zoologist at the University of British Columbia who has studied sex determination for more than two decades. “That’s one of the mysteries of sex.”

Scientists don’t know what switch or signal the earliest animals used to determine whether they would produce eggs or sperm. But since then, the methods have proliferated into a dizzying array.

For example, the worm on a hook, the fish that bites it, and the human holding the rod all reproduce with eggs and sperm, but the three animals have three very different ways of determining what type of sex cell they’ll make. Earthworms are simultaneous hermaphrodites, producing both eggs and sperm in the same body. Fish, depending on species, could be sequential hermaphrodites (maturing first as one sex then switching to the other) or have separate sexes determined by genes, environment, or both. Humans, of course, have X and Y chromosomes (a system that’s distinct from our diverse gender identities).

Climate change could wreak havoc for species that rely on temperature to generate females or males.

Before the discovery of sex chromosomes, people had developed a plethora of creative ideas about sex determination, primarily in humans—and also in farm animals. Their theories tended to be environmental, including: the heat of the surroundings, the heat of the parents’ passion, the nutrition of the mother, and the quantity of semen from the father. Biologists now know that many of these factors can indeed affect sex determination … but just not in humans, sheep, or cows.

The assumption of environmental sex determination acquired some holes in 1845, when male bees were found to develop from unfertilized eggs and female bees from fertilized ones. And the presumption was fully overturned with the 1905 discovery of XY sex chromosomes (first in beetles, then in mammals and a number of other animals) and the 1909 finding of the less widely known ZW sex chromosomes (first found in aphids, then in birds and various reptiles and fish). In this system, ZZ animals develop as male, ZW as female.

Sex appeared to be purely, simply genetic—until 1966, when a French zoologist discovered that the rainbow agama lizard did, in fact, rely on an environmental factor to determine sex. A higher temperature produced male rainbow agamas and a lower temperature, females. (The passion of the reptilian parents was not scrutinized.)

Things only got more complicated from there.

Sex determinants, scientists learned, can mix and change, even within a single species or a single individual. Such changes may occur as populations adapt to different environments, and they can help tease apart complex evolutionary histories.

The Japanese wrinkled frog has XY chromosomes in both eastern and western Japan, but frogs living between these two populations demonstrate two different systems: one with ZW chromosomes and one with distinct XY chromosomes that aren’t shaped like those of the eastern or western frogs. In 2022, scientists discovered that the eastern frogs had become a subtly different species, and where they had hybridized with western frogs, the result was chromosome chaos.1

Beyond sex determination, the wrinkly case of these Japanese frogs reveals the striking ability of evolution to flip traits from one state to another and back again.

ROLLING WITH THE FLOW: The humble roly poly (family Armadillidiidae), pictured above, has a surprisingly complicated sex history. The role that the presence—or absence—of a parasite plays in flipping these bugs to female shows that sex determination is far from an open-and-shut case. Photo by Mauro Rodrigues / Shutterstock.

Common pillbugs would seem to be a simpler example, with only ZW sex chromosomes to propagate new generations of roly-polies. But, like most other arthropods, they happen to be vulnerable to infection by a weird and wily parasitic bacterium called Wolbachia. Wolbachia is transmitted exclusively from mother to offspring, never via the father, so from Wolbachia’s point of view a male host is a dead end. It would prefer a host population dominated by females. Thus, Wolbachia has evolved the ability to feminize ZZ male pillbugs, creating egg-bearing ZZ individuals that can perpetuate its lineage.

This turns out to be just the beginning of the pillbug’s curious sex history.

In some populations, the feminizing influence of Wolbachia has been so powerful that, over millions of years, these pillbugs lost their ZW sex chromosomes altogether. Their sexes came to be determined entirely by a parasite: Infected individuals were female, uninfected male. That was peculiar enough. Then in the 1980s, scientists observed pillbugs from these populations that were clearly female—but had no Wolbachia infection.

The researchers turned to a relatively new idea at the time, horizontal gene transfer, and hypothesized that feminizing genes from Wolbachia had been incorporated into the pillbugs’ own genome. “It was a visionary hypothesis,” says biologist Richard Cordaux of the Centre National de la Recherche Scientifique and the University of Poitiers, who proved it with modern genetic sequencing in 2016. When he shared the news with the sole surviving member of the original scientific team, “He was very happy and amazed,” Cordaux recalls.

Subsequent research has uncovered sites throughout Europe and Japan where a pillbug’s sex might be determined by any one of these three factors: chromosomes, Wolbachia infection, or transferred genes. “If you have a female from such a population it becomes very difficult to tell just by looking at them,” says Cordaux—an example of the hidden, wondrous diversity right under our noses (or under the stones in our garden).

Another case of garden-variety sexual variance was discovered last year in the tiny aquatic bladder snails that are abundant in ponds and hobby aquariums. They’re hermaphrodites, able to have their eggs and fertilize them too. But even though each individual can produce both eggs and sperm, some of its genes are passed on only through eggs. Because it is to these genes’ advantage for snails to produce only eggs, they’ve acted, like Wolbachia themselves, as a feminizing influence. A subset of snails found recently in Lyon, France, have lost their male functionality and are now all effectively female.2 They live side by side with full hermaphrodites, successfully interbreeding. This curious situation, in which conflicts between genes cause male sterility, was previously known only in plants.

In addition to sex “overrides” created inside an organism, whether by genes or bacteria, these forces can come from the environment. Endocrine disrupting chemicals such as pesticides and BPA are now found all over the planet, and studies show they can tweak sex in either direction, biasing sex ratios toward male or female depending on the species.3 (Some also cause other reproductive issues, like reduced sperm or egg production.)

At the same time, climate change is cranking up the heat, which could wreak havoc for species that rely on temperature to, at least in part, generate females or males. Within temperature-dependent reptiles, some make “hot females,” some make “hot males,” and still others make females at both hot and cold extremes, with “lukewarm males” developing in the middle. Rising temperatures around the globe threaten to unbalance their sex ratios, or even erase one sex altogether.
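Those three patterns amount to simple temperature thresholds. A toy sketch, with made-up pivotal temperatures (real values vary by species and are not taken from this article):

```python
# Toy model of the three temperature-dependent sex-determination (TSD)
# patterns described above. The pivotal temperatures are placeholders,
# not measured values for any real species.

def sex_from_temperature(temp_c: float, pattern: str) -> str:
    if pattern == "hot_female":        # warmer nests yield females
        return "female" if temp_c > 29.0 else "male"
    if pattern == "hot_male":          # warmer nests yield males
        return "male" if temp_c > 29.0 else "female"
    if pattern == "female_extremes":   # females at both extremes,
        # with "lukewarm males" developing in the middle band
        return "male" if 27.0 <= temp_c <= 31.0 else "female"
    raise ValueError(f"unknown pattern: {pattern}")

print(sex_from_temperature(32.0, "hot_female"))      # prints female
print(sex_from_temperature(25.0, "female_extremes")) # prints female
```

A sustained warming trend simply pushes more nests past one side of the threshold, which is how rising temperatures can skew or even erase a sex.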

“It is surprisingly easy to fashion a new sex-determining gene.”

Some reptile species have a limited ability to shift the pivotal temperature at which development switches between sexes.4 “We found it in painted turtles,” says Lisa Schwanz, an evolutionary ecologist at the University of New South Wales. “In a warmer year, it took a warmer temperature to make females.” However, this adjustment was too small to compensate for an environment-skewed sex ratio. “I suspect it won’t be strong enough to counteract climate change,” she says.

Rising temperatures seem to also be pushing some species into new sex determination mechanisms. Bearded dragons, for example, have a fairly standard ZW sex chromosome system. Extreme heat, however, causes ZZ individuals that would otherwise be male to develop into egg-bearing females. This phenomenon was first discovered in captivity, but just last year, researchers described sex-reversed wild females they had discovered in the warmest part of the bearded dragon’s natural Australian habitat.

“Do they have an advantage?” wonders Schwanz, who is one of the paper’s authors. “What sex do they act like?” In the laboratory, ZZ females explore more like males, but in the wild their movement is indistinguishable from that of ZW females. They don’t seem to be any more successful than their ZW sisters, but scientists have yet to determine what their role is in wild populations, which may already be shifting as the local temperatures rise. And it’s unclear how many other animal species might harbor this hidden capability, waiting to be let loose in a rapidly changing environment.

These accelerating shifts add a new urgency to research into animals’ ever-evolving symphony of sex strategies. Many animals seem ready to swap strategies at the drop of a hat, while mammals and birds have kept their staid sex chromosomes for hundreds of millions of years. And each new discovery adds to the proliferating questions about fluid and seemingly stable systems.

In a new paper published earlier this year, researchers uncovered surprising similarities in evolutionary trajectories between the divergent sex determination systems of the populations of Japanese wrinkled frogs. Another endemic Japanese species—this one a mammal—the Amami spiny rat, also recently revealed some surprises. These rodents are some of the extremely rare mammals who have lost their Y chromosome, and scientists finally figured out the genetic mutation that allows them to determine their sexes without it: a gene whose expression is toggled up or down.

It’s a long-standing question whether the tiny human Y chromosome is on its way to evolutionary obsolescence. The authors of a commentary on the spiny rat study, published in late 2022, offer this consolation: “It is surprisingly easy to fashion a new sex-determining gene. It should give hope to those that are distressed by a possible shriveling away of the human Y.”5

As the picture of sex determination—once thought simple and fixed—cracks into fractal-like complexity, scientists uncover more of these unseen dynamics, which underpin the future of animal life on Earth. But even if biologists one day understand what creates sex in all the world’s creatures, they now know even that knowledge will be transient.

The oddities of sex will continue to be fruitful and multiply. Life, it seems, would have it no other way.

Danna Staaf is a science writer and author of the upcoming book Nursery Earth: The Wondrous Lives of Baby Animals and the Extraordinary Ways They Shape Our World. She lives in San Jose, California, with her family and a frankly immoderate number of plush octopuses.

Lead image: Dudley Simpson / Shutterstock


1. Shimada, T., et al. Genetic and morphological variation analysis of Glandirana rugosa with description of a new species (Anura, Ranidae). Zootaxa 5174, 025-045 (2022).

2. David, P. et al. Extreme mitochondrial DNA divergence underlies genetic conflict over sex determination. Current Biology 32, 2325-2333 (2022).

3. Marlatt, V. L., et al. Impacts of endocrine disrupting chemicals on reproduction in wildlife and humans. Environmental Research 208, 112584 (2022).

4. Schwanz, L.E. & Georges, A. Sexual development and the environment: Conclusions from 40 years of theory. Sexual Development 15, 7-22 (2021).

5. Schartl, M. & Lamatsch, D.K. How to manage without a Y chromosome. Proceedings of the National Academy of Sciences 120, e2218839120 (2023).

Further Reading

Furman, B.L.S., et al. Sex chromosome evolution: So many exceptions to the rules. Genome Biology and Evolution 12, 750-763 (2020).

Kobayashi, Y., Nagahama, Y., & Nakamura, M. Diversity and plasticity of sex determination and differentiation in fishes. Sexual Development 7, 115-125 (2013).

Picard, M.A.L., Vicoso, B., Bertrand, S., & Escriva, H. Diversity of modes of reproduction and sex determination systems in invertebrates, and the putative contribution of genetic conflict. Genes 12, 1136 (2021).

Schwanz, L.E., Georges, A., Holleley, C.E., & Sarre, S.D. Climate change, sex reversal and lability of sex‐determining systems. Journal of Evolutionary Biology 33, 270-281 (2020).

The post Animal Sex Determination Is Weirder Than You Think appeared first on Nautilus.

Searching for the River of Wind Tue, 11 Apr 2023 20:34:22 +0000 The jet stream is one of Earth’s defining features—but it wasn’t easy to find.

The post Searching for the River of Wind appeared first on Nautilus.

In August of 1947, the Stardust fired up its engines in Buenos Aires for an afternoon flight to Santiago. The scene resembled something from a Graham Greene novel: A hulking piston-engined airliner thundering aloft in an exotic austral city, while a small and mysterious cadre of passengers adjusted their seatbelts in the cabin. Among them were a German widow with her husband’s ashes, a Palestinian carrying a hidden diamond, and a British Foreign Service courier—a “King’s messenger”—on some opaque mission for the crown. The air route from Buenos Aires to Santiago runs west for about 700 miles, crossing the South American coastal plain before hopping over the Andes to the Chilean capital. In terms of geography, a trip from Denver to San Francisco would be a rough northern analog. The journey of the Stardust passed that day under seemingly routine circumstances, and about four hours after takeoff the crew radioed air traffic control to report their imminent arrival at Santiago. That was the last anyone saw or heard of them until 1998, when a group of climbers came upon the plane’s wreckage melting slowly out of an Andean glacier, some 50 miles east of her destination.

Jet streams are often portrayed as smooth roads, but they’re really more like braided rivers.

By then, the last flight of the Stardust had taken its place among the great unsolved mysteries of air transport, the gamut of possible explanations running from Nazi sabotage to alien abduction. Using aviation science and a bit of forensic meteorology, the long-delayed investigation reached a more scientific conclusion: Stardust, flying above the clouds at an altitude of 24,000 feet, had plowed into the teeth of an unanticipated headwind—a jet stream—and as a result badly overestimated her progress. In all likelihood, the flight crew had made a controlled descent into the ground, thinking their ship was well clear of the mountains when in fact she still had miles to go.

Jet streams are high-altitude westerly winds, created by the sharp variations in density that exist in the atmosphere at different latitudes. They concentrate in narrow bands at the boundaries of the main global air masses and can blow hard enough to slow even 21st-century air travel. The most pronounced streams occur at the polar fronts, where temperate air in each hemisphere meets colder air from the Arctic and Antarctic regions. They are often portrayed as smoothly contiguous features, regular as roads, but the actual phenomena are really more like braided rivers or dotted lines—composites of multiple meandering subparts, adding together to produce movement.

At the time of the Stardust casualty, an understanding of high-altitude winds was just starting to gel after several decades of observation by pilots and meteorologists. The first scholarly account of the topic is credited to an obscure researcher named Wasaburo Oishi, who worked alone at a lab north of Tokyo in the early 1920s. Oishi launched thousands of paper balloons and followed them through the eyepiece of his theodolite as the wind carried them up and away. From measurements of range and elevation he concluded that there were steady currents of westerly wind blowing briskly across the sky, high above Japan. They were most prevalent in winter, when they could be very brisk indeed—upward of 100 knots.

A modern analysis of Oishi’s data has confirmed his findings, but at the time they were largely ignored by Western academia. He was a distant figure, far from the epicenters of contemporary science, and published his work in the suspect Utopian language of Esperanto. Nobody in Europe or America paid him any mind. The first direct application of Oishi’s findings came in a bizarre campaign to attack the United States with an armada of incendiary balloons launched from Japan during World War II. Ten thousand such devices were built in secret workshops and set adrift, equipped with ballasting mechanisms to keep them at the proper altitude along the way. At least 300 of these strange weapons landed in North America during the final years of the war. One managed to kill six people when they stumbled across it during a church junket in the Cascade Mountains. Given that many likely fell unseen into backcountry, it’s estimated that 1,000 of the drifting drones, 10 percent of the total, probably made it across the Pacific on Oishi’s winds.

The jet stream wasn’t given its actual name until 1939, when a German scientist named Heinrich Seilkopf coined the term in a textbook—but by then, there was an established awareness that something was happening up there. As part of his work in the mid-19th century, the American mathematician William Ferrel derived a set of equations that predicted the presence of strong winds aloft, based on data from surface measurements. Soon afterward in France, an aerologist named Léon Teisserenc de Bort was able to verify some of Ferrel’s theories using kites and balloons. By collecting temperature readings at altitude, de Bort also managed to establish the existence of the tropopause, a stable layer of minimum temperature about 8 miles above the ground. He got himself into brief but serious trouble one day when a bunch of his kites, strung together on miles of piano wire, fell down in a snarl over Paris.

These shorter pulses of energy are the day-to-day triggers for surface weather. 

Wiley Post was a pioneering aviator of the early 20th century, just the sort of visionary nut-job that Americans love to anoint as their heroes. Post missed the chance to fight as a pilot in World War I, but after brief diversions into oilfield work and car theft he found his way back into the sky. In the barnstorming ’20s he rose quickly to fame as an air racer and stunt pilot. He wore an eye patch and had spent time in jail. He also had a natural flair for engineering despite minimal schooling and correctly saw the future of aviation in the high-speed, high-altitude transport of mail and passengers. He was the first pilot to complete a solo flight around the world, and perhaps the first to wear a pressure suit. The latter made Wiley look like the Michelin man in a diving helmet, but it permitted him to fly as high as 50,000 feet during a series of transcontinental flight attempts in 1935. While none of these efforts ultimately succeeded, his instrument data led him to note that there were strong belts of westerly wind in parts of the upper troposphere. For this, Post is sometimes credited with discovering the jet stream, but it’s fairer to say that he encountered it.

World War II pilots also encountered the jet stream, often unexpectedly. Much of aviation meteorology was at the time biased toward forecasting cloud cover (you can’t bomb what you can’t see) and while the concept of high-velocity wind aloft was not by then completely new, it was often absent from mission briefings. Jet streams are zonal, meaning that they flow mostly in a west-to-east direction. Allied aircraft returning from raids over Europe sometimes met extreme headwinds and were forced to ditch in the English Channel when their fuel ran low. American B-29s on the way to bomb Japan bucked winds of up to 140 knots. The streams could often be avoided with small alterations in course and altitude, but the navigational constraints of flying in formations made this difficult to do in practice.

The doomed Stardust in fact vanished in the same year that the first comprehensive study of the jet stream finally went to press, published by a group in Chicago under the leadership of Carl-Gustaf Rossby. Rossby was a fascinating character, a central figure in the advancement of meteorology during the middle of the 20th century. A brilliantly effective teacher and organizer—and reportedly a bit of a bon vivant—he loved long lively restaurant dinners and was apparently never comfortable driving his own car. The Chicago group’s first article, “On the General Circulation of the Atmosphere in Middle Latitudes,” nailed a theoretical framework under the long database of high-altitude wind measurements. In the spirit of Rossby’s collaborative approach, the authorship of this landmark piece was credited simply to “The Staff Members of the Department of Meteorology of the University of Chicago,” without the hierarchical listing of authors that is so common to scientific papers. Before diving into the facts, the document appealed for better communication among players in the new field of atmospheric studies, with a charming bit of old-world diplomacy:

There exists at present … a noticeable divergence of opinion with regard to the proper interpretation of several of the basic processes in the atmosphere. Because of this divergence … an effort was made to bring together research workers representing widely different points of view. Until a genuinely efficient method of data distribution to interested agencies has been set up, it is reasonably certain that Government funds invested in research institutions outside Washington will fail to yield maximum returns.

Given that the second half of this statement could have been written in 2020 as easily as 1947, it’s fair to say that Rossby’s theories about the atmosphere have gained more traction than his advice on streamlining government-funded science.

The troposphere—where most weather happens—could be imagined as a big room whose ceiling slopes downhill from the equator toward the poles, getting closer to the surface as the atmosphere underneath it grows colder and denser. This ceiling is the tropopause. It is not a smooth slope, but more like a series of gently tilted plateaus separated by cliffs. The steepest of these cliffs is at the abrupt thermal boundary between the cold polar zones and the much milder temperate regions. A second break sometimes shows up closer to the equator, at the border between the temperate regions and the tropics. In both cases, the sudden temperature change produces a steep difference in pressure aloft. Air flows across the tilting tropopause, and in the steep places it accelerates. The Coriolis effect deflects it eastward, creating a west-to-east channel of wind that is concentrated along the sharpest transitional zones of atmospheric temperature.
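The balance sketched above—a sharp horizontal pressure difference deflected eastward by the Coriolis effect—is the textbook geostrophic-wind relation, and it can be put into numbers. The values below (a 10-hectopascal pressure contrast across a 400-kilometer frontal zone, thin air at jet-stream altitude, midlatitudes) are illustrative assumptions rather than measurements, but they land squarely in the 100-knot range the wartime pilots reported.

```python
import math

def coriolis_parameter(lat_deg, omega=7.2921e-5):
    """f = 2 * Omega * sin(latitude), with Earth's rotation rate Omega in rad/s."""
    return 2 * omega * math.sin(math.radians(lat_deg))

def geostrophic_wind(dp_dy, rho, lat_deg):
    """Geostrophic wind speed in m/s from a horizontal pressure gradient dp_dy
    (Pa per meter) and air density rho (kg per cubic meter)."""
    return abs(dp_dy) / (rho * coriolis_parameter(lat_deg))

# Illustrative assumptions: ~10 hPa of pressure difference across a 400 km
# frontal zone, air density ~0.4 kg/m^3 near the tropopause, latitude 45 N.
dp_dy = 1000.0 / 400_000.0  # pascals per meter
u = geostrophic_wind(dp_dy, rho=0.4, lat_deg=45.0)
print(f"{u:.0f} m/s (~{u * 1.944:.0f} knots)")
```

With these numbers the wind comes out near 60 meters per second, roughly 120 knots; relax the pressure contrast or move to denser air lower in the atmosphere, and the speed falls off accordingly, which is why the strongest winds concentrate along the steepest thermal boundaries.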

WINDS OF CHANGE: Though they’re a defining feature of Earth’s weather systems, jet streams were not scientifically described until well into the 20th century. Image from Reading the Glass.

The Chicago paper focused mostly on the polar jet stream system, seen as the most powerful and influential. Rossby and his cohort described it as a distinct, surprisingly narrow belt of westerlies: “a meandering river winding its way eastward through relatively stagnant air masses to the north and south.” They got especially interested in a shapely pattern of waves that formed curves in the circulation. Known typically as long waves, these meanders are now understood to be a catalyst for cyclones and a major mechanism for moving heat between latitudes. Rossby derived an equation to describe these features, and they now bear his name.

When forecasters discuss the location of the jet stream, they are generally talking about the Rossby waves—which in their positioning are a primary determinant of weather on the ground. With their long meanders, the Rossby waves push cold air troughs into the temperate regions, while wave peaks—or ridges—drive wedges of warm air back toward the poles. Rossby waves travel eastward very slowly, but within the large-scale flow are smaller, faster moving short waves, which my meteorologist friend Joe Sienkiewicz, a branch chief at NOAA’s Ocean Prediction Center, compares to ripples riding on an ocean swell. These shorter pulses of energy are the day-to-day triggers for surface weather. Ripples of cold air pushing toward the equator collide with warm air, raising it aloft and spawning cyclones by the process that the Norwegians called frontal lifting in their models. Surface storms are thus the eddies bordering a more powerful river of wind aloft, following a track more or less in step with the air streaming by above them. They are transient features, but if the long wave patterns behind them persist, they can happen again and again.

In the winter of 2015, a great cold trough parked itself like a frigid cow’s tongue over North America, hatching blizzards one after another as approaching pulses of warm southern air bloomed into cyclones. The cow’s tongue moved on eventually, but not until most of the country had logged record amounts of snow. A shorter but equally brutal batch of cold air arrived in January of 2019, when Chicago was briefly the coldest place on Earth. Temperatures fell to near -50 degrees Fahrenheit and media outlets spoke excitedly of an assault by a polar vortex—a term borrowed in recent years to villainize especially brutal winter events over North America. In fact, the real polar vortex is a bitterly cold blob of air that resides high in the stratosphere each winter, typically ignored by the populace until like some mad bull it breaks free from its enclosure to wreak havoc in the towns. The Chicago freeze of 2019 actually began when warm air from Asia found its way up into the stratosphere and forced the normally stolid polar vortex to dissociate into lobes—one of which wobbled down over North America, taking a deep cold meander of the polar jet stream along with it. Called arctic outbreaks, these displacements appear to have become more common in the last half-century, for reasons that researchers are still working to resolve.

From Reading the Glass by Elliot Rappaport, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2023 by Elliot Rappaport.

Lead image: The Blue Marble data is courtesy of Reto Stockli (NASA/GSFC).


Why Is Sea Level Rise Worse In Some Places? Mon, 10 Apr 2023 23:28:30 +0000 One question for Sönke Dangendorf, a coastal flooding researcher at Tulane University.

The post Why Is Sea Level Rise Worse In Some Places? appeared first on Nautilus.

One question for Sönke Dangendorf, who studies sea levels, tides, storm surges, and coastal flooding at Tulane University’s School of Science and Engineering. 

Photo courtesy of Sönke Dangendorf

Why is sea level rise worse in some places?

There is sometimes a misconception that the ocean behaves like a bathtub, but that’s not true.

 If we look at the global picture, there are two major reasons why sea levels are rising. One is that the ocean is warming, and it needs more space and expands. And the other one is that ice sheets and glaciers are melting, and they put more mass into the ocean. 

But if we look locally, there are a lot of factors that lead to regionally varying sea level rise. 

We have changes in ocean circulation, and the same is true for winds. Imagine, in the simplest form, winds over the ocean push water masses from one side to the other. That can lead to changes in sea level over several years to decades. The same is true for ocean currents because winds affect ocean currents. 

It’s not only the ocean that is rising, but it’s also the land that is sinking.

If we put water into the ocean by ice sheets or glaciers that are melting, their gravitational field is also changed. Here, there are three things that happen: You put mass into the ocean, so sea levels rise globally on average. But then, the ice sheet is a heavy body of mass, and due to its mass, it attracts the water of the surrounding ocean, like the moon generates tides. So as that ice sheet melts you reduce that gravitational attraction, and the water migrates away from the ice sheet. And then at the same time, the weight of the ice sheet also becomes less, and it leads to uplift of the ground below. So it leads to sea level fall near the ice sheets but sea level rise in what we call far-field—like here in Louisiana, for instance. 

Then, lastly, in particular here in Louisiana, it’s not only the ocean that is rising, but also the land that is sinking. So we see subsidence, for instance, due to fluid withdrawal—water withdrawal for crops, and oil and gas withdrawal—which has led to sinking land in some areas.

In our study, we saw sea level rise rates in excess of a third of an inch per year from Cape Hatteras, North Carolina, to the Gulf of Mexico. These high rates of sea level rise have already had profound impacts on our coasts in the region. For instance, in the Gulf of Mexico, in this period, we have seen a doubling of what we call high-tide flooding events. These “sunny-day floodings” due to sea level rise mean streets and sometimes properties get flooded and lead to considerable damage. On top of that, in Louisiana in particular, we have seen local subsidence at much larger rates—Louisiana has been losing land since the 1930s, an area equal to the entire state of Delaware. This sea level rise has also coincided with record-breaking hurricane seasons. And because these hurricanes build up on a higher base level, they can become more destructive with the underlying sea level rise.

We will continue to see what we have seen in the past, that there have been different kinds of hotspots of sea level change, and they can shift from time to time. They will continue to accelerate in the future, and that means we will see these kinds of hotspots more often.

Lead image: MainlanderNZ / Shutterstock


Sugar Pill Nation Mon, 10 Apr 2023 22:56:20 +0000 Even when we know they’re “fake,” placebos can tame our emotional distress.

The post Sugar Pill Nation appeared first on Nautilus.

For Macbeth, it was the ghost of his friend Banquo, sitting in a chair at the dinner table. In Edgar Allan Poe’s The Tell-Tale Heart, it was a disembodied thump-thump beneath the floorboards.

Guilt: It’s the emotion that arises when we know we’ve done another wrong. It’s an intrusive guest that can strangle us with regret and unravel the psyche. It feels larger than the body—which is perhaps why in fiction, it’s so often externalized to phantoms. Guilt tends to haunt people. It’s hard to expunge because the wrong deeds can’t be undone.

“As long as it’s plausible, it works.”

And yet researchers recently used a pill to reduce guilt in healthy human research subjects. In a study published in Scientific Reports at the end of last year, more than 100 people sat down for a “guilt induction.”1 They intentionally generated feelings of guilt by writing down a time they hurt someone they cared about. The researchers instructed participants to choose events that still made them feel bad when excavated from memory. Subjects later underwent a “guilt boost” where they were asked to close their eyes and dwell on the incident.

The purpose of the exercise was not to make study subjects feel bad, but to see if a pill could ease those bad feelings. The twist: The pill was a form of deception. It contained only lactose, sucrose, and glucose; it was a placebo. In the end, the study subjects’ feelings of guilt were significantly reduced after taking the pill—the pain of old hurts softened, the ghosts quelled.

The placebo effect is well documented. It’s a healing response to treatments that have no active ingredient, often delivered in a specific social and therapeutic context. Gold standard clinical trials have long sought to separate out placebo effects from effective treatments, precisely because the placebo effects are so real. But in the past couple of decades, researchers have sought to harness the power of the placebo effect as a treatment in its own right. Sham treatments, they have found, can alleviate a range of clinical conditions, particularly ones that have subjective and neurobiological components, such as fatigue, chronic pain, irritable bowel syndrome, and Parkinson’s disease.

Lately, the study of placebos has expanded to a more nebulous target: our emotions. They have been tested on a spectrum of distressing feelings, from guilt to anxiety, rumination, sadness, fear, and disgust.2 These emotions all play important roles in helping us process cues from our environment, learn from experience, and move through the world successfully, but they can also become unmanageable, and when too persistent or intense, may lead to psychiatric illness, including depression and post-traumatic stress disorder.

More than 25 studies now offer evidence that placebos can regulate mild and acute emotional pain, with medium to large effect sizes, in both healthy and clinical populations, a review from this year noted. Placebos have been shown to reduce sadness in clinically depressed subjects who were watching sad movie clips or remembering upsetting memories, reduce the fear of public speaking in people diagnosed with social phobia, and lessen the fear of being shocked. A placebo nasal spray was also able to help people going through a break up feel fewer negative emotions when they saw pictures of their exes.

I NEED A NEW DRUG: Studies show placebos activate natural healing and reward processes in the brain associated with dopamine, shown above, and endogenous opioids. These neurotransmitters play key roles in pain relief, responses to reward and stress, emotional regulation, and the feelings of pleasure we get from food and social interactions. Photo by bogdandimages / Shutterstock.

“I was obsessed with trying to help people regulate their emotions, but in a very easy way,” says Darwin Guevarra, a postdoctoral scholar at the University of California, San Francisco, and first author of the review. Emotions may arise in us easily, but they are hard to control, Guevarra says. It’s no simple task to “stop” feeling something on your own. Some of the strategies psychologists tend to recommend, such as cognitive reappraisal or mindfulness, can take a lot of practice and effort. 

What makes taking a placebo easier than calming yourself down or working through a bout of sadness or guilt on your own? This is how Guevarra thinks of it: Placebos outsource emotional regulation onto placebo objects, like the pills or sprays commonly used in studies.

“If you’re outsourcing something, presumably it’s easier than if you were engaging in some strategy on your own,” Guevarra said. One consistent finding is that placebos seem to require less mental effort, compared to other emotional regulation strategies. Studies show they don’t interfere with other cognitive processes, suggesting that the effect may happen automatically, beneath the level of consciousness. In one study, people given a placebo while engaged in a working memory task—recalling a series of letters presented to them visually—still reported a decrease in pain following exposure to painful heat.3

In mental healthcare settings, where we typically wrestle with our emotions, some treatments are derided as working only through placebo effects. Calling a medicine a placebo is typically meant to suggest it’s a sham—but as researchers turn this idea on its head, they have uncovered some common features of placebo effects associated with certain therapies. These tend to include expectations, learned associations, a patient-clinician relationship, and a healing setting. Expectations are typically elicited through verbal suggestion, while learned associations may entail automatic responses to a familiar context or procedure.

“I came to the conclusion that you cannot separate placebo and psychotherapy,” says Jens Gaab, a clinical psychologist at the University of Basel, and the senior author of the guilt study. In psychotherapy, the patient-clinician relationship and healing setting lead many patients to expect relief, so long as the therapy proposed is believable. “It’s a contextual understanding,” said Gaab. “As long as it’s plausible, it works.”

Healing effects can occur even when subjects know that they are getting a placebo.

In one experiment, Gaab and his colleagues set out to show that even a bogus ritual could have a therapeutic effect. The researchers had three groups of healthy people watch videos of moving green circles, some of which changed colors.4 In one placebo group, people were told by a friendly, trustworthy, and empathetic researcher that the videos had a physiological impact that activated “early conditioned emotional schemata through the color green.” In the other two groups, people were told either that the video was being used to pass time or were paired with a non-empathic researcher. Only the group that received both a convincing rationale and interacted with an empathic researcher showed improvements in self-reported mood and stress.

“It was a fake idea, a fake rationale behind it,” Gaab said. “And it worked, people loved it.” It worked as well as a group psychotherapy treatment Gaab and his colleagues used with study subjects a few years earlier.

Imaging and pharmacological studies suggest that placebos work because the expectation of relief hijacks natural healing and reward processes in the brain,5 leading to the release of endogenous opioids and dopamine.6 These neurotransmitters are heavily involved in analgesia, reward and stress responsiveness, emotional regulation, and hedonic responses to food and social interactions.

According to one of the most prolific researchers in the field of placebos, Harvard professor of medicine Ted Kaptchuk, the placebo effect may be best explained by a popular theory of consciousness called “predictive processing,” or the “Bayesian brain.”  

According to this way of thinking, the brain doesn’t just take in sensory signals from the body and the outside world and process them directly. Instead, minute-to-minute perception consists of a series of best guesses, or predictions, about the world, calibrated via a complex computation of values from sensory inputs, past experiences, and subtle contextual cues. These predictions are constantly updated as new information comes in, and they can be heavily influenced by expectations and associations. Our bodies may then begin to respond as if what we have predicted were already true.
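That blending of prediction and sensation is often written down as a simple Gaussian model: the percept is a weighted average of what the brain expects and what the senses report, with each weighted by how reliable (precise) it is. The sketch below is a generic textbook toy model of this “Bayesian brain” idea, not a calculation from any of the studies discussed here; the pain ratings and variances are hypothetical numbers chosen only to show the direction of the effect.

```python
def precision_weighted_percept(prior_mean, prior_var, sensory_mean, sensory_var):
    """Combine a prediction and a sensory signal as a precision-weighted average,
    the standard Gaussian posterior mean in predictive-processing accounts."""
    prior_precision = 1.0 / prior_var      # confidence in the expectation
    sensory_precision = 1.0 / sensory_var  # confidence in the raw signal
    return (prior_precision * prior_mean + sensory_precision * sensory_mean) / (
        prior_precision + sensory_precision
    )

# Hypothetical pain ratings on a 0-10 scale. The raw sensory signal alone would
# register 6. A placebo instills a confident expectation of mild pain (mean 2,
# low variance), so the percept is pulled well below the raw signal. With only
# a vague expectation (huge variance), the percept stays close to the signal.
with_placebo = precision_weighted_percept(2.0, 1.0, 6.0, 4.0)
no_expectation = precision_weighted_percept(5.0, 100.0, 6.0, 4.0)
print(round(with_placebo, 1), round(no_expectation, 1))
```

The point of the toy model is that nothing about the stimulus has to change: sharpening the expectation alone moves the experienced intensity, which is the predictive-processing gloss on why a confidently delivered placebo can blunt pain or distress.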

In one of the earliest writings about the placebo effect, a 1955 monograph in the Journal of the American Medical Association titled “The Powerful Placebo,” physician Henry Beecher wrote that patients needed to be in the dark about the fact that they got a placebo for it to work. He believed the duplicity was part of the magic. This view played a critical role in the application of placebos to medical research, where the most rigorous trial design is not just placebo-controlled but “double-blind”—the assumption being that even the clinician delivering a therapy could communicate subtle cues to a patient about whether it was real or fake that might influence its success.

But over the past decade, this fundamental assumption about how placebos work has been upended: Healing effects can occur even when subjects know that they are getting a placebo—that the treatment they are receiving has no active ingredients. These are called “honest” or open-label placebos, and the study of how and when they work has recently taken off.

In the guilt study, for instance, some people knew the pill was a sham, while others were told it contained a combination of herbs with psychoactive properties that could alleviate guilt. Shockingly, the pill worked to alleviate guilt whether they knew what was actually in it or not.

When and how much is it okay to lie to study subjects and patients?

“You would think this sounds ridiculous to tell people: You will get a pill or a natural spray, and there’s just nothing inside,” says Michael Schaefer, a professor of neuropsychology at Medical School Berlin. “But let’s see, maybe it will work for you, because some research has shown this. And indeed, people show effects.”

Researchers studying honest placebos want to understand not just how they work, but also get around an ethical quandary posed by deceptive ones: When and how much is it okay to lie to study subjects and patients?

In honest placebo studies, patients are typically educated about the placebo effect through a standard script—they learn that it can lead to healing outcomes in some contexts, that they work through expectation and previous conditioning, and that positive expectations can help but are not essential. In fact, the role of expectation in the success of open placebos is unclear. One study of placebos for irritable bowel syndrome from 2021 found that while high expectations led to better outcomes when deceptive placebos were used, low expectations were actually linked to greater symptom relief with honest placebos.7

Whether open placebos work as well as deceptive ones is still being explored. A review on open label placebos from 2021 found a significant overall effect on conditions like back pain, cancer-related fatigue, allergic rhinitis, irritable bowel syndrome, and menopausal hot flashes.8 For emotions, in addition to the guilt study, emerging evidence suggests that open label placebos may work. People who took honest placebo pills for five days experienced less emotional distress and said they felt better and slept better than study subjects who took nothing. Honest placebo pills also helped students manage test anxiety. And research published this year from Gaab and his colleagues showed even taking imaginary pills could reduce test anxiety.9

But open placebos don’t always have a therapeutic effect. A study from 2020 that aimed to harness the placebo effect to reduce sadness in women with major depression found that deceptive placebos worked significantly better.10 The deceptive placebo was better at lowering sadness levels following a “sadness-inducing” mood manipulation, but the open placebo was still able to prevent an increase in sadness after the mood manipulation, compared to the group that got no treatment. Some studies have also found that deceptive placebos lead to higher heat-pain tolerance than open ones, and the placebo effect disappears entirely when administered openly for motion-induced nausea.

The rules of open-label placebos still need to be clarified. How does the role of expectation change when you know you’re taking something that is really nothing? Shafir says she would predict that open-label placebos bear greater similarities to other forms of cognitive emotional regulation than to deceptive placebos. But no one has tried to compare the relative effects of open-label placebos versus an individual’s own efforts to regulate emotions.

“This is something that is really essential for understanding the mechanism,” she says.

When it comes to regulating our emotions, we might have greater success harnessing the power of the placebo effect if we break it up into its component parts—expectation and conditioning—rather than focusing on fake pills and sprays and creams. In a study published this February in Scientific Reports, Shafir and her colleagues set out to see if they could enhance study subjects’ efforts at emotional regulation using the power of suggestion combined with conditioning, a variation on the open placebo.11

Study participants were instructed to use distraction techniques (thinking about writing letters or drawing shapes) to lessen their experience of pain while being shocked. One group was told how helpful distraction could be at reducing pain and was conditioned to make that association in a first round of tests. Their first effort to use the distraction method was combined with a lower voltage of electric shock. The control group participants were told that the distraction technique was ineffective and received more intense shocks when they reported attempting it. Later, when people in both groups were shocked with equal intensity, the group conditioned to expect weaker shocks said that distracting themselves helped them feel less pain compared to the control group.

These new findings related to conditioning extend the way we look at placebos, Shafir says. “Because if we can use what we’ve learned from the placebo literature to enhance people’s internal control, that opens a whole new world.”

The combination of expectation and conditioning could not only help us maximize the placebo effect, it could also reframe our understanding of what a placebo is: It’s not just an external intervention, but something that happens within us. “People really like that,” Gaab says. “They have this sense of: It was me.”

Could we one day administer placebos to ourselves? There are some good reasons to think self-administered open-label placebos might not work as well; interaction with another person seems to be a crucial component. But there are cases in which it might work.

Guevarra offers an example from his own life. Several years ago, he began pairing his morning coffee with the application of essential oils to his wrists, creating an association between the effects of the caffeine and the smell. “Whenever I just activate the scent, it produces similar effects, just as caffeine would do,” Guevarra says.

Gaab thinks that as placebo studies progress, we’ll find that emotions are particularly sensitive to these effects. “Placebos work on suffering,” he says. “If people suffer, placebos can work.”

Other ethical conundrums may soon arise beyond whether it is okay to deceive someone into feeling better. For example, should the guilty be relieved of their guilt? Should the Lady Macbeths of the world not be wracked with remorse, condemned to smell the blood on their hands for all eternity?

Being able to regulate our emotions is adaptive, but our emotions are there for a reason. “Emotions are useful,” Guevarra says. “They are only problematic when they’re too long, too intense, when they’ve lost their usefulness. There are certain emotions that you should feel.” But maybe one day soon, it will be easier for any of us to dispatch the ones that have overstayed their welcome.  

Lead image: Panimoni / Shutterstock


1. Sezer, D., Locher, C., & Gaab, J. Deceptive and open-label placebo effects in experimentally induced guilt: A randomized controlled trial in healthy subjects. Scientific Reports 12, 21219 (2022).

2. Guevarra, D.A., Kross, E., & Moser, J.S. Harnessing placebo effects to regulate emotions. PsyArXiv (2022).

3. Buhle, J.T., Stevens, B.L., Friedman, J.J., & Wager, T.D. Distraction and placebo: Two separate routes to pain control. Psychological Science 23, 246-253 (2012).

4. Gaab, J., Kossowsky, J., Ehlert, U., & Locher, C. Effects and components of placebos with a psychological treatment rationale—three randomized control studies. Scientific Reports 9, 1421 (2019).

5. Büchel, C., Geuter, S., Sprenger, C., & Eippert, F. Placebo analgesia: A predictive coding perspective. Neuron 81, 1223-1239 (2014).

6. Peciña, M. & Zubieta, J.-K. Molecular mechanisms of placebo responses in humans. Molecular Psychiatry 20, 416-423 (2015).

7. Lembo, A., et al. Open-label placebo vs double-blind placebo for irritable bowel syndrome: A randomized clinical trial. Pain 162, 2428-2435 (2021).

8. von Wernsdorff, M., Loef, M., Tuschen-Caffier, B., & Schmidt, S. Effects of open-label placebos in clinical trials: A systematic review and meta-analysis. Scientific Reports 11, 3855 (2021).

9. Buergler, S., et al. Imaginary pills and open-label placebos can reduce test anxiety by means of placebo mechanisms. Scientific Reports 13, 2624 (2023).

10. Haas, J.W., Rief, W., Glombiewski, J.A., Winkler, A., & Doering, B.K. Expectation-induced placebo effect on acute sadness in women with major depression: An experimental investigation. Journal of Affective Disorders 274, 920-928 (2020).

11. Shafir, R., Israel, M., & Colloca, L. Harnessing the placebo effect to enhance emotion regulation effectiveness and choice. Scientific Reports 13, 2373 (2023).

The post Sugar Pill Nation appeared first on Nautilus.

The Story of a Lonely Orca
Fri, 07 Apr 2023 20:52:53 +0000
After 53 years in captivity, she has a chance at a better life.

The post The Story of a Lonely Orca appeared first on Nautilus.

She was once a wild animal, a predator; part of a family, a pod, a clan. She was magnificent.

On August 8, 1970, when she was 4 years old, she was captured in Penn Cove off Whidbey Island in Washington State. That day, 90 wild orcas were corralled; five drowned, and seven of the young were captured and sold to marine parks in Texas, Florida, France, Japan, Australia, and England. She outlived them all by decades.

The capture was sensational and brutish: a round-up, a speed boat, a spotting plane, explosives, tangled nets, drowned calves, and screaming orca mothers. Divers in wetsuits used ropes, lassos, nets, and nooses to separate the ones who would be sold. Trapped, the young orca lifted her white chin and smooth black head out of the water and looked about her like a child lost in a crowd. They lashed thick straps around her torso and pushed her body into a canvas sling that hung down into the water from a boat crane. Six men in flared blue jeans with leather belts and sweat bands, three with bare chests, squatted low on the floating dock with arms extended and heaved against the weight of the orca on a yellow nylon rope.

On the other side of the net, an orca mother watched her calf being taken. She breached, she shrieked, she slapped her tail, she churned the sea. She stayed. The 4-year-old lay still in the sling, her pectoral fins poking out like big useless paws. The men hosed her down. A crane hoisted her up. She called out as they took her from her tribe.

Lolita performed twice a day, seven days a week, for 50 years.

The capture of orcas—also known as killer whales, although they’re members of the dolphin family—was banned in Washington State in 1976. In the years since, we’ve learned a lot about these animals. We’ve learned that the orcas captured in Penn Cove that day were all from the now-endangered Southern Resident population of the Salish Sea, the coastal waters between Washington State and British Columbia. We’ve learned that they live in stable matrilineal family groups: grandmothers, mothers, aunts, uncles, nieces, nephews, sisters, brothers, sons, and daughters stay together for life. They travel together. Hunt together. Share food. Wait for each other. Grieve for each other. In 2018, a Southern Resident killer whale mother pushed her dead calf’s body for 17 days and 1,000 miles.

What we’ve learned from science has confirmed what some realized on their own. John Crowe worked as a diver on the captures in 1970. In 1998, at a Penn Cove Capture Commemoration event on Whidbey Island, he wore a denim shirt open at the chest with an orca t-shirt underneath. He had the hands and forearms of someone who works outside, a gray beard that followed his jawline, and unruly eyebrows that held his pained face together. When he spoke, he looked at the floor in front of him. He cleared his throat every so often as he described what happened that day. “When we were loading them from the water onto the truck … the terror … from separation. It’s the worst thing I ever did, that ever happened to me in my life.”

An elementary school-aged girl with red hair pulled back into a ponytail sat cross-legged on the dock in the front row and stared as she listened to the man who captured killer whales. A woman in her 60s leaned against her husband as she wiped tears from her face.

“As soon as the sling left the water, when the whale was no longer in the water, that was the last of the communication. And they knew it.” The orcas turned and swam out of Penn Cove and there are no records of them ever returning.

John Crowe, who passed away in 2015, finished up: “Do you have any questions or do you just want to stand there and watch a grown man cry?”

The young orca lay on a mattress in the back of a truck and was driven to a concrete tank at Pier 56 on the Seattle waterfront. A veterinarian named her Tokitae, which in the language of the Coast Salish people means “nice day, pretty colors.” Tokitae was sold to the Miami Seaquarium in Florida, and she arrived there by sling and crane and truck and plane on Sept. 24, 1970. There, they named her Lolita.

Once trained, she performed every day and executed her commands on cue—fluke waves, fluke slaps, speed runs, slide outs, bow jumps, high-energy jumps, head-in entry jumps, back breaches, trainer rides on belly, trainer rides on back, trainer rides on rostrum.

Orcas travel together. Hunt together. Share food. Wait for each other. Grieve for each other.

In video footage of the Killer Whale Show taken at the Miami Seaquarium in 2013, people begin to gather for the show outside a closed metal roll-up door. A young girl sits on a handrail. A man leans against the wall. A small boy pulls on his mom’s arm. They look like they’re waiting for a bus. Or waiting in line for popcorn. Founded in 1955, the Miami Seaquarium looks a bit like a low-budget motel. The ticket booth seems like a place where you’d line up to buy cotton candy and a token to play duck pond. Inflatable killer whale toys dangle from the ceiling of a gift tent. The white eye markings on the killer whale toys are (incorrectly) shaped like speech bubbles, and there’s no saddle patch under the dorsal fin. There are bright blue T-shirts for sale with cartoon representations of killer whales bursting out of a rainbow splash of tie dye.

A faded wooden sign hangs above the entrance to the show with (another) inaccurate cartoon design of a killer whale. The dorsal fin looks like that of a shark. When the garage door rolls up, people walk through a concrete alleyway that opens out to a dilapidated pool painted blue with bleachers on either side. A lowly coliseum. Lolita is already there. She floats, bobs, and lolls with her rostrum and one eye out of the water like a waterlogged thing that can’t right itself. An anti-climactic main event. The sun glints off her shiny black skin. Two teenage boys who work there have their backs to her as they lean up against the dirty glass of her tank. They are wearing Miami Seaquarium polo shirts and their shorts hang low with the weight of the radios that are clipped to their belts. More people file in and begin to fill the concrete bleachers. Lolita comes to the wall, to the sound of performance: crowds, cameras, music. By this time she had been living in captivity for 43 years.

THE CAPTURE: A glimpse of the roundup during which seven young orcas, including one who came to be known as Lolita, were taken from their families off the coast of Washington. After 53 years in captivity, much of it spent performing tricks, Lolita may be returned to a sanctuary in coastal waters. Photo by Wallie Funk.

Two women wearing matching full-body wetsuits appear from behind the scenes carrying Coleman coolers. They announce, with theatrical performance, “Lolita! Our killer whale!” The crowd erupts and drowns out the din of disco music.

Lolita holds her head up obediently in front of the trainer. On command she sinks down. Lolita swims fast around the tank and explodes out of the water. In the video footage, I see the full size of her four-ton body suspended over the tank, and it doesn’t look like she will fit back in. I wince, afraid she will hit her head or her tail flukes against the concrete as she breaches. She performs a slide-out onto a concrete island in the middle of the pool. Lolita gets a dead fish.

On her back with pectoral fins in the air, Lolita propels herself around the periphery of the pool with pumps from her tail to the beat of the 1990s Reel 2 Real hit, “I Like to Move It.” One of the trainers kneels on Lolita’s white chest between her pectoral fins and waves and smiles at the audience while she rides the killer whale around the tank. Lolita gets a dead fish.

People watch, clap, and cheer. Lolita gets another dead fish.

Lolita swims another circuit, this time working her tail to keep her head fully out of the water. The trainer stands on her rostrum as if it were a podium and waves and smiles at the audience and squints against the glare of the sun. Lolita gets a dead fish.

After the performance, a janitor in khaki trousers and a Miami Seaquarium T-shirt picks up plastic soda bottles and sweeps candy wrappers. The garage door rumbles down and Lolita waits for the next show.

In video footage from January 2021, not much had changed. Wetsuit-clad trainers clap their hands to the beat of the 2013 hit “All Night” from the Swedish duo Icona Pop while Lolita performs flips and tail whips in her tank. The people in the concrete bleachers clap and sing along with the lyrics, “We always dreamed about this better life …”

Lolita performed twice a day, seven days a week, for 50 years. For much of that time, people—animal rights groups, orca advocates, the Lummi tribe of the Pacific Northwest, a public growing ever less comfortable with keeping whales and dolphins in captivity—protested her conditions. They organized lawsuits, petitions, and boycotts; sometimes they gathered outside the Seaquarium, carrying signs and making speeches.

In June 2021, a U.S. Department of Agriculture inspection at the Miami Seaquarium revealed a long list of animal welfare violations. The facility had disregarded veterinary recommendations for Lolita’s care. Details of the inspection were published by the USDA’s Animal and Plant Health Inspection Service in September 2021: Multiple injuries to her lower jaw from hitting a bulkhead during a show. (She was asked to perform high-speed circles and head-first jumps despite the vet’s recommendations to the contrary.) Overexertion. Chlorine damage to her eyes. Lesions in her right eye. Dehydration. Inflammation. Malnutrition because her food had been reduced from 160.7 pounds per day to 132.1 pounds per day, and she was sometimes fed poor-quality salmon scraps and rotten capelin. According to the USDA vet, feeding poor-quality or partially decomposed fish can result in illness, compromised immune systems, and even death.

Tokitae’s devotees want a chance to give her a better life.

Late in 2021, the Miami Seaquarium came under new ownership. In February 2022, in compliance with new USDA permits, the new owners retired Lolita from shows. She no longer performs; she has been waiting in her tank while people decide what to do with her. What do you do with an aging four-ton killer whale who is dependent on humans? How do you right a wrong that can never be made right?

The Lummi tribe consider Lolita a relative and call her Sk’aliCh’elh-tenaut. Over the past few years, they have fought to move her to a sanctuary in Washington’s San Juan Islands. In early 2022 they joined a collective of other Lolita advocates, known as Friends of Toki, and in March of that year the Miami Seaquarium’s new owners agreed to work with them. Such an unlikely collaboration had never happened before. With it came possibilities. Her day-to-day care improved. There is open and constructive conversation about her future and, as of March 2023, a mutual commitment to return her to a sanctuary pen in her home waters. For the first time in decades, there is movement in Tokitae’s story.

Repatriation to a sanctuary would set a precedent for captive orcas all over the world. It will also take government approval because the Southern Resident orca population is federally classified as endangered.

And it will take a clean bill of health. Over the past 12 months, Tokitae has suffered from a chronic respiratory infection that needed ongoing treatment with antibiotics. At a conference in December 2022, her care team talked of her the way you talk of a loved one when you don’t know how much time they have left. “Yesterday she had a not-so-great-day. Today she had a good day.” Ten weeks earlier they thought she was going to die. But the February 2023 health assessment reported “optimism.” Her condition was “stable.” Her appetite, energy, and engagement in daily activities were “steady.” She looked “good” clinically. Maybe she lives 10 more years.

Tokitae’s devotees want a chance to give her a better life. A chance to give her dignity and ease. A chance to finally do better by her. Their dream is for her to live out her remaining years in her native habitat where her family members might swim by from time to time and where she might hear the vocalizations—the clicks, squeaks, trills, whistles, and pops—unique to her pod. Will she recognize their calls; will they recognize hers? Will she remember the sounds of her early years the way someone recognizes a language they knew as a child? Will she remember the place where the water is salty? Where waves skirt the rocky shore? Where the currents run strong?

Lead image: Kamira / Shutterstock

Is There Any Place for Race in Medicine?
Wed, 05 Apr 2023 21:43:39 +0000
Medicine uses race to try to provide more equitable care. But that prescription likely does more harm.

The post Is There Any Place for Race in Medicine? appeared first on Nautilus.

“What prescription would you recommend?” my attending physician asked me.

We had just admitted a patient to the large teaching hospital where I was a medical student. He had been in hypertensive crisis with type 2 diabetes and would soon need a medication he could take at home. This was the first Black patient I had helped evaluate with this condition, and I knew we could not recommend the standard medications, the ones prescribed to all of the patients I had seen up to that point in my medical training.

His prescription would have to differ because a series of decades-old studies, “adjusted” for race (“Black” vs. “non-Black”), found that Black research participants had a suboptimal response to the standard, first-line treatment.

Physicians often still assess—and treat—Black patients differently.

I knew this common wisdom well from graduate school training in epidemiology, where decisions like these were about numbers and statistics. As expected, the team prescribed this Black patient a calcium channel blocker rather than a standard ACE (angiotensin converting enzyme) inhibitor or an ARB (angiotensin receptor blocker) that non-Black patients received.

It was only later that I learned that this recommendation was based on studies that had looked specifically at the responses of African-Americans. But we had been treating an immigrant from West Africa at a Canadian hospital. Which meant that we were making a prescription decision not based on data from people of his particular genetic background or lived experience but on his skin color alone. And that felt like a flimsy clinical benchmark. But it was 2011, and I was still a medical student, so I went along with the established, “best practice” recommendation.

Today, despite decades of work to deconstruct the ingrained racism in North American medicine, physicians often still assess—and treat—Black patients differently. As a rationale, scholars and clinicians point to statistics. It is known as the “correction for race.”

Over the years, researchers have theorized about potential genetic underpinnings of this “correction” in an attempt to find biological plausibility. But this treads on infamously dangerous territory, trodden before by myriad biological-anthropological scholars who sought to perpetuate the myth of white superiority.1

These centuries-long misconceptions have been so deeply woven into the fabric of medical training and research that, even in an era defined by rigorous research, their presence still appears in the sorts of clinical decisions I was part of daily, with lasting health implications for many.

The work of undoing these tangled webs of reasoning may be difficult, but with careful and persistent work, the result could one day be truly equitable, better care.

The West African patient’s particular treatment “correction” was just one of many we learned in medical school. I also learned how to “correct” for an apparent difference in glomerular filtration rate (a reflection of kidney function, which would be used to dose medications) in Black patients. This was based on a theory that presumed higher muscle mass in Black patients would lead, ultimately, to a difference in a measure of this filtration rate. And I was taught to “correct” lung volume for breathing tests, based on the theory that Black patients had larger lung capacity on average compared to other races, which had implications for evaluating and treating everything from chronic obstructive pulmonary disease to asthma.

Beyond the bedside, some of these “corrections” are now embedded in the code of electronic medical record algorithms addressing a variety of conditions, including determining likelihood of death, with dire implications for care.

Over the past few years, however, the shaky foundations of these assertions have finally begun to show cracks—first with lung function, and then for kidney disease, more recently for how febrile Black children are assessed.

In their key 2020 commentary in the New England Journal of Medicine, Harvard Medical School professor of the culture of medicine David Jones and colleagues carefully analyzed several clinical practice guidelines that use the variable of race—typically “Black” and “non-Black”—to decide on medical management.

SEPARATE AND UNEQUAL: Race stands alone as a curiously imprecise criterion doctors often use to make treatment decisions. For decades, a Black patient with high blood pressure would automatically receive a different standard of care than a non-Black patient with the same numbers. Photo by Andrey_Popov / Shutterstock.

Thinking back to the patient I helped assess years ago, who was sent home on a different standard of care, we now know (from a study published in 2022) that these prescribing patterns may actually lead to worse blood pressure management, lower access to the right medications, and, ultimately, worse outcomes.2 Similarly, a 2022 study in JAMA Pediatrics found that for guidelines in children’s medical care that incorporated race or ethnicity, nearly half had a potential negative effect on providing adequate care.3 The authors of that study concluded inequities could be exacerbated, more often than helped, by using race.

Yet these damaging “corrections” persist. For example, the adjustment for kidney filtration rate retains implications for kidney treatment, including suggesting lower priority for many Black patients for dialysis or transplant.

And in looking at impacts on children, one team of researchers argued against using “Black” in determining risk for urinary tract infections in pediatric patients with a fever, as had been done for years.4

“It didn’t seem possible that they could all share a common invulnerability to UTI,” Rachel Kowalsky, an assistant professor of emergency medicine and pediatrics at New York-Presbyterian Weill Cornell Medicine and a coauthor of the 2020 paper, says of Black children across the United States. “Race was being inappropriately used as a proxy for biology. And, because the guideline applied to the majority of infants 2-24 months old coming to the ER or clinic with fever, an enormous number of children could be negatively affected by its use.”

Even calculations where race would potentially tilt treatment in favor of earlier interventions for Black individuals are problematic, some scholars argue. In a 2022 article, Joseph Wright, a professor of pediatrics and health policy and management, and Chief Health Equity Officer of the University of Maryland Medical System, and his colleagues describe the example of the risk for clogged arteries, or atherosclerosis, and resulting cardiovascular disease.5 “One might argue that the ASCVD Risk Estimator may be protective in terms of potentially skewing early cardiovascular care toward Black patients,” they write. “However, the inherent danger is directing differential treatment to Black versus white patients on the basis of a flawed phenotypic signal in the face of what might otherwise be identical underlying risk profiles. Incorporating race as proxy for the biological effects of differential lived experience is misplaced.”

The cognitive dissonance, for me, is not dissimilar to that of the Brown v. Board of Education decision: Separate care is prima facie unequal care.

So if the intention behind these “corrections” was to lead to more equitable care, how did it end up doing the opposite?

The efforts to incorporate race in medical research and clinical decision-making had noble goals, including sniffing out health disparities and ensuring research studies were adequately representative of the broader population to make their results generalizable. To that end, since the 1970s, medical researchers in the U.S. have collected data on race and ethnicity and used it as a variable (as well as sex, education, socioeconomic status) to analyze results. The aim has been to determine how a specific treatment impacted a subgroup.

There is an important distinction though: When research studies lead to clinical practice guidelines, no guideline suggests using one approach or therapy over another based on sex, education, or socioeconomic status for something like blood pressure management. No “correction” exists for these variables. Race stands alone: a social construct that’s misused as a biological one.

This approach to dragging race into health has a deeply shameful history in Western-based medicine. In an early example, Thomas Jefferson wrote about “dysfunctional” Black lungs, justifying slavery as a way to improve blood supply to further develop these lungs. Jefferson was suggesting that slavery was beneficial to Black physiology, without noting that any lung differences likely had a social cause that was due to slavery and over-exertion.

Separate care is prima facie unequal care.

Ensuing centuries of dubious claims of scientific racism failed to turn up a solid rationale for biological differences between the races. Finally, in a seminal 2003 New England Journal of Medicine article, a team of prominent geneticists asserted: “Meaningful biologic differences simply do not exist between different races in a correlation or association.”6

Indeed, any differences may be explained by sociocultural differences and lived experience. Neuroimaging studies have found, for example, only cultural links (not racial ones) to brain appearance. As Nigerian-British writer Chimamanda Ngozi Adichie has expressed, the dangers of applying a single story to a group, to assume each individual has a similar experience, are many. When these ideas are applied to health, it’s catastrophic.

The arrival of genetics-based studies in the first decade of the new millennium was supposed to save us from all of this. They were to usher in a new, agnostic era of precision medicine. But race has proven a stubborn habit to kick, even in the time of rapid genome sequencing and big data.

“I always find it ironic that when it comes to biomedical research, there’s a lot of emphasis on precision medicine,” says Genevieve Wojcik, an assistant professor in the department of genetic epidemiology at Johns Hopkins Bloomberg School of Public Health who studies diverse populations and genetic associations. But, she says, there remains “an acceptance of some kinds of imprecision, such as race, which is a very imprecise variable.”

The variable of race becomes especially tricky when larger studies are translated into clinical decision-making, she tells me. “Race in most contexts is used as a proxy, and disentangling it from what we really want to measure is a challenge that would require research to distill these variables down into their sort of root causes … For example, is it race we want to measure or the effects of racism?” Wojcik asks.

Ongoing debate around affirmative action in college admissions echoes some of these same chords. Is it simply a matter of race, or is it a matter of social factors or experiences of adversity? Of course, race and social adversity don’t have a 1:1 association, in the same way that a given gene is not 1:1 associated with race. Might social adversity, not race, be a better marker of pluralism—and more equitable medical treatment? So perhaps ridding ourselves of the last remnants of race-based medicine, much like pushing for race-neutral college admissions policies, makes sense.

My research and reporting thus far had led me toward that conclusion. But then things got more complicated.

I met Gregory Hall at a medical conference in February. Hall is a primary care physician based in Cleveland, the medical director of University Hospital’s Cutler Center for Men, and an associate professor in Internal Medicine at Northeast Ohio Medical University College of Medicine. He is also the author of Patient Centered Clinical Care for African Americans: A Concise Evidence-Based Guide to Important Differences and Better Outcomes, and his patient population is predominantly African-American, an ethnic group he also identifies with.

Hall disagrees with the principle of dismissing race altogether in clinical decision making. He gives the example of a study that looked at rates of prostate cancer. The study found prostate cancer risk was 10 times higher in African-American men compared to West African men, and so the cause was likely not along race lines but due to inequity and other social factors. He also cites the higher prevalence of HER2-negative breast cancers in African-American women and the higher likelihood of aggressive prostate cancer that metastasizes to bone in African-American men. These race-based differences are essential to acknowledge, he says.

Race has proven a stubborn habit to kick, even in the time of rapid genome sequencing and big data. 

Some health organizations in Hall’s state of Ohio have stopped collecting race data in efforts to be race-neutral. Beginning in 2020, several U.S. hospitals also began to halt the use of race as a variable in their clinical decision making. The University of Washington stopped using the race-based kidney filtration variable in its clinical decision making, and a multidisciplinary, multi-institution group created an alternative “race-free” equation to assess kidney failure. In 2021, physicians from the University of Pennsylvania recommended that the American Thoracic Society revise its guidelines for assessing lung function.

Last year, in The Lancet Digital Health, Jones’ colleagues at Harvard published a paper recommending a revision to the atherosclerotic cardiovascular disease calculator to exclude race.7 A few months later, a group at the University of Pittsburgh examined Kowalsky’s claim. In their study in JAMA Pediatrics, using data from 16 previous studies covering almost 180,000 children, they found that replacing race with prior history of UTI and fever duration was similarly accurate at predicting risk of UTI in statistical models. The American Academy of Pediatrics has since offered a new, race-agnostic management guideline.

But some of these actions, though they seem in their own way corrective, concern Hall.

“My worry is if we take race totally out of things, communities that have serious health problems that need targeted attention will not be readily identified,” Hall tells me. “Biologically we are 99.9 percent the same, and based on that alone, race really is a social construct. But that social construct has created the health landscape that we currently live in,” he says. “If we completely remove race as a consideration in our clinical thinking and how it impacts patients’ predispositions, treatment, health, and access to services, we will be ignoring some very critical aspects of their overall health.”

Hall’s perspective led me to understand that a deeper issue was indeed at play. It wasn’t simply about whether to remove race from a series of clinical decision-making algorithms. It was about how to use race in a way that would best inform overall care.

Unbiased healthcare provision is not attained simply by deleting “corrections” from medical texts, but will require taking the harder path of shifting systemic barriers that consistently lead to worse health outcomes in Black patients.

So rather than turning a blind eye to race in medicine, “we need to collect much more comprehensive data about patients’ social and economic exposures, and do so longitudinally,” Jones says. “This will require a ton of work—but so did the human genome project. If doctors agree that something is important, then the needed research will get done.”

Indeed, to move from race-based to race-conscious medicine requires bravely challenging the very structures that were responsible for creating the healing profession in the first place.8

In March, the National Academy of Sciences released a new report that makes clear the ongoing concerns around presuming race and ethnicity have biological underpinnings, while acknowledging that such data are helpful demographically—something Hall agrees with.

“To be ‘race-conscious’ of something is to consider race as part of the decision-making process,” Hall says. “No medical decision should be purely race-based.”

If the patient I had seen in medical school were back in the hospital today—or in the very near future—hopefully his care team would look beyond outdated, black-and-white statistics to turn a careful eye to things like systemic barriers to optimum blood pressure management. They would be aware of his race but would not automatically prescribe a medication simply because of it.

Amitha Kalaichandran is a public health-trained physician, medical journalist, and health tech consultant based in New York.

Lead image: Nicoleta Ionescu / Shutterstock


1. Deyrup, A. & Graves Jr., J.L. Racial biology and medical misconceptions. The New England Journal of Medicine 386, 501-503 (2022).

2. Holt, H.K., et al. Differences in hypertension medication prescribing for Black Americans and their association with hypertension outcomes. The Journal of the American Board of Family Medicine 35, 26-34 (2022).

3. Gilliam, C.A., et al. Use of race in pediatric clinical practice guidelines: A systematic review. JAMA Pediatrics 176, 804-810 (2022).

4. Kowalsky, R.H., Rondini, A.C., & Platt, S.L. The case for removing race from the American Academy of Pediatrics clinical practice guideline for urinary tract infection in infants and young children with fever. JAMA Pediatrics 174, 229-230 (2020).

5. Wright, J.L., et al. Eliminating race-based medicine. Pediatrics 150, e2022057998 (2022).

6. Cooper, R.S., Kaufman, J.S., & Ward, R. Race and genomics. The New England Journal of Medicine 348, 1166-1170 (2003).

7. Vyas, D.A., James, A., Kormos, W., & Essien, U.R. Revising the atherosclerotic cardiovascular disease calculator without race. The Lancet Digital Health 4, e4-e5 (2022).

8. Cerdeña, J.P., Plaisime, M.V., & Tsai, J. From race-based to race-conscious medicine: How anti-racist uprisings call us to act. Lancet 396, 1125-1128 (2020).

The post Is There Any Place for Race in Medicine? appeared first on Nautilus.

To Supercharge Learning, Look to Play Tue, 04 Apr 2023 22:37:32 +0000 Play and art engage all of our senses and enhance attention.

The post To Supercharge Learning, Look to Play appeared first on Nautilus.

David Zhang of Guangzhou University recently led a group high onto the Tibetan Plateau of southwestern China, an area known as “the roof of the world” for its elevation of more than 4,000 meters above sea level. There, they found a piece of limestone that had fossilized a playful composition of hand and foot impressions. The pattern was “deliberate” and “creative,” according to a paper that Zhang and fellow researchers published in the journal Science Bulletin in 2021, and the piece “highlights the central role” that artistic exploration and play have held for our species. Uranium-series dating determined that this artwork could be as old as 226,000 years. With our hands and our feet as our first artistic tools, we’ve been leaving behind our imaginative impressions since before Earth’s last ice age.

Play is a key component of the arts and aesthetics in myriad ways. Art and play are two sides of the same coin, with play being a part of artistic expression, imagination, creativity, and curiosity. Though it often gets buried in adulthood, the urge to play exists in all of us, and it has been a major part of how we’ve evolved as a species. As the saying often attributed to Plato goes, “You can discover more about a person in an hour of play than in a year of conversation.”

Kids today will have jobs and careers that are nothing like anything their parents and elders recognize.

Roberta Michnick Golinkoff, a professor of education at the University of Delaware, and Kathy Hirsh-Pasek, a professor in the Department of Psychology at Temple University and a senior fellow at the Brookings Institution in D.C., have identified play as a key ingredient for learning. The title of their 2003 book pretty well sums up their philosophy and their years of research around play and learning. It’s titled: Einstein Never Used Flashcards: How Children Really Learn and Why They Need to Play More and Memorize Less.

“If you’re not having a good time, you’re really not learning,” Roberta told us. “And there are so many ways in which we can infuse play into classrooms and informal learning environments.” This is supported by research on the neuroscience of play and learning. Play, the research notes, is universal to our species, and when humans play, it positively influences both their cognitive development and their emotional well-being.

There are, Roberta and Kathy point out, two major kinds of play, free play and guided play. Free play is under a child’s control and not designed to satisfy any external goal. Kids excel at this. Think about playing dress-up or make-believe. Play with an adult who has a learning goal is guided play. When it’s done well, it supports the learning of new skills.

Roberta and Kathy use the example of bowling. In most alleys, you can ask for bumpers to be raised so that the ball never rolls into the gutter. When people are first learning to bowl, the bumpers make the game more fun: They get to discover the joy of knocking a few pins down. “With guided play, we set up the environment for kids so that they can learn different things,” Roberta explains.

When children have an opportunity to learn in a playful way where they have some agency, where they’re active, where they get to be involved and collaborate, it leads to the gold standard of learning: transfer. “You can take something you learned in one context and apply it to another, and when you can create environments in which these things are exemplified, that’s when you get real learning,” Roberta says.

One teacher they mentioned set up a center with all kinds of writing implements and paper. “Now, this was kindergarten before kids really knew how to write,” Roberta says. “But what did it do? It spurred writing during their free time. They would come up and ask the teacher: ‘How do I write this letter? How do I write my name?’ ”

Kathy and Roberta’s latest book, the bestseller Becoming Brilliant, offers an honest assessment of where we are: Kids today will have jobs and careers that are nothing like anything their parents and elders recognize. They need to be able to adapt to rapidly changing realities. They need to be able to entertain different ideas and approach them without consequence.

BORN TO PLAY: EndeavorRx is an FDA-approved video game for kids with ADHD. Players must navigate a 3-D world full of distractions and obstacles, such as waterfalls and icebergs. The benefits to attention were shown to last even after the kids stopped playing. Image courtesy of EndeavorRx®.

And yet with all the changes under way, we still pretend that content is king in education. But content is not the only thing that matters. Really what kids need to learn, they told us, are the “6 C’s”: collaboration, communication, content, critical thinking, creative innovation, and confidence. Play and the arts, based on their research, build the 6 C’s. An initiative that the two of them are involved in, one that Susan also helped to build, is called Playful Learning Landscapes Action Network.

By 2050, nearly three-quarters of the world’s population will live in urban settings, and this network is designing play into that eventual reality. The landscapes incorporate evidence-based designs that use the arts and games to transform everyday public places—bus stops, libraries, parks—into hubs of playful learning. In one game called “jumping feet,” a series of stones with drawings of either one or two feet is placed at strategic distances, with signs that encourage kids to jump in specific patterns. It’s a twist on hopscotch that uses cognitive-science methods shown to improve attention and memory.

Studies of pilot programs in the cities of Chicago, Philadelphia, and Santa Ana have found that these playful environments encourage children to talk about numbers, letters, colors, and spatial relations with their caregivers, and they can increase children’s understanding of mathematical concepts, including fractions and decimals, among other skills. They also create intergenerational and peer-to-peer learning where the world becomes a playful studio. Once you open up to the idea of playful landscapes being everywhere and anywhere, suddenly your surroundings become a world of possibility.

To learn anything at all, one of the most important cognitive states has to be present from the very beginning: attention.

Attention directs our consciousness to focus on some things and not others, and it is a fluid state that moves through varying degrees. “Attention is your ability to selectively focus and sustain focus,” Adam Gazzaley explained to us one afternoon. Adam is a neuroscientist and professor of neurology, physiology, and psychiatry at the University of California, San Francisco, and the founder and executive director of Neuroscape, and he’s been studying the brain’s capacity for attention for decades. “Your ability to move your attention flexibly, which is called switching, is quite limited,” he said.

Sustained attention is a challenge for all of us. Adam, along with psychologist Larry D. Rosen, disabused us of the notion that humans can multitask. In their 2017 book The Distracted Mind, they explain that the human brain isn’t actually capable of doing multiple things at once. The brain never multitasks, Adam said; it toggles quickly between tasks.

For many, the ability to maintain attention is a challenge. It’s estimated that as many as 366 million adults around the globe live with attention deficit hyperactivity disorder (ADHD); six million children had been diagnosed with ADHD in the United States as of 2016. ADHD can make it difficult to sit still, focus, and stay quiet. Because ADHD affects divided attention, or what has previously been described as multitasking, it can inhibit organization. This often means that the experience of a traditional classroom setting is miserable. Young people with ADHD are often labeled as having behavior problems and being classroom disruptors. These labels are misplaced and can forever damage a child’s belief in their own intelligence and abilities. And there is the roller-coaster ride that can come from trying different medications, not to mention their staggering cost, which can run into the hundreds of dollars per month.

Adam has a solution for ADHD: immersive video games.

That’s right. The great scourge of parents everywhere in the “drop that controller and pick up your homework” debate can be an excellent form of learning when designed with neuroarts in mind. To boost a child’s attention, it may help to let them play Adam’s video game, NeuroRacer, for 30 minutes a day.

Adam has studied how neural networks in the brain underlie our ability to pay attention, or our failure to do so. He started researching what happens in the brains of those with ADHD in order to translate brain science into devices and processes that help at the neuronal level.

Adam’s group was able to prove the sustained benefits to neural mechanisms after playing this video game.

Our capacity to pay attention is crucial for everything we do in life. “When attention is degraded or it doesn’t develop well or it’s simply fragmented by too much switching in life, everything is impacted,” he says. “How you interact with your family, how you go to sleep at night, how you perform your schoolwork and your homework or your other work.”

Adam wondered how he might harness the brain’s plasticity to lead to improved attention. The best way to stimulate plasticity, he knew, was through immersive experiences. But he needed to figure out how to create experiences that selectively targeted neural networks in such a way that they created improved attention span. Learning is, by its very nature, a moving target. When our brain shifts to create new neural circuits, it’s not the same as when we started, so Adam puzzled over how you create a process that adapts with the brain’s plasticity.

That’s when he took a page from engineering and looked into the closed-loop system. The example he used with us is that of a clothes dryer. “A closed-loop dryer would be one that, instead of just setting a time and then going back and checking if your clothes are dry, as our dryer works, which is very inefficient, the dryer has sensors that detect the moisture in your clothes and basically turns off when your clothes are dry,” he explained.

“So, a closed-loop system has sensors to take in input about what they want to change, and then has a processor that makes a decision based on that data,” he said. “So, what I wanted to create was a tool where your environment, the stimuli, the challenges, the rewards are all updating in real time based on your brain.”
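The loop Gazzaley describes (sense the player, decide, adjust the challenge) can be sketched as a simple adaptive-difficulty "staircase." This is an illustrative toy, not Neuroscape's actual algorithm: the weighted-staircase rule and the simulated player's skill model are assumptions made purely for the sake of the example.

```python
import random

def adaptive_difficulty(trials=500, target=0.8, step=0.05, seed=0):
    """Closed-loop staircase sketch: after each trial, nudge difficulty
    so that the player's success rate settles near `target`.
    The simulated player (success probability = 1 - difficulty)
    is an assumed skill model for illustration only."""
    rng = random.Random(seed)
    difficulty, history = 0.1, []
    for _ in range(trials):
        p_success = 1.0 - difficulty           # toy skill model
        success = rng.random() < p_success
        if success:
            difficulty += step * (1 - target)  # small nudge up on success
        else:
            difficulty -= step * target        # larger pull down on failure
        difficulty = min(1.0, max(0.0, difficulty))
        history.append(difficulty)
    return sum(history) / len(history)         # average difficulty reached
```

The asymmetric step sizes mean the drift is zero exactly when the success rate equals the target (here 80 percent), so the game stays challenging but not discouraging, and the difficulty level itself tracks the player's improving skill in real time.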

Video-game design and augmented reality are dynamic aesthetic enhancers. The artistic components that create a successful game—an immersive world, strong art and graphic design, vibrant colors, sounds, music, storytelling, and characters—are what make gameplay so compelling. And there is also the emotional attachment to the narrative and your role in the story.

Adam set about designing an aesthetically rich video game that would support and build the brain’s capacity to pay attention. He partnered with talented video-game developers in order to create something that worked with an attention-deficit brain. “I took the art and aesthetics of the game quite seriously,” he said. It took a year, and in 2013, he released NeuroRacer.

His immersive game challenged the neural networks that are fundamental to cognitive control and attention abilities: interference resolution, distraction resistance, task-switching. By playing, the hope was that the game could reshape those circuits, and that the benefits would transfer, lasting beyond the game and carrying over into the rest of your life. “If a kid gets better at a test in school but is not actually able to perform that work in the real world, you’d sort of consider that a failure of the education system, right? Because the goal is not just to get better at your test and not to get a great SAT, but to actually be smarter and savvier and wiser,” Adam says.

As a player in Adam’s game, you’re in an environment where you have a series of goals and they’re challenging you in an adaptive way. “One task is navigating through this 3-D world,” Adam describes, “where you have to go over these icebergs and waterfalls. Then at the same time, you also have to respond to targets and ignore distractions. This is like cognitive control through the roof. You have two tasks that are both equally important to you that are occurring simultaneously in the presence of distractions.” The game starts easy and gets harder as your attention capacity improves.

Adam and his team found that improvement in attention for players did extend outside of the game, and their research made the cover of the scientific journal Nature. Adam’s group was able to prove the sustained benefits to neural mechanisms after playing this video game.

Next, Adam started a company called Akili that took the game to the next level by infusing it with more sophisticated levels of art, music, story, and reward cycles for the players. That version of the game went through multiple clinical trials, including a Phase 3 placebo-controlled trial in children with ADHD. For the trial, a kid played for 30 minutes a day, five days a week, for one month and that was considered, to put it in pharmaceutical terms, “a dose.”

In 2020, the game, now called EndeavorRx, was approved by the FDA as a Class II medical device to treat children with ADHD. “This is the first non-drug treatment for ADHD and the first digital treatment for children in any category, so it was really pretty exciting when it happened,” Adam says. His video game is now being prescribed by doctors. What’s extraordinary about Adam’s work is his capacity to understand how learning differences—like attention deficits—actually work in the brain, and how arts-infused experiences can address them.

Susan Magsamen is the founder and director of the International Arts + Mind Lab, Center for Applied Neuroaesthetics at Johns Hopkins University School of Medicine, where she is a faculty member. She is also the co-director of the NeuroArts Blueprint. Susan works with both the public and private sectors using arts and culture evidence-based approaches in areas including health, child development, education, workforce innovation, rehabilitation, and social equity.

Ivy Ross is the vice president of design for the hardware product area at Google, where she leads a team that has won over 225 design awards. She is a National Endowment for the Arts grant recipient and was ninth on Fast Company’s 2019 list of the 100 Most Creative People in Business. Ross believes that the intersection of arts and sciences is where the most engaging and creative ideas are found.

From the book Your Brain on Art by Susan Magsamen and Ivy Ross. Copyright © 2023 by Susan Magsamen and Ivy Ross. Reprinted by arrangement with Random House, a division of Penguin Random House LLC. All rights reserved.

Lead image: Hibrida / Shutterstock


Exercise Is Great for Our Brains, Too, Right? Mon, 03 Apr 2023 22:21:22 +0000 One question for Luis Ciria, a neuroscientist at the University of Granada.

The post Exercise Is Great for Our Brains, Too, Right? appeared first on Nautilus.

One question for Luis Ciria, a neuroscientist at the University of Granada, where he studies how the brain works under physical exertion.

Photo courtesy of Luis Ciria

Exercise is great for our brains too, right?

Probably not. At the beginning of our experiments about a decade ago, we believed that this idea, this hypothesis, was straightforward: that exercise must improve your brain because you improve the rest of your body—why not the brain? But after several experiments and reading the literature, we started to think maybe there are some big problems. We started to go deeper, and we realized that the literature is not solid enough to conclude these things.

It’s not that there is no effect. In our new paper, we only say that there is no strong or solid evidence to say that physical exercise improves our cognition. My feeling is that there is a small effect, but not big enough to improve our daily life. For example, to remember what I need when I go to a supermarket, I need a list even if I exercise a lot. 

We have no idea what is going on inside our brain.

Before our review, some other reviews indicated that there is a small but positive effect. We went to the primary sources of evidence, all these randomized controlled trials, and showed statistically that most of them are underpowered, meaning they have low sample sizes. So we cannot believe their results.
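Ciria's objection about underpowered trials comes down to a standard sample-size calculation. As a rough illustration (not a figure from his paper), the usual normal-approximation formula shows how many participants per group a two-arm trial needs to detect a standardized effect of a given size with 80 percent power:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate participants per arm for a two-sample comparison of
    means, via the normal-approximation sample-size formula:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

small = n_per_group(0.2)  # a "small" standardized effect
large = n_per_group(0.8)  # a "large" standardized effect
```

To detect the small effect Ciria suspects (around d = 0.2), a trial needs roughly 400 participants per group; a trial with a few dozen participants per group can only reliably detect large effects, so its null results say little about whether a small cognitive benefit exists.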

The idea behind these studies is that exercise is so good for many other things—for our cardiorespiratory systems, our muscles—but people tend to think of our brain as a muscle. So they say, “If you train your body and your brain, you will improve your cognitive function. It will show better performance.” But that’s not the case. The brain doesn’t work as a muscle. Some people point out that exercise increases blood flow in your brain. Or that it leads to higher releases of catecholamines, a class of neurotransmitters involved in the stress response. But it’s not clear. The brain is so complex. We have no idea what is going on inside our brain. It’s so difficult. This is one of the main weaknesses of this literature.

In our review, we only focus on the physical component of exercise. But for example, there are some sports, like soccer or basketball, that incorporate core cognitive components. When you play basketball, it’s not just running or jumping. You also have to make decisions, pay attention—a lot of cognitive things at the same time. One of the points that we highlight in the paper is that maybe the combination of physical exercise and cognitive training together could improve our cognitive function. Of course, if you train your memory, you are going to improve your performance on a memory task. That’s obvious. So in our review, we excluded the studies with yoga, for example, and tai chi, because these are physical activities which incorporate a really important cognitive component, where you’re focusing your mind or attention in one way or another. That’s something that we need to investigate further. 

If you ask me, if you exercise, are you going to feel better? No doubt. If you run 10 kilometers now, you’re going to release stress. You’re going to meet people, maybe make friends. You’re going to form social bonds. It’s going to be much better than if you go to your sofa and drink four beers and watch TV. If you exercise, you usually breathe clean air. You’ll get some sun. You will sleep much better. 

Sleeping well is so important for our mental and physical health, it’s amazing. It’s probably not the exercise that’s directly benefiting you. It’s sleeping well. Eating well, and being with people, relieves your stress. All these factors around exercise are what probably helps improve your brain, or slow down your cognitive decline, rather than the exercise itself.

Lead image: SpicyTruffel / Shutterstock

