The Great Unknowns
August 31, 2011  |  by Michael Anft

We laugh. We blush. We kiss. But why? What, evolutionarily speaking, are the advantages of swapping germs with someone when a sloppy smackeroo is hardly integral to propagating the species? We travel on a smallish stone that orbits a yellow dwarf of a star on the edge of one of billions of galaxies in the universe. Where did all these galaxies come from? Our bodies and minds respond to the fake-out that happens when sugar pills are substituted for medicine. What makes the so-called placebo effect work?

Never mind the enduring mysteries surrounding cancer and other incurable diseases, or the enigma of the disappearing contents of your sock drawer. Scientists remain daunted by some of the most basic questions regarding human behavior, the cosmos, and the building blocks of life.

The list of what science doesn’t know is voluminous. Unraveling 14 billion years of natural history—the machinations of the universe, of cells, molecules, atoms, quarks, of why animals and humans do what they do—in the few short centuries that humanity has hashed out and honed the scientific method is a task that slogs along at its own cautious pace. Despite endless questioning, layer upon layer of observations, and long lab hours, science continues to be mocked by nature—or at least made to toss and turn at night. Here are six “problems” that have science stumped.

What makes up 95 percent of the universe?

The answer to the most basic of questions—what’s out there?—has been undergoing constant revision for millennia. Aristotle thought all could be explained by the quartet of earth, air, fire, and water. During the past century or two, various discoverers of the smallest of things—atoms, electrons, quarks, and other subatomic particles—posited that these tiny bits of matter made up each iota of the Great Beyond, and Earth, too.

It turns out that atoms and other particles we know and understand only make up about 5 percent of the whole shebang. Decades ago, astrophysicists attempting to “weigh” all the matter and energy in the universe found that their calculations added up to an impossibly large figure, given that most of space is, well, space. How to explain it?

In 1998, an international team of researchers, including Harvard postdoc Adam Riess, now a Johns Hopkins professor of physics and astronomy, investigated the light emanating from exploding stars billions of years old. Riess found that those stars were racing away—a sign not only that the universe is expanding, as Edwin Hubble had posited in 1929, but that it is moving outward faster and faster all the time. The underlying cause impelling everything in the universe to accelerate apart is now called dark energy, a kind of antigravity that weightlessly takes up space.

“Dark energy could be nothing more than how much nothing weighs,” says Chuck Bennett, a professor of physics and astronomy at Johns Hopkins. It could also be the “cosmological constant,” a quantity put forth by Albert Einstein to serve as a counterweight to gravity. We now know that Einstein’s reasoning for introducing this alleged constant was wrong, but the cosmological constant, with a different value than Einstein’s, is the current favorite candidate for dark energy. We also know that dark energy accounts for about 74 percent of the universe.

Subsequent telescope studies led by Bennett that focused on cosmic microwave background radiation, remnant light dating from the Big Bang, confirmed the existence of another “dark” entity that coyly suffuses the universe with strange bits of stuff. Called dark matter, it neither emits nor absorbs light and is not made of atoms. It accounts for about 21 percent of the universe’s energy density. Dark matter does have mass, however, which should eventually allow it to be detected, measured more accurately, and characterized. Thousands of scientists, including a few dozen from Johns Hopkins, are smashing specks of matter together at unprecedented energies in a subterranean supercollider in Switzerland to see if they can create and detect dark matter. Others are trying to detect it as it passes through old mine shafts in Minnesota.
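For readers keeping score, the figures quoted in this section tally neatly; a quick back-of-the-envelope check (using the WMAP-era percentages cited here, which later surveys have revised slightly) shows where the “95 percent” in this section’s title comes from:

```python
# Tally of the universe's energy budget, using the percentages quoted
# in this article (WMAP-era estimates).
composition = {
    "ordinary matter (atoms)": 5,   # percent
    "dark energy": 74,              # percent
    "dark matter": 21,              # percent
}

dark_share = composition["dark energy"] + composition["dark matter"]
print(f"Total accounted for: {sum(composition.values())}%")  # 100%
print(f"'Dark' share of the universe: {dark_share}%")        # 95%
```

The two dark components together are the 95 percent science cannot yet explain.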

“We’ll see some amazing developments that help us explain dark matter, possibly in three to five years,” predicts Jonathan Bagger, vice provost for graduate and postdoctoral programs at Johns Hopkins and a professor of physics and astronomy. “The roof will blow off of science as we discover dark matter on Earth underground.”

But there’s still the infinite issue of what’s going on outside that roof. “We really don’t have much of a handle on dark energy,” concedes Bagger. Which means we still won’t know what constitutes about three-quarters of everything we “know.”

Why do we need sleep?

Ask anybody who has worked a double shift or spent the night cramming for an exam—a night without sleep is like a day without air. Humans crumble without eight or so hours of nightly shut-eye. When our sleep is regularly disrupted, we become much more sensitive to pain. Our organs and central nervous system become much less efficient. If we’re limited to four hours of sleep, our white blood cells create high levels of inflammation that lead to disease. If we log four hours of snooze time for each of six consecutive nights, we’ll develop insulin resistance, a condition that can lead to weight gain and, eventually, diabetes. Our mood suffers; only clinically depressed people improve their condition by sleeping less. It’s even worse for an insomniac rat, whose inability to maintain its immune system, metabolism, and body temperature proves fatal.

It’s nearly universal throughout the animal kingdom—virtually all critters, save perhaps a few outliers such as the shrew, need to sleep, hibernate, or otherwise shut down. But why?

Two prevailing theories argue that sleep either restores the energy we need to thrive, or it helps us adapt to threats. Both concepts turn on the idea that evolution made us sleep for a reason, which seems like a convenient fact to relay to your boss on the days you show up late. “Sleep leaves us vulnerable to predators in the wild. It’s a potentially dangerous state,” says Michael Smith, associate professor of psychiatry and behavioral medicine at Johns Hopkins. “So, evolutionarily, it had to serve important functions.”

Smith, who studies the link between sleep deprivation and pain, sees value in both theories. “It may be a fine-tuning system for the organism. It’s restorative,” he says. Some researchers testing that hypothesis search for a vital chemical or substance in the body that is either fully synthesized or broken down only during sleep. Others are looking for evidence that pathways in the brain could conceivably get the rest and recharging they need to fully function during the energy-consuming hours of the day.

But there’s evidence to counter that—and that’s where the adaptive theory comes into play. During rapid eye movement (REM) sleep, neurons in the brain fire as if we’re awake. The brain is far from rest, churning out the detailed dreams we tend to remember. Certain types of memories are stored and consolidated during REM sleep—it’s not for nothing, apparently, that we’re told we’ll make a better decision after sleeping on it. “REM might have something to do with wiring our nervous system when we’re very young, including when we’re fetuses,” adds Smith. All of which would help an emerging intelligent species adapt and survive better.

An adaptive mechanism aided by sleep could help make our brains better at learning and retaining things. But not everyone buys all of that. Having enough juice in the tank to make it through the next day might have more to do with it. “It may all come down to energy,” says Samer Hattar, an associate professor of biology at the Krieger School of Arts and Sciences. Hattar has studied sleep and circadian rhythms in various species for 18 years. “Whenever there is a limit on energy—the sun to make it and oxygen to fuel functions—animals tend to shut down. That fits with our ideas about circadian rhythms and why we developed them. It’s important to maximize the time when you can gather energy.” Bats, for example, sleep more than 20 hours a day, saving energy for the few hours of the day when the insects they feed on are out and about.

Hattar doesn’t believe the sleep-is-dangerous hype, either. Animals that aren’t moving are less likely to draw the attention of predators, he says. Sleep can be a form of hiding. “Sleep can be advantageous because when you don’t need to get energy, you can shut down and conserve what you have,” he says.

In addition to debating why we sleep, scientists still kick around how we regulate the amount of sleep we get and how the lack of it ties into the development of diseases. But science has only investigated sleep intensively for about 60 years—nowhere near long enough to figure out exactly why we spend one-third of our lives dead to the world.

How do we come to make decisions?

We’ve basically been programmed by evolution to make decisions that lead us to eat, drink, have sex, and seek other pleasures. The question is, how do we make decisions that require higher thinking?

Like all matters relating to gray matter, the answers aren’t clear. “We probably don’t know 99 percent about how the brain does what it does,” says Charles “Ed” Connor, a professor of neuroscience at the Johns Hopkins Brain Science Institute. Still, researchers are making great inroads in understanding things at the cellular and molecular levels, he adds. They understand that the brain collects information delivered by the senses, and when that data reaches a critical mass, parts of the prefrontal cortex act as judge and jury, leading us to come to a conclusion.

“We know from human behavioral experiments that context in decision making is very important,” says Veit Stuphorn, an assistant professor at the Brain Science Institute. “That implies that there is a computational process, and that decision making isn’t something preordained or deterministic.” A highly developed—and largely not understood—system that assigns levels of value to each option we mull over might explain it all. The brain calls up and computes stimuli based on what it values, and then pulls the situational trigger, thousands of times a day. But how does it get there? Which basic brain cells (called neurons) fire when? Chemically speaking, how does the brain assign a value to something? And how does it decide which of many competing values matters most at a given moment?

Stuphorn and his ilk are in the dark, feeling around with the help of experimental rhesus monkeys and hoping to grab onto some clues. They use a monkey’s thirst, computer images, and electrodes to see how he makes a choice between competing visual stimuli. “When we make a monkey thirsty, we increase the value of water to him, and that affects what decision he’ll make,” Stuphorn explains. His staff denies the monkey water for most of the previous day. If on the next day the monkey looks intently at a correctly colored light among two shown on a video screen, as he has been trained to do, he receives water through a tube. With the help of imaging technology, Stuphorn can identify the individual neurons the monkey uses to make simple choices as he makes them.

But a good monkey can only take research so far. Stuphorn says he needs to measure 10 to 20 neurons at a time to answer basic questions about variability in decision making—and the technology to do that is just now being honed. “We are only at the beginning,” says Stuphorn, speaking for neuroscientific research as a whole. “We can see that certain neurons represent certain variables, like the action values. But why do they show this activity pattern? We don’t understand the connections between neurons, so we need to record from multiple neurons simultaneously to find some answers. We’re just starting to establish how to do that.”

When will an earthquake strike?

In February 1975, a devastating earthquake measuring 7.3 on the Richter scale roiled Haicheng, a city in northeastern China that was home to 1 million people. Seismic disasters were nothing new to China, and in fact had become more than a bit of a specter during the 1960s, when hundreds of thousands of people in several Chinese cities had been killed. But the Haicheng event was different. Thousands of peasants and scores of their local leaders had been trained by Chairman Mao’s central government to monitor events in and around northeastern China for harbingers of earthquakes. In the months before the quake, peasant reporters noted changing water levels in wells, new geysers, snakes that abandoned hibernation and then died on frozen ground, and a scurrying horde of rats. When a foreshock hit Haicheng, authorities ordered it evacuated. When the ground started rolling in earnest five hours later, most of the city’s residents were too far away to watch 90 percent of the city crumble into dust.

Only 2,000 people died—many fewer than the 150,000 that ordinarily would have died had there been no evacuation. The success of the Chinese government awakened the seismic community. Had humanity finally figured out a way to predict the date and place of an earthquake? Federal officials in the United States began to push for more studies on how to do that in California, where the San Andreas Fault looms as a promise of calamity.

And then, in 1976, an earthquake that neglected to offer a foreshock as a warning flattened the Chinese city of Tangshan, killing 250,000. The work of Mao’s minions there had gone for naught. The California studies, predictably enough in hindsight, didn’t yield much either. When the U.S. Geological Survey used patterns of past earthquakes to predict when and where the Next Big One might strike, its estimates were off—not just by years but by decades. “We learned an awful lot about California’s geology but not much about predicting earthquakes,” says Peter Olson, professor of Earth and planetary sciences at the Krieger School.

Even though Haicheng offered hope for researchers during the 1970s and 1980s, the faith of seismologists in making predictions is much more shaky these days. They can say with some degree of probability where earthquakes will strike and how strong they might be. But matching the coordinates of time and place is still impossible. “For insurance purposes, we can statistically predict them, but not with any certainty regarding time,” says Olson. Science still lacks the basic understanding of how the Earth’s crust churns and maneuvers. “With plate tectonics, we understand the geometry but not the dynamics,” says Olson. “We don’t understand the pressures and forces and how they play into seismic activity. We need to learn about the physical properties of the Earth’s crust 100 kilometers deep, and we’re not there yet.”
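To give a flavor of the statistical prediction Olson describes, here is a toy sketch—my illustration, not a method attributed to Olson or the USGS—that treats major quakes on a fault as a memoryless Poisson process with an assumed average recurrence interval:

```python
import math

def quake_probability(mean_recurrence_years: float, window_years: float) -> float:
    """Probability of at least one large quake in the window, assuming
    quakes arrive as a Poisson process (independent, memoryless events)."""
    rate = 1.0 / mean_recurrence_years           # expected quakes per year
    return 1.0 - math.exp(-rate * window_years)  # P(at least one event)

# A hypothetical fault segment producing a major quake every ~150 years on average:
p30 = quake_probability(150, 30)
print(f"Chance of a major quake in the next 30 years: {p30:.0%}")  # about 18%
```

Notice what the model can and cannot say: it yields odds over a window—the kind of figure insurers can use—but no date, which is exactly the limitation Olson describes.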

As for the animals-know-first theory that the Chinese used as part of their prediction method—there is some evidence for it. One recent study says that the common toad could serve as the earthquake’s canary in a coal mine. Somehow, though, it’s hard to imagine that scientists would call for the evacuation of thousands of people because of some jumpy amphibians.

How many people on the planet is too many?

In 1798, an Anglican clergyman and scholar named Thomas Robert Malthus published An Essay on the Principle of Population, a dim prediction of population growth. The world’s skein of humanity was approaching 1 billion then, and educated people were starting to wonder how many needy, greedy humans Earth could hold before food and other resources ran out—what demographers and public health scientists now term carrying capacity. Malthus’ formulation was dour: The world’s food supply grows arithmetically, by addition, while human populations boom with the fecundity of geometry, by multiplication. In the absence of famine and disease, a booming population would exceed the world’s capacity to provide.
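Malthus’ contrast between additive and multiplicative growth can be sketched in a few lines; the starting values and rates below are arbitrary assumptions chosen only to show the shape of his argument:

```python
# Arithmetic (linear) food growth vs. geometric (exponential) population
# growth, per Malthus. Units and rates are illustrative, not historical.
food = 100.0        # food supply, in arbitrary "rations"
population = 50.0   # people, each needing one ration

FOOD_INCREMENT = 10.0  # rations added per generation (arithmetic growth)
GROWTH_RATE = 1.3      # population multiplier per generation (geometric growth)

generation = 0
while population <= food:
    generation += 1
    food += FOOD_INCREMENT
    population *= GROWTH_RATE

print(f"Population first outstrips food in generation {generation}")  # generation 4
```

Whatever numbers you pick, multiplication eventually overtakes addition—that inevitability, not any particular timetable, was Malthus’ point.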

Since then, the world has added 6 billion people, with the prospect of 2 billion more by 2050. For most of the past century, Malthus has been derided as the Grim Reaper of demography, an exemplar of cynicism and limited thinking. Humanity, after all, has overrun its big blue marble of a planet, and yet, notable numbers of people live in the lap of luxury, free from worries about food and, largely, premature death from diseases. Human ingenuity has circumvented Malthus.

Or has it? One billion people, one-seventh of the global count, don’t get enough to eat each day. “If you think 1 billion underfed people is OK, then I guess we can support the number of people we have now,” says Robert Lawrence, a professor at the Bloomberg School of Public Health’s Center for a Livable Future. “But if the term carrying capacity truly means feeding all the world’s people, then we’re not there at all.”

Such observations throw a monkey wrench into attempts to calculate a reasonable estimate of how many people Earth can hold. Complicating matters further is that another 1 billion people worldwide are overweight or obese, with many of them living in areas where agriculture supports a huge herd of livestock—hardly the most efficient users of food, Lawrence adds. If the Western diet were different, could the Earth take care of those who don’t get enough to eat, and maybe more of those to come? Trends are running counter to that thinking. Much of the rest of the developing world, including China and India, is beginning to mimic our eating habits, seeking out more meat and dairy foods. Lawrence estimates that if everyone ate like we do, the world could support about 4 billion people—that’s 3 billion people ago.

Phenomena relating to climate change and peak oil, the point at which total fossil fuel supplies begin to diminish, make coming up with a firm maximum number of well-supported earthlings impossible. (“They’re wild cards,” says Lawrence.) Environmental degradation wrought from digging for dwindling energy supplies, metals, and minerals limits the amount of habitat humanity can use for farming or grazing. And the increasing use of water threatens food supplies, too. Nearly 70 percent of the world’s freshwater withdrawals currently go to agriculture. But with growing development and prosperity, much of that supply will be threatened. Will limiting agricultural growth act as a brake on population, as Malthus theorized?

How might the future look, if population continues to rise and humanity doesn’t get a handle on its pressing climate and resources problems? Lawrence says that depleted, environmentally tenuous countries like Haiti and Niger, where a majority live in squalor, offer clues. “Their populations have outstripped the country’s ability to support them,” says Lawrence. “They exist because of the assistance of the rest of the world. These are examples of the Malthusian dilemma, at the country level.”

Although some demographers believe that aging populations in the West and in China and Japan will slow worldwide growth sometime around 2035 to 2050, Lawrence says we’re more likely to see decaying and empty landscapes surrounded by larger, more populous ones, not an overall drop in the rate of human propagation. What happens after that is anybody’s guess—as is how long we’ll be able to support a planet teeming with folks.

Two centuries’ worth of history have proved Malthus wrong—though in the long run, the old man may end up having the last wicked laugh.

Are we alone?

As scientists, starting in earnest in the 1960s, awakened to the possibility of an Earth overrun, they began plotting a way out. Way, way out. Astrophysicists turned their imaginations toward looking into space for celestial bodies that would be kind to human habitation. After 50 years of space travel, they have begun to alight, figuratively speaking, on exoplanets—planets that circle stars other than our sun. Within the past year and a half, NASA’s Kepler telescope mission has transmitted images and information suggesting that millions of exoplanets exist in our galaxy alone.

“Kepler has found that there are lots of ‘Earths’ out there in terms of mass,” says Richard Conn Henry, a professor of physics and astronomy at Johns Hopkins. “The question is, do any have an oxygen atmosphere like ours does?” Such an atmosphere would create the possibility that humans could one day relocate to a new planet with resources of its own. But the possible existence of Earth-like exoplanets raises another question: What if there are planets or moons out there that are already inhabited? What if we’re not alone?

Already we know that Europa, one of the moons of Jupiter, features a liquid subsurface ocean that could, conceivably, support life. And Titan, a methane-enshrouded moon of Saturn, offers the kind of chemical disequilibrium between surface and atmosphere that excites scientists searching for extraterrestrial life. Henry is one of several investigators around the globe who will use the Allen Telescope Array, financed by Microsoft co-founder Paul Allen, in a search for radio waves from extraterrestrial civilizations. He says he has little doubt that life is Out There Somewhere, though the circumstances of Earth, a planet on the edge of its galaxy, away from central black holes and bombarding radiation, might mean that our kind of life is relatively rare. “I doubt very seriously that the universe is Manhattan,” says Henry.

Still, many astrobiologists and cosmologists believe that Earth may be the universe’s New York, New York; if we can make it here, life can make it anywhere. Is any of it hyperintelligent enough to broadcast its knowledge throughout the cosmos?

Henry is hopeful that beings more intelligent than we are—perhaps the products of a planet with a 10-billion-year history of evolving life, as opposed to Earth’s 4 billion years—will send us a message, possibly after watching from their planet as the Earth and moon travel across the light of the sun. If they did contact us, “we’d be receiving an Encyclopaedia Galactica from beings that likely have been around for perhaps 100 million years,” says Henry. “The analogy I make is if Socrates had a phone on his desk, and one of our experts now who knows ancient Greek magically called him, what would Greek civilization end up with? McDonald’s. We’d basically wreck their civilization.” We’d be the Greeks this time around.

Emulating an advanced culture would ruin ours. Despite that glum prospect, Henry says he’d welcome a communiqué from the other side of the galaxy with open ears—a stance that epitomizes the spirit of scientific inquiry. “I’d have mixed feelings about it all,” he says. “But I’d listen.”

Michael Anft is a senior writer for Johns Hopkins Magazine.

Related reading:

The Miracle of Science?

