Snow and rain tug on earthquake faults in California

Winter weather brings seismic tremors. A new study reveals how water buildup and runoff throughout the year can increase stress along faults in California, triggering small earthquakes.

“This kind of observation is extremely important to constrain our models of earthquakes,” says Jean-Philippe Avouac, a geologist at Caltech who was not involved in the study. Improved models could ultimately help scientists better forecast seismic activity.

Snow and rain compress mountain ranges in Northern California by several millimeters during wet winter months. But with the weight of the water gone during the dry summers, the landscape lifts back up. This seasonal squeeze and release of the terrain puts stress on nearby faults, which can set off more small earthquakes.
Researchers compared observations of ground movement from 661 GPS stations in California with the state’s earthquake record from 2006 to 2014. The landscape’s seasonal, water-induced rise and fall corresponded to periodic increases in small quakes, scientists report in the June 16 Science. Most of the quakes were between magnitude 2 and 3 — so small that they wouldn’t have been widely felt, says study coauthor Christopher Johnson, a seismologist at the University of California, Berkeley.
“It’s not like there’s an earthquake season,” Johnson says. Some faults experience more significant stress increases when the land is compressed, others when the land rebounds, depending on the fault orientation. So different faults exhibit more small earthquakes at different times of year. For instance, faults along the Sierra Nevada’s eastern edge have more tremors in late winter and early spring. But the San Andreas Fault system to the west sees more quakes in late summer and early fall, when water levels have dropped and the land is rebounding.
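
For readers who like to see the mechanics, here is a rough sketch in Python of the kind of comparison involved, with entirely made-up numbers standing in for the GPS and earthquake catalogs (an illustration of the idea, not the study's actual analysis):

```python
# Minimal sketch (not the study's pipeline): correlate a seasonal GPS
# loading signal with monthly small-quake counts. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(12 * 9)  # monthly samples spanning 2006-2014

# Synthetic winter loading: the land sinks a few millimeters under snow
# and rain each winter and rebounds each summer.
loading_mm = 3.0 * np.cos(2 * np.pi * months / 12) + rng.normal(0, 0.5, months.size)

# Synthetic quake counts whose rate peaks a few months after peak loading.
rate = 20 + 4 * np.cos(2 * np.pi * (months - 3) / 12)
quakes = rng.poisson(rate)

# Find the time shift at which seasonal loading best lines up with seismicity.
best_lag = max(
    range(12),
    key=lambda lag: np.corrcoef(np.roll(loading_mm, lag), quakes)[0, 1],
)
print(f"Quake counts best track the loading signal shifted by {best_lag} months")
```

A lag like this, recovered fault by fault, is what lets different faults show their seasonal peaks at different times of year.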

“We’re not yet at the point where we could start applying this knowledge to the hazard forecast,” Johnson says. But the new findings are helping geologists better understand what forces can trigger rumbles under our feet.

Rising temperatures may mean fewer passengers on airplane flights

As if air travel weren’t annoying enough, new research suggests that global warming could force planes to carry fewer passengers to get off the ground. While a little more legroom might sound good, it could make flying more expensive.

Researchers examined the impact of rising temperatures on five types of commercial planes flying out of 19 of the world’s busiest airports. In the coming decades, an average of 10 to 30 percent of flights that take off during the hottest time of day could face weight restrictions.

That’s because the molecules in warmer air are more spread out, so the air is less dense and generates less lift under a plane’s wings as it speeds down the runway. A plane must therefore be lighter to take off. In some cases, a Boeing 737-800 would have to shed more than 700 pounds — several passengers’ worth of weight — the researchers report online July 13 in Climatic Change.
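
The underlying scaling is simple to sketch. Treating air as an ideal gas at fixed pressure, density falls in proportion to one over absolute temperature, and lift scales with density. Here is a back-of-the-envelope calculation in Python, using rounded, hypothetical numbers of my own; real takeoff limits also depend on runway length, wind and engine thrust, so actual restrictions can be far smaller:

```python
# Back-of-the-envelope sketch: lift scales with air density, and for an
# ideal gas at fixed pressure density scales as 1/T (absolute temperature).
def max_takeoff_weight(weight_kg, t_ref_c, t_hot_c):
    """Scale a reference maximum takeoff weight by the air-density ratio."""
    return weight_kg * (t_ref_c + 273.15) / (t_hot_c + 273.15)

# Hypothetical round numbers for illustration: a 79,000 kg reference
# weight and a runway 4 degrees Celsius hotter than the reference day.
w = 79_000
print(f"Weight to shed: about {w - max_takeoff_weight(w, 30.0, 34.0):.0f} kg")
```

Even this crude estimate shows why a few extra degrees matter on an already-hot afternoon.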

Fire ants build towers with three simple rules

When faced with rushing floodwaters, fire ants are known to build two types of structures. A quickly formed raft lets the insects float to safety. And once they find a branch or tree to hold on to, the ants might form a tower up to 30 ants high, with eggs, brood and queen tucked safely inside. Neither structure requires a set of plans or a foreman ant leading the construction, though. Instead, both structures emerge from three simple rules:

If you have an ant or ants on top of you, don’t move.
If you’re standing on top of ants, keep moving a short distance in any direction.
If you find a space next to ants that aren’t moving, occupy that space and link up.
“When in water, these rules dictate [fire ants] to build rafts, and the same rules dictate them to build towers when they are around a stem [or] branch,” notes Sulisay Phonekeo of the Georgia Institute of Technology in Atlanta. He led the new study, published July 12 in Royal Society Open Science.
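
The rules read almost like pseudocode, and a toy simulation shows how they concentrate ants around something graspable. Here is a deliberately crude one-dimensional sketch of my own (not the researchers' model): ants pile up along a line, only unburdened top ants wander, and any ant that ends up beside the central support links on and stops.

```python
# Toy version of the three rules (my simplification, not the paper's model).
import random

random.seed(1)
WIDTH, N_ANTS, STEPS = 21, 60, 5000
support = WIDTH // 2               # the central rod the ants can grip
stacks = [0] * WIDTH               # number of ants piled at each position
for _ in range(N_ANTS):            # scatter ants on the ground to start
    stacks[random.randrange(WIDTH)] += 1

def linked(x):
    # Rule 3 (toy form): ants at or next to the support link up and freeze.
    return abs(x - support) <= 1

for _ in range(STEPS):
    x = random.randrange(WIDTH)
    # Rule 1 is implicit: only the topmost ant of a pile ever moves.
    # Rule 2: an ant standing on other ants wanders a short distance...
    if stacks[x] > 1 and not linked(x):
        nx = min(max(x + random.choice([-1, 1]), 0), WIDTH - 1)
        stacks[x] -= 1
        stacks[nx] += 1            # ...and stays put if it ends up linked.

print("Pile heights (support in the middle):")
print(" ".join(str(h) for h in stacks))
```

Even this caricature captures the main point: no ant directs traffic, yet the colony accumulates wherever neighbors have stopped moving.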

To study the fire ants’ construction capabilities, Phonekeo and his Georgia Tech colleagues collected ants from roadsides near Atlanta. While covered in protective gear, the researchers dug up ant mounds and placed them in buckets lined with talc powder so the insects couldn’t climb out. Being quick was a necessity because “once you start digging, they’ll … go on attack mode,” Phonekeo says. The researchers then slowly flooded the bucket until the ants floated out of the dirt and formed a raft that could be easily scooped out.

In the lab, the researchers placed ants in a dish with a central support, then filmed the insects as they formed a tower. The support had to be covered with Teflon, which the ants could grab onto but not climb without help. Over about 25 minutes, the ants would form a tower stretching up to 30 mm high. (The ants themselves are only 2 to 6 mm long.)
The towers looked like the Eiffel Tower or the end of a trombone, with a wide base and narrow top. And the towers weren’t static, like rafts of ants are. Instead, videos showed the towers constantly sinking and being rebuilt.

Peering into the transparent Petri dish from below revealed that the ants build tunnels in the base of a tower, which they use to exit the base before climbing back up the outside.

“The ants clear a path through the ants underneath much like clearing soil,” Phonekeo says. Ants may be using the tunnels to remove debris from inside the towers. And the constant sinking and rebuilding may give the ants a chance to rest without the weight of any compatriots on their backs, he says.

To find out what was happening inside the tower, the researchers fed half their ants a liquid laced with radioactive iodide and then filmed the insects using a camera that captured X-rays. In the film, radioactive ants appeared as dark dots, and the researchers could see that some of those dots didn’t move, but others did.

The team then turned to the three rules that fire ants follow when building a raft and realized that they also applied to towers. But there was also a fourth rule: A tower’s stability depends on the ants that have attached themselves to the rod. The top row of ants on the rod isn’t stable unless the ants form a complete ring. So to get a taller tower, there needs to be a full ring of ants gripping the rod and one another.

That such simple rules could form two completely different structures is inspiring to Phonekeo. “It makes me wonder about the possibilities of living structures that these ants can build if we can design the right environment for them.”

What Curiosity has yet to tell us about Mars

After five years on Mars, the Curiosity rover is an old pro at doing science on the Red Planet. Since sticking its landing on August 5, 2012, NASA’s Little Robot That Could has learned a lot about its environs.

Its charge was simple: Look for signs that Gale crater, a huge impact basin with a mountain at its center, might once have been habitable (for microbes, not Matt Damon). Turning over rocks across the crater, the rover has compiled evidence of ancient water — a lake fed by rivers once occupied the crater itself — and organic compounds and other chemicals essential for life.
NASA has extended the mission through October 2018. And there’s still plenty of interesting chemistry and geology to be done. As the robot continues to climb Mount Sharp at the center of the crater, Curiosity will explore three new rock layers: one dominated by the iron mineral hematite, one dominated by clay and one with lots of sulfate salts.

So, here are four Martian mysteries that Curiosity could solve (or at least dig up some dirt on).

Does Mars harbor remnants of ancient life?
Curiosity’s Mars Hand Lens Imager can take microscopic images, but preserved cells or microfossils would still have to be pretty big for the camera to see them. What the rover can do is detect the building blocks for those cells with its portable chemistry lab, Sample Analysis at Mars. The lab has already picked up chlorobenzene, a small organic molecule with a carbon ring, in ancient mud rock. Chains of such molecules go into making things like cell walls and other structures.
“We’ve only found simple organic molecules so far,” says Ashwin Vasavada, a planetary scientist at NASA’s Jet Propulsion Laboratory who leads Curiosity’s science team. Detective work in chemistry labs here on Earth could shed light on whether bigger organic molecules on Mars’ surface might degrade into smaller ones like chlorobenzene.

Curiosity could still turn up intact, heavier-duty carbon chains. The rover carries two sets of cups to do chemistry experiments, one dry and one wet. The latter contains chemical agents designed to draw out hard-to-find organic compounds. None of the wet chemistry cups have yet been used. A problem with Curiosity’s drill in December 2016 has held up the search for organics, but possible solutions are in the works.
How did Mars go from warm and wet to cold and dry?
That’s one of the million-dollar questions about the Red Planet. Curiosity has piled on evidence that Mars was once a much more hospitable place. Around 3.5 billion years ago, things changed.

The going theory is that particles from the sun stripped away much of Mars’ atmosphere (and continue to do so) when the planet lost most of its protective magnetic field. “That caused the climate to change from one that could support water at the surface to the dry planet it is today,” Vasavada explains. Curiosity found a higher ratio of heavy elements in the current atmosphere, adding credence to this argument — presumably the lighter elements were the first to go.

There’s also a chance that as the rover hikes up Mount Sharp it could capture regional evidence of the wet-to-dry transition. So far, Curiosity has investigated rocks from the tail end of the wet period. The new geologic layers it will encounter are younger.

“Hopefully we’ll be able to get some insight by looking at these rocks into some of the global changes happening that maybe no longer permitted a lake to be present on the surface,” says Abigail Fraeman, a research scientist at NASA’s Jet Propulsion Laboratory.
Does Mars really have flowing water today?
Some mineral salts absorb water and release it as liquid when they break down at certain temperatures. The Curiosity team looked for the bursts of water that might result from such a process in Gale crater and came up empty.

But in 2015, the Mars Reconnaissance Orbiter snapped images of shifting salt streaks indicative of actively flowing water. The images are the best evidence yet that liquid water might not be a thing of the past.

Mount Sharp has similar dark streaks, and Curiosity periodically takes pictures of them. “It’s something we keep an eye on,” Vasavada says. If the streaks change in a way that might indicate that they’re moving, the rover could corroborate evidence of modern-day water on Mars. But so far, the streaks haven’t budged.

Where does the methane in Mars’ atmosphere come from?
On Earth, microbes are big methane producers, but on Mars, methane’s origins are still unclear. Early on, Curiosity detected extremely low levels of the gas in Mars’ atmosphere. This baseline appears to fluctuate subtly over the course of the Martian year, perhaps driven by temperature or pressure. Curiosity continues to monitor methane levels, and more data and modeling could help pinpoint what’s behind the annual ups and downs.

At the end of 2014, researchers noticed a spike 10 times the baseline level. Scientists suspect that methane sticks around in the air on Mars for only about 300 years. So, the methane spike must be relatively new to the atmosphere. “That doesn’t necessarily mean it’s being actively created,” Vasavada says. “It could be old methane being released from underground.” Minerals interacting with subterranean water sometimes make methane gas.

Mars’ methane could also be the product of planetary dust particles broken down on the surface. And yet another possible explanation is biological activity. “We have zero information to know whether that’s happening on Mars, but we shouldn’t exclude it as an idea,” says Vasavada. So, Martian life is unlikely but can’t be completely ruled out.

These spiders crossed an ocean to get to Australia

If you look at a map of the world, it’s easy to think that the vast oceans would be effective barriers to the movement of land animals. And while an elephant can’t swim across the Pacific, it turns out that plenty of plants and animals — and even people — have unintentionally floated across oceans from one continent to another. Now comes evidence that tiny, sedentary trapdoor spiders made such a journey millions of years ago, taking them from Africa all the way across the Indian Ocean to Australia.

Moggridgea rainbowi spiders from Kangaroo Island, off the south coast of Australia, are known as trapdoor spiders because they build a silk-lined burrow in the ground with a secure-fitting lid, notes Sophie Harrison of the University of Adelaide in Australia. The burrow and trapdoor provide the spiders with shelter and protection as well as a means for capturing prey. This setup also means that the spiders don’t really need to travel farther than a few meters over the course of a lifetime.

There was evidence, though, that the ancestors of these Australian spiders might have traveled millions of meters to get to Australia — from Africa. That isn’t as odd as it might seem, since Australia used to be connected to other continents long ago in the supercontinent Gondwana. And humans have been known to transport species all over the planet. But there’s a third option, too: The spiders might have floated their way across an ocean.

To figure out which story is most likely true, Harrison and her colleagues looked at the spider’s genes. They turned to six genes that have been well-studied by spider biologists seeking to understand relationships between species. The researchers looked at those genes in seven M. rainbowi specimens from Kangaroo Island, five species of Moggridgea spiders from South Africa and seven species of southwestern Australian spiders from the closely related genus Bertmainius.

Using that data, the researchers built a spider family tree that showed which species were most closely related and how long ago their most recent common ancestor lived. M. rainbowi was most closely related to the African Moggridgea spiders, the analysis revealed. And the species split off some 2 million to 16 million years ago, Harrison and her colleagues report August 2 in PLOS ONE.
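
Estimates like that 2-million-to-16-million-year window come from molecular-clock arithmetic. In its simplest form (sketched below with hypothetical numbers, not the study's calibrated analysis), both lineages accumulate substitutions independently after they split, so the divergence time is the genetic distance divided by twice the substitution rate:

```python
# Simplest-form molecular clock (hypothetical numbers, not the study's
# calibrated analysis): after a split, each lineage racks up substitutions
# independently, so time = genetic distance / (2 * substitution rate).
def divergence_time_mya(distance, rate_per_site_per_myr):
    return distance / (2 * rate_per_site_per_myr)

# e.g., 9% sequence divergence at 0.45% substitutions per site per million years
print(divergence_time_mya(0.09, 0.0045), "million years")  # -> 10.0
```

The wide 2-to-16-million-year spread reflects how uncertain those substitution rates are.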

The timing of the divergence was long after Gondwana split up. And it was long before either the ancestors of Australia’s aboriginal people or later Europeans showed up on the Australian continent. While it may be improbable that a colony of spiders survived a journey of 10,000 kilometers across the Indian Ocean, that is the most likely explanation for how the trapdoor spiders got to Kangaroo Island, the researchers conclude.

Such an ocean journey would not be unprecedented for spiders in this genus, Harrison and her colleagues note. There are three species of Moggridgea spiders that are known to live on islands off the shore of the African continent. Two live on islands that were once part of the mainland, and they may have diverged at the same time that their islands separated from Africa. But the third, M. nesiota, lives on the Comoros, which are volcanic islands. The spiders must have traveled across 340 kilometers of ocean to get there.
These types of spiders may be well-suited to ocean travel. If a large swath of land washes into the sea, laden with arachnids, the spiders may be able to hide out in their nests for the journey. Plus, they don’t need a lot of food, can resist drowning and even “hold their breath” and survive on stored oxygen during periods of temporary flooding, the researchers note.

Seeing one picture at a time helps kids learn words from books

We’re going through a comic book phase at my house. Since lucking into the comics stash at the library, my 4-year-old refuses any other literary offering. Try as I might to rekindle her love of Rosie Revere, my daughter shuns that scrappy little engineer for Superman every single night.

I know that comic fans abound, but I’ll admit that I get a little lost reading the books. The multi-paneled illustrations, the jumpy story lines and the fact that my daughter skips way ahead make it hard for me to engage. And I imagine that for a preliterate preschooler, that confusion is worse.

There’s evidence for this idea (although it won’t help me force my daughter to choose girl-power science lit over Superman). A recent study found that kids better learn new vocabulary from books when there’s just one picture to see at a time.

Psychologist Jessica Horst and colleague Zoe Flack, both of the University of Sussex in England, read stories to thirty-six 3½-year-olds. These were specially designed storybooks, with pages as big as printer paper. And sprinkled into the text and reflected in the illustrations were a few nonsense words: An inverted, orange and yellow slingshot that mixed things, called a tannin, and a metal wheel used like a rolling pin, called a sprock.

The researchers wanted to know under which reading conditions kids would best pick up the meaning of the nonsense words. In some tests, a researcher read the storybook that showed two distinct pictures at a time. In other tests, only one picture was shown at a time. Later, the kids were asked to point to the “sprock,” which was shown in a separate booklet among other unfamiliar objects.

Kids who saw just one picture at a time were more likely to point to the sprock when they saw it again, the researchers found. The results, published June 30 in Infant and Child Development, show how important pictures can be for preliterate kids, says Horst.

“As parents, it’s easy to forget that children do not look at the written text until they themselves are learning to read,” she says. (This study shows how infrequently preschoolers look at the words.) That means that kids might focus on pictures that aren’t relevant to the words they’re hearing, a mismatch that makes it harder for them to absorb new vocabulary.
Does this mean parents ought to trash all books with multiple pictures on a page? Of course not. Horst and Flack found that for such books, gesturing toward the relevant picture got the word-learning rate back up. That means that parents ought to point at Wonder Woman’s Lasso of Truth or wave at the poor varlet that Shrek steals a lunch from. (Shrek!, the book by William Steig, contains delightful vocabulary lessons for children and adults alike.)

Those simple gestures, Horst says, will help you and your child “literally be on the same page.”

Rings of Uranus reveal secrets of the planet’s moon Cressida

If you could put Uranus’ moon Cressida in a gigantic tub of water, it would float.

Cressida is one of at least 27 moons that circle Uranus. Robert Chancia of the University of Idaho in Moscow and colleagues calculated Cressida’s density and mass using variations in an inner ring of the planet as Uranus passed in front of a distant star. The team found that the density of the moon is 0.86 grams per cubic centimeter and its mass is 2.5×10¹⁷ kilograms. The results, reported August 28 on arXiv.org, are the first to reveal details about the moon. Knowing its density and mass helps researchers determine if and when Cressida might collide with another of Uranus’ moons and what will become of both of them.
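
The bathtub claim is just that density figure at work: 0.86 grams per cubic centimeter is less than water’s 1. The mass and density together also pin down Cressida’s rough size, as a quick calculation shows (assuming a sphere, though small moons are rarely perfect spheres):

```python
# Back out Cressida's rough size from the reported mass and density,
# assuming a sphere (a simplification of my own, for illustration).
import math

mass_kg = 2.5e17
density_kg_m3 = 860.0  # 0.86 g/cm^3; water is 1000, hence the bathtub quip
volume_m3 = mass_kg / density_kg_m3
radius_km = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3) / 1000
print(f"Implied radius: about {radius_km:.0f} km")  # roughly 41 km
```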

Voyager 2 discovered Cressida and several other moons when the spacecraft flew by Uranus in 1986. Those moons, and two discovered later, orbit within 20,000 kilometers of Uranus and are the most tightly packed in the solar system.

Such close quarters put the moons on collision courses. Based on the newly calculated mass and density of Cressida, simulations suggest it will slam into another moon, Desdemona, in a million years.

Cressida’s density suggests it is made of water ice with some contamination by a dark material. If the other moons have similar compositions, the moon collisions may happen in the more distant future than researchers thought. Determining what the moons are made of will also reveal their ultimate fate after a collision: whether they merge, bounce off each other or shatter into millions of pieces.

Two artificial sweeteners together take the bitter out of bittersweet

Artificial sweeteners can have a not-so-sweet side — a bitter aftertaste. The flavor can be such a turnoff that some people avoid the additives entirely. Decades ago, people noticed that for two artificial sweeteners — saccharin and cyclamate, which can taste bitter on their own — the bitterness disappears when they’re combined. But no one really knew why.

It turns out that saccharin doesn’t just activate sweet taste receptors, it also blocks bitter ones — the same bitter taste receptors that cyclamate activates. And the reverse is true, too. The result could make your bitter batter better. And it could help scientists find the next super sweetener.

Saccharin is 300 times as sweet as sugar, and cyclamate is 30 to 40 times as sweet as the real deal. Saccharin has been in use since its discovery in 1879 and is best known as Sweet’N Low in the United States. Cyclamate was initially approved in the United States in 1951, but banned as a food additive in 1969 over concerns that it caused cancer in rats. It remains popular elsewhere, and is the sweetener behind Sweet’N Low in Canada.

In the 1950s, scientists realized that the combination of the two (sold in Europe under brand names such as Assugrin) wasn’t nearly as bitter as either sweetener alone.

But for the more than 60 years since, scientists didn’t know why the combination of cyclamate and saccharin was such a sweet deal. That’s in large part because scientists simply didn’t know a lot about how we taste. The receptors for bitter flavors were only discovered in 2000, explains Maik Behrens, a molecular biologist at the German Institute of Human Nutrition Potsdam-Rehbruecke.

(Now we know that there are 25 potential bitter taste receptors, and that people express them at varying levels. That’s why some people have strong responses to bitter flavors such as those in coffee, while others might be more bothered by the bitter aftertaste of the sweetener put in it.)

Behrens and his colleagues Kristina Blank and Wolfgang Meyerhof developed a way to screen which bitter taste receptors saccharin and cyclamate were hitting, to figure out why the combination is more palatable than either one alone. The researchers inserted the genes for the 25 subtypes into human kidney cells (an easier feat than working with real taste cells). Each gene included a marker that glowed when the receptors were stimulated.
Previous studies of the two sweeteners had shown that saccharin alone activates the subtypes TAS2R31 and TAS2R43, and cyclamate tickles TAS2R1 and TAS2R38. Stimulating any of those four taste receptor subtypes will leave a bitter taste in your mouth.

But cyclamate doesn’t just activate the two bitter receptors, Behrens and his colleagues showed. It blocks TAS2R31 and TAS2R43 — the same receptors that saccharin stimulates. So with cyclamate around, saccharin can’t get at the bitter taste subtypes, Behrens explains. Bye-bye, bitter aftertaste.

The reverse was true, too: Saccharin blocked TAS2R1 — one of the bitter receptors that cyclamate activates. In this case, though, the amount of saccharin required to block the receptors that cyclamate activates would have bitter effects on its own. So it’s probably the actions of cyclamate at saccharin’s bitter receptors that help block the bitterness, Behrens and his colleagues report September 14 in Cell Chemical Biology.
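
The logic of the cross-blocking is simple enough to lay out explicitly. Below is a schematic sketch using the receptor assignments described above; the all-or-nothing activation is my simplification, since real receptor responses are graded and dose-dependent (which is exactly why saccharin’s block of TAS2R1 doesn’t help in practice):

```python
# Schematic of the cross-blocking result. All-or-nothing activation is a
# simplification; real responses depend on concentration.
ACTIVATES = {
    "saccharin": {"TAS2R31", "TAS2R43"},
    "cyclamate": {"TAS2R1", "TAS2R38"},
}
BLOCKS = {
    "saccharin": {"TAS2R1"},              # blocks one of cyclamate's receptors
    "cyclamate": {"TAS2R31", "TAS2R43"},  # blocks both of saccharin's
}

def bitter_receptors_hit(mixture):
    """Return the bitter receptor subtypes a mixture stimulates, net of blocking."""
    activated = set().union(*(ACTIVATES[s] for s in mixture))
    blocked = set().union(*(BLOCKS[s] for s in mixture))
    return activated - blocked

print(sorted(bitter_receptors_hit(["saccharin"])))               # ['TAS2R31', 'TAS2R43']
print(sorted(bitter_receptors_hit(["saccharin", "cyclamate"])))  # ['TAS2R38']
```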

The researchers also tested whether cyclamate and saccharin together could be stronger activators of sweet receptors than either chemical alone. The answer was no: The sweet sides of saccharin and cyclamate stayed the same in combination.

“This addresses a longstanding puzzle why mixing two different sweeteners changes the aftertaste,” says Yuki Oka, a neuroscientist at Caltech in Pasadena. “They are interrupting each other at the receptor level.” It’s not too surprising that a sweetener might block some receptor subtypes and stimulate others, he notes, but that saccharin and cyclamate have such clear compatibilities is a lucky chance. “Mechanism-wise, it’s surprisingly beautiful.”

Oka notes that no actual tongues tasted artificial sweeteners in these experiments. The tests took place on cells in dishes. But, he says, because the researchers used the human bitter taste receptors, it’s likely that the same thing happens when a diet drink hits the human tongue.

Behrens hopes the cell setup they used for this experiment can do more than solve an old mystery. By using cells in lab tests to predict how different additives might interact, he notes, scientists can develop sweeteners with fewer bitter effects. The technique was developed with funding from a multinational group of researchers and companies — many of which will probably be very interested in the sweet results. And on the way to sweeteners of the future, scientists may be able to resolve more taste mysteries of the past.

Plate tectonics started at least 3.5 billion years ago

Plate tectonics may have gotten a pretty early start in Earth’s history. Most estimates hold that the large plates that make up the planet’s outer crust began shifting around 3 billion years ago. But a new study in the Sept. 22 Science that analyzes titanium in continental rocks asserts that plate tectonics began 500 million years earlier.

Nicolas Greber, now at the University of Geneva, and colleagues suggest that previous studies got it wrong because researchers relied on chemical analyses of silicon dioxide in shales, sedimentary rocks that bear the detritus of a variety of continental rocks. These rocks’ silicon dioxide composition can give researchers an idea of when continental rocks began to diverge in makeup from oceanic rocks as a result of plate tectonics.

But weathering can wreak havoc on the chemical makeup of shales. To get around that problem, Greber’s team turned to a new tool: the ratios of two titanium isotopes, forms of the same element that have different masses. The proportion of titanium isotopes in the rocks is a useful stand-in for the difference in silicon dioxide concentration between continental and oceanic rocks, and isn’t so easily altered by weathering. Those data helped the team estimate that continental rocks — and therefore plate tectonics — were already going strong by 3.5 billion years ago.
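
In spirit, the approach is a two-endmember mixing calculation: a shale’s titanium isotope signature records how much felsic, continent-style rock versus mafic, ocean-style rock fed the sediment. Here is a schematic version with invented endmember values (not the paper’s calibration):

```python
# Schematic two-endmember mixing (illustrative endmember values, not the
# study's calibration): a shale's Ti-isotope signature as a blend of
# mafic (oceanic-style) and felsic (continental-style) source rocks.
def felsic_fraction(delta_shale, delta_mafic, delta_felsic):
    """Linear unmixing: solve shale = f*felsic + (1-f)*mafic for f."""
    return (delta_shale - delta_mafic) / (delta_felsic - delta_mafic)

# Hypothetical delta-49Ti values (per mil), for illustration only.
f = felsic_fraction(delta_shale=0.25, delta_mafic=0.0, delta_felsic=0.55)
print(f"Inferred felsic contribution: {f:.0%}")  # ~45%
```

Track that inferred felsic fraction through time in progressively younger shales, and you get an estimate of when continent-building, and hence plate tectonics, was underway.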

Quantum mysteries dissolve if possibilities are realities

When you think about it, it shouldn’t be surprising that there’s more than one way to explain quantum mechanics. Quantum math is notorious for incorporating multiple possibilities for the outcomes of measurements. So you shouldn’t expect physicists to stick to only one explanation for what that math means. And in fact, sometimes it seems like researchers have proposed more “interpretations” of this math than Katy Perry has followers on Twitter.

So it would seem that the world needs more quantum interpretations like it needs more Category 5 hurricanes. But until some single interpretation comes along that makes everybody happy (and that’s about as likely as the Cleveland Browns winning the Super Bowl), yet more interpretations will emerge. One of the latest appeared recently (September 13) online at arXiv.org, the site where physicists send their papers to ripen before actual publication. You might say papers on the arXiv are like “potential publications,” which someday might become “actual” if a journal prints them.

And that, in a nutshell, is pretty much the same as the logic underlying the new interpretation of quantum physics. In the new paper, three scientists argue that including “potential” things on the list of “real” things can avoid the counterintuitive conundrums that quantum physics poses. It is perhaps less of a full-blown interpretation than a new philosophical framework for contemplating those quantum mysteries. At its root, the new idea holds that the common conception of “reality” is too limited. By expanding the definition of reality, the quantum’s mysteries disappear. In particular, “real” should not be restricted to “actual” objects or events in spacetime. Reality ought also to be assigned to certain possibilities, or “potential” realities, that have not yet become “actual.” These potential realities do not exist in spacetime, but nevertheless are “ontological” — that is, real components of existence.

“This new ontological picture requires that we expand our concept of ‘what is real’ to include an extraspatiotemporal domain of quantum possibility,” write Ruth Kastner, Stuart Kauffman and Michael Epperson.

Considering potential things to be real is not exactly a new idea, as it was a central aspect of the philosophy of Aristotle, 24 centuries ago. An acorn has the potential to become a tree; a tree has the potential to become a wooden table. Even applying this idea to quantum physics isn’t new. Werner Heisenberg, the quantum pioneer famous for his uncertainty principle, considered his quantum math to describe potential outcomes of measurements of which one would become the actual result. The quantum concept of a “probability wave,” describing the likelihood of different possible outcomes of a measurement, was a quantitative version of Aristotle’s potential, Heisenberg wrote in his well-known 1958 book Physics and Philosophy. “It introduced something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.”

In their paper, titled “Taking Heisenberg’s Potentia Seriously,” Kastner and colleagues elaborate on this idea, drawing a parallel to the philosophy of René Descartes. Descartes, in the 17th century, proposed a strict division between material and mental “substance.” Material stuff (res extensa, or extended things) existed entirely independently of mental reality (res cogitans, things that think) except in the brain’s pineal gland. There res cogitans could influence the body. Modern science has, of course, rejected res cogitans: The material world is all that reality requires. Mental activity is the outcome of material processes, such as electrical impulses and biochemical interactions.

Kastner and colleagues also reject Descartes’ res cogitans. But they think reality should not be restricted to res extensa; rather it should be complemented by “res potentia” — in particular, quantum res potentia, not just any old list of possibilities. Quantum potentia can be quantitatively defined; a quantum measurement will, with certainty, always produce one of the possibilities it describes. In the large-scale world, all sorts of possibilities can be imagined (Browns win Super Bowl, Indians win 22 straight games) which may or may not ever come to pass.

If quantum potentia are in some sense real, Kastner and colleagues say, then the mysterious weirdness of quantum mechanics becomes instantly explicable. You just have to realize that changes in actual things reset the list of potential things.

Consider for instance that you and I agree to meet for lunch next Tuesday at the Mad Hatter restaurant (Kastner and colleagues use the example of a coffee shop, but I don’t like coffee). But then on Monday, a tornado blasts the Mad Hatter to Wonderland. Meeting there is no longer on the list of res potentia; it’s no longer possible for lunch there to become an actuality. In other words, even though an actuality can’t alter a distant actuality, it can change distant potential. We could have been a thousand miles away, yet the tornado changed our possibilities for places to eat.

It’s an example of how the list of potentia can change without the spooky action at a distance that Einstein alleged about quantum entanglement. Measurements on entangled particles, such as two photons, seem baffling. You can set up an experiment so that before a measurement is made, either photon could be spinning clockwise or counterclockwise. Once one is measured, though (and found to be, say, clockwise), you know the other will have the opposite spin (counterclockwise), no matter how far away it is. But no secret signal is (or could possibly be) sent from one photon to the other after the first measurement. It’s simply the case that counterclockwise is no longer on the list of res potentia for the second photon. An “actuality” (the first measurement) changes the list of potentia that still exist in the universe. Potentia encompass the list of things that may become actual; what becomes actual then changes what’s on the list of potentia.
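
The bookkeeping in that story is simple enough to mock up. In the toy model below (my illustration of the idea, not the authors’ formalism, and it captures none of the statistical correlations that make entanglement genuinely strange), measuring one photon prunes the other’s list of potentia without any signal passing between them:

```python
# Toy "potentia" bookkeeping: one measurement makes one possibility actual
# and shrinks the partner's list of possibilities. Illustration only.
import random

# Before any measurement, each photon's spin is a mere potential.
potentia = {"photon_A": {"clockwise", "counterclockwise"},
            "photon_B": {"clockwise", "counterclockwise"}}

def measure(photon, partner):
    outcome = random.choice(sorted(potentia[photon]))  # a potential becomes actual
    potentia[photon] = {outcome}
    opposite = ({"clockwise", "counterclockwise"} - {outcome}).pop()
    potentia[partner] = {opposite}                     # partner's potentia shrink
    return outcome

print("A measured:", measure("photon_A", "photon_B"))
print("B's remaining possibilities:", potentia["photon_B"])
```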

Similar arguments apply to other quantum mysteries. Observation of a “pure” quantum state, containing many possibilities, turns one of those possibilities into an actual one. And the new actual event constrains the list of future possibilities, without any need for physical causation. “We simply allow that actual events can instantaneously and acausally affect what is next possible … which, in turn, influences what can next become actual, and so on,” Kastner and colleagues write.

Measurement, they say, is simply a real physical process that transforms quantum potentia into elements of res extensa — actual, real stuff in the ordinary sense. Space and time, or spacetime, is something that “emerges from a quantum substratum,” as actual stuff crystallizes out “of a more fluid domain of possibles.” Spacetime, therefore, is not all there is to reality.

It’s unlikely that physicists everywhere will instantly cease debating quantum mysteries and start driving cars with “res potentia!” bumper stickers. But whether this new proposal triumphs in the quantum debates or not, it raises a key point in the scientific quest to understand reality. Reality is not necessarily what humans think it is or would like it to be. Many quantum interpretations have been motivated by a desire to return to Newtonian determinism, for instance, where cause and effect is mechanical and predictable, like a clock’s tick preceding each tock.

But the universe is not required to conform to Newtonian nostalgia. And more generally, scientists often presume that the phenomena nature offers to human senses reflect all there is to reality. “It is difficult for us to imagine or conceptualize any other categories of reality beyond the level of actual — i.e., what is immediately available to us in perceptual terms,” Kastner and colleagues note. Yet quantum physics hints at a deeper foundation underlying the reality of phenomena — in other words, that “ontology” encompasses more than just events and objects in spacetime.
This proposition sounds a little bit like advocating for the existence of ghosts. But it is actually more of an acknowledgment that things may seem ghostlike only because reality has been improperly conceived in the first place. Kastner and colleagues point out that the motions of the planets in the sky baffled ancient philosophers because supposedly in the heavens, reality permitted only uniform circular motion (accomplished by attachment to huge crystalline spheres). Expanding the boundaries of reality allowed those motions to be explained naturally.

Similarly, restricting reality to events in spacetime may turn out to be like restricting the heavens to rotating spheres. Spacetime itself, many physicists are convinced, is not a primary element of reality but a structure that emerges from processes more fundamental. Because these processes appear to be quantum in nature, it makes sense to suspect that something more than just spacetime events has a role to play in explaining quantum physics.

True, it’s hard to imagine the “reality” of something that doesn’t exist “actually” as an object or event in spacetime. But Kastner and colleagues cite the warning issued by the late philosopher Ernan McMullin, who pointed out that “imaginability must not be made the test for ontology.” Science attempts to discover the real world’s structures; it’s unwarranted, McMullin said, to require that those structures be “imaginable in the categories” known from large-scale ordinary experience. Sometimes things not imaginable do, after all, turn out to be real. No fan of the team ever imagined the Indians would win 22 games in a row.