This history book offers excellent images but skimps on modern science

Books about the history of science, like many other histories, must contend with the realization that others have come before. Their tales have already been told. So such a book is worth reading, or buying, only if it offers something more than the same old stories.

In this case, The Oxford Illustrated History of Science most obviously offers an excellent set of illustrations and photographs from science’s past, from various ancient Egyptian papyri to the Hubble Space Telescope’s ultradeep view of distant galaxies. Some of the images will be familiar to science fans; many others are obscure but apt; nearly all help illuminate some aspect of science’s history.
And yet the pictures, while many may be worth more than 10,000 words, are still just complements to the text. Oxford attempts a novel organization for recounting the story of science: a sometimes hard-to-follow mix of chronological and topical. The first section, “Seeking Origins,” has six chapters that cover ancient Mediterranean science, science in ancient China, medieval science (one chapter for the Islamic world and Europe, one for China), plus the scientific revolution and science in the Enlightenment. The second section, “Doing Science,” shifts to experimenting, fieldwork, biology, cosmology, theory and science communication.
Each chapter has a different author, which has the plus of bringing distinct expertise to each subject but the minus of wide variation in readability and caliber of content. Some chapters (see “Exploring Nature,” on field science) are wordy and repetitive and lack scientific substance. Others (“Mapping the Universe”) are compelling, engaging and richly informative. A particularly disappointing chapter on biology (“The Meaning of Life”) focuses on 19th century evolution, sparing only a few paragraphs for the life science of the 20th and 21st centuries. That chapter closes on an odd, antiscientific note, lamenting the “huge numbers of people … addicted to antidepressants” and complaining that modern biology (and neuroscience) “threatens to undermine traditional values of moral responsibility.”

Some of the book’s strongest chapters are the earliest, especially those that cover aspects of science often missing from other histories, such as science in China. Who knew that the ancient Chinese had their own set of classical elements — not the Greeks’ air, earth, water and fire, but rather wood, fire, water, soil and metal?

With the book’s second-half emphasis on how science was done rather than what science found out, the history that emerges is sometimes disjointed and out of order. Discussions of the modern view of the universe, which hinges on Einstein’s general theory of relativity, appear before the chapter on theory, where relativity is mentioned. In fact, both relativity and quantum theory are treated superficially in that chapter, as examples of the work of theorists rather than as the components of a second scientific revolution.
No doubt lack of space prevented deeper treatment of science from the last century. Nevertheless the book’s merits outweigh its weaknesses. For an accessible account of the story of pre-20th century science, it’s informative and enjoyable. For more recent science, you can at least look at the pictures.

Intense storms provide the first test of powerful new hurricane forecast tools

This year’s Atlantic hurricane season has already proven active and deadly. Powerful hurricanes such as Harvey, Irma and Maria are also providing a testing ground for new tools that scientists hope will save lives by improving forecasts in various ways, from narrowing predictions of a storm’s future path to capturing swift changes in the intensity of its winds.

Some of the tools that debuted this year — such as the GOES-16 satellite — are already winning praise from scientists. Others, such as a new microsatellite system aiming to improve measurements of hurricane intensity and a highly anticipated new computer simulation that forecasts hurricane paths and intensities, are still in the calibration phase. As these tools get an unprecedented workout thanks to an unusually ferocious series of storms, scientists may know in a few months whether hurricane forecasting is about to undergo a sea change.

The National Oceanic and Atmospheric Administration’s GOES-16 satellite is perhaps the clearest success story of this hurricane season so far. Public perceptions of hurricane forecasts tend to focus on uncertainty and conflicting predictions. But in the big picture, hurricane models adeptly forecast Irma’s ultimate path to the Florida Keys nearly a week before it arrived there, says Brian Tang, an atmospheric scientist at the University at Albany in New York.
“I found that remarkable,” he says. “Ten or so years ago that wouldn’t have been possible.”

One reason for this is GOES-16, which launched late last year and will become fully operational in November. The satellite offers images at four times the resolution of previous satellites. “It’s giving unparalleled details about the hurricanes,” Tang says, including data on wind speeds and water temperatures delivered every minute that are then fed into models.

GOES-16’s crystal-clear images also give forecasters a better picture of the winds swirling around a storm’s central eye. But more data from this crucial region is needed to improve predictions of just how strong a hurricane might get. Scientists continue to struggle to predict rapid changes in hurricane intensity, Tang says. He notes how Hurricane Harvey, for example, strengthened suddenly to become a Category 4 storm right before it made landfall in Texas, offering emergency managers little time to issue warnings. “That’s the sort of thing that keeps forecasters up at night,” he says.
In December, NASA launched a system of eight suitcase-sized microsatellites called the Cyclone Global Navigation Satellite System, or CYGNSS, into orbit. The satellites measure surface winds near the inner core of a hurricane, including the region between the eyewall and the most intense bands of rain, at least a couple of times a day. Those regions have previously been invisible to satellites, measured only by hurricane-hunter airplanes darting through the storm.

“Improving forecasts of rapid intensification, like what occurred with Harvey on August 25, is exactly what CYGNSS is intended to do,” says Christopher Ruf, an atmospheric scientist at the University of Michigan in Ann Arbor and the lead scientist for CYGNSS. Results from CYGNSS measurements of both Harvey and Irma look very promising, he says. While the data are not being used to inform any forecasts this year, the measurements are now being calibrated and compared with hurricane-hunter flight data. The team will give the first detailed results from the hurricane season at the annual meeting of the American Geophysical Union in December.
Meanwhile, NOAA has also been testing a new hurricane forecast model this year. The U.S. forecasting community is still somewhat reeling from its embarrassing showing during 2012’s Hurricane Sandy, which the National Weather Service had predicted would go out to sea while a European meteorological center predicted, correctly, that it would squarely hit New York City. In the wake of that event, Congress authorized $48 million to improve U.S. weather forecasting, and in 2014 NOAA held a competition to select a new weather prediction tool to improve its forecasts.

The clear winner was an algorithm developed by Shian-Jiann Lin and colleagues at NOAA’s Geophysical Fluid Dynamics Laboratory in Princeton, N.J. In May, NOAA announced that it would test the new model this hurricane season, running it alongside the more established operational models to see how it stacks up. Known as FV3 (short for Finite-Volume Cubed-Sphere Dynamical Core), the model divides the atmosphere into a 3-D grid of boxes and simulates atmospheric conditions within them, with boxes as large as 4 kilometers across or as small as 1 kilometer. Unlike existing models, FV3 can also re-create vertical air currents that move between boxes, such as the updrafts that are a key element of hurricanes as well as tornadoes and thunderstorms.

But FV3’s performance so far this year hasn’t been a slam dunk. FV3 did a far better job at simulating the intensity of Harvey than the other two leading models, but it lagged behind the European model in determining the hurricane’s path, Lin says. As for Irma, the European model outperformed the others on both counts. Still, Lin says he is confident that FV3 is on the right track. That’s good, because pressure to work out the kinks may ramp up rapidly. Although NOAA originally stated that FV3 would be operational in 2019, “I hear some hints that it could be next year,” he says.

Lin adds that a good model alone isn’t enough to get a successful forecast; the data that go into a model are ultimately crucial to its success. “In our discipline, we call that ‘garbage in, garbage out,’” he says. With GOES-16 and CYGNSS nearly online, scientists are looking forward to even better hurricane models thanks to even better data.

Ice in space might flow like honey and bubble like champagne

Ice in space may break out the bubbly. Zapping simulated space ice with imitation starlight makes the ice bubble like champagne. If the same happens in space, that liquidlike behavior could help organic molecules form at the edges of infant planetary systems. The experiment provides a peek into the possible origins of life.

Shogo Tachibana of Hokkaido University in Sapporo, Japan, and colleagues combined water, methanol and ammonia, all found in comets and interstellar clouds where stars form, at a temperature between ‒263° Celsius and ‒258° C. The team then exposed this newly formed ice to ultraviolet radiation to mimic the light of a young star.

As the ice warmed to ‒213° C, it cracked like a brittle solid. But at just five degrees warmer, bubbles started appearing in the ice, and continued to bubble and pop until the ice reached ‒123° C. At that point, the ice returned to a solid state and formed crystals.

“We were so surprised when we first saw bubbling of ice at really low temperatures,” Tachibana says. The team reports its finding September 29 in Science Advances.

Follow-up experiments showed that fewer bubbles formed in ice with less methanol and ammonia. Ice that wasn’t irradiated showed no bubbles at all.

Analyses revealed spikes of hydrogen gas during irradiation. That suggests the bubbles are made of hydrogen that the ultraviolet light split off from the methanol and ammonia molecules, Tachibana says. “It is like bubbling in champagne,” he says — with an exception. Champagne bubbles form from dissolved carbon dioxide, while the ice’s bubbles form from dissolved hydrogen.
The irradiated ice took on another liquidlike feature: Between about ‒185° C and ‒161° C, it flowed like refrigerated honey, despite being well below its melting temperature, Tachibana adds.

That liquidity could help kick-start life-building chemistry. In 2016, Cornelia Meinert of the University Nice Sophia Antipolis in France and colleagues showed that irradiated ice forms a cornucopia of molecules essential to life, including ribose, the backbone of RNA, which may have been a precursor to DNA (SN: 4/30/16, p. 18). But it was not clear how smaller molecules could have found each other and built ribose in rigid ice.

At the time, critics said complex molecules could have been contamination, says Meinert, who was not involved in the new work. “Now this is helping us argue that at this very low temperature, the small precursor molecules can actually react with each other,” she says. “This is supporting the idea that all these organic molecules can form in the ice, and might also be present in comets.”

The brain’s helper cells have a hand in learning fear

WASHINGTON, D.C. — Helper cells in the brain just got tagged with a new job — forming traumatic memories.

When rats experience trauma, cells in the hippocampus — an area important for learning — produce signals for inflammation, helping to create a potent memory. But most of those signals aren’t coming from the nerve cells, researchers reported November 15 at the Society for Neuroscience meeting.

Instead, more than 90 percent of a key inflammation protein comes from astrocytes. This role in memory formation adds to the repertoire of these starburst-shaped cells, once thought to do little more than provide food and support to more important brain cells (SN Online: 8/4/15).
The work could provide new insight into how the brain creates negative memories that contribute to post-traumatic stress disorder, said Meghan Jones, a neuroscientist at the University of North Carolina at Chapel Hill.

Jones and her colleagues gave rats a short series of foot shocks painful enough to “make you curse,” she said. A week after that harrowing experience, rats confronted with a milder shock remained jumpy. In some rats, Jones and her colleagues inhibited astrocyte activity during the original trauma, which prevented the cells from releasing the inflammation protein. Those rats kept their cool in the face of the milder shock.

These preliminary results show that neurons get a lot of help in creating painful memories. Studies like these are “changing how we think about the circuitry that’s involved in depression and post-traumatic stress disorder,” says neuroscientist Georgia Hodes of Virginia Tech in Blacksburg. “Everyone’s been focused on what neurons are doing. [This is] showing an important effect of cells we thought of as only being supportive.”

CRISPR gene editor could spark immune reaction in people

Immune reactions against proteins commonly used as molecular scissors might make CRISPR/Cas9 gene editing ineffective in people, a new study suggests.

About 79 percent of 34 blood donors tested had antibodies against the Cas9 protein from Staphylococcus aureus bacteria, Stanford University researchers report January 5 at bioRxiv.org. About 65 percent of donors had antibodies against the Cas9 protein from Streptococcus pyogenes.

Nearly half of 13 blood donors also had T cells that seek and destroy cells that make S. aureus Cas9 protein. The researchers did not detect any T cells that attack S. pyogenes Cas9, but the methods used to detect the cells may not be sensitive enough to find them, says study coauthor Kenneth Weinberg.
Cas9 is the DNA-cutting enzyme that enables researchers to make precise edits in genes. Antibodies and T cells against the protein could cause the immune system to attack cells carrying it, making gene therapy ineffective.

The immune reactions may be a technical glitch that researchers will need to work around, but probably aren’t a safety concern as long as cells are edited in lab dishes rather than in the body, says Weinberg, a stem cell biologist and immunologist.

“We think we need to address this now … as we move toward clinical trials,” he says, but “this is probably going to turn out to be more of a hiccup than a brick wall.”

Humans are overloading the world’s freshwater bodies with phosphorus

Human activities are driving phosphorus levels in the world’s lakes, rivers and other freshwater bodies to a critical point. The freshwater bodies on 38 percent of Earth’s land area (not including Antarctica) are overly enriched with phosphorus, leading to potentially toxic algal blooms and less available drinking water, researchers report January 24 in Water Resources Research.

Sewage, agriculture and other human sources add about 1.5 teragrams of phosphorus to freshwaters each year, the study estimates. That’s roughly equivalent to about four times the weight of the Empire State Building. The scientists tracked human phosphorus inputs from 2002 to 2010 from domestic, industrial and agricultural sources. Phosphorus in human waste was responsible for about 54 percent of the global load, while agricultural fertilizer use contributed about 38 percent. By country, China contributed 30 percent of the global total, India 8 percent and the United States 7 percent.
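The Empire State Building comparison is easy to sanity-check. A back-of-envelope sketch, assuming the building’s often-quoted mass of roughly 365,000 tons (a figure not given in the study):

```python
# Rough check of the "four Empire State Buildings" comparison.
# Assumption: the building's often-quoted mass of ~365,000 tons,
# treated here as metric tons for simplicity.
ESB_MASS_KG = 365_000 * 1_000   # ~3.65e8 kg
PHOSPHORUS_KG = 1.5e9           # 1.5 teragrams = 1.5e9 kg

ratio = PHOSPHORUS_KG / ESB_MASS_KG
print(f"{ratio:.1f} Empire State Buildings of phosphorus per year")  # ~4.1
```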

New technique shows how 2-D thin films take the heat

High-energy particle beams can reveal how 2-D thin sheets behave when the heat is cranked up.

Researchers have devised a way to track how these materials, such as the supermaterial graphene, expand or contract as temperatures rise (SN: 10/3/15, p. 7). This technique, described in the Feb. 2 Physical Review Letters, showed that 2-D semiconductors arranged in single-atom-thick sheets expand more like plastics than metals when heated. Better understanding the high-temp behaviors of these and other 2-D materials could help engineers design sturdy nano-sized electronics.
Commonly used silicon-based electronics are “hitting a brick wall,” regarding how much smaller they can get, says Zlatan Aksamija, an electrical engineer at the University of Massachusetts Amherst not involved in the work. Materials made of ultrathin, 2-D films could be ideal for building the next generation of tinier devices.

But electronics warm up as electric current courses through them. If 2-D materials in a nanodevice expand or shrink at different rates from each other when heated, that could change the device’s electronic properties — such as how well it conducts electricity, says Antoine Reserbat-Plantey, a physicist at the Institute of Photonic Sciences in Barcelona not involved in the research. It’s crucial to know how the thin films react to higher temps.

The new method uses a scanning transmission electron microscope to bombard a film with a beam of high-energy particles. That particle beam stirs up electrons in the 2-D sheet, making the electrons swish back and forth through the material. The collective oscillation, called a plasmon, occurs at a frequency that depends on the material’s density, explains Matthew Mecklenburg, a physicist at the University of Southern California in Los Angeles who was not involved in the work.

The plasmon frequency affects how much energy the particles of the microscope beam lose as they streak through the 2-D material: the higher the frequency, the denser the material, and the more energy that is sapped from the beam. By using another instrument to measure the energies of beam particles after they’ve passed through the 2-D material, researchers can discern the material’s density — and track how that density changes as they turn up the heat.
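
The density dependence described above can be illustrated with the textbook free-electron formula for the plasmon frequency, which grows as the square root of electron density. A minimal sketch of that scaling (a simplification for illustration, not the analysis the team performed):

```python
import math

# Free-electron plasmon frequency: omega_p = sqrt(n * e^2 / (eps0 * m_e)).
# A textbook simplification to illustrate the density-frequency scaling;
# not the actual analysis used in the study.
E = 1.602176634e-19       # electron charge, in coulombs
EPS0 = 8.8541878128e-12   # vacuum permittivity, in F/m
M_E = 9.1093837015e-31    # electron mass, in kg

def plasmon_freq_hz(n_per_m3):
    """Plasmon frequency (Hz) for electron density n (per cubic meter)."""
    omega = math.sqrt(n_per_m3 * E**2 / (EPS0 * M_E))
    return omega / (2 * math.pi)

# Doubling the density raises the frequency by sqrt(2), not by 2:
f1 = plasmon_freq_hz(1e29)
f2 = plasmon_freq_hz(2e29)
print(f2 / f1)  # ~1.414
```
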
Robert Klie, a physicist at the University of Illinois at Chicago, and colleagues used this technique on samples of graphene, which is made of carbon atoms, and four 2-D semiconductors made of transition metal and chalcogen atoms. (Chalcogen elements are found in group 16 on the periodic table and include sulfur and selenium). These materials were arranged in sheets from a single atom to a few atoms thick. The team measured the density of each sample at eight temperatures between about 100° and 450° Celsius. That allowed the scientists to calculate how much each material expanded or contracted per degree of temperature increase.

These measurements revealed that the thinnest structures undergo more significant size changes than thicker sheets: A single layer of graphene, which contracts when heated, shrinks more than materials composed of a few graphene layers. The 2-D semiconductors expand at higher temps, but those made of one-atom-thick sheets swell more than semiconductors a few atoms thick. In fact, the heat response of the single-atom-thick semiconductors is “more like [that] of a plastic than a metal,” Mecklenburg says.

This finding may indicate that, like plastics, some 2-D semiconductors have low melting temperatures, which could affect how or whether they’re used in future electronics.

Shipping noise can disturb porpoises and disrupt their mealtime

Harbor porpoises are frequently exposed to sounds from shipping vessels that register at around 100 decibels, about as loud as a lawnmower, scientists report February 14 in Proceedings of the Royal Society B. Sounds this loud can cause porpoises to stop echolocation, which they use to catch food.

While high-frequency submarine sonar has been found to harm whales (SN: 4/23/11, p. 16), low-frequency noise from shipping vessels is responsible for most human-made noise in the ocean, the researchers say. Porpoises have poor hearing in lower frequencies, so it was unclear if they were affected.

In the first study to assess the effects of shipping vessel noise on porpoises, researchers tagged seven harbor porpoises off the coast of Denmark with sensors that tracked the animals’ movement and echolocation usage in response to underwater noise over about 20 hours.

One ship created a 130-decibel noise — twice as loud as a chainsaw — that caused a porpoise to flee at top speed. These initial results indicate that ship noise could affect how much food porpoises hunt and consume.
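
The loudness comparisons rest on decibel arithmetic: every 10-decibel step is a tenfold jump in sound intensity, which listeners perceive as roughly a doubling of loudness. A quick illustration (the ~120-decibel chainsaw figure is a common reference value, not from the study):

```python
# Decibel arithmetic behind the loudness comparisons.
# Assumption: a chainsaw registers ~120 dB, a common reference figure.
def intensity_ratio(db_a, db_b):
    """How many times more intense sound A is than sound B."""
    return 10 ** ((db_a - db_b) / 10)

# A 130-dB ship versus a 120-dB chainsaw: 10x the physical intensity,
# perceived as roughly twice as loud.
print(intensity_ratio(130, 120))  # 10.0
```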

When it comes to baby’s growth, early pregnancy weight may matter more than later gains

When you’re pregnant, you spend a lot of time on scales. Every doctor appointment begins with hopping (or waddling) up for a weigh-in. Health care workers then plot those numbers into a (usually) ascending curve as the weeks go by.

A morbid curiosity about exactly how enormous you’re getting isn’t what’s behind the scrutiny. Rather, the pounds put on during pregnancy can give clues about how the pregnancy is progressing.

Weight gain during pregnancy is tied to the birth weight of the new baby: On average, the more weight that mothers gain, the bigger the babes. If a mother gains a very small amount of weight, her baby is more likely to be born too early and too small. And if a mother gains too much weight, her baby is at risk of being born large, which can cause trouble during delivery and future health problems for babies.
But staying within the recommended weight range is hard. Very hard. A 2017 review published in JAMA, which all told looked at over a million pregnancies around the world, showed that the vast majority of women fell outside the weight gain sweet spot: 23 percent of those women didn’t gain enough, and 47 percent gained too much.

But here’s the tricky part. Many studies on weight gain during pregnancy and babies’ outcomes start monitoring women who are already pregnant. That means that these studies rely on women to remember, and report correctly, their prepregnancy weight. And that might not always be accurate.

A new study offers a more nuanced look at pregnancy weight gain. The results, taken from the pregnancies of more than 1,000 Chinese women, suggest that when it comes to babies’ birth weights, the timing of maternal weight gain matters, a lot.
Overall, a woman’s weight gain during pregnancy was clearly linked to baby’s weight at birth, the researchers found. But within those 40 weeks, there were big differences. Prepregnancy weight and weight gain during the first half of pregnancy are the measurements that matter, researchers suggested in the February JAMA Pediatrics. Weight gain after 18 weeks wasn’t linked to babies’ birth weight, researchers note.

Similar results, described in PLOS ONE, come from a 2017 study of Vietnamese women: Weight gain during the first half of pregnancy had two to three times as much influence on infant birth outcomes as weight gain in the second half. It’s worth mentioning that nearly three-quarters of the Vietnamese women gained too little weight during pregnancy, and that the Chinese women, on the whole, were lean before they got pregnant. Both scenarios make it hard to translate these findings to women who began pregnancy overweight.

Still, the point remains that weight gain during the first half of pregnancy (and even before it) may have outsized influence on the baby’s growth. Pregnancy — and the growing baby — change so much from week 0 to week 40. It makes sense that all pregnancy weight gain isn’t one and the same.

It’s nice to see these complexities emerge as scientists get more fine-grained data. There’s still so much we don’t know about how weight gain during pregnancy, as well as other aspects of the in utero environment, can shape babies’ future health.

Here’s when the universe’s first stars may have been born

For the first time, scientists may have detected hints of the universe’s primordial sunrise, when the first twinkles of starlight appeared in the cosmos.

Stars began illuminating the heavens by about 180 million years after the universe was born, researchers report in the March 1 Nature. This “cosmic dawn” left its mark on the hydrogen gas that surrounded the stars (SN: 6/8/02, p. 362). Now, a radio antenna has reportedly picked up that resulting signature.
“It’s a tremendously exciting result. It’s the first time we’ve possibly had a glimpse of this era of cosmic history,” says observational cosmologist H. Cynthia Chiang of the University of KwaZulu-Natal in Durban, South Africa, who was not involved in the research.

The oldest galaxies seen directly with telescopes sent their starlight from significantly later: several hundred million years after the Big Bang, which occurred about 13.8 billion years ago. The new observation used a technique, over a decade in the making, that relies on probing the hydrogen gas that filled the early universe. That approach holds promise for the future of cosmology: More advanced measurements may eventually reveal details of the early universe throughout its most difficult-to-observe eras.

But experts say the result needs additional confirmation, in particular because the signature doesn’t fully agree with theoretical predictions. The signal — a dip in the intensity of radio waves across certain frequencies — was more than twice as strong as expected.

The unexpectedly large observed signal suggests that the hydrogen gas was colder than predicted. If confirmed, this observation might hint at a new phenomenon taking place in the early universe. One possibility, suggested in a companion paper in Nature by theoretical astrophysicist Rennan Barkana of Tel Aviv University, is that the hydrogen was cooled due to new types of interactions between the hydrogen and particles of dark matter, a mysterious substance that makes up most of the matter in the universe.
If the interpretation is correct, “it’s quite possible that this is worth two Nobel Prizes,” says theoretical astrophysicist Avi Loeb of Harvard University, who was not involved with the work. One prize could be given for detecting the signature of the cosmic dawn, and another for the dark matter implications. But Loeb expresses reservations about the result: “What makes me a bit nervous is the fact that the [signal] that they see doesn’t look like what we expected.”

To increase scientists’ confidence, the result must be verified by other experiments and additional tests, says theoretical cosmologist Matias Zaldarriaga of the Institute for Advanced Study in Princeton, N.J. Several other efforts to detect the signal are already under way.

Experimental cosmologist Judd Bowman of Arizona State University in Tempe and colleagues teased out their evidence for the first stars from the impact the light had on hydrogen gas. “We don’t see the starlight itself. We see indirectly the effect that the starlight would have had” on the cosmic environment, says Bowman, a collaborator on the Experiment to Detect the Global Epoch of Reionization Signature, EDGES, which detected the stars’ traces.
Collapsing out of dense pockets of hydrogen gas early in the universe’s history, the first stars flickered on, emitting ultraviolet light that interacted with the surrounding hydrogen. The starlight altered the proportion of hydrogen atoms found in different energy levels. That change caused the gas to absorb light of a particular wavelength, about 21 centimeters, from the cosmic microwave background — the glow left over from around 380,000 years after the Big Bang (SN: 3/21/15, p. 7). A distinctive dip in the intensity of the light at that wavelength appeared as a result.

Over time, that light’s wavelength was stretched to several meters by the expansion of the universe, before being detected on Earth as radio waves. Observing the amount of stretching that had taken place in the light allowed the researchers to pinpoint how long after the Big Bang the light was absorbed, revealing when the first stars turned on.
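That stretching can be turned into numbers. EDGES reported its absorption dip centered near 78 megahertz; since the hydrogen line is emitted at about 1,420 megahertz (a wavelength of 21 centimeters), the redshift follows from the frequency ratio. A sketch of the conversion (translating the redshift into an age of 180 million years requires a full cosmological model, so only the redshift and stretched wavelength are computed here):

```python
# Convert the observed frequency of the 21-cm absorption dip into a
# redshift and an observed wavelength. The ~78 MHz dip center is the
# value EDGES reported; the rest-frame line sits at 1420.4 MHz.
REST_FREQ_MHZ = 1420.4
REST_WAVELENGTH_M = 0.211

def redshift(observed_mhz):
    """Redshift implied by an observed 21-cm line frequency."""
    return REST_FREQ_MHZ / observed_mhz - 1

z = redshift(78.0)
stretched_m = REST_WAVELENGTH_M * (1 + z)
print(f"z ~ {z:.1f}, observed wavelength ~ {stretched_m:.1f} m")
```

The result, a redshift of about 17 and a wavelength near 4 meters, squares with the “several meters” described above.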

Still, detecting the faint dip was a challenge: Other cosmic sources, such as the Milky Way, emit radio waves at much higher levels, which must be accounted for. And to avoid interference from sources on Earth — like FM radio stations — Bowman and colleagues set up their table-sized antenna far from civilization, at the Murchison Radio-astronomy Observatory in the western Australian outback.

Scientists hope to use similar techniques with future, more advanced instruments to map out where in the sky stars first started forming, and to reveal other periods early in the universe’s history. “This is really the first step in what’s going to become a new and exciting field,” Bowman says.