This article appears in the April 2023 issue of The American Prospect magazine.
In the spring of 2011, heavy rainfall swelled the Mississippi River to record levels, flooding trailer parks and pushing up gas prices as refineries and fuel terminals along the waterway closed. Surveying the wreckage, Heather McTeer Toney, then mayor of Greenville, Mississippi, found a crucial partner in Mars, Inc., the international conglomerate that makes M&M’s and pet food.
Mars operates an 80-acre rice farm in Greenville—its largest factory in the world. The company sent senior officials and shared its in-house risk assessment with Toney, which helped her plan the city’s police and fire response, design street upgrades, identify points of weakness in the wastewater and levee systems, and work with the Army Corps of Engineers.
“Today, we would call that climate risk,” Toney told the Prospect, but at the time it was “just protecting infrastructure.”
Local officials, civil engineers, and homeowners describe a growing need for information on exposure to the risks of extreme weather. In the past five years, demand has exploded. But not all cities have an anchor business as willing to share as Mars, and many might prefer not to depend on private industry for public planning.
Financial markets and private companies, meanwhile, are in an “arms race” for climate intelligence. Some firms have announced decarbonization plans, while others are pledging to double down on fossil fuels. Regulators, struggling to keep up, have asked for more disclosure.
Private climate risk modelers have been the beneficiaries of this gold rush. Their guidance falls into two buckets: physical risk, the material exposure of assets to hazards, and transition risk, which includes fallout from policy changes, impact on the financial system, and reputational harm.
Financial institutions have snapped up these modelers, driving rapid consolidation in the nascent industry. BlackRock, the world’s largest asset manager, runs a platform called Aladdin, which has been called the “central nervous system” of the investment industry. It recently acquired a climate modeler to launch Aladdin Climate, which tracks exposure to environmental risk. Credit rating agencies like Moody’s and S&P have joined in, acquiring their own climate rating firms.
The remaining independent modelers look, themselves, a lot like rating agencies. They are private entities vetting the soundness of everything from city infrastructure to financial portfolios. Their unregulated new products—vast troves of climate analytics, with little standardization—are already being used by federal agencies, incorporated into municipal bond ratings, and influencing how investors spend money.
Observers widely agree that new modeling is an improvement on the backward-looking and patchy status quo. But quality varies considerably. New consumer-facing products provide detailed assessments of exposure, but experts have warned that they could be overstating what current models are able to predict at specific properties, or over long time horizons.
“A lot of these bold, hyperlocal claims are greatly outpacing the science,” Daniel Swain, a climate scientist at UCLA, told the Prospect. Some industry leaders voiced frustration. “It’s a Wild West right now,” said Cal Inman, principal at the risk data firm ClimateCheck.
Physical scientists interviewed by the Prospect raised concerns about claims made by some of these firms, most of which are led and heavily staffed not by climate scientists but by lawyers, marketing specialists, public-policy experts, and economists.
Meanwhile, the federal government is seeking precise estimates of global warming’s economic impacts. But there too, the demand for certainty can leave regulators crafting exercises that create a false sense of security, relying on risk modelers who are selling precision in an inherently imprecise business, and formulating questions that sound technical but convey little meaningful information.
Noah Kaufman, who until last year was the top climate economist at the White House working on the social cost of carbon, said he worries about the government’s reliance on risk models built around carbon pricing. “Everybody’s looking for the number—the damage value. And the truth is, it’s an impossible question,” Kaufman said. “We don’t know the dollar per ton. We just know it might be really big.”
WHEN A PROSPECTIVE BUYER BROWSES HOMES on real estate sites Realtor.com and Redfin, she can now view scores ranking a property’s exposure to extreme weather, such as heat, flood, and fire. A Redfin study found that the tool is already having an impact on homebuying, making customers less likely to bid on homes deemed higher-risk.
Scores for both websites are provided by the First Street Foundation, a Brooklyn-based nonprofit. In fine print, Redfin advises customers that they should “independently investigate the property’s climate risks to their own personal satisfaction.”
But assessments of risk exposure are hardly a matter of personal taste. Some observers have argued that regulators should vet the information. Instead, the relationship runs in the other direction: Private climate intelligence firms are increasingly being tapped to supply the government with data and analysis.
The most prominent is First Street, whose dozens of government clients include the U.S. Treasury, the Federal Housing Finance Agency, the Consumer Financial Protection Bureau, and all 12 Federal Reserve Banks. First Street is cited in the president’s 2023 budget, and its data is used in a new White House “Climate and Economic Justice Screening Tool.”
First Street Founder and CEO Matthew Eby previously launched the digital marketing agency Anthro. Before that, he was vice president of consumer and brand marketing at The Weather Company, the world’s largest private weather firm.
Eby has argued that it is crucial to make climate risks feel immediate and specific. “If we can make it personal for them, if we make the impacts here and now, that’s what we’ve found as marketers is the best way to convey the impact of something as big as this,” he told a Virginia newspaper when he launched an earlier iteration of the flood tool.
First Street often attracts media attention when it floats a new product, as it did with a splashy press release last summer launching a new model on extreme heat. “We need to be prepared for the inevitable, that a quarter of the country will soon fall inside the Extreme Heat Belt with temperatures exceeding 125°F and the results will be dire,” Eby is quoted saying.
Eby was actually referring to a spike in the heat index—a measure of temperature plus humidity—not to air temperature. While the heat index helps capture the human toll of extreme heat, conflating it with temperature is misleading.
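The distinction matters. The heat index folds humidity into a “feels like” number that can run far above the thermometer reading. A rough sketch, using the National Weather Service’s published Rothfusz regression rather than First Street’s model, shows how a 96-degree afternoon in thick humidity can register above 125 on the index:

```python
# Rough illustration of why heat index and air temperature diverge.
# Uses the NWS Rothfusz regression (a published approximation valid for
# hot, humid conditions); this is not First Street's model.

def heat_index_f(temp_f: float, rel_humidity: float) -> float:
    """Approximate heat index (deg F) from air temperature (deg F)
    and relative humidity (percent), via the Rothfusz regression."""
    t, rh = temp_f, rel_humidity
    return (-42.379 + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh - 6.83783e-3 * t * t
            - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
            + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

# A 96 deg F afternoon at 70% relative humidity "feels like" roughly 126 deg F,
# even though the thermometer never approaches 125.
print(round(heat_index_f(96, 70)))  # ~126
```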
Eby’s site bio is similarly eye-catching. “Under Matthew’s leadership,” it reads, “the Foundation created a first-of-its-kind, peer-reviewed flood model, wildfire model, and extreme heat model to calculate the past, present, and future climate risk of every property in the United States. The Foundation has also calculated the associated economic damage for every property” (emphasis added). The foundation’s more modest Mission page claims only to provide “the most up to date science available.”
IT IS FIENDISHLY DIFFICULT to predict the likelihood and severity of future natural catastrophes.
The average, long-run trend of global heating has been well established for decades, as has the fact that it intensifies extreme weather. To show those macro trends, scientists use general circulation models (GCMs), which simulate the atmosphere, oceans, land, and ice and compute their interactions using basic laws of physics.
But GCMs consume enormous computing power, making them costly to run. And while they are good at projecting averages over large areas, their coarse resolution makes them poor at forecasting local weather or catching anomalies. Scientists have tried building higher-resolution physical models, but it has been slow going.
Not that relying on historical data is better. On the contrary: Extreme weather events are rare by definition, meaning there is a thin data set to draw on. There is also the issue of local climate variability, which, as climate risk expert Kate Mackenzie explains, makes it “harder to detect the ‘fingerprint’ of global warming” in any given event.
The catastrophe, or “cat risk,” modeling industry, which sells insurers and reinsurers odds of losses on storms, has long relied on historical statistics, mixing in some proprietary assumptions and information on clients’ portfolios. For years, its modelers openly disdained scientists who used physics equations in their models. One quipped that climate modelers were “doing brain surgery with a chain saw.”
Insurers could afford to use these backward-looking models as long as climate change was far off in the future, they reasoned, since they write short-term policies. But in the past five years, the industry has been pummeled by some of the highest losses on record, making insurers more interested in what climate modelers have to say.
Hot demand has driven rapid progress in physics-based modeling. But the best versions of these tools come with warning labels, including qualitative assessments based on local terrain data. Insurers and financial institutions are also more capable than ordinary consumers of absorbing uncertainties in modeling, since they apply models across a portfolio of locations.
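The reasoning is roughly statistical: errors at individual properties that are partly independent of one another tend to wash out when summed over thousands of locations, while a single homebuyer bears the full error at one address. A minimal sketch with invented numbers makes the point, assuming the errors really are independent; correlated model bias would not cancel, which is exactly why the quality of the underlying model still matters:

```python
# Minimal sketch of why per-property error matters less across a portfolio.
# All figures are invented for illustration, not any modeler's actual numbers.
import random

random.seed(0)
TRUE_ANNUAL_LOSS = 1_000.0   # hypothetical expected annual loss per property ($)
NOISE = 0.8                  # model error: up to +/-80% at any single property

def modeled_loss() -> float:
    """One property's modeled loss: the true loss distorted by a large random error."""
    return TRUE_ANNUAL_LOSS * (1 + random.uniform(-NOISE, NOISE))

single = modeled_loss()
portfolio_avg = sum(modeled_loss() for _ in range(10_000)) / 10_000

print(f"one property:      ${single:,.0f}  (can be off by up to 80%)")
print(f"10,000 properties: ${portfolio_avg:,.0f} (independent errors largely cancel)")
```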
FIRST STREET BURST TO NATIONAL ATTENTION in 2020 with its launch of Flood Factor, which found that federal flood maps grossly underestimated risk. Almost twice as many properties were exposed as FEMA claimed, First Street found. The New York Times published a glossy spread on the report.
First Street licensed its U.S. flood map data from Fathom, a U.K.-based research group. The firms have since split, and Fathom said it could not comment for this story. People familiar with the work told the Prospect that Fathom provided the backbone of the model before becoming concerned about First Street’s handling of it.
On its website, First Street says that Flood Factor’s methods “are going through blind reviews for traditional peer-reviewed scientific publication and have already been through an additional expert panel-review process.” That additional panel review, not the standard in academic science, involved First Street hand-picking three academics to give feedback.
“To me, that terminology of ‘peer review’ was a bit misleading and confusing,” Marco Tedesco, a climate scientist at Columbia who worked briefly with First Street, told the Prospect.
Ed Kearns, First Street’s chief data officer, pointed out in an interview with the Prospect that the Fathom model underlying their work is peer-reviewed, and even won an award. “Still today, we are using the Fathom U.S. model that Fathom created, but now we are making modifications to it and advances to it,” he said.
Asked how accurately First Street can approximate water level rise for any given property, Kearns said that “some of the work Fathom did shows that it’s around five centimeters or so.” (That’s before adding in other sources of error, he added.) But published work by Fathom suggests that their error range for flood depth is closer to one meter, which can be the difference between wet and dry, or between small and enormous losses.
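The reason a meter matters so much is that loss estimates typically run modeled water depth through a depth-damage curve, which converts inches of water into a share of a structure’s value. The curve below is stylized and hypothetical, invented for illustration rather than drawn from FEMA, Fathom, or First Street:

```python
# Stylized depth-damage curve: share of a structure's value lost at a given
# flood depth. The thresholds and slopes are invented for illustration only.

def damage_fraction(depth_m: float) -> float:
    """Hypothetical fraction of structure value lost at a given water depth (meters)."""
    if depth_m <= 0.0:
        return 0.0                               # water stays below the floor: no loss
    return min(0.9, 0.10 + 0.40 * depth_m)       # once inside, losses climb quickly

HOME_VALUE = 300_000
for depth in (-0.50, 0.50, 0.55, 1.50):
    loss = damage_fraction(depth) * HOME_VALUE
    print(f"modeled depth {depth:+.2f} m -> estimated loss ${loss:,.0f}")

# A 5 cm error (0.50 m vs. 0.55 m) shifts the estimate by a few thousand dollars;
# a 1 m error (-0.50 m vs. +0.50 m, or 0.50 m vs. 1.50 m) is the difference
# between no loss and a six-figure one.
```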
In February, First Street launched a new product, Wind Factor, aimed at predicting changes in storm- and hurricane-caused wind damage. Rather than using sparse historical data, it draws on a data set of some 50,000 “synthetic” hurricanes, examining their potential paths and damage.
At a launch event, Eby repeatedly emphasized the importance of trust, open science, and peer review. “Everything we do, we publish, we put through the peer review process,” he said.
But that is inaccurate. While First Street’s work has been published by journal publisher MDPI, including wildfire, extreme heat, and flooding models, other core products are not peer-reviewed. In fact, Wind Factor, which Eby was promoting at that event, has not been peer-reviewed. (Several critics pointed out to the Prospect that MDPI, the world’s largest publisher of open-access articles, is seen by many scientists as low-quality.)
Asked about peer review for Wind Factor, Kearns said that the model is based on peer-reviewed work by Kerry Emanuel, a pre-eminent hurricane expert at MIT who sits on First Street’s advisory board. “Since the method itself was published back in 2006, we thought, ‘We don’t need to get the technique peer-reviewed, but we can go after getting the results peer-reviewed,’” Kearns said. “It’s on our list of things to do.”
Reached by the Prospect, Emanuel enthusiastically endorsed First Street’s use of his work. “It’s just the way I hoped these tracks would be used someday,” he said. He emphasized that models currently in use are seriously outdated, and that it is particularly important to improve assessments of present-day risk.
Other experts expressed reluctance to criticize property-level predictions of modelers like First Street, saying that even where they may lack accuracy, they are a major improvement on existing tools. But studies have shown that overstating precision could encourage misallocation of capital. Rating agencies, for example, may downgrade debt based on inaccurate assessments of creditworthiness.
Tedesco wrote a paper with First Street’s then-head of data science, but before long grew uneasy with their methods and left to work with Cloud to Street, a flood insurance provider.
“I felt more comfortable because it was really more like a spin-off from academia, using a lot of academic tools to translate the research in a robust way to operational products. And this [First Street], to me, feels different. There’s no way to say, exactly, what is the role of academia,” Tedesco said.
FIRST STREET’S WEBSITE LISTS a large bench of scientists and economists as part of its “full research lab team.” But several said they had little affiliation.
“I haven’t done any work for them,” said Brett Sanders, a professor of civil engineering at UC Irvine. He believes that he is listed because he is working on a model of flood risk in Southern California, and he wanted to see how his model compared with First Street data. To access the information, he said, First Street asked him to agree to put his face and name on the website.
What work has University of Central Florida engineering professor Thomas Wahl done with First Street? “Not much, to be honest. My name is in there because we’re part of a big grant funded by the National Academies of Science,” he told the Prospect. Wahl and colleagues also reached out to access First Street data.
One academic listed on the site said that he had never even used the data he accessed through First Street.
The constant search for data is part of a bigger problem. Climate models are resource-intensive, and computing power is a major constraint. Big tech companies see an opportunity here. Supercomputing is increasingly performed in the cloud, hosted by Amazon, Microsoft, and Google. First Street is funded in part by Amazon Web Services, Amazon’s cloud-computing subsidiary. Microsoft and Google are both sponsoring efforts to bypass physics-based modeling altogether.
Aditya Grover, a computer scientist at UCLA, is working on a machine learning model for weather and climate projections. Grover said his group teamed up with Microsoft in order to access their computer servers. He insisted that their tool remain open-source, he told the Prospect, but acknowledged that Microsoft may want to build out future uses and commercialize them, which he could not control.
These developments worry Swain, the climate scientist at UCLA, who also serves as an adviser to ClimateCheck, a competitor to First Street. “There’s an incentive to produce products that may or may not be correct, but to be the first,” he said. “That’s the tech industry, in a nutshell. Move fast and break things.”
Kearns said First Street has scaled these barriers to entry. “I don’t know any [competitor] that’s doing everything to as high degree of spatial resolution as we are. They’re usually driven off by the expense and the labor involved,” he said. It may not stay that way, he added. “As I’ve told other people, it’s like, Hey, if 30 well-determined and fairly smart people in Brooklyn can create these products, you know, it’s not that hard.”
WHEN THE FEDERAL RESERVE RELEASED details in January about its first analysis of major banks’ exposure to climate change, climate scientists reacted with derision.
“Who’s up to take the measure of severe hurricanes under rapid warming 30 years from now?” one modeler wrote on Twitter. Another professor joked to his students that if they use the same techniques, “you will get a lot of points off.”
The Fed chose to focus its climate risk scenario on a future hurricane in the Northeast. Scientists wondered why it would pick one of the most complex and noisy hazards for the exercise.
R. Saravanan, head of the Department of Atmospheric Sciences at Texas A&M University, suggested in a blog post that regulators were motivated by New York City’s recent experience with Hurricane Sandy. But while Sandy inflicted tens of billions of dollars in damage, it was only a Category 1 hurricane when it made landfall in the U.S.
“Bankers may be surprised to learn that climate change might actually make weak storms like Sandy rarer in the future,” Saravanan wrote. Another danger of the model, he said, is that the Fed lets banks make a “smorgasbord” of modeling assumptions.
The Fed’s scenario analysis has no regulatory consequences—it does not trigger stricter capital requirements or supervision—so its main effect might be to create a false sense of security. But it points to a bigger problem: Where were the climate scientists when the stress test was designed?
“There’s clearly been a breakdown in interdisciplinary communication,” said Madison Condon, a law professor at Boston University who works on climate risk. “This is part of the bigger cultural trend of thinking that the economists are authoritative experts on all things—to think they could design a hurricane stress test, with no hurricane experts.”
In a new paper, “Climate Services: The Business of Physical Risk,” Condon surveys the rise of physical risk modelers, raising concerns about their use of nonstandard tools and “black box models.”
A Fed spokesperson declined to say whether any climate scientists were involved with the design of the exercise. The Fed’s published work leading up to the scenario guidance was written by economists with no apparent background in climate science.
IF CLIMATE EXPERTS DOUBT THE FED’S ABILITY to vet models of physical climate risk, they are even more skeptical of its ability to interpret those effects on the economy.
Climate economists have long relied on integrated assessment models (IAMs), a technique pioneered by Bill Nordhaus. A neoclassical economist who won the Nobel Prize in 2018, Nordhaus is reviled by activists for drawing attention to the greenhouse effect in the 1990s—only to emphasize the high costs of taking action.
IAMs rely on damage functions, which model how climate change harms the economy. Most look at the historical relationship between temperature rise and GDP, and project that forward. The tool has been widely criticized by both progressive economists and scientists, and more recent damage functions have attempted to enumerate effects within specific sectors.
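In their simplest form, damage functions map warming directly onto a share of GDP lost, often as a smooth quadratic. A stylized sketch, loosely in the spirit of DICE-type models (the coefficient and functional form here are illustrative, not any regulator’s), makes the critics’ complaint concrete: losses climb gently even at warming levels scientists consider catastrophic, and nothing in the curve can represent an abrupt break.

```python
# Stylized quadratic damage function of the kind used in DICE-type IAMs.
# The coefficient is illustrative, loosely in the range of published DICE values.
DAMAGE_COEFF = 0.00236   # fraction of GDP lost per (deg C of warming) squared

def gdp_damage_share(warming_c: float) -> float:
    """Share of global GDP lost at a given level of warming (deg C above preindustrial)."""
    return DAMAGE_COEFF * warming_c ** 2

for t in (1.5, 2.0, 3.0, 4.0, 6.0):
    print(f"{t} deg C warming -> {gdp_damage_share(t):.1%} of GDP")
# Output climbs smoothly from ~0.5% to ~8.5% of GDP -- with no mechanism for
# tipping points, sea level rise, or abrupt, discontinuous losses.
```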
However, to model transition risk, the Fed is using tools developed by the Network for Greening the Financial System (NGFS), a coalition of central banks. The damage function used in NGFS’s latest scenarios examines how weather has historically affected GDP, abstracting away from specific sectors or macroeconomic trends. It omits sea level rise and damage to biodiversity and ecosystems, and it ignores tipping points or sudden changes, like the collapse of a Florida-sized ice shelf, or a crisis in the troubled Florida housing market. It also excludes climate change’s toll on human life, even though the authors acknowledge that years of life lost “constitute the major share of the costs of global warming in the United States.”
NGFS then describes climate impacts under three future scenarios. “They’re trying to come up with an approach simple enough that the banks would actually try to use it,” said Robert Brammer, an atmospheric and oceanic scientist at the University of Maryland.
But those three static scenarios mask huge amounts of uncertainty, Brammer said. “It’s like Goldilocks: cool, warm, hot.”
POLICY MODELS LIKE THOSE USED by the Congressional Budget Office fail to capture the costs of a warming climate, or the benefits of the energy transition, outside the limited ten-year budget window, giving lawmakers little basis for action.
The Biden administration has made some headway in integrating climate risk into modeling. The White House’s long-term budget outlook estimated the economic impact of future greenhouse gas emissions, and it said the findings should be seen as a minimum estimate of likely costs, “in the context of substantial uncertainty.”
Yet the administration has also forged ahead with carbon pricing initiatives, an issue on which climate hawks are split.
In January, the Department of Commerce released new guidance for environmental-economic decisions, in an attempt “to put nature on the national balance sheet.” The aim is to create “natural capital accounts,” to value critical natural resources like water and forests.
Basic economics introduces the concept of an “externality,” and pollution is the classic example: It’s costly, but you don’t have to pay for it. The solution is to “price it in.” Fifty years of environmental policy has proposed varying sorts of carbon pricing, a solution that is famously elegant and even more famously unworkable in the U.S. political system.
President Obama introduced the “social cost of carbon,” which he valued at $43 a ton. The Trump administration slashed that to $3–$5 a ton. Biden plans to raise the figure to $190 a ton, a number that could inform everything from energy efficiency mandates to fuel economy rules and environmental reviews for major projects.
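The mechanics are simple: each ton of emissions a rule avoids is credited with the per-ton damage value, and that benefit is weighed against compliance costs. A toy example, with invented emissions and cost figures but the administrations’ actual per-ton values, shows how the choice of number can decide whether a rule pencils out:

```python
# How the social cost of carbon enters a cost-benefit test.
# Emissions and compliance-cost figures below are invented for illustration;
# the per-ton values are the Trump-era, Obama-era, and proposed Biden numbers.
ANNUAL_TONS_AVOIDED = 10_000_000        # hypothetical emissions cut from a rule (tons CO2/yr)
ANNUAL_COMPLIANCE_COST = 1_200_000_000  # hypothetical cost to industry ($/yr)

for scc in (5, 43, 190):                # $ per ton of CO2
    benefit = scc * ANNUAL_TONS_AVOIDED
    verdict = "passes" if benefit > ANNUAL_COMPLIANCE_COST else "fails"
    print(f"SCC ${scc:>3}/ton -> climate benefit ${benefit / 1e9:.2f}B/yr -> rule {verdict}")
```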
A higher cost of carbon has powerful supporters. It should be used “across government decision-making, not just in regulations,” Sen. Sheldon Whitehouse (D-RI), chairman of the Senate Budget Committee, told E&E News. “Think grants, permitting, purchasing, royalty rates, investment decisions, and trade agreements, just to name a few.”
Kaufman, who was until recently a senior climate economist in the White House, questioned the need for the government to set a single price. Regulators should stop “pretending we can quantify things that are not quantifiable,” he told the Prospect. “Do we really need a quantitative estimate of climate damages in 2100 under a certain scenario?”
The conundrum is that Congress runs on budgetary costs. Unless a policy can be shown to be cost-effective, it won’t be passed into law.
But realist critics point out that the U.S. has repeatedly rejected carbon pricing. Climate policy experts Danny Cullenward and David Victor have shown that even where the U.S. superficially appears to have passed a carbon price—such as with cap-and-trade programs—direct regulation has done the heavy lifting.
The Inflation Reduction Act, the White House’s signature climate legislation, lavishly subsidizes green spending without penalizing fossil fuels. It is the opposite of carbon pricing, which is about making emissions more expensive. In that context, observers say, the administration’s carbon pricing efforts look vestigial.
DURING THE 20TH CENTURY, America’s world-class weather forecasting provided data that drove improvements in crop yields, safe transportation, and warnings for freak storms. Scientific advancements in weather also informed strategy at the Department of Defense.
In recent years, the private sector has chipped away at this public good. Companies like AccuWeather have pushed Republicans to protect them from competition from the National Weather Service, which makes its predictions available for free online. These attempts got a boost in 2018, when President Trump nominated the CEO of AccuWeather to lead NOAA.
Arguing against creating a National Climate Service to accompany the National Weather Service, Rep. Frank Lucas (R-OK), chairman of the House Science Committee, has said that it would only “create more red tape and hurdles to our budding weather industry.”
Yet AccuWeather and its counterparts, which consult for the aviation and logistics sectors, rely on high-quality, freely available data provided by the U.S. government. In medicine and drug development, the government funds basic research, but there is a large intermediate industry where pharmaceutical companies are expected to invest billions in direct research. There is no such intermediate industry in weather; the government produces most of the data, and then the private sector uses it directly.
The accuracy of weather prediction has improved dramatically over the past several decades. By contrast, while the federal government does produce climate-related economic models, its forecasting is orders of magnitude cruder.
In a forthcoming book, Doyne Farmer, a Houston-born physicist who now teaches at Oxford, emphasizes this divergence between weather and economic forecasting. Whereas weather is modeled from the bottom up, packed with local details, the economic models most in use are top-down aggregates, which typically rely on a single representative household rather than attempting to describe the behavior of real market participants.
To Farmer, who pioneered the field of complex systems science, this seems ludicrous. It is, he writes, as if meteorologists were to undertake the “utterly hopeless” exercise of “predicting the U.S. weather based solely on measurements of its average temperature, barometric pressure, and wind velocity.”
As the systemic risks of global warming come into view, many economists are becoming less sanguine about internalizing the costs of pollution. Michael Greenstone, the Milton Friedman Distinguished Service Professor in Economics at the University of Chicago, has called climate change “the ultimate negative externality.”
Perhaps climate change is just the mother of all externalities—an anomaly, given its scope and the way it links together disparate risks—in a world that is otherwise well described by neoclassical models. Seen this way, the climate emergency is just a bad fit for economics. Once its catastrophic harms are contained, we’ll be able to continue business as usual with smooth cost-benefit projections.
Or perhaps climate is just one risk-riddled complex process among many interactive and hazardous systems handed down by industrial modernity. On that view, climate is no aberration. Planetary catastrophe, long underestimated and still poorly described by neoclassical economists, exposes the flimsiness of their models.