Science, at its best, is a self-correcting process—a continual pursuit of truth through rigorous evidence, transparent methodology, and open debate. But what happens when errors are introduced, perpetuated, and defended? Roger Pielke Jr. uncovers the story of the so-called “Frankenstein dataset” in hurricane damage research, which illustrates how flawed data practices distort public perception and policy.
The Frankenstein dataset, born of undocumented changes to a rigorous, peer-reviewed dataset, highlights a crisis in scientific integrity. Its story is not only a cautionary tale about bad data, but also a case study in how the scientific process can fail when institutional accountability is lacking. Let's look at how this dataset came about, why it matters, and what it reveals about the current state of climate science.
The Origins of the Dataset
The original dataset, developed over decades of research by Pielke and his colleagues, was designed to normalize hurricane losses by adjusting for inflation, population growth, and other societal factors. This normalization allows researchers to compare losses from historical hurricanes on an apples-to-apples basis, separating trends driven by growing wealth and development from any underlying change in the storms themselves.
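To make the normalization idea concrete, here is a minimal sketch of the general approach. The function name, parameter names, and the multipliers in the example are hypothetical and only illustrate the form of adjustment described in Pielke et al. (2008); they are not values from the actual dataset.

```python
# Minimal sketch of hurricane loss normalization (illustrative only; all
# parameter values below are hypothetical, not figures from the real dataset).

def normalize_loss(reported_loss, inflation_factor, wealth_factor, population_factor):
    """Scale a historical reported loss to present-day societal conditions.

    reported_loss      -- nominal damage in the year of landfall (USD)
    inflation_factor   -- price level today / price level in landfall year
    wealth_factor      -- real wealth per capita today / in landfall year
    population_factor  -- affected-area population today / in landfall year
    """
    return reported_loss * inflation_factor * wealth_factor * population_factor

# Example: a hypothetical 1950s hurricane causing $100M in nominal damage.
normalized = normalize_loss(100e6, inflation_factor=11.0,
                            wealth_factor=2.5, population_factor=4.0)
print(f"normalized loss: ${normalized / 1e9:.1f}B")  # ~$11.0B in today's terms
```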
This dataset is well documented in peer-reviewed studies, e.g., Weinkle et al. (2018) and Pielke et al. (2008), as a reliable tool for understanding hurricane impacts. It is based on NOAA's "Best Track" record of hurricanes making landfall in the United States and follows a consistent methodology throughout.
However, the story took a dark turn when the dataset passed into the hands of the insurance company ICAT.
How the Frankenstein dataset came about
Following the publication of Pielke et al. (2008), Pielke's team worked with ICAT to create the ICAT Damage Estimator, an online tool designed to visualize hurricane damage using the peer-reviewed dataset. Initially, the collaboration worked as intended: the tool broadened access to high-quality research for industry stakeholders.
But in 2010, ICAT was acquired by another company, and Pielke was no longer involved. Over the following years, ICAT staff who lacked expertise in disaster loss normalization made undocumented changes to the dataset, including replacing post-1980 entries with figures from NOAA's Billion Dollar Disasters (BDD) database, which uses a completely different methodology.
Main changes
- Substitution of NOAA's BDD data: ICAT replaced post-1980 entries with BDD figures, which include inland flood losses (from the National Flood Insurance Program, or NFIP) as well as broader economic impacts such as commodity losses and disaster relief expenditures. These additional loss categories inflated post-1980 damage estimates, artificially creating an upward trend (see the toy sketch after this list).
- Additional events: The revised dataset, ICAT17, introduced 61 additional storm damage events, none of which were sourced or documented. Most of these undocumented events fall after 1980, further distorting the dataset.
- Methodological discontinuities: The BDD methodology, which NOAA adopted in 2016, is incompatible with the original dataset. For example, NFIP flood payouts did not exist before 1968, so entries that include them cannot be meaningfully compared with losses from earlier storms.
- Other undocumented changes: Beyond the BDD substitutions, ICAT17 contains further undocumented alterations to the original dataset. These changes introduce an upward bias even before any normalization adjustments are applied.
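The sketch below is a toy illustration of the apples-to-oranges accounting described in the list above: counting extra loss categories only for post-1980 storms makes the same physical storm look substantially more damaging under the newer rules. All dollar figures are synthetic; none come from ICAT or NOAA.

```python
# Toy illustration of inconsistent loss accounting (synthetic figures only).

storm_components_billion = {
    "direct wind damage":       8.0,  # counted under both methodologies
    "NFIP flood payouts":       2.5,  # NFIP exists only after 1968; counted in BDD-style entries
    "indirect/relief spending": 1.5,  # counted only under BDD-style accounting
}

original_rules = storm_components_billion["direct wind damage"]
bdd_style_rules = sum(storm_components_billion.values())

print(f"entry under the original methodology: ${original_rules:.1f}B")
print(f"entry under BDD-style accounting:     ${bdd_style_rules:.1f}B "
      f"({100 * (bdd_style_rules / original_rules - 1):.0f}% larger for the same storm)")
```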
Steve McIntyre commented on Pielke Jr.'s post.
By the time ICAT posted this Frankenstein dataset online, it was so far removed from the original peer-reviewed data that it no longer resembled a rigorous research product.
How the Frankenstein Dataset Has Been Misused
The ICAT17 dataset was later extended and renamed; it appears as XCAT (ICAT23) in Willoughby et al. (2024). It was adopted by researchers who assumed it was a professionally maintained, reliable resource. In particular:
- Grinsted et al. (2019) and Willoughby et al. (2024) used XCAT to claim that normalized hurricane losses in the United States are trending upward, and attributed this trend to climate change.
- These studies were published in reputable peer-reviewed journals, including the Proceedings of the National Academy of Sciences, and were subsequently cited in influential reports, including the IPCC's AR6.
However, Pielke's analysis shows that when the original dataset (Weinkle et al. 2018) is used instead of XCAT/ICAT23, the claimed upward trends disappear. In other words, the upward trends reported in these studies are entirely a product of flawed data practices.
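A toy check of this point, using synthetic numbers rather than any real loss data: when a trendless series has BDD-style extra loss categories spliced in only after 1980, a simple least-squares fit reports a positive trend that the underlying data does not contain.

```python
# Synthetic demonstration: splicing inflated post-1980 entries into a trendless
# series produces a spurious upward trend. No real loss data is used here.
import random

random.seed(1)
years = list(range(1940, 2024))
base = [random.uniform(5.0, 15.0) for _ in years]        # trendless "true" losses, $B
spliced = [b if y < 1980 else b * 1.5 + 2.0              # extra loss categories post-1980
           for y, b in zip(years, base)]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(f"consistent series trend: {ols_slope(years, base):+.3f} $B per year")     # ~ zero
print(f"spliced series trend:    {ols_slope(years, spliced):+.3f} $B per year")  # spurious positive
```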
Implications for climate science and policy
The consequences of these mistakes are far-reaching:
- Distorted public perception: These flawed studies were amplified by major journals and the Intergovernmental Panel on Climate Change (IPCC), reinforcing claims that climate change is causing increased hurricane losses. While politically expedient, this assertion is not supported by direct NOAA measurements, which do not show long-term trends in U.S. landfalling hurricanes or their intensity.
- Scientific integrity compromised: The willingness of peer-reviewed journals to publish studies based on undocumented, methodologically inconsistent data, and their refusal to retract them once the defects become apparent, points to a breakdown of the scientific process. This failure has damaged public trust.
- Misguided policy decisions: Policies based on flawed data can divert resources away from effective hazard reduction strategies. These studies exaggerate the role of climate change in hurricane damage and obscure the real drivers of vulnerability, such as poor land-use planning and inadequate building codes.
Call for a course correction
As Pielke said, “In science, mistakes are made.” What matters is how the scientific community responds when those errors are discovered. The Frankenstein dataset saga offers an opportunity for just such a course correction:
- Journals that published studies based on the ICAT17/XCAT dataset, including the Proceedings of the National Academy of Sciences, should retract the flawed papers to prevent further misuse of the dataset.
- The climate science community must adopt stricter standards for data transparency and provenance to avoid similar errors in the future.
- Policymakers should demand higher quality evidence before enacting costly climate policies based on unsubstantiated claims.
This case is not just about bad data but also about the integrity of the scientific process. If climate science is to be credible and fulfill its stated role of informing policy, it must adhere to the highest standards of rigor, transparency and accountability.
Final thoughts
The Frankenstein dataset is a stark reminder of the dangers of accepting scientific claims uncritically. The temptation to fit data to convenient narratives is strong, but genuine scientific progress requires resisting that impulse. As Pielke's critique demonstrates, science can only fulfill its promise as a self-correcting endeavor if errors are acknowledged and corrected. Let this be a wake-up call for climate science: integrity must come before ideology.
Source:
Don’t use the ICAT hurricane loss ‘dataset’: An opportunity for course correction in climate science
https://twitter.com/RogerPielkeJr/status/1870496128304578675
https://twitter.com/RogerPielkeJr/status/1870873521808871774