Over the weekend, the nation saw the horrific toll of massive flooding across South Carolina, which has left at least nine dead. The Charleston area got nearly a dozen inches of rain on Saturday, while some parts of the state shattered existing rainfall records with as much as 24 inches over the weekend. Gov. Nikki Haley, among other officials, has categorized the rains as the worst “in 1,000 years.”
The meteorological events the state experienced are the result of a complex interplay of fronts related to, but not solely or directly caused by, the passing of Hurricane Joaquin significantly offshore. As Jeff Halverson of the Washington Post’s Capital Weather Gang explains:
As Hurricane Joaquin tracked north, well east of the coast, a separate, non-tropical low pressure system was setting up shop over the Southeast late last week. This system drew in a deep, tropical plume of water vapor off the tropical Atlantic Ocean. At the same time, this upper-level low pressure system tapped into the moist outflow of Hurricane Joaquin.
The moisture pipeline fed directly into a pocket of intense uplift on the northern side of the non-tropical vortex. Within this dynamic “sweet spot,” thunderstorms established a training pattern, passing repeatedly over the same location and creating a narrow corridor of torrential rain stretching from Charleston to the southern Appalachians.
The remarkable thing about this process is that it was sustained for three days.
While there’s no question that these rains have had huge and tragic impacts, it’d be useful to take the “1,000-year” declarations with an enormous grain of salt. We do not have anything like 1,000 years of rainfall or flooding data for South Carolina; we barely have 100 years of solid data. To call the rainfall totals a 1,000-year event is simply to say that our models assign this degree of rainfall an annual probability of about 0.1 percent. Basic Bayesian reasoning, though, suggests that the odds our models were incomplete or simply wrong far outweigh the probability that we really are experiencing a millennial event.
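The Bayesian point can be made concrete with a toy calculation. The numbers below — the even prior odds on the model and the assumption that a "wrong" model understates a 1-in-50-year storm as a 1-in-1,000-year one — are invented purely for illustration, not estimates of the actual South Carolina risk:

```python
# Toy Bayesian sketch: after observing a storm labeled a "1,000-year event,"
# how much credence should we give the model that produced the label?
# All numbers here are illustrative assumptions, not real estimates.

p_model_ok = 0.5       # prior: the 1-in-1,000-year estimate is roughly right
p_model_wrong = 0.5    # prior: the model badly understates the true frequency

p_event_if_ok = 1 / 1000    # annual chance of the storm if the model is right
p_event_if_wrong = 1 / 50   # assumed annual chance if the model is wrong

# Bayes' rule: P(model ok | event observed)
evidence = p_event_if_ok * p_model_ok + p_event_if_wrong * p_model_wrong
posterior_ok = (p_event_if_ok * p_model_ok) / evidence

print(f"P(model right | storm observed) = {posterior_ok:.3f}")
```

Under these assumptions, observing the storm drops the model's credibility from even odds to under 5 percent — the event is far stronger evidence that the "1,000-year" label was wrong than that a millennial storm actually occurred.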
Moreover, when it comes to flooding specifically, the facts “on the ground” have changed quite literally. As the built environment expands, land that previously was permeable to rainfall and wetlands that previously acted as a sink for flooding become covered with impermeable homes and businesses and parking lots. Similarly, as we build dams and levees and retention basins and other flood-control mechanisms, the flow of that water — which still needs to go somewhere — is inexorably altered. Storms that wouldn’t have been much of a concern at all a generation or two ago can cause devastating flooding today.
Many of the flooding projections quoted in the media and repeated by public officials come from the Flood Insurance Rate Maps prepared for the National Flood Insurance Program by the Federal Emergency Management Agency. But we’ve known for years that those maps are inadequate and outdated; in some cases, they haven’t been updated in decades. FEMA’s efforts to fix and update the maps — including through the use of advanced Light Detection And Ranging (LIDAR) remote-sensing technologies — date all the way back to 1997.
But Congress has been somewhat schizophrenic in its approach to actually completing the project. Communities newly mapped as Special Flood Hazard Areas and those that see their designations changed to reflect higher risk tend to complain vociferously to their representatives when they find they have to purchase flood insurance or when the cost of their coverage increases. While the importance of getting accurate, updated, digitized maps has been stressed in repeated flood-insurance debates by members of both parties, funding for FEMA’s Flood Hazard Mapping and Risk Analysis program was slashed from $222 million in FY 2009 to $204 million in 2010 to $99 million in 2011 to just $84 million in 2014. The largest chunk of these cuts came from the 2011 sequester deal.
Moreover, while the Biggert-Waters Act of 2012 required the NFIP to adjust the rates it charges to reflect updates to the rate maps, Congress moved in February 2014 to defund that provision (Section 100207, originally set to take effect in October 2014) for 12 to 18 months. Just two months later, Congress repealed it altogether.
It should be noted that all of these questions — the risk of catastrophic rainfall, the risk of flooding, the interplay of flood risk with the built environment, the degree to which flood maps are accurate and up-to-date — assume that the level of “real risk” we face is a fixed quantity. In reality, it almost certainly isn’t. Sea levels are rising. Surface, water, and atmospheric temperatures are also rising. We don’t know how these changes will impact the risks we all face. Even if our ability to use the historical record to model future events were absolutely flawless, there would still be a large fuzzy area of uncertainty.
What we can take away as the primary lesson of experiencing something the experts call a “1,000-year” event — whether a storm or a flood or a drought — is that it probably isn’t.
Read more at Right Street Blog.