Challenging the Wildfire Paradigm
I’ve been studying fire ecology for decades, an interest that led in 2006 to the publication of my book Wildfire: A Century of Failed Forest Policy. My interest in wildfire did not end with the book; I have continued to read and digest the fire-related literature, attend conferences, and, most importantly, visit and observe large blazes around the West.
What I began to question, even as I put together Wildfire, was the idea that low-severity/high-frequency fires were the dominant influence upon western dry forest landscapes.
Yet the majority of Forest Service “restoration” is based upon the idea that somehow our forests are out of whack: that fire suppression has created dense stands, that those stands have allowed fuel buildup, and that we are therefore experiencing abnormal fires. That’s the common story everyone repeats. The problem is that it is probably not true.
Therefore, all the forest restoration work being done is likely not restoring anything; rather, it serves more as an excuse for logging than as genuine restoration.
Consider these points.
1. The majority of low-severity/high-frequency fires, if not all of them, are small. Tens of thousands of lightning-caused fires occur around the West, but the vast majority (like 99%) burn out before they can char more than a few trees. Even if you totaled up all the acreage burned by these thousands upon thousands of fires, the overall effect on the landscape would be very small because the geographical footprint of each blaze is tiny. I have probably traveled more of the West looking at fires than anyone I know, and I have yet to see a significant area burned as a low-severity fire. The reason is that the major factor determining fire spread, severity, and size is burning conditions. You get low-severity fires when conditions are not favorable for fire spread.
2. The vast majority of the acreage burned in any year is due to a very small number of fires. These blazes occur under highly favorable climate/weather conditions: low humidity, high winds, high temperatures, and drought. They have little to do with fuels. Think of this for yourself: there is more fuel in the Olympic rainforest than anyplace else in the West, but the Olympic forests seldom burn. Why? Because they are too wet most of the time for a fire to get started, and even when one does start, conditions rarely allow it to burn much acreage.
3. Most larger, landscape-scale fires do not burn as a single type of blaze. Rather, they are a mixture of low-, mixed-, and high-severity burns. We call some of these “stand replacement” fires, meaning that the majority of trees may be killed by fire; but even in stand replacement blazes, it is unusual to get more than a 50% kill of trees within the burn perimeter. Fires burn in a mosaic, with patches of fire-killed trees, other patches intermixed with live and dead trees, and still other patches where few if any trees are killed. So even in a “stand replacement” burn you can easily have 50% of the forest burned at mixed or low severity (or not burned at all).
4. I’ve been re-reading many of the fire scar studies done around the West upon which “restoration” is based, and most of them (maybe all of them) are flawed. Most contain statistical and methodological errors that exaggerate the number of fires.
One flaw is targeted sampling. Basically, one goes out, finds trees with fire scars, and samples them. But these are not random samples. In other words, one is seeking out trees that are scarred by fire, which means ignoring the majority of all trees. Yet people then suggest that these fire-scarred trees represent the condition of the landscape as a whole. It’s like walking into a bar in Dillon, Montana, noting that the majority of men sitting there have cowboy boots on, and then suggesting that the majority of all men in America wear cowboy boots. It may be true of bar patrons in Dillon, but not of men in general. The same is true of the results of fire scar reconstructions.
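The sampling bias can be illustrated with a toy simulation. All numbers here are hypothetical, chosen only to show the mechanism: when you sample only scarred trees, your sample says nothing about how common scarring is across the stand.

```python
import random

random.seed(1)

# Hypothetical stand: only a small fraction of trees actually carry fire scars.
N_TREES = 10_000
TRUE_SCAR_FRACTION = 0.05  # illustrative, not from any real study
trees = [random.random() < TRUE_SCAR_FRACTION for _ in range(N_TREES)]

# Targeted sampling: seek out scarred trees, as fire scar studies do.
targeted = [t for t in trees if t][:50]

# Random sampling: pick trees without regard to scarring.
random_sample = random.sample(trees, 50)

print(f"true scarred fraction in stand: {sum(trees) / N_TREES:.1%}")
print(f"targeted sample scarred:        {sum(targeted) / len(targeted):.1%}")
print(f"random sample scarred:          {sum(random_sample) / len(random_sample):.1%}")
```

The targeted sample is 100% scarred by construction, like the bar full of cowboy boots, while the random sample recovers something close to the true stand-wide fraction.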
A second flaw is that most fire scar reconstructions use “composites” of the fire scars. In other words, they add all scars together to come up with the “fire interval”. But this is highly biased in a number of ways.
As noted above, most fires affect only a few trees or small acreage. So should they have the same “weight” as, say, a fire that burns the entire study area? What you find is that the majority of small fires do not affect much area and probably have little overall influence on the landscape. In other words, if you have a thousand-acre study area and lightning causes a single tree to burn, should you count that, as most studies do, as one “interval” in the forest burn cycle?
Worse yet, the larger the sample area, the more likely you are to pick up a lot of these single-burn trees, which skews the fire interval toward shorter and shorter time frames, giving a false picture of burn frequency across the landscape. On the other hand, too small a sample can also skew things, since you might miss a large stand replacement event: the one plot you sampled might, for whatever reason, have been one of the unburned or lightly burned sites within an otherwise more severe and widespread fire.
It is the relatively rare but large fires that do the bulk of the ecological work. In addition, unless you cross-date the fires, you can have many single-scarred trees, each one due to a different lightning strike, unrelated to any other fire in the area and burning only a tiny fraction of the total landscape.
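The compositing bias can also be sketched as a simulation. The numbers below are purely illustrative assumptions: large landscape fires recur on a fixed long cycle, while each sampled tree has a small yearly chance of recording a tiny single-tree lightning fire. Compositing all scar years together makes the apparent interval shrink as more trees are sampled, even though the big fires are no more frequent.

```python
import random

random.seed(42)

YEARS = 400                # hypothetical study period
LANDSCAPE_FIRE_EVERY = 80  # assumed recurrence of the rare large fires
SPOT_FIRE_PROB = 0.02      # assumed yearly chance a given tree records
                           # a tiny single-tree lightning fire

def composite_interval(n_trees):
    """Mean fire interval from a composite of all scar years across
    n_trees sampled trees (the common compositing method)."""
    fire_years = set()
    # Large fires scar trees across the whole area, so they always show up.
    fire_years.update(range(0, YEARS, LANDSCAPE_FIRE_EVERY))
    # Tiny spot fires scar individual trees; more trees sampled, more spot
    # fire years swept into the composite.
    for _ in range(n_trees):
        for year in range(YEARS):
            if random.random() < SPOT_FIRE_PROB:
                fire_years.add(year)
    return YEARS / len(fire_years)

for n in (1, 5, 20, 50):
    print(f"{n:>3} trees sampled -> composite interval "
          f"~{composite_interval(n):.1f} years")
```

With one tree the composite interval is a few decades; with fifty trees it collapses toward a year or two, although the landscape-scale fires still occur only every 80 years.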
A third flaw is the way people interpret the results. Fires are episodic, much like floods on rivers. The vast majority of fires occur in series driven by climate/weather conditions. Thus you can have 2-3 fires in one decade, followed by perhaps 80 years without any fires, then another decade of drought with a series of very large blazes. In other words, you could easily have 5 fires in a hundred years, which would give you a fire return interval of 20 years, but this would be deceptive. In reality you had 80 years without a single fire.
This is somewhat like river floods. Despite the name “hundred-year flood,” you can have two hundred-year floods back to back, followed by 200-400 years without any significant floods. The same goes for fires. Such a temporal pattern would undoubtedly lead to dense forest stands that are occasionally “thinned” by fire, beetles, or disease.
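The arithmetic behind this point can be made concrete with a made-up fire-year record: five fires within a 100-year window, clustered in two decades with a long fire-free stretch between them.

```python
# Hypothetical record: five fires in a 100-year window, clustered in two
# fire-prone decades (illustrative numbers only).
PERIOD = 100
fire_years = [3, 7, 12, 92, 98]

mean_interval = PERIOD / len(fire_years)  # 100 / 5 = 20 years
gaps = [b - a for a, b in zip(fire_years, fire_years[1:])]

print(f"mean fire return interval:     {mean_interval:.0f} years")
print(f"gaps between successive fires: {gaps}")
print(f"longest fire-free stretch:     {max(gaps)} years")
```

The mean interval comes out to a tidy 20 years, yet the actual gaps are 4, 5, 80, and 6 years: the average erases the 80-year lull that shaped the forest.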
We know from other methods, including geomorphic evidence, fire atlases, and pollen and charcoal records, that fire patterns are highly influenced by changing climatic conditions. These conditions are in turn largely driven by factors like offshore currents, periodic shifts in solar input, and so on. These large global influences have a lot to do with how much forest burns and under what kinds of conditions. What these studies indicate is that large fires are quite normal, even in so-called “dry forests” like ponderosa pine, if you view things at the proper temporal and spatial scales. For many forest types, at least, we are likely not experiencing larger fires, or fires outside the “historic” variability, when viewed at the proper time and geographical scales.
The other factor is the cultural bias against dead trees. Dead trees are a sign of a healthy forest. We need beetle kill, wildfires, and diseases like mistletoe to keep our forest ecosystems functioning, yet most forest management is designed to reduce or eliminate these important factors. The way to think about beetles, fires, and disease is as predators: they keep a forest healthy, just as wolves keep the elk herd healthy. Trying to limit these natural processes to a small part of the landscape is like saying it’s OK for a few token wolves to kill a few elk, but we don’t want them affecting elk across the state. If you take that attitude, then you are effectively eliminating wolf predation as a major ecological factor. The same applies to managing forests to reduce the occurrence of fires, disease, and beetles. We need to embrace these forest processes for the critical role they play in maintaining healthy forest ecosystems.
The end result of all this is that the vast majority of forests now being thinned to “restore” their “historic variability” are likely not outside that variability at all, and thus do not need restoration. I would not suggest this applies to every forest stand, but I am willing to bet the vast majority of restoration projects are based on out-of-date interpretations of past historic conditions.
George Wuerthner is the Ecological Projects Director for the Foundation for Deep Ecology and has published 35 books, including the soon-to-be-released Energy: Overdevelopment and the Delusion of Endless Growth.