The Man at Work Collection--Studies in Sustainability

Installment Ten:  “Our Marine Fisheries— Strategies for Stopping Their Decline and Promoting Recovery”

By Deborah L Jackman, PhD, PE, LEED AP™


Fishermen Hauling in their Nets at Sea, ca. 1890, by Georges-Jean-Marie Haquette, oil on canvas

Introduction:

          The inspiration for this essay came from a trip to the grocery store. I am a compulsive food label reader, always scanning labels for the ingredients and origins of the items I purchase. I try to buy locally (or at least regionally) whenever possible, and I also try to purchase items that are produced as sustainably as possible. Given the choice between several similar products with different places of origin, I will generally choose the product from the region with the best environmental safeguards in place. Shopping for frozen fish one evening, I noticed that all of the fish available in the freezer section of that particular supermarket were labeled a “Product of China”. Even the shrimp (which are abundant in our own U.S. Gulf Coast region) were available only as a “Product of Thailand”. I have nothing inherently against either China or Thailand but, truth be told, neither nation has a particularly good environmental track record. The incident got me thinking about how the seafood we consume is harvested and how we can help ensure this is done responsibly and sustainably.

          The fishermen in our subject painting, Fishermen Hauling in their Nets at Sea, didn’t have to worry about the impact their fishing operations were having on ocean ecosystems. Art historians have noted that the fishermen depicted are using an old-style trawl net to capture fish such as sardines, mackerel, and blue whiting off the coast of France. In the 19th century the world’s human population was much smaller, placing far less pressure on fish populations as food sources. Furthermore, many of the pollutants and factory-ship fishing methods that jeopardize both fish populations and the larger ocean ecosystem today were not yet a factor. The trawl net used by the men in the painting is dwarfed by modern trawl nets, which extend much farther from the boat and can haul in tons of fish at a time because they are operated by mechanized winches rather than hauled in by human strength alone.

          With the earth’s human population currently estimated at more than 7 billion people, sources of protein in the human diet are increasingly at a premium. Production of land-based protein sources such as livestock, and of terrestrial plant-based sources such as seeds, nuts, and legumes, is limited by its own set of environmental and economic constraints. Throughout human history, the oceans have appeared to be an inexhaustible resource. However, in the last century humans have exploited the oceans at increasing rates, and we have reached a point that would have been unimaginable to our forebears: a point where we are exploiting ocean resources faster than the oceans can regenerate. In short, our utilization of ocean resources, and of marine fisheries in particular, has become unsustainable. This essay will examine the factors contributing to the decline of the oceans’ fisheries and will suggest ways we might reverse that decline.


Causes of the Current Decline in Fish Stocks:

          There is no single reason for the current decline of ocean ecosystems and fisheries. The oceans form an extremely complex ecosystem, and not all of the dynamics governing them are fully understood, even today. However, most experts agree that three main forces are likely to blame for the fisheries’ decline:

  • Overfishing
  • Global warming/global climate change
  • General degradation of ocean ecosystems due to pollution

 

          Overfishing occurs when fish are harvested in numbers too great to allow breeding stocks to replenish themselves. In 2006, the Global International Waters Assessment (GIWA) taskforce, commissioned by the United Nations Environment Programme (UNEP), published a comprehensive report on the status of the world’s aquatic ecosystems, entitled “Challenges to International Waters: Regional Assessments in a Global Perspective” [1]. One of the report’s major conclusions was that 52% of the world’s fish stocks were being fished at capacity (meaning any additional pressure would render them overexploited and unsustainable), 16% were overexploited, and 7% were depleted. Hence, only 25% of the world’s fish stocks were deemed to be at healthy levels. Pressures on fisheries have only grown since the report was released in 2006, so the current statistics are likely worse than those published in the GIWA report.
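The arithmetic behind overfishing can be made concrete with the classic Schaefer surplus-production model from the fisheries literature (a standard textbook model, not one drawn from the GIWA report). The sketch below, with purely hypothetical parameter values, shows how a stock harvested below its maximum sustainable rate settles at a healthy equilibrium, while heavier harvest rates fish it down toward collapse:

```python
# Illustrative sketch of the Schaefer surplus-production model:
#   dB/dt = r*B*(1 - B/K) - h*B
# where B is stock biomass, r its intrinsic growth rate, K the carrying
# capacity, and h the harvest rate. All parameter values are hypothetical.

def simulate_stock(h, years=50, r=0.4, K=1.0, b0=0.9):
    """Step the stock forward one year at a time; biomass cannot go negative."""
    b = b0
    for _ in range(years):
        b = max(b + r * b * (1 - b / K) - h * b, 0.0)
    return b

# Any h < r leaves a positive equilibrium B* = K*(1 - h/r), but sustainable
# yield peaks at h = r/2; as h approaches r, the stock is fished down to a
# small fraction of its unfished size.
for h in (0.10, 0.20, 0.35):
    print(f"harvest rate {h:.2f}: biomass after 50 years = {simulate_stock(h):.2f} of K")
```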

          It is generally agreed that the main causes of overfishing include 1) over-exploitation, 2) excessive by-catch and discards, and 3) destructive fishing practices.
          Over-exploitation results from the overharvesting of commercial species by industrial fishing fleets. Industrial fishing ships use large mechanized trawl nets and can harvest tons of fish each day. Many also house on-board fish processing and freezer facilities so that harvests can be immediately preserved for market. Such on-board facilities tend to encourage overharvesting, because crews know that everything they catch will be marketable and nothing will be lost to spoilage.

          Many countries’ commercial fishing fleets knowingly violate agreed-upon catch limits, harvesting far more fish than what has been determined to be sustainable. The scientific journal Nature [2] recently reported that commercial Chinese fishing fleets operating off the coast of West Africa took 2.9 million tons of fish from those waters during 2011, severely depleting the stocks available to local West African peoples, who rely on fishing for a subsistence diet. Worldwide, Chinese commercial fishing operations harvested 4.6 million tons of fish in 2011, twelve times what they reported to the United Nations. The discrepancy was uncovered by researchers who compared what the Chinese reported having caught against commercial sales figures of fish sold under a “Product of China” label of origin. (Fish produced by Chinese aquaculture, as opposed to wild-caught fish, were excluded from the calculations, so the reported numbers reflect only wild-caught fish.) While China’s is not the only commercial fleet to violate catch limits, it has been responsible for some of the most egregious violations in recent years.

          Excessive by-catch and discards also contribute to commercial overfishing and to damage of the ocean ecosystem. When certain types of nets and fishing practices are employed, many species of non-commercial fish, reptiles such as sea turtles, and marine mammals can be captured incidentally along with the targeted species. These animals are either left to die in the nets or, if returned to the ocean, often die anyway from trauma. While not commercially desirable, many by-catch fish are part of the food chain and provide food for commercial species. If stocks of non-commercial fish are depleted through by-catch, populations of commercially desirable fish are also negatively impacted.

          Destructive fishing practices include bottom trawling (which, when employed in coastal areas, can destroy delicate coral reefs), blast fishing (using explosives such as dynamite to set off underwater blasts that kill or stun fish and bring them to the surface for collection), and poisoning (using chemicals such as cyanide and chlorine bleach to kill fish, which then float to the surface and are collected). Blast fishing and poisoning not only contribute to reef destruction (from which a coral reef is estimated to require up to a century to recover) but can also jeopardize human health; there are reported instances of people being sickened by eating fish harvested with poisons. Blast fishing and poisoning are generally used not by large commercial fishers but by subsistence fishers who live near the coast and rely on fishing for their livelihoods. Nonetheless, the ecological damage done by these practices is severe, because it falls on delicate coastal areas already threatened by agricultural chemical run-off, untreated human sewage, oil spills, and soil erosion from the land. Blast fishing and poisoning are currently most prevalent in Southeast Asia and off the coast of East Africa, near Somalia [1].

          Scientists also believe that certain important commercial fish populations and marine ecosystems are in decline due to global climate change, and this decline will likely worsen as atmospheric CO2 levels continue to rise. Rising CO2 levels not only increase ocean temperatures but also decrease salinity (as ice caps containing fresh water melt and dilute salt concentrations) and increase acidity (since higher dissolved CO2 levels lower the water’s pH). Two examples of how these impacts may play out are discussed below; however, the projected damage to global fisheries is not limited to these examples and threatens to be far more widespread and pervasive.
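The acidification mechanism just mentioned follows from standard carbonate chemistry (general chemistry, not a result from any of the cited reports). Dissolved CO2 forms carbonic acid, which dissociates and releases hydrogen ions:

```latex
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-},
\qquad \text{pH} = -\log_{10}[\mathrm{H^+}]
```

More dissolved CO2 therefore means more H+ and a lower pH; the added H+ also reacts with carbonate ions, reducing the supply available to shell- and reef-building organisms.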

          One of the best examples of damage to a marine ecosystem resulting from increased atmospheric carbon dioxide levels is the decline and death of coral reefs in many places on earth [3]. While localized pollution has contributed to their decline, the more general culprits are bacterial infections and increased ocean acidity. An increase in water acidity slows the formation of the coral’s calcium carbonate skeleton and, in extreme cases, can even cause it to soften or dissolve. The coral is therefore less able to recover from the normal wear-and-tear it experiences from ocean currents, tidal forces, and storms. And as increased CO2 levels cause ocean temperatures to rise, the warmer waters promote more rapid bacterial growth, so bacterial infections in coral become more frequent and severe. Since many species of fish and shellfish of commercial interest depend on the health of coral reefs, the decline of these reefs (an environmental tragedy in its own right) also undermines the health of the fisheries they support.

          An example of an important commercial fish species likely to be affected as global climate change progresses is the Atlantic cod [4],[5]. Cod has been commercially important for millennia; the maritime cultures of the North Atlantic region, including those of Norway, Newfoundland, and New England, were historically based on cod, and cod was traditionally abundant. However, due to severe overfishing in the last half century, cod populations plummeted. After stringent catch limits, and outright bans in some fisheries, were instituted starting in the 1990s, the cod population slowly began to recover. Now, however, it faces an uncertain future, not because of overfishing but because of global climate change. Cod is a cold-water fish that needs cold water in which to breed; it is also a carnivore that feeds on other fish. As ocean temperatures in the traditional cod fisheries of the North Atlantic rise, cod will migrate farther north, into the colder waters of the arctic. Preliminary computer modeling designed to project cod populations under global climate change suggests that in the short term this migration will increase cod stocks in far northern waters. But as populations in the north grow, stocks of other arctic fish species will be depleted by the voracious cod. In the extreme, cod become cannibalistic, feeding on their own young if sufficient prey of other species is not present, and populations will then plummet. In the process, fish species indigenous to the arctic will be driven toward extinction. Since ocean salinity will drop as water temperatures rise, the viability of cod eggs and young hatchlings is also uncertain; they may not survive outside a relatively narrow range of salinity. So, at this point, the ultimate fate of the cod is unknown. Under the best of circumstances, cod populations will reach some new steady-state level, but at much higher latitudes than before, and they will become scarce in their traditional fisheries of the North Atlantic.

          In addition to overfishing and climate change, the general decline of ocean ecosystems due to pollution also negatively impacts fish populations.   Pollution takes many forms:  dumping of untreated sewage into oceans, chemical and agricultural run-off from land-based activities, unauthorized dumping of garbage at sea, oil spills, etc.  A comprehensive discussion of ocean pollution is beyond the scope of this article. However, two general points can be made: 1) increased ocean pollution follows from increased human population and human activity, and 2) ocean pollution can be controlled if human beings enact and follow basic environmental protocols and pollution prevention strategies.   

 

Why Fish Farming is Not the Definitive Solution:

          A substantial amount of the fish we consume today is no longer wild-caught but produced via aquaculture (i.e., on so-called “fish farms”), and levels of aquacultural production are projected to grow rapidly. At first blush, this approach may seem to be a solution to overfishing, but in fact fish farming carries its own set of negative environmental impacts.

          Emerson [6] provides an excellent overview of the negative environmental and social impacts of aquaculture.   These include:

  • Pollution of inland and coastal waters—Most water pollution from aquaculture is the result of fish feed. Direct pollution results from overfeeding fish. Uneaten food adds to the biological oxygen demand (BOD) in the water and can result in eutrophication. Oxygen is needed to break down the uneaten feed into the fundamental byproducts of organic decay: carbon dioxide, water, and nitrogen compounds. When large amounts of organic matter, such as fish food, enter the ecosystem, the oxygen needed for that breakdown is drawn from the water, reducing dissolved oxygen levels; this can result in fish kills and excessive algae growth. (A back-of-envelope illustration of this oxygen-demand arithmetic appears after this list.) Pollution also occurs when the fish eat the food and excrete waste products into the water, introducing high levels of nitrogen-based compounds and bacteria. Overfeeding exacerbates this, because the fish will consume more food than they can digest, with undigested and partially digested food passing through them into the ecosystem. Practices that can mitigate the negative environmental impacts of fish feeding include 1) not overfeeding fish; 2) careful site selection for coastal fish farms, choosing areas with strong currents and good water circulation that flush out byproducts more quickly; 3) biological and/or physical pretreatment of effluent waters before discharge, to remove excess pollutants where practical; and 4) polyculture, a practice in which several species are raised together, including at least one “bottom feeder” that consumes excess food falling to the bottom. Common polyculture systems combine scaly finned fish with oysters or mussels.

 

  • Using natural fish stocks to feed farmed fish—Farmed fish, like all animals, must eat in order to survive and grow. For carnivorous fish, such as farm-raised salmon, diets must include large amounts of animal-based feed. One common practice is to catch wild fish species that have little or no market value themselves (so-called “garbage fish”) and feed them to the farmed fish. The problem with this practice is that many “garbage fish,” while unpalatable to humans, form an important link in the natural food chain. Depleting these non-marketable species can cause wild populations of commercially important species to decline. Potential solutions include raising (and creatively marketing) more herbivorous species of fish instead of carnivorous ones, and feeding carnivorous fish feed manufactured from land-based meat and dairy by-products.

 

  • Introduction of alien or modified species to wild ecosystems, which can threaten biodiversity—Some farmed fish species have been improved through selective breeding such that they grow larger more quickly and exhibit other traits uncommon in their wild relatives. When such individuals are released (intentionally or by accident) into the wild, their genetic material can come to dominate that of the wild variety, since they may grow and reproduce at an accelerated pace. Genetic material present in wild individuals that could be critical to future species survival is potentially lost. This scenario becomes even more problematic when genetically engineered fish (which contain genes from other species) are farmed: then it is not just preferential selection within the species’ natural genome that occurs, but potentially the introduction of wholly foreign genetic material into the natural fish population, should such GMO organisms escape into the wild. (Such a scenario is not unlikely, given that as of March 2014 the US Food and Drug Administration was in the final stage of approving for sale to consumers the AquAdvantage® salmon, a genetically modified Atlantic salmon containing genetic material from the unrelated Chinook salmon and from the eel-like ocean pout, allowing it to grow twice as fast as its natural, non-GMO relative [7].) A related scenario occurs when a species not native to a region (and which therefore cannot interbreed with wild relatives) is farmed. When such individuals escape, they do not interbreed with the wild fish but can displace the wild species outright. An example occurred in the late 1990s in the Pacific Northwest, where escaped farm-raised Atlantic salmon negatively impacted the native Pacific salmon population.

 

  • Coastal habitat destruction—When coastal areas are cleared of their natural vegetation in order to allow the development of aquaculture, loss of natural species (both aquatic and land-based) and loss of erosion control can result. A prime example is the large-scale loss of tropical mangrove forests in Southeast Asia as these areas are converted to shrimp aquaculture. Shrimp need the brackish water of ecosystems like coastal mangrove forests to grow and reproduce. Conversion of these areas to aquaculture has been lucrative for the shrimp industries of such nations as Thailand, China, and Indonesia. However, it jeopardizes many native species found only in the mangrove ecosystem and may also result in massive erosion of coastal lands when the next large storm, typhoon, or tsunami hits these areas. Addressing this concern means limiting large-scale aquaculture operations to less sensitive coastal areas and establishing coastal preserves to ensure that these delicate ecosystems survive.

 

  • Displacement of subsistence fishermen by large aquaculture corporations—Hundreds of millions of people worldwide rely on subsistence fishing to provide protein in their diets and to earn money for basic purchases through the sale of fish at local markets. Most of these subsistence fishing activities use traditional methods that have little impact on the larger environment. When large aquaculture interests move into coastal areas, they displace local subsistence fishermen, robbing them of their livelihoods. Not only does this create economic hardship, but from an environmental standpoint it also pushes these impoverished people toward highly damaging practices, such as the blast fishing discussed above, in a desperate attempt to make a living.
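Here is the back-of-envelope oxygen-demand sketch promised in the first bullet above. Every number in it (feed load, oxygen demand per kilogram of feed, water volume) is a hypothetical value chosen only to illustrate the mechanism, not data from any cited study:

```python
# Back-of-envelope: can decaying uneaten feed outrun a pond's oxygen supply?
# All values below are assumptions for illustration only.

feed_wasted_kg_per_day = 50.0   # uneaten feed entering the water daily (assumed)
o2_demand_per_kg_feed = 1.2     # kg of O2 consumed decaying 1 kg of feed (assumed)
water_volume_m3 = 20_000.0      # volume of the affected water body (assumed)
dissolved_o2_mg_per_l = 7.0     # roughly typical for healthy water (~7 mg/L)

demand_kg_per_day = feed_wasted_kg_per_day * o2_demand_per_kg_feed
available_kg = water_volume_m3 * 1_000 * dissolved_o2_mg_per_l / 1e6  # m3->L, mg->kg

print(f"O2 demanded by decaying feed:  {demand_kg_per_day:.0f} kg/day")
print(f"O2 held in the water body:     {available_kg:.0f} kg")
print(f"Days to exhaust (no re-aeration): {available_kg / demand_kg_per_day:.1f}")
```

Under these assumptions, the feed’s oxygen demand (60 kg/day) would exhaust the water body’s dissolved oxygen (about 140 kg) in a little over two days if no currents re-aerated the site, which is exactly why site selection and feeding discipline matter so much.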

 

          As with most industries, aquaculture is not inherently bad when it is conducted in a controlled and responsible manner. In fact, the argument can be made that it is an industry crucial to the future of humanity and to the conservation of natural fisheries, and that its development should be encouraged. However, such development and growth must be done responsibly. Just as land-based industries have had to comply with a range of environmental regulations in recent decades, so must the aquaculture industry, in order for it to be sustainable. One factor that has exacerbated its negative environmental impacts over the last two decades is that aquaculture is growing most rapidly in nations with few or no environmental regulations. As the economies of these nations mature, and as pressure is applied by international organizations like the UN and by consumers seeking sustainably produced products, it is increasingly likely that they will begin to impose reasonable environmental restrictions on their aquaculture industries.
          A comprehensive discussion of aquaculture is beyond the scope of this essay, but the interested reader is encouraged to read the report by Hall et al. [8], which provides an extensive discussion of the global status of aquaculture, along with detailed information on what is needed to ensure its long-term sustainability.

 

Actions to Help Ensure Sustainable Fisheries for Future Generations:

          The oceans’ ecosystems are extremely complex, and one must be cautious when drawing conclusions about the reasons for the decline in our marine fisheries or when proposing solutions. However, it is generally agreed that the following initiatives would help improve the quantity and diversity of fish living in the Earth’s oceans over the long term:

1.  Strict catch limits and moratoria on fishing of certain species to allow recovery time—This is not a new idea; moratoria and catch limits have been used with some success to manage commercial fish populations such as cod in the past. But it is critical that these tools continue to be applied. Enforcing moratoria and limits is difficult because of the vastness of the sea: while the United Nations and some national governments attempt oversight, they cannot by themselves ensure full compliance, which therefore depends to a large extent on education and on the “honor system.” Many commercial fishing interests follow the established rules; others balk at them out of short-sighted profit motives. These interests must be educated about the consequences of continued, unrestrained overfishing: if it continues, the commercial fishing industry will eventually cease to exist altogether, because the fish it seeks will be extinct. Framed in this way, with the understanding that a sustainable fish population is in their own best interest, it may be possible to persuade the vast majority of commercial fishing interests to abide by limits.

2.  Establish economic sanctions on companies or nations that routinely ignore agreed-upon catch limits—In a perfect world, fishing interests could be persuaded to abide by catch limits purely out of long-term self-interest, i.e., to ensure the continued health of the fisheries upon which their businesses rely. Since ours is not a perfect world, however, we can expect some fishing interests to continue violating limits despite our best efforts at education. For this reason, national governments and international organizations such as the UN should establish an agreed-upon system of sanctions to economically punish companies and nations that continue to violate limits. Such sanctions could include limiting these entities’ access to loans and investment capital, or freezing monetary assets held in banks. Such measures have been used to pressure rogue states (e.g., Iran) over political actions incompatible with international law; they could also be applied in cases of extreme environmental malfeasance affecting global fisheries.

3.  Educate consumers—Consumers have the ability to “vote with their pocketbooks” and refuse to buy fish products that they believe are of insufficient quality or are tainted in environmental or political terms. Many consumers, especially in wealthy first-world countries such as the US, have become increasingly conscious of the quality and provenance of their food. The desire for locally and regionally sourced foods, and for sustainably and humanely produced foods, is growing, and interest in “fair trade” foods, which promise the farmer or producer a significant portion of sales revenues, is also strong. Given a choice, many consumers will select the food alternatives that are most sustainably and justly produced. Labeling that includes the origin and content of fish, as well as information about the conditions under which it was raised or harvested, is critical to informed consumer choice. Consumers must also be educated about the state of our wild fisheries, about issues related to aquaculture, and about some of the environmental abuses present in the fishing and aquaculture industries. Faced with strong consumer pressure to improve its operations or lose sales, the seafood industry as a whole will adopt better, more environmentally responsible practices. Prominent examples illustrate that consumer pressure does drive improvements in seafood industry practices. One is the 1986 US consumer boycott of tuna, which led to changes in tuna fishing methods in the eastern tropical Pacific to minimize the by-catch of dolphins; while not perfect, the dolphin-safe tuna inspection and labeling system that US tuna producers adopted has yielded large reductions in dolphin mortality associated with tuna fishing. Another is the 2002 boycott of the Chilean sea bass (a.k.a. the Patagonian toothfish), led by prominent chefs and restaurateurs. This fish came perilously close to extinction in the 1990s, when it became a trendy dish in high-end restaurants. Following the boycott, the fishery was brought under better control, and 2014 reports from marine organizations and environmental groups indicate its population is recovering.

4.  Discontinue or counter government subsidies that distort fish prices and incentivize poor production practices—In some instances, seafood produced in a non-sustainable manner is priced below seafood produced more responsibly, not because it is cheaper to produce but solely because of government subsidies. In these cases, subsidies must be removed or neutralized in order to promote better fishing practices. While this will cause fish prices to rise, the rise will simply reflect the true cost of production. A prime example is the shrimp market in the United States. Shrimp is the single most popular seafood item in the US, and the market for it is huge. Most shrimp sold in the US today is produced in China or Thailand by large aquaculture interests, yet the US has an enormous domestic source of naturally produced (wild) shrimp in its Gulf Coast fishery. These wild shrimp fisheries lie largely within US territorial waters and are subject to relatively strict US fishing regulations, which help ensure that good practices are used and catch limits are observed. Prior to the early 2000s, most shrimp consumed in the US was caught in the Gulf of Mexico. Since 2009, however, the US market has been flooded with inexpensive imported shrimp. The governments of seven countries (China, Ecuador, India, Indonesia, Malaysia, Thailand, and Vietnam) subsidize their aquaculture interests with billions of dollars annually, allowing them to sell shrimp into the US market at below production cost. The unsubsidized Gulf Coast shrimping industry, centered in Louisiana, cannot compete, and many shrimpers have been driven out of business. Recently, the Louisiana seafood industry has asked the US government to impose a tariff on shrimp imports from these countries in order to neutralize the effect of the subsidies [9]. Regardless of one’s economic views on tariffs and subsidies, from a sustainability perspective it would be preferable for the Louisiana shrimp industry to supply more of the US market, since much of the shrimp aquaculture done in Asia is largely unregulated and uses questionable practices. Furthermore, the embodied energy (a concept familiar to readers of this essay series) associated with shrimp that must be transported thousands of miles is huge.

5.  Better stewardship of the oceans to avoid pollution—It is self-evident that the healthier and cleaner ocean ecosystems are overall, the healthier their inhabitants will be. The subject of ocean pollution is a huge one; its causes are many and the solutions complex. Addressing it will require international cooperation, since no one nation can fully address the problem through its own actions and laws, and a full treatment is beyond the scope of this essay. Let us simply say that we should do all we can to mitigate and prevent ocean pollution, both to protect fisheries and to promote the health of the planet overall.

6.  Control global warming—As with ocean pollution, the subject of global climate change and global warming is beyond the scope of this essay. Its damaging effects go beyond fisheries and touch all aspects of life on Earth. However, as discussed above, some specific threats to fisheries (i.e., the impacts on coral reefs and on cod) are being driven by global warming. We cannot reverse all of its impacts, but we can mitigate some of its effects if we start addressing it now.

 

          Arresting and reversing the decline in our marine fisheries will not be easy; it will require a broad commitment from all the nations of the world. Most of the challenges are not technical, because from a scientific perspective most experts agree on what needs to be done: prevent overfishing, stop pollution, engage in responsible aquaculture, and so on. Rather, the challenges are political and social. We need the political and social will to improve current fishing and aquaculture practices. Building international consensus on the conservation strategies discussed here will be a difficult task. Yet there are successes to build upon, such as the recent resurgence of cod and Chilean sea bass populations. And there really is no alternative to consensus, because seven billion human beings need to eat if they are to survive, and the oceans continue, as they have throughout human history, to be a critical source of that food.

The sea does not reward those who are too anxious, too greedy, or too impatient. One should lie empty, open, choiceless as a beach - waiting for a gift from the sea.

Anne Morrow Lindbergh


References and Further Reading:

  1. “Challenges to International Waters:  Regional Assessments in a Global Perspective”, report of the Global International Water Assessment group, published by the United Nations Environment Programme in collaboration with GEF, the University of Kalmar and the Municipality of Kalmar, Sweden, and the Governments of Sweden, Finland, and Norway, 2006.
  2. Pala, Christopher, “Detective Work Uncovers Under-reported Overfishing: Excessive Catches by Chinese Vessels Threaten Livelihoods and Ecosystems in West Africa”, Nature, Vol. 496, April 4, 2013.
  3. Lalasz, Bob, “Coral Reefs and Climate Change:  What the New IPCC Report Says,”  Cool Green Science:  The Science Blog of the Nature Conservancy, March 31, 2014.  Retrieved from http://blog.nature.org/science/2014/03/31/ipcc-coral-reefs-science-climate-change/  on June 24, 2014.
  4. “Cod and Future Climate Change,” International Council for the Exploration of the Sea (ICES) Report No. 305, September 2010.
  5. “Climate, Fisheries, and Protected Resources”, National Oceanic and Atmospheric Administration (NOAA), 2014. Retrieved from http://www.nmfs.noaa.gov/stories/2014/03/climate_portal.html on June 26, 2014.
  6. Emerson, C., “Aquaculture Impacts on the Environment”, CSA, 1999. Retrieved from http://www.csa.com/discoveryguides/aquacult/overview.php  on June 30, 2014.
  7. Sentenac, H., “GMO Salmon May Soon Hit Food Stores, but Will Anyone Buy It?” FoxNews.com, March 11, 2014.  Retrieved on July 10, 2014, from http://www.foxnews.com/leisure/2014/03/11/gmo-salmon-may-soon-hit-food-stores-but-will-anyone-buy-it/
  8. Hall, S.J., A. Delaporte, M. J. Phillips, M. Beveridge and M. O’Keefe. 2011. Blue Frontiers: Managing the Environmental Costs of Aquaculture. The WorldFish Center, Penang, Malaysia.   Retrieved on July 10, 2014, from http://www.worldfishcenter.org/resource_centre/WF_2818.pdf
  9. “Coalition of Gulf Shrimp Industries Files for Relief from Subsidized Shrimp Imports”, Louisiana Seafood News.com, January 15, 2013. Retrieved on July 11, 2014, from http://www.louisianaseafoodnews.com/2013/01/15/gulf-shrimp-coalition-files-for-relief-from-subsidized-shrimp-imports/

 

  •   In Fall 2014, look for Installment Eleven of this series, which will explore sustainability issues related to the modern pharmaceutical industry. The inspiration for that essay is The Apothecary, by the 20th-century Hungarian artist Vida Gabor.

The Man at Work Collection--Studies in Sustainability

Installment Nine:  “From Lanterns to LEDs -- A Look at the Evolution of Sustainable Lighting”

By Deborah L Jackman, PhD, PE, LEED AP™

Glassblowing, 1932, by Hugo von Bouvard (1879-1959), oil on canvas

A Brief History of Lighting:

          It is easy for us to forget that humans have had access to artificial lighting for only about two hundred years. Prior to the early 19th century, when gas lamps began to be introduced into some major cities in the United States and Europe, people lived a very different life, one whose rhythms were governed largely by the rising and setting of the sun. A. Roger Ekirch, a historian on the faculty of Virginia Tech, writes of life before artificial lighting in his best-selling book, At Day’s Close: Night in Times Past [1]. Ekirch’s research reveals glimpses of a human culture awash in superstition and fear, with many people refusing to leave their homes at night, convinced they would encounter demons, witches, or even Satan himself. In large cities like London and Paris, street crimes committed by roving gangs of thugs were rampant after dark; the street crime rate in cities, adjusted for population, is estimated to have been as much as ten times greater before 1800 than it is today. People’s patterns of recreation and sleep also differed greatly from today’s, with activities requiring light, such as reading and writing, confined largely to daylight hours, and hence largely reserved for the upper classes, who could afford leisure time during the day. While candles and torches had existed for centuries, such devices required constant tending and were a source of fires, which, in a time of limited fire-fighting ability, people feared perhaps even more than the alleged demons lurking in the dark. Furthermore, candles tended to be expensive, and the working classes usually used rushlights instead: a reed dipped in animal fat, which burned for only a short time and smoked excessively. Even those in the upper classes who could afford high-quality beeswax candles would have had to burn as many as 100 candles simultaneously to achieve the illumination provided today by a single 60-watt incandescent bulb.

          Human society began to change significantly starting in the mid-18th century, as oil lanterns came into common use. Oil lanterns allowed a more controlled burn, better fire safety, and more predictable, longer-lasting light. Then, starting in the early 19th century, gas lamps began to be used in large cities, which installed infrastructure to pipe gas derived from coal and oil into people’s homes. In rural areas, oil lanterns were gradually replaced by kerosene lanterns. The resulting demand for kerosene spurred the growth of the oil and gas industry; indeed, the most prized early product of oil refining was kerosene rather than gasoline. Only later, after the Model T was introduced in the early 20th century, did gasoline overtake kerosene as the most sought-after product of petroleum refining. Our subject painting, Glassblowing, depicts artisans blowing glass mantles for use in the manufacture of kerosene lanterns. Kerosene lanterns persisted as a major source of artificial light in rural America well into the early 20th century, prior to the rural electrification programs carried out as part of the New Deal.

          The biggest leap in the development of artificial lighting came with the incandescent light bulb. Light is produced in an incandescent bulb by passing electric current through a filament sealed in an evacuated glass envelope: the heated filament glows, producing light, while the absence of oxygen keeps it from oxidizing. While Thomas Edison is traditionally credited with inventing the incandescent light bulb, it existed in various forms well before Edison. Much of the early research centered on finding a suitable material for the filament—a material that would glow, or “incandesce,” that could be manipulated easily during manufacturing, and that was cheap and durable. Edison’s breakthrough, beginning in 1879-1880, was a practical, long-lived carbon filament, which he and his team improved steadily through the 1880s; the ductile tungsten filament very similar to what we use today was developed and patented somewhat later at General Electric. Whatever his role in the actual invention of the light bulb, Edison was indisputably the major force in its commercialization. Incandescent bulbs represented a huge improvement over earlier forms of illumination because they operated without an open flame. This immensely reduced the fire hazard associated with artificial lighting, a feature of increasing importance as America became more urbanized, with higher population densities crowded into fire-prone urban housing.

            Perhaps the most amazing feature of the incandescent light bulb is its longevity as a product. In a world where technologies are launched, mature, and become obsolete within a few years, or at most a few decades, incandescent bulbs have remained a viable commercial product for roughly 110 years. They are cheap to make and easy to use. From a technical standpoint, however, they are very inefficient producers of light: of the electrical power consumed by an incandescent bulb, only about 5% is converted to light, and the remaining 95% becomes heat. That means the traditional incandescent light bulb is actually a much better heater than it is a light! Thus, in this era of increased awareness of energy efficiency, the venerable incandescent bulb is finally being phased out in favor of alternatives, including compact fluorescent lamps (CFLs) and light-emitting diodes (LEDs).
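A quick worked example of that 5% figure, for a standard 60-watt bulb:

```latex
P_{\text{light}} = 0.05 \times 60\,\mathrm{W} = 3\,\mathrm{W},
\qquad
P_{\text{heat}} = 0.95 \times 60\,\mathrm{W} = 57\,\mathrm{W}
```

Only 3 watts of the 60 drawn from the wall emerge as visible light.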

          Compact fluorescent lamps (CFLs) first became widely available in the 1990s. They operate on the same principle as traditional tube-style fluorescent lamps, except that the tube is bent to fit into roughly the same space as a standard incandescent bulb: an electronic ballast in the base drives an electrical discharge through mercury vapor inside the tube, producing ultraviolet light that excites phosphor coatings on the tube’s inner surface, which luminesce, creating visible light. CFLs became commonplace in the first decade of the 21st century, when rebates and other purchasing incentives, coupled with education programs showing consumers how CFLs would reduce their utility bills, provided many consumers with an incentive to replace incandescent bulbs. CFLs last far longer and are far more energy efficient than incandescent bulbs, but they suffer from other shortcomings, discussed in greater detail below.

The newest type of artificial light source is the light-emitting diode (LED). LEDs are solid-state semiconductor devices that emit light within a narrow band of wavelengths (a single color) when an electric current passes through them. Single LEDs have been in use since the 1970s as panel indicator lights on electronic devices; only in the last decade have LED-based devices been developed that can replace incandescent bulbs in task-lighting applications. An LED light bulb combines many individual LED units, often together with phosphors, so that the emitted wavelengths sum to sufficient intensity and a collectively white spectrum. LED bulbs are still quite expensive relative to both incandescent bulbs and CFLs, but their good quality of light, long life, and energy efficiency show great promise to fully replace incandescent bulbs in the coming years.

Environmental Impacts of Artificial Lighting:                              

Before we look specifically at new developments in artificial lighting and at alternatives to the incandescent light bulb, we need to investigate some of the environmental impacts of artificial lighting.   This investigation informs the search for the best and most sustainable modern lighting systems.

1.    Energy efficiency--Nearly all Americans are aware of the government-mandated transition currently underway to phase out incandescent light bulbs, and most are also aware of the primary reason behind the phase-out: the need to become more energy efficient. Indeed, energy efficiency is the major driver in many industries today, as society strives to become more sustainable by reducing greenhouse gas production through greater energy conservation. Lighting consumes 22 to 25% of all electricity used in the United States and therefore represents a major opportunity for energy savings. As noted above, only 5% of the energy consumed by an incandescent light bulb is converted to light. By contrast, compact fluorescent bulbs use only 25 to 35% as much electricity as incandescent bulbs to produce the same illumination and last 10 times longer [2]. Even more impressive are LEDs, which use 20% as much electricity as incandescent bulbs for the same illumination levels and last 25 times longer [3]. Clearly, on the basis of energy efficiency alone, the LED-based light fixture represents the way of the future.
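The cited percentages and lifetime multipliers can be turned into a rough lifetime comparison. The sketch below is only a back-of-envelope illustration: the 60-watt baseline, its assumed 1,000-hour life, and the electricity price are assumptions, while the power fractions (30% for CFLs, 20% for LEDs) and the life multipliers (10x and 25x) come from the figures above:

```python
# Rough lifetime energy/cost comparison over one assumed LED lifetime.
BASE_WATTS = 60.0        # baseline incandescent wattage (assumed)
BASE_LIFE_HOURS = 1000.0 # typical incandescent life (assumed)
PRICE_PER_KWH = 0.12     # electricity price in $/kWh (assumed)

#           name: (fraction of incandescent power, lifetime multiplier)
lamps = {"incandescent": (1.00, 1), "CFL": (0.30, 10), "LED": (0.20, 25)}

service_hours = BASE_LIFE_HOURS * 25  # 25,000 h, i.e., one LED lifetime
for name, (power_frac, life_mult) in lamps.items():
    kwh = BASE_WATTS * power_frac * service_hours / 1000.0
    bulbs = service_hours / (BASE_LIFE_HOURS * life_mult)
    print(f"{name:>12}: {kwh:6.0f} kWh, {bulbs:4.1f} bulbs, ${kwh * PRICE_PER_KWH:5.0f}")
```

Under these assumptions, one LED lifetime (25,000 hours) consumes about 300 kWh, versus roughly 1,500 kWh for the string of incandescent bulbs it replaces: about 1,200 kWh of operating energy avoided per lamp, which is the avoided energy invoked in the life cycle discussion below.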

2.    Toxicity and Resource Depletion—Energy efficiency is only one component of how sustainable a product is. As we have seen in other essays in this series, the best way to assess the overall sustainability of a product is via a life cycle assessment, which accounts not only for the energy consumed during the product’s use, but also for the energy consumed during its manufacture (i.e., embodied energy) and for the hazardous waste and toxic chemicals generated over its life cycle. Viewed in this context, the picture for incandescent, fluorescent, and LED lamps is more nuanced than energy efficiency alone suggests. According to researchers Lim, Kang, Ogunseitan, and Schoenung [4], both CFLs and LEDs are categorized as hazardous waste under current federal regulations when disposed of at end of life, due to excessive lead leachability and their high content of copper and zinc (and, in the case of CFLs, mercury). Incandescent bulbs, in contrast, are not so classified. Lim et al. also examined the resource depletion that occurs during production of the three light sources. They concluded that CFLs and LEDs have higher resource depletion and toxicity potential than the incandescent bulb, primarily because of their aluminum, copper, gold, lead, silver, and zinc content. When compared on an equivalent-quantity basis (correcting for their different lifetimes), CFLs were found to have 3 to 26 times higher potential impact than incandescents, and LEDs 2 to 3 times higher. These researchers looked only at the resource depletion and hazardous waste impacts of the three lighting sources; they did not perform a complete life cycle assessment that would factor in the reduced operating energy of LEDs and CFLs.

The most interesting result of my research into these impacts is that a database search did not reveal any journal articles presenting a comprehensive life cycle assessment comparing the three light sources and including both energy use (operational and embodied) and toxics generation potential. Clearly, this is an area ripe for additional research. Were the operational energy consumption of these three light sources factored into a comprehensive life cycle assessment, one can surmise that LEDs would prove at least as sustainable as incandescents: even though they have 2 to 3 times higher resource depletion and hazardous waste generation potential, the avoided pollution (both greenhouse gases and toxics emitted by power plants) from their far lower energy consumption would likely offset their environmental downside. The case for CFLs is less clear, because while they are more energy efficient than incandescents, their up-to-26-times-higher potential hazardous waste and resource depletion impact would require a great deal of avoided pollution to offset. Compared directly to LEDs, CFLs are clearly the inferior choice, having lower energy efficiency and higher negative environmental impacts. In any case, the sustainability profile of both LEDs and CFLs would improve significantly if an effective waste management system for collecting used bulbs were developed. This will require a significant amount of consumer education and a change in the behavior of the general public, who in many cases still dispose of spent CFLs in their general trash, despite being instructed not to. Perhaps some sort of cash deposit system, like the one that used to apply to beverage containers in the 1960s and 1970s, could incentivize consumers to properly recycle LEDs and CFLs.

          3.    Light Pollution—Another negative impact of our use of artificial lights is light pollution: the alteration of outdoor light levels from those present naturally, due to man-made light sources [5]. The increase of light levels at night is more than a nuisance that produces unattractive glare or interferes with activities requiring a dark sky, such as star gazing. Inappropriate light levels during the night have been shown to damage a number of nocturnal animal species that depend upon darkness as part of their normal life cycle. They are also a danger to humans, whose circadian rhythms can be disrupted, leading to sleep disorders and general health problems. So serious has this problem become that scientists and concerned citizens have founded the International Dark-Sky Association, an organization whose purpose is to educate people about light pollution and help reduce it [6]. An excellent article by Gaston, Davies, Bennie, and Hopkins [7] summarizes the current state of these efforts. They first catalogue the forms of light pollution: 1) glare and over-illumination (caused by excessive brightness of a light source); 2) light clutter (excessive grouping of light sources); 3) light trespass (unwanted direct lighting of an area); and 4) skyglow (increased night-sky brightness produced by upwardly emitted and reflected light). They then analyze the various ways to reduce light pollution, including zoning ordinances that prohibit artificial light in environmentally sensitive areas; reducing the duration of lighting; reducing light intensity to the minimum needed for safety and human activity; reducing light trespass and skyglow through properly designed directional lighting; and broadening the spectrum of lights to more nearly mimic natural light. Of the three lighting devices under consideration here, LEDs offer some distinct advantages related to light pollution, as discussed in greater detail below.

4.    Color Rendition—Color rendition refers to how objects appear when illuminated by a light source, compared to how the same objects appear in natural sunlight. Light sources with poor color rendition make objects appear unnaturally red, yellow, or blue. In general, the best light sources for general use are those that mimic the wavelengths present in natural sunlight (a fairly broad spectrum). In certain non-critical, limited applications, lights with poor color rendition (such as the low-pressure sodium lamps often found in underground parking structures) can be used, if their energy efficiency, cost, and durability outweigh color rendition considerations. But for most sustained human activities, broad-spectrum lights are more efficacious and healthier for occupants. Lighting designers quantify color rendition with the Color Rendering Index (CRI) [8]. A CRI of 100 represents a perfect light source in terms of color rendition; natural sunlight has, essentially, a CRI of 100. One advantage of incandescent bulbs is their excellent color rendition (CRI approximately equal to 100). At the other end of the scale are specialty sources such as the yellow low-pressure sodium lamps noted above, whose color rendition is so poor that their CRI is actually negative. CFLs have historically been criticized for casting a blue tint on objects, although the “warm white” CFLs now available are designed to minimize this somewhat; a typical CFL has a CRI between 50 and 70. A positive feature of LEDs is their CRI, which, while not as good as incandescents’, is an improvement over CFLs: CRIs between 80 and 90 are typical of today’s LED lamps.

Future Directions for More Sustainable Lighting Systems:

          What are the future trends in lighting? The first is clearly a drive toward increased energy efficiency. For this reason, the most common lamp will be the LED. As discussed previously, it is more efficient than the incandescent bulb, has fewer environmental impacts than the CFL, has a very long life, and has reasonably good color rendition. It is currently quite expensive, but costs are projected to drop as manufacturing ramps up and production volumes increase. The CFL, currently the most common energy-efficient alternative to the incandescent bulb, will likely vanish once LEDs become more cost-competitive. Most lighting experts agree that the CFL is an interim technology: in addition to its negative environmental impacts and mediocre color rendition, it lacks directionality and dims poorly. The “dimmable” CFLs currently on the market generally perform badly, exhibiting a narrow range of intensity modulation and an audible buzz in operation. Perhaps most surprisingly, and despite the current phase-out of conventional tungsten-filament incandescent bulbs, other types of incandescent lamps will likely continue to be used for specialized applications. For example, halogen lamps (a type of incandescent bulb somewhat more energy efficient than conventional tungsten bulbs) will remain in production and will be the preferred light source where exceptional color rendition is needed, such as in art museums, photography studios, and commercial displays.

According to nationally-recognized lighting designer James R. Benya [9], additional future lighting trends will include:  

o   More efficient luminaires – the luminaire is the technical term for the fixture in which the lamp (i.e., the bulb) is placed. New luminaire designs promote directional lighting and effective shading: light is directed onto the required task, and shading to mitigate light pollution is incorporated into the design. Energy is also conserved, because overall illumination levels can be reduced when light is focused efficiently on task areas. LEDs are particularly compatible with this new concept in luminaire design because they are highly directional and exhibit little light scatter.

o   Integrated use of daylighting – Obviously, the most sustainable form of light is sunlight: it costs us nothing and has no negative environmental impacts. In fact, people who work in day-lit environments report a greater sense of well-being than those who work under artificial lights. The problem with extensive use of daylighting in the past has been that it is ephemeral, with optimum light levels lasting only a few hours at most, depending on building orientation; the rest of the time, the space is either over-lit or under-lit. Modern control systems that employ light sensors and automated shading devices can optimize light levels in a room. When daylight levels are high, all artificial lights are automatically turned off, conserving energy, and they are gradually turned back on, in modulated fashion, as daylight dwindles. If daylight is too intense, automated shading adjusts light levels to optimum intensity. Daylighting, coupled with advanced control systems, offers an opportunity to reduce the need for artificial light dramatically, provided a building is designed with suitable architecture and with daylighting as a design objective.
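A minimal sketch of the control idea just described, assuming hypothetical sensor and dimmer interfaces (a real installation would talk to its fixtures over whatever building-automation protocol is in use, such as DALI or 0-10 V dimming):

```python
# Daylight-harvesting sketch: modulate artificial light to make up only the
# shortfall between measured daylight and the design target. The sensor is
# assumed to report the daylight contribution alone, to keep the loop simple.

TARGET_LUX = 500.0  # design illuminance for the task area (assumed)
GAIN = 0.1          # small proportional gain to avoid visible flicker (assumed)

def control_step(daylight_lux: float, dim_level: float) -> float:
    """Return an updated dimming level in [0, 1] (0 = off, 1 = full output)."""
    shortfall = TARGET_LUX - daylight_lux
    dim_level += GAIN * shortfall / TARGET_LUX
    return min(max(dim_level, 0.0), 1.0)  # clamp: fully off in bright daylight

dim = 0.5
for daylight in (800, 800, 800, 300, 100, 50):  # afternoon fading into dusk
    dim = control_step(daylight, dim)
    print(f"daylight {daylight:>3} lux -> dimming level {dim:.2f}")
```

The point of the modulated response, rather than simple on/off switching, is that occupants never perceive an abrupt change as daylight fades.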

o   “Just enough light levels”-- Recent advances in understanding human physiology tell us how much light is needed for various tasks; any light beyond the required level is wasted energy. Hence, we can design around the optimum levels needed for various tasks and eliminate excess wattage. To allow for differences in the light levels individuals need, owing to variations in age, visual acuity, and the like, task lighting can be designed with adjustable controls, so that occupants can dim or raise light levels to suit their preferences.
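The sizing relation behind "just enough light" is the standard one between luminous flux and average illuminance delivered to a surface, E = Φ/A (ignoring fixture and room losses). As a worked example with assumed values, a 2 m² desk lit to a commonly recommended 500-lux office task level requires:

```latex
\Phi_{\text{required}} = E_{\text{task}} \times A
                       = 500\,\mathrm{lux} \times 2\,\mathrm{m^2}
                       = 1000\,\mathrm{lm}
```

That is roughly a single modern LED task lamp aimed at the desk, rather than raising an entire room to the 500-lux task level.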

o   Infrared (IR) and/or motion sensors for outdoor lighting -- Outdoor path and security lighting contributes to light pollution even when properly shaded to minimize light trespass. One technique that will become increasingly common is to integrate IR or motion sensors into the control systems for outdoor lights, so that when people are not present, the lights turn off completely. This further reduces light pollution and saves energy. LED life is not shortened by on/off cycling the way incandescent life is, so there is no downside to switching them off as often as conditions allow.
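A sketch of that on/off logic, again with hypothetical hardware hooks (read_motion and set_lamp stand in for whatever sensor and relay interface is actually present):

```python
# Occupancy-controlled outdoor light: off by default, on for a hold period
# after the last detection. LEDs tolerate frequent cycling, so aggressive
# switching carries no lifetime penalty.
import time

HOLD_SECONDS = 120.0  # how long to stay lit after the last detection (assumed)

def run_outdoor_light(read_motion, set_lamp):
    """read_motion() -> bool (motion seen); set_lamp(bool) switches the fixture."""
    last_motion = float("-inf")  # no motion seen yet, so the lamp starts off
    while True:
        if read_motion():
            last_motion = time.monotonic()
        set_lamp(time.monotonic() - last_motion < HOLD_SECONDS)
        time.sleep(1.0)
```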

     Public recognition of the need for sustainable lighting systems is as critical as having the technology to implement them. As education spreads about the importance of sustained periods of darkness to Earth’s ecology and to human health, public interest in better designed and controlled lighting systems will grow. Many counties and municipalities have already incorporated light pollution codes into their zoning ordinances; Cochise County, Arizona, for example, publishes a particularly detailed and comprehensive Light Pollution Code on its website, http://cochise.az.gov/cochise_planning_zoning.aspx?id=476 . Other areas are promoting voluntary light reduction campaigns, coupled with education about the need for reduced and/or better designed lighting; the Hudson Highlands area of New York State provides a good example of such a voluntary approach, http://www.hhlt.org/lightPollution.html . While we will never return to the deep darkness our ancestors experienced before the advent of artificial light, the world of the future will most likely be somewhat less brilliantly lit than the developed world of the late 20th and early 21st centuries that many of us are used to. Even the most light-polluted city in the United States, Las Vegas, Nevada, with its lighting excesses (e.g., the infamous Las Vegas Strip), has recently installed new LED streetlights designed to save energy and reduce glare. That says it all.

I will love the light for it shows me the way, yet I will endure the darkness because it shows me the stars.

Og Mandino

References and Further Reading:

  1. Ekirch, A. Roger (2006), At Day’s Close: Night in Times Past. W.W. Norton and Company. ISBN-10: 0393329011.
  2. United States Department of Energy (October 17, 2013), Fluorescent Lighting. Retrieved from http://energy.gov/energysaver/articles/fluorescent-lighting.
  3. United States Department of Energy (July 29, 2012), LED Lighting. Retrieved from http://energy.gov/energysaver/articles/led-lighting.
  4. Lim, S., et al., “Potential Environmental Impacts from the Metals in Incandescent, Compact Fluorescent Lamp (CFL), and Light-Emitting Diode (LED) Bulbs,” Environmental Science and Technology, Vol. 47, No. 2, January 2013.
  5. Hollan, J., “What is Light Pollution, and How Do We Quantify It?,” Darksky2008 Conference Paper, Vienna, August 2008.
  6. http://darksky.org/
  7. Gaston, K., Davies, T., Bennie, J., and Hopkins, J., “Reducing the Ecological Consequences of Night-Time Light Pollution: Options and Developments,” Journal of Applied Ecology, Vol. 49, p. 1256-1266, 2012.
  8. Guo, X. and Houser, K.W., “A Review of Color Rendering Indices and Their Application to Commercial Light Sources,” Lighting Research and Technology, Vol. 36, No. 3, p. 183-197, September 2004.
  9. The Energy Center University℠ Short Course, Lighting and Daylighting: Design, Controls, and Technology, Oconomowoc, Wisconsin, October 2, 2013.


      In the Summer of 2014, look for Installment Ten of this series.  The issues surrounding making the world’s fisheries sustainable will be explored using the work Fishermen Hauling in their Nets at Sea by French artist Georges-Jean-Marie Haquette as inspiration.                        

The Printshop low resolution image.jpg


The Print Shop, oil on canvas by the German artist D. Heim, depicting a duplex printing press, converted from steam to electric power, ca. 1900.



Introduction:


The invention of the printing press is widely recognized as one of the most significant developments in all of human history.  With it, knowledge and ideas could be widely disseminated, setting the stage for the Enlightenment and helping to usher in the modern era.  Influential authors have been responsible, in whole or in part, for initiating major societal movements, including the Protestant Reformation (i.e., the Bible printed in the local vernacular), the abolition of slavery in the United States (consider Harriet Beecher Stowe’s seminal novel, Uncle Tom’s Cabin), women’s suffrage in Britain and the U.S., and innumerable others, including the rise of the modern environmental movement.


In this essay, we examine the contributions and impact of several notable American environmental writers spanning the period from the Early Republic through the 20th century.  We will look briefly at the contributions of James Madison, Henry David Thoreau, John Muir, Aldo Leopold, and Rachel Carson in furthering the modern environmental movement.  Arguably, these individuals were responsible for starting the environmental movement not only within the U.S. but globally as well.  Without their contributions, it is likely that the importance of environmental stewardship and sustainability would not be widely recognized or understood.  We will also consider what characteristics future works of literature will need in order to support environmental progress globally.  Who will the Rachel Carson of the 21st century be, and what form will her work take in our current digital age?



The Modern Environmental Movement and the Authors who Contributed to It:


The noted environmental scholar Ramachandra Guha [1] describes two phases in the development of the modern environmental movement: the First and Second Waves. The First Wave was characterized by the development of intellectual thought centering on the need for and importance of environmental stewardship and protection.  The Second Wave was characterized by the popularization of the environmental ideas and philosophies developed during the First Wave and their subsequent broad-based adoption by the general population.  Within the United States, this broad-based adoption of the notions of environmentalism was the catalyst that led the federal government to protect the environment through statutory means. Between the late 1960s and the mid 1970s, the United States became the first industrialized nation to adopt comprehensive federal laws to protect the environment.  Other developed nations, especially those in Western Europe and Japan, soon followed.


The First Wave spans the approximate time period from the start of the Industrial Revolution in the mid 18th century through the decade of the 1960s.  During this time, many authors and intellectuals wrote extensively in response to the observed degradation of the natural world, as evidenced by rapid deforestation, the uncontrolled hunting of such species as the American Bison and the Passenger Pigeon, and the extensive burning of coal, which generated smog in major cities like London, among other examples.  Some of the earliest authors writing in this genre were from Great Britain; for example, the great Romantic-age British poets William Wordsworth and William Blake wrote eloquently of the rural ideal.  However, starting in the late 18th and early 19th centuries, most of the writings on nature and conservation--the precursors to environmentalism as we now understand it--were American in origin.


There are a number of noteworthy American authors who in some way contributed to an appreciation of the natural world during this First Wave period.  However, this essay will feature several whose works most significantly influenced the nascent environmental movement during the First Wave--namely, James Madison, Henry David Thoreau, John Muir, and Aldo Leopold.


James Madison (the fourth President of the United States) may not be as well known for his environmental sensibilities as he is for his political accomplishments, but his contribution to the First Wave is significant nonetheless.  Like several of America’s other Founding Fathers (e.g., George Washington, Thomas Jefferson, and John Adams), Madison was a landowner and a farmer.  His identity as a farmer predated his involvement in the American Revolution and politics, and it profoundly shaped his world view.  In fact, Madison, along with the other Founding Fathers, based much of the philosophy of the American system of government on a vision of America’s citizenry as a nation of yeoman farmers and landowners.  A fascinating book that connects the agricultural roots of America’s founders to the system of government they advocated is Andrea Wulf’s Founding Gardeners: The Revolutionary Generation, Nature and the Shaping of the American Nation [2].  Madison, more than the other founders, understood the connection between agriculture and ecology.  While the likes of Thomas Jefferson wrote extensively about proper farming practices and preferred crop varieties, Madison wrote the following in his famous 1818 address to the Agricultural Society of Albemarle [3]:


“The earth contains not less than thirty or forty thousand kinds of plants; not less than six or seven hundred of birds; nor less than three or four hundred of quadrupeds; to say nothing of the thousand species of fishes. Of reptiles and insects, there are more than can be numbered. To all these must be added, the swarms and varieties of animalcules and minute vegetables not visible to the natural eye, but whose existence is probably connected with that of visible animals and plants.”


“On comparing this vast profusion and multiplicity of beings with the few grains and grasses, the few herbs and roots, and the few fowls and quadrupeds, which make up the short list adapted to the wants of man, it is difficult to believe that it lies with him so to remodel the work of nature as it would be remodelled, by a destruction not only of individuals, but of entire species; and not only of a few species, but of every species, with the very few exceptions which he might spare for his own accommodation.”


In these two paragraphs, Madison sets forth two principles that are foundational to the modern environmental movement: (1) the interconnectedness of all the species in a given ecosystem—both visible and microscopic, and (2) that man has no right to alter or destroy the natural ecosystem just to further his own objectives.  This second principle contradicts the earlier Puritan notion of man’s right to have dominion over the Earth.


          With the passing of the Revolutionary generation, a new generation of American naturalists emerged.  This generation is epitomized by Henry David Thoreau, born in 1817 in Concord, Massachusetts.  Thoreau belonged to the Transcendentalist movement, along with Ralph Waldo Emerson, Margaret Fuller, and Bronson Alcott (the father of Louisa May Alcott) [4]. Transcendentalists believed that spiritual fulfillment was to be found through immersion in and study of the natural world, not through institutionalized religion. Thoreau’s most famous literary work, Walden, recounts the two years he spent living a primitive life in the Massachusetts woods near Walden Pond.  Walden, along with several essays he wrote late in life, advocates for Thoreau’s views on what is now known as ecology.  Thoreau’s other most famous literary work--the essay Civil Disobedience--is also significant to the modern environmental movement.  Although Thoreau intended Civil Disobedience as a critique of overly intrusive government and a treatise on how citizens should protest government actions they view as unjust, his ideas of non-violent resistance were later used by activists such as Gandhi and Martin Luther King Jr. to promote social change.  Within today’s environmental movement, organizations such as Greenpeace employ Thoreau’s principles of non-violent resistance.


          The Transcendentalist Movement deeply influenced another major environmental writer and advocate--John Muir.  Muir, born in Scotland in 1835, immigrated to the US with his parents as a child and was raised on a farm in Wisconsin.  He attended the University of Wisconsin Madison for several years but never earned a degree because his choices of classes were too broad and eclectic to qualify as a concentrated major.  While at Madison, he studied chemistry, botany, geology, and various other subjects, which he later said may not have earned him a degree, but which prepared him ‘for his future wanderings’ [5]. After college, Muir traveled almost continuously for the next two decades.  Immediately following college he traveled for six years, visiting, in succession, Canada, Florida, Cuba, and New York.  It was during this time that he made his famous 1,000-mile nature walk from Indiana to Florida. After a brief time in New York, Muir booked passage on a ship to California.  He ended up in the Yosemite area of California in 1868, where he lived in a primitive cabin for the next six years, studying Yosemite’s geology, wildlife, and botany, and reading and writing extensively. While in Yosemite, he struggled to survive because he was frequently unemployed and had no prospects for a career.  During this difficult time, Muir took solace in the writings of Ralph Waldo Emerson, and in doing so adopted many of the philosophical beliefs of Transcendentalism, which were layered upon his existing scientific training and knowledge.  In the late 1870s, Muir traveled to Alaska and was one of the first European Americans to explore Glacier Bay.  Several years later, he traveled to Washington state and spent time climbing Mount Rainier and writing Ascent of Mount Rainier.

          In his mid 40s, Muir married the daughter of a California fruit ranch owner and settled down, managing the ranch and continuing to write and advocate on behalf of the wilderness, especially the Yosemite area.  He was instrumental in Yosemite becoming a national park, and he was a co-founder of the Sierra Club, an organization that still exists to advocate on behalf of environmental stewardship and responsible use of the Earth’s resources [6].  Muir’s views on the environment were characterized by a belief in preservation rather than conservation: he believed that wild areas should be left undisturbed rather than merely managed in a sustainable manner.  In this view, he differed from other conservationists of his day, who argued for “responsible” use of natural resources, such as the selective cutting of timber in natural areas and limited hunting of game.  Perhaps Muir’s biggest legacy within the environmental movement is that he was among the first of America’s environmental writers who was also an activist.  He not only wrote about nature and the need for preservation, but lobbied and organized effectively for change.  In this way, he was perhaps more like modern environmentalists than any of his contemporaries in the 19th and early 20th centuries.


          The last of the First Wave environmental writers we will explore is Aldo Leopold. Leopold, born in Burlington, Iowa, in 1887, was drawn to the outdoors as a child, avidly hiking and cataloguing the species of birds, plants, and animals he observed in the wild. Upon hearing of a new college program in forestry started at Yale University in 1900, the 13-year-old Leopold decided on a career in forestry.  He ultimately entered that program at Yale in 1905.  Upon graduation from Yale, he worked for the US Forest Service for the next 15 years, primarily in Arizona and New Mexico, where he developed the first comprehensive management plan for the Grand Canyon and helped to establish a wilderness designation for the Gila Wilderness Area, the first such wilderness set-aside in the US Forest Service system.  The Forest Service transferred him to the Forest Products Laboratory in Madison, Wisconsin, in 1924 to serve as its associate director.  He was subsequently appointed to a professorship in Game Management at the University of Wisconsin Madison in 1933.  While living and teaching in Madison, he bought an eighty-acre farm in central Wisconsin where he spent vacations and weekends.  It was on this farm that the inspiration for Leopold’s seminal environmental work, A Sand County Almanac [7], was born.

          Leopold worked the land on the farm to help restore its natural ecosystem, because the farm had been logged and overgrazed before he purchased it.  He used his observations of the farm’s ecosystem, along with his past experiences in forestry, as inspiration for the essays in A Sand County Almanac.  Through these essays, Leopold reveals an environmental philosophy that includes a “wilderness ethic” similar to Muir’s.  He believed that wild places should be valued for their own sake and left undisturbed whenever possible rather than “managed.” This view put him in opposition to the utilitarian conservationists of the early 20th century, such as Gifford Pinchot and Theodore Roosevelt, who advocated that nature be conserved so that it could be enjoyed by man through hunting and other recreational activities. Leopold also advocated for the “land ethic,” a concept which encourages the management of wildlife habitats by both public and private landowners.  Leopold believed that adequate conservation of ecosystems could not occur just by managing public lands: private landowners, educated in the basics of ecology and the scientific principles of land management, needed to embrace these principles if sufficient conservation was to occur.  A Sand County Almanac was almost never published, because Leopold finished it only a month before he died; his family was instrumental in its subsequent publication in 1949 [8].  In some sense, Leopold can be viewed as a bridge between the First and Second Wave environmental writers, because his writings did engage large numbers of readers who were already outdoor enthusiasts.  However, the larger population remained generally unaware of the importance of his works until they were re-popularized after the Second Wave had arrived.


          In moving from the First Wave to the Second Wave of environmentalism, we move from a period in which environmental writing and thought were largely the realm of intellectuals and academics to a period when environmentalism was embraced by large cross sections of the American populace.  This represents a significant paradigm shift because it was a necessary precursor to the major federal environmental regulations passed in the late 1960s and 1970s, e.g., the Clean Water Act, the Clean Air Act, the Resource Conservation and Recovery Act, Superfund legislation, and the creation of the federal Environmental Protection Agency (EPA).  Environmental scholar and activist Martin Branagan asserts that without grassroots support fundamental environmental reform is not possible [9].  Thus, without the shift from the First Wave to the Second Wave of the environmental movement, our current level of environmental protections and reforms would not have occurred.  The environmental writer who almost single-handedly ushered in the Second Wave was Rachel Carson.


          Rachel Carson, born in 1907, held an MS degree in zoology from Johns Hopkins and worked for two decades in relative anonymity as a science editor for the US Fish and Wildlife Service.  During this time, she also did freelance journalism for a number of popular publications such as the Atlantic Monthly, primarily on topics related to the oceans and the ecosystems of the sea.  She first became nationally known for her best-selling books on the oceans--The Sea Around Us and The Edge of the Sea.  The Sea Around Us garnered her the National Book Award for non-fiction in 1952.  For some time, she had been reading research studies and reviewing anecdotal accounts of the impact of pesticides on the natural world.  She became convinced that synthetic pesticides, especially DDT, were responsible for significant environmental damage, including the decline of a number of bird species.  At the encouragement of E.B. White, then a writer for The New Yorker, she embarked on an investigative journalism project to document pesticide impacts.  Silent Spring, published in 1962, was the result of her efforts.  This book, which contains an allegorical account of an American town without birds and other wildlife, captured the imagination of the American public like no book on the environment had up to that point.  So impactful was Silent Spring that Carson was invited to testify before Congress in 1963 on the dangers of pesticides.  DDT was subsequently banned in the United States (in 1972), and the major pieces of environmental legislation of the 1960s and 1970s that continue in effect to the present day, e.g. the Clean Water Act, are directly attributable to the popular and political support created by the publication of Silent Spring. It seemed that the only entities not enthralled with Carson’s work were a number of the major chemical corporations that produced DDT and other synthetic pesticides.  These companies spent large sums of money trying to discredit Carson and smear her reputation, accusing her of junk science and warning the American public that without pesticides, America would soon be so overrun by insects that we would be unable to grow enough food to feed our population [10].  Despite these scare tactics, the modern environmental movement in America was born.  Silent Spring was translated into a number of languages and helped to ignite the world-wide environmental movement.


The Future of Environmental Action and How It Might Be Shaped through Literature:


          By certain narrow measures, our environment today is in much better condition than it was in 1962.  Through the Clean Water Act, the Clean Air Act, Superfund legislation, and a host of other regulatory actions, both in the US and abroad, we are now required to clean up many of the damaging by-products we generate in our daily lives before they enter the environment. Municipal and industrial waste water treatment is now the norm in the developed world; Lake Erie, a ‘dead’ lake in the 1960s, now supports a thriving walleye population.  In the developed world, air emissions of toxic chemicals from power plants and other industrial sources have been significantly reduced. Recognition is beginning to dawn on the governments of China and India that air pollution threatens their peoples and economies, and in the near future air pollution controls like those used in the West will likely become more commonplace.  New pesticides and herbicides must now undergo testing protocols to demonstrate they are safe before being allowed on the market.  There are many other similar examples.


          In a broader sense, however, the world faces environmental problems that are so global and so fundamental in nature that they must be addressed if Earth, as we know it, is to survive.  Greenhouse gas emissions linked to global climate change, potable water shortages worldwide, deforestation, and overfishing of the oceans are just a few of the severe impacts we face as a planet.  Unlike the environmental battles of the 1960s and 1970s, which were focused on narrow issues and geographically limited in scope, today’s environmental problems can only be effectively addressed on a global scale, and they will require significant changes in how man interacts with the planet.  Wealthy multinational corporations with vested interests in the status quo, along with dysfunctional political systems in many nations, hamper direct action.


     Given the scope of world environmental problems, is it even plausible that environmental writers can be drivers for fundamental reform?   The answer to this question is uncertain, but were such writers to emerge, their work would need to possess these characteristics:


  • It would engage readers on both a rational and emotional level, as have all transformative literary works of the past.


  • It would build a persuasive case against the modern materialist mind-set.  Humans cannot continue to use the earth’s resources at the current rate.  It is unsustainable to try to support a lifestyle like that experienced in the U.S. during the 20th century.  This does not mean humanity must return to the Dark Ages, but it will require that we refocus our interests in a direction that does not promote rabid consumption.  It also means that we will have to make maximum use of energy conservation, renewable energy sources, water conservation and reuse, sustainable wildlife, forestry, and farming practices, and materials reuse and recycling on a global scale.  The challenge for writers will be to frame this new paradigm in such a way that it is perceived as positive and even enjoyable rather than as sacrificial.
      
  • It would appeal to a broad spectrum of religions, cultures, and types of government.  Not only do we live in a diverse world, but society seems to be becoming increasingly fractious, with the spirit of cooperation and civil discourse increasingly rare.  There is pressure to demonize the “other.”  The mass media, rather than helping to correct this, have been complicit in fostering it through sensational journalism and reporting.  The effective writer will have to make the case that there is more that connects us as humans than divides us, and will have to build the case that it is in the best interests of all to cooperate in addressing global environmental concerns.


  • It would build the economic case.  Those who argue against the sorts of paradigm shifts needed to ensure a sustainable future for planet Earth and its inhabitants often rely on economic arguments.  They claim such changes are too expensive.  They claim such changes will result in the loss of one nation’s competitive advantage over another.  They claim such changes will result in diminished economic status for all.  The case must be made persuasively that the ultimate costs of doing nothing will dwarf the costs of any changes we implement now.  Such economic arguments are well developed within academia, but they need to be presented in an accessible and engaging way.


  • It would promote grassroots action.  Governments and corporations tend to favor the status quo unless constituents and customers argue for change.  Even in non-democracies such as China, citizen pressure can foster change.


  • It would be presented using media that effectively engage 21st century minds and that take full advantage of the digital revolution.  Smart phones, Facebook and other social media sites, graphic novels, interactive computer gaming, the Internet, e-books….  There is a lengthy list of new media and devices through which writers can reach their audiences.  The transformative environmental writer of the future will have to engage people using these media.


     Ever since Gutenberg, major social changes have been catalyzed through literature.   While 21st century environmental challenges are daunting, they, too, can be addressed through building a global consensus.  It is fortunate that such a consensus can be promoted through literature:


The instruction we find in books is like fire. We fetch it from our neighbors, kindle it at home, communicate it to others, and it becomes the property of all.

Voltaire


References and Further Reading:


  1. Guha, Ramachandra, Environmentalism: A Global History, Longman Publishing, 2000. ISBN 0-321-01169-4.
  2. Wulf, Andrea, Founding Gardeners: The Revolutionary Generation, Nature and the Shaping of the American Nation, Alfred A. Knopf, 2011. ISBN 978-0-307-26990-4.
  3. Madison, James, “Address to the Agricultural Society of Albemarle, 12 May 1818,” The Papers of James Madison, Retirement Series, Volume 1: 4 March 1817 - 31 January 1820, edited by David B. Mattern, J.C.A. Stagg, Mary Parke-Johnson, and Anne Mandeville Colony. Charlottesville: University of Virginia Press, 2009.
  4. Witherell, E., and Dubrulle, E., “The Life and Times of Henry David Thoreau,” published by the Thoreau Library, for the 150th anniversary celebration of the publication of Walden, 1995. http://thoreau.library.ucsb.edu/thoreau_life.html
  5. Holmes, S.J., Young John Muir: An Environmental Biography, University of Wisconsin Press, 1999. ISBN-10: 0299161544.
  6. “John Muir”, Wikipedia, http://en.wikipedia.org/wiki/John_Muir
  7. Leopold, A., A Sand County Almanac: With Other Essays on Conservation from Round River, republished by Ballantine Books in 1986. ISBN-10: 0345345053.
  8. Meine, C., Aldo Leopold: His Life and Work, University of Wisconsin Press, Madison, WI, 1988. ISBN 0-299-11490-2.
  9. Branagan, M., “Environmental Education, Activism, and the Arts,” Convergence, Vol. 38, No. 4, 2005, p. 33-50.
  10. Griswold, E., “How ‘Silent Spring’ Ignited the Environmental Movement,” The New York Times Magazine, September 21, 2012.


Coming in early 2014 is Installment Nine of this series:  “From Lanterns to LEDs – A Look at the Evolution of Sustainable Lighting,” an essay inspired by the painting Glassblowing by Hugo von Bouvard.  

By Deborah Jackman, PhD, PE, LEED AP™ - originally posted on 08/14/2013

oil-rigs-in-baku.jpg

An Historical View of the Oil and Gas Industry:

In this installment of Studies in Sustainability, we are going to look at the environmental implications of one segment of the modern oil and gas industry--that portion that encompasses hydraulic fracturing (i.e., fracing). To add context, it is interesting to first look at some of the history behind the development of the modern oil and gas markets. To assist us in gaining some historical perspective, we turn to "Oil Rigs in Baku at Caspian Sea," a 1911 painting by the German artist, Viggo Langer. The location and date of this painting are highly significant in the history of the modern energy industry. Baku, located in modern-day Azerbaijan on the western shore of the Caspian Sea, is an area that even today continues to hold large oil reserves. The Caspian Sea basin became one of the earliest centers for oil production in the world because of its geology. The region lacks cap rock over its oil reserves, and consequently crude oil spontaneously seeps to the surface [1]. Because of this seepage, it was known for centuries that crude oil existed in the region. When modern drilling technology began to be developed in the late 19th century, Baku was among the first places where large scale crude oil production began.

In 1911, most crude oil was used to produce kerosene, the fuel of choice for lanterns. Kerosene had replaced whale oil as a lantern fuel in the mid 19th century, after whales became scarce due to overhunting and the whaling industry declined. However, the demand for kerosene in the US had begun to decline at the start of the 20th century. At that time, natural gas was being piped into cities to supply gas lamps, and electricity and electric lights were being introduced. John D. Rockefeller, the founder of Standard Oil, was said to be worried about his company's future in the face of declining kerosene demand. But Rockefeller had nothing to fear, because just as kerosene demand began to decline, Henry Ford introduced the first mass-produced automobile in 1908-- the Model T Ford (a.k.a. the "Tin Lizzie"). The rise of the automobile not only stabilized the crude oil market but caused it to grow exponentially. Around this same time, demand for natural gas also began to increase. Even though electric lights gradually replaced gas lamps, natural gas was used (along with coal) to fuel electric power plants. By the mid 20th century, natural gas had also largely replaced both coal and oil as a fuel to heat homes. Collectively, usage of all forms of fossil fuels increased, and the 20th century saw a rise in global energy demand unprecedented in human history. This increased energy demand fueled increased industrialization, urbanization, and a rise in the standard of living across the globe.

Today, our global thirst for gasoline, natural gas, and other fossil fuels remains unabated. Just as the developed world began to make progress in slowing the growth of its fossil fuel use through conservation, increased efficiency, and the gradual introduction of renewable energy sources, rapidly developing nations such as China and India came of age, and global energy demand has continued to climb. However, the continued use of fossil fuels is very problematic for two reasons: 1) the supply is finite, and 2) the production and use of fossil fuels have serious environmental impacts, including contributing to rising levels of greenhouse gases in the atmosphere and to global climate change. The subject of this essay is hydraulic fracturing -- popularly known as "fracing." It is a subset of the larger oil and gas industry and is perhaps one of the most controversial practices ever to arise within the energy industry. As responsible citizens and consumers of energy, we need to understand what fracing is, what its impacts are, and what alternatives exist. Under what conditions, if any, can it be defended?

What is Fracing?

Induced hydraulic fracturing (aka "fracing") is a technique used to recover crude oil and natural gas from geological rock formations for which traditional drilling techniques are ineffective. Many types of geological formations such as shale and sandstone contain small pores and void spaces which are filled with oil and gas. (Think of a wet sponge, whose pores hold water.) But because these pores are small and integral to the rock formations, and because internal pressure gradients within the rock formation are not high enough to force the gas and oil out, these materials will not flow into a well created by conventional drilling. (The sponge does not release its water just by sitting on the countertop. One must squeeze it, i.e., create pressure gradients, to release the water.) The oil and natural gas therefore can’t be recovered using conventional drilling.

The advent of fracing has allowed vast reserves of previously inaccessible oil and gas to be recovered. A vertical well is drilled down thousands of feet into the layer of rock that contains the trapped oil and gas. Once that layer of rock is reached, horizontal (directional) drilling commences, creating a horizontal well channel that runs through the oil- and gas-rich rock, parallel to the rock layering. A portion of the horizontal well has access holes bored through the casing to allow fluid to be injected into the rock layer. A fracturing fluid solution composed mainly of water, with various chemical additives and proppant (particulate matter consisting of either sand or aluminum oxide), is pumped at high pressure into the well. The high pressure liquid causes portions of the rock near the access holes to fracture. The fractures created are on the order of a millimeter or so in length. After fracturing is complete, the fracturing solution is withdrawn. However, the proppant particles remain behind and keep the newly created fractures propped open by wedging themselves into the openings. With the enlarged pore structures and fractures created, pressure gradients in the rock formation are now sufficient to overcome the hydraulic resistance, and oil and gas can flow into the well casing and be recovered.

Although fracing has been known within the oil and gas industry since the 1950s, large scale fracing operations were not practical until the technology for directional drilling was developed in the 1980s and 1990s. This is because rock formations containing oil and gas typically run in horizontal layers. To enable efficient extraction of oil and gas, the fracing operation must be able to access a relatively large area of the rock formation, which can only be done if a horizontal channel can be drilled within a specific horizontal layer of rock [2].

Fracing is very water intensive, with each fracing event using between 1.2 and 3.5 million gallons of water per well. Since some wells must be refractured several times over the course of their useful lives, a single well can consume as much as 3 to 8 million gallons of water over its lifetime [3]. Clearly, in arid areas this high water requirement is a serious problem, with water having to be transported to the well site. Even in non-arid regions, the water required for fracing operations can place a strain on local water supplies. This water intensity is one of the biggest concerns in the fracing debate. Because the fracing water is mixed with a number of chemical additives, and because it mixes with some crude oil or natural gas while in the well, it cannot be readily reused or recovered for subsequent drinking or irrigation purposes without extensive (and expensive) treatment. Nor can it safely be released to surface waters. For this reason, one common practice within the industry for disposal of spent fracing solution has been deep well injection-- pumping the wastewater deep underground, far deeper than the deepest drinking water aquifer. However, this practice, too, has come under scrutiny, as discussed below.
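
To put these volumes in perspective, the short Python sketch below simply multiplies the per-event range cited above by an assumed number of fracturing events per well; the event counts are illustrative assumptions chosen for the example, not industry statistics.

    # Back-of-the-envelope check of the per-well water figures cited in [3].

    GALLONS_PER_EVENT = (1.2e6, 3.5e6)  # low/high gallons per fracing event

    def lifetime_water_use(num_events):
        """Return (low, high) lifetime water use in gallons for a well
        fractured num_events times over its useful life."""
        low, high = GALLONS_PER_EVENT
        return num_events * low, num_events * high

    for num_events in (1, 2, 3):  # assumed event counts, for illustration only
        low, high = lifetime_water_use(num_events)
        print(f"{num_events} fracturing event(s): {low/1e6:.1f} to {high/1e6:.1f} million gallons")

    # Two to three events per well brackets the 3 to 8 million gallon
    # lifetime figure cited in the text.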

Environmental Impacts and Political Issues

Smith [4] provides an excellent summary of the various environmental impacts associated with hydraulic fracturing. These include:

  • Disruptive Land Use and Noise Pollution -- A typical drilling site for a fracing well occupies 2 to 6 acres of land and requires a holding pond for water effluents. Hundreds of trucks a year must traverse the property, hauling water and wastewater to and from the site. Wells seldom exist in isolation, and "gas farming" -- areas where large swaths of land are populated by multiple wells -- can be very disruptive. Many of the drilling sites are located in populated areas (e.g. western Pennsylvania), and such activities contribute to local traffic problems as well as to noise pollution. Proponents of fracing point out that landowners are paid lease fees and royalties for the drilling occurring on their land and are therefore fairly compensated. However, many of those impacted by noise and traffic congestion are not landowners, but rather the general public who live in the area. Furthermore, recent media reports [5], [6] suggest that some landowners are not being compensated commensurate with the value of gas or oil being removed from wells on their property, despite having legal agreements with the energy companies. This calls into question the ethics and general credibility of the energy companies involved in hydraulic fracturing.
  • Induced Seismicity -- Seismic activity is generated in two ways during fracing operations: through the hydraulic fracturing itself, during which high pressure fluid induces fractures in the rock layers, and during the disposal of spent fracing fluids by deep well injection. The amount of seismic activity induced during the fracing operation itself is negligible, because the magnitude of a seismic event is proportional to the length of the induced fracture, which in the case of fracing is on the order of only a millimeter or so. Rumors that fracing operations themselves have resulted in measurable earthquakes are unsubstantiated by science. However, deep well injection of spent fracing solutions is linked to potential earthquakes. As such, US regulations require Class II injection wells to be located in areas far from identified fault lines, and injection rates are limited to prevent substantial increases in pore pressure in the well. Seismic monitoring at deep well injection sites is used so that injection can be slowed or stopped if seismic activity is detected (a minimal sketch of such a threshold rule follows this list). The state of the art is that the causal mechanism linking deep well injection and earthquakes is not well understood, and more research is required.
  • Harmful Air Emissions -- Air emissions come from a variety of sources. During drilling, emissions come from the diesel or gasoline engines used to operate drilling rigs and fracturing engines. Trucks used to deliver water to the site and remove wastewater also degrade local air quality. In response to these increased vehicular and engine emissions, some states are in the process of tightening air quality standards to force drillers and haulers to operate cleaner engines and to incorporate better emission control technologies. The methane in the natural gas itself is a potent greenhouse gas, and the US EPA is taking steps to require gas producers to appropriately separate and recover any natural gas that enters the fracing water. This separation and recovery is called “green completion” and is being mandated by the EPA for all fracing operations beginning in 2015.
  • Adverse Impacts on Water Supplies and Quality -- Unquestionably, the biggest potential harm from fracing lies in its capacity to negatively affect water quality and quantity.
    • Water for drilling and fracturing often comes from local surface or ground waters. Given the large volume of water used, it is critically important that hydrological studies be completed prior to the commencement of fracing operations to ensure that sufficient water resources remain to supply other local water needs for drinking water, local industry, and local agriculture. In the event that local water is not sufficient to supply the fracing process, the energy company has to consider trucking in additional water from elsewhere. Not only is this expensive, but it simply shifts the burden from one local water supply to another remote supply in another region. Recent technological advances hold promise in the reuse of fracing liquids from one well to the next, and in the employment of advanced water treatment technologies to clean up fracing water to levels that would allow it to be released into the local water supply, rather than deep well injected. However, these technologies are expensive and in their infancy.
    • Possible groundwater contamination by fracing solutions. Despite what some opponents of fracing say, it is unlikely that fracing solutions used in the shale gas layers of rock can independently migrate into aquifer layers holding drinking water. The same lack of permeability that makes fracing necessary to extract the gas also prevents fluid migration through undisturbed rock between the shale gas layer and the aquifer layer. Shale gas rock is typically located thousands of feet below the deepest drinking water aquifer, and fracing solutions can't migrate through that much rock. However, if the well casing is defective or fracing solution is spilled above ground during deployment, seepage into the groundwater can occur, and under those conditions groundwater contamination will result. For this reason, technology improvements and new regulations are focusing on preventing and detecting well casing leaks, spill prevention, disclosure by companies of the exact chemical compositions of their proprietary fracing fluids, and the development of "greener" fracturing fluids.
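
The monitoring safeguard described in the Induced Seismicity item above amounts to a simple threshold rule, sketched below in Python. The magnitude thresholds and action descriptions are illustrative assumptions invented for the example, not values drawn from any specific regulation.

    # Minimal sketch of a threshold-based injection-well monitoring rule.

    AMBER_MAGNITUDE = 0.5   # assumed: reduce injection rate above this magnitude
    RED_MAGNITUDE = 1.5     # assumed: halt injection above this magnitude

    def injection_action(largest_recent_magnitude):
        """Map the largest recently monitored seismic event to an operator action."""
        if largest_recent_magnitude >= RED_MAGNITUDE:
            return "stop injection"
        if largest_recent_magnitude >= AMBER_MAGNITUDE:
            return "reduce injection rate and increase monitoring"
        return "continue at the permitted rate"

    for magnitude in (0.2, 0.8, 2.0):
        print(f"event of magnitude {magnitude}: {injection_action(magnitude)}")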

Despite the environmental threats discussed above, fracing proponents in the US have to date been able to forestall federal legislation that might significantly curtail fracing or ban it altogether, as France has done. This is because fracing has some very positive short term political and economic benefits. These benefits stand in opposition to the negative environmental issues discussed above:

For the first time in decades, the US recently became a net exporter of energy, in the form of natural gas. This not only provides revenues to oil and gas companies, but, fracing proponents point out, it helps to bolster national security. One aspect of national security that has been highly problematic in the last several decades has been energy security. Until the advent of fracing, the US was heavily dependent on the Middle East and other politically unstable or politically unfriendly regions of the world for oil supplies. A disruption in energy supplies due to war or political unrest in these regions could have a catastrophic impact on the US. Having a secure, domestic supply of energy alleviates some of this risk. However, the energy picture is not as positive as it seems on the surface, because energy supplies from fracing are largely in the form of natural gas, although some additional domestic oil is being produced. Yet the US infrastructure, especially automobiles, runs on gasoline, derived from crude oil. In order to become truly energy independent from the fruits of the fracing boom, the US would have to invest in its infrastructure to convert much of its vehicle fleet to natural gas and would need to build a system of natural gas re-fueling stations on par with the number of gasoline stations currently in existence. Since this has not yet occurred, US energy independence derived from fracing is still more a goal than an accomplished fact. Instead, there is currently a glut of natural gas on the market, which is driving natural gas prices down and making fracing operations less profitable overall.

In addition to the promise of energy independence, fracing has helped segments of the US economy. Beyond providing tens of thousands of local and regional jobs in boom areas like western Pennsylvania, Colorado, and North Dakota, it represents a technical area in which the US holds significant global leadership. The US has been the leader in developing directional drilling and other technologies related to fracing and stands to gain by exporting this technology to other countries.

As a result of both the perceived and real political and economic benefits associated with fracing, and despite growing public concerns over its environmental impacts, there currently are no federal initiatives within the US to curtail or ban it. France has banned the practice, and the United Kingdom has recently taken steps to regulate it more tightly. In the absence of federal action, state and local governments are intervening on behalf of their citizens to place controls on fracing operations. These actions include zoning law changes, noise ordinances, stricter state environmental regulations, and the like. Recently, a bill to ban fracing statewide was drafted in Michigan's legislature, although it remains to be seen whether it will become law.

Final Thoughts

I began this essay by posing a question: Can hydraulic fracturing be defended? As is the case with most things, the answer is more nuanced than one might initially expect. While there are many negative environmental impacts associated with fracing, these impacts are increasingly being recognized, and with that recognition energy companies are proposing and employing a variety of technological approaches to mitigate them, such as water reuse, advanced water treatment technologies, improved well casing design, greener fracing fluids, green completions, etc. Federal, state, and local governments are also beginning to heed citizens' concerns and to introduce legislation to protect the environment from some of the potential impacts of fracing. However, despite all of this, and despite the boost that fracing has given to certain segments of the economy, I would still answer the question with a "no". The reason for my "no" lies not so much with the issue of immediate environmental impacts -- although those are real and of concern -- but with the fact that fracing is giving our society a false sense of energy security and delaying the important work we must do to begin the transition to an economy based on increased energy conservation and renewable energy supplies.

We can use a familiar analogy to understand the dynamics of the world energy economy today: a relay race. In a relay race, the point at which the runner passes the baton is critical. Pass the baton too soon and the baton might be dropped. Wait and pass the baton too late, and the race could be lost. As in a relay race, we need to begin transitioning away from fossil fuels well before the supply runs out -- not just to protect the environment, but to guard against disruptions in the energy supply and to promote a smooth transition. While the advent of fracing has brought with it increased natural gas and oil supplies by allowing us to access fossil fuel reserves we could not utilize before, it is important that we recognize that these fossil fuels are still finite and will eventually become depleted.

Proponents of fracing, and those politicians and business people who advocate continued reliance on fossil fuels -- in lieu of development of a robust and comprehensive alternative energy infrastructure -- like to cite cheap and available energy as a reason for the major advancements made by the human race -- in science, education, manufacturing, medicine, and public health -- during the 20th century. (Many of these same individuals also conveniently ignore or discount the negative effects of increasing levels of greenhouse gases on global climate change.) But these very societal advancements are jeopardized by a failure to plan for a smooth transition to the post-fossil fuel economy. The discontinuity that would be created in the world economy by an abrupt fall in fossil fuel supplies, without commensurate alternative energy systems in place, would threaten all of these hard-won societal advancements. Furthermore, technologies that threaten our increasingly scarce water supplies or contribute to global climate change threaten the well-being of human beings both in the US and across the globe.

In recognition of the need for a plan to better manage and coordinate our energy and water resources and to plan for future energy needs, the Government Accountability Office (GAO) released a report [7] in late 2012 recommending a coordinated approach to establishing comprehensive US energy and water policies, and asked Congress and other federal agencies to consider the effects that national energy production and water use have at the local level. The report also urges the Department of Energy to take steps to create a long range energy policy for the nation, as prescribed in the Energy Policy Act of 2005 but never implemented. Such a long range plan for transitioning away from fossil fuels to a renewable energy economy is critical. Yet fracing only delays the inevitable transition from fossil fuels to sustainable energy sources through the false promise of continued cheap fossil fuel availability for decades to come. In the meantime, it arguably does harm to the environment (a cost that fracing proponents do not fully account for when making claims of cheap energy). We are like Nero fiddling while Rome burned -- with our groundwater supplies threatened and increasing levels of greenhouse gases causing global climate change, we fail to plan for a smooth transition to the era when fossil fuels are no longer readily available.

Does technology exist (or can it be developed) to minimize at least some of the worst environmental impacts of fracing? As discussed above, the answer is "yes." But does the cost and effort associated with employing this technology make sense in the larger scheme of things? Probably not. Economists often refer to the concept of lost opportunity cost. Surely the cost, technical know-how, and human effort we are putting into trying to make fracing marginally more environmentally acceptable would be better spent on facilitating the transition to sustainable energy sources. This transition is not a question of whether, but when. A wise nation would begin to phase out its commitment to fracing -- not end it abruptly, because that would create economic disruptions -- but allow market forces, reined in by stricter environmental regulations, to work to make fracing less profitable. At the same time, we would shift our attention to transitioning into a post-fossil fuel economy, via implementation of a comprehensive, long range national energy policy. That is what a wise nation would do.

References and Further Reading:

  1. Flynt Leverett, course materials for 17.906 Reading Seminar in Social Science: The Geopolitics and Geoeconomics of Global Energy, Spring 2007. MIT OpenCourseWare (http://ocw.mit.edu), Massachusetts Institute of Technology. Downloaded on 15 July 2013.
  2. Robbins, K., “Awakening the Slumbering Giant: How Horizontal Drilling Technology Brought the Endangered Species Act to Bear on Hydraulic Fracturing”, Case Western Reserve Law Review, 2013. (http://law.case.edu/journals/LawReview/Documents/63CaseWResLRev4.6.Article.Robbins.pdf)
  3. “Modern Shale Gas Development in the United States: A Primer”, the Ground Water Protection Council; ALL Consulting; National Energy Technology Laboratory (April 2009). (http://www.netl.doe.gov/technologies/oil-gas/publications/EPreports/Shale_Gas_Primer_2009.pdf), retrieved July 16, 2013.
  4. Smith, Trevor, “Environmental Considerations of Shale Gas Development”, Chemical Engineering Progress, Volume 108, Issue 8, August, 2012, p. 53-59.
  5. "Pennsylvania Landowners Feel Cheated by Royalty Payments from Fracking”, All Things Considered, National Public Radio, broadcast July 29, 2013. Retrieved on August 12, 2013 from http://www.npr.org/templates/story/story.php?storyId=206728504
  6. “Chesapeake, Encana sued in Federal Antitrust Action,” Reuters News Agency, February 25, 2013. Retrieved on August 12, 2013 from http://www.reuters.com/article/2013/02/25/chesapeake-encana-antitrust-idUSL1N0BPBZ420130225
  7. “The Energy-Water Nexus -- Coordinated Federal Approach Needed to Better Manage Energy and Water Trade-offs,” GAO-12-880, the Government Accountability Office, September 13, 2012, http://www.gao.gov/products/GAO-12-880, retrieved August 12, 2013.

Coming in December 2013 is an essay based on the painting "The Print Shop," exploring the impact that noted environmental authors such as Rachel Carson, Aldo Leopold, and others had on the modern environmental movement.


By Deborah Jackman, PhD, PE, LEED AP™ - originally posted on 04/04/2013

plowing-with-oxen-teams.jpg

The Painting and its Historical Significance:

“Plowing with Oxen Teams” was painted in 1866 by the English artist, William Watson. The original painting is oil on canvas and is 32 inches by 44 inches in size. The painting depicts an idyllic bucolic scene. In the foreground, farmers are urging the team of oxen to pull the plow through the fertile soil, assisted by the urgent barking of the farm dog. In the background is a spectacular mountain vista and clear blue sky. The scene evokes nostalgia for a highly idealized view of farming as it was practiced in the past.

In fact, farming in the past was far from glamorous. It involved back-breaking manual labor, working from dawn to dusk, and contending with Nature’s fickleness — frost, drought, hail, high winds, plant disease, insect infestations, and flooding. Beginning in the early 20th century, advances in farming technology — gasoline-powered tractors, chemical pesticides, and commercial fertilizers — revolutionized agriculture. The dominant agricultural production model in developed countries such as the United States changed from the small scale, family farm model to the large scale, heavily mechanized agribusiness model. With this change came the promise of greater productivity, higher crop yields, and cheaper food supplies. It also facilitated the migration of many people from a rural lifestyle to an urban one, since fewer individuals were needed to work the land. However, modern agricultural practices are not without some very negative side-effects, and are arguably not sustainable. In response to some of the negative environmental and societal impacts of modern agribusiness, a strong counter movement in sustainable agricultural practices has emerged over the last several decades. The modern sustainable agriculture movement does not advocate a return to purely traditional farming methods of the past, but rather advocates combining some aspects of modern technology with traditional practices shown to have scientific benefits, to produce a model that is sustainable over the long term.

The political tensions and scientific controversies that currently exist between advocates of the large-scale agribusiness model and advocates of the agricultural sustainability model are too complex and broad in scope to be covered in this brief essay. However, to illustrate the issues involved in this broader controversy, this essay will examine one hot-button issue — how genetically modified (GM) crops are impacting agricultural production and affecting the sustainability of our farming system.

An Introduction to Genetically Modified (GM) Crops:

Genetically Modified (GM) crops are plants normally used to feed humans or livestock which have been modified using genetic engineering techniques to introduce traits that allow the plant to resist pests, withstand herbicide applications designed to eliminate weeds, or grow more rapidly. The genes of the plant are altered in such a way as to introduce a trait that does not occur naturally in that particular plant species. Genes from a totally different plant species, or even from life forms in other biological kingdoms (bacterial and animal species), may be introduced into the genetic material of the subject plant in order to introduce the desired trait.

GM crops differ from crops improved through selective breeding or through naturally induced mutations primarily because of the potential for introduction of inter-kingdom genetic matter. Recalling our basic high school biology, we know that life forms are classified as belonging to one of six kingdoms — the plant kingdom, animal kingdom, fungi kingdom, bacteria kingdom, chromista kingdom (members of which include several types of seaweed and diatoms), and the protozoa kingdom. Horizontal gene transfer is the flow of genes across species. Such flow of genetic material can and does occur naturally. For example, the development of antibiotic resistance in bacteria is the result of horizontal gene transfer between bacteria. When a plant breeder develops a hybrid grain such as triticale (a hybrid of wheat and rye) using conventional plant cross-breeding techniques, horizontal gene transfer is also involved. But, what differs with GM crops is that the gene transfer that is induced can cross kingdom lines and involves gene transfers that would likely never occur spontaneously in nature or even using conventional plant selective breeding techniques. For example, the most famous GM crop in common agricultural use today — Roundup Ready soybeans — contains genetic material from several species of bacteria. The resulting transgenic plant is not susceptible to damage from Roundup, a common herbicide. Therefore, farmers can control weeds by spraying Roundup on these crops, killing naturally occurring weeds, but not harming the genetically altered crop. Crop yields are improved because competition with weeds is eliminated and the farmer saves time and labor because it is quicker to spray a field with Roundup than to perform mechanical cultivation to eliminate the weeds. [1]

Another common GM crop is corn that has had genetic material from the bacterium Bacillus thuringiensis introduced into its DNA. Bacillus thuringiensis (Bt) produces proteins that are naturally toxic to caterpillars. By introducing Bt genes into corn, infestations of corn earworm, a common insect pest of corn, are eliminated without having to spray the corn with externally applied pesticides that could drift in the wind or run off into streams.

The use of GM crops is widespread. In the U.S., 86% of corn, 93% of soybeans, 93% of cotton, and 87% of canola are genetically modified, either to resist Roundup or to resist predation by caterpillars through insertion of the Bt gene [2]. The U.S. accounts for 50% of all GM agricultural production and trade worldwide, with Canada, Australia, Argentina, Brazil, and China also accounting for significant percentages [3]. In other parts of the world, GM crops are much less common, whether because of the higher cost of GM seed, or because of governmental prohibitions or citizen resistance to their use, as in the European Union. Most GM crop seed is produced by a small number of large corporations, with Monsanto Corporation the largest supplier. Such seed is patented, and seed harvested from GM plants cannot be saved for subsequent generations of plantings; farmers must purchase new GM seed each year from the corporate supplier holding the patent or risk incurring substantial legal and monetary penalties [4].

In addition to their use as agricultural crops, GM plants have been created to synthesize drugs, to serve as bio-fuels, and to bio-remediate contaminated soils. Genetically modified carrots are used to produce the drug taliglucerase alfa, used to treat Gaucher’s Disease [5]. GM bananas have been created which produce a human vaccine against Hepatitis B, although these are not yet in commercial production [6]. The Swiss-based company Syngenta has received USDA approval to market corn seed, trademarked Enogen, which has been genetically modified to convert its starch to sugar more readily in order to speed production of ethanol-based bio-fuels [7].

The Sustainable Agriculture Movement:

Sustainable agriculture is defined as “an integrated system of plant and animal production practices having a site-specific application that will last over the long term. Sustainable agriculture has the following goals:

    1. To satisfy human food and fiber needs
    2. To enhance environmental quality and the natural resource base upon which the agricultural economy depends
    3. To make the most efficient use of non-renewable resources and on-farm resources and integrate, where appropriate, natural biological cycles and controls
    4. To sustain the economic viability of farm operations
    5. To enhance the quality of life for farmers and society as a whole” [8]

Far from being a fringe movement, the sustainable agriculture movement has been recognized by the U.S. federal government in the 1990 farm bill [9], and by the United Nations in its Agenda 21 document [10]. The sustainable agriculture model is frequently discussed in terms of a triad — economic sustainability, environmental sustainability, and societal sustainability. In fact, Agenda 21 discusses how sustainable agriculture can help promote all three of these elements.

Of the five goals set forth by the sustainable agriculture movement, modern agribusiness in the U.S. is currently very good at achieving the first, i.e., satisfying food and fiber needs, in a very inexpensive fashion. Food costs as a percentage of total family income are lower in the U.S. than in most other countries in the world. However, we achieve this cheap food production by largely ignoring or violating the other four principles. This creates a false economy because it is not sustainable over the long term. While modern agribusiness can bring us food cheaply now, our food prices will inevitably rise and scarcity will ensue once the negative impacts of our current food production model reach their tipping points. Demographers project the global population will exceed 8 billion by the year 2030. Despite claims by agribusiness that its “modern” methods are the only ones that can ensure adequate food for all of the Earth’s projected inhabitants, there are other voices that disagree. Andre Leu, Chair of the Organic Federation of Australia, is on record saying that sustainable farming is the ONLY way to meet world food demand in the future [11].

As noted earlier in this article, we are using the GM crop debate as a microcosm to explore the more general issues surrounding modern agribusiness overall. To this end, in the next section we explore how the use of GM crops is problematic to the sustainable agriculture movement.

Sustainable Agriculture versus Genetically Modified Crops — Attendant Scientific and Political Controversies:

Soil Erosion and Water Scarcity: Proponents of GM crops argue that one of the chief promises of such crops is that they will enable farmers, especially those in developing countries, to grow plants on very marginal lands — including lands with little water, depleted soil, or a susceptibility to frost. This is because transgenes can be inserted into various crops to reduce a plant’s need for water or nitrogen, or to make it less susceptible to frost damage. While this sounds like a noble goal, and while it may be true over the short term, it will likely lead to long-term environmental damage.

To understand why, one must first understand why farmers must use marginal lands in the first place. In much of the world, soil conservation is not being given high enough priority. In Africa alone, it is estimated that over a billion tons of soil are eroded every year. Such erosion occurs when forests and other natural windbreaks adjacent to tillable areas are cut down, or when excessive or improper tilling methods are used; wind erosion then occurs as topsoil is literally blown away. The removal of hedgerows and other vegetation buffers in order to maximize tillable acreage also means that the plant roots that anchor soil in place during heavy precipitation events are no longer present to prevent topsoil run-off. The increased use of ammonia-based fertilizers in place of compost degrades soil texture and depletes the trace elements needed for maximum plant vigor. This global loss of topsoil quantity and quality is being called “Peak Soil.” It means that we are currently at the “peak” of soil availability and can expect ever decreasing soil fertility if we do not make a concerted effort to improve soil management practices. Similarly, wholesale destruction of tropical rainforests has altered global rainfall patterns, and over-pumping of aquifers has created shortages of water available to agriculture. Emphasis needs to be placed on water conservation practices, including aggressive water reuse and recycling, and on soil moisture conservation strategies such as mulching and enhancement of soil texture so as to maximize its ability to hold moisture.

In keeping with the sustainable agricultural goal of enhancing environmental quality and the natural resource base upon which the agricultural economy depends, soil and water conservation is essential to long term agricultural viability. To the extent that GM crops are marketed as the solution to poor soils and water shortages, they distract farmers from the need to employ vigorous soil and water conservation practices, and thereby harm the environment.

Likely Damage to Beneficial Insects: Even scientists who generally view GM crops as benign admit that they cause some damage to populations of certain beneficial or desirable insects. People who are generally opposed to GM crops claim more extreme damage to the populations of such insects, including attributing to GM crops the honey bee colony collapse phenomenon (a claim that has not been scientifically proven to date). The indisputable fact is that research on the long-term effects of GM crops on insect populations is on-going, the mechanisms at play within the ecosystem are complex, and much is still unknown. Individual Monarch butterfly larvae that feed on milkweed growing near plantings of Bt corn have been observed to die if pollen from the corn adheres to the milkweed and that pollen expresses the Bt gene. However, some scientists argue that because the rate of expression of the Bt gene in corn pollen is very low, the risk of damage to overall Monarch populations is negligible even if some individual butterflies are indeed killed [12]. Researchers Conner, Glare, and Nap [12] note that even when insects don’t die directly from exposure to pollen containing the Bt transgene, some predator insects (which feed on prey insects that consume pollen from these plants) lack vigor and do not weigh as much as counterparts whose prey was not exposed to plants carrying the Bt gene. Many other examples of probable harm to insects due to GM crops are documented in the literature.

To the extent that more insect species are beneficial than are directly harmful to crops, and to the extent that GM crops appear (based on as yet incomplete evidence) to threaten both targeted pests and certain other insects indiscriminately, it is likely that wholesale production of GM crops does not represent a sustainable agricultural practice.

Herbicide Resistant “Superweeds”: It has been almost 20 years since the first Roundup Ready soybeans, containing a gene to resist glyphosate, were commercialized in 1995. During this time, Nature has adapted. There are now numerous documented cases of weed species that have become resistant to the effects of Roundup (glyphosate). Thus, the very problem that Roundup Ready soybeans were created to address — namely, having crop plants able to resist damage from Roundup so that vulnerable weeds could be sprayed and destroyed — has been exacerbated. Waterhemp, a relative of pigweed, is a good example of an invasive weed species that has become resistant not to one, but to three classes of herbicides: glyphosate, ALS-inhibiting herbicides, and PPO-inhibiting herbicides [13]. Research is currently underway to engineer new varieties of crop plants that can resist damage from other classes of herbicides to which weeds have not yet become resistant. In this ever-escalating “arms race” between modern genetic engineering and Mother Nature, I bet on Mother Nature to continuously “up the ante.” Continually having to re-engineer plants to try to stay ahead of Nature’s ability to adapt, rather than working in concert with Nature, is not sustainable. For many of these herbicide resistant weeds, the only sustainable option for control may be mechanical cultivation — the method used before the introduction of Roundup Ready crops.

Emergence of Bt Resistant and Secondary Insect Pests: In a situation similar to that involving herbicide resistant weed species, we are now seeing the emergence of certain Bt-resistant insects, attributable to the widespread use of the Bt transgene. In November 2009, Monsanto scientists found that the pink bollworm had become resistant to Bt cotton being grown in India. Since that time, resistant strains of cotton bollworm have been identified in Australia, China, Spain, and the U.S. As a strategy to delay the spread of resistance, Monsanto recommends that farmers interplant non-GM cotton with GM cotton, in order to dilute any resistance genes that may arise in these insects [14]. The operative word here is “delay,” since even Monsanto doesn’t claim that this tactic will stop the spread of the resistant strain of this insect altogether.

Secondary insect pests are also emerging. A secondary insect pest is a species that was never susceptible to Bt and thus cannot become resistant to it. However, these insect pests were previously kept in check by a balance within the ecosystem between themselves and various Bt-susceptible species. Once the Bt-susceptible species are reduced or eliminated, the secondary species experience a population explosion. Secondary insect pests emerging in China and India include mirids, aphids, spider mites, and mealy bugs. Ironically, a 2011 survey of Chinese farmers indicates that they are collectively using nearly as much pesticide to keep these secondary pests in check as was used to control the cotton bollworm before the introduction of Bt cotton [15].

Biodiversity: The importance of maintaining a reservoir of genetic diversity among food crops is well understood by botanists. One lesson of the Irish Potato Famine of the 1840s is that if a particular cultivar becomes susceptible to attack by a virus or other disease vector, it is possible to go back into the gene pool and breed other varieties of the plant that resist the disease. However, if we lose genetic diversity, and in particular, if we lose the genetic material contained in the wild relatives of our domesticated food crops, we endanger our future ability to respond to plagues like the Potato Famine. Various transgenes engineered into domesticated GM crops can spread readily into non-GM relatives and closely related wild species. Hence, without intentional strategies for preventing the spread of transgenes, the genetic purity of wild relatives of food crops could be forever lost. A 2010 study of wild canola in the U.S. Midwest found that 83 percent of sampled plants contained the transgene used to make domesticated canola herbicide resistant [16]. Similarly, there is great concern in Mexico over the use of Bt corn (maize), since Mexico is the geographical center of diversity of maize and the home of its wild relatives.

Socio-political Implications: The tenets of sustainable agriculture include 1) “to satisfy human food and fiber needs”; 2) “to sustain the economic viability of farm operations;” and 3) “to enhance the quality of life for farmers and society as a whole.”

As relates to the first point, one of the most potent arguments made by proponents of GM agriculture is that without GM crops, we will be unable to feed the world’s growing population. Upon further investigation, this claim becomes suspect. When the causes of modern famine are analyzed, we learn that the primary cause is not a fundamental shortage of food, but rather social and political instability, such as warfare, religious conflicts between factions within developing nations, or corrupt and failing governments unable to administer food aid and food distribution programs effectively. How GM crops, per se, can address these issues any more effectively than conventional agriculture can is unclear. In fact, there is reason to believe that the use of GM seed worldwide will exacerbate certain social and political inequalities and make matters worse. One fact that supports this view is that GM seed is patent protected and must be purchased anew each year from a licensed distributor. The age-old practice of holding back some of the previous year’s harvest as “seed corn” becomes illegal when GM crops are raised. This prevents the farmer from being self-reliant and forces him/her to shell out scarce cash to the seed distributor at the start of each growing season. This, in turn, does very little to “enhance the quality of life of the farmer” or “to sustain the economic viability of farm operations.”

Another negative social impact of GM crops is that research into them has largely displaced traditional agricultural research directed at improving, through conventional means, the production of various indigenous crops important in the developing world, such as millet, teff, and cowpeas. These “orphan crops” do not offer the potential for large profits, so large agribusiness has no interest in investing in ways to improve their yields [17].

Final Thoughts — a Middle Ground?

As we have seen, GM crops present many potential issues that call into question the wisdom of their use in agriculture. It is probable that some of the impacts of GM crops are not yet known, because it will take time before their effects on the larger ecosystem are fully understood. The Precautionary Principle — a fundamental tenet of environmental science — tells us, “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.” In other words, if there is a plausible chance of a negative impact, even if that negative impact has not yet been definitively proven, we should refrain from pursuing the activity until we can prove it to be harmless. Instead, U.S. agribusiness has pursued the opposite approach — it has proceeded to develop GM crops on the premise that it would cease doing so only if harm from them could be definitively proven.

However, questioning whether we should employ GM crops in agriculture today is like closing the barn door after the horse has escaped (to call on an old farm adage). The case of GM crops is an example of scientific advances outpacing the public policy development needed to regulate them. Given the current widespread use of GM crops and the economics surrounding this multi-billion dollar industry, it is not likely we can eliminate GM crops altogether. We can, however, strengthen and expand our public policy to 1) better regulate existing GM plants, 2) monitor approvals of proposed new GM introductions, and 3) protect organic and non-GMO agriculture from the intrusion of GM crops. Even pragmatists within the sustainable agriculture movement agree that the present goal should not be to try to eliminate GM crops, but to regulate them strongly. Raymond P. Poincelot wrote a particularly forceful editorial espousing this view in the Journal of Sustainable Agriculture, which I recommend to anyone interested in this topic [18]. Some practical public policy actions that could be taken to facilitate the appropriate regulation and management of GM crops include:

  1. All GM crops raised for human consumption should be labeled as such, and meat from livestock raised on GM crops should also be labeled. The U.S. Food and Drug Administration (FDA) has resisted such labeling, claiming that a label would needlessly frighten the general public and that no evidence exists that GM foods are harmful when eaten. Indeed, of all the research into the possible negative effects of GM crops, the evidence of direct harm to humans from consuming GM-based foods is the weakest. Yet that does not mean there are no effects that are harmful to the environment, and thus indirectly harmful to humans. Labeling products as containing transgenes would raise public awareness of their prevalence within the food chain. It might even spur consumers to learn more about the larger impacts of GM foods on the environment. One could also argue that the nutrition labels that appear on foods now could frighten consumers were they to see the list of additives contained in processed foods, even though these additives have been tested and are generally considered safe. Yet such labels are mandated. The same should be true for GM foods. Ultimately, the consumer has the right to know what is in her food and to decide for herself how to respond. Those skeptical of the U.S. regulatory system suggest that the U.S. agribusiness lobby has squelched the introduction of mandatory GM labeling for fear U.S. consumers would reject GM products, cutting into profit margins, or, worse yet, would insist these products be banned, as has occurred in portions of the European Union. These skeptics may very well be correct in their assessment.
  2. Mandated buffer zones. Certain crops like corn and sunflowers are notoriously promiscuous in their pollination habits. An organic farmer attempting to grow non-GMO corn often has a difficult time doing so if his neighbor is growing GM corn in the next field. Buffer zones of several hundred feet, planted with unrelated vegetation, are recommended to stop the introduction of Bt genes into the organic corn. For sunflowers (raised for their oil) carrying the Bt gene, the required buffer zones are up to 1000 meters, to prevent cross-pollination with wild or non-GMO sunflowers. Right now the onus for providing such buffer zones lies with the organic farmer, who must take some of his tillable land out of production to create the buffer zone. Since many organic farmers are smaller landholders, this burden is particularly damaging to their profitability. One suggestion is to require the farmer growing GM crops to provide the buffer zone on his/her land.
  3. A stronger regulatory process for monitoring and approving GM crops. Within the U.S., regulatory responsibility for GM crops is split among the U.S. Department of Agriculture (USDA), the Food and Drug Administration (FDA), and the Environmental Protection Agency (EPA). EPA regulates biopesticides, including Bt; therefore, any GM crop engineered to carry the Bt gene or other biopesticide genes must get EPA approval. Such approvals have become almost routine, even in light of recent evidence that the Bt transgene can harm some insect populations that are not pests. FDA regulates GM crops that are eaten by humans or food animals. Unless the crop contains foreign proteins that differ from the natural plant proteins in the non-GMO counterpart, FDA will almost without question designate it as “Generally Recognized as Safe.” The USDA regulates GM crops under the Plant Protection Act of 2000. Companies wishing to conduct field trials for a GM crop must either notify USDA or seek a permit; which is required depends on the potential risk a particular crop poses. Higher risk crops are those that can readily hybridize with wild relatives, that persist in the ground for a long time, or that are pharma crops (crops engineered to produce drugs for human consumption). Once the field trial stage is over and commercialization is sought, corporations can request that USDA “deregulate” their crop, effectively removing all future oversight. The exception is pharma crops, which must remain regulated even while in commercial production [19]. The combination of this patchwork of regulatory authorities, understaffing of the three agencies’ enforcement arms, and over-reliance on data supplied by the corporations being regulated (as opposed to testing by independent third parties) makes for a very weak regulatory system for GM crops in the U.S.

    On the international level, GM crop regulation is codified in the 2000 Cartagena Protocol. However, many GM crop producing countries, including the U.S., are not parties to the agreement. The Cartagena Protocol calls for informed consent on the part of countries that import GM crops, but corporations that market GM seed have argued against disclosure of all but a minimum of information [3]. The result is very weak international oversight and control of GM crops.


  4. All future GM crop introductions should contain genetic engineering specifically designed to mitigate impacts to the environment. For example, genetic engineers have the ability to build a dwarfing gene into a plant, along with whatever other traits they are introducing for other purposes. Such a gene would produce a dwarf plant. The dwarf plant could flourish in its agricultural setting, producing whatever crop is intended, because in such a setting there is no competition. In the wild, however, the dwarfs would be at a disadvantage, because taller plants would block the sunlight and not allow them to survive; thus, the dwarf plants would not tend to escape into the wild. In addition to intentional dwarfing, similar strategies include intentionally designing plants with infertile seeds (to prevent spread outside the agricultural zone) or plants whose pollen is unpalatable to vulnerable insect species such as honey bees. While such engineered “enhancements” might cost Monsanto (and others) more, the added expense would be the cost of doing business in a socially responsible manner.
  5. Region-specific bans on certain GM crops. For certain GM crops, the risk that they will hybridize with wild relatives, and therefore spread their transgenes into the larger environment, is not uniform across all regions. If the wild relatives of the crop in question are found in Central or South America, for example, and the GM crop is being grown in Iowa, there is little chance that the genetic make-up of the wild relatives will be affected. Such is the case with a crop like corn (maize). However, if wild relatives are present, and if those relatives are particularly vulnerable to hybridization with the GM crop, then it makes sense to consider a regional (not total) ban on the GM crop. Such actions would add complexity to the regulatory process, but would better protect genetic diversity.


  6. Reduce or eliminate farm subsidies for corn, rice, wheat, soybeans, cotton, sugar cane, and other agricultural commodity crops. It is not accidental that the crops that have led the GM revolution are the same crops that receive U.S. federal government subsidies. These subsidies distort the economics of growing such crops and incentivize big agriculture to grow more of them than it would if they were not subsidized. Since they then become the main cash crops, farmers are further incentivized to use agricultural practices that have a short-term benefit even if they have long-term downsides, e.g., Roundup Ready soybeans, from whose use Roundup-resistant weed varieties have emerged. The fact that GM seed is patented and more expensive than non-GM seed is not a significant deterrent, given these distorted economics. It is worth noting that farmers growing crops that are not subsidized, such as most vegetables and fruits (tomatoes, lettuce, strawberries, apples, etc.), have not yet widely adopted GM varieties. In fact, one early GM introduction — the FlavrSavr tomato — was pulled from the market because consumers initially didn’t like it, and growers then made the rational economic decision to cease production. Is the connection between subsidies and the prevalence of GM varieties merely a coincidence, then? Many would argue no. Once subsidies were removed, the decision to continue using GM varieties would be made based on a more realistic assessment of their costs.

While we gaze at “Plowing with Oxen Teams,” it is easy for us to romanticize the way farming was practiced a century or more ago. Back then, there were no concerns over genetically modified crops and their long-term effects on the environment. However, farmers were also at the mercy of nature and had little recourse against pests, weeds, or drought. As a result, starvation due to crop failures was much more common than it is today. A century ago, humans also had little or no access to life-saving drugs, including the drugs that biotechnology firms are now able to produce using pharma crops. So, like most things in life, we must seek balance in how we view and manage GM crops and the other fruits of the genetic engineering revolution in which we find ourselves. Unfortunately, the science that brought us GM crops developed faster than the public policies and regulatory structures intended to govern it. We must now work to bring policy and regulation into alignment with this new technological revolution, in order to protect the well-being of humans and the environment alike. We must also structure our regulations so as to offer citizens a choice of whether and when to consume GM products. Full disclosure through labeling, and mechanisms to protect organic and non-GMO producers from contamination of their products with transgenes, are essential. The purpose of this essay is not to demonize modern agribusiness. It is instead to encourage farmers to use a balanced approach to farming practices — employing sustainable practices where practicable, along with judicious and selective use of GM crops and other ultra-modern technologies where such technologies actually offer a societal benefit.

References and Further Reading:

  1. “Genetically Modified Crops,” Wikipedia, February 12, 2013, (http://en.wikipedia.org/wiki/Genetically_modified_crops)
  2. “Acreage NASS,” National Agricultural Statistics Board Annual Report, June 30, 2010. (http://usda.mannlib.cornell.edu/usda/nass/Acre/2010s/2010/Acre-06-30-2010.pdf)
  3. Gupta, A., “Transparency as Contested Political Terrain: Who Knows What about the Global GMO Trade and Why does it Matter?,” Global Environmental Politics, Vol. 10 Issue 3; August, 2010; Massachusetts Institute of Technology.
  4. “Supreme Court Appears to Defend Patent on Soybean,” reported by Adam Liptak, The New York Times, February 19, 2013.
  5. Maxmen, A., “Drug-making Plant Blooms,” Nature –International Weekly Journal of Science, Volume 485, Issue 7397, May 8, 2012.
  6. Kumar, G.B. Sunil; T.R. Ganapathi; et al.; “Expression of Hepatitis B Surface Antigen in Transgenic Banana Plants,” Planta, Volume 222, Number 3, p. 484-493, October, 2005.
  7. “Genetically Modified Crops’ Results Raise Concern,” reported by Carolyn Lochhead, The San Francisco Chronicle, April 30, 2012.
  8. Gold, M., “What is Sustainable Agriculture?” (July 2009). (http://www.nal.usda.gov/afsic/pubs/agnic/susag.shtml). United States Department of Agriculture, Alternative Farming Systems Information Center.
  9. Food, Agriculture, Conservation, and Trade Act of 1990, Public Law 101-624, Title XVI, Subtitle A, Section 1603.
  10. “Promoting Sustainable Agriculture and Rural Development,” United Nations 1992 Earth Summit, Agenda 21, Chapter 14; Rio de Janeiro, 1992.
  11. Leu, Andre, “Future Organic,” New Internationalist, Issue 368, June 2004, p. 34-35.
  12. Conner, A.J., Glare, T.R., and Nap, Jan-Peter, “The Release of Genetically Modified Crops into the Environment, Part II: Overview of the Ecological Risk Assessment,” The Plant Journal (2003), Volume 33, p. 19-45, Blackwell Publishing, Ltd.
  13. Nordby, D., Hartzler, B., and Bradley, K., “The Biology and Management of Waterhemp,” Knowledge to Go Bulletin #GWC-13, Purdue University Extension, 2007.
  14. “Genetically Modified Food Controversies,” Wikipedia, February 11, 2013. (http://en.wikipedia.org/wiki/Genetically_modified_food_controversies)
  15. Zhao, J.H., Ho, P., and Azadi, H., “Erratum to: Benefits of Bt Cotton Counterbalanced by Secondary Pests? Perceptions of Ecological Change in China,” Environmental Monitoring Assessment, August 2012.
  16. “First Wild Canola Plants with Modified Genes Found in the United States,” Arkansas Newswire, University of Arkansas, August 6, 2010.
  17. Naylor, R.L., et al., “Biotechnology in the Developing World: a Case for Increased Investments in Orphan Crops.” Food Policy, Volume 29, Issue 1, p.15-44, 2004.
  18. Poincelot, Raymond P., “From the Editor,” Journal of Sustainable Agriculture, Vol. 16(3), The Haworth Press, Inc., 2000.
  19. Agricultural Biotechnology: Safety, Security, and Ethical Dimensions, Federation of American Scientists website, http://www.fas.org/biosecurity/education/dualuse-agriculture/2.-agricultural-biotechnology/index.html, March 26, 2013.

Coming in September 2013 is an essay on the natural gas industry’s current practice of hydraulic fracturing, i.e., “fraccing,” inspired by the 1911 Viggo Langer painting, ‘Oil Rigs in Baku at Caspian Sea.’
Copyright 2013 Deborah L. Jackman

By Deborah Jackman, PhD, PE, LEED AP™ - originally posted on 01/11/2013

trummerfrauen.jpg

The Painting and its Historical Significance:
“Trümmerfrauen” means “rubble women.” The painting, created in 1951 by Johvi Schulze-Görlitz, depicts a group of women sifting through the rubble of a bombed-out building, reclaiming bricks. The woman in the foreground is chiseling mortar from the bricks so that they can be reused in subsequent rebuilding projects. The need for this reclamation speaks to the utter devastation inflicted by the Allies upon Nazi Germany at the end of World War II. That women, rather than men, were engaged in this back-breaking labor reflects the fact that a large percentage of healthy, working-age German men had been killed or captured during the war and were not available to help rebuild. Even the date of the painting is telling: while the war ended in 1945, as late as 1951 significant areas of German cities still lay in ruin, and rebuilding continued well into the late 1970s in some areas. After the War, an estimated 14 billion cubic feet of rubble from destroyed buildings lay scattered throughout what was then West Germany [1]. All major German cities were affected. An estimated 80% of all historic buildings located in cities were destroyed. Housing was scarce because 6.5 million apartment units (out of a total of 16 million) throughout West Germany had been destroyed in the bombing. While some of the rubble was reclaimed and reused in new construction projects, much of it was simply piled up to create man-made hills; the rubble had to be consolidated so that still-intact buried infrastructure, such as sewer and water lines, could be accessed during rebuilding. The Teufelsberg (Devil’s Mountain), a mountain constructed of rubble located in Berlin, is 115 meters tall and is the second highest point in the city. In modern Berlin, Leipzig, Frankfurt, and other cities, there are even recreational sites for snowboarding, paragliding, and rock climbing that have been created from these artificial urban rubble mountains.

Eventually, through hard work and resources supplied by the U.S. under the Marshall Plan, Germany did rebuild itself. Today, it has the largest economy in Europe, with a strong manufacturing base, a highly trained workforce, and low unemployment. Along the way, likely due in part to its rebuilding experiences after the War, Germany has also become an international leader in the modern green building movement. Today, the average building in the U.S. uses approximately one third more energy than its German counterpart [3].

This essay will provide an overview of the modern green building movement, a summary of the various green building rating systems in use, and will look at the role that material selection and reuse plays in that movement.

The Green Building Movement—Definition and History:
At its most fundamental, a green building is one that minimizes negative impacts on the environment both during its construction and during its operation. Negative environmental impacts are those that contribute to the depletion of the Earth’s non-renewable resources or that degrade ecosystems. To understand how much impact buildings have on the environment, one need only consider that 40% of all energy used in the United States today goes to operating buildings. This equals the amount of energy used for all transportation functions (automobiles, trucks, trains, planes, and buses) nationwide. Most of this energy comes from non-renewable sources and results in the generation of significant amounts of greenhouse gases. And energy use is only one of several building-related factors that impact the environment. A number of rating systems and design protocols have been developed to assist architects and engineers in making sustainable choices as they design a building. Some of these “green” practices have also made their way into revisions of various building codes—a necessary step for institutionalizing green building strategies within the mainstream building design and construction industries. Before discussing specific green building strategies and rating systems, it is helpful to look briefly at the history of the green building movement and how these strategies and systems came to be.

Most scholars of the green building movement mark the start of widespread interest in sustainable design with the publication of Rachel Carson’s Silent Spring in 1962. This seminal work spurred the development not only of the green building movement, but of environmentalism in general. It directly led to the U.S. ban on DDT and was a catalyst for much of the major federal environmental legislation of the 1970s, such as the Clean Water Act, the Clean Air Act, and the Superfund legislation. Internationally, this heightened interest in the environment led to the 1972 Earth Summit, held in Stockholm, Sweden, and attended by representatives from 113 countries. As an outgrowth of this first Earth Summit, the Declaration of the United Nations Conference on the Human Environment was developed—a document that outlined 26 principles related to sustainability and human activity. Participants in this first Earth Summit also agreed to reconvene every 10 years to reassess progress toward achieving environmental goals [2].

Nearly concurrent with the first Earth Summit was the first Arab oil embargo of the U.S., which began in 1973. The Organization of the Petroleum Exporting Countries (OPEC), a confederation of oil-rich nations, mostly in the Middle East, ceased oil exports to the U.S. for political reasons. This created a temporary petroleum shortage in the U.S., driving gasoline prices to record highs and creating national interest in alternative energy sources and in energy conservation. With the creation in 1977 of the Department of Energy (DOE) and the National Renewable Energy Laboratory, the federal government increased funding of research into renewable energy technologies such as photovoltaic (solar) energy and wind energy—technologies that at the time were in their infancy, but which today have advanced to the point of being widely accepted alternatives to fossil fuel-generated electricity. The emphasis on energy conservation produced legislation to increase the fuel economy of automobiles. It also caused changes in building codes to ensure more energy efficient building envelopes. Once U.S. oil supplies appeared to regain a more secure footing in the early 1980s, energy prices dropped, and many U.S. consumers and businesses largely ignored energy conservation and alternative energy issues for the next two decades. Yet many of the structural changes created during the oil embargo years—the DOE’s funding of energy research within universities, automotive fuel efficiency standards, and building code changes to foster building envelope efficiency—remained, and work continued in the background, largely out of public view.

A third key event in the evolution of the environmental and green building movements was the formation in 1987 of the Brundtland Commission, convened under the authorization of the United Nations General Assembly to create a white paper on what constituted sustainable development. The resulting report, “Our Common Future,” had as its principal premise that sustainable international development is the process by which we “meet the needs of the present without compromising the ability of future generations to meet their own needs” [2]. The Brundtland Commission’s work is perhaps most significant because it promoted action on the part of European and Asian nations to create and enforce standards fostering environmental responsibility. As impactful as the Arab oil embargo was on U.S. attitudes about energy conservation, the Brundtland Commission was probably a greater influence outside the U.S. Hence, as popular interest in energy conservation waned in the U.S. during the 1980s, it increased in Europe. Subject to higher energy prices than Americans, European businesses and consumers in the 1980s and 1990s had a much larger financial incentive to conserve energy and to explore alternative energy technologies.

The work of the Brundtland Commission spurred a movement among European architects to incorporate sustainable features into their buildings. By the early 1990s, many European governments had mandated minimum energy efficiency standards for buildings. Building designs featuring sustainable elements became prominent in the work of such well-known European architects as Norman Foster and Willem Jan Neutelings [3]. U.S. architects recognized this emerging European design trend and realized the need to develop sustainable building standards in the U.S. A principal difference, however, between Europe and the U.S. is the degree to which the respective governments practice centralized planning and regulate building and development; the degree of regulation is much higher in Europe. So, whereas energy efficiency standards for buildings were mandated by many European governments by the 1990s, architects and engineers seeking to develop consistent standards for green building in the U.S. had to rely largely on the development of voluntary, industry-based standards. In 1993, the United States Green Building Council (USGBC) was formed as an outgrowth of discussions conducted during the American Institute of Architects’ (AIA) World Congress of Architects meeting held in Chicago that year [2].

The USGBC unveiled the first version of its Leadership in Energy and Environmental Design (LEED) green building rating system in 1998, after several years of discussions among its membership. The development of the first version of LEED was stimulated by a memorandum of understanding between the AIA and the DOE, finalized in 1996 during the Clinton administration, which called for the establishment of a roadmap for sustainable buildings for the 21st century and promised government support in the form of grants for related research. An executive order issued by President Clinton in 1998 required all government buildings to improve their energy management and to incorporate “environmentally preferred” material choices whenever possible [2]. While not directly impacting private buildings, this executive order provided impetus to the development of much of the intellectual infrastructure needed for a more integrated approach to sustainable design within the U.S. in subsequent years. LEED was revised and updated to Version 2.0 in 2000 and again to Version 2.1 in 2002, Version 2.2 in 2005, and Version 3.0 in 2009, in response both to increased interest in green building and to efforts to apply consistent metrics to what it means for a building to be “green.” In the early 2000s, after 9/11 and the ensuing Iraq war, energy prices again rose and public interest in renewable energy and energy conservation re-emerged. Around this same time, a number of prominent extreme weather events occurred worldwide (droughts, hurricanes, floods), which many attributed to global climate change caused by greenhouse gases. The cumulative effect of all of these factors is that today interest in green building is high. Green buildings are no longer viewed as exotic, but as mainstream. Because more architects and designers are now familiar with the strategies and tenets of green design, the design premiums previously attached to certified green buildings have greatly diminished. Owners and architects alike understand that on a life cycle cost basis (which considers both construction and on-going operating costs), a well-designed green building is actually less expensive than one designed to older conventional standards.

Green Building Rating Systems:
Especially in the early days of the green building movement, designers were not always clear on what constituted a “green” or sustainable building design practice. A strategy or material that initially seemed the greenest choice sometimes turned out, after further analysis, to be less green than the alternatives. The need for objective standards defining which building design practices are sustainable was the driver behind the establishment of various building rating systems. Below is a brief summary of the major green building rating systems in use today and their primary criteria.

  • BREEAM (established in 1988 by Britain’s Building Research Establishment)
    Even though LEED is the best known rating system in the United States, it is not the oldest. BREEAM was established in Britain in 1988 and is the oldest green building rating system in widespread use. It was directly inspired by the same wave of environmental activism in Europe that surrounded the formation of the Brundtland Commission discussed above. It is used widely in Great Britain, Germany, France, Spain, and Italy. It assesses buildings in the following areas:

    1. building management (during construction, commissioning, and operation);
    2. energy use;
    3. health and well-being of workers and occupants;
    4. water and air pollution generated by the construction and on-going operations of the building;
    5. transport (CO2 generated traveling to and from the building);
    6. land use (greenfield and brownfield sites);
    7. ecology (protection of sensitive building sites);
    8. materials (low impact materials based on life cycle analysis); and
    9. water (consumption and efficiency).

    Credits are awarded in each category, the credits are weighted relative to the importance of each category, and a building receives one of four ratings: Pass, Good, Very Good, or Excellent. A certificate is awarded that can be used for promotion by the building owners. BREEAM covers residences, offices, and industrial facilities, with different assessment methods for each category.
  • LEED (Leadership in Energy and Environmental Design, established in the U.S. by the USGBC in 1998, with updated versions since)
    This is the major rating system used in the U.S. It consists of seven categories of evaluation criteria:

    1. sustainable sites;
    2. water efficiency;
    3. energy and atmosphere;
    4. materials and resources;
    5. indoor environmental quality;
    6. innovation and design process; and
    7. regional priorities.

    Each category has a maximum number of points assigned to it, and if a building design and construction process meets a given criterion, it earns points in that category, up to the maximum limit. Total points are then calculated, and the building earns a Certified, Silver, Gold, or Platinum rating; a sketch of this points-to-rating mapping appears after this list of rating systems. Like BREEAM, a certificate is issued that the owner can use for publicity purposes. LEED rating systems for new commercial construction, core and shell construction, and schools are available. While the topics are organized and subdivided somewhat differently, the fundamental parameters that determine building sustainability are very similar between BREEAM and LEED. One significant difference between the two is how they were developed: BREEAM was created by a British national standards body and then adopted by various developers, whereas LEED was developed by consensus within the private sector, through conversations and debate among architects, engineers, and other interested parties. Other important points related to LEED include the fact that the regional priority credit category is new with LEED Version 3.0, added to address the criticism that LEED needed built-in flexibility to allow for regional differences. Also, in going from LEED Version 2.2 to Version 3.0, categories were reweighted to give greater importance to energy conservation and water conservation—arguably, the factors having the greatest environmental impacts.
  • Green Globes (originated in Canada but becoming popular in the U.S.)
    Green Globes is questionnaire-driven, with questions asked of the designer and builder in seven categories:

    1. project management;
    2. site;
    3. energy;
    4. water;
    5. resources, building materials, and solid waste;
    6. emissions and effluents; and
    7. indoor environment.

    Again, except for the slightly different organization of the categories, the essence of what constitutes valid criteria for a sustainable building project is similar to LEED and BREEAM. One major difference is that, in addition to the designers answering the questionnaire, the building project is only certified once a third-party auditor performs a site inspection/audit to verify that the reported features were in fact implemented in the building. Neither LEED nor BREEAM actually requires an audit of the final building; both rely on accurate self-reporting by the designers. Another difference between Green Globes and LEED is that in Green Globes the project is not penalized if a particular point is unavailable to it. For example, LEED gives a point for a project built on a brownfield site, counted within the Sustainable Sites category. If the project’s location precludes a brownfield site, there is no way for the project to regain that point, and it may be unable to achieve the highest rating even if it is exemplary in all other respects. Green Globes’ points, by contrast, are adjusted based on project location. For these reasons, and also because some see LEED as increasingly driven by monetary interests within the USGBC, many prefer Green Globes.
  • CASBEE (Japan) and Green Star (Australia)
    Space precludes a detailed description of these in this essay. The interested reader is referred to [4] for more information on these two rating systems and also for more detail on LEED, BREEAM, and Green Globes.
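To make the scoring mechanics concrete, here is a minimal sketch of how a LEED-style point total maps to a rating tier. The rating thresholds (40, 50, 60, and 80 points out of a possible 110) are those published for LEED Version 3.0 (2009); the category names follow the LEED list above, and the sample project point values are invented purely for illustration.

# Minimal sketch of LEED-style scoring: sum category points, then map the
# total to a rating tier. Thresholds follow the published LEED v3 (2009)
# scale; the sample project values below are hypothetical.
LEED_V3_TIERS = [(80, "Platinum"), (60, "Gold"), (50, "Silver"), (40, "Certified")]

def leed_rating(category_points: dict) -> str:
    """Sum a project's category points and map the total to a LEED v3 tier."""
    total = sum(category_points.values())
    for threshold, tier in LEED_V3_TIERS:
        if total >= threshold:
            return f"{tier} ({total} points)"
    return f"Not certified ({total} points)"

# Hypothetical project scorecard, one entry per LEED category:
project = {
    "sustainable sites": 18,
    "water efficiency": 8,
    "energy and atmosphere": 22,
    "materials and resources": 6,
    "indoor environmental quality": 9,
    "innovation and design process": 4,
    "regional priorities": 2,
}
print(leed_rating(project))  # prints "Gold (69 points)"

BREEAM and Green Globes differ in category names, weightings, and tier labels, but the underlying bookkeeping is of the same form.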

Regardless of which of these rating systems is used, they share several characteristics. All account, in varying degrees and using different algorithms and point systems, for building energy usage, water usage, materials usage, ecological impacts and site considerations, and occupant well-being. None are perfect, and each can be manipulated in ways that actually produce a less green outcome, from an objective standpoint. For example, LEED has been criticized as stifling true design innovation by incentivizing designers to make certain design choices just to earn the points needed to attain a higher level of certification, even if those choices don’t contribute optimally to the sustainability of the overall design. Critics argue that truly innovative solutions are passed over because they don’t earn the designer or owner LEED points. To some extent, USGBC has attempted to address such criticisms by incorporating innovation points into the LEED system, but this is not a perfect solution. Ultimately, the effectiveness of any of these rating systems in ensuring an optimal building design lies in the skill and common sense of the design team employing them. There is no “one size fits all” solution to green building design; solutions depend on the location and use of the building, owner preferences, the budget, and other factors. The designer must optimize the sustainability of the building within the context of these other factors.

Building Material Reuse and Recycling:
Since the environmentally efficient use of materials is a parameter used in all the major green building rating systems described above, and since our subject painting directly speaks to the reuse of bricks, let’s briefly review how building material selection impacts the modern green building movement.

First, it is worth taking a moment to define the terms “reuse” and “recycling.” While sometimes mistakenly used interchangeably, they are technically different. “Reuse” of a material means taking a used building element—a brick, a timber, flooring, a door, or another architectural element—and simply using it again in another structure for the same or a similar purpose. Reuse can involve a cleaning or machining step (e.g., chipping mortar off bricks or resizing a timber with a saw), but it does not involve reprocessing the material and remanufacturing it into another form. “Recycling” of a material means taking a used building element and reprocessing it so that it becomes raw material for a distinctly different finished product. A good example of a commonly recycled building element is steel: a steel beam from a demolished building can be sent to a steel mill, melted down, and reformed into another steel object, such as sheet goods that can be used in automobiles or other products totally unrelated to the building process. In this context, the bricks in our subject painting are destined for reuse, rather than recycling.

A related point is that just because a building product is recycled or reused does not mean it is the most sustainable choice for a given building project. There are five generally recognized factors that can contribute to how green a building product is [5]:

  1. The product is made from “environmentally attractive” materials such as those that are salvaged, recycled, renewable, minimally processed or harvested in a sustainable manner.
  2. The product is “green” because of what it does NOT contain, for example treated lumber that doesn’t use conventional preservatives shown to harm the environment.
  3. The product reduces environmental impacts during construction, renovation, or deconstruction because of the way it is designed, e.g. certain types of modular building panels that reduce site disturbances during installation and which are easy to disassemble and reuse.
  4. The product helps to reduce negative environmental impacts during building operation, e.g. products that make the building very energy or water efficient and thereby increase overall building sustainability.
  5. The product contributes to a safer indoor environment within the building both for workers during construction and for occupants, e.g. low VOC paints that don’t release harmful fumes.

Sometimes a new product that contributes significantly to building energy efficiency is greener than a reused element that would make the building less energy efficient. New versus reused windows are a good example: the new window, with high efficiency glass, would generally be considered the greener choice. Some material choices embody more than one of the five factors listed, and generally speaking, the more of the five factors a single building product embodies, the more likely it is to be the most sustainable choice for a given situation, as the sketch below illustrates.
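As a toy illustration of that rule of thumb (and not part of any formal rating system), one can treat the five factors as a checklist and count how many a candidate product satisfies. The factor names here paraphrase the list above, and the window data are invented:

# Hypothetical checklist scoring of a product against the five "green
# product" factors; all data here are invented for illustration.
GREEN_FACTORS = [
    "environmentally attractive materials",  # factor 1: salvaged, recycled, renewable
    "free of harmful components",            # factor 2
    "reduces construction-phase impacts",    # factor 3
    "reduces operational impacts",           # factor 4: energy/water efficiency
    "safer indoor environment",              # factor 5
]

def greenness_score(product_attributes: set) -> int:
    """Count how many of the five factors a product satisfies."""
    return sum(1 for factor in GREEN_FACTORS if factor in product_attributes)

reused_window = {"environmentally attractive materials"}
new_high_efficiency_window = {"reduces operational impacts", "safer indoor environment"}
print(greenness_score(reused_window))               # 1
print(greenness_score(new_high_efficiency_window))  # 2 -- the likely greener choice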

The Life Cycle Assessment (LCA) methodology provides an integrated way of determining how sustainable a given building material choice is overall. LCA is able to factor in the effects of multiple attributes (e.g., recycled content, high energy efficiency, low-impact manufacturing process) and predict which material choices have the lowest environmental impacts overall. LCA requires a detailed accounting of the raw material and energy inputs throughout the life cycle of the product, as well as knowledge of the emissions generated during product manufacture, transport, installation, use, and salvage/demolition. The life cycle of the product starts at the point where any raw materials needed to manufacture the product are mined or harvested, and continues until the building product is removed from the building many years later during demolition; a simple sketch of this stage-by-stage bookkeeping appears below. The major downside to LCA is that we do not currently have sufficiently detailed databases quantifying raw material inputs, energy inputs, and emissions for many common processes and products. However, as research in the area of sustainable construction continues, such databases are growing over time.
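In rough outline, the energy side of an LCA is a cradle-to-grave accounting exercise: list every life cycle stage and sum the energy each consumes. The sketch below shows that bookkeeping for a single product. The stage names and megajoule values are hypothetical placeholders (the new-brick total is chosen to be roughly consistent with the approximately 4300 BTU, or about 4.5 MJ, per-brick figure cited in the next paragraph); a real assessment would draw its inputs from measured LCA databases.

# Minimal sketch of a cradle-to-grave energy inventory for one building
# product. Stage names and MJ values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class LifeCycleStage:
    name: str
    energy_mj: float  # energy consumed during this stage, in megajoules

def embodied_energy(stages: list) -> float:
    """Sum energy across all life cycle stages: the product's embodied energy."""
    return sum(stage.energy_mj for stage in stages)

# Hypothetical stage inventories for a new brick versus a reused brick:
brick_new = [
    LifeCycleStage("clay extraction", 0.4),
    LifeCycleStage("kiln firing", 3.4),
    LifeCycleStage("transport to site", 0.5),
    LifeCycleStage("installation", 0.2),
]
brick_reused = [
    LifeCycleStage("salvage and mortar removal", 0.3),
    LifeCycleStage("transport to site", 0.5),
    LifeCycleStage("installation", 0.2),
]
print(embodied_energy(brick_new))     # 4.5 MJ (hypothetical)
print(embodied_energy(brick_reused))  # 1.0 MJ -- reuse avoids extraction and firing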

One result of an LCA analysis of a building product is knowledge of that product’s embodied energy. Embodied energy is the total energy consumed in the acquisition and processing of raw materials, including manufacturing, transportation, and final installation. The lower a building product’s calculated embodied energy, the lower its environmental impact. Reused products usually have lower embodied energy than newly manufactured products of the same type because the energy that went into the original manufacturing steps (e.g., kiln firing a brick) does not have to be expended again before the product can be reused. It is interesting to note that according to the U.S. EPA [6], the U.S. manufactured over 8.3 billion clay bricks in 2001. Each brick has an embodied energy of approximately 4300 BTU [7]. Were we to reuse even a fraction of these bricks rather than simply manufacture them anew, the potential energy savings would be huge, as the arithmetic below suggests. At least in the case of bricks, reuse makes tremendous environmental sense. One obstacle in the way of building materials reuse is the difficulty of matching those who have materials available for reuse with those who wish to reuse them. This obstacle can be addressed more easily now than in the pre-Internet days through the development of searchable on-line materials exchanges such as the example shown in [8].
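To put rough numbers on that claim, using the production and per-brick figures above, and assuming (as a simplification) that reusing a brick avoids essentially all of its original manufacturing energy, reusing just 10 percent of one year’s production would save on the order of 3.6 trillion BTU:

# Back-of-the-envelope brick-reuse savings. Production and per-brick energy
# figures come from the essay's cited sources [6], [7]; the 10% reuse rate
# and the "reuse avoids all manufacturing energy" assumption are hypothetical.
bricks_per_year = 8.3e9      # U.S. clay brick production, 2001 [6]
btu_per_brick = 4300         # approximate embodied energy per brick [7]
reuse_fraction = 0.10        # hypothetical reuse rate

total_btu = bricks_per_year * btu_per_brick   # ~3.57e13 BTU in one year's bricks
savings_btu = reuse_fraction * total_btu      # ~3.57e12 BTU saved
print(f"{total_btu:.2e} BTU embodied in a year's production")
print(f"{savings_btu:.2e} BTU saved at 10% reuse")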

Final Thoughts:
A typical building has a life expectancy of 30 to 50 years or more, depending on its use, historical value, and other factors. Design decisions made today on a building project will therefore influence energy consumption, water consumption, site ecology, and overall sustainability for decades. Arguably, wise design decisions are critical to the long-term health of our environment. This brief overview of green building rating systems and of the sustainability of building materials is intended to increase awareness among users and consumers of buildings—homeowners and commercial developers alike—of the critical importance of the green building movement. By reclaiming used bricks from the rubble of World War II, our Trümmerfrauen were employing aspects of sustainable design and construction, albeit because of the exigency of their circumstances rather than due to any conscious effort on their part to be “green.” Yet the Trümmerfrauen painting provides us with an interesting historical segue to better understanding the significance of the sustainable building movement in our own times.

References and Further Reading:

  1. Leick, R., Schreiber, M., and Stoldt, H.; “Out of the Ashes – A New Look at Germany’s Postwar Reconstruction,” Der Spiegel Online International, August 10, 2010. (http://www.spiegel.de/international/germany/out-of-the-ashes-a-new-look-at-germany-s-postwar-reconstruction-a-702856.html)
  2. Korkmaz S., Erten D., Varun Potbhare M.; “A Review of Green Building Movement Timelines in Developed and Developing Countries to Build an International Adoption Framework,” Proceedings of the Fifth International Conference on Construction in the 21st Century (CITC-V), May 20-22, 2009, Istanbul, Turkey.
  3. Ouroussoff, N., “Why Are They Greener Than We Are?”, The New York Times Magazine, May 20, 2007.
  4. Kibert, C.; Sustainable Construction—Green Building Design and Delivery, 3rd Edition, John Wiley and Sons, Inc., Hoboken, New Jersey, 2012.
  5. Wilson, A., “Building Materials: What Makes a Product ‘Green’?”, Environmental Building News, January, 2000.
  6. “Background Document for Life-Cycle Greenhouse Gas Emission Factors for Clay Brick Reuse and Concrete Recycling,” EPA530-R-03-017, November, 2003.
  7. “Sustainability and Brick,” Technical Notes on Brick Construction-TN 48, the Brick Industry Association, Reston, VA, June 2009.
  8. The Used Building Materials Exchange, http://build.recycle.net/exchange/.

Coming in Spring 2013 is an essay on sustainable farming practices, inspired by William Watson’s oil on canvas work, “Plowing with Oxen Teams.”

experience-of-the-german-autobahn.jpg

By Deborah Jackman, PhD, PE, LEED AP™ - originally posted on 06/04/2012

The Photograph and its Significance:
The subject work being considered in this essay is not a painting but a photograph taken in 1935 by the German photographer Hermann Harz. It is one of several in a series of dye-transfer photographs highlighting the German Autobahn system.

The Nazi regime used photographs to document its major construction projects—the Autobahn construction and the building of a number of major bridges—for propaganda purposes. It wished to demonstrate the superiority of the National Socialist system over that of other world governments. However, while the Nazis used the Autobahn as a propaganda tool, the concept of the Autobahn did not originate under the Nazis, but rather in the 1920s during the Weimar Republic.

In 1926, the Planning Association for the Motorway Linking the Hanseatic Towns, Frankfurt and Basle (HAFRABA) was created under the leadership of Willy Hof, a prominent German business leader. HAFRABA was not a governmental body, but rather a private organization dedicated to advocating for the development of modern roadways. By 1926, motorized vehicles were becoming fairly commonplace among ordinary German citizens, and HAFRABA wanted to promote the development of an infrastructure that would allow motorized vehicles to travel easily and safely across the entire country, thereby connecting major cities and promoting commerce. To this end, HAFRABA drew up construction plans for the roadway and began to lobby the Weimar Republic to finance the construction of this system of roadways (the Autobahn).

The Weimar Republic rejected the project for two basic reasons. First, in order to build the Autobahn, the government needed to secure financing. Since bank credit was not readily available to the German government after the First World War, the only financing option would have been to charge tolls, as Italy did in building its national roadways in the 1920s. However, the assessment of tolls was a breach of German law (the Financial Adjustment Law of 1926) and was therefore not an option. The second reason was that the German railroads lobbied against the development of any national roadway system, fearing that it would negatively impact their industry.

Thus, it wasn’t until the Great Depression and the rise of the Nazi party to power that the idea of building the Autobahn was reconsidered. Hitler saw value in supporting the construction of the Autobahn for two reasons: first, it was a way to offer thousands of unemployed Germans employment, and second, it could be used as a propaganda tool to showcase the Nazi regime. Therefore, in 1933, Hitler met with Willy Hof to discuss the project. Hitler was able to use the brute force of his dictatorship to compel the railroads to withdraw their objection to the project. The project was financed by taxes on crude oil and petroleum, a levy on the railroads, loans from German banks, and by savings in unemployment compensation due to increased construction industry employment. Contrary to popular belief, the Autobahn was not primarily built by slave labor. Up until 1939, when most able-bodied German men were conscripted into the military, free (paid, civilian) labor was used to build the Autobahn. It was only after 1939 that slave labor, made up of concentration camp prisoners and POWs, was used [1], [2].

Today, the building of the German Autobahn remains a highly charged subject, filled with negative political and historical implications. However, from a purely technical viewpoint, the Autobahn represents a significant engineering achievement. Despite its Nazi origins, the building of the Autobahn helped to lay the groundwork for post-World War II German road-building technology and design expertise. Modern German highway engineers use advanced engineering techniques to extend the life of pavements, to reduce maintenance costs, and to minimize environmental impacts. Pavement design methods used in the German highway system are being studied in the United States today, in an effort both to improve general pavement performance and to make US roads more sustainable.

A Primer on Pavement Design:
Before beginning a discussion of sustainable pavement, we will cover, very broadly, the basics of pavement design in order to become familiar with the types of pavement and their characteristics. This will allow an informed discussion of how sustainable pavement technologies can best be incorporated into US roads.

Pavements can be placed in one of three categories: rigid pavement, flexible pavement, or composite pavement. Rigid pavement includes portland cement concrete pavements, with or without expansion joints and with or without steel reinforcement. Flexible pavement includes asphalt concrete pavement (more commonly known simply as asphalt pavement). Composite pavement consists of rigid (concrete) pavement underneath, covered with an overlay of asphalt. Each type of pavement has its unique advantages [3].

Rigid pavement is built of portland cement concrete, which is composed of portland cement, fine and coarse aggregates, water, and various chemical additives to improve workability. Air is also sometimes entrained in the concrete to varying degrees, depending on the physical properties desired. In the case of reinforced concrete pavements, steel rebar is also used as part of the roadbed. The chemical reaction between the portland cement and the water drives a curing process that transforms the concrete mixture from a very viscous slurry into a solid with high compressive strength. The fine and coarse aggregates used are traditionally various sizes of sand and gravel. However, in an effort to reduce the environmental impacts of rigid pavements, various recycled materials are increasingly being substituted for sand and gravel as the aggregate. Such recycled materials include ground glass and ceramics, foundry slag, and crushed portland cement concrete recovered from demolished roadways and construction projects and recycled back into the concrete mix as aggregate [3]. Fly ash from power plants has been used since the 1980s to replace a portion of the portland cement component in concrete. (US EPA procurement guidelines have, since the mid-1990s, required the use of fly ash in concrete for many federally funded projects.) The use of such recycled materials does more, from an environmental perspective, than merely keep these materials out of landfills. Viewed from a Life Cycle Assessment standpoint (discussed in greater detail below), using recycled materials in rigid pavements saves significant amounts of energy and raw materials and reduces the overall amount of greenhouse gases generated over the life cycle of the highway, primarily because virgin sand and gravel do not need to be mined and transported to the construction site. In the case of old highways being demolished and rebuilt, even greater environmental benefits can be achieved if the old concrete to be used as aggregate can be reground and reused on-site, thereby saving the energy that would have been expended to transport the recycled material to the road site.

Flexible pavement consists of asphalt concrete—a mixture of asphalt cement (a tar-like binder derived from the refining of crude oil), fine and coarse aggregate, and various chemical fillers and additives to improve performance and workability. Asphalt paving operations are very energy intensive, both because of the energy needed to extract and refine the petroleum from which the asphalt cement is made and because the asphalt must be heated in order to liquefy it sufficiently to be mixed with the aggregate and laid down on the roadbed. Efforts to make flexible pavement more sustainable include: 1) the use of recycled materials in place of virgin aggregate; 2) the use of various recycled materials which contain asphalt (such as roofing shingles), melted down and used in place of some of the virgin asphalt cement; and 3) the use of various additives to increase the workability of the asphalt cement at lower temperatures, thereby reducing the energy demands associated with mixing and laying down the pavement.

Composite pavement consists of a bed of rigid pavement overlaid with asphalt concrete. It combines the environmental impacts of both rigid and flexible pavements. Its primary advantage over either of the other two types is improved ride-ability and noise reduction characteristics [3].

Regardless of the pavement type, the key to optimal durability and performance is a well-prepared road bed, including a properly designed drainage system and a properly designed and compacted sub-base (usually comprised of various grades of dirt, sand, and gravel).

Decisions on which pavement type to use have historically been driven by cost, and have differed based on location and societal viewpoints. In the US, first-cost considerations have often dominated road building decisions. Many US governmental bodies (federal, state, and local) seem willing to tolerate higher ongoing maintenance costs in exchange for lower first costs. This philosophy has favored the use of flexible and composite pavements in many (although not all) areas of the US. In Europe (in Germany and Austria in particular), however, the philosophy favors building roads that are extremely durable, have low maintenance requirements, and have very long life spans. This philosophy tends to favor rigid pavement designs, which can have significantly higher first costs but lower maintenance costs. The Autobahn itself is an example of this.
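
The tradeoff between first cost and ongoing maintenance can be made concrete with a simple present-value comparison. The Python sketch below uses entirely hypothetical costs, lifetime, and discount rate; it illustrates the logic of the two philosophies, not actual highway economics:

    # Present-value comparison of a high-first-cost/low-maintenance design vs.
    # a low-first-cost/high-maintenance design. All figures are hypothetical.
    def life_cycle_cost(first_cost, annual_maintenance, years, discount_rate):
        """First cost plus the present value of uniform annual maintenance."""
        pv = first_cost
        for year in range(1, years + 1):
            pv += annual_maintenance / (1 + discount_rate) ** year
        return pv

    rigid = life_cycle_cost(1_000_000, 10_000, years=40, discount_rate=0.04)
    flexible = life_cycle_cost(700_000, 35_000, years=40, discount_rate=0.04)
    print(f"Rigid:    ${rigid:,.0f}")     # ~ $1.20 million
    print(f"Flexible: ${flexible:,.0f}")  # ~ $1.39 million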

While cost has historically been the main driver in pavement design, sustainability is now an additional design parameter considered by highway engineers. Because of the German emphasis on rigid pavement design, much of the research on how to make rigid pavements more sustainable has come out of Germany. It is based on the premise that by increasing life span and minimizing maintenance requirements, one inherently lowers environmental impacts, because over the life span of the road fewer interventions involving the expenditure of energy and raw materials are needed. Because of the greater emphasis on flexible pavements in much of the US, US engineers have emphasized the recycling of materials and various energy conservation strategies in their quest to develop more sustainable pavements. Part of this emphasis is driven by governmental mandate. (The US federal government passed the Intermodal Surface Transportation Efficiency Act (ISTEA) in 1991, which mandated the use of recycled tire rubber in asphalt paving projects receiving federal funding, beginning in 1994.) Both approaches have validity, but ultimately neither provides the full picture of how to maximize pavement sustainability. The big picture can only be understood in the context of a Life Cycle Assessment, discussed below.

Life Cycle Assessment and the Attributes of Sustainable Pavement:
In trying to determine which of the pavement types (rigid or flexible) is more sustainable, and in trying to develop new strategies for minimizing the environmental impacts of roads and pavement, a life cycle assessment must be conducted. Life Cycle Assessment (LCA) is a technique which views the entire life cycle of an engineered system as a single control volume. It looks at energy and mass inputs and outputs from the control volume and quantifies the environmental impacts based on how much energy and how many resources are used to create the system and on how much hazardous waste and greenhouse gas are generated. It must include impacts ranging from the energy required to extract and transport the raw materials during initial construction, to the impacts which occur during routine system maintenance, to the impacts created during final disposal of residues and wastes from the system following demolition. Researchers have created databases and software packages which help to catalog and calculate these environmental impacts.

One of the most comprehensive LCA databases is the US Department of Energy’s LCI Database, available on-line [4]. The DOE LCI database quantifies the environmental impacts of a number of basic industrial and construction processes in terms of the amount of energy they consume and the amount of greenhouse gases and toxic emissions they produce. Using the DOE LCI, one can find the energy cost and emissions for the production of unit quantities of portland cement, asphalt, steel, and other raw materials used in pavements. Using these data, along with estimates of the energy and emissions costs to transport and install the materials, researchers are able to quantify the relative sustainability of different pavement types and systems.
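
As an illustration of how such unit factors are combined (without reproducing any actual DOE LCI values), the Python sketch below rolls a hypothetical bill of materials up into lane-mile totals; every number in it is a placeholder:

    # How unit-process factors (the kind of data catalogued in the DOE LCI
    # database [4]) roll up into totals. All numbers here are placeholders.
    unit_factors = {
        # material: (MJ per tonne produced, kg CO2e per tonne produced)
        "portland cement": (4800.0, 900.0),
        "asphalt binder": (2600.0, 190.0),
        "aggregate": (40.0, 3.0),
        "steel rebar": (20000.0, 1900.0),
    }

    def pavement_impact(bill_of_materials):
        """Sum energy (MJ) and CO2e (kg) over a bill of materials in tonnes."""
        energy = sum(unit_factors[m][0] * t for m, t in bill_of_materials.items())
        co2 = sum(unit_factors[m][1] * t for m, t in bill_of_materials.items())
        return energy, co2

    rigid_lane_mile = {"portland cement": 400, "aggregate": 2400, "steel rebar": 40}
    energy_mj, co2_kg = pavement_impact(rigid_lane_mile)
    print(f"Illustrative rigid lane-mile: {energy_mj/1000:.0f} GJ, {co2_kg/1000:.0f} t CO2e")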

Horvath and Hendrickson [5] conducted an LCA comparing asphalt pavement to steel-reinforced concrete pavement (RCP). The LCA considered energy consumption, ore and fertilizer requirements, toxic emissions, and the hazardous wastes generated during extraction, transportation, mixing, and construction of both pavement types. Their study initially assumed no recycled content in either type of pavement; the analysis was based upon the use of virgin raw materials. Their conclusion was that RCP required less energy and generated smaller amounts of hazardous waste, but had higher ore and fertilizer requirements and higher toxic emissions, than did asphalt pavement. However, if one subsequently accounts for the fact that there is currently more recycled content in asphalt pavement than in RCP, asphalt pavements can be concluded to be marginally more sustainable.

The results of this study can best be understood if one understands the primary environmental impacts of both RCP and asphalt pavements. Portland cement production is extremely energy intensive, and the production of the portland cement used in RCP is arguably the single largest negative environmental impact associated with rigid pavements. The largest negative environmental impacts in the production of flexible (asphalt) pavements are the large amounts of energy needed to extract and refine the petroleum from which the asphalt is produced and the energy required to heat the asphalt mix prior to installation. To the extent that a portion of virgin asphalt is being successfully replaced with recycled asphalt shingles, recycled tires, and re-melted recycled asphalt pavement, the total energy cost to produce flexible pavements can be driven down. Since the portland cement component in concrete is chemically changed during the curing process, a similar opportunity to recycle this component of RCP, and thus save energy, does not exist. Hence, other strategies, mainly aimed at extending the life of concrete pavements, must be employed to make rigid pavement design more sustainable.

Recent Developments to Enhance the Sustainability of Pavements:
Keeping in mind the principles of LCA and the attributes of sustainable pavements discussed above, a number of interesting avenues to minimize the environmental impacts of both rigid and flexible pavements are being researched.

In the category of rigid pavement, German engineers continue to lead the quest to reduce the environmental impacts of portland cement concrete pavements.

One of the major directions this research is taking is the development of two-layer concrete pavements. These pavements consist of two separate portland cement concrete (PCC) layers—a thick sub-layer covered with a relatively thin wear layer. Such pavements have been shown to have durability comparable to traditional single-layer PCC pavement, yet because they are comprised of two layers, it is possible to use higher amounts of recycled materials in the aggregate of the sub-layer. Lower quality aggregate—such as recycled ground PCC pavement, ground glass, and slag—is used in the sub-layer. These recycled aggregates do not appear to reduce the structural performance of the pavement. A higher quality, more expensive aggregate, such as pea gravel, is reserved for the wear layer. This aggregate is exposed as part of the surface finishing process during construction. The exposed aggregate improves roadway safety (by improving pavement friction characteristics) and reduces road noise. In addition to reducing environmental impacts and improving friction and noise characteristics, the two-layer PCC pavement is also cheaper [6]. Another German innovation in two-layer PCC pavement is the use of a 3-millimeter-thick polymeric geotextile as an interlayer between the two PCC layers [7]. This interlayer has been shown to lengthen the life of the pavement through three mechanisms: 1) the interlayer keeps cracks and other discontinuities in the lower layer from propagating to the wear layer; 2) the interlayer, if properly installed, can drain any water that enters the wear layer away from the bottom structural layer, thereby increasing roadbed life by reducing cracking due to freeze-thaw cycles; and 3) the interlayer absorbs some of the dynamic stresses caused by heavy traffic, thereby reducing stresses on the structural sub-layer and thus extending its life. The primary disadvantages of the geotextile are its added cost and the need for careful installation to ensure proper performance, which necessitates a highly proficient and well-trained construction team.

Another German road design practice that is making its way into the US is a movement away from steel-reinforced concrete toward plain concrete pavements. In a seminal 15-year longitudinal study comparing standard US concrete pavement design (the control) to standard German concrete pavement design along a stretch of Michigan highway near Detroit, one conclusion was that steel reinforcement can actually promote transverse cracking in the pavement, thereby shortening its life [8]. Since eliminating steel from PCC pavement removes one material input from the LCA, while potentially also lengthening the pavement’s overall life and reducing maintenance costs, this change can make PCC pavements more sustainable as well.

Finally, another growing trend to improve the sustainability of PCC pavement has been to use scrap tires, instead of coal, to fuel portland cement kilns. Since the production of portland cement remains highly energy intensive, one way of mitigating the environmental impacts across the life cycle of the pavement is to recover the embodied energy in scrap tires rather than use “virgin” coal to fuel the kilns. Not only is this more sustainable, but it also reduces energy costs during production. In 1996, 23 cement plants across the US used tires as a supplemental fuel, and air emissions from burning scrap tires are no worse than air emissions from burning coal. Given that some 250 million tires (with a heating value of roughly 15,000 BTU per pound) are discarded in the US each year, this represents a potentially significant energy and material savings [9].
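
A rough sense of the scale involved can be had from the figures in [9], plus one assumption the source does not supply (an average tire weight), as in the Python sketch below:

    # Scale check on tire-derived fuel, using the figures cited in [9].
    tires_per_year = 250e6        # scrap tires discarded in the US per year [9]
    btu_per_pound = 15_000        # heating value cited in [9]
    avg_tire_weight_lb = 20       # assumption: average passenger tire weight
    total_btu = tires_per_year * avg_tire_weight_lb * btu_per_pound
    coal_tons_equiv = total_btu / 25e6   # assuming ~25 million BTU per ton of coal
    print(f"{total_btu:.1e} BTU per year, about {coal_tons_equiv:,.0f} tons of coal")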

Recent developments to improve the sustainability of flexible pavement (in addition to the increasing reuse of asphalt concrete as re-melt and the use of used asphalt shingles, both discussed above) include Cold-in-Place recycling (CIR) and Cold-in-Place Recycled Expanded Asphalt Mix (CREAM) [10]. Both of these technologies involve the on-site reclamation of used asphalt pavement using specialized machinery that can demolish the existing pavement, regrind it on-site, re-melt it, and lay it down to create a new pavement surface. CIR uses various chemical additives and conditioners to allow the recycled pavement to be melted and reworked at lower temperatures than normal, thus saving energy. CREAM uses a similar technology but also adds air to create an asphalt foam that can be reworked at lower temperatures. The combination of on-site reclamation (which removes transportation impacts from the LCA and reduces the impacts associated with the use of virgin asphalt cement) with the reduced energy costs of re-melting and remixing the asphalt places both CIR and CREAM at the forefront of sustainable technologies for flexible pavements.

Final Thoughts:
Increased mass transportation and the development of vehicles that minimize the use of fossil fuels are the ideas that have often dominated our national discussion of ways to make our transportation system more sustainable. Indeed, both are important strategies in the overall effort to reduce the environmental impacts of transportation. But, the use of the personal automobile is not likely to vanish any time soon. And, our economy relies on semi tractor-trailers to haul large amounts of freight. Given these facts, the need for well-constructed highways will continue into the foreseeable future. However, as this essay is intended to show, the existence of our national highway system can be compatible with responsible environmental stewardship. Highway engineers have made considerable progress through innovative designs and increased recycling to reduce the environmental impacts of highway construction and maintenance. And, just around the “bend in the road,” there will undoubtedly be even more interesting and exciting developments in sustainable pavements in the future.

References:

  1. “The Autobahn Myth”; Oster, Uwe; History Today , November, 1996, p. 39- 41. (Translated from German by Judith Hayward).
  2. “The Third Reich’s Concrete Legacy”; Boser, Ulrich; U.S. News and World Report, Volume 134, Issue 23, p. 45, June 30, 2003.
  3. The Highway Engineering Handbook—Building and Rehabilitating the Infrastructure, 3rd Edition; Roger L. Brockenborough, P.E., editor; 2009; ISBN: 978-0-07-159763-0.
  4. DOE LCI Database, https://www.lcacommons.gov/nrel/search.
  5. “Comparison of Environmental Implications of Asphalt and Steel-Reinforced Concrete Pavements”; Horvath, A. and Hendrickson, C., Transportation Research Record: Journal of the Transportation Research Board of the National Academies; Volume 1626, 1998.
  6. “Design and Construction of Sustainable Pavements: Austrian and German Two-Layer Concrete Pavements”; Tompkins, D., Khazanovich, L., Darter, M., and Fleischer, W.; Transportation Research Record; Volume 2098, p. 75-85, 2009.
  7. “Nonwoven Geotextile Interlayers for Separating Cementitious Pavement Layers: German Practice and U.S. Field Trials”; Rasmussen, R. and Garber, S.; Research Report prepared by the International Scanning Study Team for the Federal Highway Administration, U.S. Department of Transportation, May 2009.
  8. Fifteen Year Performance Review of Michigan’s European Concrete Pavement, Smiley, D., Report Number R-1538, Michigan Department of Transportation, Construction and Technology Division, February, 2010.
  9. A Comparison of Six Environmental Impacts of Portland Cement Concrete and Asphalt Cement Concrete Pavements; Gadja, J., and VanGeem, M., PCA R&D Serial No. 2068, Portland Cement Association, 2001.
  10. “Sustainable Pavements: Environmental, Economic, and Social Benefits of In-Situ Pavement Recycling”; Alkins, A., Lane, B., and Kazmierowski, T.; Transportation Research Record: Journal of the Transportation Research Board of the National Academies, Volume 2084, 2008.

fantastic river landscape.jpg

By Deborah Jackman, PhD, PE, LEED AP™ - originally posted on 09/18/2012

The Painting and its Historical Significance:
No series of essays featuring the works from the Man at Work Collection would be complete without including one of the seminal works in the collection—the 1609 oil painting by Flemish artist Marten van Valckenborch, “Fantastic River Landscape with Ironworks”. Not only is it one of the oldest works in the Collection, but it depicts fledgling elements of the steel industry, an industry that underpinned much of the Industrial Revolution and the development of modern society. It is also an industry with enormous environmental impacts, one which, in recent years, has undergone significant changes to improve its sustainability.

On the left bank of the river is the ironworks. The ironworks comprises groups of workers who offload iron ore from boats; a blast furnace used to convert iron ore to elemental iron; a forge shop used to produce wrought iron implements from the elemental iron; and a lime quarry on the hillside above. The blast furnace is the structure in the middle-left foreground with the large rectangular stone chimney and one water wheel. The forge shop is located just to the left of the blast furnace, in the structure without walls with the pyramidal thatched roof, attached to the building with two water wheels. (The water wheels at both the blast furnace and the forge were used to power the air bellows which supplied the air required by both processes.) The lime being quarried on the hillside was used in the blast furnace during the chemical reduction of iron oxide to elemental iron. On the right bank of the river, farming activities are depicted. The contrast between the activities on the opposite banks of the river is evidence that the painting was created in a period of transition: the early 17th century represents the start of the Industrial Revolution and a move away from an agrarian society. For the purpose of this essay, we will focus on the activities on the left bank of the river—those associated with iron making.

In 1609, iron produced in blast furnaces was transformed into wrought iron implements such as plowshares, horseshoes, and other hardware items, by blacksmiths at the forge. In 2012, blast furnaces are still used, but the iron produced in them is nearly all used for the subsequent manufacture of steel, as discussed in the section below. The blast furnace depicted in the painting is thus a focus of our study because it is a technology that has endured over the last 400 years (albeit with improvements along the way) and because its operation has the greatest environmental impacts of any process associated with steel production.

An Introduction to Steel Production:
Modern steel plants vary from plant to plant in their details of operation. But, the general process used to produce steel is common to nearly all modern plants [1]. This general process includes the following steps:

  1. Various material handling practices and technologies to bring the needed raw materials to the plant and prepare them for further processing. The raw materials include iron ore, limestone, and fuel, usually coking coal. Depending on the exact processes used in a given plant, the iron ore is processed before being introduced into the blast furnace by crushing, pelletizing, and/or concentration. The general term for preparing the iron ore for the blast furnace is beneficiation. Different plants use different beneficiation processes, which vary in complexity and which depend, in part, on the type and quality of iron ore being used. Limestone—also used as a reagent in the blast furnace—must be crushed and screened. Coal brought into the steel plant must be converted to coke in order to prepare it properly for the chemical reaction for which it will be used when introduced into the blast furnace. Therefore, most modern steel plants incorporate coking ovens. The coking process introduces additional process steps and consumes some of the energy originally stored in the coal. Hence coking, while necessary in conventional steelmaking, lowers the overall energy efficiency of the steel production process and, from a life cycle assessment standpoint, produces additional environmental impacts.
  2. Chemical reduction of the iron oxides contained in the iron ore to elemental iron. This is most commonly accomplished through the use of a blast furnace, although recent advances in technology, discussed below, have produced alternate methods of reducing the iron oxides to elemental iron which are less energy intensive and less polluting. The elemental iron produced in the blast furnace is commonly called pig iron. Upon leaving the blast furnace, the pig iron typically contains too much carbon to be used in steel products; it is suitable only for use as cast iron.
  3. Controlled oxidation of the pig iron in a steelmaking furnace to lower its carbon content and to produce carbon steel with the desired metallurgical composition. Steelmaking furnaces typically fall into one of three categories—a basic-oxygen furnace (BOF), an open-hearth furnace (OHF), or an electric-arc furnace (EAF). The basic-oxygen furnace is the type most commonly found in large, integrated steel making plants in the U.S. In the steelmaking furnace, molten pig iron is exposed to air in a controlled, basic (i.e. high pH) environment and various trace elements such as chromium, manganese, nickel or molybdenum may be added to the molten metal to produce various steel alloys. Since most steel recovered from demolished buildings and other objects is recycled today, scrap steel is also commonly introduced to the steelmaking furnace, to replace all or part of the pig iron used. The processing of recycled scrap steel, as opposed to virgin pig iron, favors the use of an EAF, for reasons described below.
  4. Various forming and heat treating processes to produce the desired end products. These processes include rolling, casting, forging, drawing, and extruding the molten steel into such products as bars, plates, wire, tubular products like pipe, and other structural shapes.

Of the four steps outlined above, the two which have the largest environmental impacts are the iron reduction step involving the blast furnace and the operation of the steelmaking furnace. These environmental impacts arise from the enormous amounts of energy consumed during the two processes and from the generation of large amounts of toxic exhaust gases, contaminated wastewaters, and solid waste. We will therefore focus on understanding these two processes in more depth, so as to be able to understand ways to minimize the environmental impacts associated with them.

The blast furnace, contrary to what its name suggests, is more than just an oven or furnace for heating the raw materials. It is a chemical reaction vessel in which iron oxide is reduced to elemental iron; the term commonly used for this reduction reaction of iron oxides to iron is smelting. A charge containing iron ore, flux (usually limestone), and fuel (usually coke in modern plants) is introduced at the top of the blast furnace, while air (sometimes enriched with oxygen) is blown into the lower portion of the furnace. The chemical reaction takes place throughout the body of the blast furnace: the material charge moves downward, reacting with the hot combustion gases, rich in carbon monoxide, moving upward. The end products are molten pig iron and slag, which exit the bottom, and flue gases, which exit the top of the furnace. The chemical reactions involved can be summarized as follows:

2 C(s) + O2(g) → 2 CO(g)    (1)

[Coke and oxygen are converted to carbon monoxide in an incomplete combustion reaction]

3 Fe2O3(s) + CO(g) → 2 Fe3O4(s) + CO2(g)    (2)
Fe3O4(s) + CO(g) → 3 FeO(s) + CO2(g)    (3)
FeO(s) + CO(g) → Fe(s) + CO2(g)    (4)

[Multi-step chemical reduction of iron (+3) oxide to elemental iron using carbon monoxide as the reducing agent]

The net reaction is:

Fe2O3 + 3 CO → 2 Fe + 3 CO2    (5)

Careful inspection of the above chemical equations indicates that carbon monoxide is the limiting reagent. Consequently, there needs to be a way for more carbon monoxide to be generated in the body of the reactor (blast furnace) as the material charge moves through in order to keep the reaction going. That is the purpose of the flux (limestone):

CaCO3(s) → CaO(s) + CO2(g)    (6a)
CO2(g) + C(s) ↔ 2 CO(g)    (6b)

In reactions (6a) and (6b), the limestone is decomposed into calcium oxide and carbon dioxide, and the carbon dioxide then reacts with carbon from the coke to generate more carbon monoxide. The calcium oxide generated by the decomposition of the limestone typically reacts with impurities in the iron ore, such as silica, to form various types of mineral slag, which are byproducts of the blast furnace process. The most common slag produced is calcium silicate:

SiO2 + CaO → CaSiO3

Calcium silicate has properties similar to Portland cement and is used as a replacement for a portion of the Portland cement in some concrete mixes. This use provides a natural route for reclaiming and recycling the slag produced in the blast furnace reaction, thereby minimizing the impact of this particular solid waste byproduct on the environment. (The interested reader may wish to read Installment Three of this essay series, “‘The Experience of the German Autobahn’—A Discussion of Sustainable Pavement Technologies,” for a more in-depth description of how the use of slag in concrete mixes for highway construction can minimize the environmental impacts of road construction.)

It is apparent that the blast furnace process is incredibly energy intensive, requiring large amounts of carbon-based fuel and producing significant amounts of greenhouse gas in the form of carbon dioxide. According to Rubel et al. [2], 75% of all energy consumed in an integrated steel plant is consumed in the form of the coke used during the blast furnace process, and roughly 15 gigajoules of energy are expended in the blast furnace process for each ton of steel produced. In 1609, the blast furnace depicted in our painting probably used charcoal, produced by burning wood in an oxygen-starved environment. Coke, derived from coal, is used today. Both charcoal and coke burn at temperatures higher than the materials from which they are derived, and this higher temperature promotes a more efficient reduction reaction. In addition to carbon monoxide and carbon dioxide, the exhaust gases from the blast furnace contain significant amounts of particulate matter, heavy metals, and other pollutants. In the time of van Valckenborch, no one worried about air emissions, but today increasingly stringent environmental regulations require steel producers to employ various pollution abatement technologies, which are expensive and which add complexity to the process.
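
The stoichiometry of net reaction (5) by itself sets a floor under the process’s carbon dioxide output. The short Python check below computes the CO2 released per kilogram of iron reduced; it deliberately ignores fuel combustion and the flux reactions, so actual emissions are higher:

    # CO2 floor set by the reduction chemistry alone, from net reaction (5):
    # Fe2O3 + 3 CO -> 2 Fe + 3 CO2. Fuel combustion and flux are ignored here.
    M_FE = 55.85                  # molar mass of iron, g/mol
    M_CO2 = 12.01 + 2 * 16.00     # molar mass of CO2, g/mol
    co2_per_kg_iron = (3 * M_CO2) / (2 * M_FE)
    print(f"{co2_per_kg_iron:.2f} kg CO2 per kg of iron reduced")  # ~1.18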

The pig iron produced by smelting in the blast furnace typically has around 4% carbon content by weight. The desired carbon content of carbon steel is around 0.5% to 1%, depending on the particular type of steel [1]. The pig iron also contains contaminants such as sulfur, silica, and phosphorus. The steelmaking furnace incorporates a process whereby excess carbon is oxidized and removed, in the form of carbon dioxide, and where other impurities can be reacted and removed as slag. It also provides a convenient point in the process where various metal alloys can be added to produce steels with enhanced chromium, molybdenum, or nickel content (alloy steels). As mentioned above, the three styles of steelmaking furnaces encountered today are (1) the Open Hearth Furnace (OHF) (sometimes called the Siemens process); (2) the Basic Oxygen Furnace (BOF) (a modern descendant of the Bessemer process); and (3) the Electric Arc Furnace (EAF).

In the Open Hearth Furnace, the molten pig iron is introduced into a vessel lined with a basic refractory material, such as magnesite brick, a material capable of withstanding the high temperatures and one that does not introduce acidic byproducts into the process. Gasified fuel and excess air are introduced into the refractory vessel, along with a limestone charge. During the ensuing combustion process, carbon residing in the molten pig iron is oxidized and other trace impurities react with the limestone to produce various types of slag. The Open Hearth process requires a fuel gasifier if coal is to be used as the primary fuel source. A distinguishing feature of the OHF is a series of passageways in the refractory brick that allow the brick to be preheated by hot exhaust gases. This allows the OHF to reach very high temperatures.

In the Basic Oxygen Furnace, the molten pig iron enters a reaction vessel into which pure oxygen is blown through the molten iron mass. A limestone charge is also added. Oxidation of carbon to carbon dioxide occurs, and other impurities are converted to slag in the presence of the limestone. Because pure oxygen is used instead of air, no external fuel source is needed to promote the oxidation (combustion) of carbon to carbon dioxide: the oxidation reaction is exothermic, so the heat generated by the reaction itself is sufficient to sustain the temperature in the reaction vessel. In this way the BOF process differs from the OHF process. Because the BOF does not require an external fuel source or a fuel gasification system, it has largely replaced the OHF in most modern, integrated steel production facilities. However, because the BOF has no external fuel source, it is limited in how much scrap steel it can process unless that scrap is first melted. The BOF process is optimized around the use of molten pig iron generated by the blast furnace; thus, in the modern steel plant, the use of a blast furnace and a BOF are tightly linked.

The third type of steelmaking furnace, the Electric Arc Furnace, uses electrical current introduced into the iron or recycled steel by large electrodes to melt the metal. The metal charge is spiked with burnt lime prior to being placed in the furnace. The lime acts as a flux, promoting the conversion of impurities in the iron or steel into slag, which can subsequently be separated from the steel. Oxygen is blown into the furnace during operation to convert carbon into carbon dioxide, just as in the BOF and OHF processes. Because the EAF is designed to melt its charge before oxidation of excess carbon occurs, it is uniquely capable of handling 100% recycled steel and does not need a supply of pig iron in order to produce steel. And because its energy source is entirely electrical, rather than coal or natural gas, the EAF can in principle be run on electricity generated from nuclear or renewable sources, allowing it to operate with a much smaller carbon footprint.

Recent Technological Developments to Increase the Sustainability of Steel Production:
The steel industry has always known that its processes are energy intensive and, as a result, have significant negative environmental impacts. Consequently, incremental improvements to both the blast furnace and BOF processes have occurred throughout the last century in an effort to recover energy. These efforts were not, until quite recently, directed at reducing the processes’ carbon footprints, but rather at reducing production costs. Nonetheless, a number of energy recovery strategies have been employed, particularly with blast furnace processes, to reduce the cost of their operations (and, incidentally, to also reduce their carbon footprint). One of the most basic strategies has been to use the hot exhaust gases from the blast furnace and waste heat from other places in the steel plant to preheat the air entering the furnace. Recently, a new design called the Top-Gas Recycling Blast Furnace has been pilot tested in the EU. This technology employs carbon capture and storage of exhaust gases from the blast furnace. It is projected to be fully commercialized by 2020 [3].

Since the Clean Air and Clean Water Acts were first passed by the US Congress in the early 1970s, during the administration of President Richard M. Nixon, US steel makers have also had to employ various pollution abatement technologies. These technologies are well documented in Reference [4]. European and other industrialized countries have had similar pollution control regulations in place for steel manufacturers for much of the last half century. However, treating contaminated air and water streams after they have been generated in the steelmaking process is inherently less sustainable than preventing that pollution in the first place. Therefore, the newest and most innovative steel production processes seek to minimize energy use and prevent pollution over the life cycle of the process, rather than treat and remediate pollutants after they are generated.

One such process has been developed by Siemens—a patented process known as the COREX® process. It is a modified blast furnace process, the details of which are proprietary. COREX® eliminates the need for coking the coal prior to the iron reduction step and also eliminates the need to sinter the iron ore prior to reduction. By eliminating the need for coke ovens and sintering plants, overall environmental impacts are reduced on a life cycle assessment basis. Overall energy consumption of the steelmaking process is also reduced, because conventional coking consumes some of the usable chemical energy originally stored in the coal in exchange for providing the higher combustion temperatures needed in a conventional blast furnace. Reference [5] provides additional information on the life cycle assessment performed by Siemens on its COREX® process.

Even more promising than COREX®, from an energy conservation and environmental impacts perspective, are Direct Reduced Iron (DRI) processes [6]. This family of processes allows for the direct reduction of iron ore (in the form of lumps, pellets, or fines) either by a reducing gas (a mixture of hydrogen and carbon monoxide) or by non-coking grades of coal. The process occurs in the solid phase—the iron ore is not melted, as it is in a blast furnace process. Because there is no phase change (i.e., no melting occurs), the process is inherently less energy intensive. Furthermore, the solid-phase product of the DRI process can be fed directly to an EAF, further reducing the need for primary fossil fuel use. A final advantage of DRI is that, because a source of coking-grade coal is not required, it can be conducted in geographical locations where only low-grade coal or other fuels are available. The ability to make steel in the same region where it will be used, by combining DRI and EAF technologies, reduces transportation costs and thereby the embodied energy and environmental impact of the product over its life cycle. Reference [7] provides a detailed Life Cycle Assessment (LCA) of the material flows involved in the conventional steel industry. It shows that the traditional methods of making steel, which involve transporting raw materials and finished product across large geographical distances, are unsustainable, and that the steel industry must move to a more localized production model.

Such localized steel production has taken the form of “mini-mills”. Mini-mills require far less capital investment, and unlike a blast furnace, which cannot be shut down for years at a time (because the start-up energy demand is so high), mini-mills can be operated on demand. DRI is somewhat less rich in elemental iron (88%, as opposed to 95% iron content from blast furnace processes), and it contains more silica impurities than pig iron from a blast furnace because blast furnace slag is not removed. But these disadvantages are largely outweighed by other factors and can be compensated for by various pre- and post-treatments. One economic driver for the increased use of DRI processes is the growth of the scrap steel market. Scrap steel is now highly sought after for use in EAF-based mini-mills, and its price has increased significantly over the last several decades. DRI is therefore desirable as an alternate feedstock for these mini-mills.
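
The purity difference shows up mainly as extra feed mass (and slag burden) per tonne of contained iron, as the short Python illustration below shows, using only the percentages quoted above:

    # Feed mass needed per tonne of contained iron, from the purities quoted
    # above: DRI at 88% iron vs. blast furnace pig iron at 95% iron.
    dri_feed_t = 1.0 / 0.88      # ~1.14 t of DRI per tonne of contained iron
    pig_feed_t = 1.0 / 0.95      # ~1.05 t of pig iron per tonne of contained iron
    print(f"DRI feed: {dri_feed_t:.2f} t; pig iron feed: {pig_feed_t:.2f} t")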

Final Thoughts– The Connection between Economic Competitiveness and Sustainable Production Methods:
Over the last 30 years, the amount of energy required to produce a ton of steel has been reduced by 50%, and today over 70% of scrap steel is recycled [8]. However, until quite recently, these improvements in energy efficiency and recycling were not due to environmental consciousness on the part of steel companies so much as to a desire to reduce production costs. They were largely market-driven. This illustrates a key point regarding sustainability: it often can be accomplished in parallel with cost reductions and increased profitability, and not in opposition to them. An older and largely obsolete view of environmentalism is that it is always an added cost, over and above other production costs. This is clearly not the case in the steel industry, which has embraced sustainability as a means to remain viable into the 21st century. Reference [2] provides a detailed analysis of how sustainable practices can be used to improve profitability within the steel industry over the coming decades. Industry trade groups such as the World Steel Association (www.worldsteel.com) have devoted considerable resources to developing more efficient and sustainable models for steel production and to sharing them with their member companies. Just as the blast furnace technology represented in van Valckenborch’s 1609 painting helped to lead humanity into the modern industrial age, efforts at increased sustainability by the modern steel industry can help forge an integrated, green economy of the future.

References:

  1. The Making, Shaping and Treating of Steel, 10th Edition, United States Steel Corporation, edited by William T. Lankford, Jr., Norman L. Samways, Robert F. Craven, and Harold E McGannon, 1985.
  2. “Sustainable Steelmaking—Meeting Today’s Challenges, Forging Tomorrow’s Solutions”, Rubel, H.; Wortler, M.; Schuler, F.; and Micha, R.; The Boston Consulting Group, July, 2009.
  3. http://www.ulcos.org/en/research/blast_furnace.php, information provided by the Ultra Low CO2 Steelmaking initiative.
  4. Pollution Prevention Technology Handbook, edited by Robert Noyes; p. 168-192, Noyes Publications, Park Ridge, New Jersey, 1993.
  5. Siemens: A better ecobalance in steel production – Pig iron production with COREX® and FINEX®
  6. “The Increasing Role of Direct Reduced Iron in Global Steelmaking,” Grobler, F., and Minnitt, R.C.A., The Journal of the South African Institute of Mining and Metallurgy, March/April, 1999, p. 111-116.
  7. “Iron Ore and Steel Production Trends and Material Flows in the World: Is this Really Sustainable?”, Yellishetty, M., Ranjith, P.G., and Tharumarajah, A., Resources, Conservation and Recycling, Volume 54 (2010), p. 1084-1094.
  8. World Steel Association: Sustainable Steel: At the core of a green economy

Coming in January 2013 is an essay based on Johvi Schulze-Görlitz’s 1951 oil-on-canvas painting, “Trümmerfrauen” (Rubble Women). The essay will look at the evolution of materials recycling in the construction industry and the growth of the green building movement.

child_labor_in_the_dye_works.jpg

By Deborah Jackman, PhD, PE, LEED AP™ - originally posted on 04/11/2012

The Painting:
‘Child Labor in the Dye Works’ is attributed to the German artist Heinrich Kley, who lived from 1863 to 1945. The exact date of the painting is unknown, but based upon the style of dress of the workers and the equipment used, the painting was likely created sometime between the mid-1890s and the start of World War I (1914). It depicts an industrial yarn dyeing operation, presumably in Germany. The portion of the scene that draws the eye is the brilliantly colored yellow yarn being transferred from the dyeing process by the child worker. The use of child labor is, of course, forbidden in developed countries today, but persists—contrary to international pressures to curtail it—in some developing nations. However, the primary purpose of this article is not to discuss child labor in the developing world. Instead, we will use the image presented in the painting as a springboard to explore the broader environmental implications of textile dyeing operations. Before discussing these environmental impacts and what can be done to minimize them, it is helpful to understand in overview the history of textile dyeing.

A Brief History of Textile Dyeing:
Dyeing of fibers to produce colored cloth is among the oldest of human activities. Until the first aniline dye, mauve, was synthesized by William Henry Perkin in 1856, all dyes were natural in origin—derived from plant, animal, or mineral sources. The initial driving force behind the development of synthetic dyes was the desire for a wider variety of more vibrant colors than were obtainable from natural sources. Then, once synthetic fibers such as nylon began to be introduced in the mid-1940s, a secondary driver for the continued use of aniline dyes was the fact that synthetic fibers took up synthetic dyes more readily than natural dyes. Following the introduction of synthetic mauve, other aniline dye colors were gradually introduced, such that by 1900 a full range of colors was available in the form of synthetic aniline dyes [1]. Aniline dyeing of wool had the additional advantage of not requiring a mordanting step, as the use of natural dyes did. (Mordanting is a chemical pre-treatment of the fiber, separate from the dyeing step itself, required to ensure that the pigment takes to the fiber and remains color-fast.) The upshot of all of these factors is that by the mid-20th century, aniline dyes (and their close chemical derivatives, azo dyes) had supplanted natural dyes in nearly all commercial textile operations world-wide.

The timeline of the technical development of aniline dyes discussed above might suggest that the colored fibers depicted in our painting were the product of aniline yellow, but this is likely not the case. Rather, the yellow yarn depicted in the painting is likely wool dyed with weld—a natural dye derived from the weld plant (Reseda luteola) [2]. The reasons for this are economic, not technical. Even though the technology for using aniline dyes was probably known at the time this painting was created (estimated to be the mid-1890s to 1914), the wool textile industry in Germany was depressed until 1904 due to high tariffs imposed by countries to which Germany would have exported its colored fibers. Thus, textile manufacturers did not invest significant money in updating operations until after at least 1904, because there was little economic incentive for them to do so [3]. To further the irony, during the late 19th century Germany was the center of R&D work on organic chemical synthesis, and German companies held most of the chemical patents for synthetic dyes. In 1913, 80% of the synthetic dyes exported to the rest of Europe and to America came from Germany, and yet German textile manufacturers lagged behind other nations in adopting synthetic dyes in their own operations. During World War I many German chemical plants were destroyed for fear that they were manufacturing mustard gas and other chemical warfare agents, and after the War many of the German dye patents were seized. The nexus of synthetic textile dye manufacturing then shifted to the U.S. [4].

Environmental Impacts of Textile Dyeing:
The use of weld to dye wool and other natural fibers, as depicted in the painting, has relatively low environmental impacts compared to synthetic dye operations, but it is not totally benign. The spun fibers or woven, un-dyed cloth are placed in hot (200°F) water to which enough alum (sodium aluminum sulfate, NaAl(SO4)2) has been added to create a 1:4 alum-to-fiber ratio by weight. The fibers are steeped in the hot solution for 12 hours. This is the mordanting step, which prepares the fibers to accept the dye. Once mordanted, the fibers are transferred to another vat containing a hot (150°F to 200°F) aqueous weld dye solution. They are kept in the dye solution until the desired shade of yellow is obtained. The fibers are then removed, washed with a pH-neutral soap, and allowed to dry. The aqueous weld dye solution is prepared by crushing the stems, leaves, and flowers of the weld plant and placing them in a vat covered by water. The mixture is brought to the boiling point and allowed to steep for 30 minutes. The colored solution is then strained to remove the solids [5]. Given that both the weld plant and the alum have low toxicity, they do not present a hazard from a material or human health standpoint. But because of the extreme color intensity of the aqueous dye solution, it does need to be decolorized before being discharged as an industrial wastewater in a manufacturing setting. Such decolorization can be accomplished by processing the solution through an activated carbon filter or by using ultrafiltration or ion exchange technologies. From a more generalized sustainability standpoint, the weld dyeing process is quite energy intensive (due to the need to heat the solutions and to maintain their temperature for relatively long periods) and is also water intensive (although a given batch of weld dye can be reused for multiple batches of fibers by re-fortifying the dye bath with more weld). Although we are focusing on the use of weld dye to produce yellow fibers in reference to our subject painting, other natural dyes used to produce other colors generally exhibit similar environmental impacts.
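
Since the mordanting recipe reduces to a simple proportion, it can be captured in a few lines of Python; the helper function below is our own illustration, not a standard tool of the dye trade:

    # Mordant bath sizing for weld dyeing: 1 part alum to 4 parts fiber by
    # weight, steeped in hot (~200°F) water for ~12 hours, per the text above.
    def alum_required_g(fiber_weight_g, alum_to_fiber_ratio=0.25):
        """Grams of alum needed to mordant a given dry weight of fiber."""
        return fiber_weight_g * alum_to_fiber_ratio

    print(f"Alum for 500 g of wool: {alum_required_g(500):.0f} g")  # 125 g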

Aniline yellow and other aniline and azoic synthetic dyes exhibit more severe environmental impacts than natural dyes. Most synthetic dyes are delivered to textiles by creating an aqueous (i.e., water-based) dye solution, immersing the fibers in the dye solution at elevated temperatures (100°C to 130°C) for a specified period, and then removing the fibers or woven cloth for further finishing, including drying. Most synthetic dyes, including aniline and azo dyes, do not require a mordanting step, which is one of their distinct advantages. Various pieces of specialized machinery have been developed by the modern textile industry to optimize and mechanize such dyeing operations. Reference [6] provides a detailed technical review of modern textile dyeing processes and related equipment. However, regardless of the specifics of a particular technique, the general process involves the use of considerable amounts of energy and water, just as the natural dyeing process does. In this regard, there is little difference in the environmental impacts or sustainability of natural versus synthetic dyes.

The differential environmental impacts of natural versus synthetic aniline/azo dyes occur at two other points in the life cycle of the dye: its manufacture and the disposal of the resulting dye-laden wastewaters.  At both of these points, aniline/azo dyes have far more severe impacts than natural dyes.

The manufacture of azoic dyes involves a complex, multi-step process of organic chemical synthesis.  The root source of the chemicals used to synthesize azoic dyes is petroleum (i.e., crude oil).  The precursor chemical to yellow azo dyes (and to other shades of azo dyes) is aniline, which is converted into the dye through diazotization and coupling reactions.  Aniline itself is an aromatic amine (i.e., based upon the benzene ring structure).  It is acutely toxic to humans and is also a likely carcinogen, although the evidence of its carcinogenicity is somewhat contradictory.  Various azo dyes have been banned by the European Union due to concerns about their carcinogenicity [7], [8], [9].  Every step in the chemical synthesis of azo dyes, from the extraction of the crude oil to the final synthesis from aniline, has significant negative environmental impacts.
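For the interested reader, the textbook route from aniline to aniline yellow (p-aminoazobenzene) proceeds in two steps, diazotization followed by azo coupling.  The equations below show the standard laboratory chemistry, not any particular manufacturer's process:

C6H5NH2 + HNO2 + HCl → [C6H5N2]+Cl− + 2 H2O      (diazotization, carried out at 0–5 °C)

[C6H5N2]+Cl− + C6H5NH2 → C6H5–N=N–C6H4–NH2 + HCl      (coupling, giving the azo dye)

The –N=N– azo linkage formed in the second step is the chromophore that gives azo dyes their color; it is also the bond whose reductive cleavage in the environment can regenerate aromatic amines such as aniline, a point that becomes important in the wastewater discussion below.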

Large-scale agricultural production of weld and other dye plants would also consume petroleum, to fuel farm machinery and to transport the harvested plants.  But because far fewer steps and chemical reactions lie between planting the weld and its final use, and because the intermediates in those processes are not aromatic chemicals such as aniline and benzene, the production of natural dyes is arguably far more sustainable than the production of synthetic dyes.  The weld plant itself is a hardy perennial, native to temperate regions of Eurasia.  It is drought tolerant, thrives in poor soils, and is not subject to insect infestations.  Cultivation of weld for use in dyes is therefore relatively low impact, requiring no significant amounts of fertilizers or pesticides.

Once synthetic dyes are spent, treating the resulting dye-laden wastewaters is expensive and complex.  Singh and Arora [10] provide a critical review of the treatment technologies presently employed for wastewaters containing azo dyes.  A variety of physical, chemical, and biological strategies are used singly and in combination to remove azo dyes before these industrial wastewaters are discharged.  Such removal is necessary because azo dyes can degrade biologically in the natural environment into by-products such as aniline and related toxic aromatic chemicals.  Treatment strategies including carbon adsorption; coagulation, flocculation, and settling; filtration; membrane processes; ion exchange; direct chemical oxidation; UV irradiation; and aerobic and anaerobic biological treatment are used to capture or destroy the azo dye chemicals before they reach the environment.  Where the azo compound is merely transferred to another medium rather than destroyed (e.g., by coagulation/flocculation/filtration), the hazard is not eliminated and must still be dealt with through yet another series of steps, such as incineration or landfilling.  In contrast, wastewaters containing natural dyestuffs or alum mordanting solutions do not require such extreme treatment: simple decolorization by carbon adsorption or UV irradiation is typically sufficient, because the dyes are not toxic and treatment is done only to meet water quality standards for color and turbidity.  Again, in terms of what is required to treat the wastes generated in textile dyeing operations, natural dyes are far more sustainable; potential pollution is prevented rather than remediated.
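To make the carbon adsorption option concrete, the sketch below estimates a single-stage batch carbon dose from the Freundlich isotherm, q = K·C^(1/n), a standard sizing approach.  The isotherm constants and concentrations here are placeholders invented for illustration; real values must be fitted from laboratory tests on the specific dye and carbon.

```python
def carbon_dose_g_per_L(c0, c_target, K=20.0, n=2.5):
    """Estimate the activated-carbon dose needed to decolorize a dye
    wastewater in a single-stage batch contactor.

    Mass balance: dose * q_e = c0 - c_target, where the equilibrium
    loading q_e (mg dye per g carbon) follows the Freundlich isotherm
    q_e = K * C**(1/n), evaluated at the target residual concentration.

    c0, c_target : influent and target dye concentrations (mg/L)
    K, n         : Freundlich constants (placeholder values for
                   illustration; fit them from lab isotherm tests)
    Returns the dose in grams of carbon per liter of wastewater.
    """
    q_e = K * c_target ** (1.0 / n)   # mg dye adsorbed per g carbon
    return (c0 - c_target) / q_e      # g carbon per L

# Example: reduce 100 mg/L of residual dye to 5 mg/L
print(f"{carbon_dose_g_per_L(100.0, 5.0):.2f} g carbon per L")
```

With these placeholder constants the estimate comes to roughly 2.5 g of carbon per liter, which illustrates why adsorption, though simple, is often paired with other processes for heavily colored wastewaters.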

Future Developments to Promote Sustainability:
As we have discussed, both natural and synthetic textile dyeing operations have adverse environmental impacts, although those involving natural dyestuffs arguably have fewer.  Nevertheless, because even natural dyeing operations use large amounts of energy and water, the textile industry continues to seek still more sustainable methods of coloring fibers.

Thiry [11] discusses several new, more sustainable strategies for coloring fibers and fabrics that are likely to replace, at least in part, traditional solution-based dyeing operations in the near future.  Among the strategies being developed are 1) ultra-low liquor ratio dyeing; 2) selective plant breeding to produce naturally colored cotton fibers; 3) digital printing of fabrics; 4) cationic cotton dyeing; and 5) waterless dyeing using supercritical CO2.

Ultra-low liquor ratio dyeing reduces both water and energy usage by cutting the weight ratio of dye solution to fabric to levels as low as 3:1 for some fabrics.  The less water used, the less energy is required to heat the bath and the smaller the volume of wastewater ultimately generated (the sketch following this discussion quantifies the effect).

Cotton is the most widely used natural fiber in the world, largely because of its use in denim fabrics.  Before relatively recent efforts to breed color out of cotton in order to raise agricultural yields, cotton grew with various naturally occurring pigmentations such as red, green, and brown.  Growers are now working to breed these natural color variations back into cotton, producing fibers that require no dyeing whatsoever.

Digital printing of fabrics, using technologies similar to the inkjet printing used with paper, consumes no water at all.  However, producing the inkjet cartridges and the chemicals they contain carries its own environmental impacts.  (As with any discussion of sustainability, one must consider not just the impacts of the immediate process, but those of the entire life cycle associated with that process.)

Cationic cotton dyeing involves treating cotton fabric so that it carries a positive (cationic) charge.  The charged cotton is then immersed in a dye bath containing a reactive dye that attaches itself to the charged sites on the fabric.  Using the correct chemical ratio of dye to fabric leaves the dye bath water free of chemicals and color at the end of the batch process, so that water can be reused for subsequent batches of fabric, thereby conserving water.  Cationic dyeing is also conducted at room temperature, making it less energy intensive than traditional chemical dyeing operations.  Details on the production of the cationic pretreatment chemicals were unavailable, so no conclusion can be drawn about the broader sustainability of the process over its entire life cycle.

Finally, some textile manufacturers are attempting to develop a dye process using supercritical CO2.  The carbon dioxide is held at high pressure above its critical point (about 31 °C and 73.8 bar), where it behaves as a dense fluid with the solvent power of a liquid and the diffusivity of a gas.  The pigments are dissolved in the supercritical CO2 and the fabric is introduced.  When the pressure is then released, the CO2 flashes to vapor, leaving a completely dry, dyed fabric.  While technically feasible, the process has not been scaled up for full production because it is not yet economically competitive with conventional dyeing processes.
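The leverage that the liquor ratio exerts on both water and energy use is easy to quantify.  The Python sketch below is a back-of-the-envelope comparison: the 10:1 baseline ratio, 20 °C starting temperature, and 100 °C dyeing temperature are assumptions chosen for illustration, not figures from [11]; only the 3:1 ultra-low ratio comes from the discussion above.

```python
C_P_WATER = 4.19  # specific heat of water, kJ/(kg*K)

def bath_per_kg_fabric(liquor_ratio, t_start=20.0, t_dye=100.0):
    """Water mass and bath-heating energy per kg of fabric for one dye bath.

    liquor_ratio : bath-to-fabric weight ratio (e.g., 10 means 10:1)
    Returns (water_kg, energy_kJ); ignores vessel losses and reheats.
    """
    water_kg = liquor_ratio * 1.0                     # per kg of fabric
    energy_kJ = water_kg * C_P_WATER * (t_dye - t_start)
    return water_kg, energy_kJ

for ratio in (10, 3):   # conventional vs. ultra-low liquor ratio
    water, energy = bath_per_kg_fabric(ratio)
    print(f"{ratio}:1 -> {water:.0f} kg water, {energy / 1000:.1f} MJ per kg fabric")
```

Under these assumptions, dropping from 10:1 to 3:1 cuts both the water used and the bath-heating energy by roughly 70 percent per kilogram of fabric, which is why liquor ratio is such a powerful sustainability lever.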

In addition to the innovative strategies discussed above, the textile industry is also experiencing a resurgence of interest in natural dyes and their associated processes as a way to mitigate environmental impacts.  Allegro Natural Dyes, a company located in Longmont, Colorado, was founded as an industry start-up in 1995 with the intent of re-introducing natural dyes into the textile industry [12].  The company has since obtained a number of patents on proprietary processes for producing natural dyes.  Allegro is an interesting case because it supports the premise that natural dyes can be economically competitive with synthetic dye-based practices in today's modern textile industry.  In some ways, then, the textile industry has come full circle.

Today there clearly exists an opportunity to combine the best parts of traditional and modern dyeing practices to produce a more sustainable textile industry overall.  The use of natural dyes, combined with modern water treatment technologies such as ion exchange and membrane processes and with various energy recovery techniques, could yield industrial processes that capitalize on the low environmental impacts of natural dye production while mitigating the water and energy intensiveness of the dyeing step.  The result would be a process that is sustainable over the entire life cycle of the dyeing operation: an exciting pairing of new technology with age-old traditional practice.

References:

  1. Jenkins, David, ed.; The Cambridge History of Western Textiles; Cambridge University Press, 2003; p. 764.
  2. "Weld"; New International Encyclopedia; Gilman, Daniel Coit, Peck, Harry Thurston, and Colby, Frank Moore, eds.; Dodd, Mead & Company, New York, 1905. [Wikisource].
  3. Jenkins, op. cit., p. 782.
  4. Jenkins, op. cit., pp. 1082-1084.
  5. Liles, J.N.; The Art and Craft of Natural Dyeing: Traditional Recipes for Modern Use; Knoxville: University of Tennessee Press, 1990.
  6. Perkins, Warren S.; "A Review of Textile Dyeing Processes"; Textile Chemist and Colorist, Vol. 23, No. 9, August 1991.
  7. "Aniline"; http://en.wikipedia.org/wiki/Aniline ; accessed January 21, 2012.
  8. "Azo compound"; http://en.wikipedia.org/wiki/Azo_dye ; accessed February 16, 2012.
  9. Slater, Keith; Environmental Impact of Textiles; Woodhead Publishing, Ltd.; 2003; pp. 81-82.
  10. Singh, K., and Arora, S.; "Removal of Synthetic Textile Dyes from Wastewaters: A Critical Review on Present Treatment Technologies"; Critical Reviews in Environmental Science and Technology, 41: 807-878, 2011.
  11. Thiry, Maria C.; "Color it Greener"; AATCC Review, Vol. 10, Issue 3, pp. 32-39, 2010.
  12. "Natural Dye Startup"; Chemical Week, Vol. 157, Issue 7, August 23, 1995.

By Deborah Jackman, PhD, PE, LEED AP™ - originally posted on 03/23/2012

This article represents the first installment in a series of technical essays on trends in sustainability occurring across a wide spectrum of businesses and industries.  As a springboard for discussion, and to provide historical context, each article will feature one or more paintings from the Man at Work Collection.

The Man at Work Collection held at the Grohmann Museum, located at the Milwaukee School of Engineering (MSOE), provides a detailed visual record of how technology has evolved since the 16th century.  With the current emphasis on increasing the sustainability of 21st century manufacturing, agriculture, transportation, and construction, and on reducing the carbon footprint of human activities globally, it is instructive and informative to study the technological practices of previous generations.  An understanding of how past generations used technology can be used as a springboard for analyzing and discussing how we can make our 21st century technologies more sustainable.

Each article will focus on one painting from the Man at Work Collection. The painting will be used as a starting point for discussing production practices within the featured industry.  In some cases, the historical practices revealed by the paintings will suggest ways that we can make our modern practices more environmentally sound, i.e., ways we can learn from the past to ensure a more sustainable future. In other cases, they may provide lessons in what should be avoided. In any case, the painting will provide a rich forum for discussing some aspect of sustainability within targeted industries.  The selection of paintings and the industries represented in those paintings will be somewhat random and will be based on my personal areas of technical interest. During the course of the series, we will cover an eclectic mix of industries and subject matter. 

What constitutes a sustainable practice must be defined in order to focus the discussion.  While there are many definitions of sustainability, all are based on certain common underlying principles.  These principles seek to maximize performance in five key areas:  (1) energy efficiency; (2) water conservation and reuse strategies; (3) materials and resources efficiency; (4) environmental health and safety for workers/inhabitants; and (5) site/activity location selection to mitigate potential ecological impacts.  At least one of these criteria will be examined and discussed in each of the paintings studied.

Each painting presents a complex image, one that can be taken in any number of directions relative to the sustainability issues discussed.  These essays are not intended to be exhaustive treatments of every aspect of sustainability within a given industry.  Instead, references will be provided so that the reader can delve more deeply into areas of interest.  I hope this series of essays will be as interesting and informative for you, the reader, as it promises to be for me in researching and writing it.  Articles will be published approximately every three months.  I welcome your comments.