The Potomac Highlands Watershed School 

Stream Cleaner Environmental Forum

Native Guides

Michael Schwartz

Environmental Scientist, The Conservation Fund – Freshwater Institute

Revised January 26, 2008

          To create a picture of how land use affects water quality, we need to understand as much as possible about the processes that cause nonpoint source pollution to enter our streams.  In drawing this picture we need to comprehend that a watershed is much more than just a stream.  Because the entire landscape is connected hydrologically, what happens on the land determines the quality of the stream.  Consequently, a stream is an expression of all the various hydrologic processes that go on within a watershed.  Indeed, it could even be said that the source of a stream is in the air, since that is the part of the water cycle where our streams begin.  These hydrologic processes determine where, and how much, pollution enters our streams.  The inherent capacity of a watershed to assimilate nutrients is influenced by climate, geology, soils, vegetation, and topography.  When humans enter a watershed they typically pollute the air, import nutrients, and pave the earth, all of which can cause more nutrients to enter the streams than the watershed can assimilate naturally.  Yes, even the pollution we put into the air can fall back down onto the earth AND pollute our water.  Thus, nonpoint source nutrient pollution occurs when human land use exceeds the natural capacity of a watershed to assimilate these nutrients.
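
          To make that last sentence concrete, here is a minimal sketch in Python of the balance it describes.  Every number below is invented for illustration; the essay gives none.

```python
# A minimal sketch of the idea above, with hypothetical numbers:
# a watershed has some natural capacity to assimilate nutrients,
# and nonpoint source pollution is the load delivered beyond it.

nitrogen_inputs_lbs_per_yr = {
    "atmospheric deposition": 12_000,  # air pollution falling back to earth
    "fertilizer import":      30_000,  # nutrients brought into the watershed
    "paved-surface runoff":    8_000,  # less infiltration, more runoff
}

# Capacity set by climate, geology, soils, vegetation, and topography.
assimilation_capacity_lbs_per_yr = 35_000

total_load = sum(nitrogen_inputs_lbs_per_yr.values())
excess = max(0, total_load - assimilation_capacity_lbs_per_yr)

print(f"Total nitrogen load: {total_load:,} lbs/yr")
print(f"Nonpoint source pollution reaching streams: {excess:,} lbs/yr")
```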

          Partners in the restoration of the Chesapeake Bay seek to reduce anthropogenic inputs to the Bay enough that it can survive and thrive.  To gauge how we are doing in this effort, we use the Chesapeake Bay Watershed Model.  The Model incorporates data on land use, climate, and other watershed characteristics to calculate the amount of nitrogen, phosphorus, and sediment that enters every waterway in the Chesapeake Bay watershed.  As you may already know, the Chesapeake Bay watershed covers about 64,000 square miles.  And because the Model must also account for air pollution, it incorporates an airshed that is almost twice as big as the watershed.  To model a watershed this size with the resources at hand, the Model must be complex enough to provide good data but simple enough that it doesn’t take months to run all the calculations.  The Model also needs good data in to give good data out.  One of the most important inputs the Model requires is data on Best Management Practices (BMPs).  To get credit for reducing pollution, the Bay watershed states must submit data on BMPs every year to the Chesapeake Bay Program, which then enters the data into the Watershed Model.  No cheating allowed!  This is very challenging because it requires quite a few assumptions: for example, that BMPs previously entered are still in place, and that BMPs already counted in earlier years are not counted again.
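
          The actual Chesapeake Bay Watershed Model is far more elaborate than anything shown here, but a toy "export coefficient" sketch in Python can show the kind of bookkeeping such a model performs: loads computed from land use, then reduced by reported BMPs.  Every acreage, export rate, and BMP efficiency below is invented for illustration.

```python
# A toy export-coefficient loading model, NOT the Chesapeake Bay
# Watershed Model: load = sum over land uses of (acres x export rate),
# reduced by BMPs.  All numbers are hypothetical.

land_use_acres = {"cropland": 5_000, "pasture": 3_000,
                  "urban": 1_500, "forest": 10_000}

# Hypothetical nitrogen export coefficients (lbs/acre/yr).
nitrogen_export = {"cropland": 15.0, "pasture": 8.0,
                   "urban": 10.0, "forest": 2.0}

# Hypothetical BMPs: (land use treated, acres treated, fraction of N removed).
# Note the assumptions the paragraph mentions: treated acres must still be
# in place, and must not double-count acres reported in earlier years.
bmps = [
    ("cropland", 1_000, 0.40),  # e.g., cover crops on 1,000 acres
    ("pasture",    500, 0.55),  # e.g., stream fencing on 500 acres
]

def annual_nitrogen_load(acres, export, practices):
    load = sum(acres[lu] * export[lu] for lu in acres)
    for land_use, treated_acres, efficiency in practices:
        load -= treated_acres * export[land_use] * efficiency
    return load

base = annual_nitrogen_load(land_use_acres, nitrogen_export, [])
with_bmps = annual_nitrogen_load(land_use_acres, nitrogen_export, bmps)
print(f"Load with no BMPs: {base:,.0f} lbs/yr")
print(f"Load with BMPs:    {with_bmps:,.0f} lbs/yr")
```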

          The biggest challenge facing the Bay partners is getting the most pollutant reduction for the least money.  Accomplishing this requires resource managers to choose both where and what.  BMPs must be placed where they will achieve the most pollution reduction for the least cost; it doesn’t make much sense to plant a forested buffer strip at the edge of a natural meadow when there is a cow pasture just downstream.  And since different land uses produce different types of pollutants and different pollutant loads, we must also know which BMP will provide the most pollution reduction for the least cost.  Prioritizing BMPs can be aided by distributed water quality models, which help us understand local relationships between land use and runoff processes.  The Chesapeake Bay Watershed Model is a lumped water quality model, meaning it averages conditions over large areas; lumped models work best at large scales like the Chesapeake Bay watershed.  Distributed water quality models, which divide the landscape into many small units that are each simulated separately, provide better results than lumped models, but their computational intensiveness requires that they be used at smaller scales.
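
          One simple way to think about the "most reduction for the least money" problem is to rank candidate BMPs by pounds of pollutant removed per dollar and fund them until the budget runs out.  The sketch below does exactly that; the candidate practices, removal rates, costs, and budget are all hypothetical.

```python
# A sketch of prioritizing BMPs by pollution reduction per dollar.
# Candidate practices and all numbers are hypothetical.

candidates = [
    # (name, lbs of nitrogen removed per year, annual cost in dollars)
    ("Fence cows out of the pasture stream",  2_200, 4_000),
    ("Forested buffer on cropland edge",      1_800, 6_000),
    ("Urban stormwater retrofit",               900, 9_000),
    ("Buffer strip beside a natural meadow",    150, 5_000),  # little to treat
]

budget = 10_000.0

# Rank by cost-effectiveness: most lbs removed per dollar first.
ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)

spent, removed = 0.0, 0.0
for name, lbs, cost in ranked:
    if spent + cost <= budget:
        spent += cost
        removed += lbs
        print(f"Fund: {name} ({lbs:,} lbs/yr at ${cost:,})")

print(f"Total: {removed:,.0f} lbs/yr removed for ${spent:,.0f}")
```

          A greedy ranking like this is only a heuristic: it ignores where each practice sits in the watershed and how practices interact, which is exactly the local detail that distributed water quality models are used to supply.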