High-Resolution Surface Observations for Climate Monitoring

Climate monitoring is a foundational component of understanding climate variability across the globe and regionally. In the United States and other countries, surface measurements of temperature and precipitation have been taken routinely by human observers (longest records) and by automated stations overseen by the federal government. Budget plateaus or reductions have resulted in the elimination of many observing stations as well as delays in adopting new measurement technologies or techniques. The future of climate monitoring may look bleak at a time when knowledge of our climate record is critical for water resource management, military and emergency preparedness, infrastructure planning, agricultural production, tourism, and science education.


Introduction
Fortunately, innovation has accelerated in the academic and private sectors, leading to new climate monitoring infrastructure that is revitalizing how the nation might address these growing challenges. The establishment of regional "mesonets" -- surface observing networks that measure atmospheric and soil variables at least hourly at spatial resolutions of 10-50 km -- has not only been successful in providing meteorologists with vital, real-time information for weather forecasting but has helped to document previously unknown or seemingly rare regional climate phenomena. These networks also provide high-resolution data for increased understanding of climate variability in both space and time. Data from high-quality, real-time mesonets now are used extensively by researchers, educators, and practitioners. It appears that these networks will become a vital component of our long-term monitoring capacity in the United States. This chapter will discuss the current status of monitoring the climate of the United States using surface observing stations, the evolution of regional mesonets, examples of new knowledge generated about regional climate variability as a result of these mesonets, and what the future may hold for high-resolution climate monitoring through a "network of networks." The proposed "network of networks" brings its own challenges to monitoring the climate, as many advocates of high-resolution, real-time monitoring are interested in observations for weather forecasting only. Hence, the required standards for obtaining climate-quality data may be considered too costly for state and private network operators. This chapter will provide the rationale for pursuing this course nonetheless, as well as for continuing our federal observation program.

Brief history of surface climate observations
Since Wren's invention of the tipping bucket rain gauge in 1662 and Fahrenheit's invention of the mercury thermometer in 1714, systematic measurements of the weather have been taken at individual locations across various countries. In Europe, for example, monthly precipitation observations recorded at the Paris Observatory (France) began in 1688 [1] and monthly mean temperature records for central England have been constructed from several locations and time series since 1698 [2]. In the United States, instrumented observations began in Cambridge, Massachusetts; Boston, Massachusetts; and Philadelphia, Pennsylvania in the early 1700s [3]. These and other weather observations required meticulous and disciplined manual measurements and recording, typically conducted by scientists at national institutions or observatories.
At the First International Meteorological Congress in Vienna, Austria, during September 1873, discussions by the existing national meteorological services firmly instituted the development of an international (albeit spatially inhomogeneous) network of standardized weather observations [4]. Technical work at this Congress included the standardization of the time of observation, observational methods and units, and instrument testing and calibration. In addition, the participating countries agreed to share their data with one another in a timely fashion (i.e., via telegraph). With standardization and documentation, a true surface climate observation network could be advanced worldwide, and national meteorological services became increasingly important for strengthening their nation's observational capabilities.
A century would pass, however, before automation of surface weather measurements became inexpensive enough for deployment in most nations. Until then, daily temperature and precipitation --the heart of our climate record --were measured and recorded manually by both experienced observers and trained volunteers. Experienced observers at U.S. Weather Bureau/National Weather Service offices reported an array of weather variables every hour, including atmospheric pressure, temperature, wind speed and direction, humidity, precipitation type and amount, cloud cover, and visibility. In the United States, because of its vast size and limited funding, an extensive network of volunteer "cooperative observers" also was needed to capture the spatial and temporal variability of the nation's climate [3]. The Organic Act of 1890, which also charged the Weather Bureau with recording meteorological observations, created the U.S. Cooperative Observer Program, thus forming the backbone of the modern climate record of the United States. At its peak in the late 1950s, this network had about 14,000 observers nationwide. Today's Cooperative Observer Program has a two-fold mission: (1) to provide meteorological observations necessary to define the climate of the United States and to document changes in the climate; and (2) to provide meteorological observations in near real-time for forecast, warning, and other public service operations of the National Weather Service (formerly the Weather Bureau) [5]. Cooperative observer data usually include daily maximum and minimum temperatures, snowfall, and daily precipitation totals.
During the period from the international recognition of a need for standardization and data sharing until the advent of reliable and affordable automated weather stations, a distinction between synoptic weather observations and climatological measurements became apparent and, in some nations, ingrained in the culture of the government. As noted by Eden (2009) for this division within the United Kingdom:

By 1880, the contrast between synoptic and climatological observations, which first manifested itself some three decades earlier, had become much more clear-cut. Synoptic observations of pressure, temperature, wind, sky state, present weather, and visibility were taken at specific times of the day and immediately transmitted by electric telegraph to central offices where they were plotted on maps which then had isobars and other features drawn on them. Climatological observations were generally taken once per day, normally at 0900, and included non-synoptic elements such as maximum and minimum temperature and rainfall for the preceding 24 hours, run of the wind, state of the ground, soil temperatures, and sunshine duration (starting in 1880 at a selection of sites); they often included a manuscript weather diary as well. They were tabulated and sent by post to a central office either weekly or monthly. [6]

Thus, the advent of a national Automated Surface Observing System (ASOS) by the U.S. National Weather Service, as part of the organization's $4.5 billion modernization from 1989 to 2000 [7], was seen by some as advancing weather (synoptic) observations to the detriment of the climate record. Part of the concern within the climatological community was the deployment of ASOS and other automated weather stations at airports, where the observations may not be representative of the region's climate. Another concern was that the large investment in ASOS was likely to result in reduced funding for instrumentation, maintenance, and operations of the cooperative observer program, thus jeopardizing the quantity and quality of the climate records. By the beginning of the 21st century, the number of cooperative observing sites had, in fact, nearly halved from its mid-20th-century peak [3].
At the same time, there was substantial growth in automated weather observing outside of the federal sector. In particular, the private sector moved quickly into developing observing programs and, with several examples of companies that cut corners on system deployment or operation, the climate community again declared concern for the future of the climate record. Over the years, network operators began to realize the benefit of increasing the quality of their sensors, methods, and data management. In part, this change resulted from recommendations by the National Research Council (NRC 1999) that network operators adopt the following 10 climate monitoring principles, proposed by Karl et al. (1995; [8]):

1. Management of network change: Network operators should critically examine how changes in their network (e.g., station location, data processing, instrumentation) may impact climatological analyses of the locale and its region.
2. Parallel testing: When changes in the network do occur, network operators should develop appropriate transfer functions for time series by operating the old and new systems (locations, instruments, etc.) simultaneously over a sufficiently long time period to observe the range of climate variability.
3. Metadata: Network operators should fully document the observing system and its operating procedures, including station location and exposure, instrumentation, sampling times, calibration and validation methods, quality assurance procedures, data processing algorithms, and other information pertinent to the data history.
4. Data quality and continuity: Network operators should routinely assess quality and homogeneity of all observations.
5. Integrated environmental assessment: Network operators should plan for their data to be used in state, regional, national, or international climate assessments. Regular scientific analysis of data time series adds value to the monitoring program.
6. Historical significance: Network operators should identify protected sites within their network that will be maintained for many decades to a century or more. Sites should be prioritized based on their contribution to obtaining a homogeneous, long-term climate record.
7. Complementary data: Network operators should prioritize funding for data-poor regions, poorly observed variables, regions that are sensitive to land use/land cover or other changes, and high temporal resolution for critical measurements.
8. Climate requirements: Network designers, operators, and instrument engineers should be provided climate monitoring requirements appropriate to the region and consistent with national and international standards at the outset of network design.
9. Continuity of purpose: Network operators should commit to long-term and stable observations while also serving short-term operational needs.
10. Data and metadata access: Network operators should develop data management systems that enable users to easily and cost-effectively access, use, and interpret data and data products.
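Principle 2 (parallel testing) can be sketched numerically: if the old and new instruments are operated side by side for a sufficiently long overlap period, a least-squares transfer function can map the historical record onto the new instrument's scale. The function names and the simple linear form below are illustrative assumptions, not a prescribed method:

```python
from statistics import mean

def fit_transfer_function(old_obs, new_obs):
    """Fit new = a * old + b by ordinary least squares over the
    parallel-observation (overlap) period."""
    x_bar, y_bar = mean(old_obs), mean(new_obs)
    sxx = sum((x - x_bar) ** 2 for x in old_obs)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(old_obs, new_obs))
    a = sxy / sxx
    b = y_bar - a * x_bar
    return a, b

def splice(old_series, a, b):
    """Adjust the historical (old-instrument) series onto the new
    instrument's scale so the combined record is homogeneous."""
    return [a * x + b for x in old_series]
```

In practice the overlap must span the full range of climate variability (seasons, extremes) before the fitted coefficients are trusted, and the relationship need not be linear.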
These or similar principles inspire network managers to care for their observations prior to distribution to thousands of data users. For the government, in particular, the additional operational costs of data quality assurance, maintenance, and data and metadata management greatly reduce the expense of quality control on the client side (an expense multiplied across hundreds of thousands to millions of users). Hence, budget officials must become aware of the net savings to the economy when data are cared for properly by the network owner.

Current surface observation capabilities
The surface observational networks of the United States provide excellent examples of the capabilities and challenges of monitoring climate variability and change across diverse landscapes and providing data to diverse users. The networks can be generally characterized as the following: (1) climate reference networks, (2) government-operated volunteer observer networks, (3) mesoscale weather and climate monitoring networks, (4) use-inspired surface observing networks, and (5) citizen science networks. Climate reference networks are designed, developed, deployed, operated, and maintained specifically to obtain a high-quality climate record that is representative of a region. Government-operated volunteer observer networks are designed to obtain daily observations with high spatial resolution for a long-term climate record. Mesoscale weather and climate monitoring networks are multi-purpose networks that strive for high-quality observations that can be applied in near real-time to critical and non-critical decisions across multiple disciplines. Use-inspired surface observing networks are designed to solve a specific problem or gap in the current observing capabilities. Citizen science networks help to fill significant gaps in the spatial coverage of other networks as well as engage and educate the public about the importance of data in scientific research and decision making. Certainly any given observing system or network may fit into multiple categories, but most public- and private-funded networks fall primarily into one of these network types.

Climate reference networks
The premier climate reference network is the U.S. Climate Reference Network (USCRN), which first deployed operational stations in 2002 and, as of 2012, has 114 automated stations at 107 locations in the lower 48 states, 12 stations in Alaska, and 2 stations in Hawai'i (Fig. 1). USCRN is operated by the National Oceanic and Atmospheric Administration and was designed to obtain long-term, accurate, and unbiased observations to quantify climate change on a national scale [9]. As such, the network follows strict requirements for site location, instrumentation, maintenance, documentation, and data storage and processing. With the aid of local experts, site locations were selected to obtain a minimum 50-year record without relocation and with minimal changes in land use/land cover; hence, sites are primarily located on public lands (e.g., federal lands, university research stations, conservation areas) [9]. Sites are representative of the region and, to the extent possible, are situated in open areas with sufficient fetch (Fig. 2). Engineers calibrate all instruments on a regular schedule to the standards of the National Institute of Standards and Technology, and both technicians and engineers maintain and document (e.g., via photographs) the site and its instrumentation during annual visits [9].
Instrumentation meets or exceeds standards set by the World Meteorological Organization and, as a key feature of the network, triple-redundant instrumentation was installed for air temperature, precipitation, and, beginning in 2009, soil moisture and temperature (at 5, 10, 20, 50, and 100 cm). The redundancy ensures continuity of the climate record should an instrument fail and enhances the quality assurance of all data, including better detection of instrument drift [9]. In addition to the redundant core measurements, stations also measure relative humidity, wind speed, skin (ground surface) temperature, and incoming solar radiation [10]. All information about the site, sensors, quality assurance procedures, data transmission and processing, and other pertinent information is documented as metadata and retained in perpetuity [9]. Five-minute-resolution observations of all variables except 10-cm and deeper soil observations (which are hourly) are transmitted every hour from all USCRN sites. Data that cannot be transmitted because of communications disruption are stored for up to 7 days at the local site and retransmitted when communications are restored.
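The value of triple redundancy can be illustrated with a short sketch: the median of three co-located sensors is unaffected by a single failed or drifting instrument, and any sensor departing from the median by more than a tolerance can be flagged for inspection. The tolerance, function name, and logic below are illustrative assumptions, not USCRN's operational procedure:

```python
def best_estimate(t1, t2, t3, tolerance=0.3):
    """Return the median of three redundant temperature readings plus
    the indices of any sensors departing from the median by more than
    `tolerance` (deg C) -- candidates for drift or failure."""
    median = sorted([t1, t2, t3])[1]
    suspect = [i for i, t in enumerate((t1, t2, t3))
               if abs(t - median) > tolerance]
    return median, suspect
```

Tracking how often each sensor lands on the suspect list over weeks of data is one simple way slow drift, rather than outright failure, becomes visible.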
Obviously, the climate monitoring provided by climate reference networks is highly desired; however, it also is a substantial financial investment. The investment is critical for providing a baseline climate record against which observations from the other types of networks can be compared and calibrated (e.g., [11,12]), for providing "ground truth" data for satellite remote sensing of climate variables (e.g., [13,14]), and for obtaining a high-quality historical climate record that can capture long-term trends as well as variability and extremes in regional climates (e.g., [10,15]). Because the public and private sectors make multi-million- and multi-billion-dollar decisions related to climate and, most importantly, its extremes, the value of a climate reference network such as USCRN greatly exceeds its financial costs.

Government-operated manual observation networks
As noted earlier, the historical climate record of the United States from the late 1800s to present results from manual observations by hundreds of Federal employees at what are commonly called "first-order stations" and thousands of trained volunteers in the U.S. Cooperative Observer Program [5], commonly called the COOP Network. In particular, a subset of the first-order and COOP stations -- the U.S. Historical Climatology Network (HCN) -- has been used to quantify national- and regional-scale temperature changes in the contiguous United States (Fig. 3) [16]. The 1,218 HCN stations (as of 2012) were selected based on their length of record (i.e., 80 years of monthly temperature and precipitation data), low percentage of missing data, few or no station moves, and spatial coverage across the conterminous United States [17].
Because the stations were not chosen for their pristine locations or the perfection of their observational methods, several corrections were employed to adjust for systematic, nonclimatic changes, resulting in the publication of the "United States Historical Climatology Network (HCN) Serial Temperature and Precipitation Data" record that is used worldwide by researchers, practitioners, and educators. These adjustments help to correct the time-of-observation bias [19,20], station moves and instrument changes [21,22], and undocumented discontinuities [23] such as incorrectly documenting the observation date, improperly resetting the maximum/minimum thermometer, or transcription errors [12].
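The idea behind detecting an undocumented discontinuity can be sketched as follows: subtract a well-correlated reference (e.g., neighbor) series from the candidate series, then find the split point where the shift between segment means of the difference series is largest. This is a deliberately simplified stand-in for the formal pairwise changepoint tests actually used for HCN; the function name and minimum segment length are illustrative assumptions:

```python
from statistics import mean

def find_breakpoint(candidate, reference, min_seg=3):
    """Return (index, shift) of the most likely discontinuity in
    `candidate`, found where the candidate-minus-reference difference
    series shows the largest jump between segment means."""
    diff = [c - r for c, r in zip(candidate, reference)]
    best_idx, best_shift = None, 0.0
    for i in range(min_seg, len(diff) - min_seg + 1):
        shift = abs(mean(diff[i:]) - mean(diff[:i]))
        if shift > best_shift:
            best_idx, best_shift = i, shift
    return best_idx, best_shift
```

Differencing against a neighbor removes the shared regional climate signal, so what remains is dominated by station-specific artifacts such as an instrument change or a station move.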
Although it is desirable for all measurements to be consistent with the 10 climate monitoring principles proposed by Karl et al. (1995), the sheer number of manual observations from volunteers in the COOP Network precludes a pristine dataset. Yet, the COOP Network's data are invaluable as an historical record. From rural farmers to state climatologists to climate change researchers to policymakers, the COOP and HCN datasets are part of their toolkit for analysis and decision making. Unfortunately, as observers retire, move, or pass away, stations close and are rarely replaced with new ones. In addition, with the onset of state and private weather networks as well as shrinking budgets, the local National Weather Service offices that oversee COOP observing sites are rarely compelled to give high priority to the daily temperature and precipitation observations from those sites. The withering of the COOP Network is detrimental to our nation's historical climate record and, like many reductions in scientific observations, may not be recognized until it is too late to take action.

Automated mesoscale weather and climate monitoring networks
During the 1980s and 1990s, rapid advances in microcomputers and telecommunications led to the ability of states, private companies, educational institutions, nonprofit organizations, and other weather- and climate-sensitive groups to install and operate regional to national networks of weather or climate monitoring stations at a reasonable cost (e.g., $1,000 to $20,000 per station) [10]. Most of the early networks were built to aid weather forecasting, with many stations established for K-12 education or for advertisement on local news broadcasts (e.g., a company could invest in a station and have its name mentioned by on-air meteorologists or weatherpersons). More recently, thousands of small businesses, cities, water management agencies, agricultural producers, colleges, universities, and large corporations have invested in weather or climate monitoring. The rapid growth of these more affordable surface observing stations led, in many cases, to an emphasis on quantity over quality. Thousands of stations were installed on rooftops, under trees, near air conditioning units, and in other unsuitable locations.
There were numerous network operators, however, who took great care in the design and implementation of their networks. Even before Karl et al.'s 10 principles of climate monitoring, for example, scientists at the University of Oklahoma and Oklahoma State University established the Oklahoma Mesonet with adherence to standardization and principles to obtain high-quality observations [24,25]. Funded in late 1990 and commissioned in January 1994, the Oklahoma Mesonet has 120 stations (in 2012, Fig. 4) that measure more than 20 environmental variables, including air and soil temperatures (at multiple heights and depths), rainfall, relative humidity, barometric pressure, solar radiation, wind speed (at multiple heights) and direction, and soil moisture (at multiple depths) [15]. Stations transmit data every five minutes to a central facility at the Oklahoma Climatological Survey, where observations immediately undergo a suite of automated quality assurance tests [15,26]. One or more meteorologists also manually analyze the daily data as well as examine long-term trends or biases through a rigorous quality assurance process [15]. Because the managing principle for the network from its inception was "research-quality data in near real time," the Oklahoma Mesonet has served both the weather and climate communities with great distinction for two decades, being called the "gold standard of statewide mesoscale surface networks" by the National Research Council [10]. Primarily developed and maintained by university researchers and staff, statewide and regional automated networks have grown in number since the 1990s. Examples of these multi-purpose networks include the following: West Texas Mesonet (70 stations in 2012, [27]), New Jersey Weather and Climate Network (55 stations, http://climate.rutgers.edu/njwxnet/), and Kentucky Mesonet (64 stations, [28,29]).
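The flavor of such automated quality assurance can be conveyed with a minimal sketch covering three common tests: a range check against climatological limits, a step check against the previous 5-minute observation, and a persistence check for a sensor stuck at one value. The thresholds and function name are illustrative assumptions, not the Oklahoma Mesonet's operational values [26]:

```python
def qa_flags(series, lo=-35.0, hi=50.0, max_step=10.0, max_flat=12):
    """Return a set of QA flags for each 5-min observation in `series`
    (deg C): "range" if outside climatological limits, "step" if the
    change from the previous value is implausibly large, "persistence"
    if the value has repeated for `max_flat` consecutive observations."""
    flags = []
    run = 1  # length of the current run of identical values
    for i, v in enumerate(series):
        f = set()
        if not (lo <= v <= hi):
            f.add("range")
        if i > 0:
            if abs(v - series[i - 1]) > max_step:
                f.add("step")
            run = run + 1 if v == series[i - 1] else 1
        if run >= max_flat:
            f.add("persistence")
        flags.append(f)
    return flags
```

Operational systems layer further tests on top of these (spatial comparison against neighboring stations, cross-variable consistency, and manual review), which is where much of the "research-quality" label is earned.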

Use-inspired surface observing networks
Many regional surface observing systems were established initially to serve agricultural users and, in most cases, were expanded to serve a broader user community as additional funding or technology became available. Other examples of systems inspired by user needs include the following: off-shore platform measurement of weather and sea-surface conditions for the oil and gas industry, tall-tower observations of wind for the wind energy industry, and remote automated weather stations to observe potential wildfire conditions for fire management agencies.
Stations located in rural agricultural areas also are likely to be representative of a larger region and, hence, make excellent climate observing stations if measurement, metadata, and reporting standards are adhered to. Other use-inspired networks, however, have not been deemed acceptable for climate records, primarily because of station placement or the lack of metadata and standardization. For example, most state departments of transportation, particularly those that experience snow and ice conditions during the winter, installed road weather stations along major highways and interstates to monitor atmospheric and pavement conditions for road maintenance and safety. To satisfy operational needs, these network operators installed "environmental sensor stations" adjacent to pavement, often in valleys, where road conditions become hazardous sooner than in flat, open areas. Hence, these stations represented only the microclimate of the immediate area.
Until the 2000s, state road weather networks were rarely used by those outside the transportation community, and local and state network operators could not access each other's observations. To address challenges raised by the National Research Council in its report "Where the Weather Meets the Road: A Research Agenda for Improving Road Weather Services" [30], transportation engineers, meteorologists, climatologists, systems engineers, and practitioners finally partnered in the Clarus Initiative to develop an integrated road weather observational network and data management system [31]. The initiative resulted in establishing guidelines for siting stations, documenting and maintaining metadata, developing quality assurance procedures, obtaining and sharing data in near real-time, and archiving data. The Federal Highway Administration and Intelligent Transportation System Joint Program Office of the U.S. Department of Transportation jointly administered the Clarus Initiative, prototyping the system in Fall 2006 for Alaska, Minnesota, and Utah and then incentivizing other states to join shortly thereafter. By 2012, 38 U.S. states and four Canadian provinces participated in the Clarus system. Because of the short period of record, Clarus data currently are not used for climate analyses; however, with continued adherence to standards and vigilance in operational oversight, the Clarus observations could become a particularly useful dataset for examining climate extremes, microclimates, and climates in mountainous terrain.

Citizen science networks
Taking advantage of the enthusiasm of weather hobbyists, school children, and those who want to learn about science, "citizen science" networks have grown over the years. The most expansive of these grassroots efforts, with over 15,000 volunteers in 2010, is the Community Collaborative Rain, Hail and Snow Network (CoCoRaHS) [32,33]. Founded in 1998, CoCoRaHS provides a platform for communities around the United States to participate in "hands-on" science in their own backyards through the daily observation of precipitation. Volunteers of all ages measure rain, hail, and snow using simple, low-cost rain gauges and hail pads. Observers input their measurements through the CoCoRaHS web site (www.cocorahs.org), where data become immediately available. These data are used daily by many federal, state, and community organizations. Through consistent observation and data sharing, participants actively learn about their climate, their water resources, and the impacts precipitation has on their lives and their communities. In some areas of the country with a greater concentration of observers (e.g., south-central Texas, Colorado's Front Range, metropolitan Chicago and Phoenix), the density approaches one station per 2.5 to 3 square kilometers, offering detailed insight into precipitation variability and extremes.

Benefits and limitations of surface observing of the climate
As federal budgets for environmental observing systems and numerical modeling become tighter, there has been a tendency for modelers and remote sensing experts to state that their technologies are sufficient to monitor changes in the environment. In the case of satellite remote sensing of weather and climate variables, which is necessary for global observations particularly across mountainous terrain, oceans, and countries with few or classified surface observations, any under-appreciation for surface observing systems could become detrimental to satellite programs themselves as well as our nation's historical climate record. "Ground truth" of satellite products --derived from empirical relationships between the desired product (e.g., air temperature) and the measured radiances --requires a long-term record of high-quality surface observations. This dataset is needed not only to develop satellite algorithms but to ensure that long-term climatic trends, synthesized from multiple satellite instruments that have changed significantly from the 1960s to present, are consistent with those detected over land during the same time period.
Both active and passive remote sensing have distinct advantages in their spatial coverage over point-based surface observations, especially in regions such as the American West and Alaska, where surface observing stations are relatively sparse compared to the more populated or flatter regions of the United States [34]. For example, dual-polarimetric weather radars can identify the type and amount of hydrometeors within the troposphere with a resolution high enough to capture intense precipitation shafts that may result in flash flooding (e.g., [35,36]) or distinguish between winter precipitation types (e.g., rain, snow, and sleet; [37]). Small-scale precipitation maxima can move between surface observing sites and, thus, not be measured, and most surface stations are not equipped with a present weather sensor to determine the precipitation type. In addition, satellite imagers can take advantage of radiation bands (e.g., infrared, thermal, microwave) to detect climate-related changes in the environment (e.g., phenology) not measured by surface observing stations (e.g., [34,38-41]).
It is clearly important to measure variables throughout the full vertical profile of the atmosphere, especially the boundary layer, so remotely sensed observations are critical; however, most environmental, economic, and societal decisions are made based on land surface climatology, from a meter or so below the ground surface to several meters above it. Within this region, businesses and residences exist, much of the biosphere evolves, people and goods are transported, wildlife migrates, and water moves and undergoes phase changes. Policymakers also affect public policy (e.g., flood insurance, agricultural incentives, protection of natural resources, emissions limitations) as a result of conditions and changes primarily near the land surface.
The most accurate measurements of the Earth's land surface are in-situ observations because satellite and radar remote sensing have significant limitations. In-situ, not satellite or radar, sensors can measure the atmospheric pressure gradients that drive weather systems. In-situ sensors can measure variables regardless of cloud cover, at precise heights or depths, and with high temporal resolution -- all of which are currently unachievable with satellite remote sensing (e.g., [34,42]). In-situ networks provide the validation data needed for remote sensing and are inexpensive compared to satellite instrumentation, sensor deployment, and sensor maintenance and replacement costs (e.g., [10,43]). In most cases, a broken or poorly functioning in-situ sensor can be replaced by a well-calibrated twin within days, leading to a more continuous data record (e.g., [15]). Because of their high temporal resolution, in-situ networks measure climate extremes and the full range of variability (e.g., [44,45]) that is needed for local to international decision making. For the historical climate record and analysis of important trends in a changing climate, in-situ sensors observe air and soil temperature, precipitation, air and soil moisture, and winds directly, whereas satellite imaging, for example, averages radiances over a large footprint, suffers from atmospheric attenuation, and has poor vertical resolution; thus, satellites cannot adequately measure low-level atmospheric moisture, surface winds, or precipitation (e.g., [46-48]).
Mesoscale surface observing systems are not without their limitations, however. Several problems are characteristic of most surface observing sites: change in land use/land cover over time, necessary relocation or closure of sites, increasing maintenance with sensor age, inadequate spatial coverage (especially for precipitation), limited length of record, differing times of observation, and poorly documented and maintained metadata [9]. With adequate standards and attention to data quality assurance, these problems can be significantly reduced in any given network, allowing for research-quality time series at each station.
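The kind of automated quality assurance mentioned above can be illustrated with a toy example. The sketch below applies three checks that are commonly used on surface station data (a range test, a step test, and a persistence test); the thresholds and flag names are arbitrary illustrations, not the standards of any particular network.

```python
# Toy illustration of automated quality-assurance checks for a 5-min
# air temperature record. Thresholds are illustrative examples only.

def qa_flags(temps_C, lo=-60.0, hi=60.0, max_step=10.0, persist_n=12):
    """Return one flag per observation: 'ok', 'range', 'step', or 'persist'.

    temps_C   : sequence of air temperatures (degC) at 5-min intervals
    lo, hi    : plausible physical limits (range test)
    max_step  : largest believable change between consecutive 5-min values
    persist_n : number of prior intervals that must be identical before
                the sensor is flagged as stuck
    """
    flags = []
    for i, t in enumerate(temps_C):
        if not (lo <= t <= hi):
            flags.append("range")    # physically implausible value
        elif i > 0 and abs(t - temps_C[i - 1]) > max_step:
            flags.append("step")     # unrealistic 5-min jump
        elif i >= persist_n and len(set(temps_C[i - persist_n:i + 1])) == 1:
            flags.append("persist")  # sensor stuck at one value
        else:
            flags.append("ok")
    return flags

# A spike to 85 degC fails the range test; the recovery to 21.4 degC
# is flagged as a suspicious step relative to the bad value before it.
print(qa_flags([21.0, 21.2, 85.0, 21.4]))  # → ['ok', 'ok', 'range', 'step']
```

Operational systems add many more tests (spatial consistency with neighboring stations, like-sensor comparisons, manual review), but the principle is the same: flag rather than delete, so the raw record is preserved.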
One of the greatest benefits of a high-resolution automated mesoscale climate network is the ability to observe rapidly evolving phenomena that are indicative of a region's climate. For example, Figure 5 displays a significant temperature change (~18˚C in 30 min) that is indicative of the extremes of Alaska's climate. In this case, a warm air mass from the Gulf of Alaska penetrated inland in the Tetlin National Wildlife Refuge in southeast Alaska (near the U.S.-Canada border), relieving a lengthy cold stretch. To better engineer structures (e.g., aboveground pipelines) and prepare for the health consequences of this type of harsh environment, high-resolution observations are needed. A benefit of the high spatial resolution of a mesoscale monitoring network is the ability to document phenomena that typically occur between widely spaced synoptic stations. For example, decaying nocturnal thunderstorms can produce hot, dry, and gusty surface winds called "heatbursts," typically generated in an environment characterized by a dry boundary layer and a surface temperature inversion [49]. Prior to the analysis of an 18-year record of Oklahoma Mesonet data, fewer than two dozen heatbursts had been documented worldwide in the scientific literature. With the high-resolution Mesonet observations, however, 207 heatburst events of various magnitudes, areal coverage, and duration were identified between 1994 and 2009 across Oklahoma [50]. The geographical distribution of heatbursts across Oklahoma (Fig. 6) documents the characteristically drier boundary layer in the western (versus the eastern) part of the state. A longer period of record of heatbursts should be able to document whether or not the regional boundary-layer moisture gradient shifts position with climate change.
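Events like these can only be found because the 5-min record makes abrupt changes visible. As a minimal sketch (not the published detection criteria of the Oklahoma heatburst climatology [50]), one could scan a station's 5-min record for an abrupt warming accompanied by sharp drying:

```python
# Illustrative sketch: flag candidate heatburst-like events in a 5-min
# station record. Thresholds are arbitrary examples for demonstration.

def find_heatburst_candidates(obs, warm_thresh=5.0, rh_drop=10.0, window=6):
    """obs: list of (temp_C, rh_pct) tuples at 5-min intervals.
    window=6 -> compare values 30 minutes apart.
    Returns indices where temperature rose by >= warm_thresh degC while
    relative humidity fell by >= rh_drop percentage points."""
    hits = []
    for i in range(window, len(obs)):
        t0, rh0 = obs[i - window]
        t1, rh1 = obs[i]
        if t1 - t0 >= warm_thresh and rh0 - rh1 >= rh_drop:
            hits.append(i)
    return hits

# Toy record: steady conditions, then an abrupt warm, dry jump.
record = [(22.0, 70.0)] * 6 + [(29.0, 40.0)] * 3
print(find_heatburst_candidates(record))  # → [6, 7, 8]
```

A real climatology would add wind-gust criteria, require nocturnal timing, and cross-check neighboring stations to reject sensor errors, but the point stands: hourly or daily records would smooth such events out of existence.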
The traditional separation of "weather observations" and "climate observations" is being blurred with the near real-time availability of high-quality, high-resolution data from automated mesoscale observing networks. These observations are used both by weather forecasters to protect lives and property during extreme events and by climatologists to analyze and document these extremes, their trends over time, and their impacts on physical, natural, and human systems. For example, the Kentucky Mesonet measured 5-min rainfall during a heavy precipitation event on 1-2 May 2010 in the Mid-South U.S. (Fig. 7). The larger event resulted in the historic flooding of downtown Nashville, Tennessee. The Mesonet station received 8.38 mm of rain during a 5-min period, 50.8 mm during an hour, and 258 mm over two days --the latter broke the state's two-day precipitation record of 211 mm (from 6-7 December 1924) [51]. Similarly, daily rainfall observations from multiple climate monitoring networks were used to request disaster assistance from the Federal Emergency Management Agency for damaging storms caused by the rare re-intensification of a tropical storm (Erin) over western Oklahoma on 19 August 2007 (Fig. 8) [52]. As another example, the New Jersey Weather and Climate Network provided high-resolution data (Fig. 9) for government officials and the public before, during, and after Hurricane Sandy (2012), which devastated the Northeast U.S., especially coastal areas of New Jersey. Mesoscale observations have been combined with remotely sensed data to produce important products for decision makers that could not be developed with either type of observation alone.
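Multi-duration totals like the Kentucky values above all derive from a single high-frequency gauge record. The sketch below shows the idea with synthetic numbers (not the Kentucky Mesonet observations): given 5-min rainfall totals, the peak accumulation over any duration is a maximum running sum.

```python
# Minimal sketch: derive peak multi-duration rainfall totals from a
# single 5-min gauge record. The series below is synthetic data.

def max_accumulation(rain_5min, n_intervals):
    """Maximum running sum of n_intervals consecutive 5-min totals (mm)."""
    best = 0.0
    for i in range(len(rain_5min) - n_intervals + 1):
        best = max(best, sum(rain_5min[i:i + n_intervals]))
    return best

# Synthetic two hours of 5-min rainfall (mm) with one intense burst.
series = [0.2] * 12 + [8.4, 7.0, 6.5, 5.0, 4.0, 3.0] + [0.5] * 6

print(max_accumulation(series, 1))   # peak 5-min total
print(max_accumulation(series, 12))  # peak 1-hour total (12 x 5 min)
```

This is exactly why sub-hourly records matter for the climate record: a daily-total gauge could never show that most of a day's rain fell in one extreme hour.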
For example, to depict the potential for wildfire, the Oklahoma Fire Danger Model [53] includes a set of maps that combines temperature, relative humidity, and wind observations from the Oklahoma Mesonet with the Normalized Difference Vegetation Index (NDVI) product (1-km resolution) from polar-orbiting satellites as well as dead fuel models. Map products include a burning index (Fig. 10) that represents the estimated intensity of any fire that occurs, an ignition component that estimates how easily a firebrand can produce a fire, a spread component that represents the forward speed of any headfire, and an energy release component that estimates the heat released per unit area at the head of a fire. Firefighters use these products to staff their teams in advance of days with high fire danger and to protect both their teams and the public during wildfire events. This example demonstrates the benefits of developing robust remote sensing and in-situ observational networks in tandem so as to develop decision products that save lives, property, and money.

What might the future hold for surface observing of the climate?
In 2009, the National Research Council (NRC) issued a report documenting the status of weather and climate observing systems in the United States and recommended actions to evolve the existing, disparate networks into an integrated, coordinated "network of networks" [10]. The NRC report acknowledged the pivotal role of the federal government, especially the National Oceanic and Atmospheric Administration, in weather and climate monitoring and information services, but also recognized the rapidly growing roles of state and local governments, universities, and the private sector in obtaining mesoscale observations. The broader weather and climate enterprise was investing in surface observing; however, there was little coordination, standardization, or leadership to ensure that the resulting system would be greater than the sum of its individual parts.
In response to the 2009 NRC report, the American Meteorological Society, whose membership comprises governmental, private sector, and academic interests in the weather and climate enterprise, established an Ad Hoc Committee on a Nationwide Network of Networks. Through six subcommittees with representatives from across the enterprise, the committee provided a vision for a path forward from the existing disarray in mesoscale observation systems [54]. The committee recommended that a steering group composed of experts from the public, private, and academic sectors be created to adopt technical standards for participating networks, incentivize networks to participate, coordinate activities of the nationwide network of networks (e.g., data sharing policies and costs, product development and dissemination, participant workshops and educational activities), and develop the market for mesoscale observations [54].
These and other reports have highlighted the fact that, especially in times of austerity and other fiscal pressures, the federal government cannot alone be responsible for monitoring the nation's climate. With considerable funding and interest in surface observing from non-federal entities with significant experience and expertise, it would be wise for the federal government to recognize the value of providing leadership in coordinating the growing array of networks. Hand in hand with this coordination effort, the federal government should work to integrate its own suite of surface observing systems and give high priority to both a climate reference network (against which other observing systems can be compared for quality) and the historical cooperative observer network (for long-term continuity). It is anticipated that success in creating a nationwide network of networks would generate commercial revenue sufficient to help reduce the costs to the government of its observation systems. Imagine a Google, Inc. model for obtaining and sharing weather and climate data; the income generated could revolutionize the financial structure of surface observations.
A dramatic transformation is indeed needed. A broader vision for "research quality data in near real-time" must evolve and permeate the enterprise to minimize the competition between weather monitoring for public safety and climate monitoring for climate change assessment. In particular, although mistakes have been made in the past by the private sector, the climate enterprise should embrace an appropriate role for the private sector as leaders in innovation, cost efficiency, and marketing. Perhaps a certification program, overseen by independent experts, can raise the value of excellence in environmental monitoring to ensure a growing record of climate observations for decades to come.

Conclusion
The United States has a rich history of monitoring the climate using surface observations, from thousands of dedicated volunteer observers to governmental, academic, and commercial networks using automated stations and reliable telecommunications. In the past several decades, these observations have been augmented by remote sensing technologies that provide necessary spatial coverage and the ability to monitor non-atmospheric variables related to climate variability and change. Hand in hand, in-situ and remotely sensed measurements provide data for research, education, and daily decision making that are too often undervalued by funding agencies, especially as budgets tighten.
As we struggle with the consequences of climate change, surface observing systems become increasingly critical for monitoring how climate variability, especially extremes in temperature and precipitation, evolves across different geographical regions. These observations have become critical components not only in establishing our climate record, but in ensuring that global and regional climate models represent the key climate drivers across the world as well as providing routine, hourly or sub-hourly observations for daily decisions by natural resource managers, public safety officials, transportation managers, agricultural producers, event planners, weather forecasters, and others. A robust and healthy surface observing system --from climate reference networks to regional mesonets to volunteer observers --must be maintained for the security of national, regional, and local economies and the protection of our natural resources.