Background

Long-term monitoring (LTM) is required for evaluating the performance of groundwater remedies and for post-closure monitoring at Department of Defense (DoD) and other sites. LTM is expected to be costly, since it generally spans many years and is required at a large number of sites.

This project demonstrated and validated the use of the Summit Envirosolutions, Inc. Sampling Optimizer and Data Tracker software, which assists in performing the following LTM optimization (LTMO) functions: (1) identifying improved sampling plans that eliminate redundant sampling locations and/or frequencies; (2) identifying values in recently collected data that fall outside expectations based on statistical evaluation of previous values; and (3) tracking metrics, such as contaminant mass, over time relative to site-specific objectives.
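
The summary does not specify the statistical test behind function (2). Purely as a minimal illustration of the idea, the Python sketch below flags a new measurement that falls outside a tolerance interval built from the mean and standard deviation of historical log-transformed concentrations; the function name, the tolerance width k, the log transform, and the example data are all assumptions, not the software's actual method.

    import math

    def flag_out_of_bounds(history, new_value, k=2.0):
        # Flag a new measurement lying more than k standard deviations
        # from the historical mean of log-transformed concentrations
        # (a common transform for groundwater data).
        logs = [math.log(v) for v in history if v > 0]
        n = len(logs)
        mean = sum(logs) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
        if sd == 0.0:
            return False
        z = (math.log(new_value) - mean) / sd
        return abs(z) > k  # True -> "out-of-bounds", warrants attention

    # Hypothetical historical TCE concentrations (ug/L) at one well
    history = [120.0, 95.0, 110.0, 88.0, 105.0, 99.0]
    print(flag_out_of_bounds(history, 450.0))   # True: unexpectedly high
    print(flag_out_of_bounds(history, 101.0))   # False: within expectations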

The software comprises two major modules: Sampling Optimizer (SO) and Data Tracker (DT). Sampling Optimizer identifies redundant sampling locations and/or frequencies in historical data. The SO uses mathematical optimization, an approach unique among LTMO software products, and invokes an auxiliary software component called Model Builder (MB) to provide quasi-automatic interpolation model fitting. Data Tracker assists users in comparing current monitoring data with historical data to identify cases where current data deviate from expectations based on historical values and patterns. The MB enables visualizing relative uncertainty in the fitted interpolation models, and it is also used as needed by the DT to calculate metrics that are based on interpolation of measured concentration values.
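
The specific interpolation models that MB fits are not described in this summary. As an illustrative stand-in, the sketch below estimates concentration at an unsampled point with inverse distance weighting (IDW), a common plume-interpolation technique; the function and example data are hypothetical.

    def idw_interpolate(samples, x, y, power=2.0):
        # Inverse-distance-weighted concentration estimate at (x, y)
        # from measured (xi, yi, ci) samples.
        num = den = 0.0
        for xi, yi, ci in samples:
            d2 = (x - xi) ** 2 + (y - yi) ** 2
            if d2 == 0.0:
                return ci  # at a sampled location, return the measurement
            w = d2 ** (-power / 2.0)
            num += w * ci
            den += w
        return num / den

    # Hypothetical well coordinates (m) and concentrations (ug/L)
    wells = [(0, 0, 120.0), (50, 0, 60.0), (0, 50, 80.0), (50, 50, 20.0)]
    print(idw_interpolate(wells, 25.0, 25.0))  # 70.0 at the grid center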

Objective

The primary objective of this project was to demonstrate and validate the use of the Sampling Optimizer and Data Tracker software at three DoD sites. Comparisons were made with the MAROS software at one of the sites. A further objective of this project was to make the software and documentation available to the government for free use at government sites by government personnel and their contractors.

Demonstration Results

The project team found the software easy for a typical DoD analyst or contractor to learn and use. The SO module provided useful tradeoff curves of sampling cost (e.g., number of samples) versus the “error,” which increases as the number of samples is reduced. Plans with significant reductions in the number of samples and acceptable loss of information were identified; a typical result was a 35% reduction in the number of sampling locations. The DT flagged as “out-of-bounds” the vast majority of artificial anomalies introduced to test this module and also identified several actual anomalies in the data from the demonstration sites. The time and effort required to prepare and import the data into the software, and to execute the various software functions, were documented. Suggestions regarding data preparation were provided, and potential future improvements to the software were identified. The software and User’s Guide are now available for use at government sites by government personnel and their contractors.

Implementation Issues

The mathematical optimization employed within the SO allows evaluation of sampling redundancy on a system-wide basis (i.e., identifying the best plan when one, two, three, or more locations are removed). This is a significant improvement over other LTMO software, such as MAROS, that evaluates redundancy only on a well-by-well basis. A key benefit is that the SO software allows the tradeoff between the number of samples and the accuracy of the resulting plume interpolation to be assessed. Another key benefit of the approach taken by the SO for evaluating data redundancy is that plume visualizations for the baseline plan (with all samples) versus improved plans (with reduced numbers of samples) are created within the software. These comparative visualizations are effective for communicating with stakeholders and regulators. A benefit of the DT module is that the software automatically highlights recently collected data values that are unexpected and require further attention.
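
The SO's optimization algorithm is not detailed in this summary. Purely to illustrate the system-wide idea of trading the number of sampling locations against interpolation error, the sketch below uses a simple greedy backward elimination (not the SO's actual method), measuring “error” as the mean absolute deviation of the reduced-network IDW surface from the full-network baseline over an evaluation grid; all names and data are hypothetical.

    def idw(samples, pt, p=2.0):
        # Same inverse-distance weighting as in the earlier sketch.
        num = den = 0.0
        for xi, yi, ci in samples:
            d2 = (pt[0] - xi) ** 2 + (pt[1] - yi) ** 2
            if d2 == 0.0:
                return ci
            w = d2 ** (-p / 2.0)
            num += w * ci
            den += w
        return num / den

    def tradeoff_curve(wells, grid, max_removed):
        # Greedy backward elimination: at each step drop the well whose
        # removal least perturbs the interpolated surface, recording one
        # (locations removed, error) point per step.
        baseline = [idw(wells, g) for g in grid]   # full-network surface
        kept, curve = list(wells), []
        for k in range(1, max_removed + 1):
            best = None
            for w in kept:
                reduced = [s for s in kept if s is not w]
                err = sum(abs(idw(reduced, g) - b)
                          for g, b in zip(grid, baseline)) / len(grid)
                if best is None or err < best[0]:
                    best = (err, w)
            kept.remove(best[1])
            curve.append((k, best[0]))
        return curve

    # Hypothetical network of five wells and a 6x6 evaluation grid
    wells = [(0, 0, 120.0), (50, 0, 60.0), (0, 50, 80.0),
             (50, 50, 20.0), (25, 25, 70.0)]
    grid = [(x, y) for x in range(0, 51, 10) for y in range(0, 51, 10)]
    for k, err in tradeoff_curve(wells, grid, 3):
        print(f"{k} location(s) removed -> mean abs error {err:.1f} ug/L")

Each point on the resulting curve corresponds to the best plan found for a given number of removed locations, which is the system-wide tradeoff that a well-by-well evaluation cannot produce.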