Draft guidance on sampling and mitigation measures for controlling corrosion: Supporting information

Principles of corrosion in drinking water distribution systems

Exposure to contaminants resulting from internal corrosion of drinking water systems can arise from corrosion in the distribution system, in the plumbing system, or in both.

The degree to which corrosion is controlled for a contaminant in a system can be assessed adequately by measuring the contaminant at the tap over time and correlating its concentrations with corrosion control activities.

This document focuses primarily on the corrosion and leaching of lead-, copper- and iron‑based materials. It also briefly addresses the leaching from galvanized and cement pipes, but does not include microbiologically influenced corrosion.

The corrosion of metallic materials is electrochemical in nature and is defined as the “destruction of a metal by electron transfer reactions” (Snoeyink and Wagner, 1996). For this type of corrosion to occur, all four components of an electrochemical cell must be present: (1) an anode, (2) a cathode, (3) a connection between the anode and the cathode for electron transport and (4) an electrolyte solution that will conduct ions between the anode and the cathode. In the internal corrosion of drinking water distribution systems, the anode and the cathode are sites of different electrochemical potential on the metal surface, the electrical connection is the metal and the electrolyte is the water.

The key reaction in corrosion is the oxidation or anodic dissolution of the metal to produce metal ions and electrons:

M → Mⁿ⁺ + ne⁻

where:

M is the metal undergoing corrosion
Mⁿ⁺ is the dissolved metal ion
n is the number of electrons transferred
e⁻ is an electron

In order for this anodic reaction to proceed, a second reaction must take place that uses the electrons produced. The most common electron acceptors in drinking water are dissolved oxygen and aqueous chlorine species.
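As an illustration from standard electrochemistry (not drawn from a specific source cited in this document), the anodic dissolution above is typically paired with one of the following cathodic half-reactions:

```latex
% Anodic (oxidation) half-reaction: dissolution of the metal
M \rightarrow M^{n+} + n\,e^-

% Cathodic (reduction) half-reactions for the most common electron
% acceptors in drinking water:
% reduction of dissolved oxygen
\mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \rightarrow 4\,\mathrm{OH^-}

% reduction of hypochlorous acid (free chlorine)
\mathrm{HOCl} + \mathrm{H^+} + 2\,e^- \rightarrow \mathrm{Cl^-} + \mathrm{H_2O}
```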

The ions formed in the reaction above may be released into drinking water as corrosion products or may react with components present in the drinking water to form a scale on the surface of the pipe. The scale that forms on the surface of the metal may range from highly soluble and friable to adherent and protective. Protective scales are usually created when the metal cation combines with a hydroxide, oxide, carbonate, phosphate or silicate to form a precipitate.

The concentration of a specific metal in drinking water is determined by the corrosion rate and by the dissolution and precipitation properties of the scale formed. Initially, with bare metal, the corrosion rate far exceeds the dissolution rate, so a corrosion product layer builds over the metal’s surface. As this layer tends to stifle corrosion, the corrosion rate drops towards the dissolution rate (Snoeyink and Wagner, 1996).
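The qualitative behaviour described above can be sketched numerically. The following is a minimal illustrative model, not taken from the source: it assumes the corrosion rate decays exponentially with scale thickness while the scale dissolves at a constant rate, so the corrosion rate relaxes from its bare-metal value towards the dissolution rate. All parameter values are invented.

```python
import math

def corrosion_history(steps=200, dt=1.0, bare_rate=10.0,
                      k=0.5, dissolution_rate=1.0):
    """Toy model: corrosion rate over time as scale builds on bare metal.

    bare_rate        -- initial corrosion rate on bare metal (arbitrary units)
    k                -- how strongly scale thickness stifles corrosion
    dissolution_rate -- constant dissolution rate of the scale
    """
    thickness = 0.0
    rates = []
    for _ in range(steps):
        # Corrosion slows as the protective layer thickens, approaching
        # (but never falling below) the scale's dissolution rate.
        rate = dissolution_rate + (bare_rate - dissolution_rate) * math.exp(-k * thickness)
        rates.append(rate)
        # Net scale growth = metal corroded minus scale dissolved.
        thickness += (rate - dissolution_rate) * dt
    return rates

rates = corrosion_history()
```

With these invented parameters, the rate starts at the bare-metal value and converges towards the dissolution rate, mirroring the qualitative description by Snoeyink and Wagner (1996).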

Main contaminants from corrosion of drinking water distribution systems

The materials present in the distribution system determine which contaminants are most likely to be found at the tap. The primary contaminants of concern that can be released by the corrosion of materials in drinking water distribution systems include antimony, cadmium, copper, iron, lead and zinc. It is important to assess whether these contaminants will be present at concentrations that exceed those considered safe for human consumption. Notably, discolouration (red water) events are likely to be accompanied by the release of accumulated contaminants, including lead. Discoloured water should not be considered safe to consume or treated as only an aesthetic issue; instead, its occurrence should trigger sampling for metals and potentially additional distribution system maintenance (Friedman et al., 2016).

Sources of contaminants in distribution systems

Lead service lines were widely installed in Canada until 1975 and, owing to their durability, many remain in place. In Canada, copper plumbing with lead–tin solders (widely used until 1989) and brass faucets and fittings predominate in domestic plumbing systems (Churchill et al., 2000).

Cast iron and ductile iron pipes have historically been used for water mains in Canada. Lining ductile iron pipes and cast iron mains with cement mortar to protect them against corrosion remains a common practice (AWWA, 2017a). Galvanized steel was commonly used in plumbing pipes and well components until 1980 (NRCC, 2015). Cement‑based materials are also commonly used to convey water in large-diameter pipes. In new installations, polyvinyl chloride (PVC) pipes often replace copper tubing, lead service lines and distribution pipes.

Lead pipes and solders

Lead may leach into potable water from lead pipes in old water mains, lead service lines, lead in pipe jointing compounds and soldered joints, lead in brass and bronze plumbing fittings, and lead in goosenecks, valve parts or gaskets used in water treatment plants or distribution mains. Lead was a common component of distribution systems for many years.

Lead service lines have been shown to be a consistently high source of lead for many years and to contribute 50% to 75% of the total lead at the tap after extended stagnation times. Lead service lines have been shown to release lead in both dissolved and particulate form under various conditions (Health Canada, 2019a). A number of studies found iron release after full and partial lead service line replacement. These studies established a correlation between particulate lead at the tap and metals such as iron, zinc, tin and copper (Deshommes et al., 2010a; McFadden et al., 2011; Camara et al., 2013). More detailed information on lead release from lead service lines can be found elsewhere (Health Canada, 2019a).

All provinces and territories use the National Plumbing Code of Canada (NPC) as the basis for their plumbing regulations. The NPC allowed lead as an acceptable material for pipes (service lines) until 1975 (NRCC, 2015).

The 1990 version of the NPC prohibited the use of lead solders in new plumbing or in repairs to plumbing for drinking water supplies (NRCC, 2015). The most common replacements for lead solders are tin–antimony, tin–copper and tin–silver solders. Under the NPC, components (i.e., fittings) used for potable water applications must comply with the relevant standards for plumbing fittings (NRCC, 2015). The relevant standards, namely ASME A112.18.1/CSA B125.1 and CSA B125.3 (CSA, 2018a, 2018b), limit the lead content of solder to 0.2% and include requirements to comply with both NSF/ANSI/CAN Standard 61 and NSF/ANSI/CAN Standard 372 (NSF International, 2020a, 2020b).

Fixtures such as refrigerated water coolers and bubblers commonly used in schools and other non-residential buildings may contain lead. Selected components of water coolers such as soldered joints within the fixtures or the lining in the tank may contain alloys with lead (U.S. EPA, 2006). Some of these fixtures are still in use in Canada and can contribute high levels of lead to drinking water (McIlwain et al., 2015).

Copper pipes and brass fittings and fixtures

Copper is used for pipes, and copper alloys are found in domestic plumbing: brasses in fittings and bronzes in valves. Brasses are essentially alloys of copper and zinc with other minor constituents, such as lead. Brass fittings are also often coated with a chromium–nickel compound. Bronzes (also referred to as red brass) are alloys of copper, tin and zinc, with or without lead. Historically, most brasses contained between 2% and 8% lead; currently, they contain less than 4% lead (NSF International, 2020a). An important consideration for reducing exposure to lead is to address leaching from these materials by specifying that they meet health-based and plumbing standards. A number of studies have demonstrated that the use of components such as faucets and other fittings with a low lead content can result in a reduced concentration of lead at the tap (Health Canada, 2019a; Pieper et al., 2016). NSF/ANSI/CAN Standard 61 (Drinking Water System Components — Health Effects) limits the leaching of lead into drinking water. To comply with NSF/ANSI/CAN Standard 372 (Drinking Water System Components — Lead Content) (NSF International, 2020a, 2020b), components such as plumbing fittings and materials must not contain more than 0.25% lead as a weighted average.
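To make the weighted-average criterion concrete, here is a small sketch of how such a calculation can be performed. The weighting by wetted surface area follows the general approach of NSF/ANSI/CAN Standard 372, but the component list and all numbers below are invented for illustration:

```python
def weighted_lead_content(components):
    """Weighted-average lead content (%), weighted by wetted surface area.

    components -- list of (lead_percent, wetted_area) tuples; the area
                  units cancel out, so any consistent unit works.
    """
    total_area = sum(area for _, area in components)
    return sum(pct * area for pct, area in components) / total_area

# Invented example: a fitting with three wetted parts.
fitting = [
    (0.10, 8.0),   # body: 0.10% lead, 8.0 cm^2 wetted area
    (1.50, 0.5),   # small brass insert: 1.50% lead, 0.5 cm^2
    (0.00, 12.0),  # plastic waterway: no lead, 12.0 cm^2
]

average = weighted_lead_content(fitting)
compliant = average <= 0.25  # the 0.25% weighted-average limit
```

Even though one small part exceeds 0.25% lead on its own, the area-weighted average (about 0.08% in this invented example) is what governs compliance.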

Pieper et al. (2015) found that corrosion may be a significant concern for well owners and that brass components are the most likely source of lead. Another study found that lead leaching from leaded brass (C36000) increased with decreasing pH and alkalinity (Pieper et al., 2016). More detailed information on lead release from brass can be found elsewhere (Health Canada, 2019a).

Iron pipes

The following iron-based materials are the principal sources of iron in drinking water distribution systems: cast iron, ductile iron, galvanized iron and steel. Iron may be released directly from iron-based materials or indirectly through the iron corrosion by-products, or tubercles, formed during the corrosion process, leading to discoloured (red) water events. Since cast iron and ductile iron are frequently found in Canadian drinking water distribution systems, it is not surprising that red water is the most common corrosion problem reported by consumers. When the iron concentration exceeds the aesthetic objective, the iron can stain laundry and plumbing fixtures, produce an undesirable taste in beverages and impart a yellow to reddish-brown colour to the water.

Contaminants can accumulate within or on top of iron and lead corrosion products and scale deposits in distribution systems (Lytle et al., 2004; Schock, 2005; Schock et al., 2008a, 2014; Friedman et al., 2010). These scales can then be dislodged and released back to the water in the distribution system with accumulated contaminants such as lead and arsenic (Schock, 2005; U.S. EPA, 2006; Lytle et al., 2014). Iron release has also been correlated with particulate lead at the tap (Deshommes et al., 2010a; McFadden et al., 2011; Camara et al., 2013; Schock et al., 2014; Trueman and Gagnon, 2016; Trueman et al., 2017; Deshommes et al., 2017; Deshommes et al., 2018).

Galvanized pipes

Galvanized pipes release zinc, since they are manufactured by dipping steel pipes in a bath of molten zinc. Galvanized pipes can also be sources of cadmium and lead, since these metals are present as impurities in the zinc coating (Leroy et al., 1996). The NPC permitted the use of galvanized steel as an acceptable material for plumbing pipes until 1980 (NRCC, 2005). Lead and cadmium are correlated with zinc when galvanized steel pipes are the source of lead release (Clark et al., 2015). As such, the presence of cadmium may be an indicator that galvanized steel pipes are a source of lead. A study that exposed brass and galvanized steel to the more aggressive waters typical of groundwater found that galvanized steel may still release significant lead as a result of the sorption of lead to the plumbing. The authors concluded that galvanized steel would remain an issue for systems without corrosion control and that it is important for private well owners to test for lead (Pieper et al., 2016). More detailed information on lead and cadmium release from galvanized pipe can be found elsewhere (Health Canada, 2019a, 2020a).

Cement pipes

Cement-based materials used to convey drinking water include reinforced concrete pipes, cement mortar linings and asbestos-cement pipes. In addition to the aggregates (sand, gravel or asbestos), which constitute the basic structure of cement, the binder, which is responsible for the cohesion and mechanical properties of the material, consists mostly of calcium silicates and calcium aluminates in varying proportions (Leroy et al., 1996). Degradation of cement-based materials can substantially degrade water quality, especially in long lines or low-flow areas and in poorly to moderately buffered waters. It can be a source of calcium hydroxide (lime) in the distributed water, which may result in an increase in pH and alkalinity. The degradation can also precipitate a variety of minerals, causing cloudy or turbid water and poor taste. In extreme cases, aggressive water can reduce pipe strength and increase head loss (Schock and Lytle, 2011). The degradation of cement-based materials can also be a source of aluminum and asbestos in drinking water. Newly installed in situ mortar linings have been reported to cause water quality problems in dead ends or under low-flow conditions when water alkalinity is low (Douglas and Merrill, 1991).

Plastic pipes

PVC, polyethylene and chlorinated PVC pipes used in the distribution system have the potential to release contaminants into the distributed water. Stabilizers are used to protect PVC from decomposition when exposed to extreme heat during production. In Canada, organotin compounds are the most common stabilizers used in the production of PVC pipes for drinking water and can be released into drinking water. More detailed information on contaminant release from PVC pipes can be found elsewhere (Health Canada, 2013). Fittings intended for PVC pipes can be made of brass, which contains lead and can be a potential source of lead where PVC pipes are used. Under the NPC, all plastic pipes must comply with the CSA B137 series of standards for plastic pipe, which require that pipes and the associated fittings comply with the NSF/ANSI/CAN Standard 61 (NSF International, 2020a) requirements for leaching of contaminants.

Challenges in measuring corrosion

There is no single, reliable index or method to measure water corrosivity and reflect population exposure to contaminants that are released by the distribution system. Given that a major source of metals in drinking water is related to corrosion in distribution and plumbing systems, measuring the contaminant at the tap is the best tool to assess corrosion and reflect population exposure.

Levels of contaminants at the tap

The literature indicates that lead, copper and iron are the contaminants whose levels are most likely to exceed guideline values owing to the corrosion of materials in drinking water distribution systems. The maximum acceptable concentration (MAC) for lead applies to total lead and is based on health considerations for the most sensitive population (i.e., children). However, the MAC for lead is established based on feasibility rather than on health protection alone, since current science cannot identify a level below which lead is no longer associated with adverse health effects (Health Canada, 2019a). The guideline for copper is based on bottle-fed infants (Health Canada, 2019b) and the guideline for iron is based on an aesthetic objective (Health Canada, 1978). Both copper and iron are considered essential elements in humans. Based on these considerations, lead concentrations at the tap are used as the basis for initiating corrosion control programs.

A national survey was conducted to ascertain the levels of cadmium, calcium, chromium, cobalt, copper, lead, magnesium, nickel and zinc in Canadian distributed drinking water (Méranger et al., 1981). Based on the representative samples collected at the tap after 5 min of flushing at the maximum flow rate, the survey concluded that only copper levels significantly increased in the drinking water at the tap when compared with raw water.

Concurrently, several studies showed that concentrations of trace elements from household tap water sampled after a period of stagnation can exceed guideline values (Wong and Berrang, 1976; Lyon and Lenihan, 1977; Nielsen, 1983; Samuels and Méranger, 1984; Birden et al., 1985; Neff et al., 1987; Schock and Neff, 1988; Gardels and Sorg, 1989; Schock, 1990a; Singh and Mavinic, 1991; Lytle et al., 1993; Viraraghavan et al., 1996).

A number of contaminants can be accumulated in and released from the distribution system. Scales formed in distribution system pipes that have reached a dynamic equilibrium can subsequently release contaminants such as aluminum, arsenic, other trace metals and radionuclides (Valentine and Stearns, 1994; Reiber and Dostal, 2000; Lytle et al., 2004; Schock, 2005; Copeland et al., 2007; Morris and Lytle, 2007; Schock et al., 2008a; Friedman et al., 2010; Wasserstrom et al., 2017). Changes made to the treatment process, particularly those that affect water quality parameters such as pH, alkalinity and oxidation-reduction potential (ORP); blending; and change of water supply should be accompanied by close monitoring in the distributed water (Schock, 2005).

Lead service lines

Lead service lines have been shown to be a consistently high source of lead for many years and to contribute 50% to 75% of the total lead at the tap after extended stagnation times. Although the majority of lead released from lead service lines under stagnant conditions is dissolved lead, water flow can increase the release of both dissolved and particulate lead through the mass transfer of lead out of pipe scales and by physically dislodging the pipe scales. The relative contribution of dissolved and particulate lead is not clearly understood and likely varies with water chemistry, plumbing configuration, stagnation time, flow regime, age of the plumbing materials containing the lead and use patterns. The presence of particulate lead in drinking water is sporadic, unpredictable and often associated with mechanical disturbances to the system. It has also been shown to result from galvanic corrosion (Health Canada, 2019a) and to continue and even worsen over long periods of time (St. Clair et al., 2015).

Replacing the lead service line can disturb or dislodge existing lead scales or sediments containing lead, resulting in a significant increase in lead levels at the tap. This increase has been shown to continue for three or more months after the lead service line replacement (Health Canada, 2019a). Del Toral et al. (2013) found that disturbances to the lead service line increased lead concentrations in the water. These disturbances included meter installation or replacement, automated meter installation, service line leak or external service shut-off valve repair, and significant street excavation in proximity to the home.

Lead has been correlated with iron owing to the adsorption of dissolved lead onto iron deposits in the lead service line and premise plumbing (Health Canada, 2019a; Trueman and Gagnon, 2016; Deshommes et al., 2017; Pieper et al., 2017; Trueman et al., 2017; Pieper et al., 2018; Bae et al., 2020). Sustained lead release after full lead service line replacement can result from the adsorption of lead onto iron corrosion scales from old galvanized iron plumbing (McFadden et al., 2011). The release of high levels of particulate lead for four years after full lead service line replacement was related to manganese and iron accumulation on the pipe walls of premise plumbing, which provided a sink for lead. Manganese accumulation on lead pipes can obstruct the formation of the more stable Pb(IV) corrosion scale, increasing the risk of lead release through more readily soluble scales (Schock et al., 2014).

Manganese and iron coatings often occur on lead and other types of pipe and can prolong the time it takes to build passivating films with corrosion inhibitors. They also tend to increase the risk of sporadic spikes of lead from particulate release. Manganese buildup throughout water distribution mains and storage is likely far more common than has been reported (Schock and Lytle, 2011).

Elevated lead levels were seen after both full and partial lead service line replacement and were associated with iron released from an unlined iron distribution main supplying the water (Health Canada, 2019c). Galvanized iron plumbing or iron deposits within premise plumbing can accumulate lead via adsorption, releasing it even after the primary source of lead has been removed (McFadden et al., 2011; Schock et al., 2014).

Lead-based solders

A study on the leaching of copper, iron, lead and zinc from copper plumbing systems with lead-based solders was conducted in the Greater Vancouver Regional District (Singh and Mavinic, 1991). The study showed that for generally corrosive water (pH 5.5 to 6.3; alkalinity 0.6 to 3.7 mg/L as CaCO3), the first litre of tap water taken after an 8-h period of stagnation exceeded the Canadian drinking water guidelines for lead and copper in 43% (lead) and 62% (copper) of the samples from high-rise buildings and in 47% (lead) and 73% (copper) of the samples from single-family homes. Even after prolonged flushing of the tap water in the high-rise buildings, there were still exceedances in 6% of the cases for lead and in 9% of the cases for copper. In all cases in the single‑family homes, flushing the cold water for 5 min reduced levels of lead and copper below the guideline levels.

Subramanian et al. (1991) examined the leaching of antimony, cadmium, copper, lead, silver, tin and zinc from new copper piping with non-lead-based soldered joints exposed to tap water. Copper levels were found to be above 1 mg/L in some cases. The authors concluded that the non-lead solders used in copper pipes do not leach antimony, cadmium, lead, silver, tin or zinc into drinking water (all were below the detection limits), even in samples that were held in the pipes for 90 days.

Faucets and brasses

Samuels and Méranger (1984) conducted a study on the leaching of trace metals from kitchen faucets in contact with the City of Ottawa’s water. Water was collected after a 24-h period of stagnation in new faucets not washed prior to testing. In general, the concentrations of cadmium, chromium, copper and zinc in the leachates did not exceed the Canadian drinking water guideline values applicable at that time. However, levels well above the guideline value for lead were leached from the faucets containing lead-soldered copper joints.

Similar work by Schock and Neff (1988) revealed that new chrome-plated brass faucets can be a significant source of copper, lead and zinc contamination of drinking water, particularly upon stagnation of the water. The authors also concluded that faucets, as well as other brass fittings in household systems, provide a continuous source of lead, even when lead-free solders and fluxes are used in copper plumbing systems. Maas et al. (1994) conducted a statistical analysis of water samples collected after an overnight stagnation period from over 12,000 water fountains, bubblers, chillers, faucets and ice makers in non-residential buildings. The analysis indicated that over 17% of the samples had lead concentrations above 15 µg/L. Notably, the drinking water collected from bubblers, chillers and faucets had lead concentrations above 15 µg/L in over 25% of the samples. Other studies found that between 5% and 21% of drinking water fountains or faucets had lead concentrations above 20 µg/L following a period of stagnation greater than 8 h (Gnaedinger, 1993; Bryant, 2004; Sathyanarayana et al., 2006; Boyd et al., 2008a).

Studies conducted in Copenhagen, Denmark, found that nickel was leaching from chromium-nickel–plated brass after periods of water stagnation (Anderson, 1983). Nickel concentrations measured in the first 250 mL ranged from 8 to 115 µg/L and dropped to 9 to 19 µg/L after 5 min of flushing. Similarly, large concentrations of nickel (up to 8,700 µg/L in one case) were released from newly installed chromium-nickel–plated brass, nickel-plated parts and nickel-containing gunmetal following 12-h periods of water stagnation (Nielsen and Andersen, 2001). Kimbrough (2001) found that brass was a potential source of nickel at the tap. Nickel was found in the first litre after a period of water stagnation (mean and maximum concentrations ranged from 4.5 to 9.2 µg/L and from 48 to 102 µg/L, respectively) with the results also indicating that almost all of the nickel was contained in the first 100 mL.

Iron pipes

When iron pipes are exposed to aerated or chlorinated water, metallic iron is oxidized and iron corrosion products (e.g., tubercles) form. Although the dominant oxidant in most water supplies is dissolved oxygen, chlorine also increases the corrosion rate, even though its concentration is typically lower than that of dissolved oxygen. The iron corrosion rate depends on the dissolved oxygen concentration and on the rate at which oxygen is transported to the metal surface. The ferrous ions produced by the oxidation reactions may either dissolve in the water or deposit on the corroded iron surface as a scale. The growth of the scale decreases the iron corrosion rate, but the dissolution of the corrosion by-products contributes to iron release into the water. The extent to which this process occurs depends on the water quality and hydraulic conditions (Benjamin et al., 1996; McNeill and Edwards, 2001). Water flow in the pipe, temperature, and the thickness and porosity of the accumulated scale on the metal surface affect oxygen transport. Water quality parameters such as alkalinity, pH and the concentration of inorganic ions have a minimal effect on the corrosion rate. However, they may influence corrosion scale formation and, subsequently, the corrosion rate at a later stage (Benjamin et al., 1996).

Many studies have shown that iron scales can act as a sink for, and persistent source of, lead in drinking water (Friedman et al., 2010; Schock et al., 2014). In particular, iron concentrations at the tap have been correlated with lead (Health Canada, 2019a; Trueman and Gagnon, 2016; Deshommes et al., 2017; Pieper et al., 2017; Trueman et al., 2017; Pieper et al., 2018).

Iron hydroxides may also adsorb and concentrate other contaminants, including manganese, arsenic and aluminum. The installation of chlorination at a groundwater system in the United States caused exceptionally high arsenic concentrations at the tap. Chlorination induced the formation of ferric hydroxide solids, which readily sorbed and concentrated arsenic present in the groundwater at concentrations below 10 µg/L. The addition of chlorine also resulted in the release of copper oxides, which, in turn, sorbed and concentrated arsenic. Arsenic concentrations as high as 5 mg/L were found in the water samples collected (Reiber and Dostal, 2000). Furthermore, the scale may later be released if the quality of the water distributed is modified (Reiber and Dostal, 2000; Lytle et al., 2004) or if there are changes to the hydraulic conditions (Health Canada, 2019c). Triantafyllidou et al. (2019) reported average arsenic levels at the tap ranging from 0.5 to 51 μg/L in systems where elevated levels were attributed in part to desorption from, dissolution of, or resuspension along with iron oxides.

Manganese, cadmium, chromium, barium, radium, thorium and uranium have been detected along with iron in hydrant flush solids. Water supply changes may also yield red water events and a concomitant increase in concentrations of inorganic contaminants. Manganese has been shown to accumulate in loose deposits of distribution pipe materials, including iron pipes, where it is associated with tubercle deposits (Health Canada, 2019d). Manganese has been found to accumulate to a lesser extent in iron pipe surface scale than in PVC pipes (Imran et al., 2005; Friedman et al., 2016). Aluminum can accumulate on iron pipes and be released, along with other contaminants, when water quality changes. Physical/hydraulic disturbances may also cause aluminum deposits to detach (Health Canada, 2020b).

Iron corrosion products in the distribution system support microbial growth; both corrosion scale and suspended corrosion products provide a favourable habitat for microbes. Corrosion releases ferrous iron, which is oxidized by chlorine to ferric iron, depleting disinfectant residuals and enabling microbial growth. Corroded iron provides sites for bacterial growth, thereby protecting biofilm bacteria from inactivation by free chlorine. Even low levels of iron corrosion (e.g., 1 mm/yr) support greater bacterial biomass than uncorroded pipe. Higher corrosion rates also diminish the disinfection capacity of monochloramine (Health Canada, 2019c).

Cement pipes

High concentrations of aluminum were found in the drinking water of Willemstad, Curaçao, Netherlands Antilles, following the installation of 2.2 km of new factory-lined cement mortar pipes with a high aluminum content (18.7% as aluminum oxide) (Berend and Trouwborst, 1999). Aluminum concentrations in the distributed water increased from 5 to 690 µg/L within 2 months of installation and were still above 100 µg/L after 2 years. These atypical elevated aluminum concentrations were attributed to the low hardness (15 to 20 mg/L as CaCO3), low alkalinity (18 to 32 mg/L as CaCO3), high pH (8.5 to 9.5) and long contact time (2.3 days) of the distributed water and the use of polyphosphate as a corrosion inhibitor.

Aluminum was also found to leach from in situ Portland cement–lined pipes in a series of field trials carried out throughout the United Kingdom in areas with different water qualities (Conroy, 1991). Aluminum concentrations above the European Community (EC) Directive of 0.2 mg/L were found following installation in very low alkalinity water (around 10 mg/L as CaCO3) with elevated pH (> 9.5) and contact times of 6 h. Aluminum concentrations dropped below 0.2 mg/L after 2 months of pipe service. Furthermore, in water with slightly higher alkalinity (~ 50 mg/L as CaCO3), aluminum was not found to exceed the EC Directive. The Canadian guideline for aluminum in drinking water is 2.9 mg/L and is based on neurological effects (Health Canada, 2020b).

Asbestos fibres have been found to leach from asbestos-cement pipes (Leroy et al., 1996). Although a Guideline Technical Document is available for asbestos in drinking water, it states that “there is no consistent, convincing evidence that ingested asbestos is hazardous. There is, therefore, no need to establish a maximum acceptable concentration for asbestos in drinking water” (Health Canada, 1989).

Factors influencing levels of contaminants at the tap

Many factors contribute to the corrosion and leaching of contaminants from drinking water distribution systems. However, the principal factors are the type of materials used, the age of the plumbing system, the stagnation time of the water and the quality of the water in the system. The concentrations of contaminants released from any corrodible or soluble material present in the distribution system will be influenced by some or all of these factors, although the manner in which they affect each contaminant will vary from one contaminant to another.

Microbiologically influenced corrosion results from a reaction between the pipe material and organisms, their metabolic by-products or both (Schock and Lytle, 2011). Microbial activity can affect pH, metal solubility and the oxidation-reduction potential of the surrounding microenvironment. More detailed information on this type of corrosion can be found in other reference documents (AWWA, 2017a; Health Canada, 2019b).

Factors influencing the corrosion and leaching of lead, copper, iron and cement are discussed here, since these materials are most likely to produce contaminants that exceed Canadian drinking water guidelines, pose health risks to the public or be a source of consumer complaints.

Age of the plumbing system

Lead concentrations at the tap originating from lead solders and brass fittings decline with age (Sharrett et al., 1982; Birden et al., 1985; Boffardi, 1988, 1990; Schock and Neff, 1988; Neuman, 1995). Researchers have concluded that the highest lead concentrations appear in the first year following installation and level off after a number of years of service (Sharrett et al., 1982; Boffardi, 1988). However, unlike lead-soldered joints and brass fittings, lead piping can continue to provide a consistently strong source of lead after many years of service (Britton and Richards, 1981; Schock et al., 1996). In a field study in which lead was sampled in tap water, Maas et al. (1991) showed that homes of all ages were at a substantial risk of lead contamination.

The age of the plumbing materials, fittings and devices is particularly important for copper and brasses (Schock and Lytle, 2011). Copper release into the drinking water largely depends on the type of scale formed within the plumbing system. At a given age, a particular corrosion by-product typically governs the release of copper into the drinking water. A decrease in solubility in the following order is observed when the following scales predominate: cupric hydroxide [Cu(OH)2] > brochantite [Cu4(SO4)(OH)6] >> cupric phosphate [Cu3(PO4)2] > tenorite [CuO] and malachite [Cu2(CO3)(OH)2] (Schock et al., 1995). Copper concentrations continue to decrease with the increasing age of plumbing materials, even after 10 or 20 years of service, when tenorite or malachite scales tend to predominate (Sharrett et al., 1982; Neuman, 1995; Edwards and McNeill, 2002). In certain cases, sulphate and phosphate can at first decrease copper concentrations by forming brochantite and cupric phosphate, but in the long run they may prevent the formation of the more stable tenorite and malachite scales (Edwards et al., 2002).

The age of an iron pipe affects its corrosion. In general, both iron concentration and the rate of corrosion increase with time when a pipe is first exposed to water, but both are then gradually reduced as the scale builds up (McNeill and Edwards, 2001). However, most red water problems today are caused by old, heavily tuberculated unlined cast iron pipes that are subject to stagnant water conditions prevalent in dead ends. Sarin et al. (2003) removed unlined cast iron pipes that were 90 to 100 years old from distribution systems and found that the internal surface had up to 76% of the cross-section of the pipes blocked by scales. Such pipes are easily subject to scouring and provide the high surface areas that favour the release of iron.

A newly installed cement-based material will typically leach lime, which, in turn, will increase water pH, alkalinity and concentrations of calcium (Holtschulte and Schock, 1985; Douglas and Merrill, 1991; Conroy et al., 1994; Douglas et al., 1996; Leroy et al., 1996).

Experiments by Douglas and Merrill (1991) showed that after 1, 6 and 12 years in low‑flow, low‑alkalinity water, lime continued to leach from cement mortar linings upon prolonged exposure, but at a significantly decreased rate when comparing 6- and 12-year-old pipes with the 1-year-old pipe. The lime leaching rate naturally slows down as surface calcium becomes depleted and the deposits formed over time may protect the mortar against further leaching.

Stagnation time, water age and flow

Lead

Concentrations of lead in drinking water from various sources of lead material, including lead service lines, lead solder and brass fittings that contain lead, can increase significantly following a period of water stagnation of a few hours in the distribution system. Many factors, such as the water quality and the age, composition, diameter and length of the lead pipe, impact the shape of stagnation curves and the time to reach an equilibrium state (Lytle and Schock, 2000).

In reviewing lead stagnation curves drawn by several authors, Schock et al. (1996) concluded that lead levels increase exponentially upon stagnation, but ultimately approach a fairly constant equilibrium value after overnight stagnation. Lytle and Schock (2000) showed that lead levels increased rapidly with the stagnation time of the water, with the most critical period being during the first 20 to 24 h for both lead pipe and brass fittings. Lead levels increased most rapidly over the first 10 h, reaching approximately 50% to 70% of the maximum observed value. In their experiment, lead levels continued to increase slightly even up to 90 h of stagnation.
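The saturation behaviour described above can be sketched with a simple first-order approach-to-equilibrium model. The equilibrium concentration and rate constant below are illustrative assumptions chosen only to reproduce the reported shape (roughly 50% to 70% of the maximum by 10 h), not values fitted to the cited studies:

```python
import math

def lead_concentration(t_hours, c_eq_ug_per_l=250.0, k_per_hour=0.12):
    """First-order approach to an equilibrium lead concentration:
    C(t) = C_eq * (1 - exp(-k * t)).

    Both parameters are illustrative assumptions, not values
    taken from Lytle and Schock (2000).
    """
    return c_eq_ug_per_l * (1.0 - math.exp(-k_per_hour * t_hours))

# Fraction of the equilibrium value reached after 10 h of stagnation
fraction_10h = lead_concentration(10) / lead_concentration(1e9)
print(round(fraction_10h, 2))  # ~0.7, within the 50%-70% range noted above
```

With these assumed parameters, most of the increase occurs early in the stagnation period and the curve flattens overnight, consistent with the behaviour reported in the studies cited above.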

Kuch and Wagner (1983) plotted lead concentrations versus stagnation time for two different water qualities and lead pipe diameters. The lead concentrations in 1/2-inch (1.3-cm) pipe, where the pH of the water was 6.8 and the alkalinity was 10 mg/L as calcium carbonate (CaCO3), were significantly higher than those in 3/8-inch (0.95-cm) pipe, where the pH of the water was 7.2 and the alkalinity was 213 mg/L as CaCO3. Additional data from Kuch and Wagner (1983) indicate that lead levels approach maximum or equilibrium concentrations after more than 300 min (5 h) for 1/2-inch (1.3-cm) pipe and after more than 400 min (6.7 h) for 3/8-inch (0.95-cm) pipe. The diameter of pipes or lead service lines in Canada ranges from 1/2 inch (1.3 cm) to 3/4 inch (1.9 cm) but is typically 5/8 inch (1.6 cm) to 3/4 inch (1.9 cm). In addition, lead concentrations have been demonstrated to be highly sensitive to stagnation time in the first 3 h of standing time for 1/2-inch (1.3-cm) to 3/4-inch (1.9-cm) pipe: depending on the water quality characteristics and pipe diameter, differences of 10% to 30% could be observed with differences in standing time as little as 30 to 60 min (Kuch and Wagner, 1983; Schock, 1990a). Long lead or copper pipe of small diameter produces the greatest concentrations of lead or copper, respectively, upon stagnation (Kuch and Wagner, 1983; Ferguson et al., 1996).
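The diameter effect noted above has a simple geometric component: for a cylindrical pipe, the wetted surface area per unit volume of water is 4/d, so narrower pipe exposes each litre of stagnant water to proportionally more metal surface. A minimal sketch using the pipe sizes from the paragraph above:

```python
def surface_to_volume_ratio(diameter_cm):
    """Wetted surface area per unit water volume for a cylindrical pipe.

    For a cylinder of diameter d and length L:
    (pi * d * L) / (pi * d**2 / 4 * L) = 4 / d, in cm^-1.
    """
    return 4.0 / diameter_cm

# Typical Canadian service line diameters from the text above
for inches, cm in [("1/2", 1.3), ("5/8", 1.6), ("3/4", 1.9)]:
    print(f"{inches}-inch ({cm} cm): {surface_to_volume_ratio(cm):.2f} cm^-1")
```

The 1/2-inch pipe presents roughly 45% more metal surface per unit of water than the 3/4-inch pipe, which is one reason small-diameter lead pipe yields the greatest concentrations upon stagnation.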

Lead is also released during no-flow periods from soldered joints and brass fittings (Birden et al., 1985; Neff et al., 1987; Schock and Neff, 1988). Wong and Berrang (1976) concluded that lead concentrations in water sampled in a 1-year-old household plumbing system made of copper with tin–lead solders could exceed 0.05 mg/L after 4 to 20 h of stagnation and that lead concentrations in water in contact with lead water pipes could exceed this value in 10 to 100 min. In a study examining the impact of stagnation time on lead release from brass coupons, Schock et al. (1995) observed that for brass containing 6% lead, lead concentrations increased slowly for the first hour but ultimately reached a maximum concentration of 0.08 mg/L following 15 h of stagnation. Following a 6-h stagnation period, the lead concentration was greater than 0.04 mg/L. The amount of lead released from brass fittings was found to vary with both alloy composition and stagnation time.

Stagnation time, flow regime and water chemistry have been shown to influence the release of particulate lead from lead service line scales. Increases in particulate lead have been observed under flowing, low-flow and stagnant conditions; in the presence of orthophosphate (including with increasing stagnation time); and at higher pH under flowing water conditions. Of particular interest is that studies have consistently shown that moderate to high flow rates typical of turbulent flow or flow disturbances can increase the mobilization of lead and result in significant contributions of particulate lead to the total lead concentration (Health Canada, 2019a).

Copper

Copper behaviour upon water stagnation is more complex than that of lead. Copper levels will initially increase upon stagnation, but can then decrease or continue to increase, depending on oxidant levels. Lytle and Schock (2000) showed that copper levels increased rapidly with the stagnation time of the water, but only until dissolved oxygen fell below 1 mg/L, after which they dropped significantly. This is further demonstrated in a study on the influence of stagnation and temperature on water quality in distribution systems (Zlatanovic et al., 2017). The authors found that concentrations of copper increased with stagnation time during winter and summer. Copper levels sampled at the tap peaked at 1 370 µg/L after 48 h of stagnation during winter and 1 140 µg/L after 24 h of stagnation during summer, and decreased after the peak in both seasons (pH and alkalinity not provided). Sorg et al. (1999) also observed that in softened water, copper concentrations increased to maximum levels of 4.4 and 6.8 mg/L after about 20 to 25 h of standing time, then dropped to 0.5 mg/L after 72 to 92 h. Peak concentrations corresponded to the time when the dissolved oxygen was reduced to 1 mg/L or less. In non-softened water, the maximum was reached in less than 8 h, because the dissolved oxygen decreased more rapidly in the pipe loop exposed to non-softened water. High flow velocities can sometimes be associated with erosion corrosion, or the mechanical removal of the protective scale, in copper pipes. Water flowing at high velocity, combined with corrosive water quality, can rapidly deteriorate pipe materials (Health Canada, 2019b).

Iron

Most red water issues are caused by old, heavily tuberculated, unlined cast iron pipes exposed to the stagnant conditions prevalent in dead ends, and cyclic periods of flow and stagnation have been reported as the primary cause of red water problems resulting from iron corrosion in distribution systems (Benjamin et al., 1996; Health Canada, 2019c). Iron concentrations have also been shown to increase with the longer water stagnation times prevalent in dead ends (Beckett et al., 1998; Sarin et al., 2000). Iron corrosion can also occur where dissolved oxygen is limited (stagnant water). During stagnation, the dissolved oxygen concentration is depleted near the metal surface and ferric oxides in the corrosion scale may serve as the alternative oxidant, generating ferrous iron. The ferrous iron may precipitate or diffuse into the bulk water, where it is oxidized to insoluble ferric iron by dissolved oxygen, contributing to red water (Benjamin et al., 1996; Sarin, 2004).

Cement

Long contact time between distributed water and cement materials has been correlated with increased water quality deterioration (Holtschulte and Schock, 1985; Conroy, 1991; Douglas and Merrill, 1991; Conroy et al., 1994; Douglas et al., 1996; Berend and Trouwborst, 1999). In a survey of 33 U.S. utilities with newly installed in situ lined cement mortar pipes carrying low‑alkalinity water, Douglas and Merrill (1991) concluded that degraded water quality was most noticeable in dead ends or where the flow was low or intermittent. Similarly, Conroy (1991) and Conroy et al. (1994) found that the longer the supply water was in contact with the mortar lining, the greater was the buildup of leached hydroxides, and hence the higher was the pH.

pH

The effect of pH on the solubility of the corrosion by-products formed during the corrosion process is often the key to understanding the concentration of metals at the tap. In general, the solubility of the corrosion by-products formed in the distribution system decreases as the pH of the distributed water increases. The release of metals from materials used in distribution and premise piping systems will be affected not only by pH, but also by the alkalinity and dissolved inorganic carbon (DIC) levels of the water, as these influence the formation of a passivating scale on the surface of the material. The presence of these passivating scales on internal pipe surfaces helps prevent the release of lead or copper to the water (Schock and Lytle, 2011).

Lead

The passivation of lead usually results from the formation of a surface film composed of Pb(II) hydroxycarbonate or orthophosphate solids. The solubility of the main divalent lead corrosion by-products (cerussite [PbCO3], hydrocerussite [Pb3(CO3)2(OH)2] and lead hydroxide [Pb(OH)2]) largely determines the lead levels at the tap (Schock, 1980, 1990b; Sheiham and Jackson, 1981; De Mora and Harrison, 1984; Boffardi, 1988, 1990; U.S. EPA, 1992; Leroy, 1993; Peters et al., 1999).

From thermodynamic considerations, the solubility of lead corrosion by-products in distribution systems decreases with increasing pH (Britton and Richards, 1981; Schock and Gardels, 1983; De Mora and Harrison, 1984; Boffardi, 1988; Schock, 1989; U.S. EPA, 1992; Singley, 1994; Schock et al., 1996). Solubility models based on Pb(II) chemistry show that the lowest lead levels occur when the pH is around 9.8. However, these pH relationships may not be valid for insoluble tetravalent lead dioxide (PbO2) solids, which have been found in lead pipe deposits from several different water systems with high oxidation-reduction potential (highly oxidizing conditions) (Schock et al., 1996, 2001; Schock and Lytle, 2011). Depending on the pH and alkalinity of the water, pipe scales may include hydrocerussite [Pb3(CO3)2(OH)2] (low pH, low alkalinity), cerussite [PbCO3] and massicot [PbO] (higher alkalinity) (McNeill and Edwards, 2004).

Lead dioxide has been found in waters of low pH and, frequently, in waters with high alkalinity (Schock et al., 2001, 2005b). Based on tabulated thermodynamic data, the pH relationship of PbO2 may be opposite to that of divalent lead solids (e.g., cerussite, hydrocerussite) (Schock et al., 2001; Schock and Giani, 2004). Lytle and Schock (2005) demonstrated that PbO2 formed readily within weeks to months at pH 6 to 6.5 in water with persistent free chlorine residuals.

The release of lead from lead solder is primarily controlled by galvanic corrosion, and increasing pH has been associated with decreased corrosion of lead solder (Oliphant, 1983a; Schock and Lytle, 2011).

Utility experience has also shown that the lowest levels of lead at the tap are associated with pH levels above 8 (Karalekas et al., 1983; Lee et al., 1989; Dodrill and Edwards, 1995; Douglas et al., 2004). Based on bench- and pilot-scale experimental results and analysis of several criteria, the City of Ottawa selected a pH of 9.2 and a minimum alkalinity target of 35 mg/L as CaCO3, using sodium hydroxide and carbon dioxide, to control corrosion. During the initial switch to sodium hydroxide, the pH was maintained at 8.5. However, lead testing found an area of the city with high levels of lead at the tap (10 to 15 µg/L for flowing samples). Nitrification in the distribution system caused the pH to decrease from 8.5 to a range of 7.8 to 8.2 and resulted in lead release from lead service lines. Increasing the pH to 9.2 resulted in an almost immediate reduction in lead concentrations, to 6 to 8 µg/L for flowing samples. Ongoing monitoring demonstrated that lead levels at the tap, following the increase in pH, were consistently below the regulated level of 10 µg/L (1.3 to 6.8 µg/L) (Douglas et al., 2007).

Examination of utility data provided by 365 utilities revealed that the average 90th-percentile lead levels at the tap were dependent on both pH and alkalinity (Dodrill and Edwards, 1995). In the lowest pH category (pH < 7.4) and lowest alkalinity category (alkalinity < 30 mg/L as CaCO3), utilities had an 80% likelihood of exceeding the U.S. EPA Lead and Copper Rule action level for lead of 0.015 mg/L. In this low-alkalinity category, only a pH greater than 8.4 seemed to reduce lead levels at the tap. However, when an alkalinity greater than 30 mg/L as CaCO3 was combined with a pH greater than 7.4, the water produced could, in certain cases, meet the action level for lead.
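The compliance statistic referenced here is a 90th-percentile value computed over a utility's tap samples and compared with the action level. A minimal sketch of that calculation follows; the nearest-rank method and the sample values are simplifying assumptions for illustration (regulatory rounding and sample-count rules are omitted), not the exact Lead and Copper Rule procedure:

```python
import math

ACTION_LEVEL_MG_PER_L = 0.015  # U.S. EPA Lead and Copper Rule action level for lead

def ninetieth_percentile(samples):
    """Nearest-rank 90th percentile of tap-sample results.

    A simplified sketch of the compliance statistic; the rule's
    exact interpolation and rounding conventions are not reproduced.
    """
    ordered = sorted(samples)
    rank = math.ceil(0.9 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Invented example results in mg/L (not real utility data)
results = [0.002, 0.004, 0.005, 0.007, 0.008, 0.009, 0.011, 0.012, 0.016, 0.021]
p90 = ninetieth_percentile(results)
print(p90, p90 > ACTION_LEVEL_MG_PER_L)  # 0.016 True -> exceedance
```

Note that a single high result does not trigger an exceedance on its own; the statistic depends on the distribution across all sampled sites, which is why the utility surveys above report "average 90th-percentile" levels.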

A survey of 94 water utilities sampling a total of 1,484 sites, with both non-lead and lead service lines, after an overnight stagnation of at least 6 h was conducted to evaluate the factors that influence lead levels at the consumer’s tap (Lee et al., 1989). The authors demonstrated that maintaining a pH of at least 8.0 effectively controlled lead levels (< 10 µg/L) in the first litre collected at the tap.

A 5-year study to reduce lead concentrations in the drinking water distribution system of the Boston, Massachusetts metropolitan area was conducted (Karalekas et al., 1983). Fourteen households were examined for lead concentrations at the tap, in their lead service lines and in their adjoining distribution systems. Average concentrations were reported for combined samples taken (1) after overnight stagnation at the tap, (2) after the water turned cold and (3) after the system was flushed for an additional 3 min. Even though alkalinity remained very low (on average 12 mg/L as CaCO3), raising the pH from 6.7 to 8.5 reduced average lead concentrations from 0.128 to 0.035 mg/L.

Copper

Although the hydrogen ion does not play a direct reduction role on copper surfaces, pH can influence copper corrosion by altering the equilibrium potential of the oxygen reduction half-reaction and by changing the speciation of copper in solution (Reiber, 1989). The release of copper is highly pH-dependent. When copper corrodes, it is oxidized to Cu(I) (cuprous) and Cu(II) (cupric) species that may form protective copper carbonate-based (passivating) scales on the surface of copper plumbing materials, depending on the pH and the levels of DIC and oxidizing agents in the water (Atlas et al., 1982; Pisigan and Singley, 1987; Schock et al., 1995; Ferguson et al., 1996). Copper corrosion increases rapidly as the pH drops below 6. In addition, uniform corrosion rates can be high at low pH values (below about pH 7), causing metal thinning. At higher pH values (above about pH 8), copper corrosion problems are almost always associated with non-uniform or pitting corrosion processes (Edwards et al., 1994a; Ferguson et al., 1996). Edwards et al. (1994b) found that for new copper surfaces exposed to simple solutions that contained bicarbonate, chloride, nitrate, perchlorate or sulphate, increasing the pH from 5.5 to 7.0 roughly halved corrosion rates, but further increases in pH yielded only subtle changes.

The prediction of copper levels in drinking water relies on the solubility and physical properties of the cupric oxide, hydroxide and basic carbonate solids that comprise most scales in copper water systems (Schock et al., 1995). In the cupric hydroxide model of Schock et al. (1995), a decrease in copper solubility with higher pH is evident. Above a pH of approximately 9.5, an upturn in solubility is predicted, caused by carbonate and hydroxide complexes increasing the solubility of cupric hydroxide. Examination of experience from 361 utilities reporting copper levels revealed that the average 90th-percentile copper levels were highest in waters with a pH below 7.4 and that no utilities with a pH above 7.8 exceeded the U.S. EPA’s action level for copper of 1.3 mg/L (Dodrill and Edwards, 1995). However, problems associated with copper solubility were also found to persist up to about pH 7.9 in cold, high‑alkalinity and high-sulphate groundwater (Edwards et al., 1994a).

In general, copper solubility increases (i.e., copper levels will increase) with increasing DIC and decreasing pH (Ferguson et al., 1996; Schock et al., 1995). Copper levels may be controlled at lower pH levels. However, groundwaters with high alkalinity and high DIC are prone to copper corrosion problems, and adjusting pH in such waters may be impractical because of the potential for CaCO3 precipitation.

Iron

Release of iron from iron-based drinking water materials, such as cast iron, steel and ductile iron, has been modelled based on the formation of protective ferrous solid scales (FeCO3). In the pH range of 7 to 9, both the corrosion rate and the degree of tuberculation of iron distribution systems generally increase with increasing pH (Larson and Skold, 1958; Stumm, 1960; Hatch, 1969; Pisigan and Singley, 1987). However, the solubility of iron-based corrosion by-products, and thus iron levels, decreases with increasing pH (Karalekas et al., 1983; Kashinkunti et al., 1999; Broo et al., 2001; Sarin et al., 2003). In a pipe loop system constructed from 90- to 100-year-old unlined cast iron pipes taken from a Boston distribution system, iron concentrations were found to steadily decrease when the pH was raised from 7.6 to 9.5 (Sarin et al., 2003). Similarly, following a pH increase from 6.7 to 8.5, a consistent downward trend in distribution system iron concentrations was found over 2 years (Karalekas et al., 1983). The rate of ferrous iron oxidation increases with pH and, generally, both the solubility and dissolution rates of iron oxides and other iron compounds decrease with increasing pH (Schwertmann, 1991; Silva et al., 2002; Sarin et al., 2003; Duckworth and Martin, 2004). In one study, finished water pH was lowered from 10.3 to 9.7, resulting in reduced soluble lead release; however, iron concentrations increased at pH 9.7 and were correlated with increased particulate lead release (Masters and Edwards, 2015).

Waters with high buffer intensity will mitigate changes in pH, and a relatively stable pH will encourage the formation of the more protective ferrous-based solids and result in lower iron release. Maintaining a stable pH can be important for preventing desorption of inorganic contaminants from iron oxides.

Cement

Water with low pH, low alkalinity and low calcium is particularly aggressive towards cement materials. The water quality problems that may occur are linked to the chemistry of the cement. Lime from the cement releases calcium ions and hydroxyl ions into the drinking water. This, in turn, may result in a substantial pH increase, depending on the water's buffering capacity (Leroy et al., 1996). Pilot-scale tests were conducted to simulate low-flow conditions in newly lined cement mortar pipes carrying low-alkalinity water (Douglas et al., 1996). In the water with an initial pH of 7.2, alkalinity of 14 mg/L as CaCO3 and calcium of 13 mg/L as CaCO3, pH values as high as 12.5 were measured. Similarly, in the water with an initial pH of 7.8, alkalinity of 71 mg/L as CaCO3 and calcium of 39 mg/L as CaCO3, pH values as high as 12 were measured. The most significant pH increases occurred during the first week of the experiment, and pH decreased slowly as the lining aged. In a series of field and test rig trials to determine the impact of in situ cement mortar lining on water quality, Conroy et al. (1994) observed that in low-flow, low-alkalinity water (around 10 mg/L as CaCO3), pH increases exceeding 9.5 could occur for over 2 years following the lining. Asbestos-cement pipes are particularly susceptible to low-pH waters (less than 7.5 to 8.0) with low calcium, alkalinity and silicate levels (Schock and Lytle, 2011).

Field trials carried out throughout the United Kingdom in areas with different water qualities found that high pH in cement pipes can render lead soluble. Lead levels increased significantly with increasing pH when pH was above 10.5. The concentration of lead ranged from just less than 100 µg/L at pH 11 to greater than 1 000 µg/L above pH 12 (Conroy, 1991). This brings into question the accuracy of the solubility models for high pH ranges and the point at which pH adjustment may become detrimental. Elevated pH levels resulting from cement leaching may also contribute to aluminum leaching from cement materials, since high pH may increase aluminum solubility (Berend and Trouwborst, 1999). Aluminum can interfere with orthophosphate passivation used for corrosion control by preventing the formation of protective scales (AWWA, 2017a; Wasserstrom et al., 2017).

Zinc

Zinc coatings on galvanized steel corrode in a manner similar to iron, but the corrosion reactions are typically slower. Corrosion of galvanized pipes can release trace metals, such as cadmium and lead, into drinking water distribution systems. When the pipe is new, corrosion depends strongly on pH. Pisigan and Singley (1985) found that below pH 7.5, zinc levels in drinking water increased (at a DIC concentration of 50 mg C/L). At pH levels of 7.5 to 10.4, hydrozincite, the most stable corrosion by-product, predominates. Waters at pH > 10.4 can be aggressive to zinc and will often remove galvanized coatings (zinc hydroxides predominate).

Alkalinity

Alkalinity serves to control the buffer intensity of most water systems; therefore, a minimum amount of alkalinity is necessary to provide a stable pH throughout the distribution system for corrosion control of lead, copper and iron and for the stability of cement-based linings and pipes.
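Alkalinity figures in this document are expressed "as CaCO3"; they convert to equivalents using the equivalent weight of calcium carbonate, commonly approximated as 50 mg per milliequivalent (100.09 g/mol divided by a charge of 2). A small helper, assuming that common approximation:

```python
CACO3_EQUIV_WEIGHT_MG_PER_MEQ = 50.0  # common approximation: 100.09 g/mol / 2

def alkalinity_meq_per_l(alk_mg_per_l_as_caco3):
    """Convert alkalinity from mg/L as CaCO3 to meq/L."""
    return alk_mg_per_l_as_caco3 / CACO3_EQUIV_WEIGHT_MG_PER_MEQ

# The 30 mg/L as CaCO3 threshold discussed below corresponds to 0.6 meq/L
print(alkalinity_meq_per_l(30.0))  # 0.6
```

This conversion is useful when comparing the alkalinity ranges cited in this section against literature that reports alkalinity in meq/L rather than as CaCO3.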

Lead

According to thermodynamic models, minimum lead solubility occurs at relatively high pH (9.8) and low alkalinity (30 to 50 mg/L as CaCO3) (Schock, 1980, 1989; Schock and Gardels, 1983; U.S. EPA, 1992; Leroy, 1993; Schock et al., 1996). These models show that the degree to which alkalinity affects lead solubility depends on the form of lead carbonate present on the pipe surface. They apply to uniform scales of lead minerals but not to mixed-phase mineral deposits, and they are not good predictors of the scales formed in real-world drinking water lead service lines (Tully et al., 2019). When cerussite is stable, increasing alkalinity reduces lead solubility; when hydrocerussite is stable, increasing alkalinity increases lead solubility (Sheiham and Jackson, 1981; Boffardi, 1988, 1990). Cerussite is less stable at pH values where hydrocerussite is stable, but may still form; eventually, hydrocerussite converts to cerussite, which is found in many lead pipe deposits. Higher lead release was observed in pipes under pH/alkalinity conditions where cerussite was expected to be stable; when conditions are adjusted so that hydrocerussite is thermodynamically stable, lead release is lower than under any conditions where cerussite is stable (Schock, 1990a).

Laboratory experiments also revealed that, at pH 7 to 9.5, optimal alkalinity for lead control is between 30 and 45 mg/L as CaCO3 and that adjustments to increase alkalinity beyond this range yield little additional benefit (Schock, 1980; Sheiham and Jackson, 1981; Schock and Gardels, 1983; Edwards and McNeill, 2002) and can be detrimental in some cases (Sheiham and Jackson, 1981).

Schock et al. (1996) reported the existence of significant amounts of insoluble tetravalent lead dioxide in lead pipe deposits from several different water systems. However, the alkalinity relationship for lead dioxide solubility is not known, as no complexes or carbonate solids have been reported. The existence of significant amounts of insoluble lead dioxide in lead pipe deposits may explain the erratic lead release from lead service lines and the poor relationship between total lead and alkalinity (Lytle and Schock, 2005).

Alkalinity is not expected to influence the release of lead from lead solder, since this release is mostly dependent on the galvanic corrosion of the lead solder as opposed to the solubility of the corrosion by-products formed (Oliphant, 1983b). However, Dudi and Edwards (2004) predicted that alkalinity could play a role in the leaching of lead from galvanic connections between lead- and copper-bearing plumbing. A clear relationship between alkalinity and lead solubility based on utility experience remains to be established. Trends in field data from 47 U.S. municipalities indicated that the most promising water chemistry targets for lead control were a pH of 8 to 10 with an alkalinity of 30 to 150 mg/L as CaCO3 (Schock et al., 1996). A survey of 94 water companies and districts revealed no relationship between lead solubility and alkalinity (Lee et al., 1989). In a survey of 365 utilities under the U.S. EPA Lead and Copper Rule, lead release was significantly lower when alkalinity was between 30 and 74 mg/L as CaCO3 than when alkalinity was < 30 mg/L as CaCO3. Lower lead levels were also observed in utilities with alkalinities between 74 and 174 mg/L and greater than 174 mg/L when the pH was 8.4 or lower (Dodrill and Edwards, 1995).

Copper

Laboratory and utility experience demonstrated that copper corrosion releases are worse at higher alkalinity (Edwards et al., 1994b, 1996; Schock et al., 1995; Ferguson et al., 1996; Broo et al., 1998), likely due to the formation of soluble cupric bicarbonate and carbonate complexes (Schock et al., 1995; Edwards et al., 1996). Examination of copper levels obtained from 361 utilities also revealed that the effects of alkalinity were approximately linear and more significant at lower pH: a combination of low pH (< 7.8) and high alkalinity (> 74 mg/L as CaCO3) produced the worst-case 90th-percentile copper levels (Edwards et al., 1999).
However, low alkalinity (< 25 mg/L as CaCO3) also proved to be problematic, depending on pH (Schock et al., 1995). For high-alkalinity waters, the only practical solutions for reducing cuprosolvency are lime softening, the removal of bicarbonate or the addition of rather large amounts of orthophosphate (U.S. EPA, 2003).

Lower copper concentrations can be associated with higher alkalinity when the formation of the less soluble malachite and tenorite is favoured (Schock et al., 1995). A laboratory experiment conducted by Edwards et al. (2002) showed that for relatively new pipes, at pH 7.2, the maximum concentration of copper released was nearly a linear function of alkalinity. However, as the pipes aged, lower releases of copper were measured at an alkalinity of 300 mg/L as CaCO3, at which malachite had formed, than at alkalinities of 15 and 45 mg/L as CaCO3, at which the relatively soluble cupric hydroxide prevailed.

Iron

Lower iron corrosion rates (Stumm, 1960; Pisigan and Singley, 1987; Hedberg and Johansson, 1987; Kashinkunti et al., 1999) and iron concentrations (Horsley et al., 1998; Sarin et al., 2003) in distribution systems have been associated with higher alkalinities.

Experiments using a pipe loop system built from 90- to 100-year-old unlined cast iron pipes taken from a Boston distribution system showed that a decrease in alkalinity from 30–35 mg/L to 10–15 mg/L as CaCO3 at a constant pH resulted in an immediate 50% to 250% increase in iron release. Raising alkalinity from 30–35 mg/L to 58–60 mg/L as CaCO3 and then returning it to 30–35 mg/L also showed that higher alkalinity resulted in lower iron release, although the change was not as dramatic as in the lower alkalinity range (Sarin et al., 2003). An analysis of treated water quality parameters (pH, alkalinity, hardness, temperature, chloride and sulphate) and red water consumer complaints was conducted using data from 1989–1998. The majority of red water problems were found in unlined cast iron pipes that were 50 to 70 years old. During that period, the annual average pH of the distributed water ranged from 9.1 to 9.7, its alkalinity from 47 to 76 mg/L as CaCO3 and its total hardness from 118 to 158 mg/L as CaCO3. The authors concluded that the strongest relationship was between alkalinity and red water complaints and that maintaining finished water with an alkalinity greater than 60 mg/L as CaCO3 substantially reduced the number of consumer complaints (Horsley et al., 1998).

Cement

Alkalinity is a key parameter in the deterioration of water quality by cement materials. When poorly buffered water comes into contact with cement materials, the soluble alkaline components of the cement pass rapidly into the drinking water. Conroy et al. (1994) observed that alkalinity played a major role in the deterioration of the quality of the water from in situ mortar lining in dead-end mains with low-flow conditions. When the alkalinity was around 10 mg/L as CaCO3, pH levels remained above 9.5 for up to 2 years, and aluminum concentrations were above 0.2 mg/L for 1 to 2 months following the lining process. However, when alkalinity was around 35 mg/L as CaCO3, the water quality problem was restricted to an increase in pH level above 9.5 for 1 to 2 months following the lining process. When the alkalinity was greater than 55 mg/L as CaCO3, no water quality problems were observed.

The nature of the passivating film formed on galvanized steel pipes changes in response to various factors. Water with moderate levels of DIC and high buffer intensity appears to produce good passivating films (Crittenden et al., 2012).

Temperature and seasonal variation

No simple relationship exists between temperature and corrosion processes, because temperature influences several water quality parameters, such as dissolved oxygen solubility, solution viscosity, diffusion rates, activity coefficients, enthalpies of reactions, compound solubility, oxidation rates and biological activities (McNeill and Edwards, 2002).

These parameters, in turn, influence the corrosion rate, the properties of the scales formed and the leaching of materials into the distribution system. The corrosion reaction rates of lead and iron are expected to increase with temperature, and hot water is often observed to be more corrosive than cold water (Schock and Lytle, 2011). Elevated temperatures therefore have a more direct impact on lead solubility at the tap. However, the solubility of several corrosion by-products decreases with increasing temperature (Schock, 1990a; Edwards et al., 1996; McNeill and Edwards, 2001, 2002).

Seasonal variations in temperature between the summer and winter months were correlated with lead concentrations, with the warmer temperatures of the summer months increasing lead concentrations (Britton and Richards, 1981; Karalekas et al., 1983; Colling et al., 1987, 1992; Douglas et al., 2004; Ngueta et al., 2014). Douglas et al. (2004) reported a strong seasonal variation in lead concentration, with the highest lead levels seen during the months of May to November. In a duplicate intake study of lead exposure from drinking water, Jarvis et al. (2018) observed significantly higher lead levels in summer compared to winter for properties with and without lead service lines, with and without orthophosphate dosing. In almost every case, the mean water lead concentration for each participant was higher in summer compared to winter. Masters et al. (2016) also found that lead levels were 3 times greater during the summer compared to the winter in 50% of the homes sampled. Generally, in distribution systems and service lines, temperature fluctuations are limited over short time frames, so any effect will be confounded by other factors. Seasonal changes in temperature are often accompanied by significant changes in other parameters (e.g., NOM) (Masters et al., 2016).

Masters et al. (2016) found that copper levels were 2.5 to 15 times greater during the winter than during the summer in 5 of the 8 homes sampled. In a survey of the release of copper in high-rise buildings and single-family homes, Singh and Mavinic (1991) noted that copper concentrations in water run through cold water taps were typically one-third of copper concentrations in water run through hot water taps. A laboratory experiment that compared copper release at 4°C, 20°C, 24°C and 60°C in a soft, low-alkalinity water showed higher copper release at 60°C, but little difference in copper release between 4°C and 24°C (Boulay and Edwards, 2001). However, copper hydroxide solubility was shown to decrease with increasing temperature (Edwards et al., 1996; Hidmi and Edwards, 1999). In a survey of 365 utilities, no significant trend between temperature and lead or copper levels was found (Dodrill and Edwards, 1995).

Red water complaints as a function of temperature were analyzed by Horsley et al. (1998). Although no direct correlation was found between temperature and red water complaints, more red water complaints were reported during the warmer summer months. Corrosion rates, measured in annular reactors made of new cast iron pipes, were also strongly correlated with seasonal variations (Volk et al., 2000). The corrosion rates at the beginning of the study (March) were approximately 2.5 milli-inches per year (0.064 mm per year) at temperatures below 13°C. The corrosion rates started to increase in May and peaked during the months of July to September (5 to 7 milli-inches per year [0.13 to 0.18 mm per year] at temperatures above 20°C).

No information was found in the reviewed literature on the relationship between temperature and cement pipe degradation.

Calcium

Traditionally, it was thought that calcium stifled corrosion of metals by forming a film of calcium carbonate on the surface of the metal (also called passivation). However, many authors have refuted this idea (Stumm, 1960; Nielsen, 1983; Lee et al., 1989; Schock, 1989, 1990b; Leroy, 1993; Dodrill and Edwards, 1995; Lyons et al., 1995; Neuman, 1995; Reda and Alhajji, 1996; Rezania and Anderl, 1997; Sorg et al., 1999). No published study has demonstrated, through compound-specific analytical techniques, the formation of a protective calcium carbonate film on lead, copper or iron pipes (Schock, 1989). Leroy (1993) showed that in certain cases, calcium can slightly increase lead solubility. Furthermore, surveys of U.S. water companies and districts revealed no relationship between lead or copper levels and calcium levels (Lee et al., 1989; Dodrill and Edwards, 1995).

For iron, many authors have reported the importance of calcium in various roles, including calcium carbonate scales, mixed iron/calcium carbonate solids and the formation of a passivating film at cathodic sites (Larson and Skold, 1958; Stumm, 1960; Merrill and Sanks, 1978; Benjamin et al., 1996; Schock and Fox, 2001). However, calcium carbonate by itself does not form protective scales on iron materials (Benjamin et al., 1996).

Calcium is the main component of cement materials. Calcium oxide makes up 38%–65% of the composition of primary types of cement used for distributing drinking water (Leroy et al., 1996). Until an equilibrium state is reached between the calcium in the cement and the calcium of the conveyed water, it is presumed that calcium from the cement will be either leached out of or precipitated into the cement pores, depending on the calcium carbonate precipitation potential of the water.

Free chlorine residual

Hypochlorous acid is a strong oxidizing agent used for the disinfection of drinking water and is the predominant form of free chlorine below pH 7.5. Free chlorine species (i.e., hypochlorous acid and hypochlorite ion) can also act as primary oxidants towards lead and thus increase lead corrosion (Boffardi, 1988, 1990; Schock et al., 1996; Lin et al., 1997). Gaseous chlorine can lower the pH of the water by reacting with the water to form hypochlorous acid, hydrogen ion and chloride ion. In poorly buffered waters, chlorine can increase corrosivity through a reduction in pH and, in general, by increasing corrosion rates and ORP (Schock and Lytle, 2011). A pipe loop study on the effect of chlorine on corrosion demonstrated that a free chlorine residual (0.2 mg/L) did not increase lead concentrations (Cantor et al., 2003). A survey of 94 U.S. water companies and districts also revealed no relationship between lead levels and free chlorine residual concentrations (in the range of 0 to 0.5 mg/L) (Lee et al., 1989).

Significant lead dioxide deposits in scales were first reported by Schock et al. (1996) in pipes from several different water systems. Suggestions were made as to the chemical conditions that would favour these tetravalent lead (lead dioxide, PbO2) deposits and the changes in treatment conditions (particularly disinfection changes) that could make the PbO2 scales vulnerable to destabilization. Schock et al. (2001) found deposits in lead pipes that contained lead dioxide as the primary protective solid phase; the low lead levels observed in most of that distribution system were attributed to almost pure PbO2 passivating films. Subsequent to these findings, different attributes of the theoretical solubility chemistry of lead dioxide were expanded upon, particularly the association with high free chlorine residuals and low oxidant demand. Elevated lead concentrations in Washington, DC, were linked to a change in secondary disinfectant from chlorine to chloramine, consistent with previous work on PbO2 formation (Schock et al., 2001; Renner, 2004). Analysis of pipe scale solids from Washington, DC, confirmed the reductive dissolution pathway for the breakdown of PbO2 (Schock et al., 2001). Many studies have explored various aspects of the kinetics of PbO2 formation and breakdown (Lytle and Schock, 2005; Switzer et al., 2006; Lin and Valentine, 2008a,b; Liu et al., 2008; DeSantis et al., 2020). Other studies have shown that the reaction is reversible within a time frame of only weeks (Giani et al., 2005; Lytle and Schock, 2005). Edwards and Dudi (2004) and Lytle and Schock (2005) confirmed that lead dioxide deposits could be readily formed and subsequently destabilized in weeks to months under realistic distribution system conditions of pH, ORP and alkalinity.

When hypochlorous acid is added to a water supply, it becomes the dominant oxidant at the copper surface (Atlas et al., 1982; Reiber, 1987, 1989; Hong and Macauley, 1998). Free chlorine residual was shown to increase the copper corrosion rate at lower pH (Atlas et al., 1982; Reiber, 1989) but to decrease it at pH 9.3 (Edwards and Ferguson, 1993; Edwards et al., 1999). However, Schock et al. (1995) concluded that free chlorine affects the equilibrium solubility of copper by stabilizing copper(II) solid phases, resulting in higher levels of copper release. The authors did not observe any direct effects of free chlorine on copper(II) solubility other than the change in valence state and, hence, the indirect change in the potential for cuprosolvency.

On exposure to disinfectant during water treatment and distribution, Fe(II) is oxidized to the relatively insoluble Fe(III) oxidation state, which is responsible for discoloured water. Several authors reported an increase in the iron corrosion rate with the presence of free chlorine (Pisigan and Singley, 1987; Cantor et al., 2003). However, a more serious concern is the fact that iron corrosion by-products readily consume free chlorine residuals (Frateur et al., 1999). Furthermore, when iron corrosion is microbiologically influenced, a higher level of free chlorine residual may actually decrease corrosion problems (LeChevallier et al., 1993). No information was found in the literature correlating iron levels with free chlorine residuals.

No information was found in the literature correlating free chlorine residual with cement pipe degradation.

Chloramines

Chloramines have been reported to influence lead in drinking water distribution systems. As noted previously, in 2000, the Water and Sewer Authority in Washington, DC, started using chloramines instead of chlorine as a secondary disinfectant. Subsequently, more than 1,000 homes in Washington, DC, exceeded the U.S. EPA's action level for lead of 0.015 mg/L, and 157 homes were found to have lead concentrations at the tap greater than 300 µg/L (Renner, 2004; U.S. EPA, 2007). Chlorine is a powerful oxidant, and the lead oxide scale formed over the years had reached a dynamic equilibrium in the distribution system. Switching from chlorine to chloramines reduced the oxidizing potential of the distributed water and destabilized the lead oxide scale, which resulted in increased lead leaching (Schock and Giani, 2004; Lytle and Schock, 2005; DeSantis et al., 2020). The work of Edwards and Dudi (2004) also showed that chloramines do not form a low-solubility solid on lead surfaces. The ORP brought about by chloramination favours the more soluble divalent lead solids. A study by Treweek et al. (1985) also indicated that under some conditions, chloraminated water is more solubilizing than water with free chlorine, although the apparent lead corrosion rate is slower.

Little information has been reported in the literature about the effect of chloramines on copper or iron. Some authors reported that chloramines were less corrosive than free chlorine towards iron (Treweek et al., 1985; Cantor et al., 2003). Hoyt et al. (1979) also reported an increase in red water complaints following a switch from chloramines to a free chlorine residual.

No information was found in the reviewed literature linking chloramines and cement pipe degradation.

Chloride and sulphate

Studies have shown the effect of chloride on lead corrosion in drinking waters to be negligible (Schock, 1990b). In addition, chloride is not expected to have a significant impact on lead solubility (Schock et al., 1996). However, Oliphant (1993) found that chloride increases the galvanic corrosion of lead-based soldered joints in copper plumbing systems.

Chloride has traditionally been reported to be aggressive towards copper (Edwards et al., 1994b). However, high concentrations of chloride (71 mg/L) were shown to reduce the rate of copper corrosion at pH 7 to 8 (Edwards et al., 1994a,b, 1996; Broo et al., 1997, 1999). Edwards and McNeill (2002) suggested that this dichotomy might be reconciled when long-term effects are considered instead of short-term effects: chloride would increase copper corrosion rates over the short term; however, with aging, the copper surface would become well protected by the corrosion by-products formed.

Studies have shown the effect of sulphate on lead corrosion in drinking water to be generally negligible (Boffardi, 1988; Schock, 1990b; Schock et al., 1996). Sulphate was found to stifle galvanic corrosion of lead-based solder joints (Oliphant, 1993). Its effect was to change the physical form of the normal corrosion product to crystalline plates, which were more protective.

Sulphate is a strong corrosion catalyst implicated in the pitting corrosion of copper (Schock, 1990b; Edwards et al., 1994b; Ferguson et al., 1996; Berghult et al., 1999). Sulphate was shown to decrease concentrations of copper in new copper materials; however, upon aging of the copper material, high sulphate concentrations resulted in higher copper levels in the experimental water (Edwards et al., 2002). The authors concluded that this was due to sulphate’s ability to prevent the formation of the more stable and less soluble malachite and tenorite scales. However, Schock et al. (1995) reported that aqueous sulphate complexes are not likely to significantly influence cuprosolvency in potable water.

A review of lead levels reported by 365 water utilities revealed that higher chloride to sulphate mass ratios (CSMRs) were associated with higher 90th-percentile lead levels at the consumer’s tap. The study showed that 100% of the utilities that delivered drinking water with a CSMR below 0.58 met the U.S. EPA’s action level for lead of 0.015 mg/L. However, only 36% of the utilities that delivered drinking water with a CSMR higher than 0.58 met the action level for lead (Edwards et al., 1999). Dudi and Edwards (2004) also conclusively demonstrated that higher CSMRs increased lead leaching from brass due to galvanic connections. High levels of lead in the drinking water of Durham, North Carolina, were found to be caused by a change in coagulant from alum to ferric chloride that had increased the CSMR, resulting in lead leaching from the plumbing system (Renner, 2006; Edwards and Triantafyllidou, 2007).
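The CSMR itself is a simple mass-ratio calculation. The following is a minimal sketch for illustration; the function name and sample concentrations are hypothetical, and the 0.58 benchmark is the value reported by Edwards et al. (1999) above:

```python
def csmr(chloride_mg_l: float, sulphate_mg_l: float) -> float:
    """Chloride-to-sulphate mass ratio, with both concentrations in mg/L."""
    return chloride_mg_l / sulphate_mg_l

# Illustrative finished-water concentrations: 20 mg/L chloride, 40 mg/L sulphate
ratio = csmr(20.0, 40.0)  # 0.50, below the 0.58 benchmark
```

A utility comparing coagulants (e.g., alum versus ferric chloride) could screen candidate finished waters with this ratio before pilot testing.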

No clear relationship between chloride or sulphate and iron corrosion can be established. Larson and Skold (1958) found that the ratio of the sum of chloride and sulphate to bicarbonate (later named the Larson Index) was important (a higher ratio indicating a more corrosive water). Authors reported that chloride (Hedberg and Johansson, 1987; Veleva, 1998) and sulphate (Veleva, 1998) increased iron corrosion. When sections of 90-year-old cast iron pipes were conditioned in the laboratory with chloride at 100 mg/L, an immediate increase in iron concentrations (from 1.8 to 2.5 mg/L) was observed. Conversely, sulphate was found to inhibit the dissolution of iron oxides and thus yield lower iron concentrations (Bondietti et al., 1993). The presence of sulphate or chloride was also found to lead to more protective scales (Feigenbaum et al., 1978; Lytle et al., 2003). In another study, neither sulphate nor chloride was found to have an effect on iron corrosion (Van Der Merwe, 1988).
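The Larson Index mentioned above is computed from milliequivalent (rather than mass) concentrations. A minimal sketch, assuming the standard equivalent weights; the function name and sample water are illustrative:

```python
# Equivalent weights in mg per milliequivalent (assumed standard values)
EQ_CL, EQ_SO4, EQ_HCO3 = 35.45, 48.03, 61.02

def larson_ratio(cl_mg_l: float, so4_mg_l: float, hco3_mg_l: float) -> float:
    """Larson (Larson-Skold) ratio: (chloride + sulphate) / bicarbonate, in meq/L."""
    return (cl_mg_l / EQ_CL + so4_mg_l / EQ_SO4) / (hco3_mg_l / EQ_HCO3)

# Illustrative water: 25 mg/L chloride, 40 mg/L sulphate, 100 mg/L bicarbonate
ratio = larson_ratio(25.0, 40.0, 100.0)  # higher ratios indicate more corrosive water
```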

Lytle et al. (2020) evaluated the effects of chloride, sulphate and DIC on iron release from a 90-year-old cast iron pipe section in pH 8.0 water under stagnant conditions. Results showed that the addition of 150 mg/L sulphate to water increased the mean total iron concentrations to 1.13 to 2.68 mg/L from 0.54 to 0.79 mg/L in water with 10 mg C/L DIC. Similar results were observed when chloride was added alone, and when sulphate and chloride were added together. In contrast, the mean total iron concentrations were reduced by 53% to 80% in waters with a higher DIC of 50 mg C/L.

Rapid degradation of cement-based material can be caused in certain cases by elevated concentrations of sulphate. Sulphate may react with the calcium aluminates present in the hydrated cement, giving highly hydrated calcium sulpho-aluminates, which may cause cracks to appear and reduce the material’s mechanical strength. The effect of sulphate may be reduced if chloride is also present in high concentrations (Leroy et al., 1996).

Natural organic matter

Natural organic matter (NOM) may affect corrosion in several ways. Some organic materials have been found to coat pipes, thereby reducing corrosion, while others increase corrosion. It is generally recommended that NOM be removed to minimize lead and copper concentrations.

Some NOM reacts with the metal surface, providing a protective film and reducing corrosion over long periods (Campbell, 1971). Others have been shown to react with corrosion products to increase lead corrosion (Korshin et al., 1996, 1999, 2000, 2005; Dryer and Korshin, 2007; Liu et al., 2009; Masters and Lin, 2009; Zhou et al., 2015; Masters et al., 2016). NOM is one of the major challenges to plumbosolvency control using orthophosphate in the United Kingdom (Colling et al., 1987; Hayes et al., 2008). NOM may complex calcium ions and keep them from forming a protective CaCO3 coating. Zhou et al. (2015) observed that increases in NOM resulted in significant increases of lead release in simulated partial lead service line replacements. In bench-scale work, Trueman et al. (2017) observed increased lead release from coupons as a result of both uniform and galvanic corrosion in the presence of humic acid. The addition of orthophosphate lowered the lead release but humic substances impacted its effectiveness. Zhao et al. (2018) found that NOM delayed aggregation of lead phosphate particles after PbO2 was destabilized.

Research in copper plumbing pitting has indicated that some NOM may prevent pitting attacks (Campbell, 1954a,b, 1971; Campbell and Turner, 1983; Edwards et al., 1994a; Korshin et al., 1996; Edwards and Sprague, 2001). However, NOM contains strong complexing groups and has also been shown to increase the solubility of copper corrosion products (Korshin et al., 1996; Rehring and Edwards, 1996; Broo et al., 1998, 1999; Berghult et al., 1999, 2001; Edwards et al., 1999; Boulay and Edwards, 2001; Edwards and Sprague, 2001). Nevertheless, the significance of NOM to cuprosolvency relative to competing ligands has not been conclusively determined (Schock et al., 1995; Ferguson et al., 1996). Copper release above 6 mg/L and blue water were observed in a new copper plumbing system. Removal of NOM increased dissolved oxygen and subsequently increased scale formation. The authors suggested that in the absence of NOM, the corrosion rate decreased, accelerating the natural aging process (Arnold et al., 2012). More information on NOM and lead and copper is available elsewhere (Health Canada, 2019a,b, 2020c).

Several authors have shown that NOM decreases the iron corrosion rate of both galvanized steel and cast-iron pipe (Larson, 1966; Sontheimer et al., 1981; Broo et al., 1999). However, experiments conducted by Broo et al. (2001) revealed that NOM increased the corrosion rate at low pH values, but decreased it at high pH values. The opposing effect was attributed to different surface complexes forming under different pH conditions. NOM was also found to encourage the formation of more protective scales in iron pipes by reducing ferric colloids to soluble ferrous iron (Campbell and Turner, 1983). However, NOM can complex metal ions (Benjamin et al., 1996), which may lead to increased iron concentrations. Peng et al. (2013) observed that iron release increased in the presence of NOM and other inorganics.

In some cases, the organics may become food for organisms growing in the distribution system or at pipe surfaces. This can increase the corrosion rate when those organisms attack the surface. Little information was found in the reviewed literature on the relationship between NOM and cement pipe degradation.

Methods for measuring corrosion

As noted above, there is no direct and simple method to measure internal corrosion of drinking water distribution systems. Over the years, a number of methods have been put forward to indirectly assess internal corrosion of drinking water distribution systems. The Langelier Index has been used in the past to determine the aggressivity of the distributed water towards metals. Coupon and pipe rig systems were developed to compare different corrosion control measures. As the health effects of leaching of metals in the distribution system became a concern, measuring the metal levels at the tap became the most appropriate method to both assess population exposure to metals and monitor corrosion control results.

Corrosion indices

Corrosion indices should not be used to assess the effectiveness of corrosion control programs, as they provide only an indication of the tendency of calcium carbonate to dissolve or precipitate. They were traditionally used to assess whether the distributed water was aggressive towards metals and to control corrosion. These corrosion indices were based on the premise that a thin layer of calcium carbonate on the surface of a metallic pipe controlled corrosion. Accordingly, a number of semi-empirical and empirical relationships, such as the Langelier Index, the Ryznar Index, the Aggressiveness Index, the Momentary Excess and the Calcium Carbonate Precipitation Potential, were developed to assess the calcium carbonate–bicarbonate equilibrium. However, a deposit of calcium carbonate does not form an adherent protective film on the metal surface. There is significant empirical evidence contradicting the presumed connection between corrosion and the Langelier Index, and corrosion indices should not be used for corrosion control practices (Benjamin et al., 1996). The work of Edwards et al. (1996) showed that under certain conditions, the use of corrosion indices results in actions that may increase the release of corrosion by-products.
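For illustration of what such an index computes (and given the caveats above about its use), the Langelier Index compares the measured pH with a computed calcium carbonate saturation pH. The sketch below uses a common handbook approximation; the constants and sample values are assumptions drawn from general water treatment references, not from this guidance:

```python
import math

def langelier_index(ph: float, temp_c: float, tds_mg_l: float,
                    ca_hardness_caco3: float, alkalinity_caco3: float) -> float:
    """Approximate Langelier Saturation Index (LSI = pH - pHs).

    Uses a widely cited handbook approximation for the saturation pH (pHs);
    hardness and alkalinity are expressed as mg/L CaCO3.
    """
    a = (math.log10(tds_mg_l) - 1) / 10            # TDS term
    b = -13.12 * math.log10(temp_c + 273) + 34.55  # temperature term
    c = math.log10(ca_hardness_caco3) - 0.4        # calcium term
    d = math.log10(alkalinity_caco3)               # alkalinity term
    ph_s = (9.3 + a + b) - (c + d)
    return ph - ph_s

# Illustrative water: pH 7.5, 10 degC, TDS 200 mg/L, hardness 120, alkalinity 60
lsi = langelier_index(7.5, 10.0, 200.0, 120.0, 60.0)  # negative: tends to dissolve CaCO3
```

A negative result indicates water that tends to dissolve calcium carbonate; as the surrounding text stresses, this says nothing reliable about metal release at the tap.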

Coupons and pipe rig systems

The selection of the most appropriate materials for the conditions under study is critical to achieve the most reasonable approximation. The use of new plumbing material in simulators (e.g., pipe rigs) must be appropriate for the corrosion of concern. For instance, new copper is appropriate when a water system uses copper in new construction. Leaded brass faucets are appropriate when permitted under existing regulatory regimes and available to consumers. Conversely, new lead pipe is not appropriate when looking at a system that has old lead service lines or goosenecks/pigtails with well-developed scales of lead and non-lead deposits. In fact, predictions of the behaviour of these materials in response to different treatments or water quality changes may be erroneous if appropriate materials are not selected for the simulator.

Coupons and pipe rig systems are good tools to compare different corrosion control techniques prior to initiating system-wide corrosion control programs. They provide a viable means of simulating distribution systems without affecting the integrity of the full-scale system. Pipe rig systems can be useful as part of an overall holistic corrosion control optimization strategy, incorporating water quality, scale development and corrosion treatment monitoring. The effectiveness of this integrated approach has been shown for several water systems (Cantor, 2009). A low-cost pipe-loop system is described in Lytle et al. (2012) and could serve as an evaluative tool for utilities. However, even with a prolonged conditioning period for the materials in the water of interest, coupons used in the field or laboratory and pipe rig systems cannot give an exact assessment of the corrosion of larger distribution systems. Such tests cannot reliably reflect population exposure to distribution system contaminants, since too many factors influence contaminant concentration at the consumer’s tap.

Coupons inserted in the distribution system are typically used to determine the corrosion rate associated with a specific metal; they provide a good estimate of the corrosion rate and allow for visual evidence of the scale morphology. There is currently no single standard regarding coupon geometry, materials or exposure protocols in drinking water systems (Reiber et al., 1996). The coupon metal used must be representative of the piping material under investigation. The coupons are typically inserted in the distribution system for a fixed period, and the corrosion rate is determined by measuring the mass loss rate per unit of surface area. The duration of the test must allow for the development of corrosion scales, which may vary from 3 to 24 months, depending on the type of metal examined (Reiber et al., 1996).
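Converting coupon mass loss to a corrosion rate follows the standard mass-loss formula (the form used in ASTM G1). A minimal sketch, with illustrative coupon values:

```python
def corrosion_rate_mm_per_year(mass_loss_g: float, area_cm2: float,
                               exposure_h: float, density_g_cm3: float) -> float:
    """Corrosion rate from coupon mass loss per unit area and exposure time."""
    K = 8.76e4  # unit-conversion constant: cm/h to mm/year
    return K * mass_loss_g / (area_cm2 * exposure_h * density_g_cm3)

# Illustrative cast iron coupon: 50 mg lost over 6 months (4,380 h),
# 20 cm2 exposed area, density 7.15 g/cm3
rate = corrosion_rate_mm_per_year(0.050, 20.0, 4380.0, 7.15)  # ~0.007 mm/year
```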

The major drawback of coupons is their poor reproducibility performance (high degree of variation between individual coupon measurements). This lack of precision is due both to the complex sequence of handling, preparation and surface restoration procedures, which provides an opportunity for analysis-induced errors, and to the high degree of variability that exists in metallurgical properties or chemical conditions on the coupon surface during exposure (Reiber et al., 1996).

Pipe rig systems are more complex than coupons and can be designed to capture several water quality conditions. Laboratory experiments with pipe rig systems can also be used to assess the corrosion of metals. In addition to measuring the mass loss rate per unit of surface area, electrochemical techniques can be used to determine the corrosion rate. Furthermore, pipe rig systems can simulate a distribution system and/or plumbing system and allow for the measurement of contaminant leaching, depending on which corrosion control strategy is used. These systems, which can be made from new materials or sections of existing pipes, are conditioned to allow for the development of corrosion scales or passivating films that influence both the corrosion rate of the underlying metal and the metal release. The conditioning period must allow for the development of corrosion scales, which may vary from 3 to 24 months, depending on the type of metal examined. Owing to this variability, 6 months is recommended as the minimum study duration (Eisnor and Gagnon, 2003).

As with coupon testing, there is currently no single standard for the use of pipe rig systems in the evaluation of corrosion of drinking water distribution systems. However, there are publications that can help guide researchers on complementary design and operation factors to be considered when these studies are undertaken (AwwaRF, 1990, 1994). Eisnor and Gagnon (2003) published a framework for the implementation and design of pilot-scale distribution systems to try to compensate for this lack of standards. This framework identified eight important factors to take into consideration when designing pipe rig systems: (1) test section style (permanent or inserts), (2) test section materials, (3) test section diameter, (4) test section length, (5) flow configuration, (6) retention time, (7) velocity and (8) stagnation time.

Monitoring at the tap

Population exposure to contaminants resulting from the internal corrosion of drinking water systems arises from the corrosion of both the distribution system and the plumbing system. Measuring the contaminant at the tap, particularly lead, remains the best means to determine population exposure. The degree to which a system has minimized corrosivity for the contaminant can also be assessed adequately through measuring the contaminant at the tap over time and correlating it with corrosion control activities.

Treatment/control measures for lead, copper and iron

This document identifies lead levels at the tap as the only measure used to initiate or optimize a corrosion control program. Nevertheless, control measures for copper and iron are also described here, since both the corrosion and the concentrations of these metals will be largely influenced by the corrosion control method chosen.

Corrosion of drinking water systems and the release of contaminants into the conveyed water depend on both the material that is subject to corrosion and the water that comes in contact with the material. The contact time of the water with the material greatly influences the level of metals present in the drinking water. Therefore, flushing the water in the plumbing materials after a period of stagnation and prior to consuming it will help reduce exposure to lead. Reducing exposure to heavy metals can also be achieved, as an interim measure, by the use of certified drinking water treatment devices.

Drinking water can also be made less corrosive by adjusting its pH or alkalinity or by introducing corrosion inhibitors. Adjustments of the pH or alkalinity or the use of corrosion inhibitors to control lead, copper or iron levels in drinking water should be done with caution. Pilot studies should be conducted to determine the effectiveness of the corrosion control method chosen for the particular conditions prevailing in the distribution system. Even if a method is effective in reducing lead, copper or iron levels in pilot tests, it may not be effective under the conditions of the full-scale distribution system. Thus, rigorous full-scale monitoring should also be conducted before, during and following the initiation or optimization of a system's corrosion control program.

The use of some treatment processes can increase lead levels through the water quality changes they produce.

Mitigation in drinking water distribution systems

The judicious selection of materials (i.e., materials that contain little lead, such as lead‑free solders, low-lead fittings or in-line devices) is one of the possible means to reduce population exposure to the contaminants of concern. For example, the use of lead-free solders and brass fittings with a low lead content ensures that less lead is found in drinking water as a result of solder corrosion.

Lead service line replacement

The full replacement of a lead service line (i.e., both the utility and homeowner portions) can significantly reduce lead concentrations at a consumer's tap. Generally, utilities should strongly encourage consumers to replace their portion of the lead service line when the utility is replacing the public portion. This ensures a full replacement of the lead service line and minimizes the consumer's exposure to lead. Although partial lead service line replacement (i.e., replacing only the utility's or consumer's portion) can also reduce lead concentrations, it does not result in a proportional decrease in lead levels when compared with full service line replacement (Health Canada, 2019a).

Replacing the lead service line (full or partial) can disturb or dislodge existing lead scales or sediments containing lead, resulting in a significant increase in lead levels at the tap. This increase has been shown to continue for 3 or more months after the lead service line replacement (Trueman and Gagnon, 2016; Deshommes et al., 2017; Pieper et al., 2017; Trueman et al., 2017; Pieper et al., 2018; Doré et al., 2019; Health Canada, 2019a). Doré et al. (2019) found that the optimal corrosion control treatments for full and partial lead service line replacements differ. Utilities should, therefore, identify the corrosion control treatment that would be effective for all lead service line configurations. Generally, the addition of orthophosphate was found to be the most effective corrosion control treatment for full replacement, while decreasing the CSMR yielded the best results for partial replacement.

When undertaking lead service line replacement, appropriate flushing should be conducted after the replacement, and debris should subsequently be cleaned from the screens or aerators of outlets (Health Canada, 2019a). Extensive initial flushing by the consumer should be encouraged, and other mitigation measures, such as point-of-use filtration, public education and/or weekly or biweekly sampling until lead levels stabilize, should be considered by the utility. Water quality at the consumer’s tap should be monitored closely for several months following both full and partial lead service line replacement. The importance of regularly cleaning outlet aerators should be communicated to consumers to ensure that any lead-containing particles are removed as part of ongoing maintenance (Health Canada, 2019a). A set of procedures and best practices for undertaking full and partial lead service line replacement (including tools to use, flushing instructions, customer information and verification) can be found in AWWA Standard C810-17 (AWWA, 2017b).

Mitigation of galvanic corrosion

Partial replacement may also induce galvanic corrosion at the site where new copper piping is attached to the remaining lead pipe. When connecting two dissimilar metals, a dielectric fitting should be used to prevent galvanic corrosion (Wang et al., 2012; Clark et al., 2014; AWWA, 2017b). Similarly, because PVC is non-conductive, connecting PVC piping to the lead service line in a partial replacement scenario would also be expected to prevent galvanic corrosion. Documentation of all lead service line replacement activities is an important step in ensuring that the utility has complete records of its lead service line replacement progress and programs (AWWA, 2017b).

Mitigation of copper corrosion

Given the variety of water quality, microbiological and flow condition factors that can cause copper pitting, utilities should consider using tools such as those found in Sarver et al. (2011). These tools help utilities assess key water quality factors, such as the removal of NOM, phosphate or silicate, as well as waters with chlorine, high pH or low alkalinity, in order to avoid or mitigate copper pitting. A low-cost pipe-loop system described by Lytle et al. (2012) could also serve as an evaluative tool for utilities. High-alkalinity, low-chloride water is associated with decreased dezincification of brass (Sarver et al., 2011). Lytle and Schock (1996) found that orthophosphate did not provide a clear benefit at pH 7 and 8.5, although they suggested that orthophosphate might be more effective for copper leaching from brass.

Use of certified materials

Health Canada recommends that, where possible, water utilities and consumers use drinking water materials that have been certified as conforming to the applicable NSF/ANSI health-based performance and lead content standards (NSF International, 2020a,b) (see “Lead pipes and solders”). These standards have been designed to safeguard drinking water by helping to ensure material safety and performance of products that come into contact with drinking water.

Mitigation strategy for distribution systems

Discolouration (red water) episodes are likely to be accompanied by the release of accumulated contaminants, including lead, because dissolved lead adsorbs onto iron deposits in the lead service line. Therefore, discoloured water events should trigger distribution system maintenance actions, such as systematic unidirectional flushing of the distribution system, to ensure that all particles are flushed out before the water reaches the consumer (Vreeburg, 2010; Friedman et al., 2016). Friedman et al. (2010) identified several key conditions and practices that should be controlled in order to maintain water stability for deposited inorganics, including pH, ORP and corrosion control measures, as well as avoiding both the uncontrolled blending of surface water and groundwater and the uncontrolled blending of chlorinated and chloraminated water. Utilities can determine baseline water quality to establish boundary conditions outside of which an excursion could be expected to trigger a release event (Friedman et al., 2016). Strategies to minimize physical and hydraulic disturbances should also be developed.

Other measures that contribute to maintaining stable conditions in the distribution system include pipe cleaning (e.g., unidirectional flushing, pipe pigging), pipe replacement and appropriate treatment to minimize the loading of other contaminant sinks (e.g., iron, manganese) and decrease the concentrations of contaminants entering the distribution system (e.g., arsenic, barium, chromium, manganese) (Friedman et al., 2010; Cantor, 2017).

For systems using orthophosphate for corrosion control, the inhibitor should be applied at all entry points and a consistent residual concentration should be maintained throughout the distribution system to promote the stability of phosphate-based scales (Friedman et al., 2010).

Biostability in the distribution system is another important factor that minimizes contaminant accumulation and release, especially from microbial activity. Biostability can be achieved by minimizing nutrients in the water (e.g., organic carbon, ammonia, nitrate/nitrite, total phosphorus), managing water age and maintaining a sufficient disinfectant residual (Cantor, 2017; Health Canada, 2020d).

Mitigation of impacts resulting from treatment

Some treatment technologies can increase lead in drinking water by changing water quality parameters that affect lead release. In the anion exchange process, used for the removal of contaminants such as uranium, freshly regenerated ion exchange resin removes bicarbonate ions, causing reductions in pH and total alkalinity during the initial 100 bed volumes (BVs) of a run. Raising the pH of the treated water may be required at the beginning of a run (100 to 400 BVs) to avoid corrosion (Clifford, 1999; Wang et al., 2010; Clifford et al., 2011). Similarly, frequent regeneration of an ion exchange resin can have an impact on corrosion. In a case study in Maine, frequent regeneration of the ion exchange resin was instituted to reduce the levels of uranium in the waste stream (residuals). This resulted in a significant and continual decrease in pH and subsequent leaching of copper and lead into the drinking water (Lowry, 2009, 2010). Since reverse osmosis (RO) continually and completely removes alkalinity from water, it will continually lower the pH of treated water and increase its corrosivity. The product water pH must therefore be adjusted to avoid corrosion issues in the distribution system, such as the leaching of lead and copper (Schock and Lytle, 2011; U.S. EPA, 2012).

Controlling pH and alkalinity

The adjustment of pH at the water treatment plant is the most common method for reducing corrosion in drinking water distribution systems and the leaching of contaminants into the distributed water. Raising the pH remains one of the most effective methods for reducing lead and copper corrosion and minimizing lead, copper and iron levels in drinking water. Experience has shown that the optimal pH for lead and copper control falls between 7.5 and 9.5. The higher end of this pH range would also be beneficial in reducing iron levels, but may favour iron corrosion and tuberculation in some systems. Although increasing alkalinity has traditionally been recommended for corrosion control, it is not clear whether it is the best means to reduce lead and copper levels in drinking water. The literature appears to indicate that the optimal alkalinity for lead and copper control falls between 30 and 75 mg/L as CaCO3. Higher alkalinity (> 60 mg/L as CaCO3) is also preferable for the control of iron corrosion, iron levels and red water occurrences. Moreover, alkalinity controls the buffer intensity of most water systems. Sufficient alkalinity is therefore necessary to provide a stable pH throughout the distribution system, both for the corrosion control of lead, copper and iron and for the stability of cement-based linings and pipes.

Corrosion inhibitors

Two predominant types of corrosion inhibitors are available for potable water treatment: phosphate- and silicate-based compounds. The most commonly used inhibitors include orthophosphate, polyphosphate (typically, blended polyphosphates) and sodium silicate, each with or without zinc.

The successful use of corrosion inhibitors is very much based on trial and error and depends on both the water quality and the conditions prevailing in the distribution system. The effectiveness of corrosion inhibitors is largely dependent on maintaining a residual of inhibitors throughout the distribution system and on the pH and alkalinity of the water.

Measuring the concentration of inhibitors within the distribution system is part of any good corrosion control practice. Generally, direct correlations between the residual concentration of inhibitors in the distribution system and the levels of lead, copper or iron at the tap are not possible.

Health Canada recommends that, where possible, water utilities and consumers choose drinking water additives, such as corrosion inhibitors, that have been certified as conforming to the applicable NSF/ANSI health-based performance standard or equivalent. Phosphate- and silicate-based corrosion inhibitors are included in NSF/ANSI/CAN 60, Drinking Water Treatment Chemicals – Health Effects (NSF International, 2020c). These standards have been designed to safeguard drinking water by ensuring that additives meet minimum health effects requirements and thus are safe for use in drinking water.

Stannous (tin) chloride has been used as a corrosion inhibitor, but very few experimental data on this inhibitor exist. Under certain conditions, it reacts with the metal at the surface of the pipe, or with the corrosion by-products already in place, to form a more insoluble deposit on the inside walls of the pipe. Since these deposits are less soluble, levels of metals at the tap are reduced. However, several studies have failed to demonstrate that stannous chloride is a viable corrosion control treatment method. It may stabilize pH in the distribution system by inhibiting biofilm growth, thus contributing to lower lead concentrations. Stannous chloride was not effective at controlling copper corrosion in a groundwater system with high DIC and high hardness (AWWA, 2017b).

Phosphate-based inhibitors

Orthophosphate and zinc orthophosphate are the inhibitors most often reported in the literature as being successful in reducing lead and copper levels in drinking water (Health Canada, 2019a,b; Cantor et al., 2017). Orthophosphate formulations that contain zinc can decrease the rate of dezincification of brass and, given the proper chemical conditions, can deposit a protective zinc coating (probably basic zinc carbonate or zinc silicate) on the surface of cement or asbestos-cement (A-C) pipe. However, research has generally shown that zinc in the formulation is unnecessary for the control of lead from pipes (Schock and Lytle, 2011).

Orthophosphate has been shown in field and laboratory tests to greatly reduce lead solubility through the formation of Pb(II) orthophosphate passivating films. Orthophosphate reacts with the metal of the pipe itself (particularly with lead, iron and galvanized steel) within restricted pH and dosage ranges. Its effectiveness depends on proper control of pH and DIC concentration, and on a sufficient orthophosphate dosage and residual throughout the distribution system and premise plumbing. Based on solubility, much higher doses of orthophosphate are needed in waters with higher carbonate content (Schock and Lytle, 2011).

The dosages of orthophosphate applied in the United Kingdom that have been highly effective for plumbosolvency control are generally 2 to 4 times the dosages commonly applied in the United States (Hayes et al., 2008; Cardew, 2009). Cardew (2009) has reported on the success of long-term application of high orthophosphate dosages to mitigate both particulate lead release and plumbosolvency in difficult waters.

Water systems with low DIC levels have reported difficulty in achieving good control of lead release using phosphate at a pH over 8. This phenomenon has also been observed in laboratory experiments with low DIC waters and approximately 1 mg PO4/L orthophosphate (Schock, 1989; Schock et al., 1996, 2008b). The rate of formation of lead orthophosphate passivating films seems to be slower than the rate of carbonate or hydroxycarbonate film formation. Considerable time must be allowed for the reactions to take place. Some studies have shown that many months to several years are needed to reduce the rate of lead release down to essentially constant levels (Lyons et al., 1995; Cook, 1997). The speed and amount of reduction appear to be proportional to the applied dosage of orthophosphate.

Solubility models for lead and copper indicate that the optimal pH for orthophosphate film formation is between 6.5 and 7.5 on copper surfaces (Schock et al., 1995) and between 7 and 8 on lead surfaces (Schock, 1989). A survey of 365 water utilities under the U.S. EPA Lead and Copper Rule also revealed that utilities using orthophosphate had significantly lower copper levels only when the pH was below 7.8, and lower lead levels only when the pH was below 7.4 and alkalinity was below 74 mg/L as CaCO3 (Dodrill and Edwards, 1995). It has been reported that orthophosphate can still reduce lead in the pH range of 7.0 to 8.0 (AWWA, 2017a). Schock and Fox (2001) demonstrated successful copper control in high-alkalinity water with orthophosphate when pH and alkalinity adjustments were not successful. Typical orthophosphate residuals are between 0.5 and 3.0 mg/L (as phosphoric acid) (Vik et al., 1996).

Several authors reported that orthophosphate reduced iron levels (Benjamin et al., 1996; Lytle and Snoeyink, 2002; Sarin et al., 2003), iron corrosion rates (Benjamin et al., 1996; Cordonnier, 1997) and red water occurrences (Shull, 1980; Cordonnier, 1997). Reiber (2006) noted that orthophosphate was effective for hardening existing iron scales at pH 7.4 to 7.8, reducing red water occurrence. Lytle et al. (2003) observed that total iron release remained low following discontinuation of orthophosphate addition, due to the formation of iron-phosphorus solids in the scales, which reduced the solubility of ferrous iron and/or decreased the permeability of the scales.
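
Orthophosphate residuals are reported in the literature on several different unit bases (mg/L as P, as PO4 or, as above, as phosphoric acid), which complicates comparisons between studies. The sketch below shows the standard molar-mass conversions between these bases; the function names are illustrative only.

```python
# Convert orthophosphate concentrations between common reporting bases.
# Molar masses (g/mol): P = 30.97, PO4 = 94.97, H3PO4 = 97.99.
M_P, M_PO4, M_H3PO4 = 30.97, 94.97, 97.99

def as_po4(mg_per_l_as_p: float) -> float:
    """mg/L as P -> mg/L as PO4 (same moles of phosphorus)."""
    return mg_per_l_as_p * M_PO4 / M_P

def as_p(mg_per_l_as_po4: float) -> float:
    """mg/L as PO4 -> mg/L as P."""
    return mg_per_l_as_po4 * M_P / M_PO4

def as_h3po4(mg_per_l_as_p: float) -> float:
    """mg/L as P -> mg/L as phosphoric acid."""
    return mg_per_l_as_p * M_H3PO4 / M_P

# Example: a residual of 1.0 mg/L as P on the other two bases
print(round(as_po4(1.0), 2))    # ≈ 3.07 mg/L as PO4
print(round(as_h3po4(1.0), 2))  # ≈ 3.16 mg/L as H3PO4
```

A residual of 1.0 mg/L as P therefore corresponds to roughly 3.1 mg/L as PO4, which is worth keeping in mind when comparing the dosage ranges cited above.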

Phosphate-based inhibitors, especially orthophosphate, have also been shown to reduce heterotrophic plate counts and coliform bacteria in cast iron distribution systems by controlling corrosion. An 18-month survey of 31 water systems in North America observed that distribution systems using phosphate-based inhibitors had fewer coliform bacteria than systems without corrosion control (LeChevallier et al., 1996). Similarly, orthophosphate treatment at 1 mg/L applied to a highly corroded cast iron reactor immediately reduced iron oxide release and bacterial counts in the water (Appenzeller et al., 2001).

Dosing with blended phosphates raises the complication of the specific chemical form and the complexing or sequestering ability of the polyphosphate component. Although most studies show some benefit from higher ratios of orthophosphate to polyphosphate, this is not always the case when the polyphosphate component is a strong complexing agent that is stable against reversion. The background water chemistry, particularly iron, calcium and magnesium concentrations, also plays a major role in the effectiveness of blended phosphates.

Polyphosphates have frequently been used to successfully control tuberculation and restore hydraulic efficiency to transmission mains. Polyphosphates can sometimes change the type of corrosion from pitting or concentration cell corrosion to a more uniform type, which causes fewer leaks and aesthetic complaints. Pipe walls are usually thick enough that some increase in the dissolution rate is not of practical significance. A major role for polyphosphates is the sequestration and mitigation of discoloured water caused by source water manganese and iron, as well as the reduction of some scaling from hard or lime-softened water. Although sequestration can be effective at reducing the colour associated with metals in water, it does not remove them. As such, consumers will still be exposed to the sequestered metal if the water is consumed. Several authors reported that the use of polyphosphate could prevent iron corrosion and control iron concentrations (McCauley, 1960; Williams, 1990; Facey and Smith, 1995; Cordonnier, 1997; Maddison and Gagnon, 1999). However, polyphosphate does not act as a corrosion inhibitor but rather as a sequestrant for iron, reducing the visual appearance of red water (Lytle and Snoeyink, 2002). According to McNeill and Edwards (2001), this has led many researchers to conclude that iron by-products had decreased, when in fact the iron concentrations or the iron corrosion rates may have increased.

The use of polyphosphate was reported as being successful at reducing lead levels in some studies (Boffardi, 1988, 1990, 1993; Lee et al., 1989; Hulsmann, 1990; Boffardi and Sherbondy, 1991). However, it was also reported as being ineffective at reducing lead concentrations, and even detrimental in some circumstances (Holm et al., 1989; Schock, 1989; Holm and Schock, 1991; Maas et al., 1991; Boireau et al., 1997; Cantor et al., 2000; Edwards and McNeill, 2002). McNeill and Edwards (2002) showed that polyphosphate significantly increased lead release from 3‑year-old pipes for both 8-h and 72-h stagnation times. Increases in lead concentrations of as much as 591% were found when compared with the same conditions without inhibitors. The authors recommended not using polyphosphate to control lead. Only limited data are available on the impact of polyphosphate on copper solubility. Cantor et al. (2000) reported that the use of polyphosphate increased copper levels at the tap. In a copper pipe rig study, Edwards et al. (2002) reported that although polyphosphate generally reduced soluble copper concentrations, copper concentrations increased significantly at pH 7.2 and an alkalinity of 300 mg/L as CaCO3, since polyphosphates hinder the formation of the more stable malachite scale.

Silicate-based inhibitors

Limited data are available on the impact of sodium silicate on lead and copper solubility. Because sodium silicate is a basic compound, its addition is always associated with an increase in pH, making it difficult to attribute reductions in lead or copper concentrations to sodium silicate alone, since the increase in pH may itself decrease lead and copper concentrations.

A study conducted by Schock et al. (2005a) in a medium-sized utility addressed both iron in the source water and lead and copper leaching in the plumbing system. The problems were solved simultaneously through the addition of sodium silicate at the three wells that contained elevated levels of iron and manganese and that served homes with lead service lines. A fourth well required only chlorination and pH adjustment with sodium hydroxide. At the three wells, an initial silicate dose of 25 to 30 mg/L increased the pH from 6.3 to 7.1 and immediately resulted in 55% and 87% reductions in lead and copper levels, respectively. An increase in the silicate dose to 45 to 55 mg/L raised the pH to 7.5 and resulted in an even greater reduction in the lead and copper levels (to 0.002 mg/L and 0.27 mg/L, respectively). Colour and iron levels were equal to or better than those prior to treatment. However, the use of sodium silicate alone has not been shown conclusively in the literature to reduce lead or copper concentrations.

Between 1920 and 1960, several authors reported reductions in red water occurrences when using sodium silicate (Tresh, 1922; Texter, 1923; Stericker, 1938, 1945; Loschiavo, 1948; Lehrman and Shuldener, 1951; Shuldener and Sussman, 1960). However, a field study conducted in a Canadian distribution system revealed no beneficial effects from sodium silicate (4 to 8 mg/L; pH range of 7.5 to 8.8) in controlling iron concentrations in old cast iron and ductile iron pipes. Visual inspection via a camera inserted inside a cast iron pipe, conducted prior to the injection of sodium silicate, immediately following the mechanical removal of the tubercles and after 5 months of sodium silicate addition, revealed no reduction in the degree of tuberculation and no prevention of the formation of new tubercles at these low concentrations (Benard, 1998). Very few studies have demonstrated the effectiveness of sodium silicates as corrosion inhibitors or established their true mechanism of action.

Experiments studying the effects of high silica levels at different pH values found that, at pH 8, silica may play a role in the stabilization of the cement pipe matrix by interfering with the formation of protective ferric iron films that slow calcium leaching (Holtschulte and Schock, 1985). Li et al. (2021) found that a sodium silicate dose of 20 mg/L did not control lead under either partial or full lead service line conditions in low alkalinity water at a constant pH of 7.4, when compared with orthophosphate (with and without zinc) at 0.3 mg/L as P. A sodium silicate dose of 48 mg/L was found to disperse corrosion scale in cast iron pipe sections and lead service lines, which substantially increased lead and iron release. The authors concluded that corrosion inhibition due to direct lead-silicate interactions is unlikely. Aghasadeghi et al. (2021) compared sodium silicates, orthophosphate and pH adjustment under the same pH conditions in water with an alkalinity of 79 mg/L as CaCO3. The authors found that sodium silicate treatment at 20 mg/L was less effective in reducing lead release than pH adjustment (pH 7.9), and that increasing the silicate dose to 25 mg/L caused increased lead release and destabilization of the corrosion scale. The authors concluded that silicates did not offer any benefits for reducing lead release from the lead service line other than increasing pH.

Lintereur et al. (2011) compared three sodium silicate dosages (3 mg/L, 6 mg/L and 12 mg/L) and found that sodium silicate decreased copper release compared with the controls (no treatment and pH increase). The decrease appeared to be dose dependent, with the lowest copper release observed at the highest sodium silicate dose. Scale analysis revealed a silicate-copper scale, indicating that a silicate scale may be partly responsible for the inhibitory action. Woszczynski et al. (2015) found that sodium silicates (18 mg Si/L, at pH 7.3 and pH 6.3) did not control copper when compared with phosphate (0.8 mg PO4/L, pH 7.3). The authors noted that silicate performance was affected by pH and could be affected by other water quality conditions.

Flushing and maintenance

Since the levels of trace metals increase upon stagnation of the water, flushing the water present in the plumbing system can significantly reduce lead and copper levels. In that respect, flushing can be seen as an exposure control measure. A study by Gardels and Sorg (1989) showed that 60% to 75% of the lead leached from common kitchen faucets appears in the first 125 mL of water collected from the faucet. They further concluded that after 200 to 250 mL, 95% or more of the lead has normally been flushed from faucets (assuming no lead contribution from other sources upstream of the faucet). In Canadian studies in which the cold water tap of homes was flushed for 5 min, no trace metal concentrations exceeded their respective Canadian drinking water guidelines at that time (Méranger et al., 1981; Singh and Mavinic, 1991). However, flushing the cold water tap in buildings, particularly in large buildings or institutions, may not be sufficient to reduce lead and copper levels below the guidelines (Singh and Mavinic, 1991; Murphy, 1993; Deshommes et al., 2012; McIlwain et al., 2016; Miller-Schulze et al., 2019).

Murphy (1993) demonstrated that the median lead concentration in samples collected from drinking fountains and faucets in schools had increased significantly by lunchtime after a 10-min flush in the morning. The authors concluded that periodic flushing throughout the day would be necessary to adequately reduce lead concentrations. Flushing is considered a short-term approach for reducing lead (Deshommes et al., 2012; McIlwain et al., 2016; Doré et al., 2018; Katner et al., 2018; Miller-Schulze et al., 2019). Doré et al. (2018) observed that partial flushing (30 s) and full flushing (5 min) reduced lead concentrations by 88% and 92%, respectively. However, after only 30 min of stagnation, median lead levels increased to > 45% of the levels seen after extended stagnation (> 8 h). The authors recommended that the first 250 mL of water stagnating in taps be flushed prior to consumption, even following short stagnation. They found that in most fountains, it would take 2 to 20 s to flush this volume of water.
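
The flushing times reported by Doré et al. (2018) follow directly from the volume to be cleared and the outlet flow rate (time = volume ÷ flow rate). A minimal sketch of that calculation, using assumed flow rates for illustration only:

```python
def flush_time_s(volume_ml: float, flow_l_per_min: float) -> float:
    """Seconds needed to flush a given volume at a given flow rate."""
    return (volume_ml / 1000.0) / flow_l_per_min * 60.0

# Time to flush the first 250 mL at assumed outlet flow rates (L/min):
for flow in (0.75, 2.0, 4.0):  # illustrative values, not from the study
    print(f"{flow} L/min -> {flush_time_s(250, flow):.0f} s")
```

At an assumed fountain flow of 0.75 L/min, flushing 250 mL takes about 20 s, consistent with the upper end of the 2 to 20 s range reported above.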

When lead service lines are the source of lead, flushing the tap until the water turns cold is not an appropriate measure, since the water turning cold generally indicates that water from the service line has just reached the tap. Sampling several litres sequentially can help determine whether flushing alone will be successful in reducing lead concentrations, as well as the length of time required for flushing. Flushed samples in Washington, DC, revealed that lead levels were sometimes highest after 1 min of flushing. Lead concentrations as high as 48 mg/L were observed after flushing. In some cases, the lead concentrations were still elevated after 10 min of flushing (Edwards and Dudi, 2004).

Lead service line replacement (full or partial) and construction activities can disturb or dislodge existing lead scales or sediments containing lead, resulting in a significant increase in lead levels at the tap (Sandvig et al., 2008; Cartier et al., 2013; Del Toral et al., 2013). Extensive initial flushing by the consumer should be encouraged, and utilities should follow best practices for flushing (AWWA, 2017b). In some cases, flushing may not be sufficient to reduce lead concentrations at the tap. Utilities should therefore conduct appropriate monitoring to confirm that flushing is an effective measure before recommending it to consumers. They should also ensure that flushing is carried out properly and communicate its practical limitations (Katner et al., 2018).

Maintenance activities, such as the routine cleaning of debris from aerators or screens on faucets, may also be important for reducing lead levels at the tap. Debris on aerators or screens can include particulate lead, which can be abraded and pass through the screen during periods of water use. This can result in a significant, though variable and sporadic, increase of particulate lead in the water from the tap. It is important to ensure that sampling is done with the aerator or screen in place so that potential particulate lead contributions may be detected. Best practice also calls for flushing larger distribution systems on a regular basis, especially at dead ends, to remove loose corrosion by-products and any attached microorganisms.

Drinking water treatment filters

As an interim measure, exposure to lead can be reduced through the use of drinking water treatment devices. Because exposure to lead from drinking water is a concern only if the lead is ingested, point-of-use (POU) treatment devices certified for lead removal, installed at drinking water taps, are considered the best approach to reduce concentrations to safe levels immediately before consumption. It should be noted that in situations where high levels of lead are possible after replacement of a lead service line, drinking water treatment devices may have reduced capacity and require more frequent replacement. Studies have demonstrated that installation of POU filtration devices can be an effective interim measure to reduce exposure to both soluble and particulate lead (Deshommes et al., 2010, 2012; Bosscher et al., 2019; CDM Smith, 2019; Pan et al., 2020; Purchase et al., 2020). Deshommes et al. (2012) showed that POU filtration devices at a federal penitentiary complex significantly decreased dissolved and particulate lead concentrations, even where the particulate fraction of lead was double the soluble lead concentration. Some POU filtration systems have been shown to remove lead for up to 6 months without a change of media (Mulhern and Macdonald Gibson, 2020).

Health Canada does not recommend specific brands of drinking water treatment devices, but strongly recommends that consumers look for a mark or label indicating that the device or component has been certified by an accredited certification body as meeting the appropriate NSF/ANSI drinking water treatment standards. These standards have been designed to safeguard drinking water by helping to ensure the safety and performance of materials that come into contact with drinking water. Certification organizations accredited by the Standards Council of Canada test and certify treatment devices for reduction of lead (and other contaminants) to the relevant NSF/ANSI standards. In Canada, the following organizations have been accredited by the Standards Council of Canada (www.scc.ca) to certify drinking water devices and materials as meeting NSF/ANSI standards:

Adsorption (i.e., carbon block/resin), RO and distillation are effective treatment technologies at the residential scale for the removal of lead at the tap. Certified residential treatment devices using adsorption and RO are currently available for the reduction of lead (dissolved and particulate forms) in drinking water. There are currently no certified distillation systems.

For a drinking water treatment device to be certified for the removal of lead, the device must be capable of reducing an influent lead concentration of 150 μg/L (particulate and dissolved) to a maximum final (effluent) lead concentration of less than 5 μg/L (NSF International, 2020d,e,f).
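
Expressed as a removal efficiency, this certification criterion requires a reduction of roughly 97% across the device. A trivial check of that arithmetic:

```python
# NSF/ANSI lead-reduction challenge condition described above:
influent = 150.0      # ug/L, particulate and dissolved lead combined
max_effluent = 5.0    # ug/L, maximum permitted in treated water

required_reduction = (influent - max_effluent) / influent
print(f"{required_reduction:.1%}")  # prints 96.7%
```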

Alternative approaches

Linings, coatings and paints are typically applied mechanically when the pipe is manufactured or in the field, prior to installation. Some linings can be applied after the pipe is in service. The most common pipe linings are epoxy paint, cement mortar and polyethylene. The use of coatings must be carefully monitored, because they can be the source of several water quality problems (Schock and Lytle, 2011). Coatings should meet the requirements of NSF/ANSI/CAN Standard 61 and the relevant AWWA standards.

In-situ lining products have been developed, consisting of collapsed tubing that is inserted through small-diameter pipes and then expanded with heat and pressure to seal the pipe interior surface against water contact. Similarly, epoxy coatings are being considered for lining lead service lines. There is little in the published literature regarding their use, but, if successful, they could help reduce disruption and downtime (U.K. WIR, 2012). However, few data are available to support their long-term durability and their ability to work effectively in badly distorted or damaged pipes or through in-line fittings (e.g., valves, tees) (U.K. WIR, 1997; Tarbet et al., 1999). Caution should be exercised when considering this rehabilitation option, as any failure may unknowingly put the consumer at risk of lead exposure.

Rationale for monitoring programs to assess corrosion

The sampling protocols and goals for the monitoring programs presented below are based on an understanding of the variations in lead concentrations observed at the tap, which depend on the period of stagnation, the age and source of lead, and other factors. Monitoring of lead at the tap can be done using different sampling protocols, but the selected protocol must take into consideration the desired objective. Sampling protocols can be used to identify sources of lead, assess the effectiveness of corrosion control, assess compliance and estimate exposure to lead. They will vary based on factors such as desired stagnation time, sample volume, sampling sites and sampling frequency (Schock, 1990a; van den Hoven and Slaats, 2006; Schock and Lemieux, 2010). The selection of the stagnation time is based on practical considerations and the desire to generate higher lead concentrations, which make it easier to evaluate any changes (Jackson and Ellis, 2003).

Residential monitoring programs

Previous residential monitoring programs conducted in the United States and Europe have demonstrated that lead levels at the tap vary significantly both across a system and at one site (Karalekas et al., 1978; Bailey and Russell, 1981; AwwaRF, 1990; Schock, 1990a,b; U.S. EPA, 1991). The concentration of lead at the tap depends on a variety of chemical and physical factors, including water quality (pH, alkalinity, temperature, chlorine residual, etc.), stagnation time, as well as the age, type, size and extent of the lead-based materials. Water use and the volume of water collected have also been identified as important factors affecting the concentration of lead at the tap (Deshommes et al., 2016; Doré et al., 2018). Statistically, the greater the variability, the larger the sample population size must be to obtain results that are representative of a system. In addition, when monitoring is conducted to assess the effectiveness of changes in a treatment approach to corrosion control, it is important to reduce the variability in lead levels at the tap (AwwaRF, 1990). Monitoring programs must, therefore, include controls for the causes of variability in order to obtain results that are representative and reproducible (Schock, 1990a; AwwaRF, 2004; European Commission, 1999).

For residential monitoring programs, sampling considerations should include ensuring that sampling is done at the kitchen tap, with the aerator or screen on and at flow rates typically used (approximately 4 to 5 L/min) by consumers (van den Hoven and Slaats, 2006). These steps help to ensure that the sample collected is representative of the typical lead concentrations from the tap.

An approach using random daytime (RDT) sampling with a goal that triggers investigative sampling was selected. The approach integrates the use of the MAC for lead to inform consumer action, reducing the risks to susceptible individuals (i.e., infants, children and pregnant persons). This approach is complementary to the protocol used in the lead guideline and is easy to implement, informative and a proven alternative which can also be used for larger buildings and multiple-unit dwellings (Cardew, 2003). Where permitted, monitoring under this approach could be based on consumer- or utility-collected samples.

When an exceedance occurs in a sample from a tap in domestic premises, or other premises that are not a public building, no further samples are required, but a comprehensive investigation should be undertaken to establish whether lead is present in the pipework belonging to the homeowner. RDT and 30 MS sampling protocols can both be used for residential sites, as they are considered appropriate for identifying priority areas for actions to reduce lead concentrations and for assessing compliance. Although both RDT and 30 MS are suitable for evaluating the effectiveness of corrosion control strategies, RDT sampling is used system-wide, whereas 30 MS sampling is typically used at sentinel sites (Hayes, 2010). Due to its random nature, RDT sampling requires 2–5 times more samples than 30 MS to be statistically robust. RDT sampling is relatively inexpensive, more practical to implement and generally more acceptable to the consumer than 30 MS sampling; the 30 MS protocol, however, can also be used for investigating the cause of exceedances and identifying appropriate mitigation measures.

Sampling programs should be conducted throughout the year to take into account seasonal effects on lead variability. Sampling should be conducted at the cold water tap in the kitchen or other appropriate location where water is used for drinking or food preparation. Regardless of the protocol used, all samples should be collected in wide-mouth sample bottles and without removing the aerator.

Determination of sampling protocols for a residential monitoring program

The objectives of the residential monitoring program: option 1, RDT + stagnation (two-tier), are to identify and diagnose systems in which corrosion of lead from a variety of materials is an issue, to assess the potential for consumers to be exposed to elevated concentrations of lead, and to assess the quality and effectiveness of corrosion control programs. The sampling protocols used in various studies of lead levels at the tap, as well as studies on the factors that affect the variability of lead concentrations, were considered in the selection of this program. A two-tier approach was determined to be an effective method for assessing system-wide corrosion and identifying potentially high levels of lead. It is also effective in providing the appropriate information for selecting the best corrective measures and evaluating the effectiveness of corrosion control for residential systems in Canada.

In some cases, the responsible authority may wish to collect samples for both tiers during the same site visit. This step eliminates the need to return to the residence if the system goal for Tier 1 is not met. The analyses for the second tier are then done only on the appropriate samples, based on the results of the Tier 1 samples.

Tier 1 RDT (option 1)

The first-tier sampling protocol determines the contribution of lead at the consumer's tap from the internal plumbing following a period of stagnation and from the transitory contact with the lead service line. A 1 L sample is collected randomly during the day from a drinking water tap in each of the residences. Samples should be collected directly from the consumer's tap without prior flushing and without removing the aerator or screen; no stagnation period is prescribed, to better reflect consumer use. When more than 10% of the sites have a lead concentration greater than 0.005 mg/L (that is, when the 90th percentile exceeds the MAC/goal), it is recommended that utilities take corrective measures, including conducting additional sampling following the Tier 2 sampling protocol. The Tier 1 sampling protocol has been widely used for assessing system-wide lead levels and has been demonstrated to be an effective method for identifying systems, both with and without lead service lines, that would benefit from implementing corrosion control.
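
The Tier 1 decision rule above — more than 10% of sites exceeding 0.005 mg/L, equivalent to the 90th percentile exceeding the goal — can be expressed as a short calculation. The sketch below is illustrative only; the function name and data layout are assumptions, not part of any prescribed protocol:

```python
def tier1_exceeds_goal(results_mg_per_l, goal=0.005):
    """Return True when more than 10% of sampled sites exceed the goal,
    i.e., when the 90th percentile of the results is above 0.005 mg/L."""
    n_over = sum(1 for c in results_mg_per_l if c > goal)
    return n_over / len(results_mg_per_l) > 0.10
```

For example, 2 exceedances among 10 sites (20%) would trigger Tier 2 follow-up, whereas exactly 1 in 10 (10%) would not, since the rule requires more than 10%.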

The United Kingdom has documented the effectiveness of system-wide RDT sampling for compliance monitoring and for assessing the performance and optimization of corrosion control (Jackson, 2000; Health Canada, 2019a). Compliance sampling is undertaken by collecting a set number of samples at a frequency that depends on the population served in a discretely supplied area (zone). The frequency may be reduced if no failures have occurred in a defined period. However, increased sampling may be required when a lead problem is extensive. Such a situation occurred in the northwest of England, where 50 samples per year were required for each water supply zone (Cardew, 2003). The impact of sample size and the number of sites that have lead service lines were analyzed by Cardew (2009). The author indicated that, typically, 25 samples are taken for each compliance zone supplying fewer than 50,000 people. He was able to distinguish homes with no lead service line from those impacted by either soluble or particulate lead when evaluating compliance data collected over a 6-year period from three water supply zones. Variations in water quality play a role in the variability of lead, since some effects are seasonal in nature (e.g., temperature, alkalinity, organic matter). Other factors that contribute to the variability of lead include housing type, water use and customer behaviour, such as fully opening a faucet, as well as the sampling protocol used for assessing compliance (Cardew, 2003). It is important to develop a sampling program that takes into account seasonal effects to ensure that corrosion control programs capture and address this variability (Cardew, 2000, 2003).

The U.K. has since reduced the number of samples required per year for each water supply zone (see Table 2) (Cardew, 2003; DWI, 2010). However, using compliance data to prioritize action may require an increased sample size when the number of lead service line sites decreases in a specific area, given the reduction in the statistical significance of the results. An increased sample size can be achieved either by increasing the number of samples or by consolidating several years' worth of data. In these cases, the use of complementary approaches (e.g., LSL sentinel sites) will provide a more reliable method of estimating public exposure and the effectiveness of corrosion control achieved (Cardew, 2003). Baron (2001) concluded that up to 60 samples are necessary to obtain a statistically valid and accurate assessment of lead concentrations in a supply zone (population > 500). A minimum of 20 samples was identified for a supply zone with similar water quality characteristics throughout the zone. The study also indicated the need to increase the number of samples when compliance is high (i.e., 90%) to ensure that the zone is actually well characterized.
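
The sample-size considerations discussed by Baron (2001) and Cardew (2009) can be illustrated with a standard normal-approximation confidence interval for an observed exceedance rate. The function below is a generic statistical sketch, not a method prescribed by those authors:

```python
import math

def ci_halfwidth(p, n, z=1.96):
    # Approximate 95% confidence half-width for an exceedance
    # proportion p estimated from n randomly selected sites.
    return z * math.sqrt(p * (1 - p) / n)
```

With an observed 10% exceedance rate, 20 samples give a half-width of roughly ±13 percentage points, while 60 samples narrow it to about ±8 — illustrating why small zones or high-compliance zones need larger sample sizes or complementary approaches to be characterized reliably.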

A number of studies evaluated RDT against fully flushed (FF) and fixed (30-min) stagnation time (30 MS) sampling protocols to identify methods to estimate the average weekly concentration of lead at a consumer's tap (i.e., in comparison to composite proportional sampling) (Baron, 1997, 2001; European Commission, 1999; van den Hoven and Slaats, 2006). The objective of the European Commission (1999) study was to determine which of these 3 common sampling protocols was the most representative of the weekly average amount of lead ingested by consumers. This large-scale study was conducted in 5 member countries and included a variety of water qualities. Each country undertook sampling in a minimum of 2 areas, with at least 50% of the sampling sites in each area/district being served by lead service lines. The study identified that an effective protocol for estimating the average lead concentration at a consumer's tap was to take the average lead concentration of 2 1-L samples collected after the water had been fully flushed for 5 min (considered by the authors to be equivalent to 3 pipe volumes of the plumbing system) and then left to stagnate in the pipes for a fixed period of 30 min.

Baron (2001) confirmed these findings during a study in France comparing the three types of sampling, but without undertaking composite proportional sampling. The author found that at the zonal level (zone population not defined), RDT and 30 MS samples had very similar results when sampled for a sufficient number of households. It was determined that random selection of properties appeared to be a good solution for assessing the situation in a zone and helping to prioritize and determine the types of actions to implement. RDT sampling was considered more practical and acceptable to consumers, whereas 30 MS sampling was found to be more reproducible and equally representative. However, FF sampling was deemed to be unrepresentative of average concentrations and provided only an indication of the minimum lead levels at the tap (Baron, 2001; van den Hoven and Slaats, 2006).

It was determined that RDT sampling was representative and enabled the detection of a large proportion of sites with lead issues. Additionally, it was relatively inexpensive, practical to implement and acceptable to consumers. RDT samples were found to be less reproducible than 30 MS samples and had a tendency to overestimate lead exposure (European Commission, 1999; Jackson, 2000; Cardew, 2003; van den Hoven and Slaats, 2006).

Cardew (2003) found that CCT effectiveness could be assessed using the RDT compliance data. In addition, he established that optimization could be modelled to evaluate the point of diminishing returns for phosphate concentration on lead levels and undertook an analysis of RDT versus 30 MS sampling protocols under specific conditions.

Tier 1 30 MS (option 2)

The studies noted in the previous section, Tier 1 RDT (option 1), concluded that the 30 MS protocol was both reproducible and representative of typical exposures, the 30-min stagnation reflecting the average inter-use stagnation time of water in a residential setting (Bailey et al., 1986; van den Hoven and Slaats, 2006). It was determined that typical exposure was reflected by taking the average lead concentration of 2 1-L samples collected under the 30 MS protocol. The reproducibility of the 30 MS sample makes it a useful tool for monitoring changes in lead levels over time and assessing the efficacy of corrective treatment at sentinel sites (Jackson, 2000). Flushing prior to stagnation has been shown to eliminate accumulated particles (van den Hoven and Slaats, 2006; Deshommes et al., 2010a, 2012), whereas the increased turbulent flow seen at higher flow rates has been associated with the presence of particulate lead (Cartier et al., 2012a; Clark et al., 2014). In consideration of this, 30 MS sampling should be conducted at medium to high flow rates (> 5 L/min) to capture particulate lead release. This protocol was determined to be more expensive, less practical to implement and less acceptable to consumers than RDT sampling. However, the 30 MS sampling method is considered to have lower variability and to be more reproducible than the RDT method because of the fixed stagnation time (European Commission, 1999; Jackson, 2000; Cardew, 2003).

It is important to note that the sampling method itself can contribute to the variability of lead concentrations. The many factors contributing to this variability were investigated using a Monte Carlo simulation to assess water quality fluctuations and their impact on the overall variability of lead levels. Cardew (2003) examined the coefficient of variation (CV) for both 30 MS and RDT under different sampling and water quality conditions and found that water quality fluctuations increased the CV of both sampling methods. Because such fluctuations dramatically increase the number of samples needed for 30 MS as well, the sample size requirement for RDT need only be about 2 times greater than that for 30 MS under these conditions. By comparison, Jackson (2000) determined that an RDT protocol would require 3 to 5 times the number of samples to provide equivalent information if used as an alternative to stagnation samples. Consequently, the perceived advantage of sampling at the same properties using 30 MS is less significant in reality.
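
The Monte Carlo reasoning above can be sketched in miniature. The toy model below treats a lead result as the product of independent lognormal factors for site, water quality and stagnation time; all distributions and parameter values are assumptions for illustration, not Cardew's (2003) actual model:

```python
import random
import statistics

def simulate_cv(n=5000, site_sd=0.4, wq_sd=0.3, stagnation_sd=0.0, seed=1):
    # Toy Monte Carlo: each simulated result is a product of lognormal
    # factors. stagnation_sd=0 mimics the fixed stagnation of 30 MS;
    # a positive value mimics RDT's variable stagnation times.
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        v = rng.lognormvariate(0, site_sd) * rng.lognormvariate(0, wq_sd)
        if stagnation_sd:
            v *= rng.lognormvariate(0, stagnation_sd)
        vals.append(v)
    return statistics.stdev(vals) / statistics.mean(vals)
```

Setting the stagnation factor's spread to zero mimics a fixed 30-min stagnation; adding it mimics RDT, and the resulting CV is visibly larger. Raising `wq_sd` inflates the CV of both protocols, which is the mechanism by which water quality fluctuations narrow the sample-size gap between them.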

As noted above, 30 MS sampling is typically used at sentinel sites and should be conducted at medium to high flow rates (> 5 L/min) to capture particulate lead release. The 30 MS sampling protocol can also be used for investigating the cause of exceedances and identifying appropriate mitigation measures.

Selection of this protocol as an alternative method for residential monitoring is based on adaptation of a sampling protocol used in a variety of European studies that were intended to estimate consumers’ average weekly exposure to lead at the tap (Baron, 1997, 2001; European Commission, 1999). Although the protocol was used in these studies to estimate average weekly exposure, it may also be useful for obtaining information on the corrosivity of water towards lead pipe. It is therefore presented as a tool that can be used to identify residential sites with lead service lines that may have elevated lead concentrations. When combined with profile sampling, 30 MS can be used for investigative purposes at individual homes (Cartier et al., 2011). As discussed in detail below, the protocol has been adapted so that it can also be used as a tool for investigating the cause of corrosion.

Tier 2 a) 30 MS (options 1 and 2)

This protocol measures the concentration of lead in water that has been in contact with the lead service line as well as with the interior plumbing (e.g., lead solder, lead brass fittings) for a transitory and short period of time (30 min). Four consecutive 1 L samples are taken at the consumer’s cold drinking water tap (without removing the aerator or screen) after the water has been fully flushed for 5 min and the water has then been left to stagnate for 30 min. Each 1 L sample is analyzed individually to obtain a profile of lead contributions from the faucet, plumbing (lead in solder, brass and bronze fittings, brass water meters, etc.) and a portion or all of the lead service line.

The Tier 1 system goal is intended to trigger corrective measures, including conducting additional sampling. If fewer than 10% of sites have lead concentrations above 0.005 mg/L (that is, the 90th percentile does not exceed the goal), utilities should provide customers in residences with information on methods to reduce their exposure to lead. These measures can include flushing the appropriate volume of water prior to consumption following a period of stagnation, checking screens/aerators for debris that may contain lead (such as lead solder) and replacing their portion of the lead service line. It is also recommended that utilities conduct follow-up sampling at these sites to assess the effectiveness of the corrective measures undertaken by the consumer.

This sampling protocol will provide utilities with the water quality information needed to protect the most sensitive populations from unsafe concentrations of lead by determining whether consumers need to be educated to flush their drinking water systems after periods of stagnation. The samples collected are also used from an operational standpoint to determine whether or not the water distributed has a tendency to be corrosive towards lead and, if so, to help determine the next steps that should be taken in implementing a corrosion control program.

This sampling technique is considered to be the most informative of the routine sampling techniques and should be used to increase the likelihood that system-wide problems with lead will be correctly identified, including the occurrence of elevated concentrations of lead resulting from a 30-min stagnation period in contact with a variety of lead materials.

The collection of 4 1-L samples to be analyzed individually is selected, since this will provide a profile of the lead contributions from the faucet, the interior plumbing of the home and, in many cases, all or a portion of the lead service line. Previous studies have indicated that 95% of the lead contributed from faucets is flushed in the first 200 to 250 mL. In addition, the contribution from lead solder can generally be found in the first 2 L of water flushed from the plumbing system. The collection of 4 1-L samples to be analyzed individually will therefore provide the water supplier with information on both the highest potential lead levels at the tap and the source of the lead contamination. This information can then be used to determine the best corrective measures for the system and to provide data to help assess whether corrosion control has been optimized.

Tier 2 b) 6 h stagnation (options 1 and 2)

Tier 2 sampling is required only when the first-tier sampling identifies more than 10% of sites with lead concentrations above 0.005 mg/L (SG), that is, when the 90th percentile exceeds the system goal. Sampling is conducted at 10% of the sites sampled in Tier 1, specifically the sites at which the highest lead concentrations were measured. For smaller systems (i.e., serving 500 or fewer people), a minimum of 2 sites should be sampled to provide sufficient lead profile data for the system.
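
Selecting the Tier 2 subset — the highest-lead 10% of Tier 1 sites, with a floor of 2 sites for small systems — is a simple ranking exercise. A minimal sketch, in which the function name and data layout are assumptions:

```python
def tier2_sites(tier1_results, fraction=0.10, minimum=2):
    # tier1_results maps site identifier -> lead result (mg/L).
    # Select 10% of Tier 1 sites, ranked by highest lead, with a
    # floor of 2 sites for small systems.
    n = max(minimum, round(len(tier1_results) * fraction))
    ranked = sorted(tier1_results, key=tier1_results.get, reverse=True)
    return ranked[:n]
```

For a system with 30 Tier 1 sites this selects the 3 highest-lead sites; for a system with only 5 sites the 2-site floor applies.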

Four consecutive 1 L samples are taken at the consumer’s cold drinking water tap (without removing the aerator or screen) after the water has been stagnant for a minimum of 6 h. Each 1 L sample is analyzed individually to obtain a profile of lead contributions from the faucet, plumbing (lead in solder, brass and bronze fittings, brass water meters, etc.) and a portion or all of the lead service line.

The objectives of Tier 2 sampling are to provide information on the source and potentially highest levels of lead, which will help utilities select the best corrective measures. It will also provide the best information for assessing the effectiveness and optimization of the corrosion control program.

In order to obtain information on the potentially highest levels of lead, sampling after a period of stagnation is important. In particular, the Tier 2 protocol is intended to capture water that has been stagnant not only in the premise plumbing but also in a portion or all of the lead service line (if present). Similar to other lead materials (i.e., lead solder and brass fittings), lead concentrations in water that has been stagnant in lead pipe also increase significantly with time up to 8 h. Several factors affect the slope of the stagnation curves for lead pipe in drinking water. Generally, the concentration of lead increases rapidly in the first 300 min. The typical stagnation curve for lead pipe is very steep for stagnation times shorter than 6 h. Therefore, small differences in the amount of time that water is left to stagnate may cause considerable variability in the lead concentration (Kuch and Wagner, 1983; AwwaRF, 1990, 2004; Schock, 1990a).
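
The shape of the stagnation curve described above — a steep rise that flattens as the concentration approaches equilibrium — can be illustrated with a saturating exponential. The equilibrium concentration and time constant below are placeholder values, not parameters from Kuch and Wagner (1983):

```python
import math

def stagnation_curve(t_min, c_eq=150.0, tau_min=120.0):
    # Illustrative saturating-exponential approach to an assumed
    # equilibrium concentration c_eq (µg/L) with an assumed time
    # constant tau_min (minutes).
    return c_eq * (1.0 - math.exp(-t_min / tau_min))
```

With these placeholder parameters, a 30-min stagnation yields only about a fifth of the equilibrium concentration, whereas 6 h yields about 95% of it — which is why small differences in stagnation time cause the most variability on the steep, early portion of the curve.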

Another important factor that contributes to lead levels at the tap is the volume of water that has been in contact with the lead service line following a period of stagnation. Lead profiling studies conducted in Canada and the United States have indicated that the highest concentration of lead at the tap in residences with lead service lines occurs in samples that are representative of the water that has stagnated in the lead service line (Campbell and Douglas, 2007; Huggins, 2007; Kwan, 2007; U.S. EPA, 2007; Craik et al., 2008). Data from these studies indicate that when water is stagnant in the lead service line for 6 h, the maximum concentration of lead can be found between the 4th and 12th litres of sample volume. Generally, substantially elevated lead concentrations were observed in the 4th, 5th or 6th litre of sample volume in a number of studies (Campbell and Douglas, 2007; Douglas et al., 2007; Sandvig, 2007; Craik et al., 2008). Extensive profiling of lead levels in homes with lead service lines in Washington, DC, following a switch to chloramination demonstrated that the average mass of lead release (concentration adjusted for actual volume) attributed to the lead service line was 470 µg (73 µg/L) compared with 26 µg (26 µg/L) in the first litre sample and 72 µg (31 µg/L) in samples from the remaining home piping and components prior to the lead service line (U.S. EPA, 2007).
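
The distinction drawn above between concentration (µg/L) and mass release (µg) can be made concrete: in a litre-by-litre profile of 1 L samples, each sample's concentration is numerically its mass contribution, so the total mass and the peak litre fall out directly. The profile values below are hypothetical, not data from the cited studies:

```python
def profile_mass_ug(conc_ug_per_l):
    # Total lead mass (µg) across a sequence of consecutive 1 L
    # profile samples: each litre contributes concentration x 1 L.
    return sum(conc_ug_per_l)

def peak_litre(conc_ug_per_l):
    # 1-indexed litre of sample volume with the highest concentration.
    return max(range(len(conc_ug_per_l)), key=conc_ug_per_l.__getitem__) + 1
```

A hypothetical profile such as 26, 31, 30, 73, 70, 65 µg/L over six litres peaks in the 4th litre — the range in which profiling studies often found water that had stagnated in the lead service line.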

Determining the potential for elevated concentrations of lead from water that has been stagnant in lead service lines is therefore an important component of a sampling protocol for assessing corrosion in residential distribution systems and subsequent corrosion control optimization. Comparing the samples with the highest lead concentrations before and after corrosion control implementation will provide utilities with essential data for evaluating whether treatment has been optimized. This will ultimately help demonstrate that the highest lead levels have been reduced to the greatest extent possible. It is estimated that, in Canada, collection of a minimum of 4 1-L samples following a period of stagnation of 6 h will increase the likelihood that the highest concentrations of lead will be detected. Since the volume of sample needed to obtain water that has been stagnant in the lead service line will depend on the plumbing configuration at each site, utilities should conduct a broad characterization of the types of high-risk sites to estimate whether collection of 4 1-L samples will be sufficient.

Collection of 4 1-L samples to be analyzed individually is selected, since this will provide a profile of the lead contributions from the faucet, the interior plumbing of the home and, in many cases, all or a portion of the lead service line. Previous studies have indicated that 95% of the lead contributed from faucets is flushed in the first 200 to 250 mL. In addition, the contribution from lead solder can generally be found in the first 2 L of water flushed from the plumbing system. The collection of 4 1-L samples to be analyzed individually will, therefore, provide the water supplier with information on both the highest potential lead levels at the tap as well as the source of the lead contamination. This information can then be used to determine the best corrective measures for the system and provide data to help assess whether corrosion control has been optimized.

Limitations

In general, the objectives of a residential monitoring program are to identify and diagnose systems in which corrosion of lead from a variety of materials is an issue, to assess the potential for consumers to be exposed to elevated concentrations of lead, and to assess the quality and effectiveness of corrosion control programs. The residential monitoring program: option 2, for residences with lead service lines, has not been assessed for these purposes. Rather, it is intended as a tool for identifying elevated lead concentrations at residences with lead service lines. It is important to note that this sampling protocol has not been evaluated to determine its effectiveness for detecting corrosion of other plumbing materials, nor does it measure the potentially higher levels of lead that may be present in water stagnating for longer periods in the household plumbing and lead service lines.

A study by Kuch and Wagner (1983) indicates that concentrations of lead approach an equilibrium value after 5 to 7 h of stagnation, depending on the diameter of the pipes (corresponding to 1/2-inch and 3/8-inch [1.3 cm and 1.0 cm] pipe). In addition, the concentration of lead increases rapidly in the first 300 min of stagnation in lead pipe. Lead contributions from other materials, such as lead brass fittings and lead solder, have also been found to increase significantly following 4 to 20 h of stagnation. There are limited field data comparing lead levels at the tap following different periods of stagnation. Therefore, it is difficult to evaluate whether a 30-min stagnation period is adequate for assessing corrosion. Limited studies suggest that lead concentrations following a period of stagnation of 30 min are substantially lower in the equivalent sample volume than those measured at the same tap following 6 h of stagnation (AwwaRF, 1990; Douglas et al., 2007; Craik et al., 2008). Therefore, the possibility of underestimating the highest concentration of lead at consumers' taps may be significant when using a stagnation time of 30 min.

Determination of sampling protocols for non‑residential/residential buildings

The objectives of the sampling protocols for non-residential and residential sites, such as child care centres, schools, and residential and office buildings, are to locate specific lead problems within the buildings and identify where and how to proceed with remedial actions. The intention is to minimize lead concentrations at the cold drinking water outlets (i.e., fittings/fixtures such as faucets and fountains) used for drinking and cooking and therefore protect occupants’ health from exposure to lead. The sampling protocols are based on an understanding of the variations in lead concentrations observed at outlets in a non-residential building resulting from sources of lead within the plumbing and water use patterns (Deshommes et al., 2012; McIlwain et al., 2016; Katner et al., 2018; Miller-Schulze et al., 2019).

In some cases, responsible authorities may want to collect Tier 1 and Tier 2 samples at the same time to eliminate the need to return to the site. In this case, authorities should be aware that the confidence in some sample results will decrease, since flushing water through one outlet may compromise the flushed samples taken from other outlets that are located in close proximity.

Tier 1 sampling protocol

A first-draw 250 mL sample is taken at the locations identified in the sampling plan after the water has been stagnant for a minimum of 8 h, but generally not more than 24 h. To ensure that representative samples are collected, the aerator or screen on the outlet should not be removed prior to sampling. If the lead concentration exceeds 0.005 mg/L (MAC) at any of the monitoring locations, corrective measures should be taken.

The Tier 1 sampling protocol has been used in non-residential settings for locating specific lead issues, determining how to proceed with remedial measures and demonstrating that remediation has been effective. Numerous studies have been published on extensive sampling programs for measuring lead concentrations at the tap, conducted in schools and other non‑residential buildings. These studies demonstrated that the collection of 250 mL samples following a period of stagnation of a minimum of 8 h, but generally not more than 24 h, is effective at identifying outlets with elevated lead concentrations (Gnaedinger, 1993; Murphy, 1993; Maas et al., 1994; Bryant, 2004; Boyd et al., 2008a,b). Using this sampling method, several studies were able to determine the source of lead within schools and develop a remediation plan (Boyd et al., 2008a,b).

As with residential monitoring programs, each component of a sampling protocol in non‑residential settings, such as the stagnation time, the volume of water collected and the SG, has important implications as to the usefulness of the data collected. Since the objectives of conducting sampling in non-residential buildings are different from those in residential settings, the volume of water collected is also different.

The Tier 1 and Tier 2 sampling protocols for non-residential sites are based on the collection of a 250 mL sample volume. Studies evaluating the amount of lead leaching from outlets such as kitchen faucets have demonstrated that more than 95% of the lead can be found in the first 200 to 250 mL of water from the faucet (Gardels and Sorg, 1989). Lead levels in non-residential buildings have generally been found to decrease significantly following flushing of the outlet for 30 s. This suggests that the fountain or faucet and the connecting plumbing components can be major contributors to elevated lead concentrations at outlets in non-residential buildings (Bryant, 2004; Boyd et al., 2008a,b; Pieper et al., 2015). The collection of a larger volume of water, such as 1 L, would include a longer line of plumbing prior to the outlet. This plumbing may contain valves, tees and soldered joints that could contribute to the lead concentration in the 1 L sample. However, it would not be possible to identify which material was releasing the lead. In addition, collecting such a large volume from a drinking water fountain might dilute the initial high concentrations observed in the outlet. This is not desirable, since water collected from sections of plumbing farther from the outlet typically has lower lead concentrations (U.S. EPA, 2004). Therefore, the collection of a sample volume that is smaller (250 mL) than those typically used to assess corrosion (1 L and greater) in residential systems is considered important for sampling in non-residential buildings. A 250 mL sample volume is selected for sampling in non-residential buildings, as it represents water from the fitting (fountain or faucet) and a smaller section of plumbing and is therefore more effective at identifying the source of lead at an outlet (U.S. EPA, 1994, 2006).
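
The dilution argument for the 250 mL volume can be shown with simple volume-weighted averaging; the concentrations below are hypothetical:

```python
def one_litre_avg(outlet_250ml_ug_l, downstream_ug_l):
    # A 1 L sample mixes 0.25 L of water from the outlet with 0.75 L
    # of water from the upstream plumbing; the measured result is the
    # volume-weighted mean of the two concentrations.
    return 0.25 * outlet_250ml_ug_l + 0.75 * downstream_ug_l
```

For instance, an outlet delivering 20 µg/L in its first 250 mL, followed by upstream water at 2 µg/L, would report only 6.5 µg/L in a 1 L sample — masking the outlet as the dominant lead source.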

As discussed in the section Stagnation time, water age and flow, studies examining sources of lead at the tap have found lead solder and brass fittings to be significant sources of elevated lead concentrations following a period of stagnation (Lee et al., 1989; Singh and Mavinic, 1991; AwwaRF, 2004; U.S. EPA, 2007). Depending on the age and type of material, the concentrations of lead from brass fittings have been shown to increase significantly following stagnation periods of between 4 and 20 h (Lytle and Schock, 2000). As a result, the water-use pattern in a building is an important factor in determining lead concentrations at the tap. Since water-use patterns are often intermittent in non-residential buildings, such as day care centres, schools and office buildings, it is important to sample following a period of stagnation. The most conservative standing time prior to sampling is between 8 and 18 h, since it is most likely to result in the measurement of peak concentrations of lead. Therefore, first-flush samples should be collected following a minimum period of stagnation of 8 h, but not greater than 24 h, so that they are representative of the longer periods in which outlets are not used for drinking during most days of the week in a non-residential building.

When the SG of 0.005 mg/L is exceeded, interim corrective measures should be taken to protect the health of sensitive populations in settings with intermittent water-use patterns, such as non-residential buildings. Occupants of the building and other interested parties, such as parents, should be informed of the results of any sampling conducted in the building.

Tier 2 sampling protocol

To help identify the source of lead at outlets that exceed the Tier 1 system goal, follow-up samples are taken of water that has been stagnant in the upstream plumbing but not in the outlet itself. The results can then be compared to assess the sources of elevated lead and to determine the appropriate corrective measures. To allow this comparison, a second 250 mL sample is collected following the same period of stagnation: after a minimum of 8 h, but generally not more than 24 h, the outlet is flushed for 30 s and the 250 mL sample is then collected. A 30-s flush was selected because it should normally clear the water present in the outlet itself. Water fountains and cold water outlets exceeding the Tier 1 system goal are resampled in the same year and in the same season.

If the lead concentration in the second 250 mL sample decreases below 0.005 mg/L, it can be concluded that the water fountain, cold drinking water outlet or plumbing in its immediate vicinity is the source of the lead. If concentrations of lead above 0.005 mg/L are found in the Tier 2 samples, the lead sources may include the plumbing materials behind the wall, a combination of the outlet and the interior plumbing, or contributions of lead from the service connection. When the Tier 2 lead concentration exceeds 0.005 mg/L, immediate corrective measures should be taken, the lead sources should be determined and remediation measures should be implemented.
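The Tier 1/Tier 2 interpretation above is a small decision tree. A minimal sketch in Python, assuming the 0.005 mg/L system goal from the text; the function name and category strings are illustrative, not terms from the guidance:

```python
from typing import Optional

# System goal for lead in the 250 mL samples, in mg/L (from the guidance text).
SG_MG_L = 0.005

def interpret(tier1_mg_l: float, tier2_mg_l: Optional[float] = None) -> str:
    """Classify an outlet from its first-flush (Tier 1) and, if collected,
    post-30-s-flush (Tier 2) 250 mL sample results."""
    if tier1_mg_l <= SG_MG_L:
        return "meets system goal"
    if tier2_mg_l is None:
        return "exceeds system goal; collect a Tier 2 sample"
    if tier2_mg_l < SG_MG_L:
        # Lead disappears once the outlet volume is flushed out.
        return "outlet or nearby plumbing is the lead source"
    # Lead persists after the outlet volume is flushed out.
    return "upstream plumbing and/or service connection contributes lead"

print(interpret(0.012, 0.002))  # outlet or nearby plumbing is the lead source
print(interpret(0.012, 0.009))  # upstream plumbing and/or service connection contributes lead
```

In practice the classification would feed into the plumbing profile rather than stand alone, since a Tier 2 exceedance only narrows the source to the upstream materials.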

The results of Tier 1 and Tier 2 sampling should be interpreted in the context of the plumbing profile so that an assessment of the lead contributions can be made and the appropriate interim and long-term corrective measures can be taken. Competent authorities can develop the plumbing profile using the questions provided in the U.S. EPA’s 2006 technical guidance on lead in drinking water (U.S. EPA, 2006). Information on additional sampling that can be conducted to help determine the source of lead if it has not been identified, as well as detailed guidance on the interpretation of Tier 1 and Tier 2 sampling results, can be obtained from this reference material (U.S. EPA, 2006).

Determination of non-residential/residential monitoring sites

In general, the level of lead in drinking water entering non-residential buildings from a distribution system is low. It is recommended that at each monitoring event, samples be taken from an outlet close to the point where the water enters the non-residential building. This will determine the concentration of lead contributed by either the service line or the main water distribution system (water main). Ideally, samples should be collected after an appropriate period of flushing so that they are representative of water from the service line and from the water main. The volume of water to flush will depend on the characteristics of the building plumbing system (i.e., the distance from the sampling outlet to the service line and the water main). In some situations (e.g., where there is a lead service line to the building), it may be difficult to obtain a sample that is representative of water from the water main because of contributions of lead from the service line. In this case, an alternative sampling location may need to be selected.

The occurrence of elevated lead concentrations within buildings such as schools is typically the result of leaching from plumbing materials and fittings and water use patterns (U.S. EPA, 2006; Boyd et al., 2007; Pinney et al., 2007). Studies evaluating lead levels at drinking water fountains and taps in schools in Canada and the United States have demonstrated that levels can vary significantly within buildings and can be randomly distributed (Boyd et al., 2007; Pinney et al., 2007). An evaluation of lead levels in schools in Seattle, Washington, found that 19% of drinking fountains had concentrations of lead above 0.015 mg/L (the system goal used in that evaluation) in the first-draw 250 mL samples (Boyd et al., 2008a). The lead was attributed to galvanized steel pipe, 50:50 lead–tin solder and brass components such as bubbler heads, valves, ferrules and flexible connectors. As a result, it is important to measure lead levels at fountains and outlets used for consumption in non-residential buildings to determine whether elevated lead levels may be present and to identify where corrective measures are required to protect occupants’ health.

Although limited information is available on the variability of lead levels at individual fountains and outlets within non-residential buildings, studies have shown that it is not possible to predict which outlets will have elevated levels. The number of monitoring sites that should be sampled in a non‑residential building should be based on the development of a sampling plan. A plumbing profile of the building should be completed to assess the potential for lead contamination at each drinking water fountain or cold water outlet used for drinking or cooking. Competent authorities can develop the plumbing profile using the questions provided in the U.S. EPA 3Ts guidance (U.S. EPA, 2006). Information in the plumbing profile can then be used to develop a sampling plan that is appropriate for the type of building being sampled (e.g., child care centre, school, office building).

Authorities that are responsible for maintaining water quality within non-residential buildings will need to do more extensive sampling at individual outlets based on the sampling plan developed for the building. The sampling plan should prioritize drinking water fountains and cold water outlets used for drinking or cooking based on information obtained in the plumbing profile, including, but not limited to, areas containing lead pipe, solder or brass fittings and fixtures, areas of stagnation and areas that provide water to consumers, including infants, children and pregnant people.

When sampling at kitchen taps in non-residential buildings, the aerators and screens should be left in place, and typical flow rates should be used (approximately 4 to 5 L/min). However, for other types of outlets, such as water fountains, lower flow rates are typical and should be used when sampling. These steps help to ensure that the sample collected is representative of the average water quality consumed from the type of outlet being sampled. It is also important to note that opening and closing shut-off valves to fittings and fixtures (i.e., faucets and fountains) prior to sampling have been shown to significantly increase lead concentrations (Seattle Public Schools, 2005). After opening a shut-off valve, outlets should be completely flushed and then allowed to stagnate for the appropriate period of time.

The average intake of lead by an individual varies considerably as a result of several factors, including consumer behaviour, configuration of the plumbing system (e.g., single-family dwelling, apartment building, office building, school), water usage patterns (e.g., flow regime), contact time of the water with the plumbing, seasonal effects and water chemistry (Cardew, 2000, 2003; van den Hoven and Slaats, 2006; Schock and Lytle, 2011; Deshommes et al., 2016). Sampling methods used to assess exposure should ideally take these variations into account. Studies have demonstrated that composite proportional sampling captures the inherent variability of lead exposure from drinking water and is representative of this exposure (Anjou Recherche, 1994; van den Hoven and Slaats, 2006; Schock and Lytle, 2011). Composite proportional sampling is achieved with a consumer‑operated device fitted to the drinking water tap that splits off a small, constant proportion of every volume of water drawn, typically over a period of 1 week. Composite proportional sampling requires equipment that is impractical for routine monitoring and is better suited for long‑term sampling.
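Because the device diverts the same fixed fraction of every draw, the concentration in a composite proportional sample is, in effect, the volume-weighted average of all draws over the collection period. A minimal illustration of that arithmetic in Python, using hypothetical draw volumes and concentrations:

```python
# Hypothetical draws at one tap over a week: (volume in L, lead in mg/L).
# Both the volumes and the concentrations are illustrative assumptions.
draws = [(0.25, 0.015), (1.0, 0.004), (0.5, 0.008), (2.0, 0.002)]

# The device diverts the same fixed fraction of every draw, so the
# composite concentration equals the volume-weighted mean of all draws.
total_volume_l = sum(v for v, _ in draws)
total_lead_mg = sum(v * c for v, c in draws)
composite_mg_l = total_lead_mg / total_volume_l

print(round(composite_mg_l, 4))  # 0.0042
```

Note how the high first-draw concentration (0.015 mg/L) is damped by the larger flushed draws, which is exactly why a composite sample reflects average exposure rather than peak concentrations.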

For most schools, no single representative sampling site can be established; every drinking water location must therefore be sampled to assess children's exposure. Depending on the type of sampling site (i.e., school vs. multi-dwelling building), smaller individual sample volumes and smaller total volumes may be necessary (Health Canada, 2009b; Schock and Lytle, 2011; U.S. EPA, 2018).
