Environmental Risk Assessment of Agrochemicals — A Critical Appraisal of Current Approaches

This chapter provides insights into the difficulties and challenges of performing risk evaluations of agrochemicals. It is a critical review of the current methodologies used in ecological risk assessment of these chemicals, not of their risks to humans. After an introduction to the topic, the current framework for ecological risk assessment is outlined. Two types of assessments are typically carried out depending on the purpose: i) regulatory assessments for registration of a chemical product; and ii) ecological assessments, for the protection of both terrestrial and aquatic ecosystems, which are usually site-specific. Although the general framework is well established, the methodologies used in each of the steps of the assessment suffer from a number of shortcomings. Notwithstanding the subjectivity implicit in the evaluation of risks, there is scepticism in scientific circles about the appropriateness of the current methodologies because, after so many years of evaluations, we are still incapable of foreseeing the negative consequences that some agrochemicals have in the environment. A critical appraisal of such methodologies is imperative if we are to improve the current assessment process and fix the problems we face today. The chapter first reviews the toxicity assessment methods, pointing to the gaps in knowledge about this essential part of the process and suggesting avenues for further improvement. Deficiencies in the current regulations regarding toxicity testing are discussed, in particular the effect of the time factor on toxicity and the issue of complex mixtures. Other matters of concern are the extrapolation of toxicity data from the individual to the population and community levels, and sub-lethal effects. The exposure assessment methods are dealt with next; these rely on modelling and on actual measurements of chemical residues in the environment.
Various techniques employed to determine the exposure and bioavailability of agrochemicals to the various organisms in both aquatic and terrestrial ecosystems are reviewed. Again, the shortcomings and gaps in knowledge are addressed and suggestions for improvement are pointed out. Then, the process of combining the information from the toxicity and exposure assessments to evaluate risks is discussed. Tiers I and II of the risk assessment are reviewed. The challenge here is to maintain objectivity in the evaluations; this may require the introduction of new methods of risk assessment. Finally, the risk assessment implies establishing a management strategy that aims at reducing or minimising the impacts of agrochemicals under normal agricultural scenarios. Recommendations are often case-specific and need to be based on sound science as well as common-sense principles. The chapter concludes with a summary of issues that need to be considered for improving risk assessments of agrochemicals.


Introduction
The enormous number of chemical contaminants currently found in the air, soil, water and sediments of our planet, mostly as a result of human activities [1], calls for an evaluation of their risks to the web of life. Contaminants that potentially result in adverse biological effects, whether at the individual, population, community or ecosystem level (the so-called pollutants), are of concern [2]. For obvious reasons, pollutants that relate to our food resources are of particular interest and require priority in risk assessment. Among the latter chemicals are those used in agricultural production, also called agrochemicals, which include pesticides of various kinds, plant growth regulators, repellents, attractants and fertilisers. These chemicals are intentionally added to the environment to control pests and diseases of crops, requiring an accurate assessment of the maximum amounts that can be applied.
Because of their huge market in all continents, agrochemicals are among the most common pollutants in one fifth of the Earth's land [3]. Also, given the high toxicity of most insecticides, herbicides and fungicides to their target and nontarget organisms, their negative impacts on the environment cannot and must not be ignored. Indeed, pesticides constitute one of the main drivers of population decline in some wildlife species [4]. Therefore, proper evaluations of the risk that pesticides and other agrochemicals pose to organisms and ecosystems should be conducted in a scientific manner.
Various approaches have been used in the past to evaluate the ecological risks of agrochemicals, and all of them focus on the intrinsic toxicity these chemicals may have (their poisonous nature) as well as their exposure in the environment. Indeed, the ability of a chemical to adversely affect the functioning of organisms is a major factor to take into account in evaluations of risk. That ability is governed primarily by the specific mode of action of the chemical under consideration, while the magnitude of adverse effects depends primarily on the exposure level and often on the exposure time as well. Here we have the basic components of any chemical risk assessment: toxicity and exposure [5].
Organisms are not only subject to the effects of single, individual chemicals, since the environment they live in is usually contaminated with many compounds, each in different concentrations and with different modes of action. Site-specific risk assessments, therefore, must refer to chemical mixtures if evaluations are to be meaningful. Despite the many advances in this area of environmental science, our ability to properly assess the risk of individual contaminants is still far from desirable, let alone the assessment of chemical mixtures! Even if the characterisation of a pollutant can be done on the basis of its physicochemical properties and toxic mode of action, the exposure component is often difficult to assess, as it involves issues of bioavailability, interaction of the chemical with other natural stressors (e.g. temperature, pH, dissolved and suspended material) and acclimation and adaptation of individual organisms, all of which are difficult to gauge. It is often said that chemical risk assessments lack realism, as they are too reliant on toxicity tests carried out in laboratories, whereas the conditions of the natural environment frequently reduce both exposure and availability considerably [6].
This chapter deals with the ecological risk assessment (ERA) of agrochemicals, deliberately avoiding any discussion about the risk of human exposure to residues of these contaminants in food, which is treated in a different chapter. Emphasis is placed on the limitations of the current approaches, with some suggestions being proposed to improve the evaluations or to overcome specific problems. A critical approach will thus provide insights into the difficulties and challenges of performing adequate risk assessments.

Types of risk assessment
Chemical risks refer to specific organisms and situations. Furthermore, the primary purpose of their evaluation determines the type of assessment that is carried out. On this basis, two types of risk assessments are usually performed with agrochemicals: (i) ecological risk assessments for protection of ecosystems and (ii) risk assessments for registration of agrochemicals or regulatory purposes, which are partly guided by the need to protect the environment.
Ecological risk assessment aims at protecting organisms, communities and ecosystems. It applies to all man-made chemicals and their degradation products as well as natural contaminants that are poisonous or can be toxic above certain threshold concentrations (e.g. copper, trace metals, nitrate, etc.). Usually ERA is site-specific, as it considers environmental conditions specific to an area or region in addition to the characteristics and levels of the agrochemicals applied. As a result, risk management plans can be proposed, i.e. remediation processes to reduce or eliminate residues or mitigation measures that reduce the loads discharged into the environment.
Agricultural products that are released into the environment need to be assessed for impacts on ecosystems and consequently be approved by regulatory authorities. The aim of the regulatory risk assessment is to ensure that those chemicals that enter the environment do not pose a serious threat to living organisms. Regulatory risk assessment usually follows preset protocols, such as those of the OECD or the US Environmental Protection Agency (EPA). Because any chemical can potentially affect living organisms, regulators can impose restrictions on usage or establish conditions that mitigate or avoid the exposure of the organisms at risk. While new agricultural products are developed by chemical companies in astonishing numbers, only a fraction of those make it to the market. In the worst case, agrochemicals can be banned and withdrawn from national and/or international markets, as in the case of the old organochlorine pesticides (e.g. DDT, dieldrin, aldrin, heptachlor). Obviously, economic considerations should not influence the decisions of regulators when certain chemicals are deemed unsafe.

Framework for ecological risk assessment
Ecological risks of agrochemicals are evaluated by standard procedures, as is done with other chemicals. This approach is similar to the evaluation of risks for humans, except for one main difference: human risk assessments aim at protecting individuals, whereas ecological risk assessments aim at protecting ecological structures, whether populations, communities or entire ecosystems. Another difference is the preferential use of median effect concentrations (EC50), in particular the median lethal concentrations (LC50) and lethal doses (LD50), in risk assessments of agrochemicals, rather than no-observed-effect concentrations or levels (NOEC or NOEL). This is because data on the latter metrics are less readily available for the surrogate species used in toxicological testing of agricultural compounds; typically, only mammalian toxicity is measured at the lowest levels. Besides this difficulty, NOEC and NOEL are statistically unreliable: it is almost impossible to prove that there are no effects in natural populations of organisms from exposure to low concentrations of chemicals [7,8].
In essence, all risk assessments rely on the framework shown in Fig. 1, which is based on standard procedures of the US EPA [9] and has been adopted, with some modifications, by most regulatory authorities in OECD countries [10]. The two main components of the risk framework are the toxicity assessment and the exposure assessment [5]. Without toxicity, there is no risk, but if organisms are not exposed to a toxic chemical, they are under no risk either. Therefore, it is only when organisms are exposed to potentially toxic chemicals that a risk assessment must be undertaken.
Toxicity levels of a chemical (either concentrations in aquatic media or doses in terrestrial animals) must be determined in the laboratory, as effects from exposure under field conditions may be confounded by other factors. Standard toxicity tests, carried out on several taxa of plants and animals, are primarily designed to establish the acute lethality [11]: either the LC50 for aquatic organisms after a given time of exposure or the LD50 for doses applied to or ingested by terrestrial organisms. In addition, chronic toxicity after prolonged and repeated exposure (whether lethality or another effect) and reproductive endpoints after sublethal exposure over time are often determined as well [12]. Knowledge about the mode of action of the chemical should help understand its toxic effects [13].
The exposure characterisation is complex as it usually involves some sort of modelling to determine the possible concentrations of chemical in the various media in which the organisms live [14]. Inputs for the models include all relevant physicochemical properties of the chemical under consideration, e.g. water solubility, lipophilic behaviour, volatility and degradation constants in water, soil, light conditions, plants and animals. The latter constants are important to understand the persistence of chemical residues in the environment. In site-specific ERA, the exposure should also include monitoring data to validate the modelling used. If monitored and modelled data differ markedly, an explanation should be sought for the discrepancies.
In countries of the OECD, assessment of the two components of risk follows a tiered process (Fig. 1). In the first step, rough estimates of risk are obtained from a simple ratio, the hazard quotient (HQ), between the predicted environmental concentration (PEC) in a particular medium and the acute lethality (LC50 or LD50) to the standard test organisms found in that medium. HQ values above 1 are unacceptable, and risk thresholds are commonly set at 0.1, meaning that assessments that result in HQ < 0.1 pass the first tier and can be regarded as "safe" for a particular environment. HQ values > 0.1 require evaluation in a second tier. However, a given chemical may be considered safe for terrestrial organisms such as rats, but not safe for aquatic organisms such as midge larvae. Even within the same environmental medium, some species in the field are more sensitive than the surrogate species used in the laboratory tests. To overcome this problem, probabilistic risk assessments (PRAs) have been derived to identify the fraction of organisms that would be negatively affected by the PEC in a given environmental medium [15]. Risk in a PRA is defined as a feasible detrimental outcome of an activity or chemical, and it is characterised by two quantities: i) the magnitude (severity) of the possible adverse consequence(s) and ii) the likelihood of occurrence of each consequence. Species sensitivity distributions (SSDs) are typically used for that purpose [16]; see below. HQ is the preferred method in regulatory risk assessments of agrochemicals, whereas PRA is mostly used in site-specific ERA, which in most jurisdictions amounts to a second tier.
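The first-tier screen described above can be sketched in a few lines of code; the PEC and LC50 values used here are hypothetical illustrations, not regulatory figures.

```python
# First-tier screen based on the hazard quotient (HQ):
#   HQ = PEC / acute lethality (LC50 or LD50)
# HQ < 0.1 passes the first tier; larger values go to a second tier.
# All numbers below are hypothetical.

def hazard_quotient(pec, lc50):
    """Ratio of predicted environmental concentration to acute lethality."""
    return pec / lc50

def tier_one_verdict(hq, threshold=0.1):
    """Apply the conventional 0.1 risk threshold."""
    return "pass (presumed safe)" if hq < threshold else "refer to tier II"

# Hypothetical case: PEC of 0.5 ug/L in surface water against a 48-h
# LC50 of 20 ug/L for a standard aquatic test species.
hq = hazard_quotient(pec=0.5, lc50=20.0)
print(f"HQ = {hq:.3f}: {tier_one_verdict(hq)}")  # HQ = 0.025, passes tier I
```

Note that the same chemical must be screened separately against each medium and surrogate species, which is precisely the limitation that motivates the probabilistic approaches discussed here.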
Whether HQs or PRAs are used, if the chemical does not pass the first tier, a further evaluation must be done. The second tier involves gathering evidence of effects under realistic environmental conditions. Field trials and mesocosm experiments are typically carried out to obtain that information. A number of difficulties with these trials and experimental ecosystems will be addressed in the respective sections below. Suffice it to say now that, as a result of the second tier, a management plan for the agrochemical under evaluation should be drawn up to mitigate its possible risks. Such plans may consider restrictions on the usage of a chemical product and other conditions that aim at avoiding the exposure of organisms in the environmental media for which a risk has been identified.

The weight-of-evidence approach
Weight of evidence (WOE) is an evaluation of the evidence on one side of an issue as compared with the evidence on the other side, or an evaluation of the evidence on multiple issues. The term describes the type of consideration made under uncertainty, when one must ascertain whether the evidence supporting one side of a cause or argument outweighs that supporting the other. The US Environmental Protection Agency started to consider a WOE approach for the assessment of carcinogenic chemicals in 1986 [17]. The same approach was later introduced for ecological risk assessments [18], and since then, the European Union and other governments have followed suit to address specific problems posed by chemicals. The concept of WOE is familiar to the public, as it is embodied in the justice systems of western countries [19]. Several methods, ranging from qualitative to quantitative, are used to reach conclusions from the various lines of evidence synthesised in the WOE approach [20]. In assessing a chemical's safety, however, WOE preferentially uses causal criteria and logic methods as well as "probability and other statistical techniques to evaluate how individual data adds to or subtracts from evidence of risk" [21].
It should be realised that the WOE approach in fact operates throughout the entire assessment process outlined in Fig. 1. In particular, the toxicity and exposure assessment steps use qualitative and quantitative measures to characterise the chemical in regard to its hazard and potential harm. For example, in the sediment quality "triad", sediment chemistry, toxicity and effects on resident organisms are the individual lines of evidence [2]. In general, individual lines of evidence may include acute toxicity tests, sublethal effects, biomarkers, residue data from field surveys and other sources [5,22]. The integration of all these lines of evidence is done in the risk assessment step, by looking "critically at each line of evidence and consider[ing] uncertainty and variability when individual lines point to different conclusions" [21]. Most ERAs use causal criteria and logic methods to evaluate risks, with sediment assessments relying heavily on the logic methods. The resulting risk is typically defined in terms of probability of harm for populations in a particular environment. The reader is referred to the review by Linkov et al. [21] concerning the application of WOE to current environmental assessments.

Toxicity assessment
The first condition for a risk assessment is to know whether a chemical is toxic or not. Multiple lines of evidence are used for that purpose, including bioassays with standard species that are representative of various environmental media, biomarkers and others.
For decades, acute and chronic bioassays have helped determine the toxicity levels of most agrochemicals to a range of organisms. For all chemicals, lethal toxicity to mammals is based on rat or mouse bioassays, from which the oral LD50 and contact LC50 can be determined. In the past, the NOEC and the lowest-observed-effect concentration (LOEC) were also determined, mainly for human risk assessment. However, ecotoxicologists have been arguing that the latter measures are statistically flawed and consequently should not be used any longer [23,24]. Instead, the estimation of LD10 or LD20 from standard bioassays is now preferred to define the low levels of effect [25,26], given that populations of organisms in nature always have a small proportion of casualties due to diseases, lack of fitness, starvation and other factors, which cannot be distinguished from chemical toxicity [8]. Similar bioassays for acute toxicity have been used with a range of organisms, including birds, fish, amphibians, earthworms, springtails, bees, flies and a variety of insect pests, mosquito larvae, midge larvae, dragonfly and mayfly nymphs, several types of crustaceans and zooplankton, clams, oysters, mussels, algae and plant species. The need for standardisation of tests has produced a large body of literature, where some species are preferred based on ecological relevance, sensitivity, commonality, ease of culture in the laboratory or other convenient traits. The commercial Microtox® assay system is also an acute bioassay, designed to detect basal toxicity to the marine luminescent bacterium Photobacterium phosphoreum, mostly from some organic substances [27]. Many other standardised in vitro cytotoxicity tests have been developed to test a large number of chemicals [28].
Acute lethality is typically determined for single doses at 24 h in terrestrial organisms and after a fixed time of exposure in aquatic organisms: from 24 or 48 h in small organisms to 96 h for larger crustaceans and fish. Chronic toxicity bioassays are carried out to detect possible biological effects after repeated or continuous exposure to sublethal doses or concentrations of a chemical. The endpoints in this case may vary from lethality to carcinogenicity and mutagenicity [29] or from reproduction impairment [30] to hormonal imbalance [31] and developmental effects [32]. Any negative or deleterious effect can be considered a chronic toxicity endpoint, not just reproductive effects; the criterion for chronic toxicity is long, repeated exposure to chemical amounts that are not lethal within the short timeframe of the acute bioassays. Not all chronic endpoints are considered ecologically relevant; hence, endpoints such as growth and reproduction have a higher weighting than others such as biomarkers (unless, of course, the latter can be linked to community and population effects).
Each one of the above tests is considered a line of evidence for the risk assessment of a chemical. Current information on acute and chronic toxicity for all agrochemicals can be accessed online through the ECOTOX database compiled by the US Environmental Protection Agency (http://cfpub.epa.gov/ecotox/) and other databases with a narrower scope, e.g. AGRITOX (http://www.agritox.anses.fr/index.php), the Footprint database (http://sitem.herts.ac.uk/aeru/iupac/), the Pesticide Manual [33] and others.
Toxic action may depend not only on dose or concentration but also on the duration of exposure. It has long been suggested that toxicity tests should include information not only on doses or concentrations but also on exposure time [34][35][36], with time-to-effect bioassays being a practical way to achieve this [37]. Toxicokinetics are necessary to interpret the toxicity of a compound with time of exposure [38,39], because they can relate time-dependent toxicity to target organ concentrations [40]. For some compounds, toxic effects may depend almost exclusively on the total dose [41], whereby the product of dose, or concentration, and exposure time produces the same biological effect (Haber's rule). For pesticides that block specific metabolic or physiological pathways, the toxicodynamics of the compounds determine the time dependency of the effect [42]. In this case, toxicity is better described as a function of the time of exposure in addition to the dose. Understanding of the mode of action of such compounds is, therefore, essential for a correct prediction of their toxic effects [43]. Thus, whenever the interactions of chemicals with critical receptors are either slowly reversible or irreversible and the resulting biological effects are also irreversible (i.e. death), such effects are time cumulative and long lasting [40]. Examples of such chemicals are the neonicotinoid insecticides [44], which impair cognition by blocking nicotinic acetylcholine receptors in the central nervous system of insects and other arthropods. The implications of this time-cumulative toxicity for risk assessment will be dealt with in the section on first-tier assessment below.
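Haber's rule mentioned above (a constant product of concentration and exposure time for a fixed effect) can be illustrated with a minimal sketch; the figures are hypothetical, and the sketch ignores the recovery and repair processes that full toxicokinetic models account for.

```python
# Haber's rule for time-cumulative toxicity: C * t = k, so the exposure
# time needed for a given effect scales inversely with concentration.
# All numbers below are hypothetical.

def haber_constant(conc, time):
    """Product of concentration and exposure time for a fixed effect."""
    return conc * time

def time_to_effect(conc, k):
    """Exposure time at concentration `conc` giving the same cumulative effect k."""
    return k / conc

# Hypothetical: if 10 ug/L for 48 h produces a given effect (k = 480),
# the rule predicts that 1 ug/L needs ten times longer for the same effect.
k = haber_constant(10.0, 48.0)
print(time_to_effect(1.0, k))  # 480.0 (hours)
```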
One particular group of compounds that have attracted special attention in recent times is the endocrine disruptor chemicals (EDCs), among which are some agrochemicals: amitrole, atrazine, DDT, lindane, mancozeb, maneb, metiram, metolachlor, pentachlorophenol, vinclozolin and ziram [45]. EDCs alter the homeostatic balance in organisms by either mimicking the activity of hormones or influencing the metabolism of natural hormones as a side effect. Because hormones are substances that effectively induce or suppress physiological mechanisms, often complex and interlinked, exposure to EDCs can have unforeseen effects, certainly unrelated to the specific mode of action of the ED compound. For example, some organochlorine pesticides induce feminisation in alligators [46,47]. EDCs should not be confused with hormonal pesticides, which by their very nature mimic natural hormones in plants (e.g. auxin-like herbicides such as 2,4-D) or insects (e.g. ecdysone mimics such as tebufenozide). Identification of EDCs has been given high priority in some developed countries [48] with the aim of restricting their use in view of the potential consequences for human health [49], even if the research needed to characterise and elucidate their impacts is lagging behind [50,51]. This is a case where regulation appears to have run ahead of scientific evidence, which in some cases is controversial [52].
All the above applies to direct toxicity, which is the only way to assess the toxic potency of a chemical compound. Assessment of the toxicity of mixtures of chemicals started in the 1970s; it can be determined using the same protocols for acute and chronic toxicity [53] or using genomics [54]. Apart from determining the total toxicity of the mixture, it is important to assess whether the compounds show additive, synergistic or antagonistic toxicity and, if so, to estimate the synergistic/antagonistic ratios [41,55]. Additive toxicity occurs with compounds that have the same or similar mode of action, e.g. organophosphorus and carbamate insecticides [56], pyrethroids, neonicotinoids, triazines, etc. In such cases, a toxic equivalent (TE) with reference to a standard compound in the same group of chemicals is estimated [57]. TEs are then used for assessing the total toxicity of such mixtures in tissues, sediments or another matrix [58].
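The toxic-equivalent calculation for additive mixtures can be sketched as below; the compound names, LC50 values and concentrations are hypothetical, and a real assessment would use published toxic equivalency factors for the chemical group in question.

```python
# Toxic equivalents (TE) for an additive mixture: each concentration is
# weighted by its potency relative to a reference compound
# (TEF = LC50_reference / LC50_component) and the products are summed.
# All names and numbers below are hypothetical.

def toxic_equivalents(mixture, lc50s, reference):
    """Total mixture toxicity expressed in units of the reference compound."""
    ref_lc50 = lc50s[reference]
    return sum(conc * (ref_lc50 / lc50s[name]) for name, conc in mixture.items())

lc50s = {"compound_A": 1.0, "compound_B": 4.0, "compound_C": 0.5}    # ug/L
mixture = {"compound_A": 0.2, "compound_B": 0.8, "compound_C": 0.1}  # ug/L

te = toxic_equivalents(mixture, lc50s, reference="compound_A")
print(f"{te:.2f} ug/L as compound_A equivalents")  # 0.60 ug/L
```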
Among synergistic mixtures, the best known is that of piperonyl butoxide with pyrethroid insecticides, which occurs because of inhibition of the mono-oxygenase detoxification system in insects [59]. Recently, mixtures of ergosterol-inhibiting fungicides with cyano-substituted neonicotinoids have also shown enhanced synergism of these insecticides in honeybees [60].
Indirect impacts of chemical toxicity occur in nature [61]. Although it is often difficult to determine the causal relationship between chemicals and the indirect effects they may have in ecosystems, well-designed microcosms and mesocosms, as well as field studies, have often demonstrated such effects [62][63][64]. Indeed, ecological theory predicts that the elimination of predators inevitably results in a bloom of the primary prey species they control [65]. Equally, the elimination of key primary consumers leads to the uncontrolled growth of their feeding sources (e.g. weeds, algae), as occurs when insecticides eliminate grazing planktonic species [66]; it does not necessarily lead to the demise of their predators, as the latter organisms can switch prey preferences [67]. Fungicides can eliminate fungal communities that have the important role of recycling organic wastes in nature and thus reduce the efficiency of litter decomposition and mineralisation in soil [68]. Reduction of the insect food source after application of imidacloprid in agricultural environments over many years has led to the decline of several bird species in the Netherlands [69,70]. Even biological insecticides, such as Bt (Bacillus thuringiensis) applied to control mosquitoes, flies and other nuisance insects in wetland areas, inevitably reduce the bird populations that feed on those insects [71]. One of the practical applications of this theory in agriculture is integrated pest management (IPM) [72]. Indeed, IPM is mindful of the disastrous consequences of pesticide application for pest control without consideration of the impacts on the whole ecosystem.
Extreme examples are the massive explosion of brown plant hoppers (Nilaparvata lugens) in Indonesia during the 1970s following excessive application of insecticides that eliminated spiders and other predators of this rice pest [73] and the uncontrolled population growth of red-billed quelea (Quelea quelea) in areas of Botswana with high residue loads of fenthion, a persistent organophosphorus insecticide applied for pest control in agriculture [74]. Indirect effects, however, are not included in the first tier of regulatory risk assessment of chemicals; in some cases, they may be considered in the second tier (Fig. 1).

Exposure assessment
The first step in the exposure assessment is characterisation of the chemical properties of a substance, as these will determine the main routes of exposure for organisms. Among the physical and chemical properties relevant for environmental risk are solubility in water and organic solvents; partitioning coefficients, in particular the octanol-water coefficient (Kow); soil adsorption constants (e.g. Koc); vapour pressure and volatilisation (Henry's law); dissociation constants (pK); and half-lives in several environmental matrices (air, water, soil, sediments and plant tissues) due to different processes: hydrolysis, photolysis or metabolism. In addition, mobility and leaching potential should be assessed; this can be done by means of indices such as the groundwater ubiquity score (GUS), which combines soil adsorption properties and degradation in soil [75] (Table 1). The databases mentioned above (i.e. AGRITOX, Footprint and the Pesticide Manual) are comprehensive sources of such data. Manufacturers of new products must provide this information (distinct from the material safety data sheets, MSDS) to their regulatory authorities for risk assessment. Numerous laboratory measurements and tests must be carried out to obtain this kind of data, although in some cases, quantitative structure-property relationships (QSPRs) are used to derive approximate values for compounds with similar chemical structures.
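The GUS leachability index combining soil half-life and adsorption [75] can be computed directly; the half-life and Koc values below are hypothetical, and the cut-offs of 1.8 and 2.8 are the ones conventionally quoted for this index.

```python
import math

# Groundwater ubiquity score (GUS):
#   GUS = log10(soil half-life, days) * (4 - log10(Koc))
# Conventional reading: > 2.8 likely leacher, < 1.8 unlikely leacher,
# in between transitional. Input values below are hypothetical.

def gus(half_life_days, koc):
    return math.log10(half_life_days) * (4.0 - math.log10(koc))

def leaching_class(score):
    if score > 2.8:
        return "likely leacher"
    if score < 1.8:
        return "unlikely leacher"
    return "transitional"

score = gus(half_life_days=100.0, koc=100.0)  # persistent, weakly sorbed
print(f"GUS = {score:.1f} ({leaching_class(score)})")  # GUS = 4.0 (likely leacher)
```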
Degradation pathways in animals are also important, as some chemicals are amenable to metabolic breakdown and produce metabolites, usually of lower toxicity, that can be eliminated in urine and faeces. However, metabolites can also be as toxic as the parent compounds, in which case they need to be assessed as independent toxicants. Often, a chemical may be degraded easily in water or plants but be persistent in soil, or vice versa. Recalcitrant chemicals are those that do not degrade readily in any environmental medium, e.g. chlorinated compounds, including some neonicotinoid insecticides and herbicides. Persistent agrochemicals require special attention because their effects are not restricted to the time of their application to crops but extend over a long period afterwards, which could be years. For example, in many developed countries, residues of DDE in soil are still causing eggshell thinning in birds, even though the parent compound (DDT) has not been applied for more than 30 years [76].
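The persistence argument above can be made concrete with a first-order decay sketch; the half-life and time span are hypothetical, and real soil dissipation often deviates from simple first-order kinetics.

```python
# First-order decay of residues: C(t) = C0 * 0.5 ** (t / half_life).
# All values below are hypothetical.

def residue_after(c0, half_life, elapsed):
    """Residue remaining after `elapsed` time (same units as half_life)."""
    return c0 * 0.5 ** (elapsed / half_life)

# A residue with a 10-year soil half-life: after 30 years (three
# half-lives), one eighth of the initial load is still present.
print(residue_after(1.0, 10.0, 30.0))  # 0.125
```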
Once the basic data are available, concentrations of the chemical in various matrices can be predicted after its application to a crop, using mathematical models developed for specific purposes [14]. Transport models can refer to a single medium (e.g. air) or to the fate and movement between media. Examples of the first type are the dispersion of particles and movement through air, which can be modelled by AgDRIFT [77] and Gaussian models [78], soil erosion and sedimentation of runoff [79] and leaching of water-soluble fractions into the soil profile, e.g. Pesticide Root-Zone Model (PRZM) [80] and others [81]. By contrast, multimedia models such as fugacity [82] aim at predicting the concentrations of chemicals in all environmental media. In any case, model predictions should be validated by comparing the outputs of the model to the actual measurements of residues in water, air, soil and organisms. However, for new chemicals this is not feasible, as residue amounts could be negligible or nonexistent, so current regulatory risk assessment relies entirely on the predicted environmental concentrations (PECs) that result from such models. For old agrochemicals, on the other hand, measured residue levels would be preferable for two reasons: (i) a large body of actual residue data exists for different regions and times, as comprehensive monitoring surveys are carried out over many years and are available from the open literature, and (ii) relevant physicochemical and degradation data are often incomplete for many old compounds, thus hampering their environmental modelling. Each model output and survey dataset is considered a line of evidence for the risk assessment.
Essential to the prediction of modelled data is the usage pattern of the chemical under consideration; this information is often not readily available, so it is frequently necessary to infer usage from sales figures. The total amounts of agrochemical applied ultimately determine the total load of residues to which organisms will be exposed. For persistent chemicals, residue loads will increase with time, so it is important to know the period over which they have been used. When all these factors are put together, the margins of error necessarily increase, and so variations of one order of magnitude in PECs are acceptable. The situation is no better for monitoring data, as the variability in residue loads between sites, regions, times of the year and among years can be even larger. Therefore, sensible exposure assessments should be based not so much on accurate concentration levels as on a range of concentrations for specific scenarios and situations. Statistical distributions of such residues help determine their probability of exposure in site-specific ERA [83]. PECs by themselves are not sufficient to estimate exposure within reasonable limits. The availability of chemicals is paramount, since only chemicals that are taken up by organisms can cause harm. Routes of exposure differ for aquatic and terrestrial organisms, and the estimated PECs should refer not only to the levels of chemicals in the environmental matrices but also to the uptake by the organisms living there. In the aquatic environment, organisms are embedded in a matrix that contains pollutants, and their uptake through the gills and epidermis is almost constant. In this case, the time of exposure strongly influences the chemical uptake, while toxicokinetics are influenced by surface area, porosity of the surface and volume/surface ratio. Also, the internal doses and effects over time can be estimated in accordance with Haber's rule (see above).
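The use of statistical distributions of residues can be sketched as follows: a log-normal distribution, a common parametric choice for residue data, is fitted to a set of monitoring values and the probability of exceeding a threshold concentration is computed, as done in site-specific ERA. The monitoring data below are invented for illustration.

```python
import math
from statistics import NormalDist

def exceedance_probability(residues_ug_L, threshold_ug_L):
    """Fit a log-normal distribution to monitoring residues and return the
    probability that a randomly sampled concentration exceeds the threshold."""
    logs = [math.log10(c) for c in residues_ug_L]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))
    return 1.0 - NormalDist(mu, sigma).cdf(math.log10(threshold_ug_L))

# Hypothetical stream monitoring data (µg/L) for one pesticide:
data = [0.02, 0.05, 0.11, 0.08, 0.35, 0.04, 0.6, 0.15, 0.03, 0.09]
print(f"P(C > 0.5 µg/L) = {exceedance_probability(data, 0.5):.2f}")
```

The resulting probability, rather than a single PEC, is what feeds into a probabilistic exposure characterisation.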
Ingestion of residues in food appears to be a lesser route of exposure for zooplankton [84] and possibly other aquatic organisms. For terrestrial organisms, however, a more complex scenario is envisaged. Uptake of air pollutants may be constant through inhalation, but their residues in water and food sources (e.g. pollen, plant material, insects) are taken intermittently in small, discrete amounts and at different times. Dermal exposure, by contrast, is most relevant to animals living in the soil and sediment (e.g. earthworms, invertebrate larvae, etc.) or having unprotected skin (e.g. amphibians), whereas reptiles, birds and mammals would be somewhat protected by the scales, feathers and fur that cover their bodies. Nevertheless, the lethal effects that follow deposition of pesticide spray droplets on birds suggest that concentrated hydrophobic compounds can easily enter their bodies through this route of exposure [85].

One way of finding out the availability of pollutants to organisms is by using biomarkers of exposure. Many biomarkers have been developed in the last two decades with the aim of measuring the extent of chemical exposure in organisms [86]. Usually they measure the physiological response to chemicals with the same mode of action, e.g. the cholinesterase assay determines the proportion of acetylcholinesterase enzyme bound to cholinesterase-inhibiting insecticides [87]. Biomarkers based on inhibition of the detoxification mechanisms (P450 monooxygenases, glutathione S-transferases, etc.) are less specific, as many different compounds can trigger the same response. Unless the chemical source is known, biomarkers can only provide evidence of the existence of a group of contaminants that are taken up by the organisms [88,89]. For this reason, they are useful to demonstrate the availability of such chemicals, not to identify the individual compounds.

Risk assessment: First tier
Having obtained the above information, the chemical assessor is now in a position to evaluate the possible impacts of a chemical in the environment. The first hurdle assessors encounter is how to put together the disparate data gathered through the various lines of evidence from the toxicity and exposure assessments.
Whatever methods are used to evaluate the possible risks of a chemical, assessors must consider a range of scenarios: from a typical or most likely PEC under standard agricultural practices to a worst-case scenario that considers the highest possible PEC to be found in a particular environmental medium or under special conditions. Modelled PECs as well as monitoring residue data can be used for this first-tier assessment (Table 1).
For a start, different LC50 and LD50 values are derived for species belonging to disparate taxa, with a range of variation spanning several orders of magnitude. So, which dataset should be used in the risk assessment? One way of solving this problem is to evaluate representative species of each environmental medium, and this has traditionally been done with the hazard quotient (HQ). OECD hazard assessments require at least one species from three different taxonomic groups (e.g. alga, crustacean, fish) for this first tier. Thus, for each environmental medium, one to three HQ values may help define the range of negative impacts. These HQ values should be derived for different exposure routes in the case of terrestrial organisms (e.g. dietary, inhalation or dermal contact), whereas a single value may be sufficient for aquatic organisms. Two shortcomings are found with this approach. Firstly, it is simplistic and often unrealistic, given that PECs for a range of scenarios typically vary by up to one order of magnitude, and this variability can swing the verdict from "safe" to "unsafe" if the resulting HQ values change from <0.1 to 0.1-1.0. Thus, there is a large inherent uncertainty in this approach even if the toxicity endpoints used are consistent and accurate. Secondly, a given chemical may be considered safe for the representative organism on which the toxicity data are based, but not safe for other organisms living in the same environment. An example of this is imidacloprid, which renders HQ values below 0.0001 for standard Daphnia spp. at relevant concentrations of 5 µg/L in rice fields but has HQ values of 0.01 for epibenthic ostracods and benthic chironomid larvae [90] and values of 0.2-7.0 for larvae of mayflies [91]. This highlights the problem of using data for standard test species in the first tier to make decisions regarding the registration of a chemical; and yet most regulatory assessments still use this method.
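A minimal sketch of the HQ screening step, using the risk bands mentioned above (<0.1 regarded as safe): the HQ is simply the ratio of exposure to toxicity. The endpoint values in the example are illustrative figures consistent with the imidacloprid case in the text (an insensitive Daphnia endpoint and a hypothetical, highly sensitive mayfly endpoint), not measured data.

```python
def hazard_quotient(pec, toxicity_endpoint):
    """First-tier hazard quotient: ratio of predicted exposure to a toxicity
    endpoint (LC50, LD50 or NOEL), both in the same units."""
    return pec / toxicity_endpoint

def risk_band(hq):
    """Screening bands following the cut-offs given in the text."""
    if hq < 0.1:
        return "negligible"
    if hq < 1.0:
        return "possible risk"
    return "high risk"

# PEC of 5 µg/L in rice fields vs two hypothetical acute endpoints (µg/L):
print(risk_band(hazard_quotient(5.0, 85_000)))  # Daphnia-like endpoint: negligible
print(risk_band(hazard_quotient(5.0, 1.0)))     # mayfly-like endpoint: high risk
```

The example reproduces the second shortcoming discussed above: the same PEC yields opposite verdicts depending on which species supplies the endpoint.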
A second and more plausible approach is to use all the toxicity data available for a chemical at once, by constructing an SSD; risk assessments using this approach are termed probabilistic risk assessments (PRAs). For agrochemicals, SSDs are usually built using acute LC50 or LD50 data. Often an assessment factor is applied to give a chronic equivalent when NOEL values are not available; in such cases, a factor of 10 or a derived acute-to-chronic ratio is commonly used. The SSD shows the overall range of toxicities for all species tested in the same environmental medium (e.g. aquatic; Fig. 2), so the assessor can determine the levels of chemical that would not have a serious impact on the majority of species. An advantage of the SSD is that it allows estimation of the number of species within a particular medium that would be seriously affected at any residue concentration of a given chemical. Since residue levels can vary spatially, seasonally and with the passing of time, this relationship between residues in the environment and the species affected (which includes their identity as well) can be very useful for risk management in site-specific ERA. In regulatory assessments, the threshold concentration hazardous to 5 % of species (HC5) is derived and compared to the PECs in that particular medium [92]. The term protective concentration for 95 % of species (PC95), which equates to the HC5, is also used. Nevertheless, the probabilistic approach is rarely used for registration of new agrochemicals, since the range of toxicity data available for such compounds is usually insufficient to generate a valid SSD. Probabilistic assessments are mostly used for site-specific risk assessments, in which case actual residue data from monitoring at the site or region are also used to complement the predicted exposures.
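The HC5 derivation from an SSD can be sketched by fitting a log-normal distribution to acute toxicity data, a common parametric choice; the HC5 is then the 5th percentile of the fitted distribution. The LC50 values below are hypothetical and span several orders of magnitude, as is typical across taxa.

```python
import math
from statistics import NormalDist

def hc5(lc50s):
    """Fit a log-normal species sensitivity distribution to acute LC50 data
    and return the HC5: the concentration hazardous to 5 % of species."""
    logs = [math.log10(x) for x in lc50s]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (len(logs) - 1))
    return 10 ** NormalDist(mu, sigma).inv_cdf(0.05)

# Hypothetical acute LC50s (µg/L) for eight species from disparate taxa:
lc50s = [1.2, 3.5, 8.0, 15.0, 40.0, 95.0, 210.0, 600.0]
print(f"HC5 = {hc5(lc50s):.2f} µg/L")
```

Note that with a wide spread of sensitivities the HC5 can fall below the lowest tested LC50, which is one reason the accuracy of the SSD depends so heavily on the number of species and taxa in the dataset.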
PRAs are available for residues of the herbicide atrazine in waters [93] and for insecticides such as chlorpyrifos [94], pyrethroids [95] and endosulfan [96] and other agricultural chemicals [97,98].
The probabilistic approach is not exempt from criticism, as the protective thresholds set are subjective and have no ecological basis. It could well be that the most important species to be protected are the most sensitive, as happens with pollinators and parasitic wasps exposed to neonicotinoids and other compounds [99]. One could argue that setting a threshold that protects 100 % of species is the most adequate solution to this problem, but it is indefensible to estimate such limits when the datasets are often incomplete, as is the case for most chemicals, and also because the statistical error increases at the bottom of the SSD curve. Obviously, the accuracy of the SSD relies entirely on the number of species tested and the taxa covered in the dataset. Another criticism is that the actual concentrations measured in surface waters are typically underestimated because the peak concentrations of pesticides, which have the highest impact, are often missed, casting doubt on the reliability of the exposure predictions on which the probabilistic approach is based [100]. One solution is to build a cumulative graph of either measured or predicted environmental concentrations for different exposure scenarios, thus allowing estimation of probabilities of exposure in a similar way as the SSD [15]. This approach has been used successfully in ERA of residual herbicides that have a long history of usage [83,101], for which sufficient residue data are available. Another solution to this problem is the use of passive samplers, which produce an integrated measurement of residues over a period of time [102]. Equally, passive samplers are also used for estimating residue concentrations in the air phase [103]. In any case, SSDs provide a greater degree of accuracy than HQs in determining safe levels for chemicals in the environment.
In fact, this approach has been used to set water quality guidelines in many countries, although it is more commonly used in the second tier of assessment [98].
A third approach is to use risk indices that combine toxicity values of a chemical to several organisms, its persistence in the environment, the frequency of detection of its residues, the likelihood of drift and other characteristics that are deemed important. Indices of risk were first introduced by Metcalf to rank the risk of pesticides to fish, birds, mammals and bees [104]. They are useful for comparing the potential risks of several compounds applied to the same environment, so that farmers, pest control operators and other users of pesticides may choose chemicals that have the least risk to the environment. Examples of indices are the Pesticide Impact Rating Index (PIRI), used by managers and pesticide users as a simple guide for selecting agricultural chemicals that are less damaging to the environment [105]; the ecological relative risk (EcoRR), designed to discriminate between chemicals that may have ecological impacts in a site-specific environment [106]; and other risk indices that, combined with pesticide use information, were reasonable predictors of bee poisoning incidents compiled over a 21-year period in the United Kingdom [107]. Many indices have been proposed over the years (see review by Levitan et al. [108]), and some of them have been used in regulatory risk assessments in European countries [109]. It is important to realise that such indices are only indicative of relative risks among compounds and cannot predict the actual risk that a particular agrochemical may have in a given environment.
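A toy index in the spirit of PIRI or EcoRR can be sketched as below. The weighting (logarithm of use multiplied by persistence, divided by toxicity) is an assumption chosen for illustration and does not reproduce any published index; as the text stresses, such scores only rank compounds and cannot predict actual risk.

```python
import math

def relative_risk_index(lc50_ug_L, soil_half_life_d, use_kg_ha_yr):
    """Toy relative-risk score on a log scale: higher toxicity (lower LC50),
    longer persistence and heavier use all raise the score. For ranking
    compounds only, not for predicting actual environmental risk."""
    return math.log10(use_kg_ha_yr * soil_half_life_d / lc50_ug_L)

# Two hypothetical compounds: (LC50 µg/L, soil half-life days, use kg/ha/yr)
candidates = {
    "compound A": (500.0, 10, 0.5),  # low toxicity, short-lived, light use
    "compound B": (2.0, 120, 1.0),   # toxic, persistent, heavier use
}
ranked = sorted(candidates, key=lambda k: relative_risk_index(*candidates[k]),
                reverse=True)
print(ranked)  # ['compound B', 'compound A']
```

A farmer choosing between the two products would be steered towards compound A, which is exactly the kind of comparative guidance such indices are designed to give.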

New approaches to risk assessment
First-tier assessments of agrochemicals are mostly based on acute toxicity data and a range of concentrations for different scenarios. Rarely do they consider chronic toxicity, which is most commonly estimated for mammals. In fact, survival over time is not considered for the vast majority of organisms tested. Indeed, one aspect that none of the methods used, whether the HQ, the SSD-probabilistic method or indices, has considered is toxicity in time. This may be due to the assumption that chemicals only cause toxicity in a dose-dependent manner, while the ratio between acute and chronic LC50s or LD50s is assumed to be fixed [110]. Recently, however, it was demonstrated that such ratios vary depending on the models used to estimate chronic toxicity [111]. Moreover, it is known that certain chemicals exhibit time-cumulative effects, such that infinitesimal concentrations can cause toxicity upon chronic exposure [40]. For these chemicals, which can be identified by a high acute-to-chronic ratio [91], the current methods of risk assessment fail to produce a plausible and realistic evaluation of their risks to the environment.
A new approach has been proposed, therefore, for assessing the risk of chemicals with time-cumulative toxicity. The method is based on the relationship of LC50 or LD50 with time of exposure, which is defined mathematically by a log-to-log regression between the dose and the time to reach 50 % effect (ET50) [112]:

ln ET50 = a + b ln LD50 (1)

The inverse of the slope of that regression (b) is the power exponent of the Druckrey-Küpfmüller equation [113,114]

C^n × t = constant (2)

where n = 1/b of regression (1) and C is the PEC or measured concentration of the chemical in the environment. Since the exponent n for a chemical can be determined using time-to-event bioassay data, the times to reach 50 % effect can be estimated for any given concentration in the media or dose ingested by organisms. In fact, the slope in equation (1) helps determine whether a chemical may have time-cumulative toxicity or not, because only chemicals with values of n > 1 are included in this special category [40], thus making the task of the assessor easier.
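A short numerical sketch of this approach: once n is estimated from the regression (n = 1/b), the Druckrey-Küpfmüller constant in equation (2) can be fixed from a single bioassay point, after which the time to 50 % effect can be predicted for any environmental concentration. All values below are hypothetical.

```python
def time_to_50pct_effect(conc, n, calib_conc, calib_time):
    """Druckrey-Kuepfmueller relation C^n * t = constant. The constant is
    fixed from one bioassay point (calib_conc, calib_time); n comes from
    the slope of the ln(ET50) vs ln(LD50) regression (n = 1/b)."""
    k = calib_conc ** n * calib_time
    return k / conc ** n

# Hypothetical chemical with n = 2 (time-cumulative, since n > 1),
# calibrated at an LC50 of 10 µg/L reached after 2 days of exposure.
# Halving the concentration quadruples the time to 50 % mortality:
print(time_to_50pct_effect(5.0, 2.0, 10.0, 2.0))  # 8.0 days
```

For a chemical with n = 1, by contrast, halving the concentration merely doubles the time to effect, which is the Haber's-rule behaviour assumed by the classical methods.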
This method, therefore, estimates the time to reach a critical endpoint (e.g. 50 % mortality) based on the PECs or measured concentrations found in the environmental media. It has been applied to evaluate the risk of two neonicotinoids (imidacloprid and thiamethoxam) to honeybees and bumble bees [115]. The risks of these insecticides to bees, as evaluated by this new approach, are in better agreement with the observed declines of these insects in countries where neonicotinoids are used than the risks evaluated by classical methods. As indicated above, only the chemicals showing time-cumulative toxicity need to be considered here; for most compounds, the current probabilistic methods are appropriate.
Another aspect that has been neglected for too long is the inclusion of toxic metabolites and transformation products in risk assessments [116]. Certainly, a number of toxic pollutants, including insecticides (e.g. aldicarb, dimethoate, endosulfan, fipronil, heptachlor, imidacloprid, thiamethoxam and others), persistent pollutants such as DDT and a range of industrial chemicals can be converted to degradates that are as toxic as the parent compounds or more so. For example, organophosphorus pesticides that contain phosphorus-sulphur double bonds can become more toxic when oxygen replaces the sulphur during degradation; such is the case for chlorpyrifos, profenofos and parathion. A proper risk assessment of these chemicals should include not only the original compound but the total toxic burden, i.e. parent + toxic metabolites [117]. While this area of assessment is still incipient in risk evaluations of agrochemicals, procedures similar to those used with mixtures should be devised to assess their toxicity.

Risk assessment: Second tier
It is envisaged that not all chemicals will pass the first tier without problems. Concerns inevitably arise from the lack of sufficient data regarding toxicity to some species that may be deemed important in a particular environment or from the lack of chemical data, which results in poor predictability of the fate of a new chemical. Too many uncertainties in the toxicity and exposure steps result in unrealistic risk evaluations. When that happens, regulators may request that manufacturers provide further evidence on whether a chemical may or may not have negative impacts on the environment.
The purpose of the second tier is, therefore, to obtain more data about the behaviour of the chemical so as to fill gaps in knowledge and resolve the uncertainties about the consequences of its release into the environment. Different methods may be used to achieve this, each of them seeking evaluation in a realistic scenario:
• Field trials
• Mesocosms and microcosms
• Sublethal effects and chronic studies
• Endocrine disruption
• Life-cycle effects
• Population modelling

Field trials
Field trials, using typical and worst-case concentrations of the chemical, are useful to measure realistic concentrations and dissipation rates in water, soil and air and the movement between environmental compartments, e.g. leaching through the soil profile [118], volatilisation [119], chemical loads in runoff [120], eroded materials and sediments [121] and others. For concentrations in water and air, passive samplers give better estimates than individual grab samples, as the samplers integrate peaks from pulses as well as lower concentrations over time. Indeed, concentrations of pesticides in water measured using passive samplers are better correlated with the effects observed in aquatic organisms [122]. Air passive samplers are useful for studying the transport of volatile chemicals such as endosulfan and persistent organochlorines [103]. A drawback of field trials is that factors other than the concentration of the agrochemical applied cannot be controlled. This introduces uncertainty about the effects on organisms, which could be due to the chemical applied or to other environmental factors, whether in combination or alone. In addition to this problem, it is almost impossible to monitor all individuals present in a field and its surroundings; consequently, the effects on organisms are underestimated. This may explain the apparent lack of mortality in birds in field trials where organophosphorus and other insecticides are applied to agricultural fields, since most of the fatalities go undetected [123]. Equally, several field trials with systemic insecticides appeared not to cause much impact on honeybee colonies placed in the surroundings of treated fields [124][125][126], but this is partly because bees that died or got lost in the field, due to disorientation, could not be counted [127], while at the same time the hives kept producing more worker foragers to compensate for those losses.
Only by tagging individual bees with special tracking devices can accurate estimates of losses and foraging activity be obtained [128,129]. As it turns out, bee losses are larger than originally expected from former field trials with those insecticides [130,131]. Another problem with field studies of bees is that often the controls used are contaminated with a diverse array of pesticides [132], so the effects of the chemical under study cannot be distinguished from the controls or are confounded by other pesticide contaminants.
Field studies can include surveys where chemical and biotic data are collected at the same place and time. Such data can be analysed to determine whether or not the chemical pollutants found in a particular location have reduced its biodiversity. That requires, of course, a reference to natural or near-natural conditions: in order to determine the extent of pollution in water bodies, for instance, the European Water Framework Directive requires such reference conditions to be defined before the impact of stress gradients (chemical or otherwise) can be established. To this purpose, the Species at Risk (SPEAR) index was developed to express the risk of pollutants to aquatic ecosystems. It considers the overall toxicity of pollutants measured in terms of toxic units with reference to Daphnia magna (TU Daphnia) and has a built-in distinction between sensitive and non-sensitive species. SPEAR can be used as an indicator of the pollution status of rivers or other aquatic ecosystems [133].
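The toxic-unit metric underlying SPEAR-type analyses can be sketched as follows. The concentrations and Daphnia magna EC50s below are hypothetical, and the maximum-ratio formulation is one common choice for summarising the toxicity of a mixture at a site.

```python
import math

def max_toxic_unit(measured_ug_L, ec50_daphnia_ug_L):
    """Toxic-unit metric used as the stressor axis in SPEAR-type analyses:
    TU = log10 of the largest ratio of measured concentration to the
    corresponding Daphnia magna EC50."""
    return math.log10(max(c / e for c, e in zip(measured_ug_L, ec50_daphnia_ug_L)))

# Hypothetical mixture of three pesticides measured at one river site (µg/L):
conc = [0.5, 0.02, 1.5]
ec50 = [1.0, 0.5, 300.0]   # Daphnia magna EC50s (hypothetical)
print(f"TU = {max_toxic_unit(conc, ec50):.2f}")  # TU = -0.30
```

Note that the compound with the highest measured concentration is not necessarily the one driving the toxic-unit score; it is the concentration relative to toxicity that matters.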

Mesocosms and microcosms
Mesocosms and microcosms are model ecosystems on a small scale. Given that most factors are controlled by the researchers, the effects of chemicals on organisms can be determined with more certainty than in field trials. Also, organisms are usually confined to the small experimental area (a few square metres per unit) and thus can be counted more reliably. Indirect effects on other species and communities are usually detected in this kind of experiment [134,135], whereas such effects would be difficult to observe and measure in field trials. One positive feature of mesocosms and microcosms is that they avoid contamination of the environment, something that field trials cannot do. The experimental units are self-contained, so contaminants are either removed from any runoff exiting the system or recovered. Another advantage of these systems is that they can be replicated under practically the same conditions, whether outdoors or indoors, whereas field trials are subject to the inevitable variability existing among fields. In both field trials and mesocosms, the use of controls without chemical treatment is imperative as a reference.
The data generated in these systems usually include dissipation half-lives of the chemicals in various matrices, changes in population and biodiversity in the communities under monitoring and, occasionally, some functional traits of the ecosystem (e.g. productivity) [64]. Data on individual species can be analysed using traditional methods to estimate the median effect concentrations in the field, and once these are obtained, an SSD can be built to determine the HC5 for the communities. Following this procedure, the laboratory toxicity data can be compared to the field or mesocosm toxicity data to confirm or correct the findings [136]. Effects of chemicals on communities can be measured in a similar way as in typical toxicity tests, and dose-response relationships can be obtained for some endpoints, typically abundance or species richness [137]. The community data can also be subjected to multivariate statistical analysis such as principal component analysis (PCA) or, even better, principal response curves (PRCs) [138]. The latter analytical tool helps determine the extent of the changes occurring in the community under exposure to the chemical(s) as well as their statistical significance. PRCs have been used to study the recovery of populations in agricultural ditches exposed to pesticides [139] and to compare the ecological effects of different insecticides applied to rice paddies [140].
Mesocosms have been instrumental in detecting effects of chemicals on populations and communities of macro-invertebrates [141], which otherwise could not be predicted using standard toxicity tests in the laboratory. This is important, since the functionality of an ecosystem depends on the structure of, and relationships between, all the species that compose it. Even if the functionality of an ecosystem can be restored after the chemical is withdrawn, the structure is often changed, so the recovery of populations does not restore the disturbed community to the structure it had before. This is because some species have been more affected than others or have been eliminated and need to be replaced by other species. This can be demonstrated by using similarity indices, a tool commonly used in ecological studies that can also be applied to assess the changes occurring in a community of arthropods during and after exposure to insecticides [142]. Biodiversity indices, on the other hand, are poor indicators of ecosystem changes [137,143]. Similarity indices have also been used to estimate chemical thresholds that protect ecological communities from contaminants in freshwater systems [144]. Structural changes are usually detected by such similarity indices or by measuring species richness, both of which are good indicators of ecosystem health and can be used to assess the impacts of pollutants on ecosystems [145]. However, the difficulties and drawbacks of mesocosms include their high cost and the need for extensive replication, especially under different climatic or environmental conditions, because replicates sometimes produce variable results (e.g. communities go in different directions) that defy statistical analysis.
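As an illustration of the similarity-index approach, the sketch below computes the Bray-Curtis similarity, one commonly used index, between a community before and after insecticide exposure; the abundance data are invented. A value of 1 indicates identical community structure, and lower values indicate structural change even if total abundance recovers.

```python
def bray_curtis_similarity(counts_a, counts_b):
    """Bray-Curtis similarity between two community samples (1 = identical).
    Sensitive to shifts in which taxa dominate, not just total abundance."""
    shared = sum(min(a, b) for a, b in zip(counts_a, counts_b))
    total = sum(counts_a) + sum(counts_b)
    return 2 * shared / total

# Hypothetical abundances of five arthropod taxa in a mesocosm:
before = [120, 45, 30, 10, 5]
after  = [110, 5, 0, 2, 40]   # structure shifted after insecticide exposure
print(f"{bray_curtis_similarity(before, after):.2f}")  # 0.66
```

In the example, one taxon has been eliminated and another has expanded to partly fill the gap: total abundance is similar, yet the similarity index reveals that the community has not returned to its original structure.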

Sublethal effects and chronic studies
Laboratory experiments using sublethal doses/concentrations of chemicals are used to determine a variety of sublethal effects that were not considered in the first tier, which was only concerned with survival under exposure to lethal doses of chemicals. Sublethal effects can be dose dependent if they are related to the mode of action of the compound, e.g. behavioural disturbances caused by neurotoxic insecticides [146][147][148]. In such cases, sublethal effects can be assessed using HQs or other standard numerical methods used in tier I. However, sublethal effects often resist an explanation based on the known mode of action of the chemical concerned. For instance, is the anti-feedant behaviour [149] of imidacloprid in arthropods the result of the agonistic action of this insecticide on the nicotinic receptors? If so, it might be possible to establish a dose-response relationship for low exposure doses, but if not, how can the risk of sublethal doses of this insecticide be assessed in a quantitative manner? To date, no established method of numerical analysis of sublethal effects exists in the public literature, in spite of its importance for the long-term viability of natural populations of insects [150] and other organisms. In any case, sublethal effects are difficult, if not impossible, to assess in field trials and mesocosms. There are many examples in the published literature of sublethal effects of man-made chemicals, and they are usually reported in extensive reviews carried out by either independent or regulatory authorities. However, unlike lethal toxicity data, sublethal data do not produce robust statistical estimates of effects. Also, this kind of evidence is not always available for newly developed agrochemicals: information on sublethal effects usually comes long after a product has been approved by the authorities and used extensively.
More often than not, research on sublethal effects is prompted by a series of negative impacts observed with particular products after many years: the reproductive impairment in birds caused by DDE (a metabolite of DDT) took about 20 years to be elucidated [151]; and the current demise of insect populations and communities as a result of the widespread use of neonicotinoids has taken some ten years to be understood [152].

Side effects -Endocrine disruption
Endocrine disruption is usually detected in experimental work of the second tier. This is an aspect that had been missing from ERAs until recently. One feature of EDC effects is that the potency of individual chemicals may be low, but mixtures of additive EDC potency can exceed effect thresholds. However, endocrine disruption is for the most part a side effect unrelated to the main mode of action of the chemical concerned, and when present, its effects appear to follow a typical monotonic dose-response relationship like any other mode of action. Some authors advocate that the mode of action of EDCs is non-monotonic, following a U-shaped curve similar to the response to natural hormones and micronutrients [153][154][155]. This unusual relationship has been disputed by many authorities in toxicology and ecotoxicology [52,156], who argue that EDCs should be treated in the same way as any other toxic chemical, i.e. following the same evaluation methods for their risk as outlined above, using either HQs, SSDs or time-cumulative methods.

Life-cycle effects
As a corollary of the previous sublethal and endocrine studies, life-cycle parameters can be determined to assess the impact of a chemical through the various stages of development of an organism [157]. Effects on survival, reproduction and development are thus integrated in a life table. This is particularly relevant for invertebrates, which often change from larvae to adults through several moulting stages or through metamorphosis.

Population modelling
Population modelling helps estimate the long-term consequences of regular, chronic chemical impacts on populations of organisms. In essence, data obtained at the individual level are extrapolated to effects at the population level using life table evaluations [158,159]. Although more theoretical than the previous methods, such models are by no means less realistic, and they can be validated with experimental work appropriately designed for that purpose. Common population endpoints are abundance, population growth rate [160,161] and the chance of population extinction [162]. Two types of models have been suggested: simple life-history models distinguishing between the life-history stages of juveniles and adults [163] and spatially explicit individual-based landscape models [164]. Data can be gathered from experimental mesocosms or from historical records. For example, the decline in populations of sparrowhawks (Accipiter nisus) and kestrels (Falco tinnunculus) in Britain between 1963 and 1986 was shown to be due to dieldrin contamination by modelling the growth rates of those bird populations and analysing their life-history data in relation to body residues [165]. The performance and long-term survival of honey bee colonies have also been modelled after exposure to systemic insecticides [166], as has the breeding success of wood mice and skylarks exposed to a fictitious fungicide [167].
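A minimal sketch of the life-table extrapolation: the population growth rate (lambda) is the dominant eigenvalue of a Leslie matrix built from stage-specific fecundity and survival, obtained here by power iteration. Both matrices and the assumed effects of chronic exposure are hypothetical; the point is that individual-level endpoints (reduced fecundity and survival) translate into a population-level verdict (lambda above or below 1).

```python
def population_growth_rate(leslie, iterations=200):
    """Dominant eigenvalue (lambda) of a Leslie matrix by power iteration.
    lambda > 1 means population growth; lambda < 1 means long-term decline."""
    n = len(leslie)
    v = [1.0 / n] * n
    lam = 0.0
    for _ in range(iterations):
        w = [sum(leslie[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)              # L1 norm: all entries are non-negative
        v = [x / lam for x in w]  # renormalise the stage-structure vector
    return lam

# Hypothetical three-stage life tables (top row: fecundities; sub-diagonal: survival)
control = [[0.0, 4.0, 3.0],
           [0.5, 0.0, 0.0],
           [0.0, 0.25, 0.0]]
exposed = [[0.0, 2.0, 1.5],   # fecundity halved by chronic exposure (assumed)
           [0.3, 0.0, 0.0],   # juvenile survival reduced (assumed)
           [0.0, 0.25, 0.0]]

print(round(population_growth_rate(control), 3))  # 1.5 -> growing population
print(round(population_growth_rate(exposed), 3))  # < 1 -> long-term decline
```

Neither individual-level effect alone need be fatal to the population; it is their combination, integrated through the life table, that pushes lambda below the replacement threshold.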

Evaluation process in the second tier
Unlike the straightforward mathematical methods used to analyse information for the first tier of the risk assessment, there are no established mathematical procedures to deal with the risks derived from sublethal effects, indirect effects on communities, endocrine disruption, etc. Consequently, all of the evidence gathered for this second tier must be evaluated using the logical methods commonly used in the WOE approach. The exception, perhaps, is the PERPEST model [168], which was developed to predict the risk of pesticides in freshwater ecosystems. It simultaneously predicts the effects of a particular concentration of a pesticide on various community endpoints. The model relies on case-based reasoning, a technique that solves new problems by using past experience (e.g. published microcosm or mesocosm experiments). To this end, empirical data extracted from the literature are stored in a database of freshwater ecotoxicity studies, which is updated regularly.
Apart from PERPEST, whenever there is some evidence of negative effects of a chemical on a species or an ecosystem, risk assessors must weigh that evidence against the benefits that the chemical may have for human life, the general environment and agriculture: there may be some negative effects on certain organisms or in certain areas, but not necessarily on entire ecosystems. How serious the effects are and how widespread the threats are questions for the assessors to ask and evaluate using their best professional judgement. Here, the purpose of the risk assessment influences the assessor's evaluation. For example, in regulatory assessments, the aim is to minimise the impact of a particular chemical on ecosystems to levels accepted by the community, which is not necessarily the same as protecting the integrity of the ecosystems. Thus, some chemicals can be tolerated if the benefits they have for our lifestyle offset their impacts on specific environments, e.g. agricultural areas that are sacrificed for the "common good". The task of assessing a chemical's risk in this context is not easy, as the value of the environment cannot be measured directly in terms of money [169], whereas the benefits of agricultural and industrial production are more tangible. Not surprisingly, there are many discrepancies between the regulations of various countries, even for the same agricultural chemical [170], because they are based on different human judgements. In practice, many hazardous agrochemicals are registered as long as certain precautions and management options are put in place. The same occurs with pharmaceuticals and industrial chemicals.
Other important questions to be asked during the evaluation process are how much of the chemical will be used and how often, since the concentrations or doses in the environment depend entirely on the application rate and the frequency of usage. Residue loads of persistent agrochemicals may accumulate in soil over the years until their concentrations reach sublethal or lethal levels for certain organisms and, in the case of herbicides, some agricultural crops. In such cases, regulators may suggest restricting the application of such products to avoid their accumulation in the environment.
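The build-up of a persistent residue under repeated applications can be sketched with a textbook first-order dissipation model: each new dose adds to the fraction of all previous doses that has not yet degraded, so soil residues climb towards a plateau determined by the dose and the half-life. The doses and half-life below are purely illustrative, and real field dissipation is rarely this tidy.

```python
from math import exp, log

def residue_after_n_years(dose, half_life_days, years, interval_days=365.0):
    """Soil residue (same units as dose) just after the n-th application,
    assuming first-order dissipation between equally spaced applications."""
    # fraction of residue surviving one application interval
    r = exp(-log(2) / half_life_days * interval_days)
    # geometric series: dose * (1 + r + r**2 + ... + r**(n-1))
    return dose * (1 - r ** years) / (1 - r)

# e.g. 1.0 mg/kg applied yearly, with a one-year half-life (r = 0.5):
for n in (1, 5, 20):
    print(n, round(residue_after_n_years(1.0, 365.0, n), 2))
# residues approach the plateau dose / (1 - r) = 2.0 mg/kg
```

With a one-year half-life the residue plateaus at twice the annual dose, but a compound with, say, a four-year half-life plateaus at almost six times the dose, which is why regulators pay such close attention to application frequency for persistent products.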
A final question, often raised by risk assessors of agrochemicals, refers to the different behaviours that a given compound may have in regions of the world that vary in climatic and other environmental conditions. Should the special conditions of those environments be taken into consideration when deciding whether a pesticide can be used in a country? Currently, the bulk of insecticides are used in developing countries, most of which are located in the tropical or subtropical regions of the world, where warm and humid conditions may affect their dissipation and the exposure of organisms. Would the risk of such chemicals in tropical countries be lower than their risk in temperate and cold countries? It appears that most chemicals pose a similar risk in both regions, because the increased losses by microbial degradation or volatilisation are usually counterbalanced by greater desorption and movement of residues into the aquatic environment [171]. Using tropical taxa for toxicity testing in those countries has also been suggested, on the argument that tropical species may differ in susceptibility to pesticides. However, most tropical species do not differ in sensitivity from their temperate counterparts with regard to agrochemicals [163], unlike what happens with metal contaminants [172,173]. Overall, there is no convincing evidence that, in regard to environmental impacts of pesticides, agrochemical products used in tropical countries should be evaluated differently from those used anywhere else.

Risk management
An important issue that stems from the evaluation of risks of agrochemicals is the range of management options available to mitigate or eliminate the hazards that these products introduce into the environment. Chemical companies should be responsible for establishing safety guidelines that ensure continuous usage of their products does not harm the environment. These guidelines should be reviewed by the regulatory authorities together with all other information relevant to the risk evaluation of chemicals.
The most basic management option for any agrochemical concerns the application rate and frequency of usage of a product. Current pesticide labels specify the highest application rates to be used on specific crops, but these are based on the efficacy of the chemical in protecting such crops against insect pests, diseases or competition from weeds, rather than on environmental grounds. One can assume that if the recommended rates are approved by regulatory authorities, they do not pose serious harm to the environment; but is this always true? For newly developed chemicals, the manufacturing company must prove this is the case. However, for most chemicals already on the market, an ERA should be carried out with the aim of proving the safety of such rates. When the environment is compromised, restrictions on the frequency of application should be established to ensure a product is not applied ad libitum.
Other management options, often considered in IPM programs, may include i) restrictions on the types of crops to which the chemical products can be applied; ii) limits on the frequency of application of a product during a cropping season; iii) specific instructions about the mode of application, e.g. type of spray nozzles, planters, etc.; iv) timing of applications to avoid harm to bees; v) establishment of vegetated barriers or other types of buffer zones to capture spray drift that could fall onto neighbouring land or water bodies; and vi) establishment of vegetated ponds and/or sediment traps to capture runoff loaded with residues and allow their natural remediation [174]. There is ample evidence that some of these measures are very effective in reducing the impacts of pesticides in the environment [175,176].
Finally, most product labels include some basic management practices. However, in many developing countries these instructions are ignored because they are written in foreign languages or, more commonly, because of the illiteracy of farmers and applicators [177]. Label instructions are, therefore, insufficient to ensure proper management of toxic chemicals such as pesticides.

Conclusions
Determining the risk of chemicals in the environment requires a complex evaluation of many different lines of evidence. We have examined here the processes leading to such an evaluation and highlighted the main deficiencies of the current systems. Bearing in mind that no system is perfect, as the risk process is essentially a human evaluation, we have also pointed out new approaches that could help improve chemical assessments in the future.
One area that will always be crucial to any chemical risk assessment is the accuracy and reliability of chemical and toxicity data. Without proper data, the evaluations would fail to determine realistic risks under possible scenarios. In this context, modelled predictions of exposure must be validated with factual data, whatever the conditions. A second area for improvement is the science underpinning the toxicity of chemicals. Toxicology as a science related to human health has a long history, but its application to environmental science lags behind. Part of the problem is that ecotoxicology relies too much on acute toxicity data obtained under laboratory conditions. In the absence of any other information, such data are used to evaluate risks in the first tier of the risk assessment, so mistakes are commonly made. Although this shortcoming was recognised long ago, the inclusion of new risk approaches that take into consideration long-term, sublethal, endocrine and population effects is met with resistance, perhaps because the new information thus gathered cannot be easily analysed using mathematical models. The study of time-dependent toxicity has made great advances in recent decades, and yet its incorporation into risk assessment is lagging. These aspects need more attention from risk assessors and regulators of new agrochemicals if risk assessments are to become more realistic.

Author details
Francisco