Screening biochemical targets with compact chemical fragments has earned a reputation as a successful early‐stage drug discovery approach, thanks to recent drug approvals. Because fragments have weak initial target affinities, sensitive biophysical technologies (NMR, SPR, thermal shift, ITC, and X‐ray crystallography) are required to accommodate the practical limits of going smaller. Optimized fragment biophysical screening approaches now routinely allow rapid identification of fragments with high binding efficiencies. The aim of this chapter is to introduce fragment library selection and to discuss the suitability of screening approaches adapted to lower‐throughput biophysical techniques. A general description of the metrics used to progress fragment hits, the need for orthogonal assay testing, and guidance on potential pitfalls are included to assist scientists considering initiating their own fragment discovery program.
- fragment‐based drug discovery
- biophysical screening
- efficiency metrics
- nuclear magnetic resonance spectroscopy
- surface plasmon resonance
- thermal shift assay
- isothermal titration calorimetry
- fluorescence polarization
- X‐ray crystallography
“Going small” with fragment‐based drug discovery (FBDD) denotes using low molecular weight compounds to probe a therapeutic target. This also includes using smaller tailored libraries and lower screening throughput in more carefully measured assays. This is a consequence of being reliant on biophysical technologies, as compared to classical high‐throughput screening (HTS) approaches performed in 384‐well plates that detect product formation. FBDD at its core is target‐based drug discovery, but the initial approach of fragment screening differs from standard lead‐like screening, which utilizes much larger higher molecular weight screening libraries.
In theory, modern target‐based drug discovery screening libraries are designed to maximize coverage of chemical space. This is especially important for groups that use high‐throughput screening (HTS) technologies against diverse targets. These target‐based programs tend to rely on screening the highest practical number of chemical entities from their screening libraries, sometimes accumulating millions of compounds. However, with over 166 billion possible synthetically accessible organic molecules containing up to 17 heavy (nonhydrogen) atoms, even the biggest screening libraries cannot statistically represent this vast chemical space.
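To put those numbers in perspective, a back‐of‐the‐envelope sketch helps; the 2‐million‐compound library size below is an illustrative assumption (the text says only "millions"), while the 166 billion figure is the enumerated chemical space cited above:

```python
# Fractional coverage of an enumerated chemical space by a large
# corporate screening library. The library size is an assumption
# for illustration; the space size is the 166 billion molecules
# (up to 17 heavy atoms) cited in the text.
library_size = 2_000_000
chemical_space = 166_000_000_000

coverage = library_size / chemical_space
print(f"Coverage: {coverage:.2e} ({coverage * 100:.5f}%)")
```

Even under this generous assumption, the library samples roughly one molecule in a hundred thousand of the accessible space.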
From modeling described in a 2001 article, Hann and colleagues showed how higher molecular complexity (i.e., ligand size) significantly decreases the probability of protein‐site molecular recognition. The authors outlined this as a primary shortcoming of the combinatorial chemistry/HTS approach to drug discovery and promoted the idea that screening smaller libraries of reduced complexity could be a complementary approach. Thus, by reducing compound complexity, FBDD evades the scaffold bias that develops in large lead‐like screening libraries.
In this chapter, the reader should come to appreciate how going small is an intrinsically orthogonal screening platform that is easily integrated with established biophysical techniques. The methods, examples, and citations discussed are intended to guide a newcomer to FBDD, specifically scientists who have some prior experience with drug screening principles.
2. The rise of fragments as a screening ideology
Fragment‐based drug discovery began as a concept published in 1981 by biochemist William P. Jencks, who characterized the binding affinities of molecules to proteins as being built from components. Citing several examples, he described how the Gibbs free energy of a two‐component molecule binding to a protein could be expressed as the sum of the "intrinsic binding energies" of the individual components plus a "connection Gibbs energy" arising from linking them. A key aspect of the concept is that each component's contribution to the observed binding energy would be significant in relative terms but weak in absolute terms, creating a challenge for detection. Jencks' concept was obscured at the time by the excitement around combi‐chem/HTS drug discovery approaches. In 1996, Jencks' conceptual observations were experimentally validated by the team of Fesik, which successfully produced a drug lead by linking fragment hits detected by a sensitive two‐dimensional nuclear magnetic resonance (NMR) spectroscopy binding assay (often referred to as SAR‐by‐NMR). Since then, fragment‐based drug discovery has energized the pharmaceutical industry. In 2011, a drug for BRAF‐mutated metastatic melanoma became the first FDA‐approved drug discovered via FBDD. A second, for chronic lymphocytic leukemia, was approved in 2016, with several related candidates currently making their way through the pipeline.
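Reconstructed from Jencks' description, the additivity relation for a two‐component molecule A–B can be written as:

```latex
\[
  \Delta G^{\circ}_{AB} \;=\; \Delta G^{i}_{A} \;+\; \Delta G^{i}_{B} \;+\; \Delta G^{s}
\]
```

where \(\Delta G^{i}_{A}\) and \(\Delta G^{i}_{B}\) are the intrinsic binding energies of the two components and \(\Delta G^{s}\) is their connection Gibbs energy. Because \(\Delta G^{s}\) can be favorable, a properly linked pair of weak binders may bind far more tightly than either component alone, which is the thermodynamic rationale behind fragment linking.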
3. Fragment primary screening
At present, NMR, surface plasmon resonance (SPR), thermal shift assay (TSA), isothermal titration calorimetry (ITC), and X‐ray crystallography (Sections 3.2–3.4, 4.1 and 4.2) are the most widely used techniques in FBDD. Given their respective throughput capacities, NMR, SPR, and TSA are often used as primary screening technologies, with ITC and X‐ray crystallography reserved for secondary screening. Figure 1 shows the effective ligand affinity coverage of each technique, which partly demonstrates its utility in FBDD. All biophysical screening techniques work best in combination, and individual hits need careful orthogonal validation. Crystallography is the gold standard, as the information gained allows rapid ligand advancement; however, its application as a primary screen is often impractical due to resource and time limitations. High‐concentration biochemical inhibition or fluorescence polarization (FP) assays can in some cases be used for orthogonal validation of primary screening hits where crystallography is not an option.
The workflow represented in Figure 2 shows how these techniques might be organized into a traditional screening paradigm. It is common to use some of these techniques in parallel, particularly at the secondary screening stage since there are always fewer compounds to evaluate. Pragmatically, scientists should obtain structural insights at this secondary stage to validate primary screening results. Structural information is preferred when deciding to progress a fragment to the hit generation stage (discussed further in Section 5.2).
3.1. Fragment library design
An obvious first step for any primary fragment screen is to source compounds for the screen. However, wielding a proper fragment library as a tool for hit generation conflicts with traditional lead‐like screening methods. Central to the conflict is the routine use of high compound concentrations to accommodate the expected low binding affinities. Practical pitfalls, such as compound aggregation, compound precipitation, dramatic pH changes, detector saturation, and nonspecific interactions, are a minor concern at the low micromolar concentrations used when screening larger, lead‐like molecules, but they can become major issues at the millimolar concentrations used in fragment screening.
In 2003, scientists at Astex Pharmaceuticals published a synopsis of their emerging fragment drug‐discovery program and noted that the average physical properties of their fragment hits conveniently clustered around the number 3 (molecular weight <300, hydrogen bond donors ≤3, hydrogen bond acceptors ≤3, and ClogP ≤3). As a complement to Lipinski's rule of 5 (RO5), the fragment rule of 3 (RO3) became a convenient target toward which chemical suppliers built fragment libraries from their existing stores. Ten years later, Astex Pharmaceuticals revised their position, stating that, like the RO5 described by Lipinski, RO3 was more of a guideline, and that their refined library consisted of fragments with fewer than 17 heavy atoms and molecular mass <230 Daltons. This exemplifies how well‐constructed fragment libraries should rely heavily on practicality to be effective tools, and it speaks to the need for additional scrutiny of (in order of importance) solubility, stability, and reactivity for effective fragment screening.
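As a minimal sketch, the RO3 guideline can be applied as a simple property filter. The helper function and the example descriptor values below are hypothetical; in practice the descriptors would be precomputed by a cheminformatics toolkit:

```python
# Hypothetical rule-of-three (RO3) filter over precomputed descriptors.
# Thresholds follow the guideline quoted in the text:
# MW < 300, H-bond donors <= 3, H-bond acceptors <= 3, ClogP <= 3.
def passes_ro3(mw, hbd, hba, clogp):
    return mw < 300 and hbd <= 3 and hba <= 3 and clogp <= 3

# Illustrative entries, not real library compounds.
library = [
    {"name": "frag-A", "mw": 182.2, "hbd": 1, "hba": 2, "clogp": 1.4},
    {"name": "frag-B", "mw": 342.4, "hbd": 2, "hba": 4, "clogp": 3.8},  # fails MW, HBA, ClogP
]

hits = [f["name"] for f in library
        if passes_ro3(f["mw"], f["hbd"], f["hba"], f["clogp"])]
print(hits)  # ['frag-A']
```

A real library build would layer further checks (solubility, stability, reactivity) on top of this property gate, in line with the priorities stated above.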
Commercial and nonprofit access to fragment libraries exists, and several examples have been characterized [12, 13]. A custom library allows for existing libraries to be used in addition to catalog resources with some tailoring based on specific cheminformatics principles, such as optimization of the representative chemical space, avoidance of nuisance compounds, and guidance with pharmacophore models. One early‐stage example of note is the Global Fragment Initiative (GFI) by Pfizer, wherein the library was built from compounds on hand, purchased, and synthesized. Each member of the GFI library was rigorously characterized and empirically tested for aqueous solubility up to 1 mM, with the intent of using the library for multiple biophysical techniques. How a fragment library is procured will depend on an acceptable balance of convenience and cost, but it is highly recommended that the end user have methods in place to reliably assess each compound for practical use at high concentrations. For an example of this workflow, see Figure 3.
Fragment hits can have limited traction toward chemical expansion or linking as a consequence of their size. As a safeguard, Merck strategically redesigned its general FBDD library to accommodate more structure‐activity relationships and to fill structural gaps by visual inspection. The purposeful move away from diversity in its general library was a concerted effort of cheminformatics and crowdsourcing of medicinal chemists to gain pipeline traction. The strategy leads to larger general screening libraries and effectively restricts widespread application to appropriately equipped programs. Regardless, any FBDD program design must account for this pitfall and ensure the potential to develop fragment hits through chemistry or catalogs.
Finally, Pan‐Assay Interference Compounds (PAINS) are a well‐known class of chemical entities that show promiscuous activity across multiple assays and protein targets; their practical impact on FBDD has been thoroughly reviewed, and related cheminformatics filters are available via the Internet. The reduced chemical complexity of fragments inherently diminishes the number of "worst offenders" in a library, and bad fragments are often quickly identified and triaged from screening collections.
In conclusion, for FBDD, it is prudent to prioritize highly soluble fragment libraries with a diversity of ring shapes that can match a broad range of hydrogen bonding interactions from the protein target. A minimalistic approach would be to eliminate only the most predictable fragment "show stoppers," such as those containing toxicophores subject to xenobiotic metabolism, since it is often easy to scaffold‐hop in the early stages of FBDD to remove unwanted motifs.
3.2. Nuclear magnetic resonance
Modern NMR spectroscopy is best known for enabling the three‐dimensional characterization of ordered molecular structures in solution and was the first technique to be used for fragment screening. It is also one of the few biophysical techniques that can easily be switched between perspectives of the small molecule and the protein at run time. A growing list of NMR experiments used in fragment screening can help validate hits without using additional biophysical techniques.
Samples are typically prepared in an aqueous buffer containing a small percentage of D2O for the spectrometer lock, with fragments added from concentrated deuterated DMSO stock solutions and the protein present at low micromolar concentrations for ligand‐detected experiments.
Spectroscopy experiments that indicate binding from the fragment's perspective are structurally less informative but have a higher dynamic range than protein‐detected experiments. Because significant cost savings can be made by using unlabeled protein, early‐stage, budget‐conscious programs may focus on the ligand‐detected suite of experiments shown in Figure 4, often acquiring them in parallel for each sample. The saturation transfer difference (STD) experiment can provide a binding‐epitope map, as magnetization can only travel through the protein to the bound fragment. The epitope map enables a scientist using unlabeled protein to identify the portions of the fragment in closest proximity to the protein and, conversely, the portions available for expansion or linking during hit generation. Specialized experiments, such as the interligand nuclear Overhauser effect and target immobilized NMR screening (ILOE and TINS, respectively), are best reserved for the study of difficult proteins or for competition experiments. From the protein perspective, several variants of the two‐dimensional HSQC experiment (typically with 1H nuclei measured directly and X nuclei indirectly) help to disperse the numerous signals of a protein target; the choice among them depends on the protein isotopic enrichment strategy. TROSY (another HSQC variant) can be used for large, usually perdeuterated proteins but requires a high‐field instrument (i.e., 800 MHz and up); the discovery of methods to selectively label methyl groups has simplified the resulting spectra and enabled screening on lower‐field instruments (e.g., 500 MHz).
To reduce protein consumption, NMR FBDD relies on the screening of equimolar fragment mixtures. Screening is immediately followed by mixture dereplication, usually involving manual interpretation of spectra, although with careful sample preparation and a well‐curated fragment spectral database, hits can be identified by software.
With the exception of WaterLOGSY, titration and analysis of the resulting signals from these NMR experiments can provide binding affinities for fragments with reasonable accuracy. In addition to the previously described STD experiment, 19F NMR screening by filtered transverse relaxation (T2), a filter also referred to as a Carr‐Purcell‐Meiboom‐Gill (CPMG) scheme, can be a powerful option when used in competition with a known fluorinated ligand of previously characterized binding affinity.
There are relatively few drawbacks to using available NMR facilities in a FBDD program considering the method's ability to contribute to every aspect of the workflow, such as library quality control and hit generation. The two drawbacks that are most often cited are the speed of the screen from sample preparation to data analysis and the demands on protein production for screening by NMR.
3.3. Surface plasmon resonance
Surface plasmon resonance shares the spotlight with NMR as a major screening technique for FBDD programs. The hurdles that come with using an immobilized protein for screening are counterbalanced by increased sensitivity and immediate access to kinetics data. Although absolute binding kinetics are not assured when dealing with weak affinities, for well‐optimized experiments, obtaining reliable association and dissociation rate constants, and thus equilibrium dissociation constants (KD = koff/kon), is achievable.
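The kinetics‐to‐affinity relationship just mentioned is a one‐line calculation; the rate constants below are invented but of a magnitude typical for a weak fragment binder:

```python
# Equilibrium dissociation constant from SPR rate constants: KD = koff / kon.
# Values are illustrative of a weak fragment binder, not measured data.
k_on = 1.0e4   # association rate constant, M^-1 s^-1
k_off = 10.0   # dissociation rate constant, s^-1

K_D = k_off / k_on  # in molar
print(f"KD = {K_D * 1e3:.1f} mM")  # -> 1.0 mM, squarely in the fragment range
```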
These experiments generally have long lead‐in times, as protein immobilization chemistry and buffer conditions need to be carefully optimized prior to screening. Setting aside routine instrument maintenance and screen validation using appropriate controls, the traditional stages of each experiment are buffer and compound preparation, target immobilization, start‐up samples (i.e., buffer match, blanks, and positive controls), the fragment primary screen, data reduction and analysis, hit selection, and secondary dose response of hits.
Successful FBDD programs focus most practical attention on the preparation of the chip immobilization surface, ensuring stability between experiments. This focus on stability assumes that the loading conditions for the protein have been standardized and validated across multiple coupling methods, such as covalent amine coupling (to the amine terminus or lysine side chains) or the less FBDD‐amenable coupling via protein tags (e.g., biotinylation or poly‐histidine).
Once the target is loaded, screening samples should be prepared as single‐point concentrations, for example 100 μM. The concentrations can be variable, but the expectation is that they are carefully prepared to avoid common problems such as precipitation and aggregate formation that may produce nonstoichiometric binding to the target. The use of detergents is allowable for the sake of the target but further complicates buffer matching in the reference channel. Troublesome fragments identified by their atypical sensorgrams are usually triaged from screening collections.
Once the data have been collected, data reduction and normalization follow; it takes some practice to prosecute these steps efficiently. An experienced scientist can perform a first pass of the entire data set to quickly exclude sensorgrams for which a reasonable curve fit is unlikely due to compound incompatibilities or systemic problems with the instrument (Figure 5). Next, data reduction can seem like a tedious process of simply "cleaning" the sensorgrams; but in addition to aligning the injection time points, it checks the soundness of the blank injections, which yield important problem‐solving data. Finally, the configuration of most instruments requires multiple runs to cover an entire fragment library, so run‐to‐run variation naturally exists. Normalization seeks to enable the comparison of experimental responses regardless of target density, binding activity, molecular weight, and buffer mismatch. The most expedient way to achieve this is to use the known concentrations and binding affinities of the control compound injections to convert the response units (RU) to a relative occupancy, expressed as a fraction of the theoretical maximal response (Rmax).
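The normalization step can be sketched as follows; the numbers and the simple 1:1 stoichiometry are assumptions for illustration:

```python
# Convert an observed SPR response to a relative occupancy.
# For a 1:1 interaction, the theoretical maximal response is
#   Rmax = RU_immobilized * (MW_analyte / MW_ligand)
# All values below are illustrative, not measured data.
ru_immobilized = 5000.0   # RU of protein captured on the chip surface
mw_protein = 50_000.0     # Da
mw_fragment = 250.0       # Da
ru_observed = 10.0        # response at the report point

r_max = ru_immobilized * (mw_fragment / mw_protein)  # -> 25.0 RU
occupancy = ru_observed / r_max                      # -> 0.4 (40% of Rmax)
print(f"Rmax = {r_max:.1f} RU, occupancy = {occupancy:.0%}")
```

Expressing hits as a fraction of Rmax is what makes responses comparable across analytes of different molecular weight and across chips with different target densities.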
Recently developed methods have increased the throughput of dose response studies, providing kinetic information earlier in the screening workflow and practically eliminating the need for a secondary set of experiments for hits. For example, a nonbinding diluent (e.g., 20% w/v sucrose) can be used to create a range of compound concentrations from the same sample, or individually prepared sample concentrations can be sequentially injected (e.g., "single cycle kinetics" or kinetic titration). Both methods save time by avoiding multiple regeneration steps. Further gains have been realized by using Taylor dispersion injections in a longer flow path (e.g., OneStep™, SensiQ Technologies Inc.) to deliberately produce a gradient of analyte concentration flowing across the chip surface: by modeling this dispersion from a known initial concentration, the same kinetic data can be obtained during the injection phase of the experiment.
3.4. Thermal shift assay
Proteins exist in a thermodynamic equilibrium between folded and unfolded states. As the temperature of the system increases, the ratio of folded to unfolded protein shifts toward the unfolded state, making it possible to determine the temperature at which half of the protein is unfolded. This point is referred to as the melting temperature of the protein (Tm); a ligand that binds and stabilizes the folded state raises the Tm, and this shift (ΔTm) is the readout of the assay.
Thermal shift has since been revised and optimized for use as an economical fragment screening method. A typical TSA has minimal requirements for the quantity of the target protein. A pilot assay should be completed wherein the concentration of protein is varied over a given range, as running TSA with an excess of protein can saturate the detector. Conversely, too little protein will give a flat curve and negatively affect the signal‐to‐noise ratio of the resulting data (Figure 7). Thermal shift assays are commonly completed with 1–10 μM target protein, which should be as pure as possible; gross impurities in the protein sample can produce multiple melting transitions, reducing the accuracy of the resulting data.
In addition to protein, TSA requires an environmentally sensitive dye. Historically, naphthylamine sulfonic acid dyes (such as 1,8‐ANS, 2,6‐ANS, or 2,6‐TNS) were used for this purpose. Currently, SYPRO Orange is used more commonly, as its fluorescence (Ex/Em of 300, 470/570 nm) is better adapted to qPCR instruments. This dye is supplied by the vendor as a 5000X concentrate in DMSO. As with protein concentration, a pilot assay should be conducted to determine the appropriate concentration of SYPRO Orange for a given assay; in practice, many assays can be run effectively at a 5X concentration. After the appropriate dye concentration has been established, a master mix of dye and buffer can be applied to the assay plate. At this point the plate is ready for the application of a fragment library.
As in many fragment‐screening assays, the quality of the library is paramount. In the assay development stage, a pilot assay should be run in which the concentration of the solubilizing agent is varied over a range to define any effects the agent might have on the stability of the protein. If time and fragment availability allow, a screen of fragments alone (without protein) should be performed to check for intrinsic fluorescence; this additional screen can yield data useful for removing problem compounds from the library. With an appropriate fragment library in hand, fragments can be applied to the screening plates, typically by using a pin tool or similar device capable of accurately delivering small volumes. After the plate has been exposed to the fragment library, it should be sealed with a fluorescently inert plate seal to avoid sample evaporation during the course of the assay. The assay plate should then be centrifuged to remove any air pockets from the samples, as these might reduce the quality of the data.
It has been established that TSA can be completed in a standard qPCR instrument. The minimal requirement is the ability to evenly heat the samples over a suitable range of temperatures while recording fluorescence. Deconvolution of the resulting data output varies with instrumentation; this step can be time consuming, and automation of data processing is helpful if large‐scale projects are planned. Results should be recorded as fluorescence units at each temperature in each well. This information can then be moved into data analysis software for curve fitting and determination of melting temperatures.
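A minimal sketch of the Tm‐extraction step, using the common first‐derivative method on a synthetic melt curve (the curve parameters below are invented for illustration):

```python
import numpy as np

# Synthetic melt curve: a sigmoidal unfolding transition centered at 55 C.
temps = np.arange(30.0, 90.0, 0.5)                       # deg C
fluorescence = 1.0 / (1.0 + np.exp(-(temps - 55.0) / 2.0))

# First-derivative method: Tm is the temperature of maximal dF/dT.
dF_dT = np.gradient(fluorescence, temps)
tm = temps[np.argmax(dF_dT)]
print(f"Tm = {tm:.1f} C")  # -> 55.0 C for this synthetic curve
```

Real data are noisier, so instrument software usually smooths the curve or fits a Boltzmann sigmoid before taking the derivative; the principle is the same.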
There are many benefits to using TSA for the initial biophysical screening of a fragment library. First, the assay does not rely on the biochemical activity of the target and can be performed with limited knowledge of the target's function, which is beneficial for FBLD because fragment binding often does not yield a measurable biochemical result. Additionally, TSA requires only a small amount of minimally stable protein whose thermal stability can be tracked in the presence and absence of the ligand. Thermal shift assays can be completed using widely available RT‐PCR instruments and are relatively simple to perform with limited training, reducing the up‐front cost of implementation. This medium‐ to high‐throughput assay typically enables the testing of up to 384 compounds in only 30–40 min.
Thermal shift assay is not a silver bullet, however. Some proteins do not exhibit a clean two‐state melting transition, hydrophobic or surfactant‐like compounds can interact directly with the dye, and small or negative thermal shifts are difficult to interpret, so ΔTm values rank stabilizers only approximately and do not translate directly into binding affinities. Hits should therefore be confirmed with an orthogonal technique.
3.5. Fragment validation
For the purposes of validation, a “good” fragment hit should be spatially described within a known target site via crystal structure, two‐dimensional NMR studies, or at least, a ligand epitope map. The structural information enables the chemical expansion or linking of fragments during hit generation.
Hit rates in fragment‐based screening are typically high, frequently at least an order of magnitude higher than those of lead‐like screening. Hits in the primary screen can be narrowed by using an orthogonal validation technique of comparable throughput. Considering the numerous techniques available at the primary screening stage, the path to validation can be variable. Using an orthogonal approach to primary screening assumes that fragments hit by both techniques will translate successfully in secondary screening.
Using multiple techniques for fragment primary screening may yield diminishing returns. For example, after solving 71 crystal structures from soaking 361 fragments and statistically comparing the results against other fragment screening techniques, one group found that nearly half of the 71 "good" fragment hits identified by crystallography were missed by the other techniques. When the other techniques were used in combination, hit validation statistics were worse still, an effect amplified by including hits that crystallography itself had originally missed. Orthogonal primary screening with at least two techniques therefore still achieves the goal of fragment hit validation; however, if the primary screening techniques do not provide meaningful structural characterization, then NMR or crystallography is required in a secondary screening capacity.
4. Fragment secondary screening
4.1. X‐ray crystallography
Several companies, including Astex Pharmaceuticals, SGX Pharmaceuticals, Plexxikon, and Abbott, have effectively used structural biology in their fragment‐based lead discovery efforts. This valuable tool avoids the pitfalls of false‐positive results and nonspecific binding that may result from other fragment‐screening methods. Any fragment hit discovered via crystallography is inherently validated for the given target. Crystallography gives a clear picture of the fragment binding pose within the active site. This information can greatly facilitate the design of follow‐up libraries based on the initial hit.
Using crystallography as a method of fragment‐based lead discovery has some limitations. It has long been associated with slow throughput. Additionally, some targets, such as membrane‐associated proteins, do not readily lend themselves to crystallization. Crystallography often requires extensive and time‐consuming efforts to arrive at crystallization conditions suitable for fragment soaking experiments. Even when these conditions have been determined, ready access to a suitable beam line and expertise in crystallography and data reduction can be hurdles in the rest of the lead discovery process. Protein in a crystal lattice does not completely reflect a physiological environment, and this artificial environment can lead to artifacts in the data and an inaccurate picture of fragment binding. Although a crystal structure is rich in information, it does not reflect the potency or the biochemical activity of the bound fragment. Even with a wealth of structural information, it is not possible to rank hits on these criteria; orthogonal assays are critical for these purposes, and crystallography alone will not suffice.
The process of generating a fragment structure typically follows a set path. First, the protein is purified. Then, crystallization conditions for the purified protein are determined. Crystals can then either be grown in the presence of a fragment or have the fragment soaked into a preexisting crystal. The resulting crystals are flash frozen and used for data collection either in house or at a synchrotron beam line. The data are analyzed to generate a three‐dimensional model of the fragment binding site within the target protein. This model can be used in iterative design efforts to grow the fragment into the binding site or link it to other fragments in neighboring sites.
Perhaps one of the most critical hurdles to successfully implementing structural biology in the FBLD process is having a suitable supply of the target protein. In most cases, protein used for X‐ray crystallography must be highly pure and available in quantity; a typical screen for crystallization conditions is completed using protein at concentrations as high as 20 mg/ml. If initial crystal screening efforts using the native protein are not effective, it may be necessary to modify the target by removing mobile loop regions or trimming the terminal ends. Creating multiple variants is commonly a valuable step in generating robust, high‐resolution structural data.
Once protein is available with sufficient yield and purity, screening for crystallization conditions begins. This is typically performed as a high‐throughput screen with as many as 1000 distinct conditions in a single experiment. Many commercially available sparse matrix and additive crystallization screens use conditions that have historically yielded crystals. When these experiments yield a hit, the conditions can then be optimized to yield larger, highly reproducible crystals. Crystals suitable for fragment soaking should diffract to fairly high resolution (<2 Å). Starting with a higher resolution structure improves the resulting maps and increases the chances of producing an accurate model of fragment binding. If multiple crystallization conditions are available, it is best to choose the one that more closely represents physiological conditions, even at a slight cost to resolution. This trade‐off ensures that the fragments with the best chance of advancement are prioritized.
Fragments suitable for X‐ray crystallography benefit from good solubility, as insoluble fragments have a low probability of yielding a structure with suitable occupancy of the ligand. Compounds are soaked at high concentrations to improve the chances of high occupancy within the structure; given this, a fragment should have a solubility of at least 1 mM. SGX Pharmaceuticals benefited from generating a brominated fragment library, which enabled detection of anomalous scatter as an indication of successful soaking, streamlining the data collection process. To further increase throughput, fragments were soaked in mixtures composed of fragments with diverse shapes, and the resulting structures were deconvoluted based on the shape of the ligand in the active site. Fragment mixtures do, however, risk lowering the effective concentration of each individual fragment; because high fragment concentration is what drives high‐occupancy crystal structures, this dilution can be detrimental to the experiment. In addition, mixtures increase the chances of damaging the crystal in the soaking process, and fragments within the mix can interact with one another, skewing the results. In one report, fragment mixtures yielded 11 structures, whereas individual soaking experiments yielded 20. These data suggest that if time and resources permit, soaking individual fragments is preferable to using mixtures.
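The soaking‐concentration argument above can be made concrete with the standard single‐site occupancy relation; the Kd value here is an assumption typical of a weak fragment hit:

```python
# Fractional occupancy of a single binding site: theta = [L] / ([L] + Kd).
# A fragment with Kd = 1 mM (an illustrative value) must be soaked at or
# above its Kd to populate the site appreciably in the crystal.
def occupancy(ligand_conc_M, kd_M):
    return ligand_conc_M / (ligand_conc_M + kd_M)

kd = 1e-3  # 1 mM, typical of a weak fragment
for conc in (1e-4, 1e-3, 1e-2):  # 0.1, 1, and 10 mM soaks
    print(f"[L] = {conc * 1e3:g} mM -> occupancy = {occupancy(conc, kd):.0%}")
# 0.1 mM -> 9%, 1 mM -> 50%, 10 mM -> 91%
```

This is why diluting a fragment by pooling it into a mixture directly erodes the occupancy, and hence the interpretability, of the resulting electron density.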
If all other factors fall into place, a data set is collected. Improper treatment of this data set can result in an inaccurate and misleading model of fragment binding. Methods of data reduction and refinement are highly variable, and model building is a continuously evolving process. Certain steps in model building are especially pertinent when dealing with fragments. When searching for ligand density in a map, it is tempting to perform a quick refinement and presume the location of a ligand. This approach is especially hazardous when modeling small fragments. If the map is not of high enough quality when the search for ligands begins, it is possible that water, a poorly resolved side chain, or even highly conserved buffers could masquerade as bound fragments. It is best to perform several rounds of refinement before the ligand hunt begins. Model in any waters and then refine a few more times. If the fragment is bound, a convincing map should take shape, and the creation of an accurate model of the target protein bound to fragments should be possible.
As technology progresses, many of the limitations of structural biology are being addressed. Most notably, throughput is being increased by incorporating automation. As this occurs, obtaining structural data is no longer the rate‐limiting step in lead development. In several cases, automation of this process has improved to the extent that structural studies are successfully being used as a primary screen. Although this approach might not yet be a feasible option in laboratories with limited resources and limited access to beam lines, it holds promise for an improved fragment screening process in the future.
4.2. Isothermal titration calorimetry
Calorimetry measures the thermodynamics of a molecular interaction via observations of heat change in a reaction occurring in an adiabatic (thermodynamically closed) system. In the context of drug discovery, the molecular interaction most commonly measured is the heat of binding of a small molecule to a protein target, although reaction kinetics can be measured under specific circumstances. Measuring the heat associated with a molecular interaction allows direct measurement of the extent of breakage and formation of noncovalent interactions upon complex formation. Using other methods, such as coupled reactions (e.g., product release) and fluorescent binding techniques, the change in enthalpy can only be inferred via the van 't Hoff relationship.
Classically, calorimetry has been applied to the measurement of binding interactions in two ways: isothermal titration calorimetry (ITC), which measures the heat released or absorbed upon binding; and differential scanning calorimetry (DSC), which measures the thermal stabilization of a protein due to binding of a small molecule. These methods offer a very detailed look at the thermodynamics of binding and have been used successfully in optimization after initial fragment hits have been identified. For the purposes of this chapter, discussion will be limited to ITC, as it is more directly applicable to FBDD. As a direct measurement of the heat of binding, ITC allows the researcher to remove the effects of fluorescent tags, antibodies, or coupling chemistry from the investigation of a binding interaction. Because ITC is a solution‐based method, the surface effects that can interfere with binding (often seen with SPR) are not an issue.
Directly determining the thermodynamic components of overall binding allows a researcher to optimize a lead compound for a chosen target through specific binding interactions while minimizing the off‐target effects that often derail drug discovery programs. When combined with X‐ray‐based binding information or applied to the analysis of structure‐activity relationships, ITC can be a powerful tool in drug discovery. The overall binding of a small molecule to a target (as expressed by the Gibbs free energy, ΔG) comprises an enthalpic contribution (ΔH) and an entropic contribution (−TΔS).
The normal range of dissociation constants that can be measured by ITC is from 10 nM to 100 μM. This range can be extended below 1 nM or above 1 mM by using displacement methods [42, 43], although a suitable (and independently characterized) displacement ligand must be identified beforehand. Displacement ITC has not yet gained wide acceptance in drug discovery as of this writing, with most researchers reporting results of direct binding studies. Most fragments have binding affinities in the millimolar range, limiting the applicability of ITC as a screening method. In addition, relatively large amounts of protein are required (usually 0.1–0.3 mg per titration, an amount that adds up quickly across multiple samples and repeats). Each titration also takes a moderate amount of time (45 min–1 h), so for a large number of samples, instrument time becomes a hurdle to using ITC as a screening method.
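To see why these per‐experiment costs preclude primary screening, a back‐of‐the‐envelope budget can be sketched. The per‐run figures below are midpoints of the ranges quoted above; the library size is an illustrative assumption:

```python
def itc_screen_budget(n_fragments: int,
                      mg_protein_per_run: float = 0.2,  # midpoint of 0.1-0.3 mg
                      hours_per_run: float = 0.875):    # midpoint of 45 min-1 h
    """Total protein (mg) and serial instrument time (h) for a direct ITC screen,
    assuming one titration per fragment and no repeats."""
    return n_fragments * mg_protein_per_run, n_fragments * hours_per_run

# A modest 1000-fragment library: ~200 mg of purified protein and
# ~875 h (over a month) of continuous instrument time, before any repeats.
protein_mg, hours = itc_screen_budget(1000)
```

Even without repeats or controls, the totals dwarf what most laboratories can commit, which is why ITC is typically reserved for the smaller compound sets found downstream of a primary screen.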
For these reasons, ITC is usually brought into the drug discovery process after screening, as part of hit validation and lead optimization. Fewer compounds are involved at that stage, allowing more focus on the large amount of information provided by ITC [44–46]. After a compound of interest is identified, a small set of structurally similar compounds can be purchased or synthesized to gain insight into the nature of binding to a target [47, 48]. Here ITC offers the most benefit, as small molecules can be identified that maximize favorable enthalpic interactions with the target while minimizing entropic penalties.
Recent research into high‐throughput calorimetry offers solutions for researchers wanting to incorporate calorimetry at an earlier stage in their screening cascade. Work toward the so‐called “lab on a chip” has led to the development of both microfluidic and droplet‐based systems [50–52]. The droplet‐based system has been applied to both binding and kinetics measurements.
4.3. Fluorescence polarization
Biochemical screening is not a typical choice for the primary screening of fragments, but it can be used to verify inhibition of function and inhibition by a known mechanism, which may help discriminate against fragments binding at alternative target sites.
Fluorescence polarization (FP)‐based assays are an option in FBDD when preliminary information about a target, such as known small‐molecule binders, is available. FP assays are competition assays in that they indirectly measure the effect of a compound on the binding of an enzyme to a fluorescent probe. A fluorescent probe ideally starts out as a small molecule that binds tightly to an enzyme with known stoichiometry; this small molecule is chemically modified by the addition of a fluorescent label via an aliphatic or polyol linker to generate the probe. Signal readout, represented in millipolarization units (mP), is calculated by measuring the plane‐polarized emission through two filters (perpendicular and parallel to the plane of the incident light) and computing the normalized difference between the parallel and perpendicular intensities [53, 54]. In an FP assay, a plate reader measures the difference in relative tumbling between free probe (rapid tumbling, leading to more depolarized emission and thus a low FP signal) and probe bound to a protein (slow tumbling, with emission more ordered relative to the incident light, leading to a high FP signal).
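The mP readout described above reduces to a one‐line calculation; a minimal sketch, with illustrative intensity values:

```python
def millipolarization(i_parallel: float, i_perpendicular: float) -> float:
    """FP signal in millipolarization units (mP): the normalized difference
    between emission intensities measured through filters parallel and
    perpendicular to the plane of the polarized excitation light."""
    return 1000.0 * (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

# Free probe tumbles rapidly, so emission is largely depolarized and the
# two filtered intensities are nearly equal -> low mP.
free_probe = millipolarization(105.0, 95.0)    # 50 mP
# Protein-bound probe tumbles slowly, so emission stays polarized -> high mP.
bound_probe = millipolarization(160.0, 40.0)   # 600 mP
```

In a competition experiment, a fragment that displaces the probe shifts the signal from the bound value back toward the free value.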
Depending on assay design, FP can be applied to either binding or activity assays, with ready‐to‐use kits available from BellBrook Labs or Cayman Chemical. Activity assays using FP rely on endpoint detection of binding to a probe and thus are, in effect, modified binding assays. For applications for which no kit is available, a small amount of synthesis can combine a small molecule of interest with any of a wide variety of synthetic fluorophores available from major vendors. When designing a probe for FP, resources such as the Molecular Probes Handbook can be valuable.
FP is less commonly used in fragment screening than perhaps it should be. The binding affinities normally found in fragment libraries fall in the high‐micromolar to low‐millimolar range, previously considered to be outside the range of FP when tight‐binding probes are used. Although still somewhat limited in application by the need for a well‐characterized probe, FP offers an inexpensive way to screen large numbers of compounds and has been used for many years in conventional drug discovery programs. With the advent of fragment‐based techniques, FP is finding use both as a site‐directed primary [59, 60] and secondary [61–64] screening method, and it has been used to validate new methods.
5. Fragment hit progression
For fragments, it is important to see activity in more than one assay, but equivalent potency across assays matters less and is not the best criterion for deciding which molecules to promote. Various combinations of advanced metrics (i.e., efficiencies) and empirically driven evaluations (e.g., PAINS filters, metabolic stability) can help scientists make informed decisions on hit progression. An outside example of the successful application of metrics is sabermetrics. Now widely used in baseball, though heavily scrutinized and still evolving, sabermetrics uses advanced statistics to quantify in‐game performance and improve decision‐making by managers. Just like a game manager, scientists must be aware of the limitations and consequences of following a metric's indication, remaining consistent about hit progression even in the light of occasionally conflicting results. Some experts suggest using ligand efficiencies that can be determined without a calculator, to facilitate discussions. Several metric‐focused reviews are available to consult; one in particular covers a large number of reported hit‐to‐lead programs for a wider perspective.
5.1. Empirically based fragment evaluations
Fragment screening methods virtually ensure that most screens will produce multiple hits for any target. Thus, the challenge to the researcher is not identifying compounds that interfere with a specific enzyme but determining which of many is the best to carry forward. Several metrics have emerged to guide the selection of fragment lead molecules through the drug discovery process; these metrics combine physical properties (e.g., molecular weight, cLogP, polar surface area, number of H‐bond donors and acceptors) with potency data. The earliest of these was simply termed ligand efficiency (LE) and involves dividing the free energy of binding (ΔG) by the number of heavy (nonhydrogen) atoms in the molecule.
Another metric in wide use in FBDD is lipophilic ligand efficiency (LLE), which takes into account both the lipophilicity and the potency of a molecule (commonly computed as pIC50 − cLogP), rewarding potency that is gained without added lipophilicity.
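The two metrics can be sketched directly from their standard definitions (LE = −ΔG/N_heavy with ΔG ≈ RT ln IC50, and LLE = pIC50 − cLogP); the example compound values below are illustrative, not from the text:

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

def ligand_efficiency(ic50_molar: float, heavy_atoms: int,
                      temp_k: float = 298.15) -> float:
    """LE = -dG / N_heavy, approximating dG = RT ln(IC50),
    in kcal/mol per heavy (nonhydrogen) atom."""
    delta_g = R_KCAL * temp_k * math.log(ic50_molar)  # negative for IC50 < 1 M
    return -delta_g / heavy_atoms

def lipophilic_ligand_efficiency(ic50_molar: float, clogp: float) -> float:
    """LLE = pIC50 - cLogP."""
    return -math.log10(ic50_molar) - clogp

# A hypothetical 1 mM fragment with 12 heavy atoms: LE ~ 0.34 kcal/mol per atom,
# comparable to the ~0.3 rule of thumb often quoted for promising fragments.
le = ligand_efficiency(1e-3, 12)
# A hypothetical 1 uM hit with cLogP 2.0: LLE = 6.0 - 2.0 = 4.0
lle = lipophilic_ligand_efficiency(1e-6, 2.0)
```

Note how LE lets a weak millimolar fragment compare favorably with a much more potent but far larger molecule, which is precisely why these normalized metrics are used for ranking fragment hits.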
5.2. Hit generation
Fragments that progress to the hit‐generation stage typically do so with structural insight, either a structure of the fragment bound to its protein target or a binding epitope mapped onto the fragment. A structure‐blind path through hit generation is possible, but it is essentially a diversion back into traditional target‐based discovery and may lead to an SAR bottleneck. The hit‐generation stage refers to the acquisition and screening of larger, nonfragment ligands, obtained by catalog or prepared synthetically. This workflow involves either chemical elaboration of individual fragments or the linking of at least two fragments, which then requires some optimization of the linker between them.
The hit‐generation phase is a practical place to use virtual screening to assist fragment development. Method comparisons [72, 73] suggest that currently available force fields and docking procedures developed for lead‐like molecules provide adequate results for fragments (i.e., better than random screening). However, the importance of choosing a scoring function that can reliably discern weak interactions cannot be overemphasized. Scientists have recognized this distinction within FBDD and have sought to improve scoring functions for fragments. One notable viewpoint is that fragment elaboration or linking may be facilitated when binding poses are expressed in terms of Gibbs energy.
Considering that at least two FDA drug approvals have been attributed to FBDD, and that the platform is relatively easy to integrate into existing technologies, many companies and academic groups have started their own fragment‐based discovery programs. The biophysical techniques each group uses will be dictated by the target, the available facilities, and the preferences of the investigators. Generally speaking, as long as the protein target has been successfully used with a technique in lead‐like screening and structural information is available, there are virtually no other major obstacles to generating new chemical matter for a given target‐based screening program. What remain are practical challenges, two of which bear repeating for those in the beginning stages of FBDD.
First, the key practical difference between lead‐like screening and fragment screening is the use of high fragment concentrations. The increased concentration affects both the compound library that can be used and the clarity with which hits are delineated. Suggestions have been made as to the optimal concentrations for a given technique, but these are only suggestions and will likely change with the system and techniques employed. With some practice, these procedures can be suitably optimized and will need less attention going forward.
Second is the challenge of directing fragment build‐out and/or fragment‐linking chemistry, which can be resource intensive. Medicinal chemists are therefore aided by adopting a preferred ligand efficiency metric early in the FBDD process to assist in ranking fragment hits. Other empirical and nonempirical factors will certainly influence the progression of fragments, but metrics help organize the structure‐activity relationship, which is a key driver of the expansion or linking of fragments during hit generation.
The authors would like to acknowledge the funding support from the National Institutes of Health grant R01AI110578, and the American Lebanese Syrian Associated Charities (ALSAC), St Jude Children's Research Hospital. We thank Cherise Guess, PhD, ELS, of the SJCRH Department of Scientific Editing for her assistance with editing.