In this article we delve into the fundamental scientific challenges of in vitro high-throughput screening (HTS) assay development. We often hear why the drug discovery process is so complicated: biology is complex, still poorly understood and often unpredictable. Indeed, taking a drug all the way from initial discovery to market is very challenging. We chose to emphasize one very important aspect that is often entirely overlooked: pre-clinical screening of potential drug candidates. How do we find, among millions of compounds, the promising structure that will take us to the next step of drug discovery? It is like finding a tiny needle in a giant haystack. More importantly, how do we set up a well-performing HTS campaign and control its screening quality? There are some inherent challenges here, and we believe this topic is worth discussing.
HTS deals with enormous compound libraries, so strategies that streamline hit identification are crucial. In general, an HTS campaign can strategically combine two compound library populations, corresponding to two screening modes: diverse and focused screening.
The first mode, diverse screening, comprises the analysis of a compound library without any structural bias; its goal is to identify structurally diverse active molecules as efficiently as possible.
The second mode, focused screening, involves testing a computationally selected compound collection. This approach starts with an in silico selection of potentially active compounds from a virtual library based on, e.g., the 3-D structure of the active site of the desired target or known pharmacophoric elements of active ligands. Such in silico pre-selection can improve hit rates by as much as 10-100-fold compared with random screening. The bias inherent in this method, however, means that novel types of active molecules may be missed; for this reason it is advantageous to run the diverse and focused screens in parallel. Presently, ultra-HTS (uHTS, >100,000 samples per day) is usually carried out on fully automated systems in which robots handle the samples all the way from the compound library to the final readout station. Robotization not only limits the scientists' manual work to refilling reagent solutions, removing waste material and interpreting the data obtained, but also undeniably increases the precision of the assay.
In general, greater efficiency is achieved in HTS by pooling a number of compounds, which reduces the number of data points, and then testing these pools to see whether the mixture is active; only the individual compounds from the active pools are then retested. The downside of this approach is that the activity of hits might be obscured by other compounds within the same pool, for example if those compounds interfere with the assay or the detection method.
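The pool-then-deconvolute workflow can be sketched in a few lines of Python. This is an illustrative toy model, not a real assay: the compound names, activity values and the assumption that a pool's signal equals its strongest member are all stand-ins (in reality, pool members can mask or interfere with each other, as noted above).

```python
# Sketch of pooled screening with deconvolution (illustrative only).

def screen_pools(activities, pool_size=10, threshold=0.5):
    """Test pools first, then retest individual compounds only
    from pools whose mixed signal exceeds the hit threshold."""
    compounds = list(activities)
    hits = []
    n_tests = 0
    for i in range(0, len(compounds), pool_size):
        pool = compounds[i:i + pool_size]
        n_tests += 1  # one measurement per pool
        # Simplifying assumption: pool signal = strongest member activity.
        pool_signal = max(activities[c] for c in pool)
        if pool_signal > threshold:
            for c in pool:  # deconvolution: retest singles from active pools
                n_tests += 1
                if activities[c] > threshold:
                    hits.append(c)
    return hits, n_tests

# Toy library: 100 compounds, 2 actives.
library = {f"CMPD-{i:03d}": 0.1 for i in range(100)}
library["CMPD-007"] = 0.9
library["CMPD-042"] = 0.8
hits, n_tests = screen_pools(library)
print(hits, n_tests)  # 30 measurements instead of 100 individual tests
```

With pools of ten, two active pools cost 10 pool tests plus 20 single retests, far fewer than testing all 100 compounds individually.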
Figure 1. From the HTS process to drug development. Typically, the ~1% of compounds found most active in a primary single-point screen, called 'hits', are submitted to a secondary screen to confirm their activity. The compounds should additionally undergo a counter-screen to detect any potential to interfere with the assay used in the primary/secondary screen. Confirmed hits whose biological activity is established through a structure-activity relationship (SAR) series and medicinal chemistry are termed 'leads' and can be developed into drug candidates for clinical testing.
When we look closer at HTS, we see that more than 50% of all assays are cell-based. This type of assay is generally more problematic than a biochemical one, and the readout depends on a phenotypic response of the cells. For this reason, much attention should be paid during assay design to selecting the appropriate cell type, the detection method and the degree of assay miniaturization. The quality of the experiments depends on these factors, so let's quickly go through them. First, the success of the assay is determined by selecting the right type of cells, either with a sufficiently high expression level of the target protein or with a propensity for transfection with the target of interest; either way, the cells must generate sufficient signal output for detection. After choosing the best cell type for testing, how the cells are cultured is of paramount importance: such basic issues as confluence in the culture flask or the age of the cell culture can drastically modulate the cellular response.
Detection methods also influence the results of an assay; that is why filter sets, instrument gain and microplate types are all important variables when running a successful cell-based assay. A good way to improve assay throughput and reduce variability between wells on a plate is seeding cells into 96- or 384-well plates. This format is not perfect, though, since the low cell density in each well can reduce signal intensity. Additionally, cells seeded in small volumes are more sensitive to environmental changes such as temperature, humidity or CO2 concentration, which in turn affects reproducibility between assay runs.
Here the edge effect comes to light, a common factor that can deteriorate assay performance to an unacceptable level. It has been demonstrated experimentally by measuring the temperature distribution across a plate: the greatest thermal gradients occur at the plate edges, resulting in an irregular distribution of cells on the bottom of the wells and modulating the cells' response to the tested compounds. Fortunately, there are two approaches to avoiding edge effects. First, one can simply exclude the peripheral wells of the plate, but this considerably decreases throughput and increases costs. The other solution is to pre-incubate freshly seeded cells at room temperature before placing them in the incubator; this way we can use the whole plate and expedite screening.
Laboratories move with the times and implement automation, even for delicate cell-based assays. Leading companies, like Selvita, use high-quality dispensers to treat cells with tested compounds, wash them or seed them into wells. Nevertheless, we have to remember that not every cell type will tolerate automation!
Many diseases are caused by aberrant protein activity, which is why enzyme assays represent a major focus of drug discovery programs. There are a number of basic but far from easy decisions to be made during enzyme assay development: choosing the proper enzyme source and form, identifying substrates, selecting a detection method and assay format, measuring kinetic parameters and, finally, optimizing the reaction conditions. After successful assay development we face another challenge, the validation phase, which allows us to decide whether an assay is eligible for HTS.
Recently, the search for small-molecule inhibitors has expanded beyond kinases and proteases to so many new enzyme targets that it is almost impossible to mention them all. The crucial question is: where should we get all these enzymes from? Since HTS requires high enzyme purity, so that we can be sure that what we measure is the activity of our target enzyme, the best choice is a recombinant expression system such as bacteria or insect cells. Nevertheless, it is still possible to develop robust assays using an unpurified, native source of enzyme; this approach, however, requires a selective substrate that lets us measure only the activity of the target enzyme. Since enzymes are complex molecules that often consist of multiple domains, expressing a full-length enzyme may be difficult. Fortunately, in some cases it is possible to use a truncated form of the target and still have a reasonable model for activity testing.
How can we measure enzyme activity? First, we should choose a suitable substrate and detection method. Usually a synthetic substrate, a surrogate of the natural one, is designed that is converted by the enzyme into a detectable product. Fluorescence- and luminescence-based readouts account for more than 70% of the detection modes currently used in HTS, followed by radiometric readouts (13%) and absorbance (8%). Enzymatic assays can be divided, according to whether they follow substrate consumption or product formation over time, into continuous and discontinuous ones. In continuous assays we can measure the progress of the catalytic reaction uninterruptedly, allowing detection in real time. In contrast, discontinuous assays require that the reaction be sampled at certain time points (end-points), stopped and further processed to obtain a detectable signal. But what can we do if no substrate is available that would allow us to measure our target enzyme's activity directly? In this case we can design so-called indirect assays, for example by coupling the primary reaction of interest to other enzymatic reactions: one of the products of the studied reaction serves as the substrate for another enzymatic reaction that is more convenient to detect.
Once we have decided on the enzyme form, substrate type and detection mode, it is time to optimize the reaction conditions. A driving principle for assay optimization is the reconstitution of the native conditions in which the target performs its physiopathological role. Furthermore, the optimization process should aim to set up experimental conditions that support the maximal catalytic efficiency of the target enzyme; optimized assays display higher robustness and are therefore expected to produce more reliable and reproducible results during HTS. Enzyme activity depends on many factors: the buffer system and its pH, the temperature, the composition of the reaction mixture, and the concentrations of substrate and enzyme. The most important factors to select properly are pH and temperature; many enzymes, especially those from mammalian sources, have pH and temperature optima near physiological values (pH 7.0-8.0 and T = 37°C). To achieve a reliable measurement of enzyme activity and inhibition, it is crucial to configure the assay under steady-state conditions, where the catalytic rate is constant over time. These conditions are achieved when the substrate concentration is at or below the Km value and the enzyme concentration is about three orders of magnitude lower than that of the substrate.
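The steady-state guideline above can be made concrete with the classical Michaelis-Menten rate law. A minimal sketch in Python, with purely illustrative Km, Vmax and concentration values (no real enzyme is modeled here):

```python
# Sketch: Michaelis-Menten initial rate and the steady-state guideline
# from the text ([S] <= Km, [E] roughly 1000-fold below [S]).

def mm_rate(s, vmax, km):
    """Initial reaction rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def steady_state_ok(s, km, enzyme, ratio=1e-3):
    """Check the rule-of-thumb assay conditions described in the text."""
    return s <= km and enzyme <= s * ratio

km, vmax = 10.0, 100.0        # µM and arbitrary rate units, illustrative
s, e = 10.0, 0.01             # substrate at Km, enzyme 1000-fold lower
print(mm_rate(s, vmax, km))   # 50.0 -> half of Vmax when [S] = Km
print(steady_state_ok(s, km, e))
```

Running at [S] = Km gives exactly half of Vmax, which is why this operating point balances sensitivity to competitive and non-competitive inhibitors.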
Finally comes the question: how can we tell whether our HTS is of good quality?
Ideally, the assay should demonstrate a reproducible, dose-dependent response to a small panel of reference inhibitors active against the target. However, biological systems are complex and influenced by a myriad of factors, so well-performing assays are typically dynamic within a certain stable range. Quality-control (QC) charts are a very convenient tool to visualize this 'dynamic stability' (Fig. 2). The potency calculated in each assay run is added to the QC chart's data set; the data are recalculated after each run and provide diagnostics to judge the reproducibility of potency. The QC plot shows 95% limits (QC warning) and 99% limits (QC fail), which allows every assay run to be viewed in the context of all experiments performed. Secondly, it is reasonable to include two control compounds of different potencies in the assay. The purpose of the primary control is to ensure that its IC50 is stable, meaning that there is no 'assay drift'. The purpose of the secondary control, which should be >100-fold less potent than the primary reference, is to examine the stability of results over a concentration range.
Figure 2. Reproducibility of a reference compound across assay runs. Bold blue lines illustrate the upper and lower controls (99% limits), blue dotted lines show the warning limits (95% limits), and red dots show individual IC50 values for the reference compound. The data are scattered around the geometric mean of the reference compound.
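QC-chart limits like those in Figure 2 can be computed directly from the run-to-run IC50 values. A sketch, assuming normality of log10(IC50) so that 1.96·SD and 2.58·SD correspond to the 95% (warning) and 99% (fail) limits; the IC50 values are illustrative:

```python
# Sketch: QC-chart limits for a reference compound's IC50 across runs.
# Computed on log10(IC50), hence centered on the geometric mean.
import math
import statistics

def qc_limits(ic50s):
    logs = [math.log10(x) for x in ic50s]
    mean, sd = statistics.fmean(logs), statistics.stdev(logs)
    back = lambda z: 10 ** (mean + z * sd)  # back-transform to IC50 scale
    return {
        "geometric_mean": 10 ** mean,
        "warning": (back(-1.96), back(1.96)),  # 95% limits (QC warning)
        "fail": (back(-2.58), back(2.58)),     # 99% limits (QC fail)
    }

runs = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]  # IC50s in µM, illustrative
limits = qc_limits(runs)
print(limits["geometric_mean"])
```

Each new run's IC50 is then simply compared against the warning and fail bands, exactly as on the chart.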
A handy statistical parameter that can be used to characterize the reproducibility of control-compound potency is the Minimum Significant Ratio (MSR):

MSR = 10^(2·√2·s)

where s is the standard deviation of log10(IC50) across at least 6 runs. It is assumed that 3 < MSR < 5 describes moderate assay variation, while an MSR as low as 2-3 indicates a stable assay.
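The MSR is a one-liner once the replicate IC50 values are in hand; the run values below are illustrative only:

```python
# Sketch: Minimum Significant Ratio, MSR = 10^(2*sqrt(2)*s),
# where s is the SD of log10(IC50) across replicate runs.
import math
import statistics

def msr(ic50s):
    s = statistics.stdev(math.log10(x) for x in ic50s)
    return 10 ** (2 * math.sqrt(2) * s)

runs = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]  # six IC50 runs, illustrative
print(round(msr(runs), 2))  # well below 2 -> stable assay
```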
Moving further with statistical parameters, it is important to calculate the coefficient of variation (CV), which quite simply gives a measure of the relative dispersion of the data:

CV = (SD / µ) × 100%

where SD is the standard deviation and µ is the mean of the data set. The plate uniformity criteria for HTS enzyme assays require CV values below 10%. Due to the inherently variable nature of cell-based assays, the quality criteria there are more liberal and allow CV values of up to 25% for established controls.
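Checking a plate's control wells against these CV criteria is straightforward; the well signals below are illustrative:

```python
# Sketch: coefficient of variation, CV = SD / mean * 100%.
import statistics

def cv_percent(values):
    return statistics.stdev(values) / statistics.fmean(values) * 100

wells = [1020, 980, 1005, 995, 1010, 990]  # raw control signals, illustrative
print(round(cv_percent(wells), 1))  # comfortably under the 10% enzyme limit
```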
In a typical HTS, unknown compounds are tested together with reference compounds. The negative controls (referred to as the background) give the minimum signal and the positive controls give the maximum signal. To assess reproducibility and signal variation, several assay runs with positive and negative controls should be performed. This range between the 'two extremes of activity' can be used to calculate the signal-to-background and signal-to-noise parameters, e.g. S/B = µ_max / µ_min and S/N = (µ_max − µ_min) / SD_min. A good assay has a widely separated sample signal and background.
Usually, when analyzing assay quality, the statistical parameters intertwine with each other, and the Z' factor is a good example. It is dictated by the relationship between the means and variances of the positive and negative controls, and hence by the signal-to-background ratio:

Z' = 1 − 3 × (SD_pos + SD_neg) / |µ_pos − µ_neg|
The Z' factor tells us whether we have an 'assay window', the space between the positive and negative controls in which compounds should exhibit their activity. A Z' factor equal to 1 represents an ideal assay. If the Z' factor lies between 0.5 and 1, the separation band is large and the assay quality is excellent. A Z' factor lower than 0.5 means that the separation of the signal distributions is only moderate and the results are questionable. A Z' factor equal to 0 indicates that the assay window is very narrow and there is no signal separation. Screening is impossible when the Z' factor is lower than 0, because the signal variations of control and sample overlap and no screening window is observed.
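The Z' factor takes only a few lines to compute from control wells; the signals below are illustrative:

```python
# Sketch: Z' factor, Z' = 1 - 3*(SD_pos + SD_neg) / |mean_pos - mean_neg|.
import statistics

def z_prime(pos, neg):
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    mu_p, mu_n = statistics.fmean(pos), statistics.fmean(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

positive = [980, 1010, 1000, 995, 1015, 1000]  # maximum-signal controls
negative = [105, 95, 100, 110, 90, 100]        # background controls
zp = z_prime(positive, negative)
print(round(zp, 2))  # between 0.5 and 1 -> excellent assay window
```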
HTS handles many compounds, and some basic errors are bound to happen; usually an experienced scientist can spot that some point in the analysis is clearly false. Such a point is called an outlier: a data point that deviates markedly from the other observations. To keep the data tight, it is beneficial to always check the gathered values for outliers statistically. Grubbs' test is one of the most useful and simple methods that can be applied in quality control. Its basic assumption is that all values were sampled from a Gaussian population; it can then be shown whether the deviation of the outlier from the other values is statistically significant, in which case the outlier most likely comes from a different population. A simple formula tells how far a suspect value lies from the others:

Z = |value − mean| / SD
The Z value for each sample is then compared with a critical value, Z_crit, which is calculated from the sample size N (for significance level α, Z_crit = ((N − 1)/√N) · √(t² / (N − 2 + t²)), where t is the critical value of Student's t-distribution with N − 2 degrees of freedom at α/(2N)). When the Z value calculated for a data point exceeds the critical value, that point is declared an outlier.
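A minimal sketch of Grubbs' test for a single suspect value. Rather than computing the t-distribution, it looks up critical values from the standard Grubbs tables (α = 0.05, two-sided); only a few sample sizes are listed, and the IC50 values are illustrative:

```python
# Sketch: Grubbs' outlier test, Z = |x - mean| / SD vs. a tabulated
# critical value that depends on sample size (alpha = 0.05, two-sided).
import statistics

G_CRIT = {3: 1.155, 4: 1.481, 5: 1.715, 6: 1.887, 7: 2.020}

def grubbs_outlier(values):
    """Return the most deviant value if it exceeds the critical Z,
    otherwise None."""
    mean, sd = statistics.fmean(values), statistics.stdev(values)
    suspect = max(values, key=lambda x: abs(x - mean))
    z = abs(suspect - mean) / sd
    return suspect if z > G_CRIT[len(values)] else None

ic50s = [0.10, 0.11, 0.09, 0.10, 0.12, 0.45]  # the last run looks off
print(grubbs_outlier(ic50s))  # 0.45 is flagged as an outlier
```

Note that Grubbs' test should be applied once per data set (or iteratively with care), since removing one point changes the mean and SD for the rest.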
Sometimes validated assays need to be modified to enhance their quality, to increase throughput, or because of a change of laboratory location. A single-step change in the assay is the most common modification of an HTS format and comprises, e.g., a change of reagent grade or supplier, or replacement of an instrument with one of similar mechanical properties; for re-validation, two experiments, under the old and the changed conditions, have to be performed and the results compared. Second, there are changes that may influence EC50/IC50 results, e.g. substitution of the protein batch or supplier, or modification of the dilution protocol; re-validation then includes dose-response analysis of a set of reference compounds of known potency and, again, comparison of the obtained EC50/IC50 values. The third level of changes requires full validation: this is the case when the introduced modifications may substantially influence the potency of the reference compounds, the assay window or its quality.
HTS is difficult for many reasons: it requires commitment, taking care of assay quality with continuous attention to detail, understanding the biological system, and knowing how to handle the data. However, we must remember what the ultimate goal is: to participate in the drug discovery process and help the ultimate end user, the patient. So let's always keep the bigger picture in mind.
Klaudia Jastrzębska, Scientist I
Anna Mróz, Ph.D., Scientist II
Magdalena Stańczyk, Research Assistant
Mateusz Biernacki, Ph.D., Scientist II
To contact the authors please email firstname.lastname@example.org