Towards Metabolic Biomarkers for the Diagnosis and Prognosis of CKD

Chronic kidney disease, the gradual loss of renal function, is an increasingly recognized burden for patients and health care systems; globally, it has a high and rapidly growing prevalence, a significant mortality, and causes disproportionately high costs, particularly for hemodialysis and kidney transplantations. Yet, the available diagnostic tools are either impractical in clinical routine or have serious shortcomings preventing a well-informed disease management, although optimized treatment strategies with impressive benefits for patients have been established. Advances in bioanalytics have facilitated the identification of many genomic, proteomic, and metabolic biomarker candidates, some of which have been validated in independent cohorts. Summarizing the markers discovered so far, this chapter focuses on compounds or pathways for which quantitative data, substantiating evidence from translational research, and a mechanistic understanding are available. Also, multiparametric marker panels have been suggested with promising diagnostic and prognostic performance in initial analyses, although the data basis from prospective trials is very limited. Large-scale studies, however, are underway and will validate certain sets of parameters and discard others. Finally, the path from clinical research to routine application is discussed, focusing on potential obstacles such as the use of mass spectrometry, and the feasibility of obtaining regulatory approval for metabolomics assays.


Introduction: innovation in laboratory diagnostics
Most technological innovations go through typical cycles of acceptance and spread, the so-called innovation curves, quite generically analyzed by Rogers [1]. The same is true for new bioanalytical techniques or methods, which typically trigger a phase of early adoption and rather untargeted exploration of their use in many different areas (basic and applied research, drug development, clinical diagnostics, etc.). Some of these new technologies were able to make significant inroads in routine diagnostics within years of their invention, for example, immunoassays [2] based on monoclonal antibodies [3], Southern and Western blotting [4,5], the polymerase chain reaction [6], or nucleic acid sequencing by chain termination [7], while others have been well-developed and contributed to major scientific successes for decades without being adopted in clinical routine, for example, Raman, infrared, or nuclear magnetic resonance (NMR) spectroscopy (nota bene, the latter succeeded as a disruptive new imaging modality in radiology instead of clinical chemistry).
The discovery, development, and validation of new diagnostic markers and of assays for their standardized detection are a very costly endeavor that can only be successful after diligent analysis of all relevant boundary conditions (medical, ethical, technological, commercial, etc.). The first and foremost question in this analysis is, of course, if there is an unmet medical need for more and/or different information about a patient's status, for example, describing the actual pathophysiological alterations with greater diagnostic accuracy, predicting the course of the disease or the response to certain treatments earlier or with improved predictive values, or keeping track of the beneficial or harmful effects of any therapeutic interventions (clinical, pharmacodynamic, or pharmacokinetic monitoring [8]).
In this context, it should go without saying that additional diagnostic data points are only clinically valuable-and justify product development by companies and reimbursement by public health care systems-if there is a rather immediate therapeutic consequence, that is, if treatment or disease management are guided in a way that would not be possible without this information. Producing diagnostic data without a clinical consequence must be seen as a dubious way of stretching the already tight budgets of health care systems and insurances, and may even be problematic in terms of medical ethics, for example, in newborn screening for metabolic disorders, where signatures indicative of many conditions could be simultaneously detected by mass spectrometric assays but only those findings that trigger an immediate therapeutic or dietary intervention are actually communicated to the parents and the attending pediatrician.
Eventually, and that has been a major obstacle for many recent developments, the measurement of new diagnostic parameters must also be feasible in a routine environment, that is, in a robust, standardized, and quality-controlled fashion generating sufficiently precise and accurate data that warrant the expected sensitivity and specificity (or rather, in praxi, positive and negative predictive values) [9,10]. Of course, all of the above has to be achieved keeping the commercial viability in mind, that is, balancing the disease-related socio-economic impact and the savings made possible by refined disease management against the actual costs of the new diagnostic tools.

The rationale for new diagnostic markers

Unmet medical need and socio-economic impact
For chronic kidney disease (CKD), a thorough assessment of the aforementioned aspects is rather straightforward, although many epidemiological and economic figures have an unexpectedly large uncertainty in the relevant literature. Chronic kidney disease (listed in chapter N18 of the International Classification of Diseases, ICD-10) has a very high, and almost certainly underestimated, prevalence in the general population; most recently published figures range from 5 to 7% in global evaluations [11-13] to roughly 8-10% for the adult population in Western countries [14,15]. More specialized studies also claim markedly higher numbers, for example, 16.8% in the National Health and Nutrition Examination Survey (NHANES) of the adult American population [16,17]. Most of these differences are clearly caused by the discrepant diagnostic criteria and definitions on which the statistics in the reports are based (e.g. proteinuria AND/OR pathologically low estimated glomerular filtration rate (eGFR) vs. impaired eGFR alone). Whatever the actual prevalence of a defined diagnostic finding in a certain population may be, there is a pronounced age-dependency (8.5% in 20-39-year-olds, 12.6% in 40-59-year-olds, and 39.4% in those over 60 years of age [16,17]) and a moderate but still highly significant ethnic difference (19.9% in black vs. 16.1% in white Americans of non-Hispanic origin; p < 0.0001). Obviously, though, the strongest associations exist with the most relevant etiologies: diabetes and hypertension, which, together, cause approximately 75% of all cases of CKD (40.2% in diabetics vs. 15.4% in euglycemic individuals; 24.6% in hypertensive patients vs. 12.5% in normotensive individuals; both p < 0.0001) (Figure 1).
In all of these analyses, the most worrying observation is that the demographic changes and the presently much-discussed pandemic of obesity, type II diabetes mellitus (T2D) and other Western lifestyle-associated diseases will further increase these figures (particularly in developing countries). In fact, epidemiological surveys already identify these changes in the weight of the different etiologies: the percentage of type II diabetics among patients initiating renal replacement therapy has more than doubled in the last two decades [18]. Many experts stress the obvious pathophysiological relevance of a high-salt diet for CKD via hypertension; however, high sodium intake also seems to be an independent risk factor both for poor therapeutic efficacy of anti-hypertensive treatment with angiotensin converting enzyme (ACE) inhibitors and for rapid progression to end-stage renal disease (ESRD) [19].
Chronic kidney disease is responsible for an alarming number of deaths, but these numbers may still be underestimated because the mortality is more often due to comorbidities and sequelae, particularly cardiovascular disease (CVD) and its clinical endpoints myocardial infarction (MI) and stroke, or to complications of renal replacement therapies, than to kidney failure itself. Throughout the mass of publications on cardiovascular complications in CKD and the causal relationships (lately summarized by Alani et al. [20]), the broad consensus is that CKD patients are much more susceptible to CVD, particularly to coronary artery disease (CAD); in fact, after age-adjustment, CKD patients have a 15- to 30-fold higher risk of dying from CVD than the general population [21,22], and in ESRD patients, all-cause mortality is 10- to 100-fold elevated compared to individuals with normal renal function [23].
It may be very difficult if not impossible to completely unravel the chicken and egg problem of whether kidney damage is causing CVD or vice versa, even if the study design is diligently targeting this question and sophisticated statistical tools are applied to correct for all potential confounders. The most plausible interpretation is that both conditions have common pathomechanisms, for example, inflammation, oxidative stress, and endothelial dysfunction, and so they frequently co-develop. At any rate, the huge socio-economic burden caused by the network of obesity, hypertension, diabetes, CKD, and CVD can hardly be overrated. In terms of global mortality, these non-communicable conditions have long exceeded the most problematic infections, and the pivotal role of CKD in this closely interwoven network has recently been dissected in a truly compelling paper [23].
As noted above, the spending for renal replacement therapy (RRT, be it hemodialysis or transplantation) is disproportionately high; in industrialized countries, as much as 2-3% of the entire health care budget can go into treatment of ESRD patients [24]. A more specific calculation published by the United States Medicare system demonstrates that 18.2% of the total budget is necessary for the 9.2% of recipients who are CKD patients (any stage included in these statistics), and this discrepancy is worsening at an alarming rate: in the last decade, CKD-related costs have surged almost four times faster than the overall Medicare expenditures (380 vs. 100% increase; United States Renal Data System [25]). Moreover, there is no doubt that this situation will further worsen because ESRD is still a globally undertreated condition. In the near future, many more than the approximately 2-3 million ESRD patients who are currently receiving RRT [26] will have access to appropriate treatment, particularly in developing countries, where 1 million people are estimated to die from untreated ESRD each year [27].

Figure 1. Prevalence of CKD in the NHANES cohort [16,17]. (A) Age-related increase: 8.5% in the third and fourth decade, 12.6% in the fifth and sixth, and 39.4% in individuals more than 60 years old. (B) Ethnic background: 16.1% in white Americans (non-Hispanic), 18.7% in Mexican-Americans (p < 0.001), and 19.9% in black Americans (p < 0.0001). (C) Etiology, role of glycemic control: 40.2% in diabetic patients vs. 15.4% in non-diabetics; p < 0.0001. (D) Etiology, role of blood pressure: 24.6% in hypertensive patients vs. 12.5% in normotensives; p < 0.0001. *** p < 0.001; **** p < 0.0001. Modified after [82].

Shortcomings of available diagnostic tools
The second major motivation for exploring new diagnostic biomarkers for CKD is the extremely poor performance of the currently available parameters. There is a considerable number of publications and probably just as much clinical hearsay around this issue, so that even a somewhat comprehensive treatise cannot fit here. Still, a couple of concise problems related to the estimated glomerular filtration rate (eGFR), the most frequently used basis for diagnosing, staging, and monitoring CKD, shall be discussed. First and most obvious, the eGFR is only a calculated estimate of a very informative but, in praxi, difficult-to-obtain gold standard parameter describing kidney function: the experimentally measured glomerular filtration rate (mGFR). Despite huge efforts to optimize approximations such as the Cockcroft-Gault (CG), the Modification of Diet in Renal Disease (MDRD), and, more recently, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation for adults, or the Counahan-Barratt equation for pediatric patients [28-31], these estimates have a common weakness: they are all based on creatinine levels, which are themselves influenced by anthropometric variables like muscle mass, and whose increase under pathophysiological conditions may be blunted by higher rates of creatinine secretion in the proximal tubule. In any case, they all have important flaws that are of the utmost clinical relevance [32-34].
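All of these estimates are closed-form functions of serum creatinine plus a few demographic variables. As an illustration of this creatinine-dependence, here is a minimal Python sketch of the 2009 creatinine-only CKD-EPI equation (coefficients as published; purely illustrative, not a validated clinical implementation):

```python
def egfr_ckd_epi(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """Estimated GFR (ml/min/1.73 m^2) via the 2009 CKD-EPI creatinine equation.

    scr_mg_dl: serum creatinine in mg/dl. Illustrative only, not for clinical use.
    """
    kappa = 0.7 if female else 0.9        # sex-specific creatinine 'knot'
    alpha = -0.329 if female else -0.411  # exponent below the knot
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A 60-year-old man with a serum creatinine of 1.2 mg/dl
print(round(egfr_ckd_epi(1.2, 60, female=False), 1))
```

The piecewise exponent (alpha below, -1.209 above the sex-specific knot kappa) is precisely the attempt to behave better near normal function than MDRD; yet, as discussed in the text, even CKD-EPI retains poor predictive values in that range.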
More specifically, while all of the aforementioned formulas seem to work reasonably well in large cohorts, that is, a statistical assessment will yield acceptable correlation coefficients, the estimates for an individual at a given time can differ from the actual mGFR to a drastic degree.
Extremely problematic examples have been reported where patients with an mGFR of zero had eGFR values of 40-50 ml/min/1.73 m² (i.e. stage III CKD according to the recommendations of the Kidney Disease Outcomes Quality Initiative, KDOQI), thus severely underdiagnosing a life-threatening condition [35]. On the other hand, it is generally acknowledged that the approximations perform particularly poorly and tend to underestimate the actual kidney function at higher, near-healthy levels of glomerular filtration rate (GFR) (>60 ml/min/1.73 m²) [36]. This shortcoming was implicitly accepted when developing the CG and MDRD equations; the CKD-EPI development, in contrast, specifically included healthy cohorts, but the formula still does not perform well enough in some of the most common and clinically relevant situations for assessing renal function, for example, for patients with kidney-related risk factors who are to be given contrast agents in radiology, or for potential donors for organ transplantation inter vivos. Both indications specifically rely on accurate estimates in the near-normal range. The catastrophic failure of all current equations to meet this expectation was recently highlighted in a compelling analysis [37]: in a study comprising almost 300 potential living kidney donors, eGFR values below 80 ml/min/1.73 m² (the typical cut-off for acceptance of living donors) had positive predictive values (PPV) for an mGFR below this threshold of only 0-40%, with the vast majority of situations (formula, age, sex, and BMI) yielding less than 20% (Figure 2). In other words, many (actually most!) potential donors would have been declined on the basis of a falsely low eGFR, which cannot be justified considering the dramatic shortage of donor organs in general [38] and the much better clinical outcome of donations inter vivos compared to post mortem [39,40].
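Such low PPVs in the near-normal range are exactly what Bayes' theorem predicts when the condition being screened for (a truly reduced mGFR) is uncommon among the candidates. A hypothetical numerical sketch (the sensitivity, specificity, and prevalence values below are invented for illustration and are not taken from [37]):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value from test characteristics and pre-test probability."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical screening of donor candidates: 10% truly below the mGFR cut-off,
# eGFR flags low function with 90% sensitivity but only 60% specificity there.
print(round(ppv(0.90, 0.60, 0.10), 2))  # → 0.2
```

Even a test that misses only 10% of the truly affected individuals yields four false alarms for every true positive under these assumptions, which is the regime Figure 2 documents for the eGFR equations.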

Therapeutic consequences?!
As noted in the introduction, decision-makers in diagnostics companies, in insurances, and in clinical practice will scrutinize new biomarkers or assays regarding their actual diagnostic performance but also regarding the clinical utility of the additional information they provide, that is, their therapeutic consequences. Of course, such deliberations can always be countered by the thought-terminating cliché that better diagnostic tools will eventually facilitate better standards of care for patients or, in this specific case, that earlier diagnosis of impaired renal function and more accurate staging/monitoring of CKD will allow for more informed clinical decisions, thus more effective disease management, and a slower progression of patients to ESRD. Yet, the evidence for such generic and ambitious claims was quite sparse until, around the end of the last century, rather aggressive treatment regimens were tested in long-term, controlled studies.

Figure 2. Positive predictive values (PPVs) for a measured GFR below 80 ml/min/1.73 m² in individuals with an eGFR below 80 ml/min/1.73 m², modified after [37,82]. Subsets are analyzed according to sex (male vs. female), age (<50 vs. >50), and BMI (<25 vs. >25). Estimates of GFR are calculated using any of the three most common equations: Cockcroft-Gault per body surface area (CG/BSA), Modification of Diet in Renal Disease (MDRD), or Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI). All PPVs are below 40%, in most cases even below 20%.
These recent developments, however, define a surprisingly clear rationale for diagnostic innovation in this field: first, precise and accurate evaluation of renal function in (practically) healthy individuals gains in clinical relevance, that is, with glomerular filtration rates in the (almost) normal range, for which the commonly applied equations have exceptionally poor positive predictive values (for a detailed example, see Section 2.2, Figure 2). Second, the therapeutic approaches available for CKD patients have been revolutionized in the last 20 years, and the significant benefits of personalized, multi-modal, and titrated regimens have been demonstrated beyond any doubt. The Renoprotection of Optimal Antiproteinuric Doses (ROAD) study clearly showed that maximizing the antiproteinuric effect by up-titration of ACE inhibitors (e.g. benazepril) and angiotensin II receptor subtype 1 (AT1) antagonists (in this case, losartan) to individually tolerated limits in euglycemic CKD patients did not further lower the blood pressure in comparison to standard treatment but was far superior in reducing albuminuria and in delaying the decrease of eGFR and creatinine clearance in a 36-month follow-up [41] (Figure 3). Maybe even more impressively, the so-called 'Remission Clinic' program initiated at the Istituto Mario Negri in Milan, a staggered intervention strategy consisting of a low-sodium and low-protein diet combined with an AT1 antagonist, an ACE inhibitor, a calcium channel blocker, and a statin, each in titrated doses, could drastically reduce the incidence of ESRD in a 7-year observation period. Concisely, in two paired cohorts consisting of 56 individuals each, 17 patients (30.4%) who received the standard treatment developed ESRD but only 2 (3.6%) who were treated according to the Remission Clinic protocol did so, which translated into an odds ratio for progressing to ESRD of only 0.092 under the more aggressive treatment scheme [42,43].
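The reported odds ratio can be sanity-checked with elementary arithmetic. The crude, unadjusted 2x2 calculation sketched below yields approximately 0.085, in the vicinity of the published 0.092 (which presumably reflects adjustments made in the original analysis), so treat this as a plausibility check rather than a reproduction of the study statistics:

```python
def odds_ratio(events_a: int, total_a: int, events_b: int, total_b: int) -> float:
    """Crude odds ratio for reaching the endpoint in group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# Remission Clinic protocol: 2 of 56 progressed to ESRD; standard care: 17 of 56
print(round(odds_ratio(2, 56, 17, 56), 3))  # → 0.085
```

The odds in each group are events over non-events (2/54 vs. 17/39), not simple proportions, which is why the odds ratio (about 0.085) is even smaller than the ratio of the raw progression rates (3.6% vs. 30.4%).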

Biomarker discovery for CKD
The motives examined in Section 2 have spurred significant efforts in all modern disciplines of bioanalytics-genomics, transcriptomics, proteomics, and metabolomics-aiming at the identification and validation of new biomarkers or biomarker panels addressing the unmet diagnostic needs and overcoming the flaws of the currently available solutions.
In genomics, genome-wide association studies (GWAS) have identified a large number of single nucleotide polymorphisms (SNP) significantly associated with the risk of developing CKD, incident diabetic nephropathy, renal function, and metabolic traits associated with CKD [44-52], and these findings already shed new light on pathomechanisms and regulatory networks in CKD although they seem to be quite far from routine clinical applications (see below).
Expression profiling of messenger ribonucleic acid (mRNA) and micro ribonucleic acid (miRNA) species found patterns associated with the risk of disease progression [53], the repair of acute kidney injury [54], various etiologies of CKD [55], the role and function of the immune system before and during hemodialysis [56], and the regulation of atherogenic pathways in CKD [57].
Based on top-down and bottom-up proteomics workflows, a range of new kidney-related biomarkers have been advocated, for example, kidney injury molecule 1 (KIM-1), neutrophil gelatinase-associated lipocalin (NGAL), fibroblast growth factor 23 (FGF-23), monocyte chemotactic protein 1 (MCP-1), or urine retinol-binding protein 4 (uRBP4) [58,59]. In addition, there is particularly vivid research on urinary peptides; most of these peptides are products of the turn-over of the extracellular matrix and, actually, derived from collagen ( [60]; Mischak, personal communication). Over more than a decade, a compelling body of data has been accumulated documenting a clinically relevant diagnostic and prognostic performance of one particular panel of urinary peptides called CKD273 [61-65].
Chronic kidney disease and related indications such as nephrotoxicity also range among the most frequently addressed subjects in metabolomics, partly because of their clinical, societal, and commercial impact (see above), partly also because, from a purely scientific angle, they are considered to be rather straightforward targets or even 'low-hanging fruits'. It is perfectly reasonable to believe that a functional impairment of organs with such a central role in metabolism as the kidneys will alter the systemic homeostasis to a degree that should be easily detectable in blood or urine by state-of-the-art bioanalytics. The same reasoning led metabolomics towards some of its greatest successes so far, for example, the much more detailed characterization of the pathobiochemistry of type II diabetes, the evidence-based assessment of anti-diabetic drugs in preclinical and clinical development, and even the identification of highly promising and biochemically plausible biomarker signatures for the early diagnosis of prediabetes/impaired glucose tolerance and for an individual risk assessment as much as a decade before the manifestation of the disease [9,66-70]. Even more disruptive, an utterly compelling proof-of-concept for the utility of mass spectrometry (MS) as a diagnostic tool was achieved by implementing routine screening programs for many genetically determined metabolic defects, the so-called 'inborn errors of metabolism', based on quantitative multiplex assays for amino acids and acylcarnitines [71,72]. Originating from pilot projects in the mid-1990s, this screening is now available in most industrialized and several developing countries and clearly set the stage for the workflow that is today called targeted metabolomics [9,73,74].

Metabolic biomarker candidates for CKD
The sizeable number of publications on kidney-related metabolomics studies mentioned above (and there may be a comparable amount of work performed by the pharmaceutical industry that has not been published) suggests an immense array of potential metabolic biomarker candidates.
Over the last few years, several teams of experts went to great lengths to summarize these results in very systematic review articles [59,75-77]. This chapter, however, follows a slightly different and maybe less comprehensive but hopefully complementary approach by highlighting alterations in selected metabolic pathways instead of individual molecules. Also, it focuses on findings that fulfill additional quality criteria, that is, for which there is relevant translational evidence, quality-controlled quantitative data, and at least some degree of mechanistic plausibility.
Of course, when claiming mechanistic insights based on typical metabolomics studies, one must never ignore the fact that anabolic and catabolic pathways are not the only factors influencing the homeostatic concentrations of metabolites in urine or peripheral blood. Nutritional uptake, microbial metabolism in the gastro-intestinal tract and urinary excretion (and, of course, hemodialysis in ESRD patients!) play equally fundamental roles. Unfortunately, in the large population-based studies, these aspects are never documented in sufficient detail and reliability to be suitable for a quantitative assessment (e.g. questionnaire-based reports on nutritional habits [78]). So, in a way, the pathway-centric methodology used in this chapter can only reflect one set of possible explanations for how metabolite concentrations are altered, and this shortcoming is primarily caused by difficult-to-avoid gaps in the documentation of most biomarker studies. Yet, the clinical experience from screening millions of newborns demonstrates that this particular limitation can be partly overcome: as soon as ratios of products and substrates of enzymatic reactions (or entire pathways) are analyzed instead of individual metabolite concentrations, the data are far less prone to all sorts of confounding factors such as dietary uptake and rather reflect the actual metabolic activity of the organism [66,79-81].
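The confounder-cancelling property of such ratios is easy to demonstrate with a toy simulation (all numbers below are invented for illustration): a multiplicative 'dietary uptake' factor shifts both substrate and product concentrations, so the product alone varies widely between simulated individuals while the product-to-substrate ratio reflects little more than the assumed enzymatic conversion plus assay noise.

```python
import random
import statistics

random.seed(1)

CONVERSION = 0.25  # hypothetical fractional enzymatic conversion of substrate to product
BASELINE = 40.0    # arbitrary baseline substrate concentration (µM)

substrates, products = [], []
for _ in range(1000):
    diet = random.uniform(0.5, 2.0)  # dietary confounder: up to 4-fold between individuals
    substrate = BASELINE * diet
    product = CONVERSION * substrate * random.gauss(1.0, 0.05)  # 5% analytical noise
    substrates.append(substrate)
    products.append(product)

def cv(values):
    """Coefficient of variation: relative spread of a marker across 'individuals'."""
    return statistics.stdev(values) / statistics.mean(values)

ratios = [p / s for p, s in zip(products, substrates)]
print(f"product CV: {cv(products):.2f}, ratio CV: {cv(ratios):.2f}")
```

Under these assumptions, the product concentration alone scatters with a coefficient of variation of roughly 35%, driven almost entirely by the simulated dietary factor, whereas the ratio's variation collapses to the 5% measurement noise, mirroring the gain in statistical power reported for real metabolite ratios.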

Creatinine
At first sight, it is hardly worth mentioning that the most frequently found metabolite indicative of CKD is creatinine [15,82]. However, this simple statement has two fundamentally different reasons: First and quite trivial, creatinine is indeed a marker for kidney function and has rightly been established as one of the most common diagnostic parameters for many years although-as noted above-its performance (both as a single laboratory parameter and as the basis for calculating the eGFR) is far from perfect.
Second, and this must be kept in mind for all further considerations: the vast majority of all studies in this field enrolled and staged or classified patients according to their eGFR and, thus, indirectly also according to their creatinine levels. Therefore, identifying creatinine as a statistically significant marker metabolite is just a typical case of self-fulfilling prophecy (it is actually quite revealing that, in many cases, creatinine does NOT rank as the top candidate, that is, does not have the highest significance level or the lowest p-value).
On the other hand, what is the alternative? Due to obvious cost, time, and compliance issues, the number of studies that are based on mGFR or hard clinical endpoints is rather limited, and so the subsequent discussion does not exclude eGFR-based studies as a matter of principle but instead tries to substantiate the level of confidence in the various findings through a synopsis of statistics (significance, independent replication), translational research (relevance of models and match of patterns in different species), and biochemical scrutiny (pathway mapping, enrichment analyses, or similar approaches).

Dimethylarginine metabolism
Considering all single metabolic markers and/or panels that have been suggested so far, the most compelling preclinical and clinical evidence underpins a central role of dimethylarginine metabolism and, in particular, of symmetric dimethylarginine (SDMA) [83-85].
A quick look at the underlying biochemistry: the guanidinium side chains of arginine residues in polypeptides are the targets for specific post-translational modifications by a set of isoenzymes called protein arginine N-methyltransferases (PRMT), which are evolutionarily well conserved from unicellular eukaryotes such as yeast all the way to humans [86]. In two consecutive reactions, first monomethylarginine and then one of the two possible isomers of dimethylarginine are formed: an asymmetrically substituted version (ADMA) if one ω-nitrogen atom carries both methyl groups, and a symmetric version (SDMA) in case the methyl groups are bound to both terminal nitrogens (Figure 4). Although these two molecular species are structurally quite similar (isomers, in fact), their physiological roles differ quite fundamentally. To be more specific, ADMA acts as a potent endogenous inhibitor of nitric oxide synthases (NOS) and, therefore, high concentrations lead to a decreased systemic production of nitric oxide (NO). Thus, ADMA is promoting or aggravating endothelial dysfunction, and elevated levels of ADMA may be among the functionally most meaningful cardiovascular risk factors in general [87,88]. Once released by proteolysis, the bulk of ADMA is catabolized to citrulline and dimethylamine by two isoforms of dimethylarginine dimethylaminohydrolase (DDAH) while SDMA is biologically rather inert, hardly metabolized in the body and, instead, eliminated via the kidneys.

Figure 4. Dimethylarginine metabolism. In two subsequent reactions, arginine side chains are mono- and then dimethylated, resulting in either asymmetric (ADMA) or symmetric dimethylarginine (SDMA). ADMA is a potent inhibitor of nitric oxide synthases (NOS), which produce nitric oxide (NO) through direct conversion of arginine to citrulline, and is mainly metabolized to citrulline and dimethylamine. In contrast, SDMA is metabolically inert and primarily eliminated via the kidneys.
In widely used animal models of kidney injury such as Sprague-Dawley rats treated with the nephrotoxic aminonucleoside antibiotic puromycin, SDMA levels in plasma were shown to increase in a dose- and time-dependent manner [89,90] (Figure 5A). A very similar correlation was observed in a cross-sectional study on CKD patients; both in diabetic nephropathy and in non-diabetic CKD (mainly hypertensive patients), later stages of the disease were characterized by significantly higher concentrations of SDMA in plasma, further corroborating that the observed changes are linked to the severity of kidney damage instead of the underlying etiology [89,91] (Figure 5B). The statistical significance of this finding was even stronger when using the SDMA-to-arginine ratio (corrected analysis of variance (ANOVA) p-value in the range of 10⁻¹¹ instead of 10⁻⁹ for SDMA alone [82]), although this is not a simple product-to-substrate ratio as briefly discussed above [66]. For a more detailed discussion of the outstanding improvements that a systematic exploration of such metabolite ratios can achieve both in statistical power and in biological plausibility, please refer to recent genome-wide association studies on phenotypes defined by targeted metabolomics, the first highly synergistic combination of different omics platforms to date [81,92,93].
Quite incomprehensibly, several current review articles on metabolic biomarkers for kidney failure do not even mention SDMA [15,76] although the scientific and clinical evidence underpinning its diagnostic potential is undeniable. A thorough meta-analysis of 20 clinical studies encompassing more than 2100 patients demonstrated beyond any reasonable doubt that there is a highly significant association of SDMA and creatinine concentrations on the one hand, and an inverse correlation of SDMA levels and several measures of renal clearance on the other [94]. Admittedly, in some of these studies, ADMA was also identified as a marker candidate, describing markedly elevated levels in individuals with impaired kidney function [84,91,94]. However, this must be seen as part of the aforementioned chicken and egg problem: CKD and cardiovascular disease are very closely interwoven and, therefore, alterations of both conditions' most significant biomarkers will frequently coincide. Yet, there is a relatively simple but revealing clinical observation that plausibly links ADMA to cardiovascular risk and SDMA to kidney function: after kidney transplantation, ESRD patients have at least partially restored renal function, and their SDMA levels drop quickly and markedly while, at the same time, ADMA concentrations stay elevated [95].

Tryptophan metabolism
The second group of CKD-associated metabolic changes to be discussed here is substantiated by especially convincing evidence from translational research: tryptophan metabolism. Tryptophan is a non-polar, aromatic amino acid, has the bulkiest sidechain in the proteinogenic repertoire, is essential in humans, and has long been the subject of particular interest for neurobiologists because it is the starting point for the biosynthesis of the neurohormone melatonin [96] and the neurotransmitter serotonin [97]. Lately, though, the alternative catabolic pathway originating from tryptophan, the kynurenine pathway, which ultimately leads to niacin via quinolinate, has drawn much more attention because of the immunomodulatory and tolerogenic effects [98-100] of the enzyme catalyzing the rate-limiting step in this pathway, indoleamine-2,3-dioxygenase, which oxidizes tryptophan to N-formyl-kynurenine [101,102] (Figure 6).

Increased turn-over of tryptophan in animal models with renal insufficiency has been observed as early as the 1960s, for example, in spontaneously hypertensive rats [103], and tryptophan depletion in peripheral blood has been identified in different sorts of nephropathies [104,105], although these studies relied on a rather limited analytical armamentarium. Yet, these very early observations have recently been confirmed in two independent animal models, namely in Sprague-Dawley rats treated either with adenine or puromycin to induce kidney damage. In fact, two completely different workflows led to the same conclusions: untargeted metabolic profiling of serum, urine, and kidney tissue identified tryptophan as a biomarker candidate distinguishing adenine-treated rats from untreated controls [106,107] and, based on a targeted and analytically validated quantitative data set, the aforementioned puromycin model had already shown the same effect [89,90].
The latter study, due to its longitudinal design with three dose-escalation arms, could demonstrate very clearly that, upon puromycin treatment, plasma tryptophan concentrations decreased in a dose- and time-dependent manner (Figure 7A) and, even more notably, they plunged to below one-third of the baseline levels observed in the group receiving a vehicle-only control. Effect sizes of this magnitude may be quite frequently found in urine (or in other omics disciplines, for that matter) but they are almost unheard of in plasma or serum, where the homeostatic regulation of all amino acid concentrations is typically very stringent, even under rather drastic environmental conditions, various dietary influences, exhausting physical exercise, or pharmacological interventions [9,66,78,[108][109][110][111][112]. As shown for dimethylarginine metabolism (see Section 4.2), these alterations in tryptophan metabolism are also very convincingly replicated by translational research. In this case, both cross-sectional studies on specifically selected patients [89,91,113,114] and larger population-based cohorts [115,116] were able to demonstrate that tryptophan concentrations in serum and plasma are significantly lower in patients with less residual kidney function, that is, at a later stage of CKD (Figure 7B).

Figure 6. Tryptophan metabolism. Schematic summary of tryptophan catabolism (modified after [82]). The essential amino acid tryptophan is the substrate for two different pathways, the so-called serotonin pathway producing the neurotransmitter serotonin [97] and the neurohormone melatonin [96], and the so-called kynurenine pathway leading to the synthesis of niacin via its precursor quinolinate.
Just as in the animal models described above, the absolute magnitude of this depletion is quite remarkable: the median tryptophan levels in stage 5 patients are more than 50% lower than in stage 3 patients, and these drastic changes seem largely independent of the underlying etiology as indicated by a separate subgroup analysis of diabetic and non-diabetic patients (Figure 7B). It has been known for a very long time, more than half a century, that tryptophan is specifically bound and transported by albumin (elucidated in impressive physico-chemical detail by McMenamy and Oncley [117]), and more recent publications estimate that 75-90% of the circulating tryptophan is actually present in a bound form [118,119]. The continuous excretion, that is, loss, of albumin in proteinuric animals and patients could, thus, influence these findings in a highly relevant manner (considering the relatively low binding affinity, with an apparent association constant k' of approximately 10⁴ depending on the experimental conditions, and the absolute concentrations reported, it is fair to assume that the assays used were measuring total instead of free tryptophan, although neither publication directly comments on this question). A quantitative comparison of data from the UroSysteOmics study, however, sheds some light on these speculations: the drop of the tryptophan levels is much more pronounced than that of the albumin concentrations in the same period (−58 vs. −19% from stage 3 to stage 5 [89]), so it is evident that there must be other mechanisms involved in depleting tryptophan than just a loss of transport capacity.
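The 75-90% bound fraction quoted above can be sanity-checked against the cited association constant with a one-line equilibrium calculation. The sketch below assumes a simple 1:1 binding model, albumin in large molar excess (~0.6 mmol/L, i.e., roughly 40 g/L at 66.5 kDa), and k' in units of M⁻¹; apart from k' ≈ 10⁴, these are illustrative assumptions, not values from the cited studies.

```python
# Back-of-the-envelope check of the albumin-binding figures quoted above.
# Assumption: simple 1:1 equilibrium with albumin in large excess, so the
# bound fraction of tryptophan is k'[Alb] / (1 + k'[Alb]).

def bound_fraction(k_assoc_per_molar: float, albumin_molar: float) -> float:
    """Fraction of ligand bound for a 1:1 equilibrium with excess albumin."""
    x = k_assoc_per_molar * albumin_molar
    return x / (1.0 + x)

frac = bound_fraction(1e4, 6e-4)  # k' ~ 10^4 M^-1, [Alb] ~ 0.6 mM (assumed)
print(f"bound tryptophan: {frac:.0%}")  # ~86%, within the reported 75-90% range
```

Under these assumptions the calculation lands comfortably inside the 75-90% range reported in the literature, which supports the interpretation that the cited assays measured total rather than free tryptophan.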
Indeed, both aforementioned pathways metabolizing tryptophan appear to be dramatically upregulated in later stages of CKD as underpinned by significantly elevated ratios of serotonin and kynurenine to tryptophan, respectively [89,91,113,114] (Figure 7C and D). In a large independent study, the kynurenine-to-tryptophan ratio was one of the strongest predictors of changes in kidney function between consecutive visits of study subjects in the Cooperative Health Research in the Region of Augsburg (KORA) S4 and F4 surveys and also of newly diagnosed CKD in this roughly 7-year period [115]. Besides these extremely convincing findings with significant diagnostic and prognostic potential, tryptophan plays yet another, albeit less direct role in CKD: it is the starting point for the synthesis of indoxyl sulfate (IS), one of the most intensely discussed and, arguably, also most dangerous uremic toxins. Some of the dietary tryptophan (as noted, Trp is an essential amino acid in humans) is cleaved to indole by the gut microbiota. Indole is absorbed by the intestinal mucosa and, ultimately, metabolized to IS in hepatocytes [120]. In an impressive number of reports, IS has been characterized as a nephrovascular toxin [121], an indicator of poor residual renal function [122], and a prognostic biomarker predicting elevated risks for vascular disease and all-cause mortality in CKD patients [120].
Of course, cleaving indole from Trp is only one of the countless ways, in which the gut microbiome may affect the pathophysiology of CKD and, more specifically, the systemic availability of many metabolites. The multifaceted connection between the microbiota and CKD-too complex to be discussed here-has recently been reviewed in great detail and, notably, also examined for its potential as a target for specific therapeutic interventions [123].

Urea cycle alterations, nitric oxide synthesis, and polyamine metabolism
A less straightforward and sometimes even controversial set of observations presents itself around another well-known pathway, the urea cycle and its interfaces with nitric oxide synthesis and polyamine metabolism (Figure 8). More concretely, a longitudinal analysis of a fairly large subset of the KORA cohort (more than a thousand individuals) identified spermidine as one of the biomarker candidates with the best statistical significance [115]; it was inversely correlated with the annual eGFR change, especially in people without a diagnosis of CKD at baseline, which clearly confirmed findings from the early 1980s [124,125]. Yet, quite surprisingly, some of the most recent reviews on biomarkers in nephrology do not mention spermidine at all [59,75,76].
In the UroSysteOmics patients, the citrulline-to-arginine ratio was significantly higher at later stages of CKD (p = 3.5 × 10⁻³ for diabetics, p = 3.7 × 10⁻³ for non-diabetics). In theory, this could, of course, be caused by a higher activity of nitric oxide synthases (although this is quite unlikely in a situation of pronounced oxidative stress; see Section 4.5) just as well as by a combined effect of arginase and ornithine transcarbamoylase (OTC) activities in the urea cycle. A more detailed evaluation showed that there were etiology-specific differences in this pathway: in euglycemic patients, the ornithine-to-arginine ratio was markedly higher at later stages of CKD (p = 7.8 × 10⁻⁵) while, in diabetic patients, this ratio was not associated with the degree of renal failure in a significant manner [89,91,113]. Further mechanistic studies are certainly required to untangle the intricacies of these observations because arginase also plays a role in the regulation of NO synthesis and vascular function beyond just competing with NOS for their joint substrate, arginine [126].
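As a generic illustration of how such product-to-substrate ratios are derived per patient and then compared between stages, here is a short sketch on invented data; the concentrations, group sizes, and the choice of a Mann-Whitney test are assumptions for demonstration, not the statistics actually used in the cited studies.

```python
# Illustrative sketch (synthetic numbers, not UroSysteOmics data): compute a
# product-to-substrate ratio per patient and compare it between CKD stages.
import random
import statistics
from scipy import stats

random.seed(0)
# hypothetical citrulline and arginine concentrations (uM) per patient
stage3 = [(random.gauss(35, 5), random.gauss(90, 10)) for _ in range(40)]
stage5 = [(random.gauss(55, 8), random.gauss(75, 10)) for _ in range(40)]

# derived parameter: citrulline-to-arginine ratio per patient
ratio3 = [cit / arg for cit, arg in stage3]
ratio5 = [cit / arg for cit, arg in stage5]

# non-parametric two-group comparison (ratios are rarely normally distributed)
_, p = stats.mannwhitneyu(ratio3, ratio5, alternative="two-sided")
print(f"median Cit/Arg, stage 3: {statistics.median(ratio3):.2f}, "
      f"stage 5: {statistics.median(ratio5):.2f}, p = {p:.1e}")
```

The same pattern (per-sample ratio, then a rank-based group test) applies to any of the enzyme read-outs discussed in this section.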
Considering the pathogenetic relevance of endothelial dysfunction and the well-documented NO deficiency in CKD (reviewed by Baylis [127]), it is, again, quite surprising that arginine, ornithine, and citrulline have so far hardly been discussed as biomarker candidates [15,128,129]. This may either mirror gaps in the analytical portfolio (after all, metabolomics is not a comprehensive omics discipline yet, and even less so if studies are based on a single technology or workflow) or a lack of pathway-oriented data analysis [79,80].

Figure 8. Urea cycle, nitric oxide, and polyamine production. Schematic summary of the urea cycle and its connections to polyamine and NO production. As discussed in Section 4.4, the levels of biomarker candidates such as arginine, ornithine, and citrulline are regulated in a complex manner; therefore, interpretation of the data for this pathway is quite speculative, except for spermidine, which is only involved in polyamine synthesis. Modified after [82].

Oxidative stress and functional consequences
With 16,605 PubMed-listed publications in 2017 alone, oxidative stress, that is, a biochemical imbalance of oxidizing and reducing agents but most often just referring to reactive oxygen species and their detoxification, is clearly one of the most intensely studied concepts of pathobiochemistry, and probably even of modern biology in general. In fact, looking at the literature, it seems that there is hardly a disease, in which oxidative stress is not one of the major culprits. Its pivotal role in CKD has been identified and reviewed hundreds of times from the 1980s to the present [130][131][132][133][134], so, for completeness' sake, it must be discussed here, too. Although there is a very broad agreement in the community about the finding itself and its clinical relevance, no direct diagnostic application of that knowledge has been achieved so far, mainly for (pre-)analytical reasons. Oxidative stress causes damage to various groups of biomolecules such as amino acids, lipids, or nucleotides by processes that are extremely well understood on the molecular level [135][136][137][138]. However, these reactions often generate quite unstable intermediates, for example, peroxides of unsaturated fatty acids (Figure 9). Therefore, fragments from the cleavage of such oxidized lipids (usually malondialdehyde and 4-hydroxynonenal) are commonly detected as surrogate markers. Other compounds require tricky preanalytical procedures or challenging analytical methods, for example, nitrated amino acids or oxidized nucleotides. Highly sophisticated sample preparation protocols and mass spectrometric assays have been developed for many of these analytes [139,140] but, due to their complexity and limited robustness, they are still far from a routine application in clinical chemistry.
A rather stable and analytically more accessible measure of oxidative stress was recently suggested and already applied in several studies, namely methionine sulfoxide and its ratio to unmodified methionine [108,111,113,141,142]. Methionine sulfoxidation is a posttranslational modification (PTM) of methionine residues caused by various oxidizing agents, for example, hydrogen peroxide, hydroxyl radicals, hypochlorite, chloramines, and peroxynitrite, and has frequently been reported to cause a more or less severe loss of function in modified proteins [143].
As may have been expected, the patients enrolled in the UroSysteOmics study had markedly higher methionine sulfoxide-to-methionine ratios the more advanced their CKD was, and this could be observed in diabetic and non-diabetic patients in a very similar fashion [89,91,113] (Figure 10). Interestingly, the broad coverage of the targeted metabolomics experiments, combined with hypothesis-driven data analytics and interpretation, made it possible to corroborate this finding by depicting functional consequences of the oxidative conditions in the same data set. Under oxidative stress, the activities of enzymes that depend on oxidation-sensitive cofactors, for example, tetrahydrobiopterin (BH4), have been reported to drop due to a limited availability of these cofactors [144][145][146][147]. In principle, this applies to NOS but, in practical terms, the effect on the citrulline-to-arginine ratio is often masked by alterations in the urea cycle as discussed above (Section 4.4). In the present case, the activity of phenylalanine hydroxylase (PAH) was a much more plausible read-out parameter. When a relevant percentage of BH4, an essential cofactor for PAH, is oxidized to 4-α-hydroxytetrahydrobiopterin, it is no longer available in sufficient quantities to ensure normal PAH activity and, so, one observes a 'phenylketonuria-like' phenotype with a decreased tyrosine-to-phenylalanine ratio (Figure 9). In fact, these opposing trends could be clearly observed in the quantitative data: just as methionine sulfoxidation rates increased from moderate to severe disease in the UroSysteOmics cohort, the PAH ratio continuously dropped towards later stages of CKD, and this was again independent of the underlying etiologies [89,91,113] (Figure 10). In very good accordance with this finding, the cross-sectional analysis of the KORA F4 cohort revealed a strong and highly significant negative correlation of phenylalanine levels with eGFR (effect size: 2.36 ml/min/1.73 m² per SD, p = 7.8 × 10⁻²²). One has to concede that this correlation was not significant in a replication cohort (UK Twins), possibly because of the much smaller sample size, but the pooled analysis of both cohorts still yielded a rather convincing p-value of 1.4 × 10⁻¹⁸ [116].

Figure 9. Oxidative stress. Simplified overview of some biochemical consequences of oxidative stress (modified after [82]). Unbalanced oxidative conditions cause chemical modifications to various classes of metabolites, for example, lipid peroxidation, amino acid nitration, or oxidation of nucleotides. Many products of these reactions are very unstable or, for other reasons, difficult to analyze; therefore, the methionine-sulfoxide-to-methionine ratio is now commonly used as a robust surrogate for these parameters. In the same context, highly reduced cofactors, for example, tetrahydrobiopterin, are oxidized and, thus, no longer sufficiently available to warrant a normal level of enzymatic activity. In the example chosen here, this affects the enzyme catalyzing the first step in phenylalanine catabolism, phenylalanine hydroxylase, which leads to a 'phenylketonuria-like' phenotype characterized by a lowered tyrosine-to-phenylalanine ratio.

Acylcarnitines
Particularly strong alterations in nephrology-related metabolomics studies were observed in a class of compounds that does not get much attention in mainstream biochemistry but has been part of the standard diagnostic repertoire since the beginning of the mass spectrometry-based era in newborn screening [71,72], the acylcarnitines. Long-chain fatty acids are the primary substrates for β-oxidation in the mitochondria and, thus, one of the pivotal energy sources in the cell. Yet, they can cross the double membrane of the mitochondria neither as free fatty acids nor as fatty acyl esters of coenzyme A (CoA), which is their metabolically activated form. Instead, tissue-specific isoforms of carnitine palmitoyltransferase I (CPT-I) located at the outer mitochondrial membrane first trans-esterify the fatty acyl residues from CoA to carnitine, a quaternary ammonium compound derived from lysine. The resulting acylcarnitines are then taken across the inner membrane by an antiporter system named carnitine-acylcarnitine translocase (CACT). On the matrix face of the inner membrane, a second carnitine palmitoyltransferase (CPT-II) re-esterifies the fatty acids to CoA, and this fatty acyl-CoA can then undergo β-oxidation while the released carnitine is returned to the cytoplasm to be available as a carrier again (a cyclic process thus called the carnitine shuttle) [148][149][150].
In the aforementioned study on puromycin-induced nephrotoxicity in rats, the plasma levels of acylcarnitines of various chain lengths surged in a dose- and time-dependent manner [82,89,90], and this may be understood as a consequence of increased mitochondrial leakage caused by apoptosis or other sorts of damage to the mitochondrial membranes (a basal level of leakage must be assumed to explain the fact that acylcarnitines are detectable in peripheral blood at all).
As for the marker candidates discussed previously, there is striking translational evidence supporting these findings both from population-based and from smaller, dedicated clinical studies. In a cross-sectional assessment of the KORA F4 data, 26 acylcarnitines showed statistically significant correlations with renal function, that is, eGFR, with the strongest effect size of 3.74 ml/min/1.73 m² per SD and a compelling p-value of 2.2 × 10⁻⁵² for glutarylcarnitine, and most of these hits were confirmed as significant in the UK Twins cohort [116]. Also in the UroSysteOmics study, certain species of acylcarnitines such as glutarylcarnitine (C5-DC) or pelargonylcarnitine (C9) showed significantly elevated levels in patients with more advanced kidney disease, again independent of the etiology (ANOVA p-values for the trend across stages 3, 4, and 5 of 3.9 × 10⁻⁶, 1.7 × 10⁻⁴, 3.4 × 10⁻⁴, and 4.4 × 10⁻⁵ for C5-DC and C9 in diabetics and non-diabetics, respectively) [89,113].
One has to consider, however, that many of these acylcarnitines have very low absolute concentrations in healthy or early stage patients (typically in the low nanomolar range) and, so, many individual values that were the basis of the statistical evaluation may have been around or even below the lower level of quantitation (LLOQ) of the assays applied in these studies (Biocrates AbsoluteIDQ P150 and P180 kits [151,152]). Of course, this does not necessarily put the observation itself into question (since these kits were real first-in-class products, the LLOQ values were chosen very conservatively; in the validation and in routine applications, the precision was still extremely good at the lower end of the calibration curves; and, finally, even the qualitative difference of a marker being below LLOQ at early stages and well above at later stages could bear meaningful diagnostic information) but some of the published significance levels could be slightly overestimated. All of the above leaves one important question to be discussed: three independent clinical cohorts ranging from population-based to specifically selected in the clinics identify C5-DC as the most significant marker candidate in this class of metabolites-is there any plausible mechanistic explanation for this particular finding? Glutaric acid (systematically: pentanedioic acid) is an intermediate of lysine, hydroxylysine, and tryptophan catabolism, and elevated concentrations are the eponymous hallmarks of glutaric acidemias, autosomal recessive deficiencies of various enzymes affecting this pathway [153,154]. Yet, besides some observations that neonates with glutaric acidemias can sometimes also have birth defects like polycystic kidneys, there is no convincing genetic or metabolic link in the literature between glutaryl-CoA dehydrogenase and CKD, for example, a higher rate of heterozygotes among patients with renal failure or the like. 
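One defensive preprocessing step implied by this caveat is to flag, and if necessary impute, values below the LLOQ before running any statistics. A minimal sketch with an invented LLOQ and invented patient concentrations; the LLOQ/2 substitution shown here is one common convention in the field, not the procedure documented for the cited kits:

```python
# Hedged sketch of handling below-LLOQ values prior to statistical analysis.
# The LLOQ and the concentrations are illustrative, not actual kit specs.
LLOQ = 0.05  # uM, hypothetical lower limit of quantitation for an acylcarnitine

raw = [0.02, 0.04, 0.06, 0.11, 0.03, 0.25]  # invented patient values, uM

def censor(value: float, lloq: float) -> tuple[float, bool]:
    """Return (usable value, below-LLOQ flag); substitute LLOQ/2 when censored."""
    if value < lloq:
        return lloq / 2.0, True
    return value, False

processed = [censor(v, LLOQ) for v in raw]
n_censored = sum(flag for _, flag in processed)
print(f"{n_censored}/{len(raw)} values below LLOQ")  # 3/6 in this toy example
```

Keeping the censoring flag alongside the substituted value lets downstream analyses report how much of a group's signal rests on quantifiable measurements, which is exactly the qualitative information the text argues may still be diagnostically meaningful.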
However, the explanation may also be much easier: C5-DC as a dicarboxylic compound could simply be a product of ω-oxidation of fatty acids, that is, a side effect of the oxidative stress discussed in Section 4.5.
As noted, this list is certainly not exhaustive and primarily presents compounds that were identified in clinical samples, but it may also highlight that these biomarker candidates range from well-known uremic toxins like indoxyl sulfate to xenobiotics like azelate, the role of which can only be subject to speculation.
On the other hand, many of these metabolites may just represent typical confounders, for example, reflect characteristics of the underlying etiology of CKD or comorbidities such as diabetes [174]: pyruvate and glucogenic amino acids like alanine, serine, and glycine, for instance, directly mirror the systemic glycolytic or gluconeogenic flux; glycolysis replenishes their pools while gluconeogenesis tends to deplete them [9,66]. Also closely linked to diabetes are the branched-chain amino acids (BCAA): leucine, isoleucine, and valine accumulate in peripheral blood in a situation that bears a striking resemblance to insulin resistance [9,66,175], and they are-particularly in combination with the aromatic amino acids (AAA)-some of the earliest predictors of incident type II diabetes [68].
Similarly, lysophospholipid levels (and, even more so, their ratios to intact phospholipids) are directly correlated to the activity of phospholipases, which represents the first step in a cascade that releases various polyunsaturated fatty acids (PUFA) from membrane phospholipids and, eventually, produces oxidized PUFA-derivatives such as prostaglandins, leukotrienes, thromboxanes, and others, thus mirroring the systemic level of inflammation rather than an organ-specific effect [176][177][178].
In conclusion, a very long list of metabolic changes has been identified in individuals with impaired renal function and in relevant animal models of kidney damage. However, from a diagnostic point-of-view, it seems that only a limited subset of these marker candidates has been underpinned by quality-controlled quantitative data [9,10,108] analyzed by appropriate bioinformatics strategies [79,[179][180][181][182], diligent study design that allows for an assessment of the specificity of the observations, and mechanistic insights based on translational research and a detailed biochemical understanding of the differentially regulated pathways [66,79,80,182]. Yet, the marker candidates fulfilling these criteria could certainly have a significant potential in clinical chemistry, particularly if they are (or have been) successfully replicated in independent clinical cohorts [116] and scrutinized in studies specifically designed to test their prognostic value [115,183,184].

Multiparametric metabolic biomarker panels
Up to now, the majority of studies in this field have analyzed their data sets to identify single metabolites as candidate biomarkers but, as noted above, derived parameters, such as ratios of product and substrate concentrations of a particular enzymatic reaction or an entire metabolic pathway, are extremely powerful tools for finding correlations with larger effect sizes, better statistical significances, and, notably, also higher biochemical plausibility [9, 66, 71, 72, 79-81, 92, 116, 175, 182]. Yet, one of the basic assumptions nurturing the immense optimism at the beginning of the omics era in biomedical research and development was that the superior information content offered by the comprehensive analytical platforms could only be harnessed by using broader, multiparametric panels or even signatures as biomarkers. To this end, two profoundly different strategies for defining such panels have been successfully applied in CKD-related metabolomics, one guided by biochemical background knowledge and the other by hypothesis-free bioinformatics approaches.

Knowledge-based biomarker panels
Hypothesis-driven attempts to define multiparametric diagnostic signatures are founded on a detailed, pathway-oriented description and, whenever possible, also a mechanistic understanding of the pathobiochemical changes identified in particular studies [79,80]. If specific sets of such metabolic alterations are repeatedly observed, for example, the various effects discussed in Section 4, it would seem logical and straightforward to devise simple linear combinations of concentrations or ratios to refine the diagnostic performance. To illustrate this point, here is one very simplistic, generic example based on the UroSysteOmics data set (not the optimized model, which, of course, cannot be revealed in this context):

S = a₁·r₁ + a₂·r₂ + ⋯ + aₙ·rₙ (1)

where the rᵢ are metabolite concentrations or product-to-substrate ratios and the aᵢ are empirical weighting coefficients. Various versions of generic scores like this (Eq. (1)) easily outperformed the individual ratios; this was demonstrated by relevant improvements of the area under the receiver-operating-characteristic curve (AUROC) for the distinction of stage 4 from stage 3 (in the combined cohort, i.e., irrespective of the etiology) from 0.75 to 0.853 [114], and these equations can and, in fact, must be extended and further optimized for each diagnostic indication. However, in contrast to early hopes in theoretical biology, this strategy does not work ad infinitum. The area under the curve (AUC) does not asymptotically converge to 1.0 if one only adds enough parameters with the intention to 'exhaustively' describe the biological system (more is not always better). In most cases (unfortunately, as with so many important aspects of metabolomics, there are hardly any relevant publications on this subject), the AUC peaks for signatures comprising 5-20 metabolites and then drops again, supposedly because additional features add too much analytical and biological 'noise'. In one concise example, for which this was analyzed in a systematic fashion, panels of plasmalogens were checked for their performance in the (admittedly trivial) distinction between diabetics and euglycemic controls.

Plasmalogens are phospholipids that have one fatty acid residue linked to the glycerol backbone by an ester bond and the second by an enol ether bond; their concentrations continuously drop during the progression of the metabolic syndrome towards manifest type II diabetes. Starting with the single most significant molecular species and then adding up to six plasmalogens in a linear combination as shown in Eq. (1) improved the AUC from 0.92 to 0.95, but further addition of up to 35 features from the same class caused the AUC to drop to 0.88, that is, below the value for the best monoparametric marker [10,79,182].
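The core point, that a linear combination of a few markers can beat the single best marker, can be reproduced on purely synthetic data. Everything below (the two invented 'ratios', their distributions, and the unit weights of the combined score) is an assumption for illustration, not a model from the cited studies; the AUROC is computed with its rank-based (Mann-Whitney) definition.

```python
# Toy illustration (synthetic data): a simple linear combination of two
# informative markers versus the single best marker, compared by AUROC.
import random

random.seed(1)

def auc(scores_cases, scores_controls):
    """Rank-based AUROC (equivalent to the normalized Mann-Whitney U)."""
    wins = sum((c > k) + 0.5 * (c == k)
               for c in scores_cases for k in scores_controls)
    return wins / (len(scores_cases) * len(scores_controls))

# two invented 'ratios' per subject; both shifted upwards in cases
cases    = [(random.gauss(1.2, 0.4), random.gauss(0.9, 0.4)) for _ in range(60)]
controls = [(random.gauss(0.8, 0.4), random.gauss(0.6, 0.4)) for _ in range(60)]

auc_single   = auc([x for x, _ in cases], [x for x, _ in controls])
auc_combined = auc([x + y for x, y in cases],   # S = 1*r1 + 1*r2
                   [x + y for x, y in controls])
print(f"AUC single ratio: {auc_single:.2f}, combined score: {auc_combined:.2f}")
```

With larger, noisier panels the same computation also exhibits the saturation-then-decline behavior described above, although demonstrating that convincingly requires more features and cross-validation than this sketch includes.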
While such 'constructed' marker panels are transparent, plausible, even intuitively pleasing and, thus, have a reasonable likelihood of gaining acceptance in the clinical community, they certainly fail to take advantage of the entire information content of a given data set in a systematically optimized way. Therefore, the definition of marker panels and the generation of diagnostic/prognostic algorithms are increasingly based on hypothesis-free strategies such as machine learning, network analysis, and other advanced data mining strategies.

Panel selection and optimization by machine learning
In contrast to the hypothesis-driven definition of metabolic biomarker panels described above, a much more technical selection process can be conducted by means of machine learning. The roots of machine learning as a discipline promoted by a rather special community date back to the middle of the twentieth century, when the legendary Alan Turing published his groundbreaking work on 'Computing machinery and intelligence' [185]. Also in the 1950s, the 'Dartmouth summer research project on artificial intelligence' took place as a kick-off meeting for the entire field [186], Arthur Samuel presented his seminal paper 'Some studies in machine learning using the game of checkers' [187], and Ray Solomonoff first discussed his idea of 'An inductive inference machine', which he later extended and matured into his 'Theory of inductive inference' [188].
Machine learning is closely related to the fields of artificial intelligence (AI), knowledge discovery, and data mining. It investigates systems and algorithms that learn from experience (data) to improve their performance, and is traditionally split into three major arms: supervised, unsupervised, and reinforcement learning. The main objective in supervised learning is the recognition of patterns or the predictive classification (e.g., the distinction of cases and controls in a clinical trial) by approaches such as k-nearest neighbor (kNN) classification, Bayesian networks, logistic regression, decision tree learning, support vector machines (SVMs), or neural networks. In contrast, unsupervised learning applies a descriptive approach to detect unknown patterns and rules in data sets, to cluster data, or to identify classes that were not previously defined, for example, by using partitioning-based approaches such as k-means clustering, density-based strategies, or hierarchical clustering. The third variety, reinforcement learning, uses the principle of cumulative rewards to optimize the actions of a software agent in complex situations [189][190][191].
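A minimal, self-contained contrast of the supervised and unsupervised arms on synthetic two-dimensional data; the cluster positions, sample sizes, and the choice of a linear SVM and k-means (via scikit-learn) are assumptions made purely for illustration.

```python
# Supervised vs. unsupervised learning on two synthetic 'phenotype' clusters.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
a = rng.normal([0, 0], 1.0, size=(50, 2))   # invented control group
b = rng.normal([4, 4], 1.0, size=(50, 2))   # invented case group
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)

# supervised: learn the case/control boundary from the labeled examples
clf = SVC(kernel="linear").fit(X, y)
acc = clf.score(X, y)

# unsupervised: recover the two groups without ever seeing the labels
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
agreement = max((labels == y).mean(), (labels != y).mean())  # label switching

print(f"SVM training accuracy: {acc:.2f}, k-means agreement: {agreement:.2f}")
```

On well-separated clusters like these, both approaches succeed almost perfectly; the practical differences only emerge on realistic, noisy, high-dimensional data such as the metabolomics matrices discussed below.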
Successful attempts to define biomarker panels by machine learning were recently conducted based on the metabolomics and proteomics data sets from the UroSysteOmics study [183]. In a rather typical workflow, potential biomarker candidates were initially identified by univariate statistics of differences between groups of patients with early and advanced stages of CKD. Multiparametric classifiers were then developed by supervised machine learning using support vector machines (SVM) [192]. To this end, 76 putative biomarkers, which showed statistically significant differences between the defined cohorts, were combined into 3 distinct panels: 'MetaboP' consisting of 17 plasma metabolites, 'MetaboU' comprising 13 metabolites identified in urine, and, finally, 'Pept' consisting of 46 urinary peptides (for the far more advanced panel CKD273, consisting of urinary peptides analyzed on the same platform, see Section 3). The performance of these three classifiers was evaluated in an independent test set by checking their correlations with renal function (eGFR) at baseline and at a follow-up visit approximately 2 years later. Each of the classifiers showed a very good correlation with baseline eGFR, that is, in a diagnostic setting, but, much more excitingly, also with the renal function at the follow-up examination, indicating their marked potential as prognostic tools. In summary, this study presented a convincing methodology for the systematic development of multiparametric biomarker panels by machine learning. Yet, looking at the details of the initial biomarker selection, several obvious mistakes were made by paying too little attention to peculiarities of the analytical platforms and to previous experiences analyzing this data set, so there is certainly a lot of room for improvement regarding the diagnostic and prognostic performances.
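The two-step workflow described above (univariate pre-selection, then SVM training with held-out evaluation) can be sketched as follows. The data matrix, number of informative features, effect sizes, and p-value threshold are synthetic stand-ins, not the UroSysteOmics features or the parameters of the cited study.

```python
# Sketch of a univariate-filter-then-SVM panel-building workflow on synthetic
# data: only the first 10 of 100 'metabolites' actually differ between groups.
import numpy as np
from scipy import stats
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_per_group, n_features, n_informative = 60, 100, 10

X = rng.normal(0, 1, size=(2 * n_per_group, n_features))
y = np.array([0] * n_per_group + [1] * n_per_group)
X[y == 1, :n_informative] += 1.0  # invented effect size of 1 SD

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# step 1: univariate statistics on the training set only (avoids leakage)
_, pvals = stats.ttest_ind(X_tr[y_tr == 0], X_tr[y_tr == 1], axis=0)
panel = np.where(pvals < 0.01)[0]  # the selected 'biomarker panel'

# step 2: supervised learning on the selected panel, evaluated on held-out data
clf = SVC(kernel="linear").fit(X_tr[:, panel], y_tr)
test_acc = clf.score(X_te[:, panel], y_te)
print(f"panel size: {panel.size}, test accuracy: {test_acc:.2f}")
```

Note that the filter is fitted on the training split alone; running the univariate selection on the full data set before splitting is exactly the kind of leakage that inflates the apparent performance of published panels.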
In addition, the authors of this study also tested a combination of the three classifiers for its correlation with baseline and follow-up eGFR, but this approach did not significantly improve the correlation coefficients when compared to each of the three original biomarker panels. This may come as a surprise, even a disappointment, particularly considering the breakthroughs that could be achieved when conducting GWAS on phenotypes defined by targeted metabolomics [81,92]. Obviously, however, the fundamental belief of modern systems biology that the amassment of data from different omics disciplines would automatically generate such synergies (the more the merrier) is not necessarily true.

Summary and future perspectives
The present chapter attempted to summarize some of the latest breakthroughs in the identification and development of metabolic biomarkers for chronic renal failure. It is more than obvious that there is a huge unmet need for improved diagnostic and prognostic tools for this indication, which represents an immense-and continuously growing-burden for affected patients and public budgets alike. Also, the introduction and impressive clinical validation of more aggressive and personalized intervention strategies has further stressed the necessity for finding and implementing better markers for accurate staging of the disease, monitoring its progression or the effects of therapeutic regimens and, eventually, assessing each patient's individual prognosis to make early and informed decisions in disease management.
In the last decade, the entire armamentarium of bioanalytical platform technologies has been used in this quest. While genomic analyses identified certain risk factors and elucidated some new regulatory relationships, it was primarily proteomics and metabolomics, that is, the disciplines depicting functional endpoints rather than predispositions, that reported findings with the potential for clinical application in the foreseeable future.
The statistical basis for these developments and, thus, their credibility has been strengthened substantially in the last few years when large population-based cohorts were analyzed in addition to the smaller, dedicated studies, in which clinical research was initially conducted. So, today, there are highly promising biomarker candidates to be developed as diagnostic tools that are not just backed by single observations but have been confirmed in independent cohorts, some even in comprehensive meta-analyses (e.g. SDMA), and many of them could be substantiated by research on relevant animal models and by a thorough elucidation of the underlying mechanisms, pathways, and networks.
Yet, so far, these data have often come from retrospective studies (or, at least, from retrospective data analyses), and (too) many of these studies have been based on eGFR-related inclusion and classification criteria (or rather fuzzy diagnostic parameters such as microalbuminuria, which is no longer considered a relevant clinical endpoint). Despite sophisticated statistical strategies and detailed additional phenotyping of the patients, the lack of large, prospective studies designed around hard clinical endpoints such as mortality, initiation of RRT (or mGFR as a better measure of renal function, for that matter) still limits the dissemination and acceptance of these findings. The next couple of years, however, will witness the completion (or, at any rate, meaningful interim analyses) of presently ongoing studies that fulfill some of these essential criteria, such as the PROVALID study initiated by the SysKid consortium [193], the German Chronic Kidney Disease (GCKD) study [194], the UroSysteOmics study [61,89,91,113,183], the French Chronic Kidney Disease-Renal Epidemiology and Information Network (CKD-REIN) cohort study [195], the Proteomic prediction and Renin angiotensin aldosterone system Inhibition prevention Of early diabetic nephRopathy In TYpe 2 diabetic patients with normoalbuminuria (PRIORITY) trial [65], and the KoreaN cohort study for Outcome in patients With Chronic Kidney Disease (KNOW-CKD) [196]; these studies will be complemented and extended by both general and dedicated biobanking activities [197-200]. In the end, the data generated in these studies will hopefully allow a reasonable validation of some of the putative marker candidates presented in this chapter.
Still, the question remains how close this will take the field to new, broadly available diagnostic tools in daily clinical practice. The answer will primarily depend on the technical feasibility and robustness in a routine setting, such as clinical chemistry core facilities, and on the regulatory implications for the related products. For some of the candidates most likely to be successfully validated, these points can already be discussed in a fairly well-informed manner. Starting with the proteomics output, the monoparametric protein markers, for example, KIM-1 or NGAL, can be quantified with reasonable accuracy and precision by standard immunoassays, and kits certified for in vitro diagnostics (IVD) use are commercially available, for example, from BioPorto Diagnostics (Hellerup, DK). The far more complex peptide and metabolite marker panels, however, were originally discovered by MS (some also by NMR), and, as of today, there is hardly a plausible alternative platform that could replace mass spectrometric detection for these parameters: antibody-based detection is not suitable for small ubiquitous molecules like most of the metabolites discussed above, and the peptide panels would quickly exceed any reasonable limits for multiplexing immunoassays (at least on the currently established platforms). So, if MS is the method of choice for now (and for many technical reasons not to be discussed here, it will most likely stay this way for quite some time), what is the present status of clinical applications of MS?
Of course, this strongly depends on three technicalities: (a) the kind of instrument used (triple quadrupole- or time-of-flight (ToF)-based platforms), (b) the separation step needed for a particular class of compounds, for example, none (flow injection analysis), capillary electrophoresis (CE), gas (GC), or liquid (LC) chromatography, and (c) the appropriate ionization technique, for example, electrospray ionization (ESI), atmospheric pressure photoionization (APPI), chemical ionization (CI), or matrix-assisted laser desorption/ionization (MALDI). As for ToF-based systems, there is no routine application of a CE-ToF-MS platform in clinical practice today, and only a fairly recent adoption of a MALDI-ToF-MS instrument in bacteriology, the so-called MALDI Biotyper from Bruker (Billerica, MA, USA) [201]. In contrast, tandem mass spectrometry (MS/MS) using standard triple quadrupole (QqQ) instruments has been the gold standard in neonatal screening for two decades now [71,72], with millions of newborns tested in an extremely sensitive and specific and, yet, highly cost-effective manner. Tandem MS is by now also an established component of state-of-the-art clinical chemistry labs in most industrialized countries and, notably, subject to stringent quality control [202,203]. Routine applications of this platform range from therapeutic drug monitoring, most commonly for immunosuppressives [204], to assays for vitamin D and some of its metabolites [205], and from screening programs for drugs of abuse [206] to the worldwide anti-doping activities [207]. Moreover, the targeted portfolio is growing fast to also cover clinically relevant classes of compounds such as steroid hormones [208,209], bile acids [210], catecholamines [211], and others.
As a matter of fact, the majority of the biomarker candidates presented in Section 4 (e.g. amino acids, biogenic amines, or acylcarnitines) are already amenable to standardized quantification by commercially available kit products, for example, as subsets of the AbsoluteIDQ™ portfolio from Biocrates (Innsbruck, AT). These kits are validated on most of the suitable mass spectrometers from the leading instrument vendors AB Sciex (Framingham, USA), Waters (Milford, USA), and Thermo Fisher (Waltham, USA), and the sample preparation procedures can be fully automated on robotic liquid handling systems, for example, from Hamilton (Bonaduz, CH) or Tecan (Männedorf, CH). Most importantly in this context, technically very similar kits have already been certified as in vitro diagnostics (IVD) according to the European directive 98/79/EC (IVDD), for example, the SteroIDQ™ kit from Biocrates. Of course, the new European regulation 2017/746 (IVDR), published on May 5, 2017, sets more challenging standards for such a certification, but there is no reason to believe that this will undermine the suitability of MS-based assays for routine diagnostics in principle. In conclusion, the technicalities of bringing a diagnostic metabolite panel into routine clinical practice seem quite straightforward, so it will primarily depend on the actual performance of these marker candidates in the ongoing validation studies whether and when they could become part of the standard diagnostic repertoire and patients could eventually benefit from an improved disease management for CKD.