Revisited histologic criteria of chronic allograft nephropathy
Since the first successful kidney transplantation between identical twins in 1954, a new modality to treat patients with terminal kidney insufficiency has been available. Although the results in the first decades were modest, continuous development has characterized this captivating field. A major advance was the introduction of the new immunosuppressant cyclosporine A in the early 1980s. The foundation of its success was its ability to improve kidney graft survival significantly over the first year, and calcineurin inhibitors remain the cornerstone of immunosuppression in the present decade.
Chronic allograft nephropathy is a histopathological diagnosis used to denote features of chronic interstitial fibrosis and tubular atrophy within the renal allograft. It remains the most common cause of graft dysfunction and loss after renal transplantation.
The term chronic allograft nephropathy was proposed in 1991, and it replaced the previously used term “chronic rejection”. The intention was to unify the chronic histological changes seen under light microscopy, such as interstitial fibrosis, tubular atrophy, transplant glomerulopathy and vasculopathy. The pathophysiology behind each of these features may nevertheless be different. The processes involved are approached by dividing them roughly into immunological and non-immunological factors, although they may be interrelated.
In this chapter we will discuss the histological features, the pathogenesis, the different etiologies and the therapeutic possibilities in cases of chronic allograft nephropathy.
2. Epidemiology of chronic allograft nephropathy
The prevalence of chronic allograft nephropathy at 2 years was reported in a prospective multicenter trial that compared cyclosporine against tacrolimus (Solez et al., 1998), in which 72.3 % and 62.0 % of biopsies, respectively, exhibited CAN. There was no difference in chronic histology between the therapeutic arms, but CAN at 2 years was associated with older donor age, early acute rejection, and episodes of acute CNI nephrotoxicity. Functional studies unfortunately underestimate significantly the incidence of histological graft injury. One study found that 94 % of grafts had histological evidence of interstitial fibrosis and tubular atrophy at 1 year (Nankivell et al., 2003). The same study found that much of the progressive chronic damage was related to calcineurin inhibitors, even though the levels of these drugs had been maintained well within the defined target range.
3. Histopathology of chronic allograft nephropathy
Previously, chronic allograft rejection was considered the main aetiological factor for chronic graft loss, as features of cellular inflammatory immune infiltrates, identified on kidney biopsies, were suggestive of injury from immunological changes within the graft. This changed with the implementation of the Banff 97 working classification of renal allograft pathology, which integrated features of the Chronic Allograft Damage Index (Racusen et al., 1999) and Cooperative Clinical Trials in Transplantation systems (Isoniemi et al., 1994). This led to the standardization and semiquantification of these lesions, and the term chronic allograft nephropathy replaced chronic allograft rejection.
The histological features that define chronic allograft nephropathy in the kidney transplant allograft include interstitial fibrosis and tubular atrophy, as mentioned above, as well as features of glomerulosclerosis with an aspect of double contours in the glomerular basement membrane, arteriolar hyalinosis and arteriolosclerosis (Fig 1) (Nankivell & Chapman, 2006).
Chronic allograft nephropathy is graded as mild, moderate or severe based on the severity of chronic interstitial fibrosis and tubular atrophy and the area of cortex affected in the biopsy specimen. Interstitial fibrosis, denoted as ci, is scored by the area fibrosed and ranges from mild (ci1 6–25%) to severe (ci3 >50%). Tubular atrophy refers to the loss of tubular height and increased luminal size of the tubules and is denoted as ct (ct0–ct3). Tubular atrophy and interstitial fibrosis are often nonspecific by themselves (Table 1).
Chronic transplant glomerulopathy refers to the thickening of the glomeruli and is quantified by the percentage of glomeruli developing “double contours” of peripheral capillary loops and is denoted as cg (cg0–cg3). Arteriolar hyalinosis, as suggested by the term, denotes thickening of arterioles within the kidney based on the amount of periodic-acid-Schiff-positive hyalinosis and is denoted as ah (ah0–ah3), often implying calcineurin inhibitor nephropathy. More in-depth quantification of all of these criteria is readily available (Racusen et al., 1999).
Table 1. Grading of chronic allograft nephropathy by interstitial fibrosis and tubular atrophy.

| Grade | Histology | Interstitial fibrosis | Tubular atrophy |
|---|---|---|---|
| I | Mild | ci1: 6–25% of cortical area | ct1: up to 25% of cortical tubules |
| II | Moderate | ci2: 26–50% | ct2: 26–50% |
| III | Severe | ci3: >50% | ct3: >50% |
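The Banff scoring in Table 1 is a simple threshold rule, and can be sketched as a small function. This is an illustrative sketch only: the percentage thresholds are those of Table 1, but the function names are invented here, and grading the overall lesion by the worse of the two scores is a simplifying assumption for illustration, not part of the Banff classification itself.

```python
def banff_ci_ct_score(percent_cortex_affected: float) -> int:
    """Map the percentage of affected cortex to a Banff ci/ct score (0-3).

    Thresholds follow Table 1: <=5% -> 0, 6-25% -> 1, 26-50% -> 2, >50% -> 3.
    """
    if percent_cortex_affected <= 5:
        return 0
    if percent_cortex_affected <= 25:
        return 1
    if percent_cortex_affected <= 50:
        return 2
    return 3


def can_grade(fibrosis_pct: float, atrophy_pct: float) -> str:
    """Grade CAN from the worse of the two lesion scores (assumed convention)."""
    score = max(banff_ci_ct_score(fibrosis_pct), banff_ci_ct_score(atrophy_pct))
    return {0: "none", 1: "I (mild)", 2: "II (moderate)", 3: "III (severe)"}[score]
```

For example, a biopsy with 30 % cortical fibrosis and 10 % tubular atrophy would score ci2/ct1 and, under this convention, be reported as grade II.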
The addition of C4d staining to the Banff criteria in 2003 has allowed for the helpful diagnosis of chronic antibody-mediated rejection. C4d is a marker of complement activation, implying the presence of antidonor antibodies and hence antibody-mediated rejection; it is released when complement is activated by bound antibody. These antibodies bind to endothelial cells in glomerular and peritubular capillaries, suggesting antibody deposition (Feucht et al., 1991, Nickeleit et al., 2002) and prompting the clinician to request donor-specific antibody testing. C4d staining is regarded as positive or negative, and its position within the biopsy is recorded and graded by type, as acute tubular necrosis-like, capillary or arterial (Racusen et al., 2003). C4d has a role in acute rejection, early unexplained primary graft non-function and chronic dysfunction where transplant glomerulopathy is present (Nickeleit et al., 2002).

The evidence for chronic allograft nephropathy as the leading cause of progressive renal failure and graft loss is supported by both transplant registry and protocol biopsy data. Graft loss secondary to the progressive development of chronic allograft nephropathy has consistently been recorded within the Australian–New Zealand (ANZDATA) transplantation registry (Chang et al., 2007). Although histological confirmation of chronic allograft nephropathy by biopsy is variable, reports from all databases show progressive transplant loss attributable to CAN continuing to the present day despite improved immunosuppression regimens. Cohort studies using protocol biopsies performed from the day of transplantation to 10 years posttransplantation consistently demonstrate the evolution and progression of CAN (Fernando et al., 2004, Nankivell et al., 2003, 2004c, Schwarz et al., 2005). Larger studies have helped identify aetiological factors involved in chronic graft injury.
In particular, the 10-year protocol biopsy study on adult patients with kidney–pancreas transplants documented the occurrence of severe rejection, of subclinical rejection and in some cases true chronic rejection, as evidenced by tubulointerstitial damage, with increasing evidence of progressive nephropathy from calcineurin inhibitors. Histological lesions of grade 1 chronic allograft nephropathy were present in up to 94.2% of adult patients at 1 year posttransplant (Nankivell et al., 2003, 2004c), and grades worsened progressively up to 10 years.
4. Pathogenesis of chronic allograft nephropathy
The pathogenesis of chronic allograft nephropathy is still not fully elucidated, although several theories have been suggested (Häyry et al., 1993, Halloran et al., 1999, Paul et al., 1999, Joosten et al., 2004). Chronic allograft nephropathy is thought to initiate from a series of challenges to the allograft. Injury to the graft begins even before the effect of the alloresponse: donor brain death, warm ischemia, cold ischemia, and ischemia/reperfusion injury all result in increased immunogenicity of the graft, causing an increased inflammatory alloresponse after revascularization of the allograft. The series of injuries continues during the first weeks after transplantation; acute tubular necrosis, acute rejection episodes, calcineurin inhibitor nephrotoxicity, and infections in the graft, among others, contribute to the injury of the transplanted kidney. All this occurs against the background of foreign MHC antigens, and often in a kidney from an older donor with some degree of age-associated change and a limited capacity to recover from injury.
Renal injury is thought to result in an inflammatory response; recipient lymphocytes and monocytes enter the graft and produce cytokines, which stimulate inflammatory and mesenchymal cells to produce excess growth factors, resulting in proliferation of myofibroblasts and smooth muscle cells in the vascular wall and increased collagen synthesis by fibroblasts. This process is thought to lead to scar formation; excess interstitial fibrosis, tubular atrophy and vascular intimal thickening represent the stereotypic histopathological picture seen in end-stage renal diseases.
Whether the initial injury in the graft occurs in the vascular wall, resulting in proliferation of the vascular wall and narrowing of the lumen, followed by tubulointerstitial lesions due to a growth factor response and partly ischemia (Häyry et al., 1993), or whether the initial event is tubular cell injury, resulting in a growth factor response, tubular atrophy, interstitial fibrosis, and finally vascular narrowing, is still under discussion (Paul et al., 1999). Nankivell et al. (2003) supported the hypothesis that interstitial fibrosis and tubular atrophy were the first manifestations of chronic allograft nephropathy and preceded vasculopathic changes. Most of the histopathological changes of chronic allograft nephropathy are also seen in the aging kidney. This fact, together with knowledge of the limited cell cycle capacity, has led to the theory that cellular or tissue senescence might play a role in the pathogenesis of chronic allograft nephropathy (Halloran et al., 1999).
Continuous injury to an aging graft may result in exhaustion of the replicative cells and inability to repair injury and remodel the tissue in an adequate way. This is thought to lead to the histopathological lesions seen in chronic allograft nephropathy.
5. Diagnosis of chronic allograft nephropathy
Clinically, chronic allograft nephropathy is characterized by a slow but variable loss of function, starting 3 months after implantation, often in combination with proteinuria, generally in the non-nephrotic range, and hypertension (Paul et al., 1999).
The progressive decline in renal function measured by increasing serum creatinine, or the development of overt proteinuria, is often the first indication alerting the clinician to the presence of chronic allograft nephropathy. Evaluation of large registry data, however, has shown that serum creatinine has limited predictive value for subsequent graft loss (Kaplan et al., 2003). A single reference range for serum creatinine can be misleading and often underestimates the deterioration of renal function, especially at glomerular filtration rates between 30 and 70 mL/min (Levey et al., 1999). If clinicians wait until an individual patient's creatinine starts rising, a considerable amount of damage will already have been done and it may be too late for successful intervention (Chapman et al., 2005).
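The limited value of a single creatinine measurement is easier to appreciate with an estimating equation. The sketch below uses the abbreviated four-variable MDRD formula from the Levey et al. (1999) study cited above; the function name and the worked example are illustrative, and in practice a validated clinical calculator should be used.

```python
def egfr_mdrd(creatinine_mg_dl: float, age_years: float,
              female: bool = False, black: bool = False) -> float:
    """Abbreviated 4-variable MDRD estimate of GFR in ml/min/1.73 m^2."""
    gfr = 186.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742  # correction factor reported for women
    if black:
        gfr *= 1.210  # correction factor reported for black patients
    return gfr

# A "normal-looking" creatinine of 1.4 mg/dl in a 60-year-old man already
# corresponds to an eGFR of roughly 55 ml/min/1.73 m^2, i.e. within the
# 30-70 ml/min range where creatinine is least sensitive to deterioration.
```

This illustrates why a graft can lose substantial function while serum creatinine remains within a laboratory's reference range.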
6. Risk factors of chronic allograft nephropathy
6.1. Immune-dependent factors
6.1.1. HLA mismatches
HLA antigens present in the donor but absent in the recipient are counted as HLA mismatches. HLA-A, -B and -DR mismatches have been associated with poor graft survival. Although organ allocation relies on several non-immunological and logistic factors, a close HLA match is desirable. It has been clearly demonstrated that the greater the number of mismatches, the poorer the graft survival at 5 years of follow-up (Opelz et al., 1999). The importance of HLA mismatch in the era of modern immunosuppression (i.e. tacrolimus, mycophenolate mofetil, induction therapies) was evaluated in the largest European study, which compared the results of two eras, 1985–1994 and 1995–2004. A total of 135,970 cadaver kidney transplants were followed up for 5 years. Although survival rates improved over the years, HLA mismatches still had a clear impact. A multivariate regression analysis of factors contributing to graft survival revealed that the impact of HLA mismatches on graft survival was equally strong in the two decades compared (Opelz & Döhler, 2007).
6.1.2. Alloantibodies

The main routes of sensitization are blood transfusion, pregnancy and transplantation.
Alloantibodies were first implicated in chronic rejection of human allografts (Russell, 1970), with chronic allograft arteriopathy occurring only in patients who developed de novo antidonor human leukocyte antigen (HLA) antibodies. Terasaki et al. (2007) detected an association of circulating HLA antibodies with an increased risk of long-term graft loss. Halloran et al. (1990) showed that acute renal allograft rejection in patients with donor-specific anti-class I HLA antibodies had distinct pathological features. A key advance was the demonstration of the complement fragment C4d in peritubular capillaries (PTC) in patients with acute rejection (Feucht et al., 1991). This was tied to circulating donor-specific antibodies and graft pathology by Collins et al. (1999), and confirmed by many others, leading to the introduction of the diagnosis “acute antibody-mediated rejection” into the Banff classification. Mauiyyedi et al. (2002) then connected the dots and discovered that glomerulopathy or arteriopathy was linked to C4d deposition in peritubular capillaries and donor-specific alloantibody. For this condition, the new term “chronic humoral rejection” was proposed. Several groups have confirmed these findings, and it is clear that about 50 % of patients with transplant glomerulopathy or arteriopathy have C4d deposition in peritubular capillaries (Sis et al., 2007, Colvin et al., 2007).
6.1.3. Chronic humoral rejection
| Diagnostic features of chronic humoral rejection |
|---|
| Duplication of glomerular basement membrane (cg1–3) |
| Multilaminated PTC basement membrane |
| Arterial intimal fibrosis without elastosis |
| Interstitial fibrosis and tubular atrophy, with or without PTC loss |
| Diffuse C4d positivity along PTC |
| Presence of donor-specific antibody |
Recent studies have indicated that chronic humoral rejection is common in unselected indication biopsies, found in one 10-year series in 9.3 % of 771 cases (Farris, 2009). The onset of chronic humoral rejection is typically late, after the first year, and the prevalence rises progressively to about 20 % in the fifth year. Proteinuria is common but not invariable (~50 % have > 1 g per day of proteinuria). Renal function is often abnormal, but can remain stable for considerable periods (years) (Kieran et al., 2009). The strongest risk factor identified to date is the existence of pretransplant donor-specific antibodies, but most cases occur in patients without a history of presensitization or even an episode of acute humoral rejection. Serologically, the most interesting aspect of chronic humoral rejection is the strong association with class II DSA (Gloor et al., 2007), which is not a characteristic of acute humoral rejection.
The major features of chronic humoral rejection are duplication of the glomerular basement membrane (“transplant glomerulopathy”), multilamination of peritubular capillary basement membranes, mononuclear cells in glomeruli and peritubular capillaries, and loss of normal glomerular capillary endothelial fenestrations (Colvin et al., 2006). In addition to multilamination of basement membranes, loss of peritubular capillaries has been demonstrated in patients with chronic graft injury, and this correlates inversely with serum creatinine (Ishii et al., 2005). It is possible that the loss of capillaries is related to endothelial-mesenchymal transition (Zeisberg et al., 2007).
6.1.4. Acute rejection
Historically, acute rejection episodes that are severe, recurrent, or that occur late have been associated with inferior outcomes. The strong correlation between late acute rejection and chronic allograft rejection, as well as late graft loss, has been reported consistently (Nankivell et al., 2001, Sijpkens et al., 2003). The risk of graft loss differs between acute rejections that are functionally reversible on treatment and those with persistent functional deterioration (Meier-Kriesche et al., 2004). At the same time, however, higher-risk donors and recipients have increasingly been transplanted in recent years. An impaired capacity for tissue restoration in these kidneys, as has been described for acute cellular (Banff grade I) rejection in kidneys from older donors, may at least partly explain the observed lack of functional reversibility (de Fijter et al., 2001).
6.1.5. Subclinical rejection
The term subclinical rejection refers to allografts with stable renal function that display an interstitial infiltrate and tubulitis.
There is increasing evidence that subclinical rejection may represent an important factor in predicting early graft loss (Nankivell et al., 2004b, Veronese et al., 2004). The prevalence of subclinical rejection is maximal during the initial 3 months, progressively declines over the first year, but may persist in a small number of patients after the first year (Nankivell et al., 2004b). According to the Banff criteria, approximately 1 out of 3 subclinical rejection episodes is classified as interstitial acute rejection grade 1, and 2 out of 3 are classified as borderline changes (Nankivell et al., 2004d).
The largest observational study to date, of 961 renal transplant biopsies performed on 119 consecutive simultaneous pancreas-kidney transplant recipients, reported a subclinical rejection prevalence of 60.8, 45.7, and 25.8 % at 1 month, 3 months, and 1 year, respectively (Nankivell et al., 2003). In renal recipients, the prevalence of subclinical rejection in 3-month protocol biopsies has been observed at between 23 and 43 %, but it is difficult to compare these results directly because of differences in histological interpretation, histocompatibility, patient selection, and immunosuppressive regimens (Nankivell et al., 2001, Rush et al., 1994, 1995, Shapiro et al., 2001, Shishido et al., 2003). A prevalence of 18 % at 3 months was reported in deceased-donor kidney transplant recipients on tacrolimus, azathioprine, and prednisone (Jurewicz, 1999), whereas in patients on tacrolimus, mycophenolate mofetil, and prednisone, the prevalence was only 2.6 % (Gloor et al., 2002). A recent randomized, multicenter study in renal transplant patients considered to have a low risk of acute rejection, receiving tacrolimus, mycophenolate mofetil, and prednisone, reported a low overall prevalence of subclinical rejection (4.6 %) (Rush, 2007). Treatment of subclinical rejection in the biopsy arm of this study with high-dose steroids, however, did not result in beneficial outcomes, at least in the short term.
6.2. Non Immune-dependent factors
6.2.1. Donor age and cellular senescence
It is generally accepted (Halloran et al., 1999, Basar et al., 1999, Oppenheimer et al., 2004) that increased donor age is associated with reduced actuarial graft survival, increased rate of delayed graft function, and earlier onset of chronic allograft nephropathy.
In vitro studies have shown that cellular senescence can be categorized as replicative, stress-induced, or aberrant signaling-induced senescence (Itahana et al., 2004).
Replicative senescence (the so-called “mitotic clock”) is associated with shortened telomeres in humans (Harley et al., 1990). After a variable number of mitotic divisions, cells cannot pass from the G1 to the S phase of their cycle as a consequence of DNA loss corresponding to telomere shortening.
A recent study (Westhoff et al., 2010) demonstrated in a telomere-deficient mouse model that telomere shortening was associated with reduced replicative capacity and a compromised ability of tubular cells to respond adequately to acute kidney injury. Although intriguing, the precise role of replicative senescence in the onset of chronic allograft nephropathy remains unclear.
Stress- and aberrant signaling-induced senescence is secondary to extrinsic stress and characterized by increased expression of the cyclin-dependent kinase inhibitor p16, a cell cycle regulator, and activation of the p53 pathway (Zindy et al., 1997). It has been shown that markers of the senescent cellular phenotype, such as p16INK4a, cyclooxygenase 1, or HSPA5, are overexpressed in allografts with chronic allograft nephropathy, suggesting the implication of some features of aging in graft impairment (Chkhotua et al., 2003, Melk et al., 2004).
6.2.2. Brain death
Brain death, together with other nonspecific injuries to organs at the time of transplantation, is considered the main reason for the superior clinical survival of living-donor transplants, even when these are disadvantaged in terms of human leukocyte antigen compatibility (Gasser et al., 2000).
Kidneys from brain-dead donors are thought to mount an augmented inflammatory and dendritic cell response compared with kidneys from living donors (Timsit et al., 2010).
6.2.3. Ischemia/reperfusion injury

Consequences of ischemia/reperfusion injury correlate with chronic allograft nephropathy in clinical and experimental studies. A retrospective study based on the United Network for Organ Sharing database showed that prolonged cold ischemia was a significant risk factor for late allograft loss (Salahudeen et al., 2004). Increased graft immunogenicity, accelerated host immune responses, and fibrotic changes due to increased matrix synthesis are the mechanisms currently invoked to link ischemia/reperfusion injury and chronic allograft nephropathy.
Ischemia/reperfusion injury leads to increased expression of both class I and class II major histocompatibility complex molecules in the allograft (Shoskes et al., 1996), as well as to accelerated dendritic cell differentiation and increased rates of acute rejection through either direct or indirect allorecognition (Ke et al., 2005). In addition, ischemia/reperfusion injury upregulates the expression of adhesion molecules and leukocyte recruitment in the graft, leading to a sustained host immune response (Farhood et al., 1995, Osborn et al., 1990).
6.2.4. Calcineurin Inhibitors nephrotoxicity
Calcineurin inhibitors are pleomorphic nephrotoxins affecting every histological compartment of the transplanted kidney. The classical calcineurin inhibitor lesions (Benigni et al., 1999, Mihatsch et al., 1988, Davies et al., 2000) include de novo or increasing arteriolar hyalinosis and striped fibrosis, supported by microcalcification unrelated to other causes such as tubular necrosis and hyperparathyroidism (Gwinner et al., 2005). Cyclosporine A and tacrolimus nephrotoxicity are increasingly common late after transplantation (Nankivell et al., 2003, Solez et al., 1998). Arteriolar hyalinosis is the most reliable diagnostic marker of calcineurin inhibitor nephrotoxicity (Solez et al., 1993). The diagnosis can be confirmed by excluding donor hyalinosis detectable on the implantation biopsy, diabetes, and hypertensive nephrosclerosis (distinguished by subendothelial hyalinosis, elastic lamina reduplication and medial hyperplasia in larger arteries) (Mihatsch et al., 1988, 1995). Severe arteriolar hyalinosis causes vascular narrowing and downstream ischemic glomerulosclerosis (Nankivell et al., 2004c).
6.2.5. Recurrent glomerulonephritis
Recurrent glomerulonephritis is diagnosed by exclusion of donor-transmitted disease and de novo glomerulonephritis, and currently accounts for 8.4% of allograft loss by 10 years in recipients with renal failure from glomerulonephritis (Briganti et al., 2002). The clinical course and severity of recurrent glomerular disease often recapitulate the patient's native disease (Chadban et al., 2001), except for vasculitis or lupus nephritis, which are usually controlled by transplant immunosuppression. Focal segmental glomerulosclerosis (20–50% recurrence) and dense deposit disease (50–90% recurrence) carry the worst prognosis, compared with membranous glomerulonephritis (29–50% recurrence), membranoproliferative glomerulonephritis type 1 (20–33% recurrence) or IgA nephropathy, which recurs in up to 58% of cases but with less clinical impact (Chadban et al., 2001). Diabetic glomerulopathy can also recur in allografts, usually after many years.
6.2.6. Infections

Infectious diseases affect graft and patient survival and contribute to the development of chronic allograft nephropathy. This group of diseases includes nephropathy due to polyoma (BK) virus infection, direct and indirect effects of cytomegalovirus (CMV) infection, and bacterial infections. Rarer infectious causes of chronic allograft nephropathy include cryoglobulinemia associated with hepatitis C, Epstein-Barr virus (EBV)-associated posttransplant lymphoproliferative disease (PTLD), and direct cytotoxicity from adenoviral infection or parvovirus B19.
6.2.6.1. BK virus
BK virus is an endemic polyoma virus of high prevalence, low morbidity, long latency and asymptomatic reactivation in immunocompetent individuals (Hirsch et al., 2003, Mannon et al., 2004). Prospective screening studies suggest that 50 % or more of patients develop BK viruria after transplantation, with a peak incidence in the first 3-12 months (Nickeleit et al., 2000b, White et al., 2008, Koukoulaki et al., 2009). However, only 1-5 % of viruric patients go on to develop nephropathy (Smith et al., 2007, Kim et al., 2005). When BK virus-associated nephropathy (BKVAN) occurs, reported rates of graft loss have ranged from 10 to 80 % (Nickeleit et al., 2000a, Weiss et al., 2008). Although early reports suggested a link with tacrolimus- and mycophenolate-based regimens, it seems likely that the risk of BKVAN relates to the total burden of immunosuppression rather than to any specific drug (Rahamimov et al., 2003, Nickeleit et al., 2000a).
Some authors have reported that injury to the renal allograft may have an important role in the pathogenesis of BKVAN (Drachenberg et al., 2005). The association found between human leukocyte antigen mismatching and BKVAN supports this hypothesis (Awadalla et al., 2004).
6.2.6.2. Cytomegalovirus

The extent of the contribution of cytomegalovirus infection to chronic allograft nephropathy remains controversial. Cytomegalovirus infection has been shown to upregulate class I and class II major histocompatibility complex molecules on T lymphocytes and renal parenchymal cells. This effect is likely to be a cytokine-mediated phenomenon involving upregulation of proinflammatory cytokines such as interferon-γ. The immediate early gene product of cytomegalovirus shares sequence homology with human leukocyte antigen DR (Beck et al., 1988). Cytomegalovirus infection also blocks p53 (an important cell cycle regulatory protein), which may inhibit apoptosis and promote graft vasculopathy (Garcia et al., 1997). Other potential indirect effects of cytomegalovirus include upregulation of antiendothelial antibodies contributing to graft vascular injury (Toyoda et al., 1997) and upregulation of adhesion molecules (Helantera et al., 2005), leading to enhanced adhesion of host leukocytes to graft endothelium and thereby promoting allograft injury and/or rejection. These indirect effects of cytomegalovirus are thought to affect graft survival principally through an increased risk of acute rejection.
Although the combination of cytomegalovirus infection and acute rejection has an adverse effect on graft survival, whether cytomegalovirus infection contributes to graft failure in the absence of acute rejection remains unclear. There is some evidence, mainly from animal models, suggesting a direct role for cytomegalovirus in mediating chronic allograft nephropathy.
Cytomegalovirus infection has been shown to have both proinflammatory and profibrotic effects. In rodents, cytomegalovirus upregulates transforming growth factor-β, platelet-derived growth factor (which stimulates smooth muscle proliferation and fibroblast activity), and connective tissue growth factor (Inkinen et al., 2001, 2003, Helantera et al., 2006). Clinical studies have demonstrated similar effects in man (Helantera et al., 2005). Cytomegalovirus infection is also known to induce macrophage scavenger receptors and phenotypic changes in vascular smooth muscle cells, which have been shown to contribute to vasculopathy in cardiac allografts (Carlquist et al., 2004).
6.2.6.3. Urinary tract infection (UTI)
Urinary tract infection is a common complication following renal transplantation (Prat et al., 1985, Abbott et al., 2001). Graft pyelonephritis is well recognized to cause graft dysfunction, but the longer-term impact is less clear (Pelle et al., 2007). A review of US registry data suggested that late urinary tract infections are not benign and may be associated with an increased risk of death and graft loss (Abbott et al., 2004).
7. Management of chronic allograft nephropathy
The multiple pathophysiological causes of injury suggest that no single action will suffice; instead, it is more likely that several therapies and approaches will be needed to abrogate specific etiological insults. These may include multiple specific antagonists targeted at drivers of fibrogenesis, as well as indirect therapy aimed at control of hypertension, hyperlipidemia, infections, etc.
7.1. Optimal immunosuppression
Prevention of chronic allograft nephropathy is presently one of the main goals in renal transplantation for the improvement of kidney graft survival. Refinements in immunosuppressive protocols, both controlling alloimmune responses and avoiding calcineurin inhibitor nephrotoxicity, are mandatory.
Ideally, optimal immunosuppression to prevent chronic allograft nephropathy should provide low rates of acute rejection, low rates of subclinical rejection and should be based on non-nephrotoxic agents able to preserve renal function.
Because calcineurin inhibitor nephrotoxicity, which seems common in the long term, has been considered one of the main contributors to chronic allograft nephropathy, its prevention has been attempted mainly by reducing or avoiding the use of calcineurin inhibitors.
Unfortunately, calcineurin inhibitor-sparing strategies have been associated in several trials with an increased rate of acute rejection, without significant improvement in renal function and with no impact on graft survival.
7.1.1. Mycophenolate mofetil (MMF)-based strategies
Avoidance of calcineurin inhibitors, relying on the immunosuppressive potency of mycophenolate mofetil, has yielded conflicting results. The first study, conducted 10 years ago, combined daclizumab, mycophenolate mofetil and steroids (Vincenti et al., 2001). The absence of calcineurin inhibitors resulted in an incidence of acute rejection of 48 % during the first 6 months after transplantation, and at the end of the first year more than 60 % of patients were on calcineurin inhibitors. The CAESAR study compared a triple-therapy arm with conventional doses of cyclosporine A, mycophenolate mofetil and steroids against daclizumab-based arms with low-dose cyclosporine A (trough level 50-100 ng/ml) and steroids, with cyclosporine A discontinued 6 months after transplantation in one arm (Ekberg et al., 2007). Renal function was not significantly different between the groups, with a trend toward higher creatinine clearance in the low-dose cyclosporine A group. The cyclosporine A withdrawal group did not have better renal function, and there was a rebound of acute rejection after cyclosporine A withdrawal that could have counterbalanced the renal benefits of cyclosporine elimination. Other studies with planned conversion from calcineurin inhibitors to antimetabolites in de novo renal transplant recipients also resulted in an increase in acute rejection and biopsy-proven chronic rejection (Smak Gregoor et al., 2000, 2002). In established stable patients treated with mycophenolate mofetil, discontinuation of cyclosporine A was followed by a significant improvement in renal function at 1 year after this therapeutic change (Abramowicz et al., 2005), but with more patients losing their graft because of immune-mediated rejection at 5 years.
In the Symphony study (Ekberg et al., 2007), the control arm with standard-dose cyclosporine A and the low-dose cyclosporine A group received the same drug doses as in the CAESAR study. In addition, two further groups with reduced doses of tacrolimus (target levels 3-7 ng/ml) or low-dose sirolimus (target levels 4-8 ng/ml) were enrolled in this large study of more than 1600 patients. The mean calculated GFR was higher in patients receiving low-dose tacrolimus (65.4 ml/min) than in the other three groups. The rate of biopsy-proven acute rejection (BPAR) was lower in patients receiving low-dose tacrolimus (12.3 %) than in those receiving standard-dose cyclosporine A (25.8 %), low-dose cyclosporine A (24.0 %), or low-dose sirolimus (37.2 %). Allograft survival differed significantly between the four groups (p=0.02) and was highest in the low-dose tacrolimus group (94.2 %), followed by the low-dose cyclosporine A group (93.1 %), the standard-dose cyclosporine A group (89.3 %), and the low-dose sirolimus group (89.3 %). The 3-year data of this study were published recently (Ekberg et al., 2009) and showed that the differences between treatment groups were often no longer significant and that renal function remained stable during follow-up, suggesting that low-dose tacrolimus with mycophenolate mofetil may avoid the negative effects on renal function commonly reported for standard calcineurin inhibitor regimens, despite potential patient selection bias and uncontrolled treatment modifications.
7.1.2. Mammalian target of rapamycin (mTOR) inhibitor-based strategies
The conversion from calcineurin inhibitors to mTOR inhibitors has met with variable success (Flechner et al., 2008). Conversion in cases with significantly deteriorated renal function and proteinuria does not help to stabilize graft function. Indeed, these concepts were consolidated by a large prospective study of conversion from calcineurin inhibitors to sirolimus in more than 800 maintenance patients (Schena et al., 2009). In this trial, the efficacy and safety of converting maintenance renal transplant recipients from calcineurin inhibitors to sirolimus were evaluated. The primary end points were calculated GFR (stratified at baseline: 20-40 vs > 40 ml/min) and the cumulative rates of BPAR, graft loss, or death at 12 months. Enrollment in the 20-40 ml/min stratum was halted prematurely because of a higher incidence of safety end points in the sirolimus conversion arm. The intent-to-treat analyses at 12 and 24 months showed no significant treatment difference in GFR in patients in the baseline GFR > 40 ml/min stratum. On-therapy analysis of this cohort showed significantly higher GFR at 12 and 24 months after sirolimus conversion. Rates of BPAR, graft survival, and patient survival were similar between groups. Median urinary protein-to-creatinine ratios (UPr/Cr) were similar at baseline but significantly increased after sirolimus conversion.
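The GFR stratification used in such trials can be illustrated with a short sketch. This is purely illustrative: the trial cited may have used a different estimating equation; the formula below is the widely known four-variable MDRD equation, and the 20-40 vs > 40 ml/min thresholds mirror the stratification described above.

```python
def egfr_mdrd(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """Estimated GFR in ml/min/1.73 m^2 using the four-variable MDRD equation
    (re-expressed form: 175 x SCr^-1.154 x age^-0.203 x 0.742 if female
    x 1.212 if black; serum creatinine in mg/dl)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr


def baseline_stratum(egfr: float) -> str:
    """Assign the baseline stratum used in the conversion trial (illustrative)."""
    if egfr > 40:
        return "> 40 ml/min"
    if egfr >= 20:
        return "20-40 ml/min"
    return "below enrollment range"


# Example: a 50-year-old non-black male with serum creatinine 1.0 mg/dl
gfr = egfr_mdrd(1.0, 50, female=False, black=False)  # roughly 79 ml/min
print(baseline_stratum(gfr))
```

A patient's estimated GFR thus determines which stratum (and which safety profile observed in that stratum) applies when considering conversion.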
A recently published interim analysis has shown that switching immunosuppressive therapy from cyclosporine A-mycophenolate sodium (MPS) to everolimus-steroids at 6 months after renal transplantation is effective in preventing rejection while ameliorating renal function (Bemelman et al., 2009).
It’s important to note that the immense majority of studies attempting de
All the recent data underscore the importance of defining the right target levels for mTOR inhibitors and of adequately managing the overlapping toxicities when these drugs are combined with antimetabolites.
7.1.3. Belatacept-based strategies
Belatacept is a second-generation CTLA4-Ig costimulation blocker that binds CD80 and CD86 with high avidity and prevents T-cell activation (Larsen et al. 2005). This drug may be very interesting for the development of safe calcineurin inhibitor-free regimens that preserve renal function. In a phase II multicenter trial (Vincenti et al., 2005), therapy with cyclosporine A, MMF, and steroids plus basiliximab was compared with belatacept in two regimens, more intensive (MI) and less intensive (LI) depending on the dose and frequency of belatacept administration, in association with mycophenolate mofetil, steroids and basiliximab. Belatacept was used as induction treatment from the first day of transplantation and subsequently given as a maintenance immunosuppressant. The belatacept-treated arms had a similarly low incidence of acute rejection (18 and 19 %) at 6 months compared with the cyclosporine A arm. This trial demonstrated, particularly at 12 months, a lower incidence of chronic allograft nephropathy in protocol biopsies, better renal function in terms of measured GFR, and a more favorable cardiovascular risk profile in the costimulation blockade arms in comparison with the standard cyclosporine A and mycophenolate mofetil combination.
This trial was followed by two phase III pivotal trials (Durrbach et al., 2010, Vincenti et al., 2010) in patients receiving kidneys from conventional donors (BENEFIT) or extended criteria donors (BENEFIT-EXT). The belatacept dose regimens, MI and LI, were similar to those in the phase II trial. The data from these 2 studies indicate that the benefits on renal function and structure are even more evident in optimal renal allografts, and that induction and maintenance immunosuppression based on belatacept may prevent long-term graft deterioration. The incidence of acute rejection was similar across the groups and < 20 %, except for the MI arm in the BENEFIT study, suggesting that the LI regimen provides the best risk/benefit balance. Safety data from these studies have shown a higher incidence of post-transplant lymphoproliferative disease in patients treated with belatacept, which was associated with the use of polyclonal antibodies, concomitant cytomegalovirus infection, and recipient seronegativity for Epstein-Barr virus.
7.2. Non-immune interventions
The discrepancy between significant improvements in the prevention of acute rejection and the failure to ameliorate long-term outcomes suggests that non-immunological injuries may play an important role in the occurrence of chronic allograft nephropathy (Remuzzi et al., 1998). Functional and structural changes of chronic renal allograft failure share similarities with those observed in other forms of chronic progressive kidney disease, in which the decline of functioning nephron mass has been considered the key event. A single transplanted kidney supplies only half the number of nephrons normally available to a healthy subject, which implies an increased workload per nephron to maintain body homeostasis (Brenner 1985). The resulting glomerular hypertension and hyperfiltration in surviving nephrons in turn lead to graft injury (Azuma et al., 1997). Other insults also affect the transplanted kidney, such as surgical and ischemic injury, acute rejection, and the chronic toxicity of calcineurin inhibitors (Naesens et al., 2009) and mTOR inhibitors (Tomlanovich et al., 2007).
Strategies developed to preserve renal function in patients with chronic kidney disease are mandatory to improve renal graft outcome in the long term.
7.2.1. Control of blood pressure
Hypertension is frequent among renal transplant recipients and can be observed as early as the first week after transplantation, mainly when doses of calcineurin inhibitors and steroids are high (Tedla et al. 2007). The prevalence of hypertension has risen from 40-60 % in the pre-cyclosporine era to 80-90 % after the introduction of cyclosporine (Schwenger et al., 2001, Curtis et al., 1992).
There are no randomized, controlled trials comparing different antihypertensive drugs or optimal BP goals in transplant recipients. On the basis of large clinical trials in non-transplant patients with or without kidney disease, the Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines recommend BP goals of 125/75 mmHg for transplant recipients with proteinuria and 130/85 mmHg in the absence of proteinuria (Bakris et al., 2000).
The first step of treatment should be based on non-pharmacologic interventions such as weight reduction, exercise, smoking cessation, and dietary sodium restriction. Reaching target levels of BP and proteinuria frequently requires simultaneous initiation of non-pharmacologic and pharmacologic treatment (Svetkey et al., 2005).
The choice of antihypertensive class depends on the individual patient. Calcium channel blockers can be used as first line therapy, especially in the early period of the transplantation because they are effective in counteracting the vasoconstrictive effect of high-dose calcineurin inhibitors (Harper et al., 1992).
In contrast, in proteinuric chronic kidney disease patients, dihydropyridine calcium channel blockers have been associated with an increased risk of renal disease progression and death (Wright Jr et al., 2002, Agodoa et al., 2001).
The use of angiotensin-converting enzyme inhibitors and angiotensin II receptor blockers in renal transplant recipients is now more frequent (Ram et al., 2008), owing to the cardioprotective and renoprotective effects of renin-angiotensin system (RAS) blockade in the general population (Braunwald et al., 2004) and in patients with chronic kidney disease (Ruggenenti et al., 1997).
Particular caution should be exercised when using these agents in kidney transplant patients, as in the presence of graft artery stenosis the use of RAS inhibitors may dramatically increase the risk of kidney function impairment and hyperkalemia (Salzberg et al., 2007).
However, the multifactorial nature of hypertension in transplanted patients often requires multiple drugs, including α- or β-blockers, centrally acting drugs, and diuretics (Ojo et al., 2006).
The prevalence of proteinuria in kidney transplant patients ranges between 10 and 25% (Kasiske et al., 2000). The three most common causes of persistent proteinuria after kidney transplantation are chronic graft injury, recurrent glomerulonephritis, and drug-related nephrotoxicity (Bear et al., 1988). Calcineurin inhibitors have been considered the immunosuppressive drugs with the highest risk of nephrotoxicity. Recently, increasing evidence has suggested potential nephrotoxicity of mTOR inhibitors as well. Indeed, they have been associated with an increased risk of proteinuria, possibly resulting from direct toxicity on glomerular and tubular epithelial cells (Tomlanovich et al., 2007).
Proteinuria also represents a strong independent risk factor for graft loss (Bear et al., 1988). It is a marker of progressive renal injury and contributes to the progression of kidney dysfunction and fibrosis through aberrant proximal tubule protein uptake and direct tubular cell toxicity (Remuzzi et al., 2006).
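The spot-urine protein-to-creatinine ratio (UPr/Cr) used as an end point in the conversion trial above is a simple quotient; because an average adult excretes roughly 1 g of creatinine per day, the ratio in g/g approximates 24-hour protein excretion in g/day. A minimal sketch with illustrative numbers:

```python
def upr_cr_ratio(urine_protein_mg_dl: float, urine_creatinine_mg_dl: float) -> float:
    """Spot-urine protein-to-creatinine ratio in g protein per g creatinine.

    Since daily creatinine excretion is roughly 1 g in an average adult,
    this ratio approximates 24-hour protein excretion in g/day.
    """
    return urine_protein_mg_dl / urine_creatinine_mg_dl


# Example: 30 mg/dl urinary protein with 100 mg/dl urinary creatinine
ratio = upr_cr_ratio(30.0, 100.0)  # 0.3 g/g, i.e. roughly 0.3 g/day
print(ratio)
```

A rising ratio on serial measurements is what flags the post-conversion increase in proteinuria reported in the sirolimus trial.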
RAS inhibitors have been shown to be effective in reducing proteinuria and the progression of renal disease in experimental models of renal mass reduction and in patients with chronic nephropathies (Remuzzi et al., 2006). Interestingly, RAS inhibitor therapy may exert renoprotection independently of its effect on proteinuria. Indeed, AT1 receptors mediate inflammation and are involved in the profibrotic action of potent cytokines (Ruiz-Ortega et al., 2006). Angiotensin II is also synthesized by proximal renal tubule cells and exhibits powerful hemodynamic and non-hemodynamic effects, all implicated in the progression of chronic kidney disease (Brewster et al., 2004).
A recent analysis of 2031 patients, who received their first renal allograft at the Medical University of Vienna between 1990 and 2003, showed that RAS inhibitor therapy was associated with a significantly higher patient (74 versus 53%, P=0.001) and graft (59 versus 43%, P=0.002) survival at 10 years after transplant as compared with non-RAS inhibitor therapy (Heinze et al., 2006). This is at variance with a previous systematic review of 21 studies consisting of 1549 patients on the effect of RAS inhibitor therapy after kidney transplantation that failed to show any beneficial effect on patient and graft survival over a median follow-up of 27 months (Hiremath et al., 2007).
Hyperlipidemia is a frequent finding in kidney transplant recipients, affecting 60% of patients (Kasiske et al., 2000). Its pathogenesis is multifactorial and includes posttransplantation weight gain and the use of immunosuppressive drugs, such as mTOR inhibitors and steroids (Tsimihodimos et al., 2008). Particularly, higher triglyceride levels have been associated with poorer graft outcomes (Del Castillo et al., 2004).
Therapeutic agents to control low-density lipoprotein cholesterol and triglycerides include statins as well as fenofibrates. Intriguingly, statins have been implicated as nephroprotective agents beyond their lipid-lowering ability because of their potential to regulate fibrogenic mechanisms, as well as their impact on endothelial dysfunction (Perico et al., 2008). The Assessment of Lescol in Renal Transplantation (ALERT) trial randomized 2102 renal transplant recipients with total cholesterol 156-351 mg/dl to fluvastatin or placebo over a 5-year follow-up period. Treatment was safe and effective in lowering total and low-density lipoprotein cholesterol (Holdaas et al., 2003). Moreover, although the trial had insufficient power to detect a significant reduction in the primary end point of cardiac death, non-fatal myocardial infarction, or coronary intervention procedure, there was a significant 35% reduction in the secondary end point of cardiac death and non-fatal myocardial infarction with fluvastatin. The treatment, however, had no effect on graft survival or function. Yet, a recent randomized controlled study in 89 kidney transplant recipients showed a beneficial effect of fluvastatin (80 mg/day) over placebo on the incidence of transplant vasculopathy (7 versus 33%; p=0.02) over a 6-month follow-up (Seron et al., 2008).
Notably, experimental and clinical evidence suggests that statins may have an additive beneficial effect with RAS inhibitors on kidney graft outcomes (Perico et al., 2008).
8. Immune monitoring and biomarkers to predict chronic allograft nephropathy
Chronic allograft nephropathy continues to plague kidney allografts, in spite of potent immunosuppressive therapies. Both immune-dependent and -independent factors continue to contribute to failure. A number of promising observations made in human kidney recipients suggest unique protein and genetic signatures that may identify biomarkers of injury, as well as potential targets of therapy. Technical advances such as cDNA microarrays, proteomics and metabonomics will multiply the number of potential etiologies and mechanisms of chronic allograft nephropathy. Discrimination between clinically important factors and statistically significant factors yielding small effects will be essential. Transcriptional changes may be detectable prior to histologically apparent fibrosis, and discrimination of inflammatory infiltrates according to the constellation of expressed genes promises both to improve diagnosis and to optimize treatment strategies (Mannon et al., 2010).
Chronic kidney allograft abnormalities represent the effects of cumulative damage from a series of time-dependent stressors, combined with an allograft healing response and modified by immunosuppression. Early tubulointerstitial damage results from ischemia-reperfusion injury, acute tubular necrosis, acute and subclinical rejection and calcineurin inhibitor nephrotoxicity, superimposed upon donor abnormalities. Later, microvascular and glomerular injury frequently increases as a result of calcineurin inhibitor nephrotoxicity, but also from hypertension, immune-mediated vascular hyperplasia, transplant glomerulopathy and occasionally from recurrent or de novo glomerulonephritis. Additional mechanisms of chronic allograft nephropathy include internal structural disruption of the kidney, cortical ischemia, inability to resolve chronic inflammation, senescence, cytokine excess, epithelial-to-mesenchymal transition-induced fibrosis, hypertension and other stressors. Early detection (Fig 4) appears to be a critical issue for this disorder. The role of protocol biopsy and the management of subclinical rejection are under study. Treatment options are nonspecific and limited. Various immunosuppressive strategies avoiding or limiting calcineurin inhibitors, as well as biologics and anti-proliferatives, are under study. Despite marked improvements in short-term graft survival and reductions in acute rejection rates, long-term graft function remains a critical issue. Current immunosuppressive regimens do not adequately address the causes of long-term allograft dysfunction and loss; calcineurin inhibitor-sparing regimens are urgently required.