Dr. Sean Bagshaw, Clinician Scientist and Associate Professor in the Division of Critical Care Medicine at the University of Alberta, Canada, and holder of a Canada Research Chair in Critical Care Nephrology, has played an active and influential role in clinical, epidemiological and translational research on acute kidney injury (AKI), and was this year elected as a Scientific Advisor for ISICEM. With AKI a topic of lively discussion and rapid development of late, we asked Dr. Bagshaw to share his views on recent research and to offer insight into which organ interactions he believes pose the greatest challenge to physicians.
Following Your Extensive Research on AKI, What Guidance Can You Provide for Diagnosing the Condition, Predicting Worsening Injury, and Assessing the Need for Renal Replacement Therapy?
Acute kidney injury (AKI), occurring in the context of critical
illness, continues to take a heavy toll on patients, presenting a high risk of
death and long-term morbidity in survivors. It remains a common and challenging problem for clinicians: it is often iatrogenic, has virtually no recognised interventions to modify or improve outcome once well established, and clearly burdens our health systems with added expenditure.
One of the most important initiatives in AKI research has been to improve the capacity for early recognition and diagnosis. This has repeatedly been set forth as a top priority. While the adoption of established consensus-derived criteria, such as the RIFLE or AKIN criteria based on detecting changes in serum creatinine and urine output, has been a monumental advance in the field, we also recognise that these criteria, which rely on serum creatinine as the driving marker of AKI, are clearly inadequate and may contribute not only to delays in diagnosis but also to missed episodes of important declines in glomerular filtration in our critically ill patients.
Recently, a number of studies, including those from Macedo and colleagues and from my coinvestigators and me, both published in 2011, have focused on time-honoured clinical parameters, such as urine output and urine microscopy, to inform not only the diagnosis of AKI but also the prediction of worsening AKI. These studies confirm suspicions that an episode of oliguria (urine output <0.5 ml/kg/hr) is not banal. While short episodes of oliguria (<4 hours) are not sensitive for predicting subsequent overt AKI defined by serum creatinine-based criteria, longer episodes of oliguria are highly specific and carry a greater likelihood of worsening AKI. They also correlate with the risk of initiation of renal replacement therapy (RRT) and death, as Prowle and colleagues found. Importantly, clinicians should recognise that the risk of overt AKI with even short episodes of oliguria, of one to two hours, is probably context-specific, implying that critically ill patients with greater haemodynamic instability, characterised by metrics such as tachycardia, hypotension, elevated central venous pressure and ongoing vasoactive support, are more likely to worsen. Similarly, recent studies, from Perazella and coinvestigators in 2010 and from my own team in 2012, have shown that evaluation of the urine sediment for evidence of cellular debris and casts correlates with AKI severity and can predict worsening AKI.
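Purely as an illustration of the thresholds discussed above, the following sketch (hypothetical code, not drawn from any of the cited studies) flags runs of consecutive oliguric hours from hourly urine-output measurements, using the <0.5 ml/kg/hr threshold; longer runs are, as noted, the more specific signal for worsening AKI.

```python
# Illustrative sketch: detect the longest run of consecutive oliguric hours
# from hourly urine output, using the commonly cited <0.5 ml/kg/hr threshold.
# The thresholds and episode lengths simply mirror the figures quoted above;
# this is not a validated algorithm.

def oliguric_hours(urine_output_ml_per_hr, weight_kg, threshold=0.5):
    """Return the length of the longest run of consecutive oliguric hours."""
    longest = current = 0
    for uo in urine_output_ml_per_hr:
        if uo / weight_kg < threshold:
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Example: a 70 kg patient with a 6-hour record of hourly urine output (ml).
hourly_uo = [60, 30, 25, 20, 28, 55]
run = oliguric_hours(hourly_uo, weight_kg=70)
# Per the discussion above, a run of >=4 hours is more specific for worsening
# AKI than a 1-2 hour episode, whose significance is context-dependent.
print(f"Longest oliguric run: {run} h")
```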
Additional novel methods for the early detection of AKI in critically ill patients include the use of integrated clinical information systems and automated electronic alerting (i.e. the AKI "sniffer"). A recent prospective single-centre before-and-after interventional study in a mixed medical/surgical ICU, by Colpaert and coinvestigators, utilised an AKI sniffer to send automated e-alerts to responsible physicians when a patient had developed AKI, based on the RIFLE criteria. The AKI sniffer was sensitive for the diagnosis of AKI. During the three-month intervention phase, 1,416 e-alerts concerning 616 patients were sent, 92.3% of which were issued for oliguria. Importantly, patients for whom an e-alert for AKI was issued were more likely to receive a faster assessment and an intervention (most commonly a fluid bolus, a diuretic or initiation of a vasopressor) when compared with the pre- and post-alert control phases. This translated into a higher proportion of patients in the e-alert phase whose kidney function returned to baseline within eight hours.
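For readers unfamiliar with how such a "sniffer" works, the following is a minimal, hypothetical sketch of the creatinine arm of the RIFLE criteria driving an automated alert. The function names and alerting hook are assumptions for illustration; the Colpaert study's actual system also screened urine output.

```python
# A minimal sketch of the kind of rule an "AKI sniffer" might apply, using only
# the serum-creatinine arm of the RIFLE criteria (Risk: >=1.5x baseline,
# Injury: >=2x, Failure: >=3x). Hypothetical code, not the study's implementation.

RIFLE_CREATININE_STAGES = [(3.0, "Failure"), (2.0, "Injury"), (1.5, "Risk")]

def rifle_creatinine_stage(baseline_creatinine, current_creatinine):
    """Return the RIFLE class implied by the creatinine ratio, or None."""
    ratio = current_creatinine / baseline_creatinine
    for cutoff, stage in RIFLE_CREATININE_STAGES:
        if ratio >= cutoff:
            return stage
    return None

def screen_patient(patient_id, baseline, current, send_alert):
    """Screen one patient and trigger an e-alert if a RIFLE class is met."""
    stage = rifle_creatinine_stage(baseline, current)
    if stage is not None:
        # e.g. page or e-mail the responsible physician
        send_alert(patient_id, f"AKI sniffer: RIFLE class {stage}")
    return stage

# Example: baseline 80 umol/l rising to 170 umol/l -> RIFLE class "Injury".
screen_patient("ICU-042", 80, 170, send_alert=lambda pid, msg: print(pid, msg))
```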
Collectively, it is believed that these innovations in the diagnosis of AKI will greatly improve our capacity to identify risk and triage patients to interventional strategies that lead to improved outcomes.
What do You Consider to be the Prime Factors Associated With the Initiation of Renal Support in Critically Ill Patients?
The optimal time to start RRT in critically ill patients with AKI,
in the absence of immediate life-threatening complications such as
hyperkalaemia or diuretic-resistant pulmonary oedema, is currently unknown;
unfortunately, there is little consensus to guide clinicians on this issue.
This is an important knowledge gap in how we care for critically ill patients
with AKI, considering that RRT is one of the core technologies we use to
sustain life. Furthermore, survey data would suggest that there is considerable
variation in practice as to why and when RRT is utilised. This is clearly suboptimal, and it is believed that this variation may in itself contribute to less favourable outcomes.
Survey data we collected in 2012, as well as data collected by Thakar and colleagues in 2012, also show that the perception of life-threatening complications is an absolute trigger for starting RRT. However, observational studies have shown that these complications account for a minority of the prime indications for RRT in critically ill patients. Indeed, a recent study found that among all critically ill patients started on RRT, hyperkalaemia (K+ >6 mmol/l) was present in only 8%, severe acidaemia (pH <7.15) in 11% and azotaemia (urea >36 mmol/l) in 21%. Instead, the most common indications, in studies I led in 2012, were related to fluid overload or accumulation and oligoanuria, with most patients having multiple indications. Moreover, worsening illness severity correlates with a lower threshold for starting RRT, which may account for the low incidence of classic life-threatening complications in critically ill patients. Fortunately, ongoing randomised trials are evaluating the optimal timing and circumstances for starting RRT in the critically ill, which should better inform this issue (ClinicalTrials.gov NCT01557361).
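To make the quoted thresholds concrete, here is a small, purely illustrative sketch that checks the classic biochemical triggers mentioned above. As emphasised, most real-world decisions to start RRT rest on fluid accumulation, oligoanuria and overall trajectory rather than any single cut-off.

```python
# Illustrative sketch only: check the "classic" biochemical triggers quoted
# above (K+ >6 mmol/l, pH <7.15, urea >36 mmol/l). Not a decision rule.

def classic_rrt_triggers(potassium_mmol_l, ph, urea_mmol_l):
    """Return the list of classic thresholds exceeded (often empty)."""
    triggers = []
    if potassium_mmol_l > 6.0:
        triggers.append("hyperkalaemia")
    if ph < 7.15:
        triggers.append("severe acidaemia")
    if urea_mmol_l > 36.0:
        triggers.append("azotaemia")
    return triggers

# Example: K+ 5.4 mmol/l, pH 7.21, urea 28 mmol/l meets none of the classic
# criteria, yet such a patient might still be started on RRT for fluid
# accumulation and oliguria.
print(classic_rrt_triggers(5.4, 7.21, 28))
```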
What New Findings Can You Report on Haemofiltration and Haemodialysis for Acute Kidney Injury, and What Future Studies are Required in this Area?
In critically ill patients with AKI who are supported by continuous renal replacement therapy (CRRT), there has been uncertainty as to whether a particular mode of clearance, either continuous haemodialysis (CVVHD) or continuous haemofiltration (CVVH), is more efficacious and associated with better outcomes. The lack of certainty in this area has also likely contributed to wide variation in clinical practice in how CRRT is prescribed. Theoretically, continuous haemofiltration, whereby solute is cleared by convection, should better enable clearance of middle-molecular-weight molecules, including inflammatory mediators, and accordingly translate into improved clinical outcomes when compared with continuous haemodialysis. In a small phase II randomised trial (2012) comparing CVVH with CVVHD, we found a trend towards greater improvement in organ dysfunction in those allocated to CVVH, driven largely by a reduction in vasoactive requirements. In a systematic review of 19 unique studies with variable data available to evaluate clinical outcomes, there was no clear suggestion of superiority of haemodialysis or haemofiltration; however, the risk of bias across these studies was high. These data imply that a further large, high-quality randomised comparison of CVVH versus CVVHD is not only feasible but necessary to better shape best practice for the delivery of renal support in critically ill patients with AKI.
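As a rough illustration of why convection is expected to favour middle molecules, the following sketch compares approximate clearances under the two modalities. The sieving and saturation coefficients are assumed, rounded values chosen only for illustration, not measured data from any trial cited here.

```python
# Illustrative sketch only: convective clearance (CVVH) is roughly effluent
# flow x sieving coefficient, while diffusive clearance (CVVHD) at slow
# dialysate flows is roughly dialysate flow x effluent saturation; saturation
# falls off for larger solutes, whereas sieving stays near 1 up to the
# membrane cut-off. All coefficients below are assumed, rounded values.

def convective_clearance(effluent_ml_min, sieving_coefficient):
    """Approximate clearance for post-dilution haemofiltration (CVVH)."""
    return effluent_ml_min * sieving_coefficient

def diffusive_clearance(dialysate_ml_min, saturation):
    """Approximate clearance for slow-flow haemodialysis (CVVHD)."""
    return dialysate_ml_min * saturation

flow = 35.0  # ml/min, an arbitrary example effluent/dialysate rate
solutes = {
    # name: (assumed sieving coefficient, assumed dialysate saturation)
    "urea (small solute)": (1.0, 1.0),
    "beta2-microglobulin (middle molecule)": (0.7, 0.3),
}
for name, (sc, sat) in solutes.items():
    print(f"{name}: CVVH ~{convective_clearance(flow, sc):.0f} ml/min, "
          f"CVVHD ~{diffusive_clearance(flow, sat):.0f} ml/min")
```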
Which Interactions Between Organs or Compartments do You Think are Posing the Greatest Challenge to Physicians, ICUs and Medical Establishments Worldwide, and Where is Further Research Most Warranted?
The kidneys’ contributions to physiologic homeostasis are often
under-appreciated. The kidney receives a considerable proportion of all cardiac
output and is vital for several regulatory processes, including nitrogenous waste excretion/detoxification, fluid balance, electrolyte (e.g. sodium, potassium) and acid-base homeostasis, and neuro-hormonal regulation (e.g. renin-angiotensin, erythropoietin). Importantly, when the kidneys fail, renal replacement therapy does not in fact “replace” kidney function, but merely supports limited aspects of the kidneys’ normal function (i.e. fluid, acid-base, azotaemic and potassium control); full replacement could, in essence, only be accomplished by a kidney transplant. Indeed, as shown by Duranton and his study team in 2012, there are literally dozens of uraemic toxins that have the potential to interact with, and cause disruption of, other vital organs. In the critically ill patient, multi-organ dysfunction may herald the final common pathway of many inciting events (e.g. sepsis); however, without question the kidney is an active pro-inflammatory participant, if not protagonist, in this process. The failing kidney has implications for numerous vital organs, including the brain, heart, lung and liver, whereby specific organ interactions may instigate and exacerbate bi-directional dysfunction, as described by Grams and Rabb in 2012. The challenge for clinicians is to understand key strategies and develop interventions that interrupt organ crosstalk pathways and lead to improved outcomes for patients.
What are Your Most Significant Research Findings Regarding Elevated Cardiac-Specific Troponin (and Related Cardiac Complications) Following Emergency Repair of Ruptured Abdominal Aortic Aneurysms?
With local collaborators and collaborators at other centres, we have conducted a number of investigations focused on the prognostic implications of cardiac-specific troponin leak and outcomes among patients undergoing non-cardiac surgery and in critically ill patients. In the VISION study published in 2012, Devereaux and coinvestigators found that perioperative peak troponin elevation was independently associated with a graded increase in 30-day all-cause mortality after non-cardiac surgery. Elevation in cardiac-specific troponin has also been shown to be common in critically ill patients and correlates with myocardial infarction and an increased risk of death.
We recently (2012) explored the incidence and significance of perioperative troponin elevation in a retrospective population-based cohort of patients with ruptured abdominal aortic aneurysm surviving to receive emergent operative repair. In this cohort, we found that 55% had elevated troponin levels in the first 72 hours after surgery, of whom 43% had acute changes on their electrocardiogram (ECG) consistent with ischaemia. Troponin-positive patients had a higher baseline prevalence of coronary artery disease and greater illness severity; importantly, these patients also received a greater intensity of support (i.e. vasopressor or inotropic support), used greater health resources and were at a higher risk of in-hospital death. Moreover, elevated troponin accompanied by acute ECG changes was linked to an increased risk of complications, including heart failure and cardiogenic shock. We also found that fewer than two-thirds of troponin-positive patients were investigated with echocardiography, despite a high incidence of myocardial dysfunction and wall motion abnormalities, and fewer still received an interventional procedure. We believe our data highlight uncertainty in how best to manage troponin elevation in perioperative critically ill patients in order to mitigate less favourable outcomes.
What Studies do You Currently Have Underway? What is the Significance of this Research?
At the University of Alberta Hospital, there is a large liver
failure population and a large transplant programme. We have shown in
preliminary studies (2009; 2011) that the use of continuous RRT during liver
transplantation is safe and feasible in carefully selected patients. We are now
performing a phase II randomised trial investigating the optimal method for intraoperative renal support during liver transplantation for patients with
high illness severity and kidney dysfunction (ClinicalTrials.gov: NCT01575015).
This trial is evaluating the impact of intraoperative CRRT, compared with usual
care, on the occurrence of intraoperative and early postoperative adverse
events, fluid management, and graft function. We believe this trial will help
inform best perioperative practice for critically ill liver failure patients
referred for liver transplantation. In addition, with local colleagues, we have been interested in exploring the clinical significance and the potential modifying impact of frailty in critical illness. Frailty is described as a
multi-dimensional syndrome characterised by the loss of physiologic and
cognitive reserves that gives rise to heightened vulnerability to adverse events.
We have hypothesised (2011) that frailty may be an important determinant of
survival and recovery from an episode of critical illness. We have recently
finished a large multi-centre prospective observational cohort study evaluating
the prevalence and outcomes associated with frailty in older patients admitted
to the ICU.
What Problems in Critical Care Management do You Think Warrant the Most Consideration in Developing as Well as Developed Countries?
My belief is that there are considerable challenges ahead for critical care in both developing and developed countries. Some of the challenges in developing countries relate to inadequate primary care, which in many respects could prevent critical illness, as studied by Adhikari and colleagues in 2010. However, the challenges are far more complex and must also account for the critical illnesses seen in developed countries, as well as the added burden of conflict and natural disasters, as expressed by Vanholder and his team in 2010. So while demand in developing countries is likely to expand, critical care services are expensive, and the capacity to pay for them will be limited. In developed countries, particularly those with publicly funded models, one of our most significant challenges will be how to respond judiciously to the growing demand and societal expectation for critical care services amid limited ICU bed capacity and resource availability, especially in the context of a growing older population.