Is Kt/V really the best way to measure dialysis adequacy?


This morning’s Renal Grand Rounds featured an interesting talk by Edmund Lowrie, a nephrologist currently employed by Fresenius Dialysis who has studied measurements of dialysis adequacy for several decades. In his talk, he made the case that Kt/V is a highly flawed means of assessing dialysis adequacy.
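As a quick refresher, the definitions below are standard; the example numbers in the comments, though, are my own illustration and not figures from the talk:

```latex
% Kt/V: dialyzer urea clearance x time, normalized to urea distribution volume
%   K = dialyzer urea clearance (L/h)
%   t = session length (h)
%   V = urea distribution volume (L), approximately total body water
\[
  \frac{Kt}{V} \;=\; \frac{K\,[\mathrm{L/h}] \times t\,[\mathrm{h}]}{V\,[\mathrm{L}]}
\]
% Illustrative numbers: K = 15 L/h (250 mL/min), t = 4 h, V = 40 L
%   => Kt/V = 60/40 = 1.5
```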

To summarize his argument, he showed that both the classic HEMO study and more recent large database-driven studies demonstrate that Kt and V are each independent predictors of survival. Plotting the odds ratio (OR) of death on the y-axis against Kt on the x-axis shows that higher Kt values are associated with better survival; similarly, larger V values are associated with better survival (excluding patients with excessively high V values, such as the morbidly obese).
So why would we divide one independent predictor by another independent predictor to assess adequacy?  When framed this way, it doesn’t make sense.  The speaker advocated an alternative approach: deriving optimal Kt targets for given ranges of V from existing mortality data (in practice, he advocates using body surface area rather than the volume of distribution, simply because it is so much easier to measure).  He also pointed out something that many of us have figured out in clinical practice: smaller patients can easily achieve the existing Kt/V goals, but that doesn’t necessarily mean they are adequately dialyzed.
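To make that last point concrete, here is a back-of-the-envelope comparison. The numbers are my own illustration (V estimates in the range of common anthropometric formulas), not figures from the talk:

```latex
% Two patients hitting the same Kt/V target of 1.4 (illustrative values)
%   Small patient:  V ~ 25 L  =>  required Kt = 1.4 x 25 = 35 L
%   Large patient:  V ~ 45 L  =>  required Kt = 1.4 x 45 = 63 L
\[
  Kt_{\mathrm{required}} = 1.4 \times V
  \quad\Rightarrow\quad
  \begin{cases}
    35\ \mathrm{L} & \text{if } V = 25\ \mathrm{L},\\
    63\ \mathrm{L} & \text{if } V = 45\ \mathrm{L}.
  \end{cases}
\]
% If Kt is itself an independent predictor of survival, the smaller
% patient "passes" the Kt/V target while receiving far less total clearance.
```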
Kt/V is so universally adopted by the dialysis community, however, that this doesn’t appear likely to change anytime soon.

1 comment

  1. Perhaps Kt/V Urea should be replaced with Kt/V Beta-2 microglobulin.
