Traditionally, success in implant dentistry has been determined through the use of surrogate outcomes. In many instances these surrogates have little or no influence on the patient's interpretation of treatment success, yet they continue to be used because they are easy to quantify. Common thought in research is that all assessments must be objective in nature. By contrast, subjective assessments make it difficult to establish a number that defines success, and without a number it is nearly impossible to identify a hierarchy of success. Although patient acceptance of treatment may be recognized as the most critical factor clinicians face, establishing a series of acceptable outcomes is a difficult, perhaps impossible, task. Ultimate treatment success may be more appropriately found by actively listening to patients to identify what has "worked" for them and then extrapolating this into what will work for others.
In the world of healthcare it is always interesting to observe the outcomes that are evaluated when attempting to define the success or failure of treatment. We often find that researchers use a number of clinically identifiable and measurable events to describe the favorability of one technique over another. Rather than directly asking patients whether they are satisfied with the treatment that has been performed, investigators seem to be enamored of surrogate outcomes, with the assumption that the surrogate closely reflects the outcome that actually matters.
This approach, using surrogate outcomes because they are quantifiable, is virtually ubiquitous within the scientific community. The logic is obvious: if a device can be created and used to measure a specific aspect of clinical performance, and if those measurements are reasonably correlated with a clinical outcome, then that device is thought to have clear value in determining successful treatment. Furthermore, assembling a sufficient number of different surrogates must then be a way to identify treatment success.
Devices that produce numerical values would seem preferable to methods that fail to provide discrete outcomes. By this logic, the most predictive clinical studies are described as the ones that assemble the largest number of data points across different assessment methods. If one surrogate outcome is nice to have, a large number of surrogates, each measuring something different, would logically be even better.
Although the argument seems, at face value, so obvious that it is virtually impossible to disagree with, one may wonder whether this is truly the best way to measure successful treatment. Since we know that science is by nature objective, the use of objectively created and calibrated devices must certainly be superior to an approach that depends upon subjective findings.
Even trying to envision a way to quantify a subjective quality appears to be an impossibility. Certainly it would be nice to wave a wand over a painting that would identify beauty or the lack of it. It would be wonderful to have a “good-o-meter” that reliably and predictably differentiates good from bad even when the assessment of good or bad is by nature subjective. Of course, this entire discussion seems somewhat frivolous in our world of X’s and O’s, where objective measurements provide the clear differentiation while subjective assessments become nothing more than artistic interpretations. Even so, we must appreciate that the patient must be adequately served when undergoing treatment.
Although it seems difficult, there have been a number of efforts to quantify performance that does not always lend itself to simple analysis. Consider chewing function: traditional methods of assessment use a sieve to determine how well a patient triturates food. Chewing carrots for a specified number of chewing cycles before expectorating into the sieve may well determine how small the chewed particles of carrot become, but this may not be relevant to the patient even though the assessment is objective. The alternative approach, asking patients to judge how well they chew, would likely provide an answer that is absolutely truthful but unsatisfactory to the investigator who feels a need to put a value on the performance.
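To make the contrast concrete, the following is a minimal sketch of how such an objective chewing test might be summarized, assuming a simple single-sieve protocol in which the chewed test food is washed through the sieve and both fractions are weighed; the 2 mm aperture, the function name and the sample weights are illustrative assumptions rather than a standardized method.

```python
# A hedged sketch, not a standardized masticatory performance test:
# the patient chews a fixed portion of carrot for a set number of cycles,
# the bolus is washed through a single sieve, and both fractions are weighed.

def masticatory_performance(passed_g: float, retained_g: float) -> float:
    """Percentage of the chewed test food, by weight, fine enough to pass the sieve."""
    total = passed_g + retained_g
    if total <= 0:
        raise ValueError("no recovered test food")
    return 100.0 * passed_g / total

# Hypothetical example: 3.1 g passed a 2 mm sieve, 1.9 g was retained on it.
print(f"Masticatory performance: {masticatory_performance(3.1, 1.9):.1f}%")  # 62.0%
```

The number produced is perfectly objective, which is precisely the point: nothing in it tells us whether the patient feels they chew well.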
Defining patient-centered outcomes is harder than you may think. If you search the internet, you will likely become bogged down in millions of hits with no clear indication of the most authoritative site or definition. Wikipedia provides a succinct definition: "outcomes from medical care that are important to patients." It is certainly hard to argue with this definition, but we probably need something more concrete to evaluate, so it is back to the internet once again. Eventually one may discover the "Patient-Centered Outcomes Research Institute", a congressionally authorized institute that describes the efforts under way to define the field of patient-centered outcomes. Unfortunately this institute, like so many governmentally funded efforts, produces group-speak that may lead to confusion rather than clarity. Frankly, reading through the myriad information on this and other sites is likely to bring sleep to the most hopeless of insomniacs.
Circling back to the notion that patient-centered outcomes identify issues that are important to patients, we then think about ways to objectify the subjective "feelings" of patients. One method that has been used is the Oral Health Impact Profile, a questionnaire that uses Likert scales to assess patient quality of life before and after provision of a specific treatment. The patient responds to simple questions, and those answers differentiate success from a lack of success. This approach is certainly more appropriate than simply asking the patient whether they received benefit from the intervention, because the design allows patients to score how much the outcomes matter to them. The problem is that this sort of instrument needs to be administered before and after (and sometimes during) treatment. Failure to survey the patient prior to treatment means that the results of the study depend on the patient's memory of their previous experience, compared against their observations at the time the questionnaire is administered. Memory in this situation may be faulty.
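A minimal sketch of that before-and-after comparison is shown below, assuming a shortened OHIP-style questionnaire scored on a 0 to 4 Likert scale (0 = never bothered, 4 = very often bothered); the item wording, the number of items and the example responses are illustrative assumptions, not the validated OHIP instrument itself.

```python
# Illustrative, OHIP-style items only; not the validated questionnaire.
ITEMS = [
    "trouble pronouncing words",
    "sense of taste worsened",
    "uncomfortable to eat",
    "self-conscious",
    "diet unsatisfactory",
    "interrupted meals",
    "difficult to relax",
]

def impact_summary(responses: dict[str, int]) -> int:
    """Sum the Likert scores; a higher total means a greater negative impact."""
    return sum(responses[item] for item in ITEMS)

# Hypothetical patient, surveyed before and again after implant treatment.
before = {"trouble pronouncing words": 1, "sense of taste worsened": 0,
          "uncomfortable to eat": 4, "self-conscious": 3,
          "diet unsatisfactory": 4, "interrupted meals": 3,
          "difficult to relax": 2}
after = {"trouble pronouncing words": 1, "sense of taste worsened": 0,
         "uncomfortable to eat": 1, "self-conscious": 1,
         "diet unsatisfactory": 1, "interrupted meals": 0,
         "difficult to relax": 1}

change = impact_summary(before) - impact_summary(after)
print(f"Impact score fell by {change} points after treatment")  # 12 points
```

The arithmetic is trivial; the value of the instrument lies entirely in having collected the "before" responses at the right time rather than reconstructing them from memory.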
Are we missing something? By requiring objective criteria for all treatment interventions, do we lose our ability to differentiate? Perhaps a few examples might help. I grew up in the 60s, when everyone had an opinion on who was better: the Beatles or the Stones. I took literature courses in college where we argued the relative merits of Dostoevsky and Tolstoy. I've seen paintings by Rembrandt, van Gogh, Monet and many other artists, but in the final assessment, the judgment that one is clearly superior to another is a visceral response rather than an objective measurement.
I recently finished reading Walter Isaacson's biography of Steve Jobs. Isaacson portrayed Jobs as an individual who evaluated products in very polar ways: Apple products were either insanely great or they were excrement. But it wasn't just products that were being introduced; Jobs was introducing ideas that had not previously existed. There wasn't much equivocation in the way that Jobs considered products or ideas. Even more, it appeared that he looked at life in much the same way: events were either insanely great or they were crap (those of you who have read the book will recognize this as a sanitized version of how Jobs described these dichotomous events). Interestingly enough, much of what Jobs brought to the world consisted of ideas based upon subjective assessments rather than absolute objective truths. Jobs described products that were brought to market before people knew that they needed them. This stands in direct contrast to the rest of life, where we see products introduced in response to demand; in the world of Apple, the product was created with the anticipation that demand would follow in response to the brilliance behind the product.
This approach was incredibly risky. One miscalculation, one misinterpretation in a given year's product line could portend disaster. The iPad was not the first tablet, nor was the iPhone the first smartphone, but both were devices that transformed their categories. There certainly were many who purchased alternative devices or clung to their previous generation of device, assuming that the differences between what they had and what Apple created were only incremental. Objectively they would argue that the price differential could not be justified, but subjectively they gradually appreciated the differences, eventually succumbing to the desire to embrace the Apple product line. People came to realize that working around minor shortfalls demanded energy that ultimately diminished creativity, and when that happened everyone lost a little.
Coming back to a discussion of patient-centered outcomes, we might ask what we miss if we fail to involve the patient, and their assessment of the outcomes of treatment, in our discussions of "successful" treatment. Being scientists, we are always reluctant to ask a question that is difficult to measure. We seem to be fearful of soliciting input because what the patient tells us may differ from our assessment of what is right and what is wrong. After all, our assessment almost inevitably prefers results that identify quantifiable differences showing, without a doubt, that one treatment is superior to the alternative.
When I received the request to write an essay on the role of patient-centered outcomes in implant dentistry, my first reaction was to assemble all the examples of how we evaluate outcomes that are addressed specifically to the patient and their response to implant treatment. I started to look up the different factors that could be evaluated. I looked at ways to objectify the subjective responses of patients. I identified a field that is obviously growing and is receiving much financial support from government, perhaps because government loves to study more than it wishes to act. The more I looked at this topic, the more confused I became.
My thought for the first draft was to present an intellectual treatise on patient-centered outcomes and clinical outcome assessments of the different oral health impact factors that have been identified. All of this seemed like a nice academic effort that might provide some, albeit small, amount of information to readers. The more I wrote, and the more drafts I went through, the more I started to look at things differently. Perhaps we don't need a scorecard to determine what is good and what is bad for everything that we do. Perhaps we learn as much by talking to our patients and truly listening to what they say rather than attempting to put a new number on every observation. In putting these few paragraphs onto paper, I think I've gained a better appreciation that it isn't the number we derive when we test how an implant resonates, nor the number we observe when we loosen a screw; it is instead the effect of what we do for our patients that really makes the difference.
Rather than transforming patient-centered outcomes into a new area of investigation quantified primarily by psychosocial rather than physical assessments, we might consider the value of learning how to ask our patients whether we are doing right by them. Perhaps, by listening, we may hear all that we need to hear. The fly in this ointment is that we must actually listen to, and hear, what the patient says. Clinicians must listen actively, use validated surveys and carefully analyze the results. Likewise, the clinician investigator (we are all investigators in the sense that we evaluate the treatment we provide) must honestly assess all aspects of treatment while understanding that the ultimate goal of treatment is to please the patient.