With PX Data, Timing Matters More Than We May Think
By William R. England, Ph.D., Strategic Advisor—Research, NRC Health
In our recent nSight about moving from measurement to momentum, we pointed out how timely patient feedback can complement HCAHPS surveying and help health systems respond more quickly to what patients are experiencing.
That piece focused mainly on operational benefits: faster feedback equals faster learning, and (fingers crossed) faster improvement. We also showed how looking at NPS® and HCAHPS scores side by side over shorter time intervals can be a bit misleading: while these metrics are strongly correlated, variance due to low n-sizes over short time periods can bury the signal in the noise.
But there’s another dimension to survey timing that is perhaps too little discussed: namely, that getting timely feedback doesn’t just influence how quickly organizations can react to data, but actually shapes the data itself. That’s a distinction that makes a potentially profound difference.
When patients respond to surveys affects what they say, and it also impacts who ends up responding at all. So yes, timing influences how quickly we can react to patient feedback, but just as importantly, it affects respondent sentiment and the representativeness of the data.
What We Know About Early vs. Late Survey Responders
A raft of prior research shows that late responders to surveys tend to be less positive than early responders. We see evidence of this in HCAHPS data as well.
When responses are grouped according to how many days have passed between patient discharge and survey return, Overall Rating scores steadily decline as time goes on. Patients who returned surveys within the first few weeks of their discharge were significantly more likely to give positive scores (9s or 10s), while patients who responded later were less positive on average.
The statistical relationship is strong enough that ~90% of the variation in positive scores (the R² in this analysis) could be explained simply by the number of days that had passed since discharge.
Why Sentiment Declines Over Time
There are a few reasons this might happen. One explanation has to do with response behavior. Patients who have had a good experience often respond more quickly. Maybe they’re still thinking about their excellent visit and want to share that feedback, so they complete the survey right away.
Patients who had a less-than-positive experience, on the other hand, may be slower to respond or may only respond after a reminder mailing arrives. Over time, the composition of respondents drifts, and average scores drift with it.
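That composition effect is easy to demonstrate with a toy simulation. The pool sizes, return-day distributions, and scores below are all hypothetical assumptions for illustration, not NRC Health data; the point is simply that when satisfied patients respond earlier, the running aggregate starts high and drifts down as later responses mix in:

```python
import random

random.seed(0)

# Hypothetical pool: satisfied patients (top-box 9/10) tend to return surveys
# early; less satisfied patients tend to return them later.
satisfied = [(random.uniform(1, 20), 1) for _ in range(800)]       # (return day, top-box flag)
less_satisfied = [(random.uniform(10, 50), 0) for _ in range(300)]

arrivals = sorted(satisfied + less_satisfied)  # order surveys actually come back

# Track the running percent-positive as responses accumulate.
running = []
top_box = 0
for i, (_day, positive) in enumerate(arrivals, start=1):
    top_box += positive
    running.append(100 * top_box / i)

print(f"first 100 responses: {running[99]:.1f}% positive")
print(f"all {len(arrivals)} responses: {running[-1]:.1f}% positive")
```

No individual patient's opinion changes in this simulation; the aggregate falls purely because the mix of who has responded shifts over time.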
Another explanation has to do with how people remember experiences. Anyone who has tried to recall the details of something that happened a month or two ago knows that memories can get hazy. Specific interactions fade. A patient might remember being uncomfortable after surgery but forget that a nurse checked in frequently during the night. Or they may remember waiting longer than expected but not recall the justifiable reason they were given for the longer wait time. And as time passes, people sometimes just make up—or fill in—experiences, rather than recall them exactly as they happened.
And during the time that passes post-discharge, the bill usually arrives.
The Impact of Delayed Responses on HCAHPS Scores
In the traditional HCAHPS mail process, the response window stretches across several weeks. Roughly 80% of responses are returned by around 40 days after discharge. Over that window, Overall Rating (OR) scores drift downward, from ~80% positive among early respondents to 71% once most surveys have been returned.
That difference matters. If a hospital leader saw scores fall eight or nine points from one month to the next, it would likely (and rightfully!) trigger serious concern. But when the shift occurs gradually across response waves and results are later aggregated, timing-related variation in sentiment can be difficult to see.
It’s important to note that the decline does not happen all at once. The slope is shallow: drift emerges early, then stabilizes over the longer response tail. From the point where 80% of surveys are in, OR scores move down just one more point by the time 99% of surveys have been returned. In other words, the issue is not that HCAHPS responses carry recency bias from the start, but that the extended response window allows “sentiment drift” to accumulate before most of the data arrives. Timely patient feedback compresses that window.
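To make the threshold comparison concrete, here is a small sketch of computing the cumulative OR score at the 80% and 99% return points. The per-day counts and scores are invented for illustration; only the computation pattern is the point:

```python
# Illustrative per-day returns: (day, surveys returned, percent top-box that day)
daily = [
    (5, 300, 80.0), (10, 250, 78.5), (20, 200, 76.0),
    (30, 150, 74.0), (40, 100, 72.0), (60, 80, 70.0), (90, 20, 68.0),
]
total = sum(n for _, n, _ in daily)

def cumulative_score(threshold):
    """Aggregate percent-positive once `threshold` of all surveys are in."""
    returned = positives = 0.0
    for _day, n, pct in daily:
        returned += n
        positives += n * pct / 100
        if returned / total >= threshold:
            break
    return 100 * positives / returned

print(f"at 80% returned: {cumulative_score(0.80):.1f}% positive")
print(f"at 99% returned: {cumulative_score(0.99):.1f}% positive")
```

With these made-up numbers, the aggregate moves down only a point or so between the 80% and 99% thresholds, which mirrors the pattern described above: most of the drift has already accumulated by the time the bulk of surveys are in.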
Shorter Response Windows Reduce Sentiment Drift
One way to see this dynamic more clearly is to compare response windows across survey methods. When HCAHPS surveys are delivered by email instead of by mail, the response window shortens considerably.
In the ‘received by web’ dataset shown above, n-sizes are smaller, so there is more variance, but the pattern is still present. Roughly 80% of responses arrive by about day 15 post-discharge rather than day 40. Sentiment drift still exists, but the magnitude of the shift ranges from just two to seven points around that time frame (an average of 70.4% for days 14-16, a four-point drop in scores), because the response window is shorter and most responses are captured earlier.
Real-time feedback compresses the window even further. With NRC Health’s experience management solution, roughly 80% of responses are returned within about eight days of the care experience. When responses are examined across those first several days, positive scores remain relatively stable. Scores returned on day eight have fallen off less than HCAHPS scores and remain higher at the 80% return rate threshold.
The takeaway is not that recency bias disappears entirely; some level of sentiment drift occurs in all three datasets. What changes is the degree to which the drift shapes the final aggregate score. The longer the response window, the more opportunity there is for sentiment drift to impact scores before most of the feedback has been collected.
Why Timing Cannot Be Ignored
All of this brings us back to a simple point. When feedback is collected plays a major role in the story the data ends up telling. That said, HCAHPS remains essential for national benchmarking and public reporting; it provides a mandated and standardized comparison point. NRC Health’s experience feedback serves different purposes. It captures impressions closer to the moment of care and offers perspectives that might otherwise be under- (or mis-) represented.
We know that when health systems combine these approaches, they gain a clearer and more complete picture of patient experience. And as a result, they improve.
In the end, measurement is not just about what we ask patients; it’s about when we ask them and when they answer. The bottom line: survey timing can heavily influence the answers to our questions in ways that are easy to overlook but important to understand—so augmenting HCAHPS data with more timely feedback is a good idea.
Net Promoter®, NPS®, NPS Prism®, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., NICE Systems, Inc., and Fred Reichheld. Net Promoter ScoreSM and Net Promoter SystemSM are service marks of Bain & Company, Inc., NICE Systems, Inc., and Fred Reichheld.