Stats and other evidence can look great at first, but after a little digging…not so much
A big part of the content we create for clients, whether it is a blog post, bylined article, white paper, or even an infographic, is research. It can be a collaborative process where the client recommends statistics or a recent study they have discovered and want to highlight, but often it will be up to the writer to unearth interesting data on their own.
Anyone who’s spent time researching online knows what an exhausting process it can be. When searching on PubMed, for example, a writer can spend hours scanning study after study and still not find enough relevant data for the article they’re writing. Either the nature of the research does not quite relate to the content, it’s too dated, or it says nothing conclusive other than “more research is needed.”
Searching for recent results from a non-scientific poll or survey can be even more fruitless and risky. Part of the reason is that healthcare information technology is at least a $227 billion industry in which it can be difficult to generate media and industry awareness of your company (unless you have a great partner like Amendola). That is why many companies conduct their own surveys or polls to generate media interest.
More often than I would like, while researching for one client I will discover that enticing results came from a survey commissioned by a competitor. Such a conflict makes those stats, while tempting, off-limits. (But I am surprised how often I come across competitors who will cite one another’s research in their content.)
To avoid these mistakes and ensure the research you use in your content is relevant and accurate, consider these tips:
Get to the Bottom of It
I recently came across a stat that was perfect for a writing project I was working on: 85% to 99% of medical device alerts are not clinically actionable. I saw it cited in numerous medical journals and even in books, each with different attribution, with many crediting it to The Joint Commission.
Although TJC did reference it, the stat originally came from a 2011 report from the Association for the Advancement of Medical Instrumentation, which cites that figure to a study conducted by Children’s National Medical Center in Washington, D.C. I highlight this example to illustrate how challenging, but also how important it is, to identify and link to the original source of the stat.
Not only is it the most accurate way to present the data, but by getting to the root of the source you may discover that it is not reliable, or that it comes from an organization (i.e., a competitor) you do not want to draw attention to in your content.
Find the Context
Avoiding information from a competitor is an excellent reason to scrutinize the source of data. But after you determine where the information is coming from, you should also investigate the context around the data so you can further evaluate its credibility.
Some organizations will issue press releases, or mention in blog posts or bylined articles, “astounding” results from a survey or research they’ve conducted. In reading the full report or study, however, you may learn that the PR or marketing materials carefully omitted important context, creating a misleading perspective.
For example, you may find survey results that show “90% of physicians are considering retiring within the next five years.” However, if you dig deeper into the survey, you may find that the survey question was only conducted on physicians age 65 or older. While that may be an exaggerated hypothetical scenario, it shows how risky it is to feature data without investigating the context.
Science or Pseudo-Science?
As New York Times reporter Carl Zimmer pointed out, thanks to COVID-19, many more people are reading scientific papers, but are finding them difficult to understand. Although I’ve been reading these studies for many years, I, too, occasionally struggle to interpret findings well enough to incorporate them in the content we create.
Fortunately, article abstracts typically offer enough information to help decide if you should keep reading. Abstracts also provide insight into the scientific rigor behind the study, such as if it is a randomized controlled trial, which is the gold standard for medical research.
Even in such trials, if the study includes only a very small or narrow population of patients, its findings may not apply to what you’re writing. The publication itself should also be considered. A peer-reviewed publication is ideal, as is information published in prominent journals such as JAMA, Science, The New England Journal of Medicine, and The Lancet, although even some of these journals have taken credibility dings lately in the rush to publish COVID-19 research.
You Are the Gatekeeper
Regardless of the quality of the data or the publication, you are the ultimate gatekeeper: Do you find the information and publication credible? Will it be meaningful and interesting to the prospective reader of the content? If so, then include it.
The great part about finding lots of meaningful research data is that it can help accelerate the writing process. With lots of information to include, it is just a matter of organizing and presenting it compellingly…but that’s for another post.