More data does not mean more over-treatment (in the long run)

Whenever I read articles about new methods of gathering health information (such as frequent blood work, imaging, or other sensor data), one of the most common responses is that with this data we risk massive over-diagnosis and over-treatment of patients.

This isn’t just a straw man argument to beat up; that’s an exact quote from a Hacker News discussion, and it’s one I’ve seen doctors bring up many times in quantified-self-related articles.

I appreciate the systemic concern behind these responses, because they reflect truths on the ground right now. Yet every time I see this argument, it strikes me as incredibly short-sighted. It’s yesterday’s thinking projected into the future.

Today’s Mismatch of Patients and Medicine

Over-diagnosis and over-treatment are concerns rooted in real experiences and systemic struggles felt today. Increasingly, patients are coming in armed with DNA results, heart rate data, or just things they heard and symptoms they looked up, leading to the common refrain ‘with Google, everyone is a doctor’. Unfortunately, what these patients want is fundamentally incompatible with today’s way of doing medicine.

Our present system is a hydra of many models and approaches, so it’s hard to reduce it to one thing, but I think it’s fair to say today’s medicine is a rather blunt instrument. It only intervenes when things are generally pretty bad for patients. It operates on a ‘trust us, we’re experts’ model. It even (generally) presumes everyone is decently healthy, otherwise they’d be in a hospital. Medicine most often ends the moment the patient steps outside the hospital.


Just as our medical system is reactive to sickness rather than proactive with preventative medicine, so too is its approach to information. Doctors must react to patients’ misunderstandings of what they read or what their test results show, rather than having a proactive system provide information services that help patients live healthier lives and grapple with medical issues.

This experience is incredibly unsatisfying and often sub-optimal for patients wishing to take control of their well-being. We live in a world where answers are a query away, so it’s natural that when we’re not feeling well, we seek answers rather than ignore our symptoms or trust what the doctor said in the few minutes they saw us.

Unfortunately, looking for medical information online is almost a fool’s errand. Either you get highly generic information that leads you to think your symptoms are some horrible disease, or you find specific information followed by terrible advice in some sort of blind-leading-the-blind forum. Nevertheless, many ill-informed patients gain confidence in their newfound ‘expertise’ and begin to distrust doctors, as the medical system isn’t providing them adequate peace of mind in the age of information.

Today, data is for sick people

Medical information and patient data are two sides of the same coin, yet they rarely intersect in a meaningful way for patients. Most health data today is in the form of bloodwork or imaging that we barely have access to, and we generally get it only when we’re sick. Because people aren’t sick very often, the data tends to be sparse, and doctor interpretation really matters. This interpretation is costly: sparseness means any single result is a noisy snapshot, and expertise is the best way to improve its accuracy.

Because we have so few data points, we actually have little idea about the natural rhythms and individual differences of any one person. Instead, we have a really static view of all aspects of our bodies, coupled with very broad ‘normal’ ranges – the blunt instrument of medicine applied to data. Given this context, if we run a bunch of tests on a perfectly healthy person, it’s highly likely something will come back outside normal. Applying today’s thinking, that’s an over-diagnosis problem that could lead to over-treatment, so we don’t do it.
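
To make that concrete, here’s a rough back-of-the-envelope calculation. It’s purely illustrative: it assumes each ‘normal’ range is a 95% reference interval, that test results are independent, and a panel of 20 markers – all assumptions on my part, not figures from any particular lab panel.

```python
# Rough illustration only: assume each test's "normal" range covers 95% of
# healthy people, results are independent, and the panel has 20 markers.
p_flag_single = 0.05   # chance a healthy person falls outside one test's range
n_tests = 20           # assumed panel size, for illustration

p_at_least_one_flag = 1 - (1 - p_flag_single) ** n_tests
print(f"{p_at_least_one_flag:.0%}")  # ~64% chance of at least one 'abnormal' result
```

In other words, under these assumptions a perfectly healthy person who gets a broad panel is more likely than not to see something flagged.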

The counter-intuitive nature of ‘less data is better’ is not lost on patients, and many have sought to better understand their health through DNA kits, nutrition and activity tracking, heart rate and blood pressure monitors, and many other sources. Meanwhile, the medical system itself has no idea what to do with data that is not directly related to sick people. It views this patient data skeptically and, because it bears the costs of servicing inquisitive patients, simply worries about over-diagnosis.

Yet data from healthy people means fewer sick people

The whole process of diagnosis can change, though. If we gathered more frequent bloodwork and imaging from lots of people, we’d get a better sense of the patterns and variance in every individual – the dynamic body, not the static snapshot. We’d be able to catch things earlier. We’d start to understand what normal is for the individual, rather than for the mythical average person (who doesn’t exist). And with more focus on data over time, the medical system would finally know what to do with patient-provided data – it would act as a complement, painting a more complete picture of what was going on.

This has the potential to lower costs in three ways.

First, this acts as a way to bridge the gap between patients and doctors. With an outlet to provide their own health data, patients would feel heard and get more bought into the process, ultimately becoming more involved in their health. Mending the distrust between patients and doctors would mean less time wasted fielding questions about Google results and patients more likely to follow the care plans their doctors give them.

Second, this data would bring greater accuracy in diagnosis. This can lead to a lower cost structure because interpretation could be more broadly distributed among healthcare workers and automated systems, rather than always requiring the expertise of a doctor.

Third, this data would greatly enhance preventative medicine. Instead of being overly concerned when one of our tests goes outside the normal range, as we would be today, we’ll be used to it. Normal ranges themselves will give way to ‘normal for us’ ranges. These data might prompt health care providers to recommend some minor preventative course-correction instead of remaining unaware of a condition until it reaches a sufficiently bad point.
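
As a sketch of what a ‘normal for us’ range could look like in practice – purely illustrative, not any established clinical method; the marker, the readings, and the two-standard-deviation band below are all assumptions of mine – one might derive a personal band from an individual’s own history:

```python
# Illustrative sketch: derive a personal "normal for us" band for one blood
# marker from an individual's own history, instead of a population-wide range.
# The marker, values, and 2-sigma band are hypothetical choices for this example.
from statistics import mean, stdev

fasting_glucose_history = [88, 92, 85, 90, 94, 87, 91, 89]  # mg/dL, hypothetical readings

mu = mean(fasting_glucose_history)
sigma = stdev(fasting_glucose_history)
personal_low, personal_high = mu - 2 * sigma, mu + 2 * sigma

latest = 97  # hypothetical new result
if personal_low <= latest <= personal_high:
    print(f"{latest} mg/dL is within this person's usual band")
else:
    print(f"{latest} mg/dL is outside this person's usual {personal_low:.0f}-{personal_high:.0f} mg/dL band")
```

The point isn’t the particular statistics; it’s that a result judged against your own history carries different information than one judged against a broad population range.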

It’s the getting there that is hard

This is why the medical system should be working today to move from the doctor-centric, data-when-necessary model to a ‘data early, data often’ model. With enough data, we’ll be much better able to understand when something is truly problematic enough to require expensive and risky treatment. In that respect, frequent testing can lead to less of the expensive, risky over-treatment we get in today’s world of infrequent sampling, and more of the cheap, minor interventions – nutrition, activity, and the like – backed by data.

I think the concern behind statements about more data leading to over-diagnosis is that the current system isn’t going to be able to easily adapt to this new approach. I’d definitely agree – in the meantime there’s going to be some over-diagnosis until the medical system can wrap its head around a new, data-rich approach. There was even some over-diagnosis in my own recent past, due to poor medical information.

I’ve had to deal a lot with medical data over the past year and a half. I learned just how bad the state of medical information online is, and I could see how it would change a patient’s relationship with their doctor. Yet I’m lucky enough to have had the education and time to empower myself to be a better patient. I kept myself informed enough to have significant insight into what I was dealing with. I got my bloodwork history going back a decade, which allowed me to understand what my normal was and how concerned I should be about any particular result. And I logged all my physical activity, tracked medications, and wrote health journals, which let me understand my symptoms, find causality, and optimize treatment. I believe my experience overall is a crudely hacked-together glimpse of what a future patient experience could be.


Journalism and User Experience

In the age of the internet, quality journalism must be matched with quality presentation. This is something I’ve long advocated for, and something I tried to build into my company EdSurge when I was there. Strong brands like the NY Times, WaPo, the Economist, and others can get away with focusing less on presentation for a while longer, as their brands signal to readers that time spent reading is unlikely to be totally wasted. Newer publications, however, are stuck between high-end brands and content farms, and must find ways to quickly express their quality to newcomers. Design is that way.

Newcomers must innovate through design

In fact, newer publications have a leg up on the old guard, as they can establish a new culture where design and journalism are on equal footing. An example is Vox Media, who pride themselves on putting Design, Business, and Journalism on equal terms, with strong cross-functional teams assigned to articles.

These adaptations allow them to better push the boundaries of journalism and of educating readers. (The same is true of establishing better business models and distribution channels, and of adapting content to new mediums, but that’s a post for another day.) Older brands have struggled with these changes because they’re set in their ways. The New York Times Innovation Report from 2014 is an excellent read that describes the barriers the Times has faced internally as it struggled to adapt to the internet. While the Times started out ahead, its rate of improvement has been dwarfed by Buzzfeed’s, allowing the latter to move upmarket and build economies of scale in the meantime. Sound familiar? It should.

Where’s the data?

The Times has definitely improved in the past several years, and has done a great job with content innovation in projects like Snow Fall. However, this New York Times Magazine piece on California’s massive methane leak shows that the Times is still struggling with design. The content was a perfect opportunity to use data and charts to visualize the magnitude of the problem and the timeline of the discoveries and activities. These are hardly new in publishing – the Times even ran Nate Silver’s FiveThirtyEight for a while.

Instead we get a wall of text, with numbers and dates trapped in paragraphs, and the big number in the lede (97,100 metric tons) lacking context. I’m sure somewhere deep in the article the context is provided, but it isn’t readily available for the reader to quickly assess whether to dedicate 15 minutes to finding out its importance.

Specifically, the lack of subheads and figures makes the article really hard to skim, which is rather inexcusable in the age of internet publishing. The only images are a guy with a Rolls-Royce, an overhead shot of the area, a pool, and a gate. Not exactly compelling stuff, nor highly descriptive or additive. The article is very well researched and real journalism went into it, so it’s a shame it wasn’t put together well. Although this is an NY Times Magazine article, we should be long past the days of magazine-first publishing.

The goal is user experience

Journalism is hard. Deadlines creep up quickly, and the presentation of an article is often the first thing dropped. I’m not even a journalist, and I’ve noticed this in my own blogging. But thinking about design and presentation from the beginning helps clarify thought and guide decisions: what does the person know going in, what do they learn along the way, what is the best way to express each constituent idea, what is its structure? As with most things, the goal is a good user experience. Though their methods differ, these same principles apply to all content – textbooks, articles, video, podcasts, and games – and to both the content itself and its context, from discovery to delivery to engagement to reflection and sharing. We all seek to capture people’s attention and bring them to a new place of greater understanding, and the future of journalism lies in building organizations that value and excel in all these areas.

The reintroduction of iPad

Last week I wrote a series of articles called Thoughts on iPad (Part 1, Part 2, Part 3, Part 4) just prior to and after the iPad Pro 9.7″ launch. In these I discussed the bifurcation of the iPad line into regular and Pro, and the implications that has on launch dates, features and use cases, price, and relationship to past iPads.

The gist of the articles is that I believe Apple will rid itself of the ‘Air’ sub-brand and of numbered iPads with the launch of a new iPad in the fall, probably collapsing the iPad mini into that brand as well for the same reasons. The new iPad would run last-gen iPhone A-series chips, cost less, and possibly get a slightly changed chassis – lighter, maybe even thinner, maybe colors. Think of it a bit like how the iPod Touch line went thin, light, and many-colored, while iPhones got the latest bits.

After thinking about the fall launch more, I realize how silly it is for Apple to keep two generations of products on the market at the same time as it typically does. I also believe I was too conservative in my cost estimates. In short, I now believe the new iPad will be $399/$499, and the iPad Air 2 will be retired along with the iPad Mini 4, thus making a clean break and not offering previous-gen iPads for at least this first year of the new brand.

The reasons for the retirement are pretty clear:

  • Brand Confusion – The biggest reason. Having ‘iPad’ and ‘iPad Air 2’ on the market at the same time is confusing, as the latter sounds more impressive. Time to retire the ‘Air’ brand and the numbers, and use the whole stage to introduce the re-envisioned ‘iPad’, which may even get a chassis change and some colors.
  • Age & Support – The Air 2 is already ‘last years model’ with the iPad Pro 9.7″ now out. If it is kept around after October, it will have been on the market 3 years when retired. No reason to push out support of an old architecture even later (especially true if the A8-powered iPad Mini 4 is retired and replaced, along with the 6th gen iPod Touch. Good fall cleaning).
  • Inventory – Apple can spend the next 6 months clearing out the remaining Air 2 inventory, and they’ve already closed out the 128GB SKUs.
  • Cost – Since the next iPad likely inherits a lot of the other components of the Air 2, Apple can get good value out of the supply chain there. It’s likely the A9 + single LPDDR4 will be the same price as or cheaper than the A8X + dual LPDDR3. The combo also uses less power, meaning a smaller and cheaper battery and some more potential for chassis refinements.
  • Cost/Benefit – The A9 performs too similarly to the A8X for the latter to justify staying around at a much lower price point.

One other thing I blanked on last week: the regular iPad shouldn’t have a cellular option. That’s a Pro-level feature, and it’s costly to Apple because it doubles the SKUs. Although I’m not 100% confident they’ll add more colors, getting rid of cellular means far fewer SKUs and opens up the capacity to add a few more. With 8 colors and 2 storage tiers, they’d have the same number of SKUs as if they kept the existing 4 colors and cellular. And if you think about it, Gold and Rose Gold just don’t make much sense as colors for entry-level devices – better blue, red, green, black, yellow, orange, purple, silver. Fun stuff for anyone who wants a device that’s more personal and expressive, and great for kids and schools.
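
A quick sanity check on that SKU math (assuming, as in the paragraph above, two storage tiers in both scenarios):

```python
# Back-of-the-envelope SKU count (illustrative arithmetic only).
current  = 4 * 2 * 2   # 4 colors x 2 storage tiers x (Wi-Fi + cellular) = 16
proposed = 8 * 2 * 1   # 8 colors x 2 storage tiers x Wi-Fi only         = 16
print(current, proposed)  # 16 16: same shelf space, twice the colors
```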

I’m very confident now in this change-over, and any future speculation I’ll undoubtedly be doing here will be based on these assumptions. It just makes too much sense to try and position iPad any other way.

MacOS: The New OS X

According to Cult of Mac, some source code in the latest release of OS X refers to ‘MacOS’, hinting at a rebranding. Finally. Once Yosemite arrived, OS X should have been retired and MacOS 10: Yosemite should have been born. Before then, ‘MacOS 9: Mavericks’ would have stepped on the name of the old Mac OS 9 from the ’90s – not a huge deal, but enough of one that certain quarters would resist. ‘OS X’ does have some brand cachet, so I imagine some were hesitant to give that up.

But we’re long past the time when Apple had only one OS brand. The problem becomes apparent come June when iOS 10 is announced. Having ‘iOS Ten’ and ‘OS Ten’ sit side by side is no bueno marketing for a company that prides itself on clarity in a sea of products like the “LG G Tab F 8.0”. And the iOS name collision isn’t the only problem. For instance, it’s not even supposed to be pronounced O-S-X, but O-S-Ten. On top of this don’t-pronounce-it-literally hogwash, the numbering further word-soups it, with ‘X’ being followed by ‘10.whatever’, and marketing is stuck wondering when exactly it should communicate the version number. Last year it would have been accurate to call Yosemite ‘OS Ten Ten Dot Ten’. Nothing quite like point-release version numbers to make things seem unnecessarily technical (see the LG example above).

Simple brands are the best brands – they’re memorable, unintimidating, and easy to communicate. Apple has long enjoyed a high ground of simplicity in brands relative to their peers, even when they’ve added secondary brands like ‘Air’, ‘Mini’, etc. Those secondary brands have helped distinguish large new changes in a product line, but they’ve added some baggage. With the relaunch of ‘MacBook’, the upcoming ‘iPad’ rebrand, and now MacOS, we’re seeing Apple put their best foot forward again.