Really interesting examination of the further evolution of our for-profit healthcare system [1], specifically the entrance of private equity and the disastrous consequences of their actions. I’ve long thought the correct path forward for healthcare in the US should be single payor, but the political climate and deep entrenchment of the current system make such a change impossible. However, as healthcare consumes more and more of GDP, attracting more and more financial sharks (like private equity firms), producing more and more tragic stories like Hahnemann, we could witness a violent transformation of the way we finance healthcare in the US.

Also, note this brazen example of injustice:

When I spoke to Freedman by phone last summer, he had returned to California, where he had bought a new eight-thousand-square-foot house south of Los Angeles, with twenty-foot ceilings and a stone spa, for nearly seven million dollars…He was asked to step down from his board position at the University of Southern California. “That really hurt me,” he said.

Hahnemann patients suffered serious health consequences and untold psychological and financial impact. Hahnemann employees similarly suffered the stress of job insecurity and an uncertain future. Freedman, meanwhile, went back to California and bought a new mansion.


  1. Yes, the American healthcare system is a for-profit system and we should be framing it that way in all our discussions because it has implications for how the system functions and what levers are available for reform.  ↩

Siddhartha Mukherjee:

Finally, we need to acknowledge that our E.M.R. systems are worse than an infuriating time sink; in times of crisis, they actively obstruct patient care. We should reimagine the continuous medical record as its founders first envisaged it: as an open, searchable library of a patient’s medical life. Think of it as a kind of intranet: flexible, programmable, easy to use. Right now, its potential as a resource is blocked, not least by the owners of the proprietary software, who maintain it as a closed system, and by complex rules and regulations designed to protect patient privacy. It should be a simple task to encrypt or remove a patient’s identifying details while enlisting his or her medical information for the common good. A storm-forecasting system that warns us after the storm has passed is useless. What we want is an E.M.R. system that’s versatile enough to serve as a tool for everyday use but also as a research application during a crisis, identifying techniques that improve medical outcomes, and disseminating that information to physicians across the country in real time.

I don’t disagree with this sentiment at all, but this paragraph is assuredly much easier to write than implement. Just as Mukherjee points out earlier in this piece that “medicine isn’t a doctor with a black bag,” [1] EMRs are not simple digital copies of paper notes. These are highly complex systems encompassing clinical notes, order writing, laboratory and pathology and radiology results, vital sign tracking, medication administrations, and on and on. And the data these systems generate is high-dimensional. Even if we could easily “encrypt or remove a patient’s identifying details,” [2] I am skeptical that the data would prove easily interpretable. We will need investments not just in ‘making our EMRs better’ but also in data scientists and clinical researchers who can leverage that data to improve our pandemic response.


  1. This is really a great quote overall: “Medicine isn’t a doctor with a black bag, after all; it’s a complex web of systems and processes. It is a health-care delivery system—providing antibiotics to a child with strep throat or a new kidney to a patient with renal failure. It is a research program, guiding discoveries from the lab bench to the bedside. It is a set of protocols for quality control—from clinical-practice guidelines to drug and device approvals. And it is a forum for exchanging information, allowing for continuous improvement in patient care.”  ↩

  2. You don’t realize how many places identifying information is within a patient’s “chart” until you start trying to remove it. Think about a consult note that I write. Yes, the patient’s name, medical record number, and many other identifiers are in the document headers in structured fields. This could easily be removed. But, I also use the patient’s name and potentially other identifying information throughout the note. So, then you want to scan the note text itself and character match the patient’s name and remove any instances where you find it. What about when I misspell the name? Or use a nickname? Or refer to their parents and use their names? The complexity of the problem grows exponentially.  ↩
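The failure mode described above is easy to demonstrate. Here is a toy sketch of the naive character-matching approach (the note text and patient name are invented for illustration):

```python
import re

def naive_deidentify(note: str, patient_name: str) -> str:
    """Remove exact, case-insensitive matches of the patient's name."""
    return re.sub(re.escape(patient_name), "[REDACTED]", note, flags=re.IGNORECASE)

note = ("Jonathan Smith presents with fever. Jon's mother, Mary Smith, "
        "reports that Jonathon has been coughing for three days.")

scrubbed = naive_deidentify(note, "Jonathan Smith")
print(scrubbed)
# Exact matches are removed, but the nickname "Jon", the misspelling
# "Jonathon", and the mother's name all slip through untouched.
```

This is exactly the footnote's point: exact matching catches the structured-header case but not nicknames, misspellings, or family members, which is why real de-identification pipelines need fuzzy matching and named-entity recognition rather than simple substitution.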

Setting aside the creepiness of these robots, this is an interesting use case as field hospitals may become more prevalent. I am also interested to see what the robots’ creators can come up with in terms of sensors for assessing things like vital signs, given the complexity of the sensors the robots already employ just to navigate their environment.

Epidemiologist Gregg Gonsalves recently called for “a WPA for public health,” referring to the Depression-era program that employed millions to build roads, parks and other projects that endure to this day.

I think this is a good way to frame the scale we need for contact tracing and the benefit it can have for employment. We need to massively invest in our public health departments right now because they will help us get through all phases of this pandemic (not just the acute crisis).

Two things to say upfront: (1) this is obviously a preprint and has not been peer-reviewed so extra caution is warranted when reading and evaluating such studies [1], and (2) we have no idea right now if people with seroconversion have immunity nor how long such immunity may last if conferred.

The goal of this study was to ascertain the seroprevalence of antibodies to SARS-CoV-2 in a county in Northern California by sampling the population and creating population-weighted estimates. I think there are 3 important questions to have in mind when evaluating this study:

  • How good was their sample?
  • How good was the test?
  • How good was their analysis?

How good was their sample?

They used Facebook ads to find volunteers for testing. I see two major problems with this. First and most obvious, Facebook users are not a representative cross-section of the US population. They mention targeting ads to balance their sample for under-represented zip codes in the county, meaning their sample should be representative of the county by zip code. Despite this effort, they had very uneven participation across the county. And this does not obviate the bias introduced by recruiting via Facebook ads: Facebook users tend to be younger and wealthier. Second, participants voluntarily clicked on the ad and completed a form to participate. People who had been sick in the past few months with COVID-like symptoms would surely be more likely to volunteer, so their sample was almost certainly enriched with people more likely to have had COVID. Some basic stats on the whole group shown the Facebook ads compared to those who clicked and fully participated would be quite informative (Facebook certainly has detailed information on both groups).

In addition to these recruitment biases, they used drive-through testing. I presume that if you didn’t have a car, then you couldn’t participate. This again introduces some bias [2].

How good was the test?

They very smartly did not rely exclusively on the manufacturer’s reported test performance and did their own validation. Their validation differed dramatically from the manufacturer’s (manufacturer’s sensitivity = 92%; Stanford’s validation sensitivity = 68%). Specificity was high in both analyses. This means there were few false positives in their testing and possibly many false negatives. Overall, these test characteristics were reasonable for the purposes of this study, if their specificity results are to be believed.
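To see why the sensitivity discrepancy matters, one standard way to correct a raw test-positive rate for imperfect test performance is the Rogan-Gladen adjustment. The sensitivities below are the ones quoted above; the apparent positivity (1.5%) and specificity (99.5%) are illustrative numbers I made up, not the study’s actual figures:

```python
def rogan_gladen(apparent_prevalence: float, sensitivity: float, specificity: float) -> float:
    """Adjust a raw test-positive rate for imperfect sensitivity/specificity."""
    return (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)

apparent = 0.015  # hypothetical raw positivity, for illustration only
for label, sens in [("manufacturer (sens 92%)", 0.92),
                    ("Stanford validation (sens 68%)", 0.68)]:
    adjusted = rogan_gladen(apparent, sens, 0.995)
    print(f"{label}: adjusted prevalence = {adjusted:.3%}")
```

Note that the lower Stanford sensitivity pushes the adjusted prevalence estimate up, since more true positives are presumed missed; which validation you trust therefore changes the headline number.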

We should keep in mind the logistics of trying to complete this study. They do not mention a goal sample target [3] but presumably were trying to include as many people as possible. To this end, they used a point-of-care lateral flow assay using fingerstick blood samples. Accuracy may have been improved by using a venous blood sample and/or an ELISA, but both would be more time consuming and expensive. Unfortunately, the validation of the test kits completed by Stanford did not use capillary blood; they used serum samples. It would have been more accurate (though very difficult) to complete the validation under the same conditions as the actual conduct of the study.

Also somewhat interestingly, the authors list Premier Biotech in Minneapolis as the manufacturer, but they are only a distributor. The manufacturer is actually Hangzhou Biotest Biotech, Co., Ltd. Premier Biotech seems to exclusively work in illicit drug testing.

How good was their analysis?

Population-based estimating is not in my wheelhouse, so I will leave deeper critique to others with better insights. I will say that the steps they took seem reasonable. What concerns me somewhat is that when they estimate the population prevalence and adjust for clustering (as some participants brought children and were from the same household), they get a relatively wide confidence interval (1.45–4.16).
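For intuition, the population-weighting step amounts to post-stratification: each stratum’s raw positivity is reweighted from its share of the sample to its share of the county. A minimal sketch with entirely made-up strata and numbers:

```python
# Hypothetical strata: county population share, share of the sample,
# and raw test positivity (all invented for illustration).
strata = {
    "zip_A": {"pop_share": 0.50, "sample_share": 0.70, "positivity": 0.010},
    "zip_B": {"pop_share": 0.50, "sample_share": 0.30, "positivity": 0.030},
}

# Crude estimate: average positivity weighted by who actually showed up.
crude = sum(s["sample_share"] * s["positivity"] for s in strata.values())

# Post-stratified estimate: reweight each stratum to its population share.
weighted = sum(s["pop_share"] * s["positivity"] for s in strata.values())

print(f"crude: {crude:.3%}, population-weighted: {weighted:.3%}")
```

In this toy example the under-sampled, higher-positivity zip code pulls the weighted estimate above the crude one; weighting corrects for uneven participation by zip code, but it cannot fix the volunteer-selection bias discussed above, which operates within every stratum.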

With all of that being said, this is the study we’ve been looking for. There are a lot of people who have been sick with COVID that we never knew about. Unfortunately, this study was not designed to, and does not, provide insight into the meaning of being seropositive or what higher seropositivity within a community might mean for public health measures. If there was one thing I would change about this study, it would be the sampling methodology (both using Facebook ads and taking volunteers). It would be interesting to hear more from the authors about this choice. It may have taken more time, staff, and money, but developing a population-based sampling method (something like randomized cluster sampling) and contacting individual households by phone or mail would have been a stronger approach. Regardless, this represents only one small geographic region and we will need more studies like this (hopefully with better sampling) to truly understand seroprevalence in the US.


  1. To be perfectly honest, peer review in our current climate is not offering much protection from crap studies being published. We are starving for any information that may help us so editors in this climate seem to be pushing out any studies that provide some insight.  ↩

  2. This may be a minor concern in a place like Northern California where most people have cars. However, in East Coast cities like NYC or Boston, this would produce tremendous bias.  ↩

  3. Something a peer-reviewer will hopefully point out. They should indicate how they decided to stop recruitment.  ↩

Test > Trace > Quarantine (TTQ). This needs to be our mantra for getting back to normal. We have been making progress with testing capacity, though there is still a long way to go to ensure adequate supplies. However, very few are focused on tracing or quarantine. In order to prevent a secondary outbreak after initial containment, once positive cases are identified, their close contacts need to be identified and quarantined. This is how you break the chain of transmission and all 3 parts are required. Massachusetts, using the expertise of Partners in Health, is putting in place the necessary mechanisms for large-scale contact tracing. It’s unclear if 1,000 contact tracers will be enough, but that is a great start. Next step, quarantine.

Jupyter notebooks essentially allow you to “show your work” when doing data analysis. There are additional tools like Shiny apps that don’t provide full analytical code but allow you to expose more of your data analysis than simple 2D images printed in a journal. These things really are the future for clinical research. I have not seen any utilized in the major medical publications, but I hope editors start including them soon.

More microbiome research today. This seems like such a no-brainer that it shouldn’t even require a study. However, weird things happen in medicine, so I’m glad they are examining this practice through a controlled study.

This report doesn’t seem to indicate it, but it would be interesting to examine both the mother’s and baby’s microbiomes throughout the process.

I love studies like this that examine how antibiotics are affecting our normal bacterial flora. This new microbiome paper in Nature Microbiology [1] examines how broad spectrum antibiotics change the gut microbiome immediately following administration and how it recovers over time.

I think Ars missed it with their headline. I mean, it is notable that it takes around 6 months for the gut microbiome to recover after broad spectrum antibiotics. However, this paper also showed that immediately following administration of broad spectrum antibiotics, they saw blooms of pathogenic bacteria like Escherichia coli, Veillonella spp., Klebsiella spp., E. faecalis and F. nucleatum. This raises the question (at least in my mind): does broad spectrum antibiotic use make us susceptible to serious bacterial infections for a period while our normal gut flora is restored? We know this is true for Clostridium difficile infection (and these researchers also showed it survived their broad spectrum regimen in high numbers). This period of vulnerability may be less important for otherwise healthy people, but seems to be critically important for patients undergoing chemotherapy or bone marrow transplant who get blasted with antibiotics for prolonged periods when they are neutropenic and febrile.

A couple notes on their methodology:

  • The broad spectrum antibiotic regimen used included vancomycin, meropenem, and gentamicin; indeed very broad! I’m a little surprised two nephrotoxic agents (vanc and gent) were used. Seems a similar “hit” to the gut microbiome could be achieved without the risk of gentamicin (or perhaps a fluoroquinolone could have been included though that raises its own safety issues).
  • These participants were only given 4 days of antibiotics. It would have been a little more useful if they had only done 2 days (mimicking a typical 48-hour rule-out). On the flip side, almost all treatment courses of antibiotics are much longer than 4 days, so it would be interesting to repeat this methodology with a longer course and examine the same trends.

There’s some great microbiome research going on out there!!


  1. Palleja A, Mikkelsen KH, Forslund SK, Kashani A, Allin KH, Nielsen T, Hansen TH, Liang S, Feng Q, Zhang C, Pyl PT, Coelho LP, Yang H, Wang J, Typas A, Nielsen MF, Nielsen HB, Bork P, Wang J, Vilsbøll T, Hansen T, Knop FK, Arumugam M, Pedersen O. Recovery of gut microbiota of healthy adults following antibiotic exposure. Nat Microbiol. 2018 Nov;3(11):1255–1265. doi: 10.1038/s41564-018-0257-9. Epub 2018 Oct 22. PubMed PMID: 30349083.  ↩

Tim Cook:

Taken to the extreme this process creates an enduring digital profile and lets companies know you better than you may know yourself. Your profile is a bunch of algorithms that serve up increasingly extreme content…We shouldn’t sugarcoat the consequences. This is surveillance.

I don’t think Tim Cook was thinking about it, but the combination of genetic or medical data with our “enduring digital profile” is even more scary. While many of the direct-to-consumer genetic testing companies have taken great care in crafting their privacy policies, data breaches or a change in a company’s governance/business model could create significant harm.

As more of our lives are captured and stored digitally, we need to think carefully about not only the implications of that digital data itself but also what it means when linked to genetic or digital medical data.

Long but well worth the read. While it’s ostensibly about Bitcoin, it’s more about the current state of information technologies.

An excellent piece on the state of electronic medical records, written primarily from an administrator’s standpoint. Well worth the long read; always good to know the enemy’s perspective. A few thoughts:

To date, the priorities of most health care organizations have been replacing paper records with electronic ones and improving billing to maximize reimbursements. Although revenues have risen as a result, the impact of IT on reducing the costs and improving the quality of clinical care has been modest, limited to facilitating activities such as order entry to help patients get tests and medications quickly and accurately.

The quote represents the crux of the problem: EMRs to date have been implemented to maximize billing (read: make sure no money is left on the table). Hospital administrators have assessed the EMR options and purchased the products that best achieve this goal. Doctors, nurses, and other care personnel have rarely been involved in these decisions; therefore, the products selected are not optimized for patient care (read: no increased productivity, only more headaches). Until doctors and nurses have direct input into purchasing decisions, I think there is little hope for this to change. [1]

Relatively few organizations have taken the important next step of analyzing the wealth of data in their IT systems to understand the effectiveness of the care they deliver. Put differently, many health care organizations use IT as a tool to monitor current processes and protocols; what only a small number have done is leverage those same IT systems to see if those processes and protocols can be improved—and if so, to act accordingly.

I would say that most hospitals aren’t even effectively using their EMR data to “monitor current processes and protocols”. Clinical informatics (the nascent field of applied IT in healthcare) and quality improvement are only beginning to come together in large academic medical centers to nail down effective evaluation of their ongoing data streams. It is going to take time and development of talent/expertise in these areas before the true potential of EMR data for improving outcomes is harnessed. It will take even more time for these efforts to then translate to smaller hospitals and private practices.

So how can health care organizations realize the promise of their large and growing investments in IT to help lower costs and improve patient outcomes?

I know this is the Harvard Business Review, but please–improving patient outcomes should always come before lowering costs (generally improving outcomes lowers costs).

Two key constituencies outside of technical personnel—senior leaders and clinicians—must play significant roles. Leaders are crucial because they will have to enlist clinicians in the cause by persuading them that the effective use of IT is central to delivering higher quality…

If IT is implemented in a way that makes clinical workflows efficient, then no convincing will be necessary. Make it easier for doctors and nurses to do their jobs, feed data back to them to help them be better at their jobs, and minimize technical glitches. Quite simple.

The pledge to improve quality should be more than words; it must be translated into visible practices.

Duh. Again, I know this is a business journal, but does that really need to be said? This article could have been much shorter.

Besides acquiring the necessary hardware and software, leaders must make complementary changes in their operating and business models to generate and capture value. Of primary importance is investment in dedicated information-technology and analytics staff—individuals tasked with managing the IT system or analyzing the data it contains.

This isn’t said until the last part of the article, but at least it was said. The IT infrastructure in a large academic medical center is huge; their staff needs to be huge too.

All in all, a relatively good article, but could have really benefitted from a physician perspective amongst the four authors.


  1. It’s a pipe-dream, but I long for the day when each doctor can pick their own interface to the EMR. That is, instead of my hospital purchasing Epic or Cerner for everyone to use, it would run a “dumb” EMR backend, and each clinician could choose whatever client they want to access it. Twitter clients are an example of this in action. With a Twitter account, I can choose to access it via the Twitter website, Tweetbot, Twitterrific, Echofon, or any other client. It’s all the same Twitter service, but each client presents the information and interaction in its own unique way, with consequent pros and cons for each.  ↩

Great podcast episode looking at the history of the stethoscope and its role today in the practice of medicine. Very interesting how the introduction of the stethoscope in the 19th century led to worries about technology coming between doctors and patients, which parallels our views today about any new diagnostic modality.

No piece of technology can replace the physical exam when you consider timeliness, cost, comprehensiveness, and the connection it provides for the doctor-patient relationship.

As a follow-up to my previous post, Dr Bryan Vartabedian talking about applying artificial intelligence to EKG interpretation and medicine in general:

Machines will evolve to do ‘mindless’ things like identify heart rhythm disturbances. As that happens our work as doctors will be redefined around the things that only we can do as humans. Those things involving, as [Deloitte’s John] Hagel suggests, “imagination, creativity, curiosity and emotional and social intelligence.”

For the record, I never look at automated EKG reads. I’ve never been able to trust them because of all the reasons Dr John Mandrola cites.

Excellent piece from Siddhartha Mukherjee on the state of advanced computer learning in medicine.

While this piece is very long, it is well worth the read. Mukherjee takes care to highlight the promise of computer-aided diagnosis as well as the potential pitfalls.

Sebastian Thrun, formerly of Stanford’s Artificial Intelligence Lab and Google X who has worked on machine-learning for medical diagnosis, discussing the impact of artificial intelligence in medicine:

“I’m interested in magnifying human ability,” Thrun said, when I asked him about the impact of such systems on human diagnosticians…“The industrial revolution amplified the power of human muscle. When you use a phone, you amplify the power of human speech. You cannot shout from New York to California”—Thrun and I were, indeed, speaking across that distance—“and yet this rectangular device in your hand allows the human voice to be transmitted across three thousand miles. Did the phone replace the human voice? No, the phone is an augmentation device. The cognitive revolution will allow computers to amplify the capacity of the human mind in the same manner. Just as machines made human muscles a thousand times stronger, machines will make the human brain a thousand times more powerful.” Thrun insists that these deep-learning devices will not replace dermatologists and radiologists. They will augment the professionals, offering them expertise and assistance.

We need such augmentation in medicine. The current practice of medicine is incredibly labor intensive, not only from the well known burden of paperwork and administrative tasks, but also the fundamental process of diagnosis and treatment. For complex diseases, physicians must integrate a long patient history and disease course with hundreds of clinical data points. This process is cumbersome and error-prone. The complexity of modern medicine is only going to grow and with it our need for augmented medicine.