Exits & Outcomes Newsletter
Issue 351
Welcome back to E&O: a paying subscribers-only weekly newsletter focused on three areas of health tech: FDA-regulated software devices, digital health as an employee benefit, and national virtual clinics.

This week’s newsletter is a little brief because of the week’s big event. Before we dig into some new letters to the FDA about how it should regulate generative AI-based software as a medical device for mental health… three quick notes:
- Spring Health’s covered lives number has jumped — significantly — again. As E&O previously reported, the company said it had 11 million covered lives in September 2024. It increased to more than 20 million in February 2025. Fast-forward to October 2025 and Spring Health now claims to have more than 31 million covered lives: “Today, more than 31 million people worldwide have access to Spring Health. We’re trusted by leading employers, health plans and channel partners, including Target, The Coca Cola Company, BlackRock, Microsoft, Pfizer, Wawa, Highmark and Guardian.”
- Another quick update on Big Health: According to a government filing from last month, the company’s entire board of directors has resigned with the exception of CEO Yael Berman. E&O previously reported that three board members left in June and July 2025 — now three more departed in September. Does that indicate an acquisition is in the works? A shutdown? What’s your best guess?
- Also: If you have been following my Oshi Health patient volume tracker, you know the latest update came when the company posted on its website that it was now trusted by 30K+ patients. Well, in the second week of October Oshi reduced that number to 27K+. Reminder: It jumped from 23K+ to 30K+ in the second week of September, so I guess that was a bit premature.
Was this forwarded to you? Click this link for more info on how to become a paying subscriber.
Mental health tech companies, groups write FDA letters about how to regulate GenAI
A number of companies submitted comments to the FDA on October 17th, the day after the early deadline for comments that will inform the FDA’s upcoming meeting on how to regulate GenAI in mental health. Here are a few highlights from the companies and groups that sent the FDA letters:
Spring Health offered up its thoughts on how the FDA should evaluate generative AI-enabled mental health tools, and its recommendations align closely with Spring’s own AI evaluation initiative, which it calls Validation of Ethical and Responsible AI in Mental Health, or VERA-MH for short. More:
“Spring Health serves as a model for how AI can be harnessed to Make America Healthy Again… Spring recommends that to avoid risks to health and mitigate harm, generative AI-enabled mental health tools should be evaluated, and the evaluation should be:
- Clinically Informed. Experienced clinicians should be included at every stage of the design and validation process.
- Narrowly Scoped. Because safety in mental health is difficult to quantify, the evaluation should focus on clear, well-defined concerns.
- Multi-turn. Single-turn conversation evaluations, which involve sending a single prompt to the AI system and assessing its response, are not enough to evaluate for clinical safety. Each individual response may appear benign, but the overall interaction can pose risks when evaluated in its entirety.
- Automated. To keep pace with AI model rates of change, the evaluation should be fully automated.
- Model Agnostic. The evaluation should be agnostic of the specific AI system. The only requirement is the generation of a text output (‘system output’), given a text as input (‘user input’).
- Multi-metric. Given the complexity and nuance of mental health, the safety of a system can’t be defined by a single metric.”
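Those requirements translate fairly directly into code. Below is a minimal sketch, assuming only the plain text-in, text-out interface the letter describes, of what a multi-turn, model-agnostic, multi-metric evaluation harness along those lines could look like. To be clear, the names and structure here (ChatSystem, Judge, run_conversation) are my illustrative assumptions, not VERA-MH’s actual implementation:

```python
from typing import Callable

# Model-agnostic system under test: the only requirement, per the letter,
# is a text output given text input. Here the transcript so far is the
# input, so the system can carry conversational context.
ChatSystem = Callable[[list[dict]], str]

# A judge scores an entire transcript, not a single reply, because each
# individual response may look benign while the overall interaction is not.
Judge = Callable[[list[dict]], float]

def run_conversation(system: ChatSystem, user_turns: list[str]) -> list[dict]:
    """Drive a scripted multi-turn conversation; return the full transcript."""
    transcript: list[dict] = []
    for turn in user_turns:
        transcript.append({"role": "user", "content": turn})
        reply = system(transcript)
        transcript.append({"role": "assistant", "content": reply})
    return transcript

def evaluate(system: ChatSystem,
             scenarios: list[list[str]],
             judges: dict[str, Judge]) -> dict[str, float]:
    """Fully automated, multi-metric evaluation: run every scenario through
    the system and average each judge's score across scenarios."""
    totals = {name: 0.0 for name in judges}
    for user_turns in scenarios:
        transcript = run_conversation(system, user_turns)
        for name, judge in judges.items():
            totals[name] += judge(transcript)
    return {name: total / len(scenarios) for name, total in totals.items()}
```

Note how the “clinically informed” and “narrowly scoped” criteria would live in the scenarios and judges, which clinicians author, while the harness itself stays agnostic about which model it is testing.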
Slingshot AI, which launched in July by touting its offering, Ash, as “the first AI designed for therapy,” wrote the FDA to make clear that Ash is just a wellness app and should not be regulated like a medical device:
“Slingshot AI continues to believe that general wellness apps like Ash can provide enormous benefit at low risk by simply instituting basic guardrails and transparency. In the future, these apps may be further refined and rise to a level that is more consistent with treatment and diagnosis. When that time comes, we encourage the DHAC to recommend that the FDA consider instituting a more flexible, contemporary risk-aligned regulatory framework that accounts for rapidly evolving technologies, including generative AI. In the meantime, we would urge the DHAC to encourage the FDA to formalize a dedicated ‘Wellness Guidance’ for GenAI-enabled digital mental health apps like Ash. To be clear, this means a dedicated FDA wellness designation with simple guardrails and transparency that promotes resilience, daily support, and wellbeing, that is free to provide its services to the American people without the disproportionately burdensome requirement of high-risk applications.”
Click Therapeutics covered a lot of ground in its comments to the agency, especially around getting its preferred PDURS (Prescription Drug Use-Related Software) pathway into the discussion. Among other things, Click suggested that the FDA update its guidance documents on which software functions count as regulated devices to include an example focused on GenAI:
“Proposed ‘Example of Software Functions that are the focus of FDA’s regulatory oversight (Device Software Functions and Mobile Medical Apps)’
- A software function that uses Gen AI to engage in personalized dialogue to diagnose, assess, or create/manage individualized mental health treatment, or representing itself as a licensed professional or “therapist.” These functions have a device intended use by nature, whether by directly providing diagnosis or treatment, or implying it by presenting in dialogue as a therapist or healthcare agent (virtual or otherwise).”
Otsuka Precision Health’s letter to the FDA went on a bit of a tangent. The company asked the agency to create a new rule for devices, similar to one it created for prescription drugs last year:
“This final rule establishes the conditions under which a prescription only drug product can be used by consumers without the supervision of a practitioner licensed by law to administer such drug when an additional condition to ensure appropriate self-selection or appropriate actual use, or both, is implemented. OPH asks the Committee to recommend that Center for Devices and Radiological Health (CDRH) consider whether [a similar rule] for medical devices broadly, or, at minimum for SaMD, would be a viable pathway to promote greater access to nonprescription SaMD for certain conditions. We encourage the Committee to consider ways in which validated, patient-facing software-based screening tools can be leveraged to effectively aid patients in understanding their condition and available treatment options. Leveraging these digital screening tools as a part of the product or the patient information may help to ensure appropriate use of the tools in appropriate non-prescription contexts. Such an option could significantly expand access to these clinically validated, FDA-vetted technologies in a way that allows for scalable implementation independent of individual HCP participation.”
In its letter to the FDA, Big Health discussed how challenging it will be to build evidence that GenAI tools are actually clinically effective:
“Broadly, GenAI has significant potential to support cleared DTx products in helpful but non-therapeutic ways. Patient onboarding, ‘nudges’ or reminders to use the product, or more personalized interactions leading to improved engagement, could all be safely and effectively implemented without altering or compromising the underlying therapeutic content. To the extent that GenAI is introduced into therapeutic content, we believe that effectiveness must be established through high quality clinical research. We see inherent challenges in constructing robust scientific evidence validating an open model GenAI tool. Because each interaction will be unique, there is no guarantee that one individual patient experience will mimic another. There is an opportunity here to use validated reporting tools at scale to obtain large quantities of real world evidence, to minimize any statistical error in analyzing the data and drawing conclusions about effectiveness for these more variable interventions. In addition to unique challenges with evidence generation, there are unique challenges for risk mitigation in GenAI-based therapeutics. New mitigations will likely need to be implemented to address known product risks. We do not, however, see GenAI presenting a new safety risk relative to the patient’s pre-existing condition(s).”
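One note on the statistics here: Big Health’s “more variable interventions” point scales in a predictable way, because the standard error of a mean effect estimate shrinks with the standard deviation divided by the square root of n. In other words, doubling the variability of patient interactions roughly quadruples the number of observations needed for the same precision. A back-of-the-envelope sketch (the numbers below are mine, for illustration, not from the letter):

```python
import math

def required_n(sd: float, target_se: float) -> int:
    """Observations needed for the standard error of the mean, sd / sqrt(n),
    to fall to target_se. Solves sd / sqrt(n) = target_se for n."""
    return math.ceil((sd / target_se) ** 2)

# Illustrative only: a fixed-content therapeutic vs. a more variable,
# open-ended GenAI interaction, each measured to the same precision.
print(required_n(sd=1.0, target_se=0.05))  # 400 observations
print(required_n(sd=2.0, target_se=0.05))  # 1600 observations
```

Which is presumably why the letter points to validated reporting tools at scale: real-world evidence is one of the few practical ways to reach those larger sample sizes.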
The American Psychiatric Association had a shortlist of things it hopes the FDA will implement around GenAI:
- “The FDA should establish a clear framework for evaluating the safety and effectiveness of generative AI-enabled mental health devices, including the use of validated psychiatric assessment tools and ongoing post-market surveillance to monitor evolving model behavior.
- That the FDA requires evidence demonstrating that AI-enabled systems function fairly across demographic, cultural, and diagnostic subgroups.
- The adoption of standardized labeling framework for AI-enabled devices that discloses the data sources used for training and validation, the model’s intended scope, capabilities, and limitations, and known uncertainty levels or confidence intervals for outputs.
- Regulatory standards specifying whether patient data are processed locally or via cloud services. Secondary data use, such as for marketing or retraining, should be prohibited without explicit, informed consent.
- The FDA shift liability to developers from clinicians and health systems to better balance the risks and duty of care of a tool that is being implemented in clinical care.”
SonderMind urged the FDA to remain flexible:
“The field of mental health AI is diverse–ranging from self-help chatbots to clinical decision support–so a rigid, one-size-fits-all validation approach could inadvertently hamper beneficial technologies.
- Flexible Validation: FDA should allow a variety of evidence to demonstrate an AI tool’s safety and effectiveness. In some cases traditional trials may be feasible; in others, real-world evidence or observational studies might suffice. The key is to focus on meaningful outcomes and performance standards, rather than prescribing one uniform testing method for all products. FDA can provide guidance on best practices, but developers should have leeway to choose validation methods suited to their tool and patient population, as long as the evidence is sound.
- Proprietary Information: We also urge caution on any requirements to publicly disclose sensitive proprietary details, such as source code or algorithms. Regulators should have access to detailed information as needed for evaluation, but this can be done confidentially. Forcing companies to reveal trade secrets or extensive training data is not necessary for accountability and would discourage innovation. We support transparency about performance–for example sharing summary results, intended use, and limitations–without requiring companies to expose their intellectual property. This balanced approach to keeping proprietary details confidential while being transparent with performance data protects both patients and the incentives to develop new mental health technologies.”
Links to E&O’s reports, databases, newsletters
Click below for dedicated pages for each of those categories:
- Read through the long-form E&O research reports here.
- Search and sort the E&O databases here.
- Revisit hundreds of past issues of E&O newsletters here.
