What mental health apps get wrong about how people actually function under stress
- Salvo La Rosa

- Apr 1

Many mental health apps are thoughtful, well-designed, and built with good intentions. They often draw on established psychological ideas and aim to make support more accessible.
And yet, something often doesn’t quite work. Not always, not for everyone, but often enough to be noticeable.
If I think about moments in my clinical work where someone is overwhelmed or dysregulated, it becomes harder to imagine these kinds of interactions landing as intended. A prompt like “what are you feeling right now?” may seem reasonable, but in practice it can feel out of reach—or simply miss the mark entirely. At times, the response is closer to irritation or disengagement than reflection.
In a real-life setting, an intervention that relies on reflection or analysis can easily miss the person altogether. Someone may struggle to hear it, or not register it at all, if their nervous system is in a state of survival.
Not because the question is wrong in itself, but because it assumes a kind of access that isn’t always available in that moment.
1. Support can be reduced to interaction
Many apps are designed to feel responsive and understanding. They simulate conversation—asking questions, reflecting back, offering validation.
On the surface, this can resemble something familiar, but much of what makes therapy helpful is not simply the exchange of words. It is the presence of another person—the sense of being met, of being accompanied, of having one’s experience registered by someone who is actually there.
A significant part of the work happens in that shared space. It involves subtle shifts in tone, pacing, and attention. At times, it is less about insight and more about what happens between two people.
Digital tools can simulate aspects of this, but there is a difference between a response that is generated and one that emerges from another person’s lived experience.
That difference tends to become more apparent in moments of greater difficulty.
2. The person’s state is known—or at least knowable
Most apps rely on what the user says or inputs. From that, they try to guide the interaction. This assumes that the person’s state is both accessible and accurately represented. In practice, this is not always the case.
People under stress may minimise, overstate, or simply not know what they are experiencing. Language can become unreliable. At times, there is very little capacity to describe anything at all.
Even in therapy, where attention is deliberately focused on tracking shifts in the person’s state, things are missed. Pacing can be off. Interventions can land differently than intended. An app, by comparison, is largely blind.
It responds to signals, but it does not perceive in the way another person does. And it cannot adjust in real time to subtle changes in the same way.
3. Knowing what to do means being able to do it
Many tools are built around strategies—things to try, ways to reframe, techniques to apply.
But in practice, there is often a gap between knowing and being able to do something.
Many people already understand what might help. They are familiar with techniques and can describe them clearly. In the moment, however, those strategies are not always available.
This is not a lack of understanding. It reflects a shift in what is accessible when someone is overwhelmed, anxious, or shutting down.
What is available in one state may not be available in another.
4. More engagement is always helpful
From a product perspective, engagement is a measure of success. The longer someone interacts with an app, the better.
Psychologically, this is not always the case. There are moments where more prompts, more questions, more interaction can feel like too much. When someone is already overwhelmed, additional input can increase that sense of overload. When someone is shut down, it may simply not reach them.
Sometimes what is needed is not more interaction, but less.
Where this leaves us
None of this means that mental health apps are not useful. They can be supportive, informative, and at times genuinely helpful, but they tend to work best when the person using them already has the capacity to engage.
The difficulty is that this is not always when support is most needed.
A different direction
If these limitations are taken seriously, the question becomes slightly different.
Less about what the right prompt or intervention is, and more about what is actually available to the person in that moment.
This might mean:
- starting from the person’s current state, rather than assuming reflection
- recognising that knowing something is not the same as being able to use it
- reducing the amount of input when someone is already overwhelmed
- acknowledging that some forms of support rely on the presence of another person, not just interaction
It may also point toward a different kind of development altogether.
As a self-confessed geek with a soft spot for gadgets, wearables, and health tech, I am curious about the many tools that already exist to measure physiological data, such as heart rate variability (HRV), to track aspects of the nervous system in real time.
They each capture something, but the integration is not yet seamless. There remains a gap between measurement and meaningful support.
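To make the measurement side concrete: many wearables expose beat-to-beat (RR) intervals, and one standard time-domain HRV metric derived from them is RMSSD. The sketch below is not from this article—it is a minimal illustration, assuming a list of RR intervals in milliseconds as input, of how little distance there is between raw sensor data and a single HRV number (and, by implication, how much interpretive distance remains after that).

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive differences between
    consecutive inter-beat (RR) intervals, in milliseconds.
    A common time-domain measure of heart rate variability."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    # Differences between each pair of consecutive intervals
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms) from a wearable's beat-to-beat stream
rr = [812, 790, 835, 804, 818, 795]
print(round(rmssd(rr), 1))
```

The number itself is easy to produce; the gap the paragraph above describes is in turning it into support that matches the person’s actual state.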
None of this is to suggest that digital tools can replace therapy, or the role of another person within it. If anything, these limitations tend to highlight what is specific to human relationship, and why it remains difficult to reproduce.
Arguably, if these tools are to develop further, it may not be through more sophisticated conversation alone, but through a better understanding of the person’s state—and the limits of what can be done without another human being present.