Smartphone-based apps are driving a revolution in health care. However, we are likely at the beginning of a long road: many approaches lack validation, and excessive use of technology itself can have detrimental effects on mental well-being.
The ubiquity of smartphones and social media is a compelling reason for their use to monitor and even improve mental health. If everybody is already carrying a highly advanced piece of technology in their pocket, why not harness that potential? The sophisticated sensors with which such devices are equipped mean that they can continuously and unobtrusively gather a wealth of information without any input from the users themselves, so-called ‘passive sensing’. This approach predates the invention of smartphones and is already widely used to track sleep and physical activity, for instance. Its very unobtrusiveness is what makes it so promising as a tool to track mental health, an area where sensitivity and inconspicuousness are often paramount. In current mental health applications, such passive sensing typically involves capturing data on location, physical activity, and call and text activity. The software then interprets these data to determine whether the user is showing signs of depression, loneliness or stress. Initial studies have shown that this approach can be feasible and suitable for assessing mental health and that it compares favourably with traditional approaches. Yet a significant issue for ‘passive sensing’ using smartphones is data security: not only must all data be securely transmitted and encrypted, but the use of personal data by third parties is an equally pressing concern. Further, it remains unclear how best to combine ‘passive sensing’ with care and treatment by mental health professionals.
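To make the idea concrete, the kind of pipeline described above can be sketched in a few lines of Python: daily sensor data are aggregated into simple summary features, which a rule then flags for review. This is a minimal illustration, not any particular app's method; the field names, cutoffs and the flagging rule are all hypothetical placeholders, whereas real systems use clinically validated models.

```python
from dataclasses import dataclass
from statistics import pvariance

@dataclass
class DaySample:
    """One day of passively sensed smartphone data (fields are illustrative)."""
    locations: list          # list of (lat, lon) fixes logged during the day
    outgoing_calls: int
    outgoing_texts: int

def location_variance(locations):
    """Spread of visited locations; sustained low values can accompany withdrawal."""
    if len(locations) < 2:
        return 0.0
    lats = [p[0] for p in locations]
    lons = [p[1] for p in locations]
    return pvariance(lats) + pvariance(lons)

def weekly_features(days):
    """Aggregate a week of passive data into simple summary features."""
    return {
        "mean_location_variance": sum(location_variance(d.locations) for d in days) / len(days),
        "mean_outgoing_contacts": sum(d.outgoing_calls + d.outgoing_texts for d in days) / len(days),
    }

def flag_for_review(features, variance_cutoff=1e-4, contact_cutoff=2.0):
    """Illustrative rule: flag sustained low mobility AND low social contact.
    The cutoffs are arbitrary placeholders, not clinically validated thresholds."""
    return (features["mean_location_variance"] < variance_cutoff
            and features["mean_outgoing_contacts"] < contact_cutoff)
```

Even this toy version makes the privacy stakes obvious: the raw inputs are exact locations and communication counts, which is precisely why secure transmission and restrictions on third-party use matter so much.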
Another passive approach to monitoring mental health involves machine-learning algorithms that scour a person’s social media posts for language and patterns that may indicate depression or thoughts of self-harm. However, this approach raises significant concerns. For one, it remains questionable how companies like Facebook use the data that they glean. Indeed, the company’s plans appear to have already run foul of strict EU laws on online privacy, and last year Facebook was forced to deny that, although it was evaluating the emotional state of users, it was passing such information on to third parties for advertising purposes. Moreover, Facebook remains reticent about the exact methods it uses to flag worrying online behaviour and about how its algorithms have been validated. The issue remains fraught, to say the least.
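In its simplest form, such text screening amounts to scoring posts against learned patterns. The sketch below caricatures this with a hand-written lexicon; it is not Facebook's (undisclosed) method, and the terms, weights and threshold are invented for illustration. Real systems learn these from labelled data, which is exactly why the lack of transparency about methods and validation is troubling.

```python
import re

# Toy lexicon of weighted terms. Purely illustrative: production systems
# learn weights from labelled training data rather than using a hand list.
RISK_TERMS = {"hopeless": 2.0, "worthless": 2.0, "alone": 1.0, "tired": 0.5}

def risk_score(post):
    """Sum lexicon weights for each matching word in a post."""
    words = re.findall(r"[a-z']+", post.lower())
    return sum(RISK_TERMS.get(w, 0.0) for w in words)

def screen_posts(posts, threshold=2.0):
    """Return posts whose score meets an (arbitrary) review threshold."""
    return [p for p in posts if risk_score(p) >= threshold]
```

Note what even this toy exposes: the screener must read every post to score any of them, so the privacy question is not whether such systems see sensitive content but what happens to the scores afterwards.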
So-called ‘digital therapies’, applications that monitor a user’s mood on a daily basis and suggest activities that developers claim promote mental well-being, represent a more active approach to using technology to improve mental health. Recent years have seen a striking proliferation of such resources, and thousands are now available. Indeed, this huge choice, coupled with the fact that many apps seem to lack any rigorous scientific validation, has led the chair of the American Psychiatric Association’s Smartphone App Evaluation Task Force to describe the situation as “…like the Wild West of health care”. A recent meta-analysis sought to bring some clarity to this issue and to separate the wheat from the chaff. The authors analysed data from 18 randomised controlled trials and concluded that there were indeed significant positive effects associated with these tools.
In another randomised controlled trial, currently ongoing in Spain, the app iFightDepression is being tested. It was developed as an initiative of the European Alliance Against Depression with the aim of helping “individuals to self-manage their symptoms of depression and to promote recovery.” The tool, which is based on the principles of cognitive-behavioural therapy, is guided: while built around self-management, it is also intended that users are supported by doctors and trained mental health professionals.
Aside from considerations of data security and validation, another major concern related to the use of technology for mental health relates to the potentially corrosive effects of indiscriminate and immoderate use of technology and social media on a person’s well-being. Even social media giant Facebook has now admitted that users who spend time “passively consuming information” are likely to feel the worse for it. Salesforce CEO Marc Benioff has called for technology and social media to be regulated like the tobacco industry as he believes they are similarly addictive and also pose risks to mental health, while the influential philanthropist George Soros has described social media companies as a “menace” whose “days are numbered”. Some researchers have even stridently claimed that the massive spike in depression in U.S. teenagers seen from 2012 onwards can be attributed primarily to the explosion in smartphone use.
There are some obvious challenges on the road ahead. For instance, if symptoms are at least partially caused by technology, is a technology-based solution really the right one? Also, how do we ensure that sensitive personal data do not fall into the hands of bad actors and are not used in ways that compromise our right to privacy? Last but not least, the rampant growth in this sector means that efforts to evaluate the many different apps and approaches have not kept pace, and potential users face a huge number of products of dubious effectiveness. Like the Wild West, the technology-based approach to monitoring and improving mental health may be full of opportunity, but the lack of regulation and of a basis in hard scientific evidence also represents a danger.