Artificial intelligence in mental health refers to the application of artificial intelligence (AI), computational technologies, and algorithms to support the understanding, diagnosis, and treatment of mental health disorders.[1][2][3] In the context of mental health, AI is considered a component of digital healthcare, with the objective of improving accessibility and accuracy and addressing the growing prevalence of mental health concerns.[4] Applications of AI in this field include the identification and diagnosis of mental disorders, analysis of electronic health records, development of personalized treatment plans, and analytics for suicide prevention.[4][5] Despite its many potential benefits, the implementation of AI in mental healthcare presents significant challenges and ethical considerations, and its adoption remains limited as researchers and practitioners work to address existing barriers.[4]
Background
In 2019, approximately 1 in 8 people worldwide (970 million) were living with a mental disorder, with anxiety and depressive disorders being the most common.[6] In 2020, the number of people living with anxiety and depressive disorders rose significantly because of the COVID-19 pandemic.[7] Additionally, the prevalence of mental health and addiction disorders is distributed nearly equally across genders, underscoring the widespread nature of the issue.[8]
The use of AI in mental health aims to support responsive and sustainable interventions against the global challenge posed by mental health disorders. Issues common to the mental health industry include provider shortages, inefficient diagnoses, and ineffective treatments. The global market for AI-driven mental health applications is projected to grow significantly, with estimates suggesting an increase from US$0.92 billion in 2023 to US$14.89 billion by 2033.[9] This growth reflects increasing interest in AI's ability to address critical challenges in mental healthcare provision through the development and implementation of innovative solutions.[10]
AI-driven approaches
Several AI techniques are currently widely available across applications, including machine learning (ML), natural language processing (NLP), deep learning (DL), and computer vision (CV). These technologies enable early detection of mental health conditions, personalized treatment recommendations, and real-time monitoring of patient well-being.
Machine learning
Machine learning is an AI technique that enables computers to identify patterns in large datasets and make predictions based on those patterns. Unlike traditional medical research, which begins with a hypothesis, ML models analyze existing data to uncover correlations and develop predictive algorithms.[10] ML in psychiatry is limited by data availability and quality. Many psychiatric diagnoses rely on subjective assessments, interviews, and behavioral observations, making structured data collection difficult.[10] Researchers are addressing these challenges using transfer learning, a technique that adapts ML models trained in other fields for use in mental health applications.[11]
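As an illustration of the kind of pattern-based prediction described above, the following is a minimal sketch of a nearest-centroid classifier on hypothetical symptom-score vectors. The features, values, and labels here are invented for illustration only and carry no clinical meaning; real psychiatric ML models are trained on far richer, validated data.

```python
# Minimal sketch of supervised pattern learning: a nearest-centroid
# classifier over hypothetical symptom-score vectors.
from math import dist

# Each training example: (feature vector, label). The three numbers
# stand in for e.g. questionnaire subscale scores (illustrative only).
training = [
    ([8.0, 7.5, 2.0], "condition_a"),
    ([7.0, 8.0, 1.5], "condition_a"),
    ([2.0, 1.0, 8.5], "condition_b"),
    ([1.5, 2.5, 9.0], "condition_b"),
]

def fit_centroids(examples):
    """Average the feature vectors for each label."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: dist(centroids[lbl], vec))

centroids = fit_centroids(training)
print(predict(centroids, [7.5, 7.0, 2.5]))
```

Unlike hypothesis-driven research, the model here learns its decision rule entirely from the data it is given, which is also why data quality and representativeness limit what such models can do.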
Natural language processing
Natural language processing allows AI systems to analyze and interpret human language, including speech, text, and tone of voice. In mental health, NLP is used to extract meaningful insights from conversations, clinical notes, and patient-reported symptoms, assessing sentiment, speech patterns, and linguistic cues to detect signs of mental distress. This matters because many DSM-5 mental health disorders are diagnosed through speech in clinician-patient interviews, which rely on the clinician's skill in recognizing behavioral patterns and translating them into medically relevant information for documentation and diagnosis. As research continues, NLP models must address ethical concerns related to patient privacy, consent, and potential biases in language interpretation.[12]
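The linguistic-cue analysis described above can be sketched in a few lines. The word lists below are hypothetical stand-ins for the validated lexicons used in research (elevated first-person pronoun and absolutist-word use have been studied as distress markers), and simple counting is far cruder than the models deployed in practice:

```python
# Illustrative NLP sketch: counting simple linguistic cues in text.
# Word lists and any threshold applied to them are hypothetical,
# not a validated clinical screen.
import re

FIRST_PERSON = {"i", "me", "my", "myself"}
ABSOLUTIST = {"always", "never", "completely", "nothing"}

def cue_counts(text):
    """Tokenize crudely and count occurrences of each cue category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "first_person": sum(t in FIRST_PERSON for t in tokens),
        "absolutist": sum(t in ABSOLUTIST for t in tokens),
        "total_tokens": len(tokens),
    }

sample = "I always feel like nothing I do matters. My days never improve."
print(cue_counts(sample))
```

Production NLP systems replace the hand-built lists with learned representations, but the pipeline shape (tokenize, extract features, score) is the same.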
Deep Learning
Deep learning, a subset of ML, involves neural networks that mimic the human brain to analyze complex data. It is particularly useful for identifying subtle patterns in speech, imaging, and physiological data.[13] Deep learning is used in neuroimaging analysis, helping researchers detect abnormalities in brain scans associated with conditions such as schizophrenia, depression, and PTSD.[14] However, deep learning models require extensive, high-quality datasets to function effectively. The limited availability of large, diverse mental health datasets poses a challenge, as patient privacy regulations restrict access to medical records. Additionally, deep learning models often operate as "black boxes", meaning their decision-making processes are not easily interpretable by clinicians, raising concerns about transparency and clinical trust.[15]
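To make the "black box" point concrete, the following toy forward pass shows the shape of the computation a deep network performs. The weights here are arbitrary rather than trained, and the network is deliberately tiny; the key observation is that the intermediate activations are just numbers with no human-readable interpretation, which scales into the opacity problem when real models have millions of parameters:

```python
# Toy forward pass through a two-layer network (illustrative weights,
# not a trained model).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """One hidden layer, sigmoid activations, scalar output in (0, 1)."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

w_hidden = [[0.5, -1.0], [1.5, 0.25]]  # 2 inputs -> 2 hidden units
w_out = [1.0, -0.75]                   # 2 hidden units -> 1 output
score = forward([0.8, 0.3], w_hidden, w_out)
print(round(score, 3))
```

Training adjusts the weight values so the output correlates with a target label; nothing in the process makes the learned weights individually meaningful to a clinician.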
Computer Vision
Computer vision enables AI to analyze visual data, such as facial expressions, body language, and microexpressions, to assess emotional and psychological states. This technology is increasingly used in mental health research to detect signs of depression, anxiety, and PTSD through facial analysis.[16] CV can detect subtle nonverbal cues, such as hesitation or changes in eye contact, which may indicate emotional distress. Despite its potential, computer vision in mental health raises ethical and accuracy concerns. Facial recognition algorithms can be influenced by cultural and racial biases, leading to potential misinterpretations of emotional expressions.[17] Additionally, concerns about informed consent and data privacy must be addressed before widespread clinical adoption.
Applications
Diagnosis
AI, using NLP and ML, can help diagnose individuals with mental health disorders. It can differentiate closely related disorders based on their initial presentation, informing timely treatment before disease progression. For example, it may be able to distinguish unipolar from bipolar depression by analyzing imaging and medical scans.[10] AI also has the potential to identify novel disorders that were previously overlooked due to heterogeneity in how a single disorder presents.[10] Clinicians may miss this variation because, while many people are diagnosed with depression, that depression can take different forms and manifest in different behaviors. AI can parse the variability in human expression data and potentially identify distinct subtypes of depression.
Prognosis
AI can be used to create accurate predictions for disease progression once diagnosed.[10] AI algorithms can also use data-driven approaches to build new clinical risk prediction models[18] without relying primarily on current theories of psychopathology. However, internal and external validation of an AI algorithm is essential for its clinical utility.[10] In fact, some studies have used neuroimaging, electronic health records, genetic data, and speech data to predict how depression would present in patients, their risk for suicidality or substance abuse, or functional outcomes.[10]
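A data-driven risk prediction model of the kind described above is often, at its core, a logistic model over clinical features. The sketch below uses invented coefficients and hypothetical feature names; as the text notes, a real model would require internal and external validation before any clinical use:

```python
# Hedged sketch of a clinical risk prediction model: a logistic model
# scoring relapse risk from a few illustrative features. Coefficients
# and features are invented for illustration; real models are fit to
# data and validated before deployment.
import math

COEFFICIENTS = {
    "prior_episodes": 0.6,         # per prior episode
    "symptom_score": 0.09,         # per point on a symptom scale
    "weeks_since_discharge": -0.05,  # risk assumed to decay over time
}
INTERCEPT = -2.0

def risk(features):
    """Return a probability in (0, 1) via the logistic function."""
    z = INTERCEPT + sum(COEFFICIENTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

patient = {"prior_episodes": 2, "symptom_score": 14,
           "weeks_since_discharge": 6}
print(round(risk(patient), 3))
```

Validation here would mean checking calibration and discrimination on held-out patients from the same site (internal) and from different sites or populations (external).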
Treatment
In psychiatry, multiple drugs are often trialed with a patient until the correct combination or regimen effectively treats their condition. AI could theoretically be used to predict treatment response based on observed data collected from various sources. This application has the potential to reduce the time, effort, and resources required while alleviating the burden on both patients and clinicians.[10]
Benefits
Artificial intelligence offers several potential advantages in the field of mental health care:
- Enhanced diagnostic accuracy: AI systems are capable of analyzing large datasets—including brain imaging, genetic testing, and behavioral data—to detect biomarkers associated with mental health conditions. This may contribute to more accurate and timely diagnoses.[19]
- Personalized treatment planning: AI algorithms can process information from electronic health records (EHRs), neuroimaging, and genomic data to identify the most effective treatment strategies tailored to individual patients.[19]
- Improved access to care: AI technologies can facilitate the delivery of mental health services—such as cognitive behavioral therapy (CBT)—through virtual platforms. This may increase access to care, particularly in underserved or remote areas.[19]
- Early detection and monitoring: AI tools can assist clinicians in recognizing early warning signs of mental health disorders, enabling proactive interventions and potentially reducing the risk of acute episodes or hospitalizations.[5]
- Use of chatbots and virtual assistants: AI-powered systems can support administrative functions, including appointment scheduling, patient triage, and organizing medical history. This may improve operational efficiency and enhance patient engagement.[5]
- Predictive analytics for suicide prevention: AI models can analyze behavioral, clinical, and social data to identify individuals at elevated risk of suicide, enabling targeted prevention strategies and informing public health policies.[5]
Challenges
Despite its potential, the application of AI in mental health presents a number of ethical, practical, and technical challenges:
- Informed consent and transparency: The complexity and opacity of AI systems—particularly in how they process data and generate outputs—require clinicians to clearly communicate potential limitations, biases, and uncertainties to patients as part of the informed consent process.[4]
- Right to explanation: Patients may request explanations regarding AI-generated diagnoses or treatment recommendations. Healthcare providers have a responsibility to ensure that these explanations are available and comprehensible.[4]
- Privacy and data protection: The use of AI in mental health care must balance data utility with the protection of sensitive personal information. Ensuring robust privacy safeguards is essential to building trust among users.[4][5]
- Lack of diversity in training data: AI models often rely on datasets that may not be representative of diverse populations. This can lead to biased outcomes and reduced effectiveness in diagnosing or treating individuals from underrepresented groups.[5]
- Provider skepticism and implementation barriers: Clinicians and health care organizations may be hesitant to adopt AI tools due to a lack of familiarity, concerns about reliability, or uncertainty about integration into existing care workflows.[20]
- Responsibility and the “Tarasoff duty”: In cases where AI identifies a patient as a potential risk to themselves or others, it remains unclear who holds the legal and ethical responsibility to act—particularly in jurisdictions with mandatory duty-to-warn obligations.[1]
- Data quality and accessibility: High-quality mental health data is often difficult to obtain due to ethical constraints and privacy concerns. Limited access to diverse and comprehensive datasets may hinder the accuracy and real-world applicability of AI systems.[21]
Current AI trends in mental health
As of 2020, the Food and Drug Administration (FDA) had not yet approved any artificial intelligence-based tools for use in psychiatry.[22] However, in 2022, the FDA granted authorization for the initial testing of an AI-driven mental health assessment tool known as the AI-Generated Clinical Outcome Assessment (AI-COA). This system employs multimodal behavioral signal processing and machine learning to track mental health symptoms and assess the severity of anxiety and depression. AI-COA was integrated into a pilot program to evaluate its clinical effectiveness, though it has not yet received full regulatory approval.[23]
Mental health tech startups continue to lead investment activity in digital health despite the ongoing impacts of macroeconomic factors like inflation, supply chain disruptions, and interest rates.[24]
According to CB Insights' State of Mental Health Tech 2021 report, mental health tech companies raised $5.5 billion worldwide across 324 deals, a 139% increase in funding over the previous year's 258 deals.
A number of startups using AI in mental healthcare closed notable deals in 2022 as well. Among them are the AI chatbot Wysa ($20 million in funding), BlueSkeye, which is working on improving early diagnosis (£3.4 million), the Upheal smart notebook for mental health professionals (€1.068 million), and the AI-based mental health companion clare&me (€1 million).
An analysis of the investment landscape and ongoing research suggests that more emotionally intelligent AI bots, along with new mental health applications driven by AI prediction and detection capabilities, are likely to emerge.
For instance, researchers at Vanderbilt University Medical Center in Tennessee, US, have developed an ML algorithm that uses a person's hospital admission data, including age, gender, and past medical diagnoses, to predict with 80% accuracy whether that individual is likely to take their own life.[25] Researchers at the University of Florida are preparing to test a new AI platform aimed at making accurate diagnoses in patients with early Parkinson's disease.[26] Research is also underway to develop a tool combining explainable AI and deep learning to prescribe personalized treatment plans for children with schizophrenia.[27]
Some studies suggest AI systems could predict and plan treatments across fields of medicine at levels similar to those of physicians in general clinical practice. For example, one AI model diagnosed depression and post-traumatic stress disorder with precision that exceeded that of general practitioners relying solely on clinical assessments.[28]
AI systems studying social media data are being developed to help spot mental health risks sooner and in more locations and for less money. Ethical concerns include uneven performance between digital services, the possibility that biases could affect decision-making, and trust, privacy, and doctor-patient relationship issues.[29]
In January 2024, Cedars-Sinai physician-scientists developed a first-of-its-kind program that uses immersive virtual reality and generative artificial intelligence to provide mental health support.[2] The program, called XAIA, employs a large language model programmed to resemble a human therapist.[3]
The University of Southern California is researching the effectiveness of a virtual therapist named Ellie. Through a webcam and microphone, the AI processes and analyzes emotional cues derived from the patient's facial expressions and variations in tone of voice.[4]
A team of Stanford psychologists and AI experts created "Woebot", an app that makes therapy sessions available 24/7. Woebot tracks its users' mood through brief daily chat conversations and offers curated videos or word games to assist users in managing their mental health.[5] A Scandinavian team of software engineers and a clinical psychologist created "Heartfelt Services", an application meant to simulate conventional talk therapy with an AI therapist.[30]
Outcome Comparisons: AI vs Traditional Therapy
Research shows that AI-driven mental health tools, particularly those using cognitive behavioral therapy (CBT), can improve symptoms of anxiety and depression, especially for mild to moderate cases. For example, chatbot-based interventions like Woebot significantly reduced depressive symptoms in young adults within two weeks, with results comparable to brief human-delivered interventions.[31] A 2022 meta-analysis of digital mental health tools, including AI-enhanced apps, found moderate effectiveness in reducing symptoms when user engagement was high and interventions were evidence-based.[32]
However, traditional therapy remains more effective for complex or high-risk mental health conditions that require emotional nuance and relational depth, such as PTSD, severe depression, or suicidality. The therapeutic alliance—the relationship between patient and clinician—is widely recognized as one of the most important predictors of treatment success, accounting for up to 30% of positive outcomes.[33] AI tools, while highly capable of detecting patterns in behavior and speech, currently lack the ability to convey empathy, understand social context, or respond intuitively to emotional shifts. As such, most experts view AI in mental health as a complementary tool, best used for screening, monitoring, or augmenting care between human-led sessions.[34]
While AI systems excel at processing large datasets and providing consistent, round-the-clock support, their rigidity and limitations in contextual understanding remain significant barriers. Human therapists can adapt in real time to tone, body language, and life circumstances—something machine learning models have yet to master.[32][34] Nonetheless, integrated models that pair AI-driven symptom tracking with clinician oversight are showing promise. These hybrid approaches may increase access, reduce administrative burden, and support early detection, allowing human clinicians to focus on relational care. Ultimately, the future of mental healthcare will likely depend not on replacing clinicians with AI, but on using AI to enhance what human-centered therapy does best.
Criticism
Although artificial intelligence in mental health is a growing field with significant potential, several concerns and criticisms remain regarding its application:
- Data limitations: A significant barrier to developing effective AI tools in mental health care is the limited availability of high-quality, representative data. Mental health data is often sensitive, difficult to standardize, and subject to privacy restrictions, which can hinder the training of robust and generalizable AI models.[35]
- Algorithmic bias: AI systems may inherit and amplify biases present in the datasets they are trained on. This can result in inaccurate assessments or unequal treatment, particularly for underrepresented or marginalized groups.[36]
- Privacy and data security: The implementation of AI in mental health typically requires the collection and analysis of large amounts of personal and sensitive information. This raises ethical concerns regarding user consent, data protection, and potential misuse of information.[37]
- Risk of harmful advice: Some AI-based mental health tools have been criticized for offering inappropriate or harmful guidance. For example, there have been reports of chatbots giving users dangerous recommendations, including one case in which a man died by suicide after a chatbot allegedly encouraged self-sacrifice.[6] In response to such incidents, several AI mental health applications have been taken offline or reevaluated for safety.[7]
- Therapeutic relationship: Decades of psychological research have shown that the quality of the therapeutic relationship—empathy, trust, and human connection—is one of the most important predictors of treatment outcomes. Critics argue that AI systems, by nature, are unable to replicate this relational dynamic.[8]
- Lack of emotional understanding: Unlike human therapists, AI systems do not possess lived experiences or emotional awareness. This raises questions about their capacity to provide genuine empathy or respond appropriately to the emotional nuance of mental health challenges. Some experts argue that AI cannot substitute for human-centered therapy, particularly in cases requiring deep emotional engagement.[38]
Ethical issues
Although significant progress is still required, the integration of AI in mental health underscores the need for legal and regulatory frameworks to guide its development and implementation.[4] Achieving a balance between human interaction and AI in healthcare is challenging, as there is a risk that increased automation may lead to a more mechanized approach, potentially diminishing the human touch that has traditionally characterized the field.[5] Furthermore, giving patients a sense of security and safety is a priority, considering AI's reliance on individual data to perform and respond to inputs. If not approached properly, efforts to increase accessibility could remove elements of care in ways that negatively alter the patient experience of receiving mental health support.[5] To avoid veering in the wrong direction, continued research should develop a deeper understanding of where incorporating AI produces advantages and disadvantages.[20]
See also
- Artificial intelligence in healthcare
- Artificial intelligence detection software
- AI alignment
- Artificial intelligence
- Glossary of artificial intelligence
- Clinical decision support system
- Computer-aided diagnosis
- Health software
References
- ^ Mazza, Gabriella (2022-08-29). "AI and the Future of Mental Health". CENGN. Retrieved 2023-01-17.
- ^ Thakkar, Anoushka; Gupta, Ankita; De Sousa, Avinash (2024). "Artificial intelligence in positive mental health: a narrative review". Frontiers in Digital Health. 6: 1280235. doi:10.3389/fdgth.2024.1280235. PMC 10982476. PMID 38562663.
- ^ Jin, Kevin W; Li, Qiwei; Xie, Yang; Xiao, Guanghua (2023). "Artificial intelligence in mental healthcare: an overview and future perspectives". British Journal of Radiology. 96 (1150): 20230213. doi:10.1259/bjr.20230213. PMC 10546438. PMID 37698582.
- ^ a b c d e f g Lu, Tangsheng; Liu, Xiaoxing; Sun, Jie; Bao, Yanping; Schuller, Björn W.; Han, Ying; Lu, Lin (14 July 2023). "Bridging the gap between artificial intelligence and mental health". Science Bulletin. 68 (15): 1606–1610. doi:10.1016/j.scib.2023.07.015. PMID 37474445.
- ^ a b c d e f g h Shimada, Koki (2023-11-29). "The Role of Artificial Intelligence in Mental Health: A Review". Science Insights. 43 (5): 1119–1127. doi:10.15354/si.23.re820. ISSN 2329-5856.
- ^ "Global Health Data Exchange (GHDx)". Institute of Health Metrics and Evaluation. Retrieved 14 May 2022.
- ^ "Mental disorders". www.who.int. Retrieved 2024-03-16.
- ^ Rehm, Jürgen; Shield, Kevin D. (2019-02-07). "Global Burden of Disease and the Impact of Mental and Addictive Disorders". Current Psychiatry Reports. 21 (2): 10. doi:10.1007/s11920-019-0997-0. ISSN 1535-1645. PMID 30729322. S2CID 73443048.
- ^ "AI in Mental Health Market". Market.us. Retrieved 2025-03-01.
- ^ a b c d e f g h i Lee, Ellen E.; Torous, John; De Choudhury, Munmun; Depp, Colin A.; Graham, Sarah A.; Kim, Ho-Cheol; Paulus, Martin P.; Krystal, John H.; Jeste, Dilip V. (September 2021). "Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom". Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. 6 (9): 856–864. doi:10.1016/j.bpsc.2021.02.001. PMC 8349367. PMID 33571718.
- ^ "What is transfer learning? | IBM". www.ibm.com. 2024-02-12. Retrieved 2025-03-01.
- ^ Le Glaz, Aziliz; Haralambous, Yannis; Kim-Dufor, Deok-Hee; Lenca, Philippe; Billot, Romain; Ryan, Taylor C; Marsh, Jonathan; DeVylder, Jordan; Walter, Michel; Berrouiguet, Sofian; Lemey, Christophe (2021-05-04). "Machine Learning and Natural Language Processing in Mental Health: Systematic Review". Journal of Medical Internet Research. 23 (5): e15708. doi:10.2196/15708. ISSN 1438-8871. PMC 8132982. PMID 33944788.
- ^ "What Is Deep Learning? | IBM". www.ibm.com. 2024-06-17. Retrieved 2025-03-01.
- ^ Su, Chang; Xu, Zhenxing; Pathak, Jyotishman; Wang, Fei (2020-04-22). "Deep learning in mental health outcome research: a scoping review". Translational Psychiatry. 10 (1): 116. doi:10.1038/s41398-020-0780-3. ISSN 2158-3188. PMC 7293215. PMID 32532967.
- ^ V, Chaitanya (2025-01-13). "Rise of Black Box AI: Addressing the Lack of Transparency in Machine Learning Models". Analytics Insight. Retrieved 2025-03-01.
- ^ ai-admin (2023-12-05). "The role of computer vision in artificial intelligence - advancements, applications, and challenges". AI for Social Good. Retrieved 2025-03-01.
- ^ "Why Racial Bias is Prevalent in Facial Recognition Technology". Harvard Journal of Law & Technology. 2020-11-04. Retrieved 2025-03-01.
- ^ Fusar-Poli, Paolo; Hijazi, Ziad; Stahl, Daniel; Steyerberg, Ewout W. (2018-12-01). "The Science of Prognosis in Psychiatry: A Review". JAMA Psychiatry. 75 (12): 1289–1297. doi:10.1001/jamapsychiatry.2018.2530. ISSN 2168-622X. PMID 30347013.
- ^ a b c "AI in Mental Health - Examples, Benefits & Trends". ITRex. 2022-12-13. Retrieved 2023-01-17.
- ^ a b King, Darlene R.; Nanda, Guransh; Stoddard, Joel; Dempsey, Allison; Hergert, Sarah; Shore, Jay H.; Torous, John (30 November 2023). "An Introduction to Generative Artificial Intelligence in Mental Health Care: Considerations and Guidance". Current Psychiatry Reports. 25 (12): 839–846. doi:10.1007/s11920-023-01477-x. ISSN 1523-3812. PMID 38032442.
- ^ Yadav, Rajani (2023-11-29). "Artificial Intelligence for Mental Health: A Double-Edged Sword". Science Insights. 43 (5): 1115–1117. doi:10.15354/si.23.co13. ISSN 2329-5856.
- ^ Benjamens, Stan; Dhunnoo, Pranavsingh; Meskó, Bertalan (2020-09-11). "The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database". npj Digital Medicine. 3 (1): 118. doi:10.1038/s41746-020-00324-0. ISSN 2398-6352. PMC 7486909. PMID 32984550.
- ^ Park, Andrea (2024-01-26). "FDA accepts first AI algorithm to drug development tool pilot". www.fiercebiotech.com. Retrieved 2025-03-01.
- ^ "Q3 2022 digital health funding: The market isn't the same as it was | Rock Health". rockhealth.com. 2022-10-03. Retrieved 2024-04-12.
- ^ Govern, Paul (15 March 2021). "Artificial intelligence calculates suicide attempt risk at VUMC". Vanderbilt University. Retrieved 2024-03-16.
- ^ "MINDS AND MACHINES". Florida Physician. Retrieved 2024-03-16.
- ^ Pflueger-Peters, Noah (2020-09-11). "Using AI to Treat Teenagers With Schizophrenia | Computer Science". cs.ucdavis.edu. Retrieved 2024-03-16.
- ^ Laacke, Sebastian; Mueller, Regina; Schomerus, Georg; Salloch, Sabine (2021-07-03). "Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy". The American Journal of Bioethics. 21 (7): 4–20. doi:10.1080/15265161.2020.1863515. ISSN 1526-5161. PMID 33393864.
- ^ Laacke, Sebastian; Mueller, Regina; Schomerus, Georg; Salloch, Sabine (2021-07-03). "Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy". The American Journal of Bioethics. 21 (7): 4–20. doi:10.1080/15265161.2020.1863515. ISSN 1526-5161. PMID 33393864.
- ^ Günther, Julie Helene (2024-04-22). "Bekymret for bruken av KI-psykologer: – Burde ikke alene tilbys av kommersielle aktører". NRK (in Norwegian Bokmål). Retrieved 2024-05-18.
- ^ Fitzpatrick, Kathleen Kara; Darcy, Alison; Vierhile, Molly (2017-06-06). "Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial". JMIR Mental Health. 4 (2): e19. doi:10.2196/mental.7785. ISSN 2368-7959. PMC 5478797. PMID 28588005.
- ^ a b Hollis, Chris; Falconer, Caroline J.; Martin, Jennifer L.; Whittington, Craig; Stockton, Sarah; Glazebrook, Cris; Davies, E. Bethan (April 2017). "Annual Research Review: Digital health interventions for children and young people with mental health problems - a systematic and meta-review". Journal of Child Psychology and Psychiatry, and Allied Disciplines. 58 (4): 474–503. doi:10.1111/jcpp.12663. ISSN 1469-7610. PMID 27943285.
- ^ Wampold, Bruce E. (October 2015). "How important are the common factors in psychotherapy? An update". World Psychiatry: Official Journal of the World Psychiatric Association (WPA). 14 (3): 270–277. doi:10.1002/wps.20238. ISSN 1723-8617. PMC 4592639. PMID 26407772.
- ^ a b Vaidyam, Aditya Nrusimha; Wisniewski, Hannah; Halamka, John David; Kashavan, Matcheri S.; Torous, John Blake (July 2019). "Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape". Canadian Journal of Psychiatry. Revue Canadienne de Psychiatrie. 64 (7): 456–464. doi:10.1177/0706743719828977. ISSN 1497-0015. PMC 6610568. PMID 30897957.
- ^ Ćosić, Krešimir; Popović, Siniša; Šarlija, Marko; Kesedžić, Ivan; Jovanovic, Tanja (June 2020). "Artificial intelligence in prediction of mental health disorders induced by the COVID-19 pandemic among health care workers". Croatian Medical Journal. 61 (3): 279–288. doi:10.3325/cmj.2020.61.279. ISSN 0353-9504. PMC 7358693. PMID 32643346.
- ^ Nilsen, Per; Svedberg, Petra; Nygren, Jens; Frideros, Micael; Johansson, Jan; Schueller, Stephen (January 2022). "Accelerating the impact of artificial intelligence in mental healthcare through implementation science". Implementation Research and Practice. 3: 263348952211120. doi:10.1177/26334895221112033. ISSN 2633-4895. PMC 9924259. PMID 37091110. S2CID 250471425.
- ^ Royer, Alexandrine (2021-10-14). "The wellness industry's risky embrace of AI-driven mental health care". Brookings. Retrieved 2023-01-17.
- ^ Brown, Julia E. H.; Halpern, Jodi (2021-12-01). "AI chatbots cannot replace human interactions in the pursuit of more inclusive mental healthcare". SSM - Mental Health. 1: 100017. doi:10.1016/j.ssmmh.2021.100017. ISSN 2666-5603.
Further reading
- Lee, Ellen E.; Torous, John; De Choudhury, Munmun; Depp, Colin A.; Graham, Sarah A.; Kim, Ho-Cheol; Paulus, Martin P.; Krystal, John H.; Jeste, Dilip V. (2021). "Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom". Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. 6 (9): 856–864. doi:10.1016/j.bpsc.2021.02.001. PMC 8349367. PMID 33571718.
- Alhuwaydi, Ahmed M. (2024). "Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions – A Narrative Review for a Comprehensive Insight". Risk Management and Healthcare Policy. 17: 1339–1348. doi:10.2147/RMHP.S461562. PMC 11127648. PMID 38799612.
- Liu, Feng; Ju, Qianqian; Zheng, Qijian; Peng, Yujia (2024). "Artificial intelligence in mental health: innovations brought by artificial intelligence techniques in stress detection and interventions of building resilience". Current Opinion in Behavioral Sciences. 60: 101452. doi:10.1016/j.cobeha.2024.101452.