Provided you understand their limitations and know how to use them, chatbots, AIs, or large language models can help you manage symptoms of depression

AI chatbots have become increasingly popular in recent years, with men using them at a significantly higher rate than women.[1,2] They are now frequently used as mental health tools because of their near-constant availability, their ability to simulate empathy, and the speed with which they respond to questions.

In this article, we’ll cover what chatbots are, how they can be used as mental health tools, and what limitations and drawbacks they have.

WHAT ARE AI CHATBOTS?

The First Ever Chatbot Was a ‘Therapist’

In simple terms, chatbots are computer programs that simulate written or spoken conversations with humans, and they have been around for a long time. The first chatbot, ELIZA, was developed by MIT professor Joseph Weizenbaum between 1964 and 1967, and was ironically built to show that communication between humans and machines was “superficial”.[3]

Programmed to act as a very simple ‘therapist’, ELIZA responded to user input with open-ended questions and by reflecting the user’s own statements back at them. Dr. Weizenbaum was surprised that, even with these limited abilities, users often found conversing with ELIZA to have real therapeutic value.[4]
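
To give a sense of how simple ELIZA’s approach was, here is a minimal, illustrative sketch of the kind of pronoun ‘reflection’ it relied on – written in modern Python rather than Weizenbaum’s original code, and greatly simplified:

```python
import re

# A toy, illustrative sketch of ELIZA-style "reflection" -- not Weizenbaum's
# original program, which used scripted keyword rules and ranked responses.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Swap first- and second-person words so the statement can be echoed back."""
    words = re.findall(r"[\w']+", statement.lower())
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def respond(statement: str) -> str:
    """Turn the user's statement into an open-ended question."""
    return f"Why do you say that {reflect(statement)}?"

print(respond("I am feeling overwhelmed at work"))
# -> Why do you say that you are feeling overwhelmed at work?
```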

Now, more than 50 years later, technology has advanced to the point that most people carry access to powerful AI language models in their pockets at all times.

Current Chatbots

A few key terms you may have heard are:

  • Machine Learning: A way for systems to learn patterns and make decisions from data.
  • Large Language Model (LLM): A type of machine learning model trained on vast amounts of text to interpret and generate language.
  • Generative Pre-trained Transformer (GPT): A family of LLMs, built on the transformer architecture, that generate human-like text by predicting one word at a time.

It’s important to remember that LLMs (including ChatGPT) don’t truly understand language – they predict likely words based on patterns in their training data. This allows them to produce more complex responses than older bots, but they still have limitations: they can make mistakes on simple tasks, invent sources, and reflect biases from the data they were trained on.
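
For a concrete (and heavily simplified) picture of what ‘predicting likely words’ means, the sketch below uses the open-source Hugging Face transformers library and the small GPT-2 model to continue a prompt. Commercial chatbots run far larger models behind a polished chat interface, but the underlying mechanism – continuing text with statistically likely words – is the same:

```python
# A minimal sketch of next-word prediction, assuming the open-source
# Hugging Face `transformers` library and PyTorch are installed
# (pip install transformers torch).
from transformers import pipeline

# GPT-2 is a small, freely available predecessor of today's chatbot models.
generator = pipeline("text-generation", model="gpt2")

prompt = "When I feel overwhelmed, one thing that helps is"
result = generator(prompt, max_new_tokens=15, num_return_sequences=1)

# The model doesn't "understand" being overwhelmed; it simply continues the
# prompt with words that were statistically likely in its training data.
print(result[0]["generated_text"])
```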

If you want to know more about chatbots, this article on how ChatGPT works goes into greater detail.

AI CHATBOTS FOR MENTAL HEALTH SUPPORT

Challenges and Limitations of Chatbots

While they’re useful for a range of problem-solving applications, it’s important to keep the following limitations in mind when using bots for advice around mental (or physical) health, relationships, or other life decisions:

  1. They rely on training data.
    A chatbot is only as useful as the data it was trained on. If it has been trained on poor or inaccurate resources, it may give advice that is not just inappropriate but potentially harmful.[5] When it comes to mental health and therapy, there is no standardized database of real therapy transcripts that a broad panel of experts has evaluated and deemed reflective of “good therapy”. This means there’s no guarantee that any bot has been trained on medically sound data.
  2. They can be too agreeable.
    An update to GPT-4o, the model behind ChatGPT, was rolled back by OpenAI in April 2025 after users found it far too likely to support whatever they were saying. In one case, the bot simply applauded a user for saying they were discontinuing their medication, without offering any analysis, reflection, or pushback. While that overly agreeable version has been taken offline, it’s important to always use critical judgment when interpreting advice from bots.
  3. They are biased towards cognitive behavioural therapy (CBT).
    When prompted to offer therapeutic tools, chatbots tend to focus on CBT and will often neglect other methods unless directly prompted.[6] While CBT can be useful for some people, not everyone finds it helpful.
  4. Initial responses are often vague or generic.
    Chatbots need prompting to refine their advice for your specific situation. Unlike a therapist, who has direct experience and can draw connections between the things you are saying, a chatbot often won’t introduce new topics or concepts unless you specifically ask it to.[5] If you don’t have any clinical knowledge or don’t know what to ask, you may not receive particularly useful or relevant advice.
  5. They lack intuition.
    A well-trained therapist can intuitively pick up on subtle aspects of communication, such as body language, tone, sarcasm, frustration, and irony, while chatbots tend to interpret things literally.[7] A therapist can also pick up on issues a client may be avoiding, and may notice patterns in topics that come up repeatedly, whereas a chatbot is more likely to engage in repetitive conversations or focus only on what it is explicitly asked.
  6. Ethical, privacy, and security concerns.
    There is currently very little standardization or regulation of chatbots, and many platforms save user input for further training, including intimate personal details about users’ physical and mental health.[5] Remember that chatbots are not entirely anonymous, and it’s often unclear how various platforms use the information you enter into them. There is also very little clarity at the moment about who bears responsibility for decisions made by people who relied principally on feedback from chatbots.
  7. They have memory limitations.
    While some paid platforms have considerable memory and are able to effectively draw on past conversations, most chatbots will reset and ‘forget’ information once the chat is closed.

How Chatbots Can Support Mental Health

Chatbots can be used effectively in a variety of ways to support mental health and well-being, but they are not a substitute for professional therapy and shouldn’t be used on their own to diagnose health issues. They can be used between therapy sessions to provide immediate feedback and offer interim coping strategies.

Many bots are good at simulating empathy, and some guys may feel more comfortable opening up to a chatbot about sensitive topics than to another human.[8,9] That being said, it is often the experience of sharing difficult or sensitive information with another human being who is caring, compassionate, and non-judgemental that is truly therapeutic.

Some of the best mental health uses for chatbots (noting privacy and security concerns) include:

  • Journaling
    Journaling can improve mental clarity and help you track your mood over time. Using a chatbot for journaling can help you identify negative triggers and build positive habits. Some platforms, such as Rosebud and Mindsera, have been designed specifically for journaling.
  • Venting
    Even without the structure and habit of journaling, just putting your thoughts and feelings into words can be therapeutic. If you don’t have someone to vent to, a chatbot can work as a substitute, and will usually offer feedback that can give you some distance from negative ruminations or suggest alternative perspectives.
  • Feedback with Decision Making
    Chatbots can be useful tools for brainstorming and developing lists of pros and cons when it comes to making difficult decisions. They can help you clarify and lay out your thoughts when you’re feeling overwhelmed or having trouble deciding what to do.

How to Use Chatbots in a Helpful and Realistic Way

  • Set Realistic Expectations
    A chatbot is not a therapist and cannot provide the guidance and support that a professional can. Chatbots are just one tool among many that can help support you, but they won’t solve your problems on their own.
  • Be Clear and Specific When Sharing Information
    Chatbots don’t intuit or ask follow-up questions the way humans do. The more clearly and fully you describe your situation, the more helpful the response will be. A bot isn’t able to draw on lived or professional experience and needs to be guided towards appropriate responses. It can be useful to explicitly tell the chatbot that you want it to ask you questions.
  • Ask Questions
    A bot may provide immediate feedback, but that feedback is based only on what it is told, which means it is often making many assumptions. When it gives advice, ask it for more options, counterpoints, alternative viewpoints, or other possibilities. Ask why it gave the advice it did, and make sure it isn’t assuming things about your situation that are inaccurate.
  • Avoid Self-Diagnosis
    While a chatbot can provide potentially helpful information about possible causes of your symptoms, it’s important to always discuss health concerns with a primary care provider or another healthcare professional, as only they can properly conduct a diagnostic assessment. Chatbots are generally bad at collecting relevant diagnostic information and cannot exercise clinical judgment or critical thinking.[10] Use chatbots to help organise your thoughts, not to offer a diagnosis.
  • Explicitly Outline How You Want the Bot to Behave
    Tell the chatbot from the outset how you want it to respond to you, in terms of both content and tone. Do you want solutions you can act on, or do you just want to vent? Are you looking for general information or more direction? The more you communicate your preferences, the more tailored and supportive the chatbot experience can be.

Can AI Be a Good Therapist?

In short – ChatGPT is not a substitute for professional psychological therapy, but it can be a useful tool to help you out when therapy isn’t available.

Provided you understand their limitations and know how to use them, chatbots, AIs, or large language models can help you manage symptoms of depression, anxiety, and other mental health concerns, particularly spiraling thoughts and negative ruminations.

Remember:

  • Always provide the best information you can and be specific in your descriptions and questions.
  • Reiterate or restate important points so the bot doesn’t lose track of them and give poor advice.
  • Ask for alternative perspectives.
  • Challenge mistaken assumptions, errors, and inaccuracies.
  • Don’t take any advice at face value, and always use your best judgement to analyse and reflect on your conversations.

In situations where things are dire, human engagement is needed. If you are experiencing a mental health crisis, phone/chat lines are available 24/7 in many countries, staffed with trained volunteers who want to hear from and support you.

References

  1. Vogels, E. A. (2023, May 24). A majority of Americans have heard of ChatGPT, but few have tried it themselves. Pew Research Center. https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/
  2. Aldasoro, I., Armantier, O., Doerr, S., Gambacorta, L., & Oliviero, T. (2024). The gen AI gender gap. Economics Letters, 241, 111814. https://doi.org/10.1016/j.econlet.2024.111814
  3. Epstein, J., & Klinkenberg, W. D. (2001). From eliza to internet: A brief history of computerized assessment. Computers in Human Behavior, 17(3), 295-314. https://doi.org/10.1016/S0747-5632(01)00004-8
  4. Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.
  5. Singh, O. P. (2023). Artificial intelligence in the era of ChatGPT – Opportunities and challenges in mental health care. Indian Journal of Psychiatry, 65(3), 297–298. https://doi.org/10.4103/indianjpsychiatry.indianjpsychiatry_112_23
  6. Raile, P. (2024). The usefulness of ChatGPT for psychotherapists and patients. Humanities & Social Sciences Communications, 11(1), Article 47. https://doi.org/10.1057/s41599-023-02567-0
  7. Zúñiga Salazar, G., Zúñiga, D., Vindel, C. L., Yoong, A. M., Hincapie, S., Zúñiga, A. B., Zúñiga, P., Salazar, E., & Zúñiga, B. (2023). Efficacy of AI Chats to Determine an Emergency: A Comparison Between OpenAI’s ChatGPT, Google Bard, and Microsoft Bing AI Chat. Cureus, 15(9), e45473. https://doi.org/10.7759/cureus.45473
  8. Park, G., Chung, J., & Lee, S. (2023). Effect of AI chatbot emotional disclosure on user satisfaction and reuse intention for mental health counseling: A serial mediation model. Current Psychology, 42(32), 28663-28673. https://doi.org/10.1007/s12144-022-03932-z
  9. Coghlan, S., Leins, K., Sheldrick, S., Cheong, M., Gooding, P., & D’Alfonso, S. (2023). To chat or bot to chat: Ethical issues with using chatbots in mental health. Digital Health, 9. https://doi.org/10.1177/20552076231183542
  10. Dergaa, I., Fekih-Romdhane, F., Hallit, S., Loch, A. A., Glenn, J. M., Fessi, M. S., Ben Aissa, M., Souissi, N., Guelmami, N., Swed, S., El Omri, A., Bragazzi, N. L., & Ben Saad, H. (2024). ChatGPT is not ready yet for use in providing mental health assessment and interventions. Frontiers in Psychiatry, 14, 1277756. https://doi.org/10.3389/fpsyt.2023.1277756
