Artificial Intelligence (AI) Policy

AI does not belong in therapy.

Some of the risks associated with using AI include misinformation, ethical and safety concerns, lack of data security and privacy, lack of transparency on data usage, and lack of federal oversight.

As a client, you can advocate for yourself by asking your healthcare providers whether they use AI and for what purpose, and you can decline to consent to the use of AI in your care.

Megan Torres Counseling does not and will not use Artificial Intelligence (AI), including for the creation of clinical documentation (such as progress notes, treatment plans, and correspondence), the recording or transcription of sessions to generate notes, or the evaluation and summarization of session content.

The policy of Megan Torres Counseling includes opting out of AI features integrated into current systems (such as Google Gemini, Microsoft Copilot, and Sessions Health AI Assist) and not utilizing generative AI/large language models (LLMs) (such as ChatGPT and Claude).

Selected Research

Al-Adawi, S. (2025). Con: Artificial Intelligence in Manuscript Writing: Pitfalls and Ethical Concerns the Authors Should Be Aware. Annals of Cardiac Anaesthesia, 28(2), 200–202. https://doi.org/10.4103/aca.aca_9_25

Andrews, E. L. (2020, July 2). AI’s Carbon Footprint Problem. Stanford HAI. https://hai.stanford.edu/news/ais-carbon-footprint-problem

Hill, K. (2025, August 26). A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. The New York Times. https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html

Iftikhar, Z., Xiao, A., Ransom, S., Huang, J., & Suresh, H. (2025). How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework. 8(2), 1311–1323. https://doi.org/10.1609/aies.v8i2.36632

Kang, C. (2026, March 10). A.I. Incites a New Wave of Grieving Parents Fighting for Online Safety. The New York Times. https://www.nytimes.com/2026/03/10/technology/ai-social-media-child-safety-parents.html

King, J., Klyman, K., Capstick, E., Saade, T., & Hsieh, V. (2025). User Privacy and Large Language Models: An Analysis of Frontier Developers’ Privacy Policies. ArXiv.org. https://arxiv.org/abs/2509.05382

Klee, M. (2025, May 4). AI-Fueled Spiritual Delusions Are Destroying Human Relationships. Rolling Stone. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. ArXiv.org. https://doi.org/10.1145/3715275.3732039

Moore, J., Mehta, A., Agnew, W., Anthis, J. R., Louie, R., Mai, Y., Yin, P., Cheng, M., Paech, S. J., Klyman, K., Chancellor, S., Lin, E., Haber, N., & Ong, D. C. (2026). Characterizing Delusional Spirals through Human-LLM Chat Logs. ArXiv.org. https://arxiv.org/abs/2603.16567

Simmons, D. (2025, September 4). What you need to know about AI and climate change. Yale Climate Connections. https://yaleclimateconnections.org/2025/09/what-you-need-to-know-about-ai-and-climate-change/

Topaz, M., Peltonen, L. M., & Zhang, Z. (2025). Beyond human ears: navigating the uncharted risks of AI scribes in clinical practice. npj Digital Medicine, 8(1). https://doi.org/10.1038/s41746-025-01895-6

Wang, G. (2026, April 16). Anthropic’s Claude Mythos Dilemma: When Superpowered AI Gets Risky. Forbes. https://www.forbes.com/sites/geruiwang/2026/04/16/anthropics-claude-mythos-dilemma-when-superpowered-ai-gets-risky/