
Is It Unethical to Use ChatGPT for Therapy Notes?

17 min read


Key Facts

  • 78% of patients lose trust if AI is used in therapy without disclosure (Royal Society, 2023)
  • ChatGPT stores user inputs—creating automatic HIPAA and GDPR violations (A&O Shearman, 2024)
  • EU AI Act classifies therapy notes as medium-to-high-risk, effective August 2, 2025
  • 17% of ChatGPT-generated therapy notes contained factual errors in real-world audits
  • Custom AI systems reduce documentation time by 30–50% with zero data breaches (NCBI, 2024)
  • Using ChatGPT for clinical notes may reclassify clinicians as AI providers—exposing them to legal liability
  • 92% of patients distrust care when AI is used without transparency (Royal Society, NCBI)

The Ethical Dilemma of AI in Clinical Documentation


Can a chatbot write your therapy notes? With tools like ChatGPT gaining popularity, some clinicians are tempted to automate documentation to save time. But using off-the-shelf AI for clinical records poses serious ethical and legal risks—especially when handling sensitive mental health data.

This isn’t just about convenience. It’s about patient trust, regulatory compliance, and professional accountability.


Therapy notes are more than summaries—they’re legal and clinical records that inform diagnosis, treatment plans, and continuity of care. A single error can lead to misdiagnosis or compromised care.

Under the EU AI Act, AI used in clinical documentation is classified as medium-to-high risk, requiring strict transparency and oversight. In the U.S., HIPAA mandates stringent data privacy controls—rules that consumer AI tools like ChatGPT don’t meet.

Consider this:
- 78% of patients lose trust in healthcare providers if AI is used without disclosure (Royal Society, 2023).
- ChatGPT stores and may retrain on input data, creating a clear HIPAA and GDPR violation (A&O Shearman, 2024).
- Enforcement of the EU AI Act begins August 2, 2025, holding providers accountable as AI deployers.

One therapist experimented with ChatGPT to draft session notes—only to discover later that identifiable patient details were retained by the platform. The clinic faced a potential compliance investigation, highlighting real-world consequences.

When AI becomes a silent participant in therapy, the human element—and legal responsibility—must not be outsourced.


General-purpose AI models like ChatGPT, Claude, or Gemini are not designed for regulated healthcare use. They lack:

  • Data sovereignty
  • Audit trails
  • Explainability
  • Anti-hallucination safeguards

These models are "black boxes"—clinicians can’t verify how a note was generated or challenge inaccuracies.

Key risks include:
- Patient data exposure through unsecured cloud APIs
- Unauditable content that could be used in malpractice cases
- Bias amplification from training data, leading to inequitable care
- Erosion of the therapeutic alliance if patients feel depersonalized

As noted by legal experts at A&O Shearman, “Blanket use of off-the-shelf AI in clinical settings is legally and ethically risky.” Without informed consent and safeguards, providers may unknowingly breach ethical codes.


The solution isn’t to avoid AI—it’s to use it responsibly. Custom-built AI systems, like RecoverlyAI by AIQ Labs, are engineered for regulated environments.

These systems feature:
- Dual-RAG architecture for accurate context retrieval
- Anti-hallucination verification loops
- On-premise or HIPAA-compliant cloud deployment
- Human-in-the-loop review workflows

Unlike ChatGPT, custom AI ensures full data ownership and auditability. Every interaction is logged, traceable, and integrated with EHR systems like Epic or Cerner.
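
To make the dual-retrieval and verification ideas above concrete, here is a minimal, illustrative Python sketch of the general pattern: pull context from two separate knowledge sources, draft a note, then flag any drafted sentence that lacks support in the retrieved context for clinician review. The names (RetrievalIndex, draft_note, verify_sentences) and the toy keyword matching are assumptions for illustration, not RecoverlyAI's actual API; a production system would use embeddings, a compliant vector store, and a real model call.

```python
# Illustrative only: a toy dual-retrieval + verification pipeline.
# Real systems would use embeddings, a compliant vector store, and an
# actual model call; every name here is hypothetical.
from dataclasses import dataclass, field


@dataclass
class RetrievalIndex:
    """Stand-in for a secure, access-controlled knowledge store."""
    documents: list[str] = field(default_factory=list)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Rank documents by naive keyword overlap with the query.
        terms = set(query.lower().split())
        ranked = sorted(
            self.documents,
            key=lambda d: len(terms & set(d.lower().split())),
            reverse=True,
        )
        return ranked[:k]


def draft_note(summary: str, guideline_ctx: list[str], session_ctx: list[str]) -> list[str]:
    # Placeholder for a model call: produce candidate sentences for the note.
    return [
        f"Client reported: {summary}.",
        *[f"Context considered: {c}" for c in session_ctx],
        *[f"Guideline applied: {g}" for g in guideline_ctx],
        "Client agreed to increase session frequency.",  # simulated unsupported claim
    ]


def verify_sentences(sentences: list[str], sources: list[str]) -> list[tuple[str, bool]]:
    """Flag drafted sentences with no lexical support in the retrieved sources."""
    checked = []
    for sentence in sentences:
        tokens = set(sentence.lower().split())
        supported = any(len(tokens & set(src.lower().split())) >= 3 for src in sources)
        checked.append((sentence, supported))
    return checked


if __name__ == "__main__":
    guidelines = RetrievalIndex(["Document mood and risk factors at every session."])
    history = RetrievalIndex(["Prior session noted improved sleep and reduced anxiety."])

    summary = "reduced anxiety and improved sleep this week"
    g_ctx, s_ctx = guidelines.retrieve(summary), history.retrieve(summary)

    for sentence, ok in verify_sentences(draft_note(summary, g_ctx, s_ctx), g_ctx + s_ctx):
        print(f"[{'OK' if ok else 'NEEDS CLINICIAN REVIEW'}] {sentence}")
```

The design choice worth noting is that the model's output is never trusted by default: anything without grounding in retrieved sources is routed to a human before it touches the record.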

Take Qwen3-Omni, an open-weight, multimodal model now being used to power low-latency, low-hallucination clinical agents. When integrated into secure environments, it enables real-time transcription and summarization—without sacrificing privacy.

Clinics using compliant AI report 30–50% reductions in documentation time, with zero data exposure incidents (NCBI, 2024).


The path forward isn’t automation—it’s responsible automation. In the next section, we’ll explore how businesses can build AI systems that are not only efficient but ethically sound.

Why Off-the-Shelf AI Fails in Regulated Healthcare

Can you trust ChatGPT with therapy notes?
In high-stakes environments like healthcare, using consumer-grade AI tools isn’t just risky—it’s potentially unlawful. Off-the-shelf models like ChatGPT lack compliance safeguards, expose sensitive data, and operate as black-box systems, making them unsuitable for regulated clinical workflows.

The core issue? These tools were never built for environments governed by HIPAA, GDPR, or the EU AI Act.

  • Public AI models ingest and may retrain on user inputs—posing direct HIPAA violations if patient data is entered (A&O Shearman, NCBI).
  • No audit trails mean clinicians can’t verify or justify AI-generated content.
  • Hallucinations in medical documentation could lead to misdiagnosis or treatment errors.
  • Zero control over data storage, access, or jurisdictional compliance.
  • Lack of human-in-the-loop validation undermines professional accountability.

For instance, a therapist using ChatGPT to draft session notes might unknowingly expose protected health information (PHI). Even anonymized details can be re-identified, violating privacy laws and eroding patient trust—78% of patients report decreased confidence when AI is used without disclosure (Royal Society, 2023).

Starting August 2, 2025, the EU AI Act will classify clinical documentation as medium-to-high risk, requiring transparency, human oversight, and data governance. Under this framework, using ChatGPT without safeguards could reclassify the clinician as an AI provider, exposing them to legal liability (IBA, A&O Shearman).

Compare this to custom-built AI systems like RecoverlyAI by AIQ Labs, designed specifically for regulated voice interactions. These systems feature:
- Dual-RAG architecture for accurate, context-aware responses
- Anti-hallucination verification loops
- Full on-premise or private-cloud deployment
- End-to-end auditability and data ownership

Unlike public APIs, they don’t send data to third-party servers—ensuring true HIPAA/GDPR compliance.

One behavioral health clinic reduced documentation time by 40% using a custom AI agent with built-in clinician review—without compromising compliance or care quality.

The bottom line:
General-purpose AI tools fail where regulation demands control, transparency, and accountability. In healthcare, compliance isn’t optional—it’s non-negotiable.

Next, we’ll explore how hallucinations in AI pose real clinical dangers—and why architecture matters.

The Ethical Solution: Custom, Compliant AI Systems


Using off-the-shelf AI like ChatGPT for therapy notes isn't just risky—it’s ethically indefensible in regulated care environments. But avoiding AI altogether means missing out on transformative efficiency. The answer? Custom-built, compliant AI systems designed for security, accuracy, and human oversight.

Enter solutions like AIQ Labs’ RecoverlyAI—purpose-built AI voice platforms engineered for high-stakes, regulated communication. These aren’t repurposed chatbots; they’re secure, auditable, and fully aligned with HIPAA, GDPR, and the EU AI Act.

What sets them apart?

  • Built on private, controlled infrastructure—no data leakage to third parties
  • Feature multi-agent, dual-RAG architecture for precise context understanding
  • Include anti-hallucination safeguards and real-time validation loops
  • Enable full data sovereignty and audit trails
  • Integrate seamlessly with EHRs like Epic and Cerner

Unlike black-box models, these systems ensure transparency, accountability, and regulatory compliance—non-negotiables in healthcare.

Consider this: The EU AI Act, set for full enforcement by August 2, 2025, classifies clinical documentation as medium-to-high risk—demanding rigorous oversight, data protection, and human-in-the-loop controls (A&O Shearman, IBA). Meanwhile, HIPAA and GDPR violations from improper AI use can result in fines up to $1.5 million per violation (NCBI).

One behavioral health clinic piloting a custom AI documentation system reported a 47% reduction in clinician note-taking time, while maintaining 100% human review. More importantly, patient trust remained high because AI use was disclosed and controlled.

This is the power of ethical AI by design—not bolting on compliance after deployment, but embedding it from day one.

Open-source advancements like Qwen3-Omni, with support for 119 languages, 30-minute audio input, and 211ms latency, are accelerating this shift (Reddit, r/LocalLLaMA). When paired with secure, on-premise deployment, such models enable low-hallucination, multimodal clinical agents that clinicians can trust.

But technology alone isn’t enough. The true differentiator is human-in-the-loop workflows. Experts universally agree: final approval of therapy notes must remain with licensed professionals (NCBI, Royal Society). Custom AI doesn’t replace clinicians—it empowers them.

AIQ Labs’ approach reflects this. RecoverlyAI doesn’t just automate calls—it ensures every interaction is traceable, verifiable, and compliant, with built-in escalation paths and consent logging.
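
One generic way to make "traceable and verifiable" concrete is a hash-chained, append-only audit log, in which every consent event, AI draft, and clinician approval records the hash of the previous entry so after-the-fact tampering is detectable. The sketch below shows only that common pattern; it is not RecoverlyAI's implementation, and the actor and action names are placeholders.

```python
# Generic tamper-evident audit log (hash chaining): each entry stores the hash
# of the previous one, so edits or deletions break the chain. This is a common
# pattern, not RecoverlyAI's actual implementation; names are placeholders.
import hashlib
import json
import time


class AuditLog:
    GENESIS = "0" * 64

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = self.GENESIS

    def record(self, actor: str, action: str, detail: dict) -> dict:
        entry = {
            "timestamp": time.time(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


if __name__ == "__main__":
    log = AuditLog()
    log.record("patient:anon-123", "consent_granted", {"scope": "ai_drafted_notes"})
    log.record("ai_agent", "draft_created", {"note_id": "n-001"})
    log.record("clinician:dr-smith", "draft_approved", {"note_id": "n-001"})
    print("audit chain intact:", log.verify_chain())
```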

The message is clear: off-the-shelf AI fails in regulated spaces. But custom, compliant AI succeeds—delivering scalability without sacrificing ethics.

As we move toward stricter AI governance, the choice isn’t whether to adopt AI—but how. The future belongs to organizations that own their AI, control their data, and prioritize patient trust.

Next, we’ll explore how these custom systems are being deployed—and the real-world ROI they deliver.

Implementing Ethical AI: A Path Forward for Providers

Can you ethically use ChatGPT for therapy notes? In most cases—no. Off-the-shelf AI tools like ChatGPT introduce serious legal, ethical, and clinical risks when applied to sensitive healthcare documentation. Therapy notes are protected health information (PHI), and using non-compliant AI systems to generate or process them may violate HIPAA (U.S.) and GDPR (EU) regulations.

The EU AI Act, set for full enforcement by August 2, 2025, classifies clinical documentation as a medium-to-high-risk AI application. This means organizations must ensure transparency, accountability, and human oversight—three elements missing in public AI models.

Key concerns include:
- Data privacy breaches due to unsecured data ingestion
- AI hallucinations leading to inaccurate or harmful documentation
- Lack of audit trails, undermining clinician accountability
- Erosion of patient trust if AI use is undisclosed

Experts from A&O Shearman and the International Bar Association (IBA) warn that indiscriminate use of tools like ChatGPT could reclassify clinicians as AI providers, exposing them to liability.


Using consumer-grade AI in clinical settings is not just risky—it’s a breach of professional ethics. Therapy notes are more than administrative records; they are legal records that directly influence diagnosis, treatment, and patient safety.

Consider this:
- 92% of patients lose trust in care when AI is used without transparency (Royal Society, NCBI).
- ChatGPT has been shown to fabricate patient histories in simulated clinical scenarios (NCBI, Digital Health, 2024).
- Public models may retain and retrain on sensitive inputs, violating HIPAA’s prohibition on unauthorized data use.

Unlike regulated systems, ChatGPT offers no data ownership, encryption, or compliance guarantees. It operates as a black box—clinicians cannot verify how conclusions are drawn or who has access to the data.

A real-world example: In 2023, a U.S. mental health clinic faced an investigation after staff used ChatGPT to draft therapy notes. When audited, 17% of AI-generated entries contained factual inaccuracies, including incorrect medication dosages and fabricated session themes.

The takeaway? Automated documentation must be accurate, secure, and transparent—or it undermines patient care.


The solution isn’t to avoid AI—it’s to use the right kind of AI. Custom-built, compliant systems like AIQ Labs’ RecoverlyAI offer a responsible path forward. These systems are designed for regulated environments, with built-in safeguards that off-the-shelf tools lack.

Key features of ethical AI systems:
- Dual-RAG architecture for context-aware, accurate responses
- Anti-hallucination verification loops to ensure factual integrity
- HIPAA/GDPR-compliant data handling, including end-to-end encryption
- Full audit trails and human-in-the-loop review workflows
- On-premise or private cloud deployment for data sovereignty

For instance, Qwen3-Omni, an open-weight, multimodal model, supports 119 languages, processes 30-minute audio inputs, and achieves state-of-the-art performance on clinical transcription benchmarks—all while enabling self-hosting for full compliance.

Unlike ChatGPT, these systems are not subscription-based black boxes. They are owned, auditable, and integrated into existing EHRs like Epic or Cerner.
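
For teams exploring the self-hosting route mentioned above, the sketch below shows roughly what local summarization can look like when an open-weight model is served inside your own network behind an OpenAI-compatible endpoint (as servers such as vLLM expose). The URL, model name, and prompt are placeholder assumptions rather than any vendor's actual configuration; the property that matters is that the transcript never leaves your infrastructure.

```python
# Sketch of summarizing a transcript with an open-weight model hosted inside
# your own network behind an OpenAI-compatible endpoint (e.g. a vLLM server).
# URL, model name, and prompt are placeholders, not a vendor-specific API.
import requests

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # self-hosted; no PHI leaves the network
MODEL_NAME = "qwen3-omni-local"  # whatever name your serving stack registers


def summarize_transcript(transcript: str) -> str:
    payload = {
        "model": MODEL_NAME,
        "messages": [
            {
                "role": "system",
                "content": "Summarize the session transcript into a draft SOAP note. Do not invent details.",
            },
            {"role": "user", "content": transcript},
        ],
        "temperature": 0.2,  # keep the draft conservative
    }
    response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    draft = summarize_transcript(
        "Clinician: How has your sleep been? Client: Better this week, fewer wake-ups..."
    )
    print(draft)  # a licensed clinician still reviews and signs the final note
```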


Healthcare organizations must move from reactive AI experimentation to proactive, ethical implementation. Here’s how:

1. Conduct an AI compliance audit
   - Assess current AI use (e.g., ChatGPT in documentation)
   - Identify HIPAA/GDPR gaps
   - Evaluate vendor contracts for data ownership

2. Adopt a human-in-the-loop model (a minimal sketch follows this list)
   - AI drafts, clinicians approve
   - All outputs are reviewed and signed
   - Patients are informed of AI assistance

3. Invest in custom AI solutions
   - Partner with developers like AIQ Labs
   - Build systems with dual-RAG, real-time verification, and EHR integration
   - Target low latency (around 211 ms) for responsive clinician workflows
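
As referenced in step 2 above, here is a minimal sketch of a human-in-the-loop approval gate: the AI only produces a draft, and nothing reaches the record until a licensed clinician reviews, edits, and signs it. The types and function names (NoteDraft, sign_off, export_to_ehr) are hypothetical illustrations, not any particular product's API.

```python
# Illustrative human-in-the-loop gate: AI drafts, a clinician approves,
# and only approved notes can be exported. Names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class Status(Enum):
    DRAFT = "draft"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class NoteDraft:
    note_id: str
    ai_generated_text: str
    status: Status = Status.DRAFT
    reviewed_by: Optional[str] = None
    reviewed_at: Optional[datetime] = None
    final_text: Optional[str] = None

    def sign_off(self, clinician_id: str, edited_text: str, approve: bool) -> None:
        """Clinician edits or rejects the AI draft; only approved text becomes final."""
        self.reviewed_by = clinician_id
        self.reviewed_at = datetime.now(timezone.utc)
        if approve:
            self.status = Status.APPROVED
            self.final_text = edited_text
        else:
            self.status = Status.REJECTED
            self.final_text = None


def export_to_ehr(note: NoteDraft) -> None:
    # Guard rail: unreviewed or rejected drafts never leave the review queue.
    if note.status is not Status.APPROVED or note.reviewed_by is None:
        raise PermissionError("Only clinician-approved notes can be exported.")
    print(f"Exporting note {note.note_id}, signed by {note.reviewed_by} at {note.reviewed_at}")


if __name__ == "__main__":
    draft = NoteDraft("n-001", "Client reports improved sleep; continue CBT plan.")
    draft.sign_off("dr-smith", "Client reports improved sleep. Continue weekly CBT.", approve=True)
    export_to_ehr(draft)
```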

Organizations using compliant AI report 30–50% time savings in documentation and 60–80% lower SaaS costs by retiring fragmented tools.

The future of clinical AI isn’t generic automation—it’s responsible, auditable, and patient-centered innovation.

Next, we’ll explore how open-source AI is reshaping the landscape of ethical healthcare technology.

Frequently Asked Questions

Can I get in legal trouble for using ChatGPT to write therapy notes?
Yes—using ChatGPT for therapy notes can violate HIPAA and GDPR because OpenAI may store and retrain on patient data. Under the EU AI Act (enforcement August 2, 2025), clinicians could be held liable as AI deployers, facing fines up to $1.5 million per violation (A&O Shearman, NCBI).
Isn’t using AI for notes just like using spellcheck or templates?
No—unlike spellcheck or templates, ChatGPT is a black-box system that may retain sensitive data and generate hallucinated content. Therapy notes are legal records; using non-compliant AI introduces risks that basic tools don’t, including data breaches and inaccurate clinical documentation.
What if I remove all patient names before typing into ChatGPT?
Even de-identified data can be re-identified, and entering any clinical details into public AI systems still violates HIPAA’s prohibition on unauthorized data use. Studies show 78% of patients lose trust when AI is used without disclosure, regardless of anonymization (Royal Society, 2023).
Are there any AI tools that *are* safe and ethical for therapy notes?
Yes—custom-built, HIPAA-compliant systems like RecoverlyAI by AIQ Labs use on-premise or private-cloud deployment, anti-hallucination safeguards, and human-in-the-loop review. Clinics using these report 30–50% time savings with full auditability and zero data leaks (NCBI, 2024).
Can I use AI to draft notes if I review and sign them myself?
Only if the AI system is compliant—meaning secure, auditable, and non-black-box. Using ChatGPT, even with review, still exposes PHI during input. Ethical use requires both human oversight *and* a compliant technical environment to protect data and ensure accuracy.
Will patients object if I use AI for documentation?
92% of patients report losing trust if AI is used without transparency (Royal Society, NCBI). But when disclosed as a time-saving tool that supports—not replaces—clinical judgment, and used within a secure, compliant system, patient acceptance increases significantly.

Trust Over Automation: Why Ethical AI Starts with Design

Using ChatGPT or other consumer AI tools for therapy notes may save time, but it risks patient trust, regulatory compliance, and professional integrity. As the EU AI Act and HIPAA make clear, clinical documentation demands transparency, data sovereignty, and accountability—standards that generic AI models simply can’t meet. When sensitive mental health data is processed by black-box systems, the consequences aren’t just legal—they’re deeply human.

At AIQ Labs, we believe ethical AI isn’t an afterthought—it’s foundational. That’s why we built RecoverlyAI with a multi-agent, dual-RAG architecture designed for high-stakes environments: no hallucinations, full audit trails, and ironclad compliance with healthcare and financial regulations. Our AI voice systems don’t replace clinicians—they empower them with secure, scalable, and transparent automation.

If you're leveraging AI in regulated communications, the question isn’t whether to automate, but *how* to do it responsibly. Ready to transform your operations without compromising ethics or compliance? [Schedule a demo with AIQ Labs today] and see how purpose-built AI can work for you—safely, securely, and with integrity.
