
Are Your AI Chats Discoverable? What Lawyers Need to Know After United States v. Heppner
This article examines the recent Heppner ruling and its implications for attorney-client privilege and work product protection when lawyers and clients use AI tools. But we go further: we argue that the court’s reasoning is fundamentally flawed. It treats AI as categorically different from every other cloud-based technology lawyers use daily, and it creates an inversion in which attorney mental impressions that need not even be produced to the client suddenly become discoverable by adversaries simply because the attorney used AI to develop them. We conclude with practical guidance for protecting yourself and your clients, including specific engagement letter language, while the profession works to develop more coherent doctrine.
The convenience of generative AI has already fundamentally changed how many lawyers practice law. We draft motions, analyze contracts, research case law, and brainstorm legal strategy using tools like ChatGPT, Claude, and Gemini. In many ways, these AI “assistants” function like a paralegal or junior associate, and in some cases are much more reliable and effective.
But there’s a critical difference between an AI platform and a staff member or young lawyer: according to one federal judge, your conversations with an AI tool may not be privileged or confidential.
On February 17, 2026, Judge Jed S. Rakoff of the Southern District of New York issued a written decision in United States v. Heppner that should give every lawyer pause. The ruling appears to be the first of its kind nationwide, directly addressing whether communications with a publicly available AI platform are protected by attorney-client privilege or the work product doctrine. The answer, on the facts of that case, was an unequivocal “no.”
But as we’ll explore, the court’s reasoning raises serious questions and may create precedent that is both logically inconsistent and practically damaging. The decision has sent shockwaves through the legal profession by suggesting that AI chats may be discoverable and that privilege can be waived simply by using these tools. This article explains the ruling and its practical implications, and offers a critical analysis of why the court’s reasoning fails to distinguish AI from the countless other cloud-based technologies lawyers rely on every day without similar consequences. It also provides concrete steps for protecting your law practice and your clients, including model engagement letter provisions, while calling on the profession to push back against doctrine grounded in technological ignorance and anxiety rather than precedent and principle.
The Heppner Case: A Cautionary Tale
Bradley Heppner, the former CEO of a Dallas financial services company, was indicted on charges of securities fraud, wire fraud, and related offenses. After receiving a grand jury subpoena and learning he was the target of a federal investigation, Heppner retained defense counsel. During the critical period before his indictment, he independently used a consumer-grade AI platform to draft approximately thirty-one documents outlining potential defense strategies, legal arguments, and factual analyses.
Here’s where it gets complicated: Heppner incorporated information he had received from his defense attorneys into his AI prompts. He then shared the AI-generated documents with his lawyers. When the FBI executed a search warrant at Heppner’s residence, they seized devices containing these AI-generated materials. Heppner’s attorneys asserted privilege.
The government moved to compel production. Judge Rakoff agreed with the prosecution on every element.
Two Separate Issues: Attorney-Client Privilege and Work Product
The court’s analysis addressed both major protective doctrines that lawyers typically rely upon, and found that neither applied under the circumstances.
The Attorney-Client Privilege Problem
For attorney-client privilege to attach, communications must meet specific requirements: they must occur between privileged persons (an attorney and client or their agents), be made in confidence with a reasonable expectation of privacy, and be for the purpose of obtaining or providing legal advice.
The court found that Heppner’s AI communications failed these tests:
No attorney-client relationship. An AI tool is not a lawyer. It has no law license, owes no duty of loyalty, cannot form an attorney-client relationship, and is not bound by confidentiality obligations or professional responsibility rules. When prompted for legal advice, these tools typically respond that they are not lawyers and cannot provide such advice.
No reasonable expectation of confidentiality. This finding carries the broadest implications. The court examined the AI platform’s terms of service and privacy policy, which explicitly permitted data collection, retention, use for model training, and disclosure to third parties including governmental authorities. Judge Rakoff stated from the bench that the defendant “disclosed it to a third-party, in effect, AI, which had no obligation of confidentiality.”
No retroactive privilege. The defense argued that because Heppner eventually shared the AI outputs with his attorneys, they should become privileged. The court rejected this reasoning, explaining that privilege must exist at the time of communication; it cannot be manufactured after the fact by routing previously disclosed materials through an attorney.
The Work Product Doctrine Problem
The work product doctrine, rooted in Hickman v. Taylor, shields materials prepared in anticipation of litigation by a party or their representative. Its core purpose is to create a zone of privacy in which attorneys can develop legal theories and strategies without fear of disclosure to adversaries.
Here too, the court found no protection:
Not prepared at attorney direction. Defense counsel conceded that Heppner created the documents “of his own volition” and that the legal team “did not direct” him to run the AI searches. Work product protection is strongest when materials are prepared by or at the direction of counsel. Heppner acted entirely independently.
Did not reflect attorney mental processes. Although Heppner undoubtedly prepared the materials in anticipation of litigation (he knew he was under investigation), they reflected his own thinking rather than the mental impressions or legal strategies of his attorneys.
Waiver through disclosure. Even if the documents could otherwise qualify, disclosure to a third-party AI platform effected a waiver by destroying the confidentiality that work product protection presupposes.
The Contrasting Michigan Decision
Not all courts have reached the same conclusions. In Warner v. Gilbarco, Inc. (E.D. Mich. Feb. 10, 2026), the court denied a motion to compel discovery into a party’s use of generative AI tools in litigation preparation. The court held that AI-assisted internal analysis and drafting were protected by the work product doctrine and that use of ChatGPT did not waive that protection absent disclosure to an adversary.
The Michigan court reasoned that generative AI tools are “tools, not persons,” and that waiver of work product protection requires disclosure to an adversary or in a manner that substantially increases the likelihood of adversarial access. As the court observed, “no cited case orders the production of what Defendants seek here: a litigant’s internal mental impressions reformatted through software.”
These contrasting decisions highlight that courts are still working through how traditional privilege and work product principles apply to this new technology. But they also reveal a fundamental tension in Heppner’s reasoning, one that deserves closer scrutiny.
Did the Heppner Court Get It Wrong? A Critical Analysis
While Heppner has generated significant attention, there are compelling reasons to believe its core reasoning is flawed, which, if applied consistently, would create untenable consequences for the entire legal profession and legal clients everywhere.
The Logical Inconsistency: Why Is AI Different?
Consider how the court’s reasoning would apply to other technology tools lawyers use every day. When an attorney types a search query into Google, Westlaw, or LexisNexis, something like “elements of securities fraud Second Circuit” or “statute of limitations personal injury [state],” they are transmitting information to a third-party platform that:
- Collects and stores that query
- Uses it to improve services and algorithms
- May share data with third parties
- Has terms of service that disclaim confidentiality
- Could be compelled to produce records in response to legal process
The same is true for virtually every cloud-based tool lawyers rely upon: practice management software such as Clio and MyCase, cloud storage, email providers, document collaboration platforms like Google Docs, task management apps, notes apps, and legal research databases. Under the logic of Heppner, every time an attorney’s search query or document edit reveals something about their legal thinking, they would arguably be “disclosing” information to a third party with no obligation of confidentiality.
Yet no court has ever suggested that using Westlaw waives work product protection over an attorney’s research or case strategy, or that a Google search revealing the legal issues you’re investigating constitutes discoverable evidence of your mental impressions.
The Court Conflated Two Different Things
Judge Rakoff’s analysis appears to conflate two very different things: the content of AI outputs (the thirty-one documents Heppner generated, which contained detailed legal strategy) and the act of using a tool (the prompts and interactions themselves).
The court stated that Heppner “disclosed it to a third-party, in effect, AI, which had no obligation of confidentiality.” But this framing treats the AI platform as a recipient of confidential communications rather than as a tool for processing information, which is how we treat every other technology platform.
The Michigan court’s approach is more analytically sound. By recognizing that AI programs are “tools, not persons,” the Warner court applied the correct framework: waiver requires disclosure to an adversary or in circumstances making adversarial access likely. Using a software tool, even one that processes and stores your inputs, is not the same as disclosing information to a person or entity adverse to your interests.
The “Training Data” Red Herring
Much of the concern about AI platforms centers on the fact that consumer-tier tools may use inputs to train their models. But this is a distinction without a meaningful legal difference:
- Google uses your search queries to improve its algorithms
- Westlaw analyzes search patterns to refine its relevance rankings
- Cloud storage providers may scan content for various purposes
- Email providers process message content for spam filtering and features
The fact that an AI model might incorporate patterns from your usage into its training is functionally similar to how any machine learning-enhanced service operates. If “your data might influence the service’s algorithms” equals “disclosure to a third party,” then virtually all modern cloud-based legal tools would implicate the same waiver concerns.
The Fundamental Question the Court Avoided
The real question the court should have addressed is this: Does using a technology tool to process, analyze, or develop your thinking, or analyze or draft a document constitute “disclosure” to that tool’s provider in any meaningful sense?
If I use a calculator to work through damages calculations, I haven’t “disclosed” my damages theory to Texas Instruments. If I use document review software to analyze a production, I haven’t “disclosed” my case strategy to the software vendor. If I use cloud-based voice dictation to draft a memo, I haven’t “disclosed” privileged information to Evernote, Apple, or Google.
The court in Heppner never adequately explained why AI is categorically different from these other tools. The opinion focuses heavily on what Heppner uploaded into the AI platform, and on the content of what Heppner created (detailed legal strategy documents) rather than rigorously analyzing whether the act of using the tool constitutes disclosure and whether it should constitute disclosure.
What the Court Got Right (and Wrong)
To be fair to Judge Rakoff, there are aspects of Heppner that are defensible on the specific facts:
What the court got right:
- Heppner was a client, not an attorney, so his materials were not protected attorney work product in the first place
- He acted entirely independently, without any attorney direction
- He created detailed, substantive documents outlining legal strategy, not mere research queries
- The documents were then seized by the government, making them directly at issue
What the court got wrong:
- Treating AI platforms as categorically different from other technology tools without adequate justification
- Suggesting that a tool’s terms of service determine whether using it constitutes “disclosure,” a standard that would imperil virtually all cloud-based legal technology
- Failing to distinguish between using a tool to develop thinking versus communicating completed thoughts to a third party
- Creating a framework that, if applied consistently, would make much of modern legal practice problematic
The Perverse Outcome
Here’s the irrational result of Heppner’s reasoning: under this framework, the lawyer who uses AI to brainstorm case theories has potentially created discoverable evidence of their mental impressions. But the lawyer who does the exact same brainstorming in their head, on a yellow legal pad, in Google Docs, or in a Word document saved locally, has not.
The legal protection depends not on the substance of the attorney’s thinking, but on the medium through which they processed that thinking.
That’s not a principled distinction. It’s an accident of technological ignorance and anxiety, applied to novel tools that courts and their clerks don’t yet fully understand.
The Conflict with State Ethics Rules and Work Product Ownership
There’s another fundamental problem with Heppner’s reasoning that has received insufficient attention: it cannot be reconciled with how all fifty states treat attorney work product under their Rules of Professional Responsibility and client file retention rules.
Under the ethics rules and case law in virtually every jurisdiction, attorney work product (particularly “opinion” work product reflecting mental impressions, conclusions, opinions, and legal theories) receives the highest level of protection. This protection is so strong that it even limits what attorneys must provide to their own clients.
When a client terminates representation and requests their file, attorneys are generally required to turn over:
- Documents the client originally provided
- Correspondence with the client and third parties
- Final work product prepared for the client’s benefit
- Pleadings, filed documents, and discovery materials
- Any documents the client paid for
But attorneys are typically not required to produce, even to the client who hired them:
- Internal notes and memoranda reflecting attorney mental impressions
- Draft documents that were never finalized or delivered to the client
- Research notes and legal strategy documents
- Internal communications among firm attorneys about the case
- The attorney’s personal analysis of case strengths and weaknesses
The rationale underlying this distinction is fundamental to the attorney-client relationship: these materials reflect the attorney’s professional judgment and analytical process. They belong to the attorney, not the client, precisely because they represent the attorney’s thinking rather than a deliverable prepared for the client’s benefit.
The Absurd Inversion
Now consider the implications of Heppner’s reasoning in light of these well-established rules.
Under current client file doctrine, if an attorney writes a memo to herself analyzing the weaknesses in her client’s case, that memo is her work product. She need not produce it even to the client. It reflects her mental impressions and professional judgment, the very core of what work product protection exists to safeguard.
But under Heppner, the moment that same attorney uses AI to develop that same analysis (to brainstorm a legal theory, identify case weaknesses, or draft an internal strategy memo), those mental impressions become potentially discoverable by adversaries.
This creates an absurd inversion: materials so protected they don’t even belong to the client suddenly become available to opposing counsel, simply because the attorney used a particular software tool to develop them.
If Heppner is correct, an attorney’s handwritten notes analyzing “why we might lose this case” are protected work product that need not be produced to anyone. But the same analysis developed through an AI prompt becomes a gift to the opposing party.
That cannot be right. The protection of attorney mental impressions has never depended on the medium through which those impressions were developed. A legal pad, a Dictaphone, a word processor, or an AI assistant: these are all tools attorneys use to think through problems. The substance of the analysis is what matters, not the technology used to generate it.
Rules of Professional Responsibility Require Reconsideration
Courts addressing these issues should consider how their rulings align with state ethics frameworks. The Rules of Professional Responsibility in every jurisdiction recognize a category of attorney work product that is so integral to the attorney’s professional function that it remains the attorney’s property even after the representation ends.
ABA Model Rule 1.16(d) requires attorneys to “take steps to the extent reasonably practicable to protect a client’s interests” upon termination, including “surrendering papers and property to which the client is entitled.” The phrase “to which the client is entitled” has been consistently interpreted to exclude attorney mental impressions, internal strategy documents, and draft work product.
If these materials are so protected that clients cannot demand them, how can adversaries obtain them simply because the attorney used AI in their development? The answer, under any coherent framework, is that they cannot.
The Heppner court either failed to consider these implications or implicitly rejected decades of settled law regarding work product ownership, without acknowledgment or analysis. Future courts should not make the same mistake.
The Chilling Effect Is Real
If Heppner’s reasoning is allowed to stand and shape future case law on how AI use affects confidentiality and privilege, the precedent risks chilling the legal profession’s ability to use AI effectively. If attorneys fear that their AI-assisted research, brainstorming, and drafting may be discoverable, they face an impossible choice:
Option 1: Avoid AI entirely, sacrificing efficiency and capability while competitors who use these tools gain advantages.
Option 2: Use AI freely and accept the risk that opposing counsel may demand production of your prompts, research queries, and analytical process.
Option 3: Use AI only for “safe” tasks that reveal nothing about legal strategy, which eliminates most of the tool’s value for substantive legal work.
None of these options serves clients well. The lawyers who refuse to use AI tools will be less efficient and less thorough, and will ultimately provide worse service. Meanwhile, their adversaries may be using these tools freely, knowing that discovery of their AI usage is at worst a minor tactical disadvantage.
And consider the asymmetry: sophisticated parties with the resources to use enterprise AI platforms with contractual protections will have one set of rules, while solo practitioners and small firms relying on consumer tools will face potential exposure. This creates a two-tiered system of work product protection based not on the nature of the work, but on the budget available for technology. At bottom, the profession should pressure AI companies to make confidentiality the default, even on free tiers, with user inputs excluded from model training.
The Critical Distinction: Public vs. Enterprise AI Tools
Despite the problems with Heppner’s reasoning, the distinction between consumer and enterprise AI platforms is likely to remain significant in future litigation, at least until courts develop a more coherent framework.
Consumer AI tools (free or individual paid tiers of ChatGPT, Claude, Gemini, Copilot, and similar platforms) typically collect user inputs and outputs, may use that data to train their models, and reserve the right to disclose information to third parties and government authorities. The distinction between free and paid plans often matters less than users assume: both OpenAI and Anthropic use conversations from free and individual paid plans for model training by default. Users can opt out, but opting out of training does not eliminate the platform’s rights to disclose data in response to legal process.
Enterprise AI tools often provide contractual confidentiality protections, data isolation, commitments not to train on customer data, SOC 2 Type II certification, and formal data processing agreements. These platforms offer a more defensible position, as the contractual framework supports arguments that reasonable expectations of confidentiality exist.
Legal-specific AI platforms often go further, offering zero data retention policies, compliance with legal industry standards, and terms specifically designed to preserve privilege and work product protections.
The Bigger Problem: What About Your Clients?
While lawyers can control their own AI usage, Heppner highlights an equally pressing concern: your clients may be unwittingly destroying privilege and creating discoverable evidence every time they paste your advice or documents into a chatbot prompt.
Consider these scenarios, which the court’s reasoning would likely reach:
- A client uses ChatGPT to summarize your legal memo before a board meeting
- An executive asks Claude to help them “think through” potential responses to a regulatory inquiry
- An employee pastes your demand letter into an AI tool to draft a reply
- A party uses AI to organize documents for production, including privileged communications
In each case, the act of sharing privileged material with a consumer AI platform may constitute a waiver, not just over the AI communications themselves, but potentially over the original attorney-client communications or attorney work product as well.
Judge Rakoff found that Heppner waived privilege over the information he fed into the AI tool. The government successfully argued that sharing privileged communications with a third-party AI platform constituted waiver of the privilege over the original attorney-client communications themselves. The privilege belongs to the client, but so does the responsibility to maintain it.
Protecting Yourself and Your Clients: Practical Steps
Despite our view that Heppner’s reasoning is flawed, it is now precedent in the Southern District of New York, one of the most influential federal courts in the country. Until it is overturned or distinguished, lawyers must know how to navigate its implications.
For Your Own Practice
- Use enterprise-grade AI platforms when possible. Select platforms that offer contractual confidentiality protections, SOC 2 Type II certification, data processing agreements, and explicit commitments not to train on customer data. Enterprise tiers of major platforms and legal-specific AI tools generally provide these protections. If you can’t afford an enterprise-level subscription, make sure you opt out of allowing the platform to use your prompts and data to train its models.
- Establish a “lawyer-in-the-loop” requirement. Work product protection in Heppner failed in part because the client acted alone. Any use of AI for legal-adjacent work (summarizing notes, analyzing contracts, researching issues) should be supervised and directed by counsel whenever possible, so that, at the very least, the lawyer can ensure that confidential or client-identifying information is not included in AI prompts.
- Document attorney direction. When directing clients or staff to use AI tools, document that instruction. The court in Heppner specifically identified counsel direction as the key factor that could have changed the result.
- Use hypotheticals and anonymization. When exploring legal concepts or drafting arguments, consider using anonymized facts or hypotheticals rather than uploading raw confidential data, even to secure platforms. That obviously limits how you can use AI, especially when many lawyers have discovered the usefulness of creating client-specific GPTs or projects to organize work for a client on the AI platform.
- Treat AI prompts as potentially discoverable. Until the law develops further, assume that detailed prompts containing case strategy could be subject to discovery requests, and craft your usage accordingly.
For Client Education and Protection
- Update your engagement letters. Include explicit disclosure about AI use and risks. Sample language: “You should be aware that inputting confidential information, including any communications with our firm or information relating to your legal matter, into consumer artificial intelligence platforms (such as ChatGPT, Claude, Gemini, or similar tools) may waive attorney-client privilege and work product protection over that information. This waiver may extend to our underlying communications as well. We strongly advise against using consumer AI platforms to analyze, summarize, discuss, or otherwise process any information related to your legal matter without first consulting with us. If you wish to use AI tools in connection with your matter, please contact us to discuss appropriate safeguards before you upload anything specific to your matter or anything we have provided you, such as draft documents or emails containing our analysis of your case.”
- Address AI in litigation hold notices. Update your preservation notices to address AI-generated materials and communications with AI platforms as potentially discoverable ESI.
- Include AI provisions in client onboarding. Make the AI confidentiality discussion part of your standard intake process. Don’t assume clients understand the distinction between a “private-feeling” interface and an actual confidential communication.
- Create client-facing guidance documents. Develop a one-page summary of AI best practices that you can share with clients, particularly corporate clients whose employees may use AI tools without understanding the implications.
Protecting Your Work Product from Client Misuse
The concern about clients inputting your advice and work product into free AI platforms implicates multiple issues:
Copyright. When clients copy your legal memoranda, research, and other work product into AI platforms whose terms permit using inputs to train models, they may be authorizing reproduction of your copyrighted work. While the extent of copyright protection for legal work product varies, engagement letters can address this.
“All memoranda, research, briefs, and other work product prepared by our firm constitute our intellectual property and copyrighted materials. You may not reproduce, distribute, or input these materials into any artificial intelligence platform, machine learning system, or similar technology without our express written consent.”
Privilege waiver. As discussed above, client disclosure of your privileged communications to AI platforms may waive privilege not just over the AI interaction but over the underlying communications. Your engagement letter should make clear that the client bears responsibility for maintaining privilege and that disclosure to third parties, including AI platforms, may constitute waiver.
Contractual protections. Consider adding specific prohibitions to your engagement agreements:
“Client agrees not to input, upload, transmit, or otherwise provide any attorney-client communications, work product, or confidential information relating to this representation to any third-party artificial intelligence platform, chatbot, or similar service without the prior written consent of the Firm. Client acknowledges that such disclosure may waive applicable privileges and protections and may violate copyright protections in Firm work product.”
What Should Happen Next
The legal profession should not passively accept Heppner as the final word on these issues. If the decision is appealed, or when other courts address these questions, they should consider:
- The tool-versus-person distinction. Using a technology tool to process information is fundamentally different from communicating with a human third party. Courts should articulate a clear standard for when tool usage crosses the line into “disclosure.”
- Consistency with existing technology. Any rule about AI must be reconcilable with how we treat legal research platforms, cloud storage, practice management software, and other tools that process potentially sensitive information. A rule that treats AI as uniquely dangerous cannot be squared with decades of legal practice using cloud-based tools.
- The attorney direction factor. The court rightly noted that attorney-directed use of AI might be treated differently. Future cases should develop this distinction more fully and establish clear guidelines for when attorney involvement creates protection.
- The nature of the “disclosure.” There’s a meaningful difference between a search query or analytical prompt (which reveals something about your thinking) and a completed work product document (which is your thinking). Courts should consider where on this spectrum different AI use cases fall. Any rule should also align with the Rules of Professional Responsibility: in most states, attorney work product and unfiled or unserved drafts in the attorney’s file need not be produced even to the client who requests the client file.
- The practical implications. A rule that makes AI uniquely dangerous for legal work will not prevent AI adoption; it will simply drive it underground or create a two-tiered system where well-resourced parties have protections that others lack.
Lawyers should make their voices heard on these issues. File amicus briefs in relevant cases. Write bar journal articles. Engage with ethics committees and continuing legal education programs. The legal profession should not cede the development of this doctrine to courts that may not fully understand how these technologies work, or how those technologies are, in principle, no different from the tools we’ve relied on for decades.
Looking Ahead
The Heppner decision is an early judicial statement on AI and privilege, and its limitations in scope and precedential value leave many questions unresolved. Future cases will likely test:
- Whether enterprise AI platforms with robust confidentiality agreements receive different treatment
- Whether AI tools directed by counsel qualify for Kovel-type protection
- How work product doctrine analysis changes when attorneys actively supervise AI-assisted preparation
- Whether courts will adopt the Michigan approach treating AI as a tool rather than a third party
- How to reconcile AI-specific rules with treatment of other cloud-based legal technology
For now, Heppner establishes a cautious baseline in at least one jurisdiction: consumer AI platforms may be treated as third parties, and disclosure to them may carry privilege consequences. But this baseline rests on reasoning that is difficult to defend when examined closely, and that creates perverse incentives for legal practice.
The practice of law is changing, and AI will undoubtedly play an increasingly important role. The question is whether courts will develop doctrine that sensibly addresses these tools, or whether we’ll be stuck with rules that treat AI as uniquely dangerous while ignoring the identical issues presented by every other cloud-based technology lawyers use.
The lawyers who thrive will be those who harness AI’s power while maintaining appropriate safeguards. But the profession as a whole has a stake in ensuring that those safeguards are grounded in principle, not in unfounded technological anxiety.
This article is for informational purposes only and does not constitute legal advice. The law regarding AI and privilege is rapidly evolving, and attorneys should conduct their own research and analysis based on the specific facts and jurisdiction at issue.
Stay Ahead of AI Law as It Develops
The Heppner ruling is just the beginning. As AI becomes embedded in legal practice, courts, bar associations, and ethics committees will continue issuing guidance that directly impacts how you can use these tools.
The Liberated Lawyer is a membership community for forward-thinking attorneys who want to confidently embrace AI while staying on the right side of ethics rules and protecting their clients. Members get:
- Real-time updates when rulings like Heppner drop, so you’re never caught off guard
- Ready-to-use templates like the engagement letter language and client notices discussed in this article
- Deep-dive training modules on integrating AI into your practice ethically and effectively
- A private community of lawyers navigating these same challenges
The legal landscape is changing fast. Your career survival depends on your ability to adapt.
Join The Liberated Lawyer: https://community.liberatedlawyer.co/join-liberated-lawyer


