Legal AI: Adoption in a Risk-Averse Profession
CCV Founding Investor Andrew Carton (WG26) provides insights on AI applications in the legal space
Welcome back to Center City Ventures Insights, a Substack featuring musings on startups + entrepreneurship from the investment team at Center City Ventures, an early-stage fund financed and operated by members of Wharton’s MBA Class of 2026. You can learn more about us here :) Today, we have Founding Investor Andrew Carton outlining lessons in legaltech.
Two lawyers and an LLM walk into a bar. Sounds like the start of a bad joke. Yet, in June 2023, the premise materialized to dramatic effect in New York.
Two attorneys at the law firm Levidow, Levidow & Oberman filed a personal injury suit against the airline Avianca. In their suit, they claimed their client suffered a severe knee injury on an Avianca flight when he was struck by a metal serving cart. They advanced their claims with seemingly relevant precedents, including court opinions addressing whether a statute of limitations could be extended due to bankruptcy.
When Avianca’s lawyers attempted to verify the legal citations, however, they hit a dead end. Upon deeper review, a New York federal judge determined the opinions and quotes never existed.
The two attorneys in question had asked ChatGPT to help them prepare the court filing, and the model hallucinated, crafting an argument based on fabricated precedent. To compound the error, the chatbot assured the lawyers the “cases I provided are real and can be found in reputable legal databases.” Both lawyers were sanctioned, ordered to pay a $5,000 fine, and instructed to notify each judge falsely identified as the author of a nonexistent case ruling.
Talk to a lawyer – whether in big law, mid law, solo practice or in-house – about the promise of AI in the legal profession and he or she will undoubtedly cite this story. My conversations over the past few weeks all followed a similar track: “We’re open to technological change and actively track the tools available, but hesitant to try anything that will hurt our reputation. Have you heard the story of . . .” I believe the case reflects worse on the lawyers than on the technology, but it remains top of mind nearly 18 months later. The fear of censure, reprimands and loss of trust from clients or other legal practitioners serves as a significant barrier to adopting nascent legal technologies. With the risk-averse attorney in mind, I’ve outlined five key lessons for investors, industry observers and skeptics looking to understand the opportunities for AI in the legal profession.
Attorneys are slow to adopt tech solutions but, once adopted, the solutions become fully entrenched
The legal profession is notoriously averse not only to risk, but to change writ large. Don’t let an attorney convince you the titans of law were also titans of technology. Look no further than the U.S. Supreme Court for a prime example – the court used a printing press until the 1980s. Yet talk of the legal profession as a tech vacuum is similarly unfounded. While historically slow to embrace technology, law firms have recognized the need for digital transformation over the past two decades. Most corporate (big law) firms today utilize a combination of machine learning and document automation tools. Kira, a leader in contract analysis, has been standard fare in due diligence processes since the pandemic. Contract Express (fka Business Integrity), an automation tool that creates templates for engagement letters and other boilerplate documents, was acquired by Thomson Reuters in 2015.
These applications, now commonplace across big law, automate high-volume, highly manual processes with solutions that fit seamlessly into attorneys’ existing workflows. They streamline tasks and provide valuable insights, but also require the guidance of an attorney to use them strategically. Law firms remain sophisticated buyers of legal software, but are often hesitant to be early movers.
AI solutions that automate formulaic, low-risk tasks are best positioned for adoption
Imagine you’re a personal injury attorney and your client broke her hip in New York. She was leaving a restaurant; the steps were creaky, the lights were broken. You want to know what a broken hip is worth in New York, and reported verdicts and settlements follow similar patterns depending on the circumstances of the accident. Once you synthesize the facts of the accident with the value of the case, you then need to draft a demand letter to the defendant’s insurance company. Or imagine you’re a criminal defense lawyer preparing to cross-examine the prosecution’s key witness, a prominent physician. As part of your cross-examination, you want to find every instance in which the doctor has gone on public record with an opinion on similar matters. In either hypothetical, legal AI applications can identify patterns or uncover insights – tasks that would otherwise require hours of a paralegal’s or junior lawyer’s time.
These are newly attainable use cases – as LLMs have improved to handle workflows with unstructured data – yet they aren’t created equal. Automating the drafting of demand letters is low-risk and ripe for implementation today – a reasonably seasoned attorney can verify a letter’s accuracy with little additional work. The application replaces a rote workflow and frees lawyers to think creatively about the arguments behind their demands. EvenUp is a great example of this in practice today. By replacing the demand letter process for personal injury firms with an AI-enabled service, EvenUp facilitates the filing of 1,000+ demands and MedChrons weekly. With a clear value proposition and limited downside, EvenUp already commands ~80% of the pricing power of Litify, a mission-critical, core operating system for PI firms.
The value proposition is equally compelling in identifying verdict patterns and uncovering expert opinions, but the chance of critical hallucinations compounds the downside (and hence attorneys’ hesitation). In May, researchers at Stanford’s RegLab and Human-Centered AI Institute evaluated the accuracy of two commonly used legal research tools: Lexis+ AI (from LexisNexis) and Westlaw AI-Assisted Research (from Thomson Reuters). While the tools show a substantial improvement over general-purpose AI models like GPT-4, hallucinations are still alarmingly present. Lexis+ AI produced incorrect information more than 17% of the time, and Westlaw AI-Assisted Research hallucinated more than 34% of the time. These are terrifying statistics for lawyers, who face an asymmetric downside for error. While the technology continues to improve – OpenAI’s o1, for example, represents a significant step beyond GPT-4 – it is hard for attorneys to embrace legal research and review tools until accuracy improves further.
One potential remedy to the accuracy concern is an iterative roll-out of legal AI solutions. Margaret Hagan, Director of the Legal Design Lab at Stanford, has a useful framework for getting AI solutions into the real world quickly without compromising trust or accuracy. It’s a phased roll-out that begins with testing in university settings: can your solution match or beat the best available human on both quality and efficiency? If proven in these settings, founders should conduct controlled pilots with humans firmly in the loop, stress-testing solutions in real-world scenarios while actively monitoring the tools to diagnose problems. By iterating constantly before the product ever reaches full-scale release, founders can build the trust with attorneys that is critical to broader adoption.
Client pressure and business model shifts are insufficient today to create widespread adoption of legal AI; competitive tension currently drives adoption
Beyond improved data reliability, you may wonder what it will take for lawyers to adopt legal AI applications. The two most visible hypotheses today are client pressure on attorneys to act more efficiently and increasing movement toward contingency / flat fee models.
Some industry observers argue that client pressure will be the necessary spark to change the business model and encourage attorneys to adopt AI. Back in February, a group of general counsels from companies including Ford and Microsoft collaborated to accelerate AI adoption in corporate legal departments. Anecdotally, Ford’s GC also reached out to external legal partners and asked them how, rather than if, they use AI to improve workflows. There’s a fair bit of marketing going on here, with large corporations wanting to appear on the cutting edge of technology across their organizations. Yet the thought remains that as corporations adopt AI internally, they will have higher expectations for attorney efficiency.
Proponents of this theory find inspiration in the example of Carta, the cap table management software adopted by startups across the globe. By selling to startups, Carta indirectly incentivized startup attorneys to adopt the software as well: clients expected their lawyers to verify and maintain cap tables digitally and file their 83(b)s online. While Carta offers a good benchmark against which to compare legal AI adoption, we lack a critical mass of client-driven momentum today. Companies like Microsoft and Ford touting legal AI is a promising start, but ultimately just that – a start. For every Ford, there are dozens of corporations that care little about the tech savviness of their attorneys. They judge lawyers and their firms based on the results they produce, not the efficiency with which they achieve those results.
Wilson Sonsini’s recent decision to adopt a flat-rate corporate services offering for startup legal work has buoyed industry observers’ hopes that changes to the business model are on the horizon. Yet the flat fee applies only to startup legal work – keep me honest when Wilson Sonsini announces its next IPO mandate – and the exception does not embody the norm. Wilson Sonsini has a brand incentive to be innovative given its history of supporting companies in Silicon Valley, and it is only being innovative to a point. The limits imposed on its contingency / flat fee model seem to reinforce the idea that alternative billing practices work only in highly specific circumstances, such as simple projects or standard-fare, ongoing coverage. The broader industry lacks consensus here, and it is unclear what forces will succeed in generating agreement.
A better catalyst today for generating enthusiasm for legal AI is competitive tension. While clients aren’t necessarily compelling firms to push the envelope technologically, law firms are highly sensitive to falling behind their peers. By peers, I mean direct competitors for the deals that produce large fees and enable firms to invest in things like legal AI; while Wilson Sonsini’s startup advisory practice is forward-thinking, it doesn’t motivate change within the corporate services or litigation practices of Cravath, Latham or Davis Polk. These firms compete for the same deals and the same resources, namely junior talent. To get firms to invest heavily in legal AI, tell them their competitors already do. Better yet, have competitors publish articles in Law360 about revolutionizing the associate experience through AI.
Right on cue, Latham & Watkins unveiled an “AI Academy” on October 31, including training sessions and other resources to help lawyers stay ahead of the latest technologies. While these initiatives are meant for attorneys of all levels, the program kicked off with a two-day, in-person training event for junior lawyers, from first-year associates to fourth-years. Though it carries a fair dose of marketing spin, the program states unequivocally Latham’s interest and investment in legal AI. If competitors hadn’t taken legal AI seriously before, they surely did then. Whether through the appointment of CIOs, the creation of AI task forces or the piloting of legal AI solutions, competitors are investing heavily to avoid falling behind. They act not because most clients are pressuring them to or because the billable hour is under assault, but because they fear losing clients and top talent in an evolving legal services market.
Adoption of legal AI requires attorneys to navigate a complex ethical landscape with few definitive answers from regulatory bodies
In his 2023 Year-End Report on the Federal Judiciary, Chief Justice John Roberts argued that “AI obviously has great potential to dramatically increase access to key information […] but just as obviously, it risks invading privacy interests and dehumanizing the law.” For founders, investors and industry observers, it is not sufficient to treat hallucinations as the beginning and end of legal AI’s challenges. Critical ethical and legal questions are forthcoming, and they may define the circumstances in which an attorney considers using AI. Law firms, like other client-facing entities such as consulting firms and investment banks, pride themselves on proprietary work products. They resist notions of commoditization, charging premia for tailored services that apply decades of expertise to the unique needs of their clients. On Chief Justice Roberts’ concern about privacy, law firms are hesitant to share proprietary information – their own secret sauce – without sufficient guardrails in place to silo such information. Legal AI solutions are only as strong as the proprietary dataset with which they work, and a law firm likely won’t contribute to that dataset if it informs the decisions made by a competitor.
Moreover, firms would likely need consent before sharing clients’ proprietary information – protected under attorney-client privilege – with a third-party AI provider. This could lead to uncomfortable conversations between attorney and client. Attorneys would prefer to avoid perceptions that they’re cutting corners or dedicating fewer resources to a client – perceptions to be expected from clients who aren’t clamoring for legal AI. The Economist published survey results in June 2023 showing that 82% of lawyers believe genAI can be used for legal work, but only 51% think it should. Hesitancy stems not only from fear of hallucinations, but from questions around privacy safeguards and client demand.
Once clients consent to the use of their proprietary data for training purposes, attorneys must still navigate the legality of such usage. Risk-averse attorneys would prefer that disciplinary bodies and legal canons weigh in before they act with uncertain consequences. An ethics opinion from a state bar association would be a promising start in the U.S., but given the patchwork of legal regulations across the country, any opinion would be limited to its own jurisdiction. Over the summer, the European Union devised a preliminary framework for ethical AI use, but lawyers will look to domain-specific paradigms for guidance. Given the evolving regulatory landscape, I believe legal AI that integrates with both data observability and compliance software will be best equipped for adoption. Attorneys can then verify that (1) the technology adheres to the latest ethical and legal guidelines and (2) the data the technology produces are accurate and reliable.
Legal AI founders have an opportunity to democratize access to legal services
Beyond enhancing efficiency and reducing costs for corporations, AI can help provide access to and understanding of legal services for those disenfranchised or unable to afford meaningful representation. Legal AI solutions can simplify processes for those without legal counsel and improve the productivity of legal aid and pro bono organizations overwhelmed by demand for their limited resources. Most people are familiar with their Miranda rights: “You have the right to an attorney. If you cannot afford an attorney, one will be provided for you […].” Yet nearly two-thirds of people aged 18-34 misunderstand when those rights actually apply. If you face eviction, an injury on the job or a falling out with an employer, you have no legal right to an attorney. And those who need an attorney most often cannot access one – the Justice Gap Report produced by the Legal Services Corporation found that low-income Americans do not get any or enough legal help for 92% of their substantial civil legal problems. The system is broken, and there’s an opportunity to leverage AI to empower civil litigants and those hoping to provide them with counsel.
AI can help educate consumers about legal matters and unlock the potential of self-representation. AI-driven tools can empower consumers with legal knowledge, much as personal finance apps have educated users about financial management. With user-centric design and non-lawyers in mind, AI solutions can demystify the legal process, transform complex legalese into plain-language concepts and educate consumers about the timeline of filings and appearances.
In addition to empowering self-representation, these tools can supercharge the reach of pro bono and legal aid organizations. AI solutions can accelerate rote tasks and automate laborious intake processes, saving time and money for both plaintiffs and aid organizations. In New York, for example, Housing Court Answers teamed up with Josef Legal to simplify processes around tenant law. An AI copilot acts as a supervisor, helping answer tenant questions and promote legal literacy; custom-built GPTs aid in letter generation, helping plaintiffs determine what to include in their demands and how to frame it.
Conclusion
Legal AI offers immense potential to streamline workflows and expand access to justice, yet attorneys face asymmetric consequences for error. They require technology that is not only accurate but also integrates seamlessly into their workflows, while adhering to strict ethical standards. Legal AI founders must focus on building trust by delivering highly reliable solutions for well-defined use cases and addressing concerns around data security and compliance.
If you’re building in the space or want to discuss anything written, I’d love to hear from you!
You can reach Andrew on LinkedIn here. For more insights from our team, don’t forget to subscribe to Center City Ventures Insights on Substack — we’ll be back with more!