Best Practices in Pro Bono: Using AI to Further Access to Justice – Where Do We Start? Recap
On July 17th, we hosted a follow-up panel to our previous session on Best Practices in Pro Bono. The last event sparked many questions about leveraging AI to enhance access to justice and the considerations for its responsible use. How can we ensure equity and quality service delivery to clients? This follow-up expert panel delved into these questions and more, providing valuable insights into the ever-changing field of AI.
Our panelists included:
- Tiana Russell, Pro Bono Counsel, Crowell & Moring
- Michael Lukens, Executive Director, CAIR Coalition
- Jim Sandman, Vice-Chair of the ABA Task Force on Law and Artificial Intelligence, Law Professor, President Emeritus of the Legal Services Corporation, Past President of the D.C. Bar
The conversation was moderated by Jen Masi, Pro Bono Director, Children’s Law Center.
The use of generative AI calls for careful consideration of potential challenges, ethical implications, and resource disparities. Our panelists offered insights and recommendations for the responsible and effective use of AI in legal practice.
The panel began by addressing the considerations practicing attorneys and firms must weigh when using generative AI. Russell emphasized that while AI can significantly streamline internal processes, using it for client work raises critical confidentiality issues; transparency and client communication regarding AI’s role in legal services are essential for a firm. Lukens expanded on these considerations, explaining that his organization restricts staff from using AI with confidential client information because of the technology’s unpredictable nature. Sandman, while acknowledging these concerns, pushed back, offering four key lessons for weighing the implications of legal AI use. First, generative AI is not a monolithic tool but a set of diverse technologies with different uses and limitations. Second, the least risky use of AI, in his experience, is extractive AI, which summarizes and simplifies large bodies of information. Third, AI should be treated as an assistant rather than a replacement, one that requires a lawyer’s oversight. And finally, conversations about AI use should include experts in AI technology, not just advocates.
When it comes to investing in AI, our panelists had varied opinions. Lukens acknowledged the resource disparities among different organizations and suggested that free AI tools could be beneficial. However, Sandman and Russell both argued that paid AI services offer more reliable results and should be prioritized for legal research.
The panelists discussed a variety of use cases for generative AI in legal services. Russell suggested that AI could significantly benefit her work by assisting with tasks such as client intake, testimony summarization, and translation. Given the right input prompts, AI works efficiently and effectively; summarizing or translating large documents is especially valuable to organizations that take in many clients, and being able to send clients meeting rundowns or translated forms is a clear advantage. One audience question asked how AI could aid disaster relief efforts. Sandman responded by describing AI’s potential for streamlining intake, completing forms, and spotting trends, while stressing the importance of coordinating with the various organizations on the ground.
The conversation addressed many of the ethical concerns surrounding generative AI in legal services work. Lukens raised concerns about demographic biases and the risks of using AI in decision-making processes, such as determining bail eligibility, and noted the limitations of AI in legal research. He elaborated on the potential for bias in AI feedback loops, the importance of using the technology thoughtfully with appropriate oversight, and the need for ongoing working groups to address these ethical issues. The panelists also discussed the ethics of unequal standards of service: offering AI-assisted services to some clients while others receive direct attention from lawyers. The consensus was that AI assistance is better than none. AI can enhance a lawyer’s work and expand their outreach with efficiency; it is not necessarily a replacement for a lawyer’s judgment.
As the legal field evolves, skills training in AI becomes increasingly important, especially given the careful oversight AI requires from legal professionals. Our panelists predicted a significant shift in legal research practices due to AI. Sandman specifically argued for integrating AI training into the first-year law school curriculum. The panel collectively emphasized the need to train the next generation of advocates in the diverse range of AI tools available and underscored the growing importance of AI proficiency in legal services.
The following resources were referenced during or relevant to the panel:
- In Redo of Its Study, Stanford Finds Westlaw’s AI Hallucinates At Double the Rate of LexisNexis
- Visalaw.ai and AILA Unveil Gen, a Groundbreaking AI-Based Solution for Immigration Lawyers
- Removing Demographic Data Can Make AI Discrimination Worse
- AI translation is jeopardizing Afghan asylum claims
- Interpreting SAFE AI Task Force Guidance on AI and Interpreting Services
- Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap
- Implications of Large Language Models (LLMs) on the Unauthorized Practice of Law (UPL) and Access to Justice
- Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for response to people’s legal problem stories