Miniature Models as Paralegals: LexisNexis Develops AI Assistant Solutions

Transforming Legal Work: An Inside Look at LexisNexis and its AI Assistant Protégé

In the rapidly evolving world of legal technology, LexisNexis has made headlines with its creation of Protégé, an AI assistant designed specifically for the legal industry. Unlike general-purpose AI systems, Protégé is tailored to adapt to the unique workflows of law firms while leveraging the substantial capabilities of large language models (LLMs).

Meeting the Needs of Legal Professionals

LexisNexis set out with a clear goal for Protégé: to enhance the efficiency of lawyers, associates, and paralegals in drafting and proofing legal documents, work in which the accuracy of citations in complaints and briefs is paramount. However, the company recognized that a generic AI assistant would not suffice; understanding and adapting to a firm’s specific workflow was essential to success.

LexisNexis CTO Jeff Reihl explained in an interview with VentureBeat that the company takes a multi-model approach to AI, employing not just any LLM but the models best suited to the task at hand. “We use the best model for the specific use case. Some use cases might benefit from smaller language models (SLMs) or even distilled versions of LLMs for improved performance at a lower cost,” Reihl remarked.
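To make that strategy concrete, here is a minimal sketch of per-use-case model selection. The model identifiers and use-case labels are hypothetical placeholders for illustration; LexisNexis has not published its routing configuration.

```python
# Minimal sketch of "best model for the use case" selection.
# Model names and use-case labels are hypothetical, not LexisNexis's
# actual configuration.

MODEL_FOR_USE_CASE = {
    "query_assessment": "small-legal-slm",      # fast, low-cost SLM
    "citation_check":   "distilled-legal-llm",  # distilled model tuned for citations
    "brief_drafting":   "large-general-llm",    # full-size LLM for long-form drafting
}

def pick_model(use_case: str) -> str:
    """Return the model identifier configured for a given use case."""
    # Fall back to the largest model when a task is not explicitly mapped.
    return MODEL_FOR_USE_CASE.get(use_case, "large-general-llm")

if __name__ == "__main__":
    print(pick_model("citation_check"))  # -> distilled-legal-llm
```

The point of the sketch is simply that routing can be an explicit, inspectable mapping rather than a property of any single model.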

The Power of Model Distillation

In legal tech, the size of a model isn’t always indicative of its effectiveness. Many organizations, including LexisNexis, have started embracing smaller models through a method called distillation. In this process a larger LLM “teaches” a smaller model, allowing it to perform adequately on specific tasks while requiring fewer resources and yielding faster response times. For Protégé, these smaller models play a crucial role in lighter-weight functions such as chatbot responses and basic code completion.
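As a rough illustration of distillation (not LexisNexis’s training code), the sketch below shows the standard soft-target loss in which a student model is trained to match a teacher model’s softened output distribution. It assumes PyTorch and uses random logits as stand-ins for real model outputs.

```python
# Sketch of knowledge distillation: a larger "teacher" model's output
# distribution supervises a smaller "student" model. Illustrative only,
# not LexisNexis's training code. Requires PyTorch.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target loss: the student matches the teacher's softened distribution."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between student and teacher, scaled by T^2 as is standard.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Toy usage: random logits stand in for real model outputs.
teacher_logits = torch.randn(4, 32000)                      # batch of 4, 32k-token vocab
student_logits = torch.randn(4, 32000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```

In practice the student is typically also trained on the task’s own labels, with the distillation term forming just one part of the overall loss.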

The Evolution of AI in Legal Research

LexisNexis isn’t new to AI; it has been employing different forms of artificial intelligence for years, primarily in natural language processing, deep learning, and machine learning. However, the release of OpenAI’s ChatGPT in November 2022 revolutionized their approach. The burgeoning interest in generative and conversational AI capabilities prompted LexisNexis to rethink how they could integrate advanced AI into their tools, thus giving birth to Protégé.

A Multi-Model Approach

Protégé’s architecture is dependent on a sophisticated model-routing system, which allows the platform to switch between different AI models based on the task being handled. Reihl elaborated on this strategy: “We break down whatever task is being performed into individual components and identify the best large language model to support that component.” Different models, such as Claude from Anthropic and models from OpenAI and Mistral, are employed depending on the context and complexity of the query.

For instance, when a user interacts with Protégé, the system may initially use a fine-tuned Mistral model to assess the query before determining the relevant task model. This might involve switching to another model to generate queries for the search engine or to summarize results efficiently.
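A highly simplified sketch of that flow follows: a small “assessment” step labels the query, then a task-specific handler takes over. The task labels, model roles, and canned outputs are hypothetical; the real Protégé pipeline has not been published.

```python
# Highly simplified sketch of the described routing flow: classify the
# request with a small model, then dispatch to a task-specific model.
# Labels and outputs here are hypothetical illustrations only.

def assess_query(user_query: str) -> str:
    """Stand-in for a small fine-tuned model that classifies the request."""
    if "summar" in user_query.lower():
        return "summarization"
    return "search_query_generation"

def run_task(task: str, user_query: str) -> str:
    """Hand the request to the model chosen for the classified task."""
    handlers = {
        "summarization": lambda q: f"[summarization model] condensed answer for: {q}",
        "search_query_generation": lambda q: f"[search model] refined query for: {q}",
    }
    return handlers[task](user_query)

if __name__ == "__main__":
    query = "Summarize the holding in this appellate opinion."
    print(run_task(assess_query(query), query))
```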

The Need for Specialized Models

Despite the advancements in AI, LexisNexis has decided to stick primarily with fine-tuned language models that are tailored specifically for legal use cases. For example, they choose not to rely on heavyweight models for preliminary assessments; instead, they leverage smaller, customized models to keep response times short and efficiency high.

This careful consideration ensures that when a query arises regarding a specific legal case, the Protégé assistant begins by engaging a specialized Mistral model designed for query evaluation. Following this, it selects the most suitable model for completing the task, whether that be generating new queries or summarizing complex documentation.

The AI Legal Suite and Its Comprehensive Features

Protégé operates within the Lexis+ AI platform, functioning as a hub for legal AI services tailored to law firms. Its capabilities are extensive, supporting tasks that typically fall to paralegals or junior associates: drafting legal briefs, suggesting workflow steps, refining search queries, suggesting deposition questions, and linking citations accurately. It also generates timelines and summarizes intricate legal documents to make them easier to understand.

Reihl sees Protégé as just the first step in a journey toward personalizing AI for various legal specialties. “Think about the different types of lawyers: mergers and acquisitions, litigators, real estate,” he explained. The aim is to develop a personalized assistant that caters to each legal professional’s individual practice and needs, rather than applying a one-size-fits-all philosophy.

Competitive Landscape

As Protégé continues to evolve, it faces robust competition from other legal technology platforms. Notably, Thomson Reuters has integrated OpenAI’s o1-mini model into its CoCounsel legal assistant, while Harvey, another notable player that LexisNexis has invested in, has launched its own AI legal assistant. The market for AI in legal research is heating up, and Protégé is poised to carve out its own niche.

Through dedicated efforts and a commitment to the specific demands of legal professionals, LexisNexis is not only enhancing productivity but also reshaping the landscape of legal research and technology. With plans to evaluate additional models, including Google’s Gemini family and OpenAI’s advanced reasoning models, the future of Protégé, and of legal AI as a whole, appears bright.
