In the rapidly evolving landscape of generative AI (GenAI), one question has emerged: Is legal prompt engineering still relevant? As advancements in large language models (LLMs) and AI-driven solutions continue to reshape the legal industry, examining prompt engineering's role and enduring significance is crucial.
Having a conversation with a machine
As technology has evolved, so has the way we interact with it. If you have ever used Apple’s Siri or Amazon’s Alexa, you have likely engaged in the basics of prompt engineering.
Prompt engineering is the process of designing and refining the inputs given to tools like ChatGPT and Claude to effectively guide LLMs in generating desired outputs. It involves carefully constructing questions, instructions, or context that help the tools understand and respond to specific tasks or queries. Prompt engineering is therefore akin to user interface design, with the chatbot in the role of the user.
Prompt engineering steers a model's behaviour through the input alone, as opposed to other methods of LLM behaviour control, such as fine-tuning. A well-crafted prompt can even outperform a fine-tuned specialist model.
When starting to prompt, it helps to think of the LLM as Anthropic describes Claude: "as a brilliant but very new employee (with amnesia) who needs explicit instructions." To get the best out of a model, as Harrison Doveas explains, "context is everything". A good prompt should include:
- A persona — Assign a specific role or identity to the AI, such as “Act as an experienced intellectual property lawyer specialising in patent law.”
- Context — Provide relevant background information to frame the task, such as, “You are advising a tech startup that has developed a new AI algorithm for image recognition.”
- The task — Clearly state the specific action or output you want from the AI, such as “Generate a summary of the key steps involved in patenting this algorithm.”
- An example — Offer a sample or template to guide the AI's response, like “Your summary should follow this structure: 1) Invention description, 2) Novelty assessment, 3) Non-obviousness evaluation, 4) Practical application.”
- The tone — Specify the desired style or mood for the response, such as “Maintain a professional and authoritative tone throughout the summary, suitable for presentation to the startup's board of directors.”
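The five components above can be sketched in code. This is an illustrative assembly only; the component text is taken from the examples above, and any chat-style LLM interface would accept the resulting string as a single message.

```python
# Illustrative sketch: assembling the five prompt components (persona, context,
# task, example, tone) into one well-structured prompt.

PERSONA = "Act as an experienced intellectual property lawyer specialising in patent law."
CONTEXT = ("You are advising a tech startup that has developed a new AI "
           "algorithm for image recognition.")
TASK = "Generate a summary of the key steps involved in patenting this algorithm."
EXAMPLE = ("Your summary should follow this structure: 1) Invention description, "
           "2) Novelty assessment, 3) Non-obviousness evaluation, "
           "4) Practical application.")
TONE = ("Maintain a professional and authoritative tone throughout the summary, "
        "suitable for presentation to the startup's board of directors.")

def build_prompt(persona: str, context: str, task: str, example: str, tone: str) -> str:
    """Join the five components, in order, separated by blank lines."""
    return "\n\n".join([persona, context, task, example, tone])

prompt = build_prompt(PERSONA, CONTEXT, TASK, EXAMPLE, TONE)
print(prompt)
```

The ordering matters less than the completeness: a model given all five components has far less room to guess at what is wanted.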
Embracing the Art and Science
While navigating the AI landscape, it is essential to embrace prompt engineering as both an art and a science. Effective prompts require a deep understanding of language, coupled with knowledge of AI models and their underlying architectures.
Although using natural language is a critical part of interacting with an LLM, it is intertwined with the technical aspects of AI models. The technical aspects include:
- Model architectures — crafting effective prompts requires understanding how the underlying transformer architectures work and how outputs are generated.
- Training data and tokenisation — LLMs are trained on vast datasets, tokenising input data into smaller chunks (tokens) for processing. The choice of tokenisation can influence how a model interprets a prompt. We have all asked ChatGPT, "How many r's are in the word strawberry?" — models often miscount because they process tokens, not individual letters.
- Model parameters — LLMs contain parameters that are tuned during the training process; their values determine how a model responds to a prompt.
- Temperature and top-k sampling — when responding to a prompt, a model can be configured to produce more or less random output. A higher temperature yields more varied (but potentially less accurate) responses, while top-k sampling restricts each word choice to the k most likely candidates.
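A minimal sketch of how temperature and top-k sampling interact, using invented scores over candidate next words (real models produce these scores internally over their whole vocabulary):

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0,
                      top_k: int = 3) -> str:
    """Pick the next token from raw model scores (logits).

    - temperature rescales the scores: values below 1.0 sharpen the
      distribution (more deterministic), values above 1.0 flatten it
      (more varied output).
    - top_k keeps only the k highest-scoring tokens before sampling.
    """
    # Keep the k most likely candidates.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Apply temperature, then softmax to turn scores into probabilities.
    scaled = [score / temperature for _, score in top]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token according to those probabilities.
    return random.choices([tok for tok, _ in top], weights=probs, k=1)[0]

# Toy scores for the word after "The patent was ..." (invented for illustration).
logits = {"granted": 4.0, "refused": 2.0, "filed": 1.5, "purple": -3.0}
print(sample_next_token(logits, temperature=0.2))  # near-greedy: almost always "granted"
print(sample_next_token(logits, temperature=5.0))  # flatter: more varied picks
```

At a very low temperature the model behaves almost deterministically; at a high temperature even weak candidates get a realistic chance, which is where variety (and occasional nonsense) comes from.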
As models like GPT-4o, Claude 3.5 Sonnet and beyond demonstrate enhanced contextual understanding and adaptive capabilities, prompt engineering has shifted from mere instruction to a more nuanced and collaborative approach. Users can now engage in a dynamic dialogue with AI, leveraging prompts to unlock insights, automate tasks, and augment their expertise.
What about legal prompt engineering?
Legal prompt engineering involves crafting and optimising prompts for AI assistants to address legal queries and tasks effectively. It requires testing various terminology, phrases, and instructions to determine what works best. By crafting well-structured prompts, legal professionals can guide AI models to produce contextually relevant and precise outputs.
While the question 'Should lawyers learn prompt engineering?' emerged early on, the consensus was that being a pro prompter is not a prerequisite for using GenAI, but that knowing the basics gives lawyers an advantage.
The use of GenAI in legal practice must be grounded in pre-existing professional conduct rules. Effective prompt engineering enables lawyers to harness the power of GenAI while maintaining the rigour and ethical standards demanded by the profession. Therefore, prompt engineering serves as a bridge between AI's capabilities and the legal domain’s requirements, enabling responsible leverage of the technology.
Building upon the prompting guidance above, Juro has provided additional takeaways for lawyers when prompting an LLM:
- Clearly define your objective. Define precisely what you want to achieve. Whether it's legal research, document drafting, or case analysis, clearly articulating your objective helps tailor the outputs appropriately.
- Use precise language. Be specific in the prompt and avoid ambiguous language. Use legal terminology and language appropriate to the context of your task.
- Provide sufficient context. Include the legal issue, jurisdiction, relevant laws or case details to ensure appropriate output.
- Specify the format. Whether a memo, brief or clause, mention it in the prompt or provide an example.
- Highlight key points. Explicitly mention the specific points or arguments you want covered. This is especially important in longer prompts, where it helps the LLM stay focused and ensures those aspects are addressed in sufficient detail.
- Practice, practice, practice. Prompting is iterative by nature. If the output does not meet expectations, refine the prompt, or break the problem into steps and try Chain-of-Thought prompting.
- Learn how to use AI responsibly. Familiarise yourself with the risks (e.g., confidentiality issues and hallucinations), and take appropriate measures when engaging with an LLM (e.g., do not put personal data in public AI platforms).
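The Chain-of-Thought technique mentioned above can be sketched as follows. The legal scenario and step labels here are invented for illustration, not a template from any particular provider; the point is that the prompt asks the model to reason through named steps before concluding.

```python
# Illustrative Chain-of-Thought prompt: rather than asking for a conclusion
# directly, the prompt spells out intermediate reasoning steps.

steps = [
    "1. Identify the governing jurisdiction and the relevant statute or case law.",
    "2. State the legal test that applies to the issue.",
    "3. Apply each element of the test to the facts provided.",
    "4. Note any counter-arguments or missing facts.",
    "5. Only then, give your conclusion and recommended next steps.",
]

cot_prompt = (
    "You are assisting with a contract-law query. Before answering, "
    "reason step by step:\n"
    + "\n".join(steps)
    + "\n\nFacts: A supplier delivered goods two weeks late under a contract "
    "with a 'time is of the essence' clause. Can the buyer terminate?"
)

print(cot_prompt)
```

Forcing the intermediate steps into the open makes the model's reasoning easier to check, which matters when the output must be verified against professional standards.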
The Future of Prompt Engineering
As we look ahead, prompt engineering's relevance is unlikely to diminish; rather, it will evolve. The rise of adaptive prompting techniques, multimodal prompts, ethical prompting frameworks and legal syllogism prompting indicates a future where prompt engineering becomes an even more integral part of GenAI implementation.
Prompt engineering will likely remain relevant and indispensable in 2024 and beyond. The future of legal practice lies not in choosing between human expertise and AI capabilities but in finding the optimal synergy between the two. Prompt engineering is the critical interface between human expertise and machine capability. By mastering the art and science of crafting precise prompts, we can unlock the full potential of GenAI while upholding the highest standards of legal compliance and ethical practice.
About the Author
Mitchell Adams is a senior lecturer at Swinburne Law School and a CLI Distinguished Fellow (Emerging Technologies) with the Centre for Legal Innovation (CLI). His specialisation lies at the intersection of law and technology, as an expert in intellectual property law, legal tech and legal design. Mitchell heads up the Legal Technology and Design Clinics at Swinburne Law School, where he creates the conditions for legal innovation through student and industry collaboration. As a CLI Fellow, Mitchell is working on developing education opportunities in prompt engineering for GenAI and a framework for the classification of legal GenAI.