What really is artificial intelligence (AI)? Examples of its use in the legal context.

Just a few years ago, AI was a term rarely used in everyday conversation, and few expected it to have such an impact on our daily lives. Today, most of us have certainly heard of the term and, in fact, use some sort of AI on a daily basis (e.g. the spell checker when sending a text message). Moreover, AI is just beginning to demonstrate its practicability within the legal industry, and given how ‘fresh’ such initiatives are, many still do not understand what AI actually is or how law firms use (or might use) such technology.

For this reason, below I will cover the fundamental basics of what AI is and what its building blocks are, and I will then talk about what is possible with the use of AI within the legal environment.

WHAT IS AI?

AI is a very broad term that covers many computer science topics and methods. The term was coined in 1956 by John McCarthy and refers to machines simulating and mimicking some of the cognitive functions of the human brain; that is, learning, thinking, remembering, problem-solving, and decision-making. However, AI cannot, for example, mimic emotions or creativity (e.g. deciding in a way that is unique or opposite to the way it was programmed).

Some of the most common and everyday uses of AI are:

  • Translators
  • Map navigation
  • Text autocorrection on our phones
  • Word prediction on our phones
  • Voice assistants (e.g. Siri or Alexa)
  • Email spam filters
  • Online customer service bots
  • Bot enemies in video games
  • Social networks and phone albums detecting faces in photos

… and hundreds more!

How does AI learn – machine learning and deep learning

It is important to note that AI is a general term, which refers to any technique enabling computers to mimic the cognitive functions of the human brain. Nowadays, most AI initiatives deploy some kind of machine learning (ML), a sub-field of AI focusing strictly on the ‘learning’ aspect of machines. Such ‘knowledge’ gathered by the models is then applied to all kinds of problem-solving.

To understand how ML works, we need to consider the way machines actually learn. If we were to develop an ML model, we would first have to gather large datasets for our machine (or model) to learn from and label the relevant bits of data we want it to recognise. For example, suppose we are training our model to recognise law cases: we provide it with text examples and instruct the model, through algorithms, to recognise features of legal cases (e.g. algorithmic instructions such as “contains a year number inside round or square brackets” or “contains ‘v’ between two surnames”). This is also known as supervised learning, and the model will then attempt to follow such rules and find legal cases within the given text.
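As a rough illustration of such hand-written features (my own toy example in Python, not a production system), the two ‘rules’ mentioned above could look like this:

```python
# A minimal sketch of the two hand-written features described above; the regular
# expressions and example sentences are illustrative only.
import re

def has_bracketed_year(text):
    # e.g. "[1932]" or "(1990)"
    return re.search(r"[\[\(](1[89]\d{2}|20\d{2})[\]\)]", text) is not None

def has_v_between_names(text):
    # e.g. "Donoghue v Stevenson"
    return re.search(r"\b[A-Z][a-z]+ v\.? [A-Z][a-z]+\b", text) is not None

def looks_like_case_citation(text):
    # A supervised model would learn how to weigh such features from labelled
    # examples; here we simply require both of them to be present.
    return has_bracketed_year(text) and has_v_between_names(text)

print(looks_like_case_citation("Donoghue v Stevenson [1932] AC 562"))   # True
print(looks_like_case_citation("The lease was signed in March 2020."))  # False
```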

Deep learning (DL), on the other hand, does not require us to define the features of such data manually; the model will try to understand and distinguish the rules of the cases itself (unsupervised learning). We simply have to provide our model with data, and it will then look at the text as a whole and try to make sense of the syntax, eventually recognising that cases often contain ‘v’ between two names and round or square brackets around case years, or that cases within judgements often follow the preposition ‘in’. There are thousands of rules that would otherwise have to be defined, and DL is therefore preferred as it eliminates the need to define the rules ourselves. However, DL requires a much larger amount of data, usually several thousand examples at the very least.
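A minimal sketch of what this looks like in code (assuming TensorFlow/Keras and a handful of made-up sentences; a real model would need thousands of labelled examples): instead of hand-written rules, the network is only given raw sentences and labels and must work out the distinguishing features itself.

```python
# A toy deep-learning text classifier: no hand-written rules, only raw text and labels.
import tensorflow as tf

texts = [
    "Donoghue v Stevenson [1932] AC 562 was applied.",
    "The equipment must be returned by 1 May 2020.",
    "In Caparo v Dickman [1990] 2 AC 605 the court held otherwise.",
    "Payment is due within thirty days of the invoice date.",
]
labels = [1, 0, 1, 0]  # 1 = the sentence cites a case, 0 = it does not

# Turn raw text into token ids; the embedding layer then learns its own features.
vectorize = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=20)
vectorize.adapt(texts)

inputs = tf.keras.Input(shape=(1,), dtype=tf.string)
x = vectorize(inputs)
x = tf.keras.layers.Embedding(input_dim=1000, output_dim=16)(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant([[t] for t in texts]), tf.constant(labels, dtype=tf.float32),
          epochs=5, verbose=0)
```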

Some also classify AI as either ‘weak’ or ‘strong’, where weak AI (or narrow AI) refers to machines mimicking limited parts of the human brain, whereas strong AI is closer to mimicking human intelligence as a whole (e.g. applying learned knowledge to tasks other than those it was designed for in the first place).

Applying ML and DL to languages

In the legal context, both ML and DL are mostly deployed to understand legal language. As explained in this article, computers do not natively understand natural languages (e.g. French or Mandarin), so in order for our computer to figure out the meanings of words or sentences on its own, we need to teach it to read and understand natural languages. This field of machine learning, in which computers are trained to understand natural languages, is known as Natural Language Processing (NLP).

On its own, a computer does not know that the word ‘Apple’ is a proper noun or the name of a company, let alone a type of fruit; it simply sees the word as a sequence of five characters. For the computer to distinguish the meanings of words, programmers have to provide the model with thousands of pieces of text in which they annotate (or ‘highlight’) nouns, verbs, fruit, brands, legal cases (or whatever entities they wish the computer to recognise). From such material, the model will then learn and apply its knowledge; the more material is provided, the more accurate the model (as of now, accuracy is never 100%, and sometimes even 90% accuracy is considered ‘exceptional’).
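To make this concrete, here is a minimal sketch of what such annotated material can look like, using the open-source spaCy library; the ‘CASE’ label, the example sentences and the character offsets are my own illustration, not a real training set:

```python
# Annotated training material: each example is a sentence plus the character offsets
# of the entities we want the model to learn (here, a made-up 'CASE' label).
import spacy
from spacy.training import Example

TRAIN_DATA = [
    ("The court applied Donoghue v Stevenson [1932] AC 562.",
     {"entities": [(18, 52, "CASE")]}),
    ("No authority was cited for that proposition.",
     {"entities": []}),
]

nlp = spacy.blank("en")            # start from an empty English pipeline
ner = nlp.add_pipe("ner")          # add a named-entity recogniser
ner.add_label("CASE")

examples = [Example.from_dict(nlp.make_doc(text), ann) for text, ann in TRAIN_DATA]
nlp.initialize(lambda: examples)   # the model initialises itself from the annotations
losses = nlp.update(examples)      # one training step; more material means better accuracy
print(losses)
```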

Another example could be translators. Modern translators (e.g. Google Translate) incorporate some form of ML or DL to translate text not only word by word but also by the meanings of the words and sentences within it. When translating sentences word by word, no AI is required: the computer simply refers to its own dictionary to find the corresponding word in the other language. When translating the meanings of words and whole sentences, some sort of NLP is inevitably required, as computers would otherwise never be able to translate meaning beyond traditional dictionary-style linking.
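The ‘no AI required’ word-by-word approach is easy to picture as a plain dictionary lookup; the tiny word list below is made up purely for illustration:

```python
# Word-by-word translation with a plain dictionary: no context, no meaning.
EN_TO_DE = {"the": "der", "apple": "Apfel", "is": "ist", "green": "grün"}

def word_by_word(sentence):
    return " ".join(EN_TO_DE.get(word.lower(), word) for word in sentence.split())

print(word_by_word("The apple is green"))      # "der Apfel ist grün" – acceptable here...
print(word_by_word("Apple is suing Samsung"))  # ...but the company name still becomes "Apfel"
```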

In the translation above, we can see that the translator has disregarded the option of literally translating ‘apple’ to the German ‘Apfel’ and treated it as a company name (it has also capitalised both ‘Samsung’ and ‘Apple’). To do so, the data scientists and programmers at Google had to ‘feed’ their model millions of text examples while directing the model as to the meanings of words and their functions within sentences. Nowadays, such knowledge is usually bundled and available online, so if we were to develop a new translator, we would work with models that have already been trained. The same principles of teaching a model apply in other fields of ML and DL, such as face recognition or self-driving cars: the models teach themselves from the initial training dataset and evaluate any future guesses on the basis of what was learned previously.
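For instance, a pre-trained translation model can be reused in a few lines; the sketch below assumes the Hugging Face transformers library and the publicly available Helsinki-NLP English-to-German model (not the system Google actually uses), and the exact output will depend on the model:

```python
# Reusing an already-trained translation model instead of training one from scratch.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Apple sued Samsung over the design of its phones.")
print(result[0]["translation_text"])
# A context-aware model should keep 'Apple' as a company name rather than
# translating it as the fruit, although this depends on the model and the sentence.
```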

How does the AI model actually make decisions?

AI models, in most cases, apply some sort of ‘loss function’ to evaluate how accurately they perceive and model the given data (in simpler terms: “how sure is the AI model in making the given prediction or decision”).

An example would be the ‘hinge loss’ variant pictured above (do not run away! No complicated math is involved!). Think of it as a simplified function used in a face-recognition model, where red dots mean ‘face’ and blue dots mean ‘not-face’. The bold line (0), along with the dashed lines (+1 and -1), forms a margin between the two options. The bold line (0) would, therefore, mean that the model does not know whether to classify a picture as a face or not. Since we want our model to be as certain in its evaluations as possible, anything too close to ‘does not know’ (the bold line) is not optimal. So if our model is only ‘+0.26’ sure that the picture we gave it is a face, its evaluation will not be classified as ‘face’, but rather as ‘not sure’ (or whatever we instruct it to do in such cases). Needless to say, it gets much more complicated than this, and the above is only a simplified illustration of what happens behind the decision-making process of an ML model.
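In code, the hinge loss is a one-line formula; the scores below are my own toy numbers (+1 meaning ‘face’), just to show how predictions inside the margin are penalised even when they fall on the correct side:

```python
# Hinge loss: zero penalty for confident, correct scores; growing penalty as the
# score drifts into the margin or onto the wrong side of the boundary.
def hinge_loss(y_true, score):
    # y_true is +1 ('face') or -1 ('not-face'); score is the model's raw output
    return max(0.0, 1.0 - y_true * score)

for score in [2.3, 1.0, 0.26, -0.8]:
    print(f"score {score:+.2f} -> loss {hinge_loss(+1, score):.2f}")
# score +2.30 -> loss 0.00   (confidently a face)
# score +0.26 -> loss 0.74   (right side of the line, but too close to 'does not know')
# score -0.80 -> loss 1.80   (classified as 'not-face' although it is one)
```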

Consider the image above, where our model is evaluating which digit the handwritten image corresponds to. Image recognition is usually built on breaking down the pixels of an image: the model looks at each individual pixel and at the pixels around it. In the above image, the model would first find the relevant parts of the digit (i.e. all red pixels on a white background) and mark them, through the use of a so-called ‘activation function’, with values ranging from 0 to 1, where 1 is a ‘relevant part of the digit’ and 0 a ‘not relevant’ part which can be ignored. It would then assess the grouping of such pixels (or ‘features’): in this case two short horizontal lines and one longer curved vertical line on the right. After a much more complex series of evaluations and neural connections (loosely inspired by the human neural network), it would associate such a pattern of features with the digit ‘7’. Such a prediction would be accompanied by some kind of function and, hence, a number (say, between 0 and 1) telling us how certain our model is about that prediction, based on the numerous mathematical functions and neural linkages.
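A minimal sketch of such a digit classifier, using Keras and the standard MNIST dataset of handwritten digits (a generic example, not the exact pipeline described above):

```python
# Pixels in, certainty per digit out: the softmax layer returns, for each digit 0–9,
# a number between 0 and 1 telling us how certain the model is about that prediction.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to the 0–1 range

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 individual pixels
    tf.keras.layers.Dense(128, activation="relu"),    # learned groupings of pixels ('features')
    tf.keras.layers.Dense(10, activation="softmax"),  # certainty for each digit 0–9
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

probabilities = model.predict(x_test[:1])
print(probabilities.argmax(), probabilities.max())    # the predicted digit and the model's certainty
```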

Furthermore, below are two examples of a model I have been working on for some time. The aim of the model is to quickly recognise cases within a given legal text and then decide whether the cases found were rejected, discussed, applied or affirmed. Compare the first image, where the model was trained on a very small amount of data, with the second, trained on one hundred case judgements:

1) The model was trained on a smaller dataset and got confused by parentheses (since it thought that parentheses normally contain a case year).

2) This model was trained on about one hundred case judgements. It recognised cases much better, but still confused case citations and parentheses in some cases. Also, notice the ‘losses’ on the right: the closer the model is to ‘0’, the more ‘sure’ it is in making that particular classification (this is a different function from the one shown above). Being sure, however, does not at all mean that it is right; it just means it is not as confused as the previous model with regard to what it learned.

Lastly, to make sense of the terms used above: DL is simply a sub-field of ML, ML is a sub-field of AI, and NLP is a sub-field of all three.

How do law firms use AI?

Although the use of AI within the sector is still emerging from its incubation stage, many firms have adopted some kind of ML or DL within their practice in recent years. Most of the initiatives are focused on understanding legal language to assist lawyers with low-level and time-demanding tasks (e.g. tasks connected with the review of large volumes of documents). Consider the following examples:

A) Document drafting, sorting and review

Machines are inherently faster at sorting through large datasets compared to humans. Many AI initiatives deployed by law firms are, for example, able to:

  • create a contract template according to our input
  • convert documents to machine-readable text (text the machine can process)
  • reveal documents similar to the one in question
  • create a summary of each document
  • model the most frequent topics and keywords in each document
  • show, in real time, changes made to the documents by other colleagues
  • spot potential issues relating to clauses we are currently reviewing

To illustrate the scenario more closely, imagine being assigned the task of drafting a ‘Construction Equipment Lease Contract’.

First, we would use one AI assistant to draft a simple template of the contract in question. According to a simple ‘questionnaire’ (e.g. the title, the contractual parties, dates and keywords), the AI software would look over past contracts sharing similar values and create a template accordingly, inserting all clauses that might be relevant. Once such clauses are edited, or more are added, the AI system would quickly read over the document and group clauses with similar patterns and topics (e.g. time- and delay-related provisions, use-of-equipment provisions, or warranties). By using NLP (as described above), it would look at the patterns it observes among such groupings and find other finalised contracts similar to the one in question. It would then highlight all the differences between our contract and the ones most similar to it. This way, consistency is always ensured, and missing clauses can be suggested in a matter of seconds. Whenever lawyers spot an issue within a contract (either a past one or the one currently being drafted), they can mark it for the AI reviewer to remember and suggest in the relevant circumstances during any future drafting.
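A rough sketch of the two core steps (my own simplified illustration, not Kira Systems’ or any vendor’s actual method), using scikit-learn to group clauses by topic and to find the existing clause most similar to a newly drafted one:

```python
# Group clauses by similar wording/topics, then find the closest match to a new clause.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

clauses = [
    "The Lessee shall return the equipment within 14 days of termination.",
    "Delay in delivery entitles the Lessor to liquidated damages.",
    "The equipment shall only be used on the designated construction site.",
    "The Lessor warrants that the equipment is free from defects.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(clauses)

# Step 1: group clauses with similar patterns and topics.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(groups)

# Step 2: find which existing clause most resembles a newly drafted one.
new_clause = vectorizer.transform(["The machinery must be kept on the agreed site at all times."])
print(clauses[cosine_similarity(new_clause, X).argmax()])
```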

One of the most widely used AI systems in the UK that follows the above method (in a much more detailed process) is Kira Systems, a Canada-based company whose software was deployed by 8 of the top 10 UK law firms (as of 2018).

B) Management automation

Over one-third of law firms in London have already deployed AI for the purpose of time and billing management (CBRE 2018 London report). Software solutions such as Clio not only allow for cloud-based working (i.e. access to work from anywhere), but also include a ‘Legal Billing’ feature. The aim of legal billing management is to quickly generate invoices for each client while considering past invoices and the types of work and clients they relate to. The software also tracks the time devoted to a specific project (client) to quickly calculate the total billable hours.
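The billing arithmetic itself is simple to picture; the sketch below is a toy illustration in plain Python (not Clio’s actual software or API), totalling recorded time entries into billable hours and an invoice amount:

```python
# Toy billing management: total the time recorded for a client and price it up.
from datetime import timedelta

time_entries = [
    ("Contract review", timedelta(hours=2, minutes=30)),
    ("Client call", timedelta(minutes=45)),
    ("Drafting amendments", timedelta(hours=1, minutes=15)),
]
hourly_rate = 250  # illustrative rate

billable_hours = sum(entry.total_seconds() for _, entry in time_entries) / 3600
print(f"Billable hours: {billable_hours:.2f}")                  # 4.50
print(f"Invoice total: {billable_hours * hourly_rate:,.2f}")    # 1,125.00
```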

C) Due diligence

Traditionally, the process of due diligence is a time-demanding manual task involving the review of large sets of unorganised (or unstructured) data and documents. Consequently, it is often difficult to meet tight deadlines due to the lengthy nature of the task, and the whole process is inevitably prone to human error. Multiple AI initiatives work with large volumes of documents to cluster them according to key themes while highlighting any missing or wrongly filed documents. Some AI initiatives also highlight areas which show potential signs of risk requiring closer analysis. However, given the significance of accuracy in due diligence, AI still has to be supervised and assisted by human lawyers. Nonetheless, it still speeds up the process dramatically, especially since documents are classified and reviewed within a matter of seconds.
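At its simplest, the ‘flag potential risk for closer review’ idea can be pictured as a keyword scan; the phrases below are my own examples, and a real due-diligence system would be far more sophisticated:

```python
# Flag documents containing risk-related phrases so a human lawyer reviews them first.
RISK_PHRASES = ["change of control", "indemnify", "unlimited liability", "termination for convenience"]

def flag_for_review(document_text):
    text = document_text.lower()
    return [phrase for phrase in RISK_PHRASES if phrase in text]

doc = "The Supplier shall indemnify the Buyer against all losses arising from the works."
print(flag_for_review(doc))   # ['indemnify'] – marked for closer human analysis
```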

To conclude, this article focused on the fundamental definitions of AI and the distinctions between ML, DL and NLP. It also showed that many of the AI initiatives within the legal sector are built on NLP, with the aim of understanding natural language in the legal context. It is also clear that the sector is still in its incubation stage when it comes to the use of advanced AI: most existing solutions apply to low-level tasks rather than complex or important casework, and they still need to be supervised by human lawyers. Only time will tell how quickly the sector will transform in the upcoming decades, and whether the methods described above will guide it or whether more ‘radical’ technologies will be deployed by law firms.
