Thursday, 15 November 2018

Introduction

In most proceedings, discovery is a necessary evil. In complex corporate matters it can involve a long, monotonous review process by armies of paralegals. And in one sense, things are only going to get worse as we create more and more ways to generate and share documents.

But all is not lost. There are now practical, tested and court-accepted methodologies that leverage technology to perform effective, large-scale reviews, identifying relevant documents on the basis of limited human review. This is known as ‘technology assisted review’ (TAR).

The problem of discovery

In Managing Justice: A Review of the Federal Civil Justice System, ALRC Report 89 (2000), the Australian Law Reform Commission (ALRC) noted that “in almost all studies of litigation, discovery is singled out as the procedure most open to abuse, the most costly and the most in need of court supervision and control”.1

More than a decade later, in 2011, the ALRC released its report on Managing Discovery,2 looking at discovery in the federal courts, following a 2009 report from the Access to Justice Taskforce that found further work was needed on “the high and often disproportionate cost of discovery.”

The Managing Discovery report noted that “discovery is often the single largest cost in any corporate litigation” yet “remains an important feature of common law cases.”

A significant part of that cost is being driven by the growth in electronic document storage. The 2011 report noted that “the sheer volume of data available today … tests the historical rationale of discovery as being to facilitate fact‑finding, save time and reduce expense.”

The problem is only getting worse

Almost eight years later, one thing is certain: the amount of information and the number of documents we store have increased, and the growth looks exponential.

If you think about it from an everyday business perspective, a lot of the information we store is duplicated or very similar. We write emails. We reply to emails. We forward those emails. We create documents, and then save multiple versions. We save them to our desktops and our file servers. We share them on SharePoint, Microsoft Teams or proprietary document management systems. We transfer them via instant messages. We upload them to Dropbox. And in five years we’ll have a hundred new ways to do these things.

All of this generates more hits, on more documents, that might be ‘relevant’ for discovery and that need to be reviewed for privilege.


Enter TAR

Hindsight is a wonderful thing. The 2011 ALRC report does not mention ‘technology assisted review’ or ‘computer assisted review’ once. It does refer to ‘automated searches’ and ‘predictive coding’, but those references were in the context of parties reaching agreement on groups of documents or conducting searches.

It’s been an eventful eight years. In that time we have seen TAR used, and accepted by numerous courts, to facilitate the review of very large document sets in a cost-effective way, while giving all parties known and verifiable levels of confidence that the review process is identifying relevant documents. From our own experience facilitating TAR reviews, we have seen law firms process millions of documents in a matter of days. A traditional non-TAR process, by comparison, would have taken a sizeable team months to complete.

What is TAR?

In short: human reviewers code the documents in a small sample set as relevant or not relevant. TAR then works by teaching a software program, or algorithm, the properties of those documents so that it can find relevant documents within the broader document set. This process of identifying responsive documents is not based on traditional keywords; instead, the algorithm compares the properties of each document (including text and, depending on the algorithm, metadata) for similarity.
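
To make that concrete, the following is a minimal sketch of the core idea in Python using the open-source scikit-learn library: train a simple text classifier on a small human-coded sample, then rank the unreviewed collection by predicted relevance. The documents, labels and settings shown are hypothetical placeholders; commercial TAR tools use their own, generally more sophisticated, algorithms.

# Minimal illustration of the core TAR idea: learn from a small
# human-reviewed sample, then rank the unreviewed collection.
# The documents and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small seed set reviewed by humans: 1 = relevant, 0 = not relevant
seed_docs = [
    "Project variation claim for delay costs on the pipeline contract",
    "Canteen menu for the Melbourne office, week of 3 March",
    "Email chain re defects in welding and insurer notification",
    "All-staff reminder about car park access cards",
]
seed_labels = [1, 0, 1, 0]

# The broader, unreviewed collection (in practice, millions of documents)
collection = [
    "Letter to insurer regarding policy response to weld failures",
    "Invitation to the end-of-year staff function",
]

# Convert document text into numeric features and train a classifier
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression()
model.fit(X_seed, seed_labels)

# Score every unreviewed document by its predicted probability of relevance
scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
for doc, score in sorted(zip(collection, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")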

There are variations on this high-level process, and different technologies used to implement it. For example, where there is a conflict between how the human reviewers and the TAR algorithm coded a document, or where two very similar documents were coded differently by humans, an additional layer of senior legal review can be added to minimise inconsistencies. This helps avoid teaching the TAR algorithm the wrong thing. There are also methodologies that perform analysis on defined tranches of documents (say, sets of 1,000 documents), known as ‘simple active learning’, and others that constantly re-assess which documents to give to reviewers, known as ‘continuous active learning’ (sketched below).
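
As a rough illustration of the difference, continuous active learning can be thought of as a loop: retrain the model on everything coded so far, surface the highest-scoring unreviewed documents for the next batch of human review, and repeat. The sketch below simulates that loop with hypothetical documents and a simulated reviewer; it is not any particular vendor’s implementation.

# Sketch of a 'continuous active learning' loop: after each reviewed batch,
# the model is retrained and the highest-scoring unreviewed documents are
# sent to the reviewers next. Documents and the review step are simulated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical collection: (text, relevance known only to the 'reviewer')
corpus = [
    ("Insurer denies liability for pipeline weld defects", 1),
    ("Weekly car park roster for head office", 0),
    ("Notice of delay claim under the construction contract", 1),
    ("Staff social club newsletter", 0),
    ("Expert report on weld failure and rectification costs", 1),
    ("Catering order for the Christmas party", 0),
]
texts = [text for text, _ in corpus]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

reviewed, labels = [0, 1], [corpus[0][1], corpus[1][1]]  # seed batch
BATCH = 1
while len(reviewed) < len(corpus):
    # Retrain on all human coding so far, then score the whole collection
    model = LogisticRegression().fit(X[reviewed], labels)
    scores = model.predict_proba(X)[:, 1]
    # Pick the highest-scoring documents that have not yet been reviewed
    queue = sorted((i for i in range(len(corpus)) if i not in reviewed),
                   key=lambda i: -scores[i])[:BATCH]
    for i in queue:  # simulate human review of the next batch
        reviewed.append(i)
        labels.append(corpus[i][1])
    print("reviewed so far:", reviewed)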

Not surprisingly, there have been many questions about the utility and reliability of TAR, and many have found the process difficult to accept. While some courts have taken a more cautious approach, the Supreme Court of Victoria has grasped the nettle and is, in many respects, leading the way on the use of TAR in Australia and beyond.
 

One of the landmark cases highlighting how TAR can be, and is being, used in practice is discussed in the article ‘The first Australian case to endorse the use of technology assisted review for discovery … and it won’t be the last’. That article considers the ground-breaking decision in McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors [2016] VSC 734, the first Australian case to endorse TAR, and the subsequent judgment in McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors (No 2) [2017] VSC 640, which highlights the importance of having the appropriate level of expertise to use TAR effectively.
 
Notes

1. Australian Law Reform Commission, Discovery in Federal Courts (ALRC CP 2), November 2010, https://www.alrc.gov.au/sites/default/files/pdfs/publications/Whole%20Discovery%20CP.pdf.
2. Australian Law Reform Commission, Managing Discovery: Discovery of Documents in Federal Courts (ALRC Report 115), March 2011, https://www.alrc.gov.au/sites/default/files/pdfs/publications/Whole%20ALRC%20115%20%2012%20APRIL-3.pdf.