15 November 2018

Over the last eight years, there has been significant development in the use and acceptance of technology assisted review (TAR) in legal proceedings. In that time, it has become clear that practical, tested and court-accepted methodologies now exist to leverage technology to perform effective, large-scale reviews that identify relevant documents.

Not surprisingly, there are many who find it difficult to accept this process and who continue to question the utility and reliability of TAR. And while some courts have taken a more cautious approach, the Supreme Court of Victoria is, in many respects, leading the way on the use of TAR in Australia and beyond.

Technology assisted review - What is it and why is it needed?

One of the landmark cases highlighting how TAR can be, and is being, used in practice is McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors.

McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors [2016] VSC 734

In or about November 2009, McConnell Dowell entered into a joint venture with Consolidated Contracting Company Australia Pty Ltd. A construction contract was then entered into by the joint venture with QCLNG Pipeline Pty Ltd to design and build a natural gas pipeline in Queensland. The contract value was $730 million.

McConnell Dowell took out various insurance policies with Santam, QBE and Liberty. Difficulties arose in the design and construction of the pipeline and, in particular, the welding of joints. McConnell Dowell’s insurers denied liability. An arbitration between the parties was conducted in 2012.

The proceedings and the associated arbitration generated approximately four million documents. ‘Deduplication’3 reduced this to approximately 1.4 million documents.

Discovery challenge

Justice Vickery estimated it would have taken over 23,000 review hours, or 583 working weeks, to review these 1.4 million documents manually, not including quality control reviews by senior members of the legal team. This would have been a substantial imposition in terms of time and cost.

Adopting such a process would risk contravening the court’s overarching purpose of the “just, efficient, timely and cost-effective resolution of the real issues in dispute” set out in s7 of the Victorian Civil Procedure Act 2010.

In the TAR protocol agreed to by the parties, it was proposed that only approximately 20,000 documents would need to be reviewed. Using Justice Vickery’s estimates, these would take only around 333 hours to review, clearly a significant opportunity to achieve time and cost savings.
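As a rough sanity check on these figures, the implied review rate can be worked backwards from the numbers quoted above. The sketch below uses only those figures; the documents-per-hour rate and the hours-per-week assumption are inferred, not stated in the judgment:

```python
# Back-of-envelope check on Justice Vickery's review-time estimates.
# The ~61 docs/hour rate and 39.5-hour working week are inferred from
# the quoted figures, not stated explicitly in the judgment.

total_docs = 1_400_000
manual_hours = 23_000
hours_per_week = 39.5  # assumed, so that 23,000 hours ~ 583 working weeks

docs_per_hour = total_docs / manual_hours      # ~61 documents per hour
weeks = manual_hours / hours_per_week          # ~582 working weeks

tar_docs = 20_000                              # documents reviewed under the TAR protocol
tar_hours = tar_docs / docs_per_hour           # ~329 hours (judgment: ~333)
```

The small gap between ~329 and the quoted 333 hours simply reflects rounding in the published figures.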

Justice Vickery concluded that “the very large number of documents involved in the proceeding calls for special management” and that “traditional manual discovery of the Plaintiff’s documents is not likely to be either cost effective or proportionate.”

Special referee

For the reasons mentioned, Justice Vickery appointed a special referee to answer “questions as to the appropriate management of discovery in the proceeding and to deliver a report to the Court on those questions.”

The special referee’s role was to act as an educator and facilitator as to how the TAR process was to be conducted “with the cooperation of the parties … rather than an adversarial process.”

Justice Vickery appointed a senior counsel from the Bar to apply legal principles and address natural justice concerns, rather than someone with a technical background in the TAR process.

TAR recommendations agreed under the reference

The reference resulted in the parties agreeing to TAR discovery protocols and procedures. These were documented by the special referee and then adopted as orders by Justice Vickery.

Justice Vickery’s orders4 detailed the following TAR steps:

  • The use of a simple active learning (‘SAL’) protocol. A SAL protocol is a method of executing predictive coding and relies on a number of steps, explained below, to achieve agreed statistical outcomes (‘stopping criteria’), which are a measure of the accuracy and completeness of the TAR process.

  • The identification of a ‘comparison sample set’. This randomly selected sample of documents was a statistically valid representation of the population (1.4 million documents). The orders included a process where all parties would collaboratively review the documents within the comparison sample set for relevance. This comparison sample set and the assessments of relevance within it would be used at various stages of the training process to assess when the training of the TAR algorithm had reached the desired stopping criteria.

  • An initial training or ‘seed set’ would then be created by McConnell Dowell randomly selecting and reviewing 1,000 documents. This seed set is used to initially train the TAR algorithm as to which documents are relevant, and not relevant. In addition to the 1,000 documents, the defendants would include 200 documents identified as relevant and 135 documents identified as not relevant into the seed set.

  • From there, McConnell Dowell would create and review randomly selected samples of 1,000 to 1,500 documents as training rounds (as noted above, an underlying concept of TAR is training the TAR algorithm to allow it to identify what makes a document relevant or non‑relevant). At the completion of each training round, the re-trained TAR algorithm would be re-applied to the comparison sample set. The TAR algorithm’s accuracy would be tested by comparing the manual coding (i.e. the human decisions) of the comparison sample set to how the algorithm would classify each document. The parties’ IT experts would assess whether further training rounds would need to be undertaken to achieve the stopping criteria and algorithm stabilisation. The parties estimated that 10 to 15 rounds would be required before the stopping criteria would be reached.

  • The stopping criteria for a SAL protocol would normally (in our experience) consist of a measure for ‘precision’ and ‘recall’ (refer to call-out box). In the orders the parties had agreed to a recall rate of 80% meaning that the algorithm, when applied to the comparison sample set, would identify a minimum of 80% of the relevant documents tagged by the reviewers. The orders were silent as to the appropriate precision rate.

  • The final step would be to perform a ‘validation round’, which is very similar to the comparison sample set process. A randomly selected sample of documents (excluding documents in the comparison sample set) from the population would be reviewed for relevance by McConnell Dowell. Once this human review is completed, the algorithm would be applied to the validation round documents and its classifications compared with the human tagging results. In the orders, the recall and precision rates were to be compared with figures from the comparison sample set, and if they were very similar, or within a reasonable margin of error, the parties agreed that the algorithm was appropriate to use on the entire population of documents.

Unlike traditional discovery, this process meant that at a number of stages the defendants had input into the determination of legal relevance of the documents. To achieve this there were specific requirements to manage privileged documents, which were to be performed after the TAR process was complete.
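The precision and recall measures referred to in the orders follow the standard information-retrieval definitions. The sketch below is a generic illustration of those definitions, applied to hypothetical comparison-sample counts chosen to mirror the rates discussed in this case; it is not the parties’ actual implementation:

```python
def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Standard information-retrieval measures.

    precision: of the documents the algorithm flags as relevant,
               the fraction that are actually relevant.
    recall:    of the actually relevant documents,
               the fraction the algorithm flags.
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical comparison-sample outcome: the algorithm flags 1,000
# documents, of which 520 are truly relevant; a further 130 relevant
# documents are missed entirely.
p, r = precision_recall(true_positives=520, false_positives=480, false_negatives=130)
print(f"precision = {p:.0%}, recall = {r:.0%}")  # precision = 52%, recall = 80%
```

With these invented counts, recall meets the agreed 80% stopping criterion while precision sits at 52%, which is the combination that ultimately arose in the proceedings.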


McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors (No 2) [2017] VSC 640

Referee recommendation

By the middle of 2017, two of the three defendants in this matter had settled, leaving only the first defendant, Santam, defending the proceedings. The special referee prepared a report (‘the July Report’) regarding progress of the TAR process, including recommendations as to how the process should continue.

By the time of the special referee’s July Report, the TAR protocol was near the point of completion and its outcomes were not as the parties had expected. To quote from the special referee’s report:

“unfortunately the protocols recommended by the parties’ IT experts have not been as successful as the parties wanted or expected.”

The special referee recommended that the protocol not be completed. Specifically, he suggested that no further training rounds be undertaken and no validation round be completed.

TAR results

The target recall rate was 80% and no precision rate was specified. After six training rounds (the original protocol estimated ten to fifteen rounds), the best precision rate achieved was 52%. This means that around half of the documents identified by the algorithm as relevant would be considered irrelevant by a human reviewer. Few would consider this an ideal situation. Global case law studies suggest that an acceptable precision rate is around 80%5. Additional training rounds would typically be expected to improve the precision rate; however, after a further two rounds, the precision rate dropped to 46%. In both instances, the 80% recall rate was being met.

The special referee found that “little or no likely improvement to the TAR algorithm” was expected from further rounds.

Santam objected to this on a number of bases. The two most significant objections were:

  1. The reduction in the precision rate from two rounds did not rule out future improvement as the performance of the model may have temporarily stalled. This was argued to be especially relevant in the context where the original protocol envisaged at least ten rounds of training and where “a relatively small improvement in the performance of the TAR model can significantly reduce the volume of discovery to be reviewed manually.”

  2. The validation round should be completed in any event, as it is designed to validate the process, not change the number of documents returned or adjust the recall/precision rates.


On the first point, Justice Vickery agreed with the special referee in finding that the cost of performing further training rounds was not proportionate, due to the insufficient likelihood of materially improving the precision rate. It is interesting to note that Santam argued that McConnell Dowell had not discharged their discovery obligations by burdening them with so many irrelevant documents (because of the low precision rate). Over 200,000 documents were estimated to be produced if no further training rounds were performed.

Justice Vickery did not however accept the special referee’s recommendation that the validation round be abandoned. Rather than conclude on the relevance of the validation round, Justice Vickery’s reasoning was founded in natural justice, since McConnell Dowell was not given adequate opportunity to make submissions to the special referee on this topic. His Honour therefore referred the matter back to the special referee to consider what validation ought to occur.

Even taking these issues into consideration, applying the existing algorithm to the whole population meant that approximately 218,000 of the 1.4 million documents were classified as relevant. This was achieved by the TAR process described above, which entailed the review of fewer than 20,000 documents in the population.
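By the definition of precision, a rough implied count of genuinely relevant documents can be derived from the figures above. This is a back-of-envelope sketch using only the numbers reported in the judgment, not an analysis the court performed:

```python
# Rough estimate of how many of the 218,000 documents classified as
# relevant would actually be relevant, given the reported precision
# rates of 46% and 52%. Precision = truly relevant / classified relevant.

classified_relevant = 218_000
precision_low, precision_high = 0.46, 0.52  # rates reported across training rounds

low = classified_relevant * precision_low    # ~100,000 documents
high = classified_relevant * precision_high  # ~113,000 documents
print(f"estimated truly relevant: {low:,.0f} to {high:,.0f}")
```

In other words, on these figures roughly half of the 218,000 documents produced would be expected to be irrelevant, which is the burden Santam complained of.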

Procedural significance

Justice Vickery’s appointment of a special referee envisaged a role which was very much one of an educator and facilitator to help parties navigate the development of an agreement on a TAR protocol. Although this resulted in both parties initially reaching consensus, it later meant the special referee was responsible for making recommendations over procedural matters.

Some further consideration will need to be given to whether such issues are best resolved by a special referee, and if so, whether that referee should have a technical or legal background. Clearly, in McConnell Dowell there were both legal and complex technical issues. Arguably, the latter may have been better dealt with by having a technical expert in the role, leaving the resolution of legal issues to the court.

In any event, this case highlights a number of aspects that parties in similar circumstances should potentially consider as part of a TAR protocol. It also demonstrates the importance of up-front thought in the design of a TAR process, and the importance of understanding how the various iterations of a TAR process operate.


TAR has had a quite broad-reaching effect on the Victorian and Australian legal landscape and will change the way documents are reviewed as part of discovery going forward. It has the ability to achieve significant efficiency gains in the document review process.

This case is the basis for the practice note which came into effect on 30 January 20176. The practice note makes clear that legal practitioners need to build their understanding of technology in the discovery process, of which TAR is just one aspect. It states:

“All legal practitioners will be competent and equipped to deal with electronic documents in common formats during the discovery process.”

“The inability or reluctance of a lawyer to use common technologies should not occasion additional costs for other parties.”

The ability of courts to impose the use of TAR on a party means that legal teams must consider the use of TAR in matters involving large document populations.

Our experience is that not all parties are as keen as Justice Vickery to adopt TAR, and other aspects of technology, into their discovery processes. Whilst in some cases adopting a non-TAR approach may be appropriate, it is increasingly unlikely to be so in most commercial matters. Those who have successfully adopted TAR will, in our experience, swear by it.

An important part of accepting TAR’s effectiveness is understanding it. As shown above, at its core, TAR is a change in methodology. It is facilitated by technology, but founded on human review. It allows error rates to be overtly identified, and relieves teams of paralegals from relatively monotonous, and seemingly endless, review. But like any methodology, it must be executed successfully. Like any discovery process, the use of TAR can either help or hinder the legal process. And for those reasons, TAR does not, and will not, supplant the need for competent legal counsel and human review.


1  Discovery in Federal Courts (ALRC CP 2) published November 2010, https://www.alrc.gov.au/sites/default/files/pdfs/publications/Whole%20Discovery%20CP.pdf.
2  Managing Discovery: Discovery of Documents in Federal Courts published March 2011, https://www.alrc.gov.au/sites/default/files/pdfs/publications/Whole%20ALRC%20115%20%2012%20APRIL-3.pdf.
3  A process of excluding electronic documents that are identical, or effectively identical.
4  Editor’s note: These orders are not available on Austlii. They are available through the Supreme Court of Victoria.
5  For example, the Irish High Court in IBRC & Ors. v. Sean Quinn & Ors [2015] IEHC 175 discussed a study showing a TAR process resulting in 85% recall and an expert in that case held the view that the minimum ‘f-measure’ (which is an alternative measure relating to recall and precision) should be 80%.
6  In the Supreme Court of Victoria’s SC GEN 5 Technology in Civil Litigation.