Seven Expectations to Set Your Team Up for Success with Computer-Assisted Review

Guest article by Jay Leib, kCura

We recently attended the ILTA 2013 conference, where we led a hands-on computer-assisted review lab. Attendees responded so positively that we're holding the lab again at our upcoming annual user conference, Relativity Fest. We're excited to help attendees better understand the computer-assisted review workflow and become more comfortable with the technology.

To aid folks who may be thinking about using computer-assisted review, we thought we’d go back to the basics and explain seven expectations that should be carefully considered before and during the process.

Before diving in, it always helps to have a refresher. Computer-assisted review uses analytics technology to amplify human expertise and suggest coding decisions in a document universe. The goal is to help case teams prioritize documents, sort out large groups of non-responsive documents, and ultimately facilitate a faster, more cost-effective review without sacrificing quality. Understanding how the computer-assisted review process works—and knowing best practices for working with the technology behind it—can improve your results tremendously.

Now let’s get to the seven expectations.

1) Clearly define why you plan to use the technology, and what a successful outcome would look like for you.

Computer-assisted review is a technology enabler and, as such, it must serve the review team's strategy and objectives to be successful. All litigation technology (indeed, all technology) is more appropriate in some scenarios than in others. Before starting a computer-assisted review, the review team should define the strategy of the review and the goal of the technology.

Some of these goals may include:

  • Prioritization, or quickly categorizing documents by relevance or issue so they can be batched for review
  • Culling, or removing the noise of non-responsive documents so the team can focus on the relevant ones
  • Quality control, or comparing the results of computer-assisted review against human review to identify discrepancies

2) Understand that it takes time to build a proper index.

A well-constructed analytics index is an essential prerequisite for every computer-assisted review project. It is important to put effective filters on the index so it excludes any text in the documents that is not conceptually relevant. Allow sufficient time in your project plan for an analytics index to be built between processing and computer-assisted review.
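
To make this concrete, here is a minimal sketch of that kind of pre-index filtering in Python. The patterns and function name are illustrative assumptions rather than Relativity features; in practice, you would configure filters within your analytics platform rather than write your own.

    import re

    # Illustrative patterns for text with no conceptual meaning; real index
    # filters are configured in the analytics platform itself.
    NOISE_PATTERNS = [
        re.compile(r"(?im)^confidentiality notice:.*$"),  # email disclaimers
        re.compile(r"(?im)^sent from my \w+$"),           # mobile signatures
        re.compile(r"\b\d[\d,./-]*\b"),                   # bare numbers and dates
    ]

    def filter_for_index(text):
        """Strip conceptually irrelevant text before a document is indexed."""
        for pattern in NOISE_PATTERNS:
            text = pattern.sub(" ", text)
        return re.sub(r"\s+", " ", text).strip()

    print(filter_for_index("The merger closed on 6/12/2012.\nSent from my iPhone"))
    # -> "The merger closed on ."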

3) Understand that some documents will need to be manually reviewed.

Some documents, such as scanned images of handwriting, graphics, and files containing only numeric data, cannot be indexed by a text analytics engine because they contain no conceptual content for the engine to use. This means that not every document should go through the computer-assisted review process; set these documents aside for manual review.

4) It’s crucial to have the right domain experts from the start.

Domain experts—those familiar with the intricacies of a particular case and the type of case at hand—should be involved in computer-assisted review from the start. Without a domain expert to train the system with good example documents, your computer-assisted review may not achieve the desired results.

5) Your domain experts must be trained on the specifics of reviewing documents for a computer-assisted review workflow.

It is important to teach everyone involved in the computer-assisted review project best practices for selecting training documents, since these examples are how the system learns. The best way to understand what makes a good training document is to learn a few rules about what makes a poor one:

  1. A document should never be included as an example based on the content of a family member.
  2. A document is only a good example if there is text on the document’s face. If there is no relevant text within the four corners of the document, it is not a good example document.
  3. If numbers or symbols are what make a document responsive, or if a spreadsheet contains predominantly numbers—as opposed to text—then that document does not make a good example.
  4. A document that generally has poor-quality OCR, such as a handwritten document, does not make a good training document.
  5. Documents that are highly formatted and contain small amounts of text—such as schematics, engineering drawings, etc.—might be responsive, but make poor training documents.

Reviewers need to understand that the quality of a training document is independent of the document’s relevance. A reviewer is making two decisions: whether a document is responsive, and whether it is a good example for training the analytics engine.
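
These rules lend themselves to an automated pre-check. Below is a minimal sketch in Python, assuming you have each document's extracted text and know whether it was coded because of a family member; the function name and thresholds are hypothetical, chosen only to illustrate the heuristics above.

    def is_poor_training_document(extracted_text, coded_via_family=False):
        """Flag documents that would make poor training examples (rules 1-5).

        Thresholds are illustrative guesses, not product settings.
        """
        # Rule 1: never train on a document coded for a family member's content.
        if coded_via_family:
            return True

        text = extracted_text.strip()
        # Rule 2: no text within the four corners means nothing to learn from.
        if not text:
            return True

        letters = sum(ch.isalpha() for ch in text)
        # Rule 3: predominantly numbers or symbols rather than text.
        if letters / len(text) < 0.5:
            return True

        words = text.split()
        # Rules 4-5: OCR noise, or highly formatted documents with little text.
        if len(words) < 25 or sum(w.isalpha() for w in words) / len(words) < 0.7:
            return True
        return False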

6) The results of computer-assisted review must be validated.

After establishing domain experts and preparing the analytics index, it is important to use statistical sampling to gauge the effectiveness of the review protocol. Sampling the categorized document universe is how you verify that the system is meeting the criteria established by the review team.
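
As one illustration of the sampling step, the short Python sketch below computes a sample size using the standard normal approximation for a proportion, then draws a simple random sample. The 95 percent confidence level, 2.5 percent margin of error, and doc_ids list are assumptions for the example; your actual validation protocol should be set by the case team.

    import math
    import random

    def sample_size(z=1.96, margin=0.025, p=0.5):
        """Documents to sample for a proportion estimate at a given
        confidence level (z) and margin of error, worst case p = 0.5."""
        return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

    # 95% confidence with a +/- 2.5% margin of error -> 1,537 documents.
    n = sample_size()

    # Draw the validation sample; doc_ids is a hypothetical list of the
    # documents the system categorized (e.g., those coded non-responsive).
    doc_ids = list(range(100_000))
    validation_sample = random.sample(doc_ids, n)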

7) If reviewers disagree, it will confuse the system and can drive up overturn rates. Make sure you take steps to track and resolve disagreements.

A single reviewer driving the computer-assisted review process would be ideal, but that isn’t always possible given your timeline. If multiple domain experts are working through the training and quality control rounds, get them on the same page early and establish a process for escalating questionable documents; this mitigates confusion about what defines a responsive document. Consistency is key, and you’ll need to be prepared to address inconsistency through your quality control process.
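
One concrete way to track disagreement is to have reviewers code an overlapping set of documents and measure their agreement beyond chance, for example with Cohen's kappa. Here is a minimal Python sketch; the responsive/non-responsive codes and the sample data are hypothetical.

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Agreement between two reviewers beyond what chance would predict."""
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Two domain experts coding the same overlap set (hypothetical data,
    # "R" = responsive, "NR" = non-responsive).
    expert_1 = ["R", "R", "NR", "R", "NR", "NR", "R", "NR"]
    expert_2 = ["R", "NR", "NR", "R", "NR", "R", "R", "NR"]
    print(round(cohens_kappa(expert_1, expert_2), 2))  # 0.5; low scores
    # suggest the experts should recalibrate on what "responsive" means.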

It takes a well-trained team to drive the computer-assisted review process and ensure it is effective and defensible. Understanding how the process works, and learning best practices for working with the technology, will set your team up for success.

Note: kCura’s seven expectations for computer-assisted review first appeared on Legal IT Insider at www.legaltechnology.com.


