
4 Tips for an Efficient eDiscovery Workflow

Triaging

Triaging digital evidence sources minimizes the volume of data you must retain and helps you identify key evidence. When developing a culling and search strategy, the objective should always be to surface the most relevant content first and move it downstream to the review team.
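As an illustration only, here is a minimal Python sketch of such a culling pass. The date window and keyword list are hypothetical placeholders; a real matter would use the criteria negotiated for that case.

```python
import os
from datetime import datetime, timezone

# Hypothetical culling criteria for this matter (placeholders only).
KEYWORDS = {"acquisition", "merger", "term sheet"}
DATE_FROM = datetime(2022, 1, 1, tzinfo=timezone.utc)
DATE_TO = datetime(2023, 12, 31, tzinfo=timezone.utc)

def cull_and_rank(root):
    """Keep files inside the date window; rank keyword hits first."""
    hits, rest = [], []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
            if not (DATE_FROM <= mtime <= DATE_TO):
                continue  # outside the relevant window: culled
            try:
                text = open(path, encoding="utf-8", errors="ignore").read().lower()
            except OSError:
                continue
            score = sum(text.count(k) for k in KEYWORDS)
            (hits if score else rest).append((score, path))
    # Most relevant content first, then everything else that survived culling.
    return [p for _, p in sorted(hits, reverse=True)] + [p for _, p in rest]
```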

Create Scalable Workflows

The business of eDiscovery is built on workflow. Collected electronically stored information moves through a series of conversions on the path to its final destination, whether that's a review database or a production volume.

Investigators can apply technologies such as deep learning, skin tone analysis, facial identification, and predictive coding to a dataset of documents, emails, attachments, and images to quickly "bubble to the surface" items of potential relevance and importance, helping to focus the investigation and prioritize the items to be reviewed first. While techniques such as predictive coding (also known as technology-assisted review) are relatively new, they can produce more accurate results than manual review alone.
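For illustration, the sketch below approximates predictive coding with an off-the-shelf scikit-learn classifier trained on a hypothetical hand-coded seed set; production technology-assisted review tools are considerably more sophisticated, but the ranking idea is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set: a few documents a reviewer has already coded.
seed_docs = ["re: merger term sheet attached", "lunch order for friday",
             "draft acquisition agreement v3", "office parking reminder"]
seed_labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant

unreviewed = ["forwarding the revised term sheet", "holiday party photos"]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the unreviewed population and surface likely-relevant items first.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for score, doc in sorted(zip(scores, unreviewed), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Reviewers then code the highest-scoring items first, and the model can be retrained on those new decisions to keep improving the ranking.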

Automate Your Processes

Providers such as Nuix help you streamline and automate investigative workflows and processes, removing repetitive tasks while ensuring court defensibility through consistency and compliance with international standards. You can automate OCR, flag responsive material based on keywords or hashes, extract entities, and visualize the results in a dashboard.
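As a rough, tool-agnostic sketch of that kind of automation (not Nuix's actual API), the following Python uses a hypothetical known-hash list and keyword set to flag files and pull out email-address entities:

```python
import hashlib
import re
from pathlib import Path

# Hypothetical known-hash list and responsive keywords (placeholders only).
KNOWN_MD5S = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder digest
KEYWORDS = re.compile(r"\b(term sheet|merger|acquisition)\b", re.IGNORECASE)
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def triage(path: Path) -> dict:
    data = path.read_bytes()
    text = data.decode("utf-8", errors="ignore")
    return {
        "file": str(path),
        # Hash match: the file matches a known item of interest.
        "hash_hit": hashlib.md5(data).hexdigest() in KNOWN_MD5S,
        # Keyword match: flag as potentially responsive for review.
        "keyword_hit": bool(KEYWORDS.search(text)),
        # Extracted entities (here just email addresses) for a dashboard.
        "emails": sorted(set(EMAIL.findall(text))),
    }
```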

For image analysis, you can flag images with a high proportion of skin tones, recognize faces, and automatically route those images to a folder for review. You can also identify documents that need translation and push them to a reviewer, or check whether they have been translated before and extract the existing translation for review.
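A crude illustration of the skin tone idea, assuming Pillow is installed and using a simple published RGB heuristic rather than any vendor's actual detector, might look like this:

```python
import shutil
from pathlib import Path
from PIL import Image  # pip install Pillow

def skin_ratio(path: Path) -> float:
    """Fraction of pixels falling in a crude RGB skin-tone range."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = list(img.getdata())
    skin = sum(1 for r, g, b in pixels
               if r > 95 and g > 40 and b > 20 and r > g and r > b
               and max(r, g, b) - min(r, g, b) > 15)
    return skin / len(pixels)

def route_for_review(image_dir: Path, review_dir: Path, threshold: float = 0.4):
    """Move high-skin-tone images into a folder queued for a reviewer."""
    review_dir.mkdir(exist_ok=True)
    for path in image_dir.glob("*.jpg"):
        if skin_ratio(path) >= threshold:
            shutil.move(str(path), review_dir / path.name)
```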

Parallel Processing

Speed needn't come at the expense of completeness. Apply parallel and distributed processing to make efficient use of all available hardware, quickly making large amounts of evidence available for timely analysis and letting you work through high-volume data sets with ease.
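As a minimal sketch of fanning per-item work across CPU cores with Python's standard library (the per-item step here is just a placeholder hash, and the "evidence" directory is hypothetical), consider:

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def process_item(path: Path) -> tuple[str, str]:
    """Stand-in for per-item processing: here, just hash the file."""
    return str(path), hashlib.sha256(path.read_bytes()).hexdigest()

def process_collection(root: Path) -> dict:
    paths = [p for p in root.rglob("*") if p.is_file()]
    # Fan the work out across all available CPU cores.
    with ProcessPoolExecutor() as pool:
        return dict(pool.map(process_item, paths))

if __name__ == "__main__":  # guard required for process pools on some platforms
    results = process_collection(Path("evidence"))
    print(f"processed {len(results)} items")
```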
