User Guide

SLR Coding Tool Documentation

Complete guide to using the Systematic Literature Review Coding Tool for your research on policy process frameworks

Overview
Understanding the SLR Coding Tool and its purpose

The SLR Coding Tool automates the extraction and coding of academic articles for systematic literature reviews. It uses AI to identify theoretical frameworks, policy types, methodologies, and other key dimensions from PDF articles, then provides a human-in-the-loop verification interface to ensure accuracy.

Key Features:

  • Automated PDF text extraction and AI-powered coding
  • Five-dimension coding scheme aligned with your research protocol
  • Confidence scoring and auto-flagging of edge cases
  • Split-screen review interface with Accept/Edit/Flag actions
  • Inter-coder reliability tracking with Cohen's kappa calculation
  • Publication-ready Excel exports with audit trails
  • Keyboard shortcuts for rapid review workflow
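Cohen's kappa, used for the reliability tracking above, corrects raw agreement between two coders for agreement expected by chance. A minimal self-contained sketch (the tool computes this internally; the function name and framework labels here are illustrative):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels on the same set of articles."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of articles coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["ACF", "MSF", "ACF", "PET", "ACF"]
b = ["ACF", "MSF", "MSF", "PET", "ACF"]
print(round(cohens_kappa(a, b), 3))  # → 0.688
```

Values near 1.0 indicate strong inter-coder agreement; values near 0 mean agreement no better than chance.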

Step 1: Upload Articles
  1. Navigate to the Upload Articles page
  2. Drag and drop PDF files or click to browse
  3. Upload multiple articles at once for batch processing
  4. Wait for AI auto-coding to complete (status changes to "Coded")

Tip: Ensure PDFs are text-based (not scanned images) for best extraction results

Step 2: Review Queue

The Review Queue shows all articles ready for human verification, prioritized by:

  1. Flagged articles (edge cases, uncertain codings)
  2. Low confidence scores (<70%)
  3. Chronological order (oldest first)
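The three-level prioritization above amounts to a single sort key: flagged first, then sub-70% confidence, then oldest upload. A sketch of that ordering, where the field names and the 0.70 threshold are illustrative rather than the tool's actual schema:

```python
from datetime import date

# Hypothetical article records (field names are illustrative).
articles = [
    {"title": "A", "flagged": False, "confidence": 0.92, "uploaded": date(2024, 3, 1)},
    {"title": "B", "flagged": True,  "confidence": 0.85, "uploaded": date(2024, 3, 5)},
    {"title": "C", "flagged": False, "confidence": 0.55, "uploaded": date(2024, 2, 20)},
]

def queue_order(article):
    # False sorts before True: flagged first, then confidence < 0.70, then oldest.
    return (not article["flagged"], article["confidence"] >= 0.70, article["uploaded"])

queue = sorted(articles, key=queue_order)
print([x["title"] for x in queue])  # → ['B', 'C', 'A']
```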

Use the search and filter tools to find specific articles by framework type, policy type, confidence level, or other criteria.

Step 3: Verify Coding

Click Review Article to open the split-screen interface:

  • Left panel: PDF text with highlighted evidence
  • Right panel: Coding fields with confidence scores

For each field, you can:

  • Accept: confirm the AI coding is correct
  • Edit: modify the value and record a reason for the change
  • Flag: mark the field for team discussion

Step 4: Export Results

Once coding is complete, export your data from the Dashboard:

  • Code-Document Matrix: All articles with five-dimension codes
  • Audit Trail: Every edit with timestamps and reasons
  • Descriptive Statistics: Framework distributions, confidence metrics
  • Cross-Tabulations: Framework × Policy Type, Framework × Regime Type
  • Flagged Articles: List of articles needing discussion
  • Raw Data: Complete dataset for statistical analysis
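The Cross-Tabulations export is a standard contingency table. Assuming the Raw Data export loads into a pandas DataFrame, a Framework × Policy Type table can be rebuilt with `pd.crosstab` (the column names and values here are illustrative, not the export's exact schema):

```python
import pandas as pd

# Toy coded dataset standing in for the Raw Data export.
df = pd.DataFrame({
    "framework": ["ACF", "MSF", "ACF", "PET", "ACF"],
    "policy_type": ["environment", "health", "health", "environment", "environment"],
})

# Counts of articles per (framework, policy type) cell.
matrix = pd.crosstab(df["framework"], df["policy_type"])
print(matrix)
```

The same call with a different second column (e.g. a regime-type field) reproduces the Framework × Regime Type table.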

Need Help?

If you encounter issues or have questions about the coding protocol, refer to your systematic literature review documentation or consult with your research team.

For technical issues with the tool, check that PDFs are text-based (not scanned images) and that your browser is up to date.