Detection accuracy

Detection accuracy is measured against the nvidia/Nemotron-PII dataset, a synthetic dataset of 50,000 documents annotated across 51 PII label types. Across all 51 label types, AI Smart Redact achieves an adjusted F1 score of 0.96. Detection logic is used as shipped; the semantic recognizer’s entityMapping is extended so its outputs align with the dataset’s label set.

Out-of-the-box accuracy

The evaluation targets every label in the dataset (51 types) using AI Smart Redact’s built-in detection configuration. The main change from the shipped defaults is the semantic recognizer’s entityMapping, which is extended to cover all 51 dataset label types; recognizers for entity types the dataset doesn’t annotate are disabled (see What this configuration covers). No custom pattern recognizers, keyword recognizers, or exclusions were added.

Evaluation on the full 50,000-sample international test split:

Metric            Raw       Adjusted
True positives    399,967   399,967
False negatives   21,979    13,016
False positives   40,143    18,161
Recall            0.9479    0.9685
Precision         0.9088    0.9566
F1                0.9279    0.9625

Adjusted scores exclude errors that per-case analysis attributed to the dataset’s annotations rather than to the detector. Refer to Adjusted scores.

Metric definitions

Three standard metrics describe detection quality. They appear throughout this page; the headline F1 score links here.

  • Recall: of all actual sensitive entities in the data, how many did the detector find? A recall of 0.95 means the detector found 95% of the entities that should have been detected.
  • Precision: of all entities the detector flagged, how many were genuinely sensitive? A precision of 0.95 means 95% of the detector’s flagged entities were genuine sensitive entities.
  • F1: the harmonic mean of recall and precision, on a scale from 0 to 1. A high F1 means the detector finds most actual entities (high recall) and most of its detections are correct (high precision).
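
As a concrete check, these definitions reproduce the headline numbers above. The short snippet below recomputes the raw and adjusted scores from the true positive, false positive, and false negative counts in the evaluation table; it uses only figures reported on this page.

```python
# Recompute the raw and adjusted metrics from the counts reported above.
def scores(tp, fp, fn):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# Raw counts from the full 50,000-sample evaluation.
raw = scores(tp=399_967, fp=40_143, fn=21_979)
# Adjusted counts exclude errors attributed to dataset annotation issues.
adjusted = scores(tp=399_967, fp=18_161, fn=13_016)

print("raw:      recall=%.4f precision=%.4f f1=%.4f" % raw)       # 0.9479 0.9088 0.9279
print("adjusted: recall=%.4f precision=%.4f f1=%.4f" % adjusted)  # 0.9685 0.9566 0.9625
```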

What this configuration covers

The evaluation deliberately stops short of fitting AI Smart Redact to the dataset:

  • The built-in pattern recognizers are used as shipped, with their format and checksum validators. The one exception: recognizers for entity types the dataset doesn’t annotate (such as IBAN, MONEY, and DOMAIN_NAME) are disabled, because their detections would otherwise be counted as false positives against a dataset that has no annotations for those types.
  • The built-in semantic recognizer is used, with entityMapping extended from the default 4 mappings to cover the dataset labels that aren’t handled by deterministic pattern recognizers (a sketch of such a mapping follows this list).
  • No custom pattern recognizers were added for dataset-specific labels such as SSN, license plate, national ID, postcode, account number, or employee ID. These types have predictable formats and are best detected by purpose-built recognizers, but adding them would amount to fitting the configuration to this specific dataset.
  • No keyword denylists or exclusions were added.
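
The exact entityMapping format isn’t shown on this page, so the sketch below is illustrative only: it assumes a simple label-to-label dictionary and uses made-up semantic-model label names. What it demonstrates is that extending the mapping lets the semantic recognizer’s outputs be scored against the dataset’s own label names rather than counted as mismatches.

```python
# Illustrative sketch only: the real entityMapping schema and the
# semantic model's label names may differ from what is shown here.
entity_mapping = {
    "PER": "PERSON",            # semantic-model output -> dataset label
    "ORG": "ORGANISATION",
    "LOC": "PHYSICAL_ADDRESS",
    "USER": "USERNAME",
    # ...extended so that every dataset label without a deterministic
    # pattern recognizer has a corresponding semantic-model mapping.
}
```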

Some of the 51 dataset labels are deterministic by nature (region-specific identifiers with their own validation rules, such as SSN, LICENSE_PLATE, and NATIONAL_ID), but in this evaluation they’re detected only through the semantic model. Adding custom pattern recognizers for those identifier formats typically raises F1 toward the levels seen for built-in deterministic recognizers in this evaluation, where CREDIT_CARD reached 0.97, EMAIL_ADDRESS 0.99, MAC_ADDRESS 1.00, and URL 0.99. Actual gains depend on the pattern. To close this gap, use the configuration options described in Improving accuracy in your environment.
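
As an illustration of what such a purpose-built recognizer encodes, the fragment below matches the written US SSN format with a plain regular expression. It is not AI Smart Redact configuration, just the kind of deterministic format rule a custom pattern recognizer would carry; wiring it into a request is covered in Improving accuracy in your environment.

```python
import re

# US SSN written form: three digits, two digits, four digits, hyphen-separated.
# A deterministic recognizer matches this shape directly instead of relying
# on the semantic model's reading of the surrounding context.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

text = "Employee record: SSN 123-45-6789, hired 2021-03-01."
print([m.group() for m in SSN_PATTERN.finditer(text)])  # ['123-45-6789']
```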

Built-in label scores

Per-label raw scores from the same evaluation, for the 17 entity types that AI Smart Redact detects out of the box and that the Nemotron-PII dataset annotates:

Entity type        Detection       Recall   Precision   F1
MAC_ADDRESS        Deterministic   0.9991   0.9976      0.9983
EMAIL_ADDRESS      Deterministic   0.9893   0.9965      0.9929
PERSON             Semantic        0.9945   0.9880      0.9912
GPS_COORDINATE     Deterministic   0.9898   0.9914      0.9906
URL                Deterministic   0.9894   0.9812      0.9853
DATETIME           Deterministic   0.9625   0.9793      0.9708
PHYSICAL_ADDRESS   Semantic        0.9728   0.9687      0.9707
IP_ADDRESS         Deterministic   0.9671   0.9729      0.9700
CREDIT_CARD        Deterministic   0.9443   0.9968      0.9698
DATE               Deterministic   0.9423   0.9902      0.9657
USERNAME           Semantic        0.9390   0.9830      0.9605
VIN                Deterministic   0.9182   0.9889      0.9523
PHONE_NUMBER       Deterministic   0.9657   0.9326      0.9489
ORGANISATION       Semantic        0.9581   0.9168      0.9370
HTTP_COOKIE        Deterministic   0.7136   0.9688      0.8218
BIC_SWIFT          Deterministic   0.6701   0.9917      0.7997
TIME               Deterministic   0.6777   0.8402      0.7503

The lower raw F1 scores in the table largely reflect dataset annotation issues rather than detection mistakes. For example, BIC_SWIFT recall rises to near-perfect after the cases described in Adjusted scores are removed.

Adjusted scores

Each detection error was reviewed individually to determine its cause. A meaningful share of false negatives and false positives traced back to issues in the dataset’s annotations. The Nemotron-PII dataset is synthetic, which explains why some annotated values don’t conform to real-world format conventions. Most are clear annotation errors. Examples include:

  • Duration expressions like “30 minutes” annotated as TIME rather than as a duration.
  • BIC/SWIFT codes whose country code (characters 5–6 of the code) isn’t a valid ISO 3166 country code.
  • Bare years annotated as full dates without a month or day.
  • Credit card numbers that don’t satisfy the Luhn checksum.

The remainder are edge cases where the detector and the dataset apply different conventions. For example, the dataset annotates a datetime as separate date and time spans while the detector returns a single datetime entity.

Adjusted scores exclude both kinds. The combined impact is significant: about 41% of false negatives and 55% of false positives in the raw count come from these annotation issues rather than from detection mistakes.
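
The Luhn checksum mentioned in the examples above is a public algorithm, so per-case review can classify non-conforming card numbers mechanically. A minimal reference implementation (the standard algorithm, not AI Smart Redact code), with a synthetic test number:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 when the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))  # True: passes the checksum
print(luhn_valid("4539 1488 0343 6468"))  # False: last digit breaks the checksum
```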

Improving accuracy in your environment

The figures shown measure the built-in defaults without any dataset-specific tuning. Detection is fully customizable per request, and customer-tuned configurations consistently produce higher precision and F1 on production documents. To raise accuracy on your documents (a combined configuration sketch follows this list):

  • Add custom pattern recognizers for region-specific identifiers (SSN, license plates, national IDs) or domain-specific codes. This is the single biggest lever for higher precision on identifier-style entity types.
  • Use keyword exclusions to suppress recurring false positives, such as your own company name being detected as ORGANISATION.
  • Adjust scoreThreshold to favor precision (raise it) or recall (lower it).
  • Switch checksumValidationMode to relaxed if missed entities are more costly than occasional false positives in checksum-validated formats.
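
This page doesn’t document the request schema, so the following is only a rough sketch of how these levers might combine in a per-request configuration. The names scoreThreshold and checksumValidationMode come from the list above; the surrounding structure and the customRecognizers and exclusions fields are assumed for illustration and may not match the real API.

```python
# Hypothetical per-request detection configuration. Field names marked as
# assumed below are illustrative and may differ from the actual API.
detection_config = {
    "scoreThreshold": 0.6,                # raise to favor precision, lower to favor recall
    "checksumValidationMode": "relaxed",  # accept near-misses in checksum-validated formats
    # "customRecognizers" and "exclusions" are assumed field names.
    "customRecognizers": [
        {
            "entityType": "US_SSN",
            "pattern": r"\b\d{3}-\d{2}-\d{4}\b",  # region-specific identifier format
        }
    ],
    "exclusions": [
        "Example Corp",  # suppress your own company name as a recurring ORGANISATION hit
    ],
}
```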

Industry context

No universally agreed benchmark exists for evaluating PII detection systems, which makes direct comparisons across products difficult. The closest established field is Named Entity Recognition (NER), where state-of-the-art models routinely report F1 of 0.93–0.94. Those benchmarks are narrower than PII detection: they typically cover 3 or 4 broad entity types (person, location, organization) on clean newswire-style text.

PII detection in real-world applications is harder. It covers dozens of heterogeneous entity types (emails, phone numbers, credit card details, national IDs, addresses, and other nuanced personal identifiers), often in messy, domain-specific, or conversational text. It also requires balancing two competing risks: missed sensitive data, which creates privacy and compliance exposure, and over-redaction, which reduces data utility.

Reported numbers in the field reflect this difficulty. General-purpose open source PII tools (rule-based and NER hybrids) commonly report F1 in the 0.81–0.85 range on realistic evaluations. Commercial or heavily domain-tuned systems frequently claim 0.91–0.98, though those numbers are often obtained on narrower entity sets, synthetic data, or proprietary benchmarks rather than broad standardized tests.

It’s worth restating that the Nemotron-PII dataset is synthetic. Real-world PDFs introduce additional challenges that synthetic text doesn’t capture: scanned pages where text is extracted through OCR and can include character recognition errors, multi-column or table-heavy layouts that fragment the surrounding context that detection relies on, mixed-language documents, and form-style PDFs where entities appear in field labels and values rather than in flowing prose. The accuracy figures earlier on this page should be read with these factors in mind.