How I Detect Tampered PDFs in 9 Seconds (A Forensics Story)

A PDF contract arrives. How do you know if it's been edited? Here's the xref table trick and 7 forensic markers I built into htpbe.tech to detect PDF tampering.

Tags: PDF, TypeScript, Next.js, SaaS, Security

A client sends you a signed PDF contract. Looks legitimate. The signature block is there, the date is right, the numbers look fine. But something feels off. How do you know if it's the original or if someone changed a figure on page 3 and re-exported it?

I spent 5 days and 7 algorithm versions figuring this out. The result is htpbe.tech — Has This PDF Been Edited? — a SaaS that analyzes a PDF in under 9 seconds and tells you exactly how many times it was modified, by what software, and which forensic markers triggered. Here's how it works.

How a PDF Is Actually Structured

Before talking about tamper detection, you need to understand what a PDF file actually contains at the binary level. It's not magic — it's a structured document format from 1993 that has some remarkably useful forensic properties.

A PDF has four sections:

  • Header — identifies the PDF version (%PDF-1.7, %PDF-2.0, etc.)
  • Body — the actual objects: pages, fonts, images, text streams
  • Cross-reference table (xref table) — an index that maps object numbers to their byte offsets in the file
  • Trailer — points to the xref table and contains document metadata

The xref table is the key. It's how a PDF reader finds objects quickly without scanning the entire file. And it's where tampering leaves an unmistakable trail.
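To make this concrete, here is the skeleton of a minimal hand-written PDF, trimmed to the parts that matter for forensics (object bodies elided, byte offsets illustrative):

```
%PDF-1.7
1 0 obj
<< /Type /Catalog /Pages 2 0 R >>
endobj
% ... more objects: pages, fonts, content streams ...
xref
0 3
0000000000 65535 f 
0000000009 00000 n 
0000000058 00000 n 
trailer
<< /Size 3 /Root 1 0 R >>
startxref
112
%%EOF
```

The startxref value at the bottom is the byte offset of the xref keyword, so a reader can jump straight to the index instead of scanning the whole file.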

The Core Insight: Count the xref Tables

When a PDF is created from scratch — by InDesign, Word, a PDF printer, anything — it contains exactly one xref table. That's the baseline.

Now someone opens the file in Adobe Acrobat and changes a number. Acrobat doesn't rewrite the entire file. That would be slow and could corrupt things. Instead, it appends the changes to the end of the file and adds a second xref table pointing to the new and modified objects. The original content is still there, underneath — just superseded.

This is called an incremental update. And it's the foundation of my detection approach:

One xref table = original document. Two or more xref tables = the document was modified after initial creation.

Mainstream editors all behave this way when saving in place: Adobe Acrobat, Preview on macOS, LibreOffice, Foxit, PDF-XChange. An in-place save appends rather than rewrites; a full rewrite (for example, an optimized "Save As") is the exception, not the default.

There is one legitimate exception, which took me several algorithm versions to handle correctly: LTV (Long-Term Validation) updates. When a digitally signed PDF needs to embed certificate revocation data for long-term validation, the signing software adds an incremental update with OCSP responses and CRL data. This is legitimate, expected, and should not be flagged as tampering. Missing this in v1 of my algorithm caused a flood of false positives on signed legal documents.

The 7 Forensic Markers

Counting xref tables is the primary signal. But a thorough analysis needs multiple corroborating markers. Here are the seven I check in every analysis:

1. xref Table Count

The count of xref or startxref keywords in the binary. More than one means incremental updates exist. I also check whether those updates are LTV-related before flagging them.

2. Incremental Update Signatures

Beyond just counting, I look at the structure of each incremental update: which object types were added or modified. A legitimate signing update touches only certain object types (signature dictionaries, DSS dictionaries). An update that modifies page content objects is a much stronger tampering signal.

3. Producer Metadata Field

Every PDF has a Producer field in its document info dictionary. This is set by whatever software created or last saved the file. A PDF generated by a hospital system might have Producer: Epic Systems. If the Producer field says Adobe Acrobat 23.0 but the document claims to be an original bank statement — that's a mismatch worth investigating.

I also look for multiple Producer strings across different revisions, which indicates the file changed hands between software tools.
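Comparing Producer strings across revisions requires normalization first, since some generators write garbage into the field. A sketch of what that involves (the helper name and exact rules are my illustration, not the production code):

```typescript
// Illustrative Producer-field normalization: strip control bytes and
// encoding debris, collapse whitespace, lowercase, and treat an empty
// result as "no usable Producer".
function normalizeProducer(raw: string | null): string | null {
  if (raw === null) return null;
  const cleaned = raw
    .replace(/[\u0000-\u001f\u007f]/g, "") // control chars from broken generators
    .replace(/\s+/g, " ") // collapse runs of whitespace
    .trim()
    .toLowerCase();
  return cleaned.length > 0 ? cleaned : null;
}
```

Comparing normalized values avoids flagging two revisions as "different tools" when the only difference is whitespace or encoding debris.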

4. Creation Date vs. Modification Date Mismatch

The CreationDate and ModDate fields in PDF metadata. If they differ significantly — especially if ModDate is after CreationDate by years — that's a signal. If CreationDate is missing entirely but ModDate exists, that's unusual. If both are present but identical even though other markers indicate an edit, that suggests someone tried to cover their tracks.
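PDF date fields use strings like D:20230115093000+02'00'. A minimal comparison sketch (my illustration; the timezone suffix is ignored here for brevity, which a production check should not do):

```typescript
// Parse the leading fields of a PDF date string (D:YYYYMMDDHHmmSS...).
// Missing fields default to the start of their range, per the PDF spec.
function parsePdfDate(value: string): Date | null {
  const m = /^D:(\d{4})(\d{2})?(\d{2})?(\d{2})?(\d{2})?(\d{2})?/.exec(value);
  if (!m) return null;
  const [, y, mo = "01", d = "01", h = "00", mi = "00", s = "00"] = m;
  return new Date(Date.UTC(+y, +mo - 1, +d, +h, +mi, +s));
}

// How many years ModDate trails CreationDate (null if either is unparseable)
function modYearsAfterCreation(creation: string, mod: string): number | null {
  const c = parsePdfDate(creation);
  const mdate = parsePdfDate(mod);
  if (!c || !mdate) return null;
  return (mdate.getTime() - c.getTime()) / (365.25 * 24 * 3600 * 1000);
}
```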

5. Orphaned Objects

When PDF editors modify or delete content, the original objects often remain in the file — they're just no longer referenced from the active xref table. They become "orphaned" objects: still in the binary, not reachable from the current document tree. I scan for these. Their presence indicates prior revisions, and their content sometimes reveals what was changed.
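A simplified illustration of the idea: find object numbers that are declared in the binary but never referenced anywhere. A real scan should walk the object graph from the active trailer's /Root rather than regex over the whole file, so treat this as a sketch only:

```typescript
// Object numbers declared as `N 0 obj` but never referenced as `N 0 R`
// are candidates for orphaned (superseded) content.
function findUnreferencedObjects(text: string): number[] {
  const declared = new Set<number>();
  const referenced = new Set<number>();

  for (const m of text.matchAll(/(\d+)\s+0\s+obj\b/g)) declared.add(+m[1]);
  for (const m of text.matchAll(/(\d+)\s+0\s+R\b/g)) referenced.add(+m[1]);

  return [...declared].filter((n) => !referenced.has(n)).sort((a, b) => a - b);
}
```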

6. Encryption and Permissions Changes

If a PDF's encryption dictionary changes between revisions, that's notable. It can indicate an attempt to remove password protection, change editing permissions, or re-encrypt content. I compare encryption settings across xref revisions where detectable.

7. Font Embedding Changes

Fonts embedded in a PDF are large objects. Changing text requires access to the right fonts. If a new font appears in a later xref revision that wasn't in the original, and it's not a system font added by a viewer for display purposes, that's a meaningful signal — particularly if the new font covers a character range used in key fields like amounts or dates.
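One way to sketch this check (helper names are mine; the production version also has to classify viewer-added system fonts before scoring):

```typescript
// Collect /BaseFont names within a byte range of the decoded file text
function baseFontsInRange(text: string, start: number, end: number): Set<string> {
  const fonts = new Set<string>();
  for (const m of text.slice(start, end).matchAll(/\/BaseFont\s*\/([^\s\/<>\[\]()]+)/g)) {
    fonts.add(m[1]);
  }
  return fonts;
}

// Fonts that appear at or after a revision's offset but not before it
function newFontsInRevision(text: string, revisionOffset: number): string[] {
  const before = baseFontsInRange(text, 0, revisionOffset);
  const after = baseFontsInRange(text, revisionOffset, text.length);
  return [...after].filter((f) => !before.has(f));
}
```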

The 7 Algorithm Versions (What Actually Failed)

I want to be honest about the iteration process, because it wasn't linear.

v1 was exactly what it sounds like: count occurrences of xref and startxref in the binary, report the number. This produced catastrophic false positives. Signed PDF documents routinely have 2–3 incremental updates from the signing and LTV process. I was flagging every notarized document as tampered.

v2 added a heuristic: if the file contains a digital signature (/Sig object), ignore the first two incremental updates. Better, but wrong. The number of LTV updates is variable — some signing workflows produce one, some produce three.

v3 was the LTV breakthrough: instead of counting incremental updates after signatures, I actually parse the structure of each update to determine if it contains only DSS (Document Security Store) objects and OCSP/CRL data. If so, it's marked as LTV and excluded from the tampering score. This eliminated 90% of false positives on signed documents.

v4 added the Producer field analysis. This is where I discovered that some PDF generators write garbage into the Producer field — truncated strings, encoding artifacts, empty strings — which made simple string matching unreliable. I had to normalize and sanitize the field before comparison.

v5 introduced confidence scoring instead of binary yes/no. A single incremental update by itself is weak evidence. An incremental update plus a Producer field change plus orphaned content objects is strong evidence. Each marker contributes a weighted score to an overall confidence percentage.

v6 broke things. I tried to add binary-level entropy analysis to detect unusual compression patterns. It was interesting, but the false positive rate on legitimately compressed PDFs made it useless in practice. I removed it.

v7 is what ships. Confidence scoring, LTV exclusion, seven markers, no entropy analysis.

Technical Implementation

Parsing with pdf-lib

I use pdf-lib for structural parsing. It handles the xref table and object access well. For raw binary analysis — finding byte offsets, scanning for marker strings — I work directly on the ArrayBuffer.

Here is the core xref analysis:

interface XrefEntry {
  offset: number;
  isLTV: boolean;
  producerField: string | null;
  hasContentChanges: boolean;
}

async function analyzeXrefTables(fileBuffer: ArrayBuffer): Promise<XrefEntry[]> {
  const bytes = new Uint8Array(fileBuffer);
  const text = new TextDecoder("latin1").decode(bytes);
  const entries: XrefEntry[] = [];

  // Find all startxref markers — each one marks an xref table or xref stream
  const startxrefPattern = /startxref\s+(\d+)/g;
  let match: RegExpExecArray | null;

  while ((match = startxrefPattern.exec(text)) !== null) {
    const offset = parseInt(match[1], 10);

    // Skip the terminal startxref before %%EOF — offset 0 is the document terminator
    if (offset === 0) continue;

    const entry = await parseXrefRevision(bytes, text, offset);
    entries.push(entry);
  }

  return entries;
}

async function parseXrefRevision(
  bytes: Uint8Array,
  text: string,
  offset: number
): Promise<XrefEntry> {
  const trailerStart = text.indexOf("trailer", offset);
  const trailerEnd = text.indexOf(">>", trailerStart);
  const trailerContent = trailerStart !== -1 ? text.slice(trailerStart, trailerEnd + 2) : "";

  // DSS (Document Security Store) presence indicates an LTV update
  const hasDSSObject = trailerContent.includes("/DSS");
  // checkForSignatureOnlyUpdate (not shown) verifies the revision touches
  // only signature-related objects (/Sig dictionaries, DSS entries)
  const hasOnlySigObjects = checkForSignatureOnlyUpdate(text, offset);

  const producerMatch = /\/Producer\s*\(([^)]+)\)/.exec(text.slice(offset, offset + 4096));
  const producerField = producerMatch ? producerMatch[1].trim() : null;

  const hasContentChanges = checkForContentModifications(text, offset);

  return {
    offset,
    isLTV: hasDSSObject || hasOnlySigObjects,
    producerField,
    hasContentChanges,
  };
}

function checkForContentModifications(text: string, offset: number): boolean {
  // /Contents, /Page, and BT/ET (Begin Text/End Text) operators signal content edits
  const revisionSlice = text.slice(offset, offset + 16384);
  const contentIndicators = ["/Contents", "/Page\n", "/Pages\n", "BT\n", "ET\n"];
  return contentIndicators.some((indicator) => revisionSlice.includes(indicator));
}

function computeTamperingScore(entries: XrefEntry[]): number {
  const nonLTVEntries = entries.filter((e) => !e.isLTV);

  // Base document always has one xref — anything beyond is an edit
  const editCount = Math.max(0, nonLTVEntries.length - 1);
  if (editCount === 0) return 0;

  let score = 0;
  score += Math.min(editCount * 25, 60);

  // Content modifications are a stronger signal than metadata-only changes
  const contentEdits = nonLTVEntries.filter((e) => e.hasContentChanges);
  score += contentEdits.length * 15;

  // Multiple distinct Producer strings across revisions indicate tool switching
  const producers = new Set(nonLTVEntries.map((e) => e.producerField).filter(Boolean));
  if (producers.size > 1) score += 20;

  return Math.min(score, 100);
}

Bypassing Vercel's 4.5MB Serverless Limit

The service needs to analyze PDFs up to 10MB, but Vercel's serverless functions have a 4.5MB request body limit. Sending a 10MB PDF directly to an API route fails with a 413 — or worse, silently truncates the body.

The solution is presigned S3 URLs. When a user selects a file, the browser requests a presigned upload URL, uploads directly to S3 from the client, then triggers analysis by passing the S3 object key. The analysis function downloads the file server-side where there is no body size restriction.

// app/api/upload-url/route.ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: process.env.AWS_REGION! });

export async function POST(request: Request) {
  const { filename, contentType, fileSizeBytes } = await request.json();

  if (fileSizeBytes > 10 * 1024 * 1024) {
    return Response.json({ error: "File exceeds 10MB limit" }, { status: 413 });
  }

  const key = `uploads/${crypto.randomUUID()}/${filename}`;

  const command = new PutObjectCommand({
    Bucket: process.env.AWS_BUCKET_NAME!,
    Key: key,
    ContentType: contentType,
    // Expires only sets an HTTP caching header; actual auto-deletion after
    // 1 hour is enforced by an S3 lifecycle rule on the uploads/ prefix, so
    // user documents are never stored permanently
    Expires: new Date(Date.now() + 60 * 60 * 1000),
  });

  const presignedUrl = await getSignedUrl(s3, command, { expiresIn: 300 });

  return Response.json({ presignedUrl, key });
}
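The client side of this flow looks roughly like the following (a sketch: the endpoint paths match the route above, error handling is omitted, and the structural file type is mine):

```typescript
// Hypothetical client-side counterpart: request a presigned URL, PUT the
// file straight to S3 (so the 4.5MB serverless body limit never applies),
// then start analysis with the returned S3 key.
type UploadFile = { name: string; type: string; size: number };

async function uploadAndAnalyze(file: UploadFile): Promise<string> {
  const res = await fetch("/api/upload-url", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type,
      fileSizeBytes: file.size,
    }),
  });
  const { presignedUrl, key } = await res.json();

  // Direct browser-to-S3 upload; the bytes never touch a Vercel function
  await fetch(presignedUrl, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file as any, // a real browser File is a valid fetch body
  });

  // Kick off the analysis job; the server downloads the file from S3
  const analyze = await fetch("/api/analyze", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ s3Key: key }),
  });
  const { jobId } = await analyze.json();
  return jobId;
}
```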

Async Processing with Polling

Analysis takes up to 9 seconds. That's too long for a synchronous Vercel function. I use an async pattern: the job is queued, a job ID is returned immediately, the client polls for results.

// app/api/analyze/route.ts — accepts the S3 key, returns a job ID immediately
export async function POST(request: Request) {
  const { s3Key } = await request.json();
  const jobId = crypto.randomUUID();

  // redis (an ioredis-style client) and analysisQueue (a BullMQ-style queue)
  // are initialized elsewhere in the app
  await redis.setex(`job:${jobId}`, 600, JSON.stringify({ status: "queued" }));
  await analysisQueue.add("analyze-pdf", { s3Key, jobId });

  return Response.json({ jobId });
}

// app/api/analyze/[jobId]/route.ts — client polls this until status === "completed"
export async function GET(request: Request, { params }: { params: { jobId: string } }) {
  const raw = await redis.get(`job:${params.jobId}`);
  if (!raw) {
    return Response.json({ error: "Job not found" }, { status: 404 });
  }
  return Response.json(JSON.parse(raw));
}

The client polls every 1.5 seconds. Most files finish in 3–5 seconds. The 9-second ceiling is for maximum-size PDFs with complex xref structures.
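The polling loop on the client can be as simple as this sketch (the 1.5s interval matches the numbers above; the timeout default is my choice, with headroom over the 9-second ceiling):

```typescript
// Poll the job route until the job completes or fails, or the timeout hits
async function pollForResult(
  jobId: string,
  intervalMs = 1500,
  timeoutMs = 15000
): Promise<{ status: string }> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetch(`/api/analyze/${jobId}`);
    const job = (await res.json()) as { status: string };
    if (job.status === "completed" || job.status === "failed") return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Analysis did not finish within ${timeoutMs}ms`);
}
```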

Gotchas Worth Knowing

LTV updates will fool a naive implementation. If you just count xref tables, every notarized PDF, every DocuSign document, every government e-signature will appear tampered. You must parse the update structure and identify LTV-specific object types before flagging anything.

base64-encoded PDFs need special handling. Some APIs transmit PDFs as base64 strings. If you run binary analysis on the raw base64 string without decoding first, your byte-offset calculations will be completely wrong. Decode to Uint8Array before any binary analysis.

File size from binary vs. encoded differs by ~33%. The File API's .size property returns the correct binary byte count. The .length of a base64 string is approximately 33% larger. Keep this straight or your 10MB validation will reject valid files.
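Both gotchas reduce to a few lines: decode before any offset math, and derive the true byte count from the encoded length when only the base64 string is in hand (helper names are illustrative):

```typescript
// Decode a base64-encoded PDF to raw bytes before any binary analysis.
// atob is global in browsers and modern Node; Buffer.from(b64, "base64")
// is the Node-specific alternative.
function base64ToBytes(b64: string): Uint8Array {
  const binary = atob(b64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes;
}

// base64 packs 3 bytes into 4 chars (~33% larger), minus any '=' padding
function decodedSizeOf(b64: string): number {
  const padding = (b64.match(/=+$/) ?? [""])[0].length;
  return (b64.length / 4) * 3 - padding;
}
```

Validating the decoded size rather than the string length keeps a 10MB limit from rejecting files that are actually under 10MB.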

The terminal startxref before %%EOF has offset 0. This is the document terminator marker, not a real xref table. Naive regex counting includes it and inflates the xref count by one. Filter out any entry where offset === 0.

Some enterprise PDF generators produce split xref tables in the original document. Certain systems write multiple xref sections at creation time (not incremental updates). I handle this by checking whether all xref sections share the same Producer metadata — if they do and there are no trailer Prev pointers, it's a single logical revision.

Results

  • Analysis time: under 9 seconds for files up to 10MB
  • False positive rate on legitimately signed documents: near zero after v3 LTV handling
  • Detection coverage: 100% for the 7 deterministic markers
  • File size limit: 10MB, handled via presigned S3 uploads to bypass serverless body limits

The confidence scoring approach handles ambiguous cases better than a binary flag would. A document with one extra xref table from a PDF optimizer (score: 25, low confidence) is treated very differently from a document with three non-LTV incremental updates, two different Producer strings, and orphaned content objects (score: 85+, high confidence). Users get actionable context, not just an alarm.

Try it at htpbe.tech. Upload any PDF — a bank statement, a contract, a signed invoice — and see the full forensic breakdown.


If you're building fintech or legaltech tooling where document authenticity matters — loan origination, contract management, insurance claims, KYC workflows — this kind of verification layer is worth integrating directly into your pipeline. The seven markers described here are implementable in any language with binary file access.

If you need a senior developer who can build document verification systems, automation pipelines, or SaaS infrastructure from scratch — get in touch. I'm available for freelance projects and long-term engagements.

Iurii Rogulia


Senior Full-Stack Developer | Python, React, TypeScript, SaaS, APIs

Senior full-stack developer based in Finland. I write about Python, React, TypeScript, and real-world software engineering.