The first three parts of this series covered the DSR lifecycle, data discovery, and deletion architecture. This final part addresses what comes after execution: proving that you did what you said you did, reporting to regulators, and the architectural changes required when request volume scales.

A DSR system that executes correctly but cannot demonstrate its correctness is a liability. The audit trail is not a secondary feature. It is the product.

The audit trail

Every DSR must produce a complete, immutable record of what happened. Not a summary. Not a status flag. A detailed, timestamped log of every decision, every state transition, every system interaction, and every human intervention from the moment the request was received to the moment the response was delivered.

The audit trail must be append-only. No record may be modified or deleted after it is written. If a correction is needed — a task status was recorded incorrectly — a new record is appended that supersedes the prior one. The original record remains. This immutability is not a technical preference. It is a regulatory requirement. When a Data Protection Authority requests evidence of how a DSR was handled, the evidence must be tamper-proof.

The minimum content of each audit entry is the request identifier, the timestamp, the actor (system or human), the action taken, the prior state, the new state, and any supporting context. For example: "Request DSR-4827, 2026-03-14T09:41:12Z, system/deletion-orchestrator, task dispatched to billing-system, status changed from scoped to executing, identifier: account-id-38291." Every entry must be this specific.
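An entry with these fields can be sketched as a frozen dataclass. The class and field names here are illustrative, not a mandated schema; the `supersedes` field implements the correction-by-appending pattern described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)  # frozen: an entry is immutable once created
class AuditEntry:
    request_id: str                   # e.g. "DSR-4827"
    actor: str                        # "system/deletion-orchestrator" or a human user id
    action: str                       # e.g. "task dispatched to billing-system"
    prior_state: str
    new_state: str
    context: dict = field(default_factory=dict)  # supporting detail, e.g. identifiers
    supersedes: Optional[str] = None  # id of the entry this one corrects, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# The example entry from the text, as a record:
entry = AuditEntry(
    request_id="DSR-4827",
    actor="system/deletion-orchestrator",
    action="task dispatched to billing-system",
    prior_state="scoped",
    new_state="executing",
    context={"identifier": "account-id-38291"},
)
```

Freezing the dataclass enforces immutability at the object level; the storage layer must enforce it at the persistence level.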

The audit log must be stored separately from the operational data. If the DSR system's database is compromised, restored from backup, or migrated, the audit log must survive independently. In practice, this means writing audit entries to a dedicated, append-only store — a separate database table with delete permissions revoked, a write-once object store, or a dedicated audit service.
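One way to enforce append-only semantics at the storage layer, sketched here with SQLite triggers (a production system would more likely use a dedicated audit service or a write-once object store, but the principle is the same: the store itself rejects mutation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit_log (
    id INTEGER PRIMARY KEY,
    request_id TEXT NOT NULL,
    ts TEXT NOT NULL,
    actor TEXT NOT NULL,
    action TEXT NOT NULL
);
-- Reject any UPDATE or DELETE: corrections are new rows, never edits.
CREATE TRIGGER audit_no_update BEFORE UPDATE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit_log is append-only'); END;
CREATE TRIGGER audit_no_delete BEFORE DELETE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit_log is append-only'); END;
""")

conn.execute(
    "INSERT INTO audit_log (request_id, ts, actor, action) VALUES (?, ?, ?, ?)",
    ("DSR-4827", "2026-03-14T09:41:12Z", "system/deletion-orchestrator",
     "task dispatched to billing-system"),
)

try:
    conn.execute("DELETE FROM audit_log")
except sqlite3.IntegrityError:
    pass  # the store itself rejects the mutation
```

Revoking delete permissions at the database-role level provides a second, independent layer of the same guarantee.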

The compliance record

The audit trail is the raw material. The compliance record is the assembled document that answers a specific question: for this request, what did the organization do?

A compliance record must contain the original request (redacted of any content that is itself personal data of third parties), the identity verification method and outcome, the classification and scope determination, the list of systems queried or actioned with the outcome per system, any retention exemptions applied with the legal basis cited, the response delivered to the requestor, and the timestamps at each stage showing SLA compliance.

This record must be generatable on demand. When a regulator asks for the compliance record for a specific DSR, the privacy team must be able to produce it within hours, not weeks. This means the compliance record is not a manually assembled report. It is an automatically generated artifact that pulls from the audit trail, the task records, the exemption decisions, and the response archive.
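A minimal sketch of such a generator, assuming audit entries are available as dicts with `request_id`, `ts`, `actor`, and `action` keys (field names are illustrative, not a mandated schema):

```python
def build_compliance_record(request_id, audit_entries, response_archive):
    """Assemble the compliance record for one DSR from existing stores,
    rather than compiling it by hand."""
    entries = sorted(
        (e for e in audit_entries if e["request_id"] == request_id),
        key=lambda e: e["ts"],
    )
    return {
        "request_id": request_id,
        "received_at": entries[0]["ts"],       # first audit entry marks receipt
        "completed_at": entries[-1]["ts"],     # last entry marks delivery
        "timeline": [
            {"ts": e["ts"], "actor": e["actor"], "action": e["action"]}
            for e in entries
        ],
        "systems_actioned": sorted(
            {e["system"] for e in entries if e.get("system")}
        ),
        "exemptions": [e for e in entries if e["action"] == "exemption_applied"],
        "response": response_archive.get(request_id),
    }
```

Because the record is computed from the audit trail rather than stored separately, it can never drift out of sync with the evidence.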

Compliance records must be retained for a defined period. GDPR does not specify a retention period for DSR records explicitly, but supervisory authorities have indicated that records should be kept for the duration of their general statute of limitations — typically five to six years. The DPDP Act does not yet specify a retention period for DSR compliance records, but maintaining them for at least seven years aligns with standard audit practices in Indian regulatory contexts.

Regulatory reporting

Beyond individual request compliance, regulators increasingly expect aggregate reporting. How many DSRs did the organization receive? What was the breakdown by type? What was the average response time? How many were completed within the SLA? How many required extensions? How many were denied, and on what grounds?

These metrics must be derivable from the DSR system's operational data, not manually compiled. A monthly or quarterly privacy report should be a query against the request database, not a spreadsheet maintained by an analyst.

The metrics that matter, and that regulators tend to scrutinize, are volume by type (access, deletion, correction, portability, opt-out, restriction), volume by jurisdiction (GDPR, CCPA, DPDP), median and 95th percentile response times, SLA compliance rate (percentage of requests completed within the statutory deadline), denial rate and top denial reasons, and the number of requests requiring manual intervention versus fully automated completion. The last metric — the automation rate — is an internal efficiency measure, not a regulatory requirement. But it is the metric that determines whether the DSR system scales.
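These metrics can be computed directly from the request records. A sketch, assuming each request is a dict with `type`, `status`, `response_days`, and `denial_reason` fields (illustrative names; `sla_days=30` mirrors the GDPR one-month baseline):

```python
from collections import Counter
from statistics import median, quantiles

def dsr_metrics(requests, sla_days=30):
    """Derive the aggregate reporting metrics from operational request data."""
    completed = [r for r in requests if r["status"] in ("completed", "denied")]
    times = [r["response_days"] for r in completed]
    denied = [r for r in completed if r["status"] == "denied"]
    return {
        "volume_by_type": Counter(r["type"] for r in requests),
        "median_days": median(times),
        "p95_days": quantiles(times, n=20)[-1],  # 95th percentile cut point
        "sla_compliance": sum(t <= sla_days for t in times) / len(times),
        "denial_rate": len(denied) / len(completed),
        "top_denial_reasons": Counter(
            r["denial_reason"] for r in denied
        ).most_common(3),
    }
```

The same function serves the monthly report and the regulator's ad hoc query; there is no spreadsheet to fall out of date.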

Scaling

A DSR system that handles fifty requests per month and a DSR system that handles five thousand per month are not the same system. The architecture must change at three thresholds.

Threshold one: manual to automated. Below approximately one hundred requests per month, a hybrid system with some manual steps — manual identity verification for unauthenticated requests, manual review for complex cases, manual response assembly — is operationally feasible. Above one hundred, the manual steps become the bottleneck. The first scaling investment is automating identity verification (self-service verification flows for authenticated users, automated document verification for unauthenticated users) and automating response assembly (templated response generation based on request type and execution results).

Threshold two: synchronous to asynchronous. Below approximately five hundred requests per month, synchronous orchestration — the orchestrator dispatches tasks, waits for responses, and assembles the result in a single process — may work. Above five hundred, the volume of concurrent requests exceeds the capacity of a synchronous orchestrator. The second scaling investment is moving to asynchronous orchestration with a message queue, per-task status tracking, and an aggregation layer that computes request status from task statuses. This is the architectural shift described in Part 1.
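The aggregation layer reduces to a precedence rule over task statuses. A sketch, with illustrative status names (failure outranks progress, progress outranks completion):

```python
def request_status(task_statuses):
    """Derive the request-level status from per-task statuses."""
    statuses = set(task_statuses)
    if "failed" in statuses:
        return "needs_attention"   # at least one downstream system failed; escalate
    if statuses <= {"completed"}:
        return "completed"         # every task finished
    if "executing" in statuses or "queued" in statuses:
        return "executing"         # work is still in flight
    return "pending"

print(request_status(["completed", "executing", "queued"]))  # executing
print(request_status(["completed", "completed"]))            # completed
print(request_status(["completed", "failed"]))               # needs_attention
```

Because request status is computed rather than stored, it can never disagree with the task records it summarizes.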

Threshold three: single-tenant to multi-tenant. Organizations that process DSRs on behalf of multiple entities — a parent company with subsidiaries, a data processor handling requests for multiple controllers — hit a third threshold where the DSR system must support multi-tenancy. Each tenant has its own service registry, its own retention rules, its own SLA configurations, and its own compliance reporting. The system must isolate tenant data while sharing infrastructure. This is a platform architecture problem, not a DSR-specific problem, but it is the threshold that transforms a DSR system from an internal tool into a product.

Operational concerns at scale

At volume, several operational concerns that were ignorable at low volume become critical.

Rate limiting. Downstream systems have capacity limits. The DSR orchestrator cannot dispatch a thousand deletion tasks to the billing system simultaneously. The orchestrator must rate-limit task dispatch per system, queueing tasks and processing them within the capacity constraints of each downstream system. This requires per-system rate configuration in the service registry.
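A token-bucket limiter is one common way to implement this, sketched below; the rate and capacity values would come from the per-system entries in the service registry (names here are illustrative):

```python
import time

class TokenBucket:
    """Per-system dispatch limiter: allows bursts up to capacity,
    sustained throughput of rate_per_sec."""
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_dispatch(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller leaves the task queued and retries later

# One bucket per downstream system, keyed by its registry entry.
buckets = {"billing-system": TokenBucket(rate_per_sec=5, capacity=10)}
```

A task that cannot be dispatched stays on the queue; the limiter throttles dispatch, it never drops work.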

Batching. Some systems are more efficient when processing deletions in batches rather than individually. The data warehouse may process a single deletion in three minutes but a batch of one hundred deletions in four minutes. The orchestrator should support batch dispatch where the downstream system supports it, accumulating tasks and flushing them on a schedule or when a batch size threshold is reached.
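The accumulate-and-flush pattern can be sketched as follows; `flush_fn` stands in for whatever performs the downstream batch call, and the threshold would be configured per system:

```python
class Batcher:
    """Accumulate tasks and flush when the batch is full. flush() is also
    called by an external scheduler, e.g. every 60 seconds, so small
    batches are not held indefinitely."""
    def __init__(self, flush_fn, max_batch=100):
        self.flush_fn = flush_fn
        self.max_batch = max_batch
        self.pending = []

    def add(self, task):
        self.pending.append(task)
        if len(self.pending) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.pending:
            self.flush_fn(self.pending)
            self.pending = []

sent = []
b = Batcher(flush_fn=sent.append, max_batch=3)
for t in ["t1", "t2", "t3", "t4"]:
    b.add(t)
b.flush()
print(sent)  # [['t1', 't2', 't3'], ['t4']]
```

The dual trigger matters: the size threshold bounds batch cost, the timer bounds latency, and neither alone is sufficient.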

Monitoring and alerting. At volume, the privacy team cannot monitor individual requests. They need system-level observability: queue depth (are tasks accumulating faster than they are processed?), error rates per downstream system (is a specific system failing consistently?), SLA burn rate (what percentage of open requests are approaching their deadline?), and verification failure rate (are deletions failing verification, suggesting incomplete execution?). These metrics should feed into the same monitoring infrastructure the engineering team uses for production systems — the DSR system is a production system.
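Of these, SLA burn rate is the least standard and worth spelling out. A sketch, assuming each open request carries a `deadline` date (field and function names are illustrative):

```python
from datetime import date, timedelta

def sla_burn_rate(open_requests, today, warn_days=7):
    """Fraction of open requests within warn_days of their statutory
    deadline -- the number an alert threshold would watch."""
    open_requests = list(open_requests)
    if not open_requests:
        return 0.0
    at_risk = sum(
        1 for r in open_requests
        if r["deadline"] - today <= timedelta(days=warn_days)
    )
    return at_risk / len(open_requests)
```

Alerting on this ratio, rather than on individual requests, is what lets a small privacy team supervise thousands of open requests.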

Capacity planning. DSR volume is not constant. It spikes after data breaches (users rush to delete their data), after regulatory announcements (a new enforcement action triggers awareness), and seasonally (CCPA requests spike in January after holiday data collection). The system must be provisioned for peak volume, not average volume, and the capacity plan must account for these patterns.

The complete picture

Across the four parts of this series, the DSR system has been defined from lifecycle to audit. Part 1 established the architecture, the data model, SLA management, and the orchestration patterns. Part 2 addressed data discovery — the service registry, identity resolution, fan-out orchestration, and response assembly. Part 3 covered deletion — the hardest operational problem — including soft and hard deletes, retention conflicts, cascading dependencies, backups, and event logs. This final part addressed audit, compliance reporting, and the architectural changes required at scale.

A DSR system built on these foundations is not a compliance tool. It is an operational privacy platform. It is the mechanism through which an organization demonstrates, concretely and repeatably, that it respects the rights of the individuals whose data it holds. The audit trail is the proof. The automation is the sustainability. The architecture is what makes it possible to do this at the scale of a modern enterprise.