India's DPDP Compliance Obligations


What the law actually requires you to do — and how to think about each obligation before the rules are finalized

In Part 1, we covered the foundation: what the DPDP Act is, who it applies to, what rights it gives individuals, and why it demands CISO ownership. If you haven’t read Part 1, go back and start there — this article builds directly on it.

Here in Part 2, we’re going operational. We’re walking through each major obligation the DPDP Act places on Data Fiduciaries and breaking down what they actually mean in practice. By the end of this article, you should be able to look at your current data program and identify, at a high level, where your gaps are.

One important note before we dive in: as of early 2026, India’s Ministry of Electronics and Information Technology is still finalizing the DPDP Rules — the secondary legislation that will spell out specific timelines, formats, technical standards, and procedures. The Act itself is law, but some procedural specifics are still being defined. Where that uncertainty exists, we’ll flag it clearly and advise on the conservative approach — which is almost always the right posture when you’re building toward compliance with a law that’s actively being enforced.

Obligation 1: Notice — Tell People What You’re Collecting and Why

The foundation of DPDP compliance is the obligation to provide clear, accessible notice to Data Principals before or at the point of collecting their personal data. This isn’t a new concept if you’ve dealt with GDPR or CCPA, but DPDP has specific characteristics that deserve attention.

The notice must cover several elements: what personal data is being collected, the purpose for which it’s being collected, how the individual can exercise their rights under the Act, how and where they can withdraw consent, and contact information for your designated Grievance Officer. That last element — the Grievance Officer contact — must be live and functional, not a placeholder.

The language requirement is a real operational challenge that many organizations underestimate. India has 22 officially recognized languages and hundreds of spoken ones, and the Act entitles Data Principals to access the notice in English or in any of the 22 languages listed in the Eighth Schedule of India's Constitution. While the rules will clarify the operational specifics, a single English-language privacy policy will almost certainly be insufficient for organizations serving diverse Indian users. At minimum, plan for Hindi and the primary regional language of your principal user base.

The plain language requirement is equally important and equally underestimated. The notice must be written in a way that an ordinary person can actually understand. Test your privacy notices against one standard: could a first-time user read this and genuinely understand what they’re agreeing to and what rights they have? If the answer is no, the notice doesn’t meet DPDP’s intent.

What this means technically is that your consent and notification infrastructure needs to be capable of surfacing the right language version of a notice at the right moment — at signup, at the point of data collection, and whenever the purpose for processing changes materially. A static privacy policy page updated once a year will not be sufficient.
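To make that concrete, here is a minimal sketch of how a consent layer might surface the right language version of a notice for a given purpose. All the names here (the `NOTICES` store, `select_notice`, the purpose keys) are illustrative assumptions, not part of any DPDP-mandated schema; a real system would pull versioned notice text from a content store rather than a dictionary.

```python
# Sketch: serve the right notice version at the right moment.
# NOTICES, purpose keys, and language codes are illustrative only.

NOTICES = {
    ("marketing", "en"): "We collect your email to send you offers...",
    ("marketing", "hi"): "हम आपको ऑफ़र भेजने के लिए आपका ईमेल एकत्र करते हैं...",
    ("order_confirmation", "en"): "We collect your email to confirm your orders...",
}

def select_notice(purpose: str, preferred_languages: list[str],
                  fallback: str = "en") -> str:
    """Return the notice for this purpose in the first language the
    user prefers, falling back to a default if no translation exists."""
    for lang in preferred_languages + [fallback]:
        text = NOTICES.get((purpose, lang))
        if text is not None:
            return text
    raise KeyError(f"No notice published for purpose {purpose!r}")
```

The useful property is the fallback chain: a Hindi-preferring user gets the Hindi marketing notice, but a purpose that has not yet been translated still surfaces *something* while flagging the translation gap to your content team.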

💡 Pro Tip

Run your current privacy notices through a readability test — tools like the Flesch-Kincaid readability score are easy to use and give you a baseline. Aim for a reading level that an average adult can navigate without legal training. Then ask a colleague outside your security or legal team to read your notice and explain back to you what data you collect and why. If they can’t do it, your notice needs work — and DPDP will require you to fix it anyway. Better to discover that now than during a Data Protection Board inquiry.
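If you want a quick baseline without adopting a new tool, the Flesch Reading Ease formula fits in a few lines of standard-library Python. This is a rough sketch: the syllable counter is a naive vowel-group heuristic, so treat the output as a relative signal (is version B easier than version A?) rather than an authoritative score. Dedicated readability tools use dictionary-based syllable counts and will be more accurate.

```python
import re

def _syllables(word: str) -> int:
    # Very rough heuristic: count vowel groups. Real readability
    # tools use pronunciation dictionaries instead.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease: higher means easier to read.
    Roughly 60-70 is plain English; legal boilerplate often lands
    well below 30."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(_syllables(w) for w in words)
    return (206.835
            - 1.015 * (n_words / sentences)
            - 84.6 * (n_syllables / n_words))
```

Run it over the current notice and over each redraft; if the score is not moving up, the plain-language rewrite is not working.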


Obligation 2: Consent — The Primary Legal Basis for Processing

Under DPDP, consent is the primary mechanism through which you earn the right to process personal data. Unlike GDPR, which gives organizations a broader menu of legal bases including “legitimate interests,” DPDP places consent front and center. This has meaningful operational implications for how you collect and document authorization to process data.

Valid consent under DPDP has four characteristics. It must be free — not coerced, not bundled with terms that effectively make refusing consent impossible. It must be specific — granted for a defined, stated purpose, not blanket permission to use data however you like. It must be informed — the individual actually understood what they were consenting to, which links directly back to the notice requirement. And it must be unambiguous — a positive, affirmative action such as a checkbox actively ticked or a button deliberately clicked. Pre-checked boxes, implied consent, and opt-out mechanisms don’t qualify.

Withdrawing consent is a right the law explicitly protects. Data Principals can withdraw consent at any time, and when they do, you must cease processing their data for the purpose for which consent was given. Critically, the Act specifies that withdrawing consent must be “as easy as giving consent.” If consent was a single click at signup, requiring a support ticket and a 72-hour wait to withdraw it will not hold up to scrutiny.

The law also permits processing without explicit consent for "certain legitimate uses" (what earlier drafts of the bill called "deemed consent"). These are scenarios where the processing serves a clearly legitimate purpose: personal data the individual voluntarily provided for a specified purpose, compliance with a law or a court order, medical emergencies, certain employment-related processing, and specified functions of the state. Think of legitimate uses as DPDP's narrower equivalent of GDPR's contract and legal obligation bases. If you're relying on a legitimate use for any processing activity, document your rationale carefully and keep that documentation current.

Consent records are legal evidence. You must be able to demonstrate, at any point, that valid consent was obtained for each processing activity. This means timestamped logs, records of what notice was shown, what was consented to, and through what mechanism. If the Data Protection Board investigates a complaint, “we believe users clicked okay” is not an acceptable answer.
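A minimal sketch of what a defensible consent record might look like follows. Every field name here is an illustrative assumption (align the real schema with legal counsel), but the structural ideas are the point: capture who, what purpose, which notice version, which mechanism, and when, and make each entry tamper-evident so you can demonstrate the log was not edited after the fact.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    # Illustrative fields only; confirm the real schema with counsel.
    principal_id: str
    purpose: str
    notice_version: str   # which notice text was actually shown
    mechanism: str        # e.g. "signup_checkbox_v3"
    granted: bool
    timestamp: str

def record_consent(principal_id: str, purpose: str, notice_version: str,
                   mechanism: str, granted: bool = True) -> dict:
    """Build a timestamped consent log entry with a tamper-evidence digest."""
    rec = ConsentRecord(
        principal_id=principal_id,
        purpose=purpose,
        notice_version=notice_version,
        mechanism=mechanism,
        granted=granted,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    entry = asdict(rec)
    # Hash over a canonical serialization so any later edit is detectable.
    entry["digest"] = hashlib.sha256(
        json.dumps(asdict(rec), sort_keys=True).encode()
    ).hexdigest()
    return entry
```

Writing these entries to append-only storage (or anchoring the digests externally) is what turns "we believe users clicked okay" into evidence.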

💡 Pro Tip

Conduct a consent audit across your top five highest-volume data collection points. For each one, ask: Is consent affirmative and specific? Is it recorded with a timestamp? Is there a working withdrawal mechanism? Is the withdrawal mechanism as accessible as the consent mechanism was? You will almost certainly find gaps in at least two of those four areas. Document what you find, prioritize by volume of affected data subjects, and build the remediation backlog from there. This audit takes a few days and gives you a defensible starting point if questions arise.


Obligation 3: Purpose Limitation — Use Data Only for What You Said

Purpose limitation appears in virtually every modern privacy law, and DPDP is no exception. You can only use personal data for the purpose for which it was collected and to the extent necessary to achieve that purpose. What seems obvious in principle creates real operational complexity in data-rich environments.

Purpose creep is a violation. If you collected a user’s email address to send order confirmations and you now want to use it in a behavioral analytics model to predict churn, that’s a new purpose. You need fresh consent or a valid deemed consent basis before expanding the use. Data minimization follows the same logic — you should collect only what you actually need for the stated purpose. The temptation to collect everything and figure out uses later is common in data-hungry organizations, but it directly contradicts DPDP.

Storage limitation is the third element of this pillar: once the purpose for which data was collected has been fulfilled and there is no legal basis for continued retention, you must erase it. Indefinite retention of personal data “in case we need it someday” is not a defensible position. You need documented retention schedules, automated deletion workflows, and the ability to verify that deletion has actually occurred.


Obligation 4: Data Security — What “Reasonable Safeguards” Means in Practice

Here is where the CISO’s role becomes most direct. DPDP requires every Data Fiduciary to implement “reasonable security safeguards to prevent personal data breach.” The law doesn’t prescribe a specific technical standard — the detailed requirements will come in the rules — but “reasonable” has legal meaning, and it will be interpreted in light of your organization’s size, the sensitivity of the data you process, and the current state of security practice.

In practice, reasonable safeguards for personal data mean several things with a high degree of confidence:

  • Encryption at rest and in transit for personal data is a baseline expectation. Storing personal data in plaintext is indefensible under any serious privacy regime.
  • Access controls enforcing least privilege. Personal data should be accessible only to personnel and systems with a legitimate need.
  • Authentication controls commensurate with the sensitivity of the data. Multi-factor authentication for systems processing significant volumes of personal data is a standard, not an enhancement.
  • Logging and monitoring sufficient to detect unauthorized access, anomalous behavior, and potential exfiltration. You cannot report a breach you didn't detect, and failing to detect because you had no monitoring is itself a safeguard failure.
  • Vendor security requirements. Your Data Processors must be contractually obligated to maintain equivalent safeguards.
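As one concrete illustration of the authentication point: the time-based one-time passwords behind most MFA apps follow RFC 6238 and fit in a few lines of standard-library Python. This is a sketch for understanding the mechanism only, not a production implementation; in practice, use a vetted MFA library, and compare submitted codes with a constant-time comparison.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6,
         step: int = 30) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant).
    Sketch only: production MFA should use a vetted library."""
    # HOTP counter is the number of time steps since the Unix epoch.
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The value of seeing this internally is appreciating how little stands between an attacker and an account protected only by a password: the second factor is simple math over a shared secret, which is why protecting that secret store is itself part of "reasonable safeguards."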

For Significant Data Fiduciaries specifically, the bar is higher. The rules will almost certainly require formal Data Protection Impact Assessments for high-risk processing activities, periodic audits by independent data auditors, and transparency reporting for certain algorithmic decision-making processes.

💡 Pro Tip

Don’t wait for the specific technical standards in the rules before hardening your security posture around personal data systems. The standards that will be set will almost certainly align with globally recognized baselines — ISO 27001, NIST CSF, CIS Controls. If you’re meeting those frameworks with respect to your personal data processing systems today, you will be well-positioned when the specific DPDP rules are published. If you’re not, the gap analysis you need to do for DPDP compliance and the security improvement work you already know you need to do are essentially the same project. Frame it that way to your leadership team and you may find that DPDP becomes the business case that finally funds the security work you’ve been prioritizing for years.


Obligation 5: Breach Notification — Speed and Documentation Matter

DPDP requires Data Fiduciaries to notify both the Data Protection Board and affected Data Principals when a personal data breach occurs. This is one of the highest-stakes obligations in the law and one that requires preparation long before any breach happens.

Any breach of security that leads to unauthorized access, use, disclosure, alteration, or destruction of personal data triggers notification obligations. This is a broad definition. It covers external attacks, insider incidents, accidental disclosures, and third-party processor breaches where your data is involved.

The specific notification timeline will be defined in the rules. GDPR’s 72-hour timeline to the supervisory authority is the global benchmark, and many practitioners are designing DPDP programs to that standard rather than waiting for India’s rules to confirm whether more time is permitted. This is the right approach — build for speed, and if the rules allow a longer window, that’s a bonus, not a dependency.

The notification to the DPB must describe the nature of the breach, the categories and approximate number of individuals affected, what personal data was involved, the likely consequences of the breach, and the measures taken or proposed to address it. The notification to affected individuals must include enough information for them to take protective action. These are two separate deliverables with different audiences, different tones, and different content requirements — both need pre-approved templates developed before you’re ever under the time pressure of an actual incident.

Equally important: you must document the assessment process for breaches you determine do not require notification. The fact that you assessed the incident, applied defined criteria, reached a reasoned conclusion, and recorded the entire process is evidence of a serious compliance posture. Organizations that can’t produce this documentation for assessed-but-not-reported incidents look far worse in a regulatory inquiry than organizations that can.
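A lightweight way to make that documentation habit stick is to require a structured assessment record for every incident, notified or not. The fields below are illustrative assumptions (mirror whatever the final DPDP Rules and your counsel require); the point is that a non-notification decision without criteria and a rationale should not be closeable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BreachAssessment:
    """Record of a breach assessment, whether or not notification follows.
    Field names are illustrative; align with the final DPDP Rules."""
    incident_id: str
    summary: str
    data_categories: list[str]
    principals_affected_estimate: int
    criteria_applied: list[str]   # e.g. items from an internal severity rubric
    notify_dpb: bool
    notify_principals: bool
    rationale: str
    assessed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def assessment_on_file(a: BreachAssessment) -> bool:
    """Minimal completeness gate before an incident can be closed:
    no summary, no applied criteria, or no rationale means no closure."""
    return bool(a.summary and a.criteria_applied and a.rationale)
```

In a real incident tracker, this gate would block the "close incident" workflow rather than just return a boolean.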

💡 Pro Tip

Run a breach notification tabletop exercise specifically scoped to DPDP. Pick a realistic scenario — a cloud misconfiguration that exposed Indian customer data for 72 hours, for example — and walk your incident response team through it: How long does it take to determine what data was affected? How long to assess severity? Who drafts the DPB notification? Who approves it? Who contacts affected individuals? Who tracks completion? The gaps that surface in a tabletop are gaps you can close before they matter. The gaps that surface in a real incident are the ones that end up in the penalty record.


Obligation 6: Children’s Data — The Highest-Risk Category

DPDP has particularly stringent requirements for processing the personal data of individuals under 18, and the penalties for violations in this area are among the highest in the Act. This is operationally one of the most demanding areas of the law.

Before processing any personal data of a child, you must obtain verifiable consent from the child’s parent or legal guardian. Not self-attested consent from the child. Verifiable parental consent. The word “verifiable” is the hard part — simple checkbox confirmations like “I confirm I am 18 or older” are not sufficient under any serious regulatory interpretation. Solutions being deployed globally include credit card verification, government ID checks processed through age verification services, and parental consent flows tied to verified parent accounts. None of these are perfect, all carry cost and complexity, and your legal team will need to guide the approach for your specific service type.

Behavioral tracking of children is prohibited. You cannot use children’s personal data for behavioral monitoring, targeted advertising, or any form of profiling that tracks behavior over time. Services directed at or likely used by children must be designed with those users’ protection in mind.

What makes children’s data particularly challenging for CISOs is the verification problem. How do you know a given user is a minor? Many platforms don’t collect age at signup. Some do but don’t enforce it. And even platforms not designed for children can be accessed by them. Your obligation isn’t just to protect users you know are minors — it’s to implement reasonable measures to prevent minors from being processed without parental consent in the first place.

📌 Children’s Data Assessment Checklist

If any of your products or services could conceivably be used by someone under 18, work through this list before you build your compliance program:

  • Age collection — Do you collect age or date of birth at signup or registration?
  • Age enforcement — If you collect age, do your systems actually enforce a minimum age restriction?
  • Parental consent flow — If minors can use your service, is there a functioning parental consent mechanism?
  • Verification standard — Is your age verification mechanism defensible as “verifiable,” or is it a self-attestation checkbox?
  • Behavioral tracking controls — Is behavioral profiling or targeted advertising disabled for accounts identified as minors?
  • Design review — Has your product been reviewed for design elements that appeal specifically to children?

If you can’t check most of those boxes today, children’s data is one of your highest-priority compliance gaps. Start this workstream early — it typically requires product involvement and has significant lead time.


Obligation 7: Grievance Redressal — You Need a Real Mechanism, Not a Mailbox

Every Data Fiduciary must establish a clear, accessible grievance mechanism and appoint a named Grievance Officer whose contact details are published in the privacy notice. This requirement is simple to state and surprisingly easy to do wrong.

The Grievance Officer needs to be a real role with real authority and real capacity. It cannot be a junior team member with no ability to act, a shared mailbox that nobody monitors, or a legal contact who responds to everything with “we’ll get back to you.” The Data Protection Board will expect that individuals who contact your Grievance Officer receive substantive, timely responses. If they don’t, they have the right to escalate directly to the DPB — and a pattern of unresolved grievances is exactly the kind of evidence a regulatory inquiry finds compelling.

Build a grievance intake and tracking system — even a well-configured ticketing queue works for smaller organizations. Document your grievance response SLAs and hold yourself to them. Train whoever manages the queue on what kinds of requests they’ll receive, how to categorize them, and when to escalate. Your grievance mechanism is your first line of defense against formal enforcement proceedings. Treat it accordingly.


Obligation 8: Third-Party Processors and Cross-Border Transfers

Data Processor Obligations. You are responsible for the acts and omissions of your Data Processors with respect to Indian personal data. Your contracts with vendors who process Indian personal data on your behalf must include provisions that obligate them to DPDP-compliant data handling, security standards, breach notification to you within timeframes that allow you to meet your own DPB obligations, audit rights, and cooperation with Data Principal rights requests. Review your existing vendor contracts — the majority will need to be updated.

Cross-Border Data Transfers. DPDP restricts the transfer of personal data to countries not on a government-approved whitelist. That whitelist has not yet been published. Until it is, the right approach is to document all cross-border data flows involving Indian personal data, understand the business necessity for each flow, and be prepared to adjust when the approved country list is released. If you’re currently routing Indian personal data through infrastructure or processors in countries with uncertain regulatory environments, flag those flows for priority review.
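The documentation exercise above lends itself to a simple inventory structure. This is a sketch under one stated assumption: because the whitelist has not been published, the approved set starts empty, which correctly flags every cross-border flow for review today and shrinks the review list automatically once the list exists.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One cross-border flow of Indian personal data. Fields are
    illustrative; extend with data categories, volumes, and processors."""
    system: str
    destination_country: str
    business_purpose: str

def flows_needing_review(flows: list[DataFlow],
                         approved: frozenset[str] = frozenset()) -> list[DataFlow]:
    """With no whitelist published yet, every flow needs review;
    once the approved-country list exists, pass it in and only
    non-whitelisted destinations remain flagged."""
    return [f for f in flows if f.destination_country not in approved]
```

The inventory itself, with business purpose recorded per flow, is the deliverable; the filter just keeps the review queue current as the regulatory picture changes.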

Data Localization for SDFs. If your organization is designated a Significant Data Fiduciary, there may be requirements to store certain categories of data within India. This is one of the most consequential provisions for global cloud architectures, and beginning to assess the feasibility and cost of Indian data residency options now — before localization requirements are confirmed — is a sound use of your program’s early planning time.

💡 Pro Tip

When conducting your vendor contract review for DPDP, don’t try to update every contract simultaneously. Triage by risk: start with processors that handle the most sensitive personal data, or the largest volumes of Indian personal data, or that have the weakest existing security terms. Get those contracts remediated first. Then work down the list. This prioritization approach is defensible to a regulator — “we identified our highest-risk vendor relationships and addressed those first” is a coherent compliance narrative. Trying to do everything at once and getting nowhere is not.


Are You a Significant Data Fiduciary?

The government hasn’t published its initial SDF designations yet, but the criteria are defined in the Act. You are more likely to be designated an SDF if you process large volumes of personal data, you process sensitive categories at scale, you have operations that could impact national security or public order, you could significantly affect children’s rights, or you could have meaningful impact on democracy or elections.

Major platforms, financial institutions, healthcare providers, telecoms, large technology companies with Indian operations, and global data brokers are the most obvious candidates. But SDF designation is at the government’s discretion, and the specific thresholds will be defined in the rules. If there’s meaningful probability that you’ll be designated, the additional obligations — mandatory DPO, mandatory DPIAs, independent data audits, algorithmic transparency reporting — should be in your compliance planning now, not later.

📌 DPDP Core Obligations Summary Checklist

Use this as a high-level tracker as you begin your program:

  • Privacy notices reviewed for plain language, correct content, and multi-language accessibility
  • Consent mechanisms reviewed for affirmative action, specificity, and timestamped recordkeeping
  • Consent withdrawal mechanism confirmed to be as easy as giving consent
  • Purpose and retention policies documented and aligned with actual data use
  • Security safeguards assessed against a reasonable standard for each personal data system
  • Breach notification playbook drafted with DPB notification templates and Data Principal notification templates
  • Children’s data assessment completed; parental consent mechanism in place if applicable
  • Grievance Officer appointed and contact published in privacy notice
  • Vendor contracts reviewed; DPDP provisions identified as needed or in progress
  • Cross-border transfer flows documented and flagged for whitelist review when published
  • SDF designation probability assessed and communicated to leadership

Coming Up in Part 3

You now have a detailed picture of what DPDP requires across each obligation area. In Part 3, we translate all of this into a practical CISO’s DPDP Readiness Roadmap — a phased, prioritized action plan organized around real program-building stages with specific workstreams, team responsibilities, and honest guidance on where to start and what to do first.

If you’ve been reading along and thinking “okay, but where do I actually begin?” — Part 3 is for you.

💭 Final Thought

The obligations in DPDP aren’t exotic or unreasonable. Consent, notice, data security, breach notification, individual rights — these are the same building blocks that every mature privacy program is built on. What makes DPDP demanding isn’t the novelty of the requirements. It’s the operational rigor they require when you get into the details: the multi-language notices, the verifiable parental consent for children, the specific-to-purpose consent records, the breach notification infrastructure that has to work under pressure. The CISOs who will succeed here are the ones who don’t treat these as checkbox items but as capabilities to build and maintain. Build the capability. Test it. Know it works before you need it.

Questions about DPDP compliance? Drop them in the comments or connect with us at www.infosecmadeeasy.com.

Disclaimer: This article is for educational purposes only and does not constitute legal advice. Consult qualified legal counsel for guidance specific to your organization and jurisdiction. DPDP Rules are still being finalized; this content reflects the law and available guidance as of early 2026.
