Manoj Kern outlines how insurance brokers must shift to architecture-led data governance in the DPDP era, balancing dual fiduciary-processor roles, dynamic consent, third-party risk, and AI governance to build trust and accountability at scale.

As India’s insurance broking industry adapts to the DPDP era, the real challenge is no longer compliance in isolation, but building end-to-end data accountability across a complex, multi-stakeholder ecosystem. In this interaction with CIO&Leader, Manoj Kern, CIO at Prudent Insurance Brokers, unpacks how firms must navigate dual fiduciary-processor roles, embed consent into system design, strengthen third-party governance, and address emerging AI risks. His perspective highlights a clear shift from policy-led compliance to architecture-led governance—where trust, accountability, and technology must work in tandem to define the future of insurance advisory. Excerpts from the interaction.
CIO&Leader: As an intermediary between customers and insurers, how do you interpret your role under DPDP — Data Fiduciary or Processor — and where does accountability lie? Additionally, how should insurance brokers rethink data governance across distribution channels and third-party partners?
Manoj Kern: The honest answer is that an insurance broker occupies both roles, and failing to recognise that duality is where most governance gaps originate.
When a customer engages with us to explore, compare, or purchase a policy, we are clearly the Data Fiduciary, determining why data is collected, how it is used, and where it flows. Once the client appoints us and data moves to the insurer for underwriting, issuance, or claims, the dynamic shifts: we act as Data Processors on the client’s instructions, while the insurer assumes its own fiduciary responsibilities.
This makes accountability inherently distributed. It cannot sit with a single team; it must be contractually defined and operationally enforced at every point where data changes hands.
To strengthen data governance, I see three urgent priorities.
First, the distribution layer. The highest data risk lies not in technology but in human behaviour at the point of collection across digital channels, branches, and third-party advisors. If data is over-collected or misused, accountability traces back to the broker. Governance must therefore move beyond policies and be embedded directly into tools, workflows, and onboarding.
Second, the partner ecosystem. Insurers, TPAs, and technology vendors all handle customer data. Legacy contracts are no longer sufficient; agreements must clearly define data usage, retention, deletion, and breach obligations. Under DPDP, accountability extends across the entire data chain.
Third, consent architecture. A single interaction can trigger multiple downstream data flows, yet most firms still rely on one-time consent. This must evolve into a dynamic, auditable consent framework integrated with core systems, not as a compliance goal but as a regulatory necessity.
At a broader level, DPDP demands far greater precision in data accountability, something previously seen mainly in frameworks like ISO 27701. For firms without mature privacy practices, the complexity of the broking model makes compliance especially challenging without a deliberate, architectural approach.
Ultimately, accountability rests with leadership. Organisational complexity is not a defence and should not be treated as one.
CIO&Leader: With DPDP requiring granular, purpose-specific consent, how should brokers manage consent and data sharing when the same customer data flows to multiple insurers?
Manoj Kern: This is arguably the most operationally complex challenge that DPDP presents to insurance brokers specifically. A customer who approaches an insurance broker for health cover expects the broker to find the best option across the market. That inherently means their data (health history, age, occupation, and existing conditions) needs to travel to multiple insurers for quotation and underwriting. Under DPDP, each of those flows is a distinct processing activity, potentially requiring distinct consent. Managing that without creating an unacceptable consent burden on the customer, while remaining fully compliant, is the design challenge the industry must solve.
The answer, in my view, lies in three interconnected principles.
The first is consent layering. Rather than presenting customers with a single blanket consent at the point of engagement, firms need to architect consent in layers: a foundational consent for the broking relationship itself, and purpose-specific consents triggered as the customer journey progresses. When a customer asks for health insurance quotes, consent for sharing their data with shortlisted insurers should be sought at that moment, clearly articulating which insurers, for what purpose, and for how long. This keeps consent meaningful rather than mechanical.
The second is a dynamic consent record. The days of signed proposal forms being the end of the consent story are over. What DPDP demands, and what good practice requires, is a consent record that is timestamped, auditable, and travels with the data. If a customer later withdraws consent for a specific insurer or purpose, that withdrawal must be honoured and traceable across the entire data chain. This requires investment in a consent management layer integrated with core systems, not bolted on as an afterthought.
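Such a record can be sketched in a few lines. This is a minimal illustration, not a production consent-management layer: the class and field names are assumptions, and a real system would persist these events durably and expose them to every downstream system the data travels through.

```python
# Sketch of a timestamped, auditable consent record in which the full
# event history is retained, so a later withdrawal is both honoured and
# traceable. Names and structure are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentEvent:
    action: str          # "granted" or "withdrawn"
    purpose: str         # e.g. "share_with_insurer:AcmeLife" (hypothetical)
    timestamp: datetime

@dataclass
class ConsentRecord:
    customer_id: str
    events: list = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        self.events.append(
            ConsentEvent("granted", purpose, datetime.now(timezone.utc)))

    def withdraw(self, purpose: str) -> None:
        # Withdrawal is appended, never overwritten: the audit trail
        # shows when consent existed and when it ended.
        self.events.append(
            ConsentEvent("withdrawn", purpose, datetime.now(timezone.utc)))

    def is_active(self, purpose: str) -> bool:
        # The most recent event for a purpose decides its current state.
        state = False
        for e in self.events:
            if e.purpose == purpose:
                state = (e.action == "granted")
        return state

record = ConsentRecord("CUST-001")
record.grant("share_with_insurer:AcmeLife")
record.withdraw("share_with_insurer:AcmeLife")
```

The design choice worth noting is append-only events rather than a mutable flag: the current state is derivable, but the history remains auditable, which is what makes the record defensible under regulatory scrutiny.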
The third is insurer accountability alignment. When customer data leaves the broking firm and reaches an insurer, consent obligations do not simply transfer; they must be contractually mirrored. Data sharing agreements must explicitly state what data was shared, under what consent, for what purposes, and what the insurer’s obligations are if consent is withdrawn or modified.
There is also a broader industry conversation needed around standardisation. Today, every firm is building its own interpretation of granular consent. IRDAI and industry bodies have an opportunity to define templates and protocols that create consistency across the ecosystem, reducing the risk of consent fatigue.
CIO&Leader: With increasing digitisation, how can brokers embed data minimisation and purpose limitation into platform design from day one?
Manoj Kern: Data minimisation and purpose limitation are not features you can retrofit into a platform after it has been built. By the time architecture, data flows, and integrations are established, redesigning for privacy becomes exponentially more complex. These principles must be treated as foundational design constraints from the outset.
For insurance broking platforms, three areas are critical.
The first is purposeful data collection within regulatory constraints. This means collecting only what is necessary for a defined purpose, ensuring underwriting data is not repurposed without consent, and avoiding unnecessary data accumulation.
The second is integration architecture with clarity of purpose. Not all integrations involve personal data; some serve operational functions. Where personal data is involved, APIs must be tightly scoped, sharing only required fields with proper access controls and audit logging.
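Tightly scoped sharing can be enforced mechanically at the integration boundary. The sketch below assumes a hypothetical purpose-to-fields mapping; the purposes and field names are illustrative, not any insurer's actual schema.

```python
# Sketch of purpose-scoped payload filtering for an outbound API call.
# Purposes and field lists are illustrative assumptions.
ALLOWED_FIELDS = {
    "health_quotation": {"age", "occupation", "existing_conditions"},
    "claims_tracking": {"policy_number", "claim_id"},
}

def scope_payload(payload: dict, purpose: str) -> dict:
    """Return only the fields permitted for the stated purpose;
    an unknown purpose yields an empty payload (deny by default)."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in payload.items() if k in allowed}
```

For example, a payload containing a customer's name would have that field silently dropped before a health-quotation call, because the purpose does not require it. The deny-by-default stance for unrecognised purposes is the point: over-sharing requires an explicit configuration change, not an accidental omission.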
The third is consent-aware architecture. If data is collected for a specific purpose under consent, the system must enforce that limitation and prevent reuse without fresh consent.
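One way to make that enforcement concrete is to gate data access on an active consent check, so reuse for a new purpose fails closed. This is a simplified sketch: the in-memory consent store and the function names are assumptions standing in for an integrated consent layer.

```python
# Sketch: purpose limitation enforced at the point of data access.
# The consent store is an illustrative stand-in for a real consent layer.
consent_store = {
    ("CUST-001", "underwriting"): True,
    ("CUST-001", "marketing"): False,
}

class ConsentError(Exception):
    """Raised when data is requested for a purpose without active consent."""

def fetch_customer_data(customer_id: str, purpose: str) -> dict:
    # Fail closed: absence of a consent entry is treated as no consent.
    if not consent_store.get((customer_id, purpose), False):
        raise ConsentError(f"No active consent for purpose '{purpose}'")
    return {"customer_id": customer_id, "purpose": purpose}
```

The same record can then be retrievable for underwriting yet blocked for marketing, which is precisely the behaviour the principle demands.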
The broader point is that privacy by design is not a regulatory burden, it is a competitive advantage. Customers are increasingly aware of their data rights, and firms that demonstrate responsible data practices will build stronger trust. In a relationship-driven business like insurance broking, that trust is a critical asset.
CIO&Leader: With insurers, TPAs, surveyors, and tech vendors in the ecosystem, how should insurance brokers strengthen third-party data governance for end-to-end DPDP compliance?
Manoj Kern: The broking ecosystem is both an operational strength and a data governance vulnerability. Customer data, often sensitive, flows across multiple entities, and under DPDP, accountability ultimately traces back to the originating firm.
The first principle is to treat certifications like ISO 27001 as a baseline, not a guarantee. Firms must assess how vendors handle data, their breach history, subprocessors, and their ability to honour deletion and consent withdrawal.
Second is contractual precision. Data Processing Agreements must clearly define what data is shared, for what purpose, retention timelines, deletion obligations, and breach notification requirements. Any ambiguity is a governance gap.
Third is risk-based tiering. Governance should be proportionate, with stricter oversight for high-risk vendors.
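A tiering model of this kind can be as simple as a weighted score. The weights, thresholds, and tier labels below are assumptions for illustration, not a prescribed methodology; real programmes would use richer criteria.

```python
# Illustrative vendor risk-tiering function. Weights and thresholds
# are assumptions, not a standard; they exist to show the mechanism.
def vendor_tier(handles_personal_data: bool,
                data_volume: str,          # "low" | "medium" | "high"
                uses_subprocessors: bool) -> str:
    score = 3 if handles_personal_data else 0
    score += {"low": 0, "medium": 1, "high": 2}[data_volume]
    score += 1 if uses_subprocessors else 0
    if score >= 5:
        return "high-risk"    # e.g. annual audits, breach-response drills
    if score >= 3:
        return "medium-risk"  # e.g. annual questionnaire, contract review
    return "low-risk"         # e.g. contractual controls only
```

The value of encoding the tiers, even crudely, is consistency: every vendor is assessed against the same criteria, and the resulting tier can drive the oversight cadence automatically.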
Fourth is managing fourth-party risk. When vendors use subprocessors, data moves beyond direct visibility, and firms must ensure transparency and control.
Finally, governance must be continuous, with periodic reassessments, audit rights, and escalation mechanisms. If done well, strong third-party governance becomes a competitive advantage in building customer trust.
CIO&Leader: As AI becomes embedded in insurance broking workflows, what governance controls are needed to address risks like model manipulation and data leakage, especially from third-party tools?
Manoj Kern: AI governance is becoming a critical priority in BFSI, driven by the pace of adoption and the need for controls tailored to AI-specific risks. In insurance broking, where AI powers advisory, comparison, and analytics, the regulatory and reputational stakes are high.
The starting point is a comprehensive inventory of all AI systems, including those within third-party platforms. Many firms lack visibility into vendor AI capabilities, yet accountability remains with them.
To address model manipulation, firms must implement strong validation practices, including adversarial testing, along with continuous monitoring to detect anomalies and bias in outputs.
Data leakage, especially through third-party AI tools, is a major risk. Firms need clear policies on what data can be shared, supported by technical safeguards such as Data Loss Prevention controls to prevent unauthorised exposure.
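At its simplest, such a safeguard is a pre-send screen on any text bound for an external AI tool. The sketch below is a toy illustration of the idea; the patterns are deliberately crude (production DLP uses far richer detection, context, and redaction), and the category names are assumptions.

```python
import re

# Sketch of a pre-send DLP check for prompts going to third-party AI
# tools. Patterns are illustrative, not exhaustive detection logic.
BLOCKED_PATTERNS = {
    "pan": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),        # Indian PAN format
    "aadhaar": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"), # Aadhaar-like number
}

def screen_prompt(text: str) -> list:
    """Return the blocked-data categories detected in the text;
    an empty list means the prompt may be sent onward."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]
```

A caller would refuse to forward, or redact, any prompt for which the returned list is non-empty, keeping the policy decision in code rather than relying on user discipline alone.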
Vendor governance must ensure transparency around model behaviour, data handling, and compliance obligations, including data residency and explainability requirements.
Equally important is human oversight. AI-driven decisions must remain reviewable and accountable, supported by clear governance structures and, ideally, a dedicated AI risk function.
Ultimately, firms that embed governance into their AI adoption from the outset will be better positioned to innovate responsibly while maintaining regulatory compliance and customer trust.
CIO&Leader: Five years from now, what percentage of insurance advisory and customer interactions do you believe will be AI-assisted, and what does that mean for data security and trust?
Manoj Kern: To answer this question with the precision it deserves, we must first acknowledge that insurance broking is not a monolithic business. It spans B2B, B2C, and B2B2C models across an extraordinarily diverse range of lines — from Employee Benefits and Cyber to Marine, Marine Hull, Aviation, Satellite, Engineering, Property, Casualty, Liability, POSI, Product Recall, and Reinsurance. The AI adoption trajectory, and its security and trust implications, looks very different across each of these segments. Any answer that treats them uniformly will be both inaccurate and misleading.
With that context, my view is that within five years, the overall percentage of AI-assisted interactions across the broking spectrum will range from 40 to 85 percent — but the distribution will be deeply uneven.
At the higher end of that range sit the B2C and B2B2C segments — retail health, personal lines, SME employee benefits, and group insurance products. These involve high transaction volumes, relatively standardised data inputs, and customer interactions that lend themselves well to AI-driven personalisation, comparison, needs assessment, and servicing. Here, 80 to 85 percent AI assistance within five years is not only plausible — it is already the direction of travel for the more digitally advanced firms in the market.
At the lower end sit the complex specialty and commercial lines — Aviation, Marine Hull, Satellite, Engineering, POSI, Product Recall, and Reinsurance. These are bespoke, high-value, relationship-driven transactions where the broker’s expertise, market knowledge, and negotiation capability are the core value proposition. AI will play an increasingly important role in data aggregation, risk modelling, exposure analysis, and document processing — but the advisory interaction itself will remain predominantly human-led, with AI serving as a powerful analytical layer rather than a customer-facing interface. Here, 40 to 50 percent AI assistance is a more realistic and responsible projection.
In the middle sit lines like Property, Casualty, Cyber, Liability, and standard Employee Benefits — where AI will progressively handle routine interactions, renewal workflows, claims tracking, and regulatory reporting, while complex risk placements and large account advisory work retains a strong human dimension.
Cyber insurance deserves a specific mention because it sits at a uniquely interesting intersection. The very technology that is driving AI adoption in broking — and the threat landscape it creates — is also the subject matter of Cyber insurance itself. AI will be essential in assessing dynamic, rapidly evolving cyber risk profiles. But it also introduces new risk vectors that underwriters and brokers need to understand deeply. In Cyber insurance, AI is simultaneously the tool, the risk, and the subject of the policy.
From a data security standpoint, the diversity of lines means the diversity of data types flowing through AI systems is extraordinary. Employee benefits interactions involve sensitive health and demographic data. Marine and Aviation transactions involve commercially sensitive cargo and asset information. Engineering and Satellite placements involve proprietary technical specifications. Reinsurance transactions involve aggregated portfolio data that could be competitively damaging if exposed. Each of these data categories demands a tailored security posture within the AI architecture — not a one-size-fits-all approach. Firms that deploy AI without calibrating their data security controls to the sensitivity of each line of business are creating asymmetric risk exposure.
On trust, the stakes vary by segment but are universally high. In B2C segments, customers need to trust that AI recommendations reflect their best interests, not commercial optimisation. In B2B and specialty lines, corporate clients and risk managers need to trust that AI-assisted analysis is accurate, explainable, and not substituting algorithmic convenience for genuine expertise. In Reinsurance, the trust question extends to the integrity and confidentiality of the portfolio-level data that AI systems will inevitably process.
The overarching principle is this: the breadth of insurance broking as a business demands a segmented, risk-calibrated approach to AI governance — not a single policy applied uniformly. The firms that recognise this complexity and architect their AI adoption accordingly will be the ones that earn the trust of their clients, satisfy their regulators, and sustain their competitive advantage across every line they write.