“You Can’t Bet Your Business on AI You Can’t Verify” – Tirthankar Lahiri, SVP, Mission Critical Data and AI Engines, Oracle

In this exclusive interaction with CIO&Leader, Tirthankar Lahiri, SVP, Mission Critical Data and AI Engines, Oracle, shares why CIOs must build verifiability, not just intelligence, into mission-critical systems.


As enterprises move AI from pilots to production, a harder question is emerging inside boardrooms and CIO offices: how do you add intelligence to core systems without breaking trust, governance, or uptime?

The problem is no longer whether AI works. The question is whether AI can be trusted to operate in mission-critical systems that run banks, telecom networks, insurers, and national infrastructure. The shift underway is architectural, not experimental.

In this conversation with Jatinder Singh of CIO&Leader, Tirthankar Lahiri, Senior Vice President, Mission Critical Data and AI Engines at Oracle, argues that the future of enterprise AI will not be defined by speed or scale alone, but by verifiability, structural constraints, and security anchored at the data layer. From why AI must live inside the database to why “guardrails” are not enough for agentic systems, Lahiri lays out a pragmatic blueprint for CIOs trying to move from automation to accountable delegation.

CIO&Leader: As AI shifts from experimentation to mission-critical workloads, what global changes do you see by 2026 in how enterprises are architecting data platforms that must balance AI innovation with determinism, reliability, and regulatory compliance?

Tirthankar Lahiri: By 2026, the biggest shift will be architectural. Enterprises are realizing that AI cannot sit outside the system of record. Moving data to separate AI engines introduces governance gaps and security fragmentation.

We are seeing three global patterns:

  • Security anchored at the data layer: AI must obey the same access controls as human users.
  • Verifiability built into workflows: Outputs must be auditable and reproducible.
  • Bounded intelligence: AI must operate within defined execution paths, not open-ended reasoning loops.

Innovation is no longer about experimentation. It is about embedding intelligence inside deterministic infrastructure.


CIO&Leader: Why do you believe AI must be engineered directly into the data platform, and what risks arise when AI is layered outside core enterprise systems?

Tirthankar Lahiri: What we are doing at Oracle is focused within the database organization. The strategy is simple: you should not have to move your data elsewhere to run AI. Many older vector database products required moving data to a separate system for AI workloads. That creates security risks because once data leaves the corporate database, you lose governance and control.

For us, everything must remain in the same place. Enterprises can only deploy solutions they can verify, not merely trust. Trust implies faith, and in enterprise systems faith is not enough. You must be able to verify the output.

It starts with security embedded deep in the data layer. We call this Deep Data Security. When AI accesses data, it must follow the same rules as a human user. AI cannot bypass security controls. If security lives only in the application layer, AI can bypass it and access data directly. Security has to exist at the source.
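
To make that concrete, here is a minimal sketch of security enforced at the data layer, using Oracle's Virtual Private Database (the DBMS_RLS package). The schema, application context, and column names are illustrative assumptions, not Oracle's actual implementation:

  -- Predicate function: every query on HR.EMPLOYEES is silently restricted
  -- to the caller's own organization, whether the caller is a person or an
  -- AI agent's database session. (app_ctx and org_id are hypothetical.)
  CREATE OR REPLACE FUNCTION org_predicate (
    p_schema IN VARCHAR2,
    p_table  IN VARCHAR2
  ) RETURN VARCHAR2 IS
  BEGIN
    RETURN 'org_id = SYS_CONTEXT(''app_ctx'', ''org_id'')';
  END;
  /

  BEGIN
    DBMS_RLS.ADD_POLICY(
      object_schema   => 'HR',
      object_name     => 'EMPLOYEES',
      policy_name     => 'org_rows_only',
      function_schema => 'SEC',
      policy_function => 'ORG_PREDICATE',
      statement_types => 'SELECT, UPDATE, DELETE');
  END;
  /

Because the predicate is applied inside the database, no application-layer component, AI included, can route around it.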

We also focus on verifiable design. We call it Generative Creator Development. It is not one technique but a collection of approaches that ensure outcomes are trustworthy and editable.

If I receive an output, I should be able to review it and decide what I want and what I do not want. AI-generated code and SQL are risky because you may not know what they will ultimately do. Betting your business on unverified AI output is dangerous.

So we rely on trusted data APIs that prevent arbitrary access to tables and columns. They expose complete business objects and ensure any change preserves business validity. The same applies to agentic AI. Every interaction must be verified. Guardrails alone are not sufficient.
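
A minimal sketch of that trusted-data-API pattern: direct table grants are revoked, and every caller, human or AI, gets only a package that exposes a complete business object and enforces business validity. All object names here are hypothetical:

  -- (orders, order_api, and ai_agent are illustrative names.)
  -- The AI agent's account holds no table privileges at all.
  REVOKE SELECT, INSERT, UPDATE, DELETE ON orders FROM ai_agent;

  CREATE OR REPLACE PACKAGE order_api AS
    PROCEDURE update_shipping_address(
      p_order_id IN orders.order_id%TYPE,
      p_address  IN VARCHAR2);
  END order_api;
  /

  CREATE OR REPLACE PACKAGE BODY order_api AS
    PROCEDURE update_shipping_address(
      p_order_id IN orders.order_id%TYPE,
      p_address  IN VARCHAR2) IS
    BEGIN
      -- Business-validity rule: only open orders may be re-routed.
      UPDATE orders
         SET shipping_address = p_address
       WHERE order_id = p_order_id
         AND status   = 'OPEN';
      IF SQL%ROWCOUNT = 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'Order not open or not found');
      END IF;
    END;
  END order_api;
  /

  GRANT EXECUTE ON order_api TO ai_agent;

The agent can request a legitimate business operation, but it cannot compose arbitrary SQL against the underlying tables.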


CIO&Leader: You already touched on GenAI and described Database 26ai as AI native. How does that fundamentally change capabilities such as vector search and natural language interaction when these are built directly into mission-critical applications?

Tirthankar Lahiri: Vector search brings AI into any application by expanding how data can be queried. I can ask, “Find me the top-selling products.” That runs on any version of Oracle Database. But now I can also ask, “Find me the top-selling products that are similar to this product.” Similarity is no longer a traditional database construct. It is semantic. With an AI-powered database, that becomes possible.
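
A sketch of what such a query can look like with the native VECTOR type and VECTOR_DISTANCE function available in recent Oracle Database releases; the products table and its columns are an illustrative schema:

  -- "Top-selling products similar to this product": a relational filter
  -- plus a semantic ORDER BY, in a single SQL statement.
  SELECT name, units_sold
    FROM products
   WHERE units_sold > 10000
   ORDER BY VECTOR_DISTANCE(
              embedding,
              (SELECT embedding FROM products WHERE product_id = :this_product),
              COSINE)
   FETCH FIRST 10 ROWS ONLY;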

Traditionally, databases excel at value-based operations: precise filtering, aggregation, and computation. With AI and vector search, you can search based on semantic similarity, which traditional databases could not do. That is foundational to modern AI systems, and that is why it is transformational.

Vector search itself may not be unique to Oracle, but Oracle’s strength lies in combining it with the database’s other capabilities. You can take complex applications in banking, financial services, or telecom and embed AI simply by adding vectors as another search dimension. That integration is what makes the shift fundamental. It also enables natural language interaction.

Looking ahead, I believe most systems will primarily use natural language interfaces. SQL and programming languages may become like assembly language, low-level outputs of higher-level conversational interactions.

But human language is ambiguous. If someone asks, “Find me the top-selling products,” the system should clarify: “Do you mean by revenue or by number of units?” The shift from an ambiguous question to a precise, deterministic query is critical.

This will be multi-stage. AI interprets the initial question, refines it into a more specific question, and presents it back for confirmation. For example: “Do you want the products with the highest units sold in Maharashtra last week?” Once the user confirms, the translation to SQL becomes straightforward and deterministic.
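
Once the user confirms, the output is ordinary, deterministic SQL. A sketch against a hypothetical sales schema:

  -- "Products with the highest units sold in Maharashtra last week"
  SELECT p.name, SUM(s.units) AS units_sold
    FROM sales s
    JOIN products p ON p.product_id = s.product_id
   WHERE s.region    = 'Maharashtra'
     AND s.sale_date >= TRUNC(SYSDATE, 'IW') - 7  -- start of last ISO week
     AND s.sale_date <  TRUNC(SYSDATE, 'IW')      -- start of this ISO week
   GROUP BY p.name
   ORDER BY units_sold DESC
   FETCH FIRST 10 ROWS ONLY;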

So AI is the bridge between natural language and structured business data through semantic matching. That is where Oracle’s strength lies, given the scale and importance of the business data repositories it manages.

“Guardrails are suggestions. Enterprise AI needs tracks it cannot leave.”

CIO&Leader: With AI-specific regulations emerging alongside DPDP, how should enterprises design AI systems to ensure auditability, accountability, and verifiability rather than blind trust?

Tirthankar Lahiri: For me, it has to be verifiable. AI cannot operate like a black box. Every action must be auditable and traceable. There must be reasoning behind why an answer was selected, and a human must always be able to edit it.

I prefer a multi-stage generation process. Instead of producing one long, unreviewable output, AI should generate structured responses that can be checked. Even if it runs automatically, everything must be logged so someone can go back, review what happened, and correct it.

The confusion comes from consumer AI. People think they can use tools like ChatGPT to summarize business data and rely on it. You should not bet your business on that. AI is nondeterministic. It may be right most of the time, but that is not enough for enterprise systems.

AI is like a smart teenager. Often correct, always confident, and ready with an answer, even when it is wrong. That can be useful, but it requires supervision. Enterprise AI has to operate within workflows that support editability, accountability, and retry. I think regulation will evolve around that reality. Verification will always be required.

CIO&Leader: Some of Oracle’s large customers, especially in banking and other regulated sectors, still value a predictable database over an intelligent one. How do you address concerns that introducing AI into the database could create opacity, governance challenges, or new security risks?

Tirthankar Lahiri: First, security must live in the data layer. That is non-negotiable. You cannot secure AI only at the application layer. Data is the foundation, so secure the foundation.

Second, introduce AI carefully. Start with internal use cases before exposing them to customers. Many banks are experimenting with internal chatbots to improve employee productivity. That is a lower-risk entry point.

You should not use AI for adjudicative workflows. An AI should not approve mortgages or claims. But it can assist a human agent by drafting questions, recommending additional information, and helping structure an application. The final decision must remain human.

AI should act like a clerk, not a judge. It can organize and assist, but it should not independently approve or reject critical decisions. That is how you add intelligence without compromising governance.

It will be a journey, especially in regulated sectors. The key is strong data security, controlled deployment, and keeping humans firmly in the loop.

CIO&Leader: As intelligence spreads across the stack, does it compound value or compound risk? What controls are essential at the engine level?

Tirthankar Lahiri: It can do both.

Intelligence compounds value when it improves efficiency inside bounded workflows. It compounds risk when it operates without structural limits.

Essential controls include:

  • Hard access restrictions, not soft guardrails
  • Deterministic execution pathways
  • Full logging of AI interactions
  • Human override capabilities

AI should operate like a train on tracks. It may choose among predefined routes, but it cannot leave the rails.
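
As one concrete instance of the logging control, Oracle's unified auditing can record every statement an AI agent's account issues; a sketch, with the AI_AGENT account and policy name as assumptions:

  -- Record all DML and queries issued by the agent's database account.
  CREATE AUDIT POLICY ai_agent_activity
    ACTIONS SELECT, INSERT, UPDATE, DELETE;

  AUDIT POLICY ai_agent_activity BY ai_agent;

  -- Later, review exactly what the agent did and when.
  SELECT event_timestamp, sql_text
    FROM unified_audit_trail
   WHERE dbusername = 'AI_AGENT'
   ORDER BY event_timestamp;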

CIO&Leader: From an engineering standpoint, what guardrails do you think are critical to prevent unintended or emergent behavior in production systems?

Tirthankar Lahiri: I personally do not believe in guardrails. They give a false sense of security. Guardrails are still suggestions.

Think of it like telling my teenage child, “Please don’t go into that room.” That is my guardrail. I can be pretty sure someone will go into the room. So what do you do? You lock the room.

You cannot expect AI to follow rules just because you told it to. You have to engineer trust into the core architecture. Guardrails are, at best, documentation of what the rules should be. What you really need are hard rails. Hard tracks you cannot deviate from. You can choose a direction, you can go to A or B, but you cannot go in an arbitrary direction.
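
One way to build such hard rails at the engine level is an allow-list of permitted statements. A sketch using Oracle SQL Firewall (the DBMS_SQL_FIREWALL package in recent releases); the AI_AGENT account is a hypothetical name:

  BEGIN
    DBMS_SQL_FIREWALL.ENABLE;
    -- Learn the approved workload from a trusted session...
    DBMS_SQL_FIREWALL.CREATE_CAPTURE(username => 'AI_AGENT');
    -- ...run the sanctioned statements, then freeze them as the allow-list.
    DBMS_SQL_FIREWALL.STOP_CAPTURE(username => 'AI_AGENT');
    DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST(username => 'AI_AGENT');
    -- From here on, any statement outside the allow-list is blocked.
    DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST(
      username => 'AI_AGENT',
      enforce  => DBMS_SQL_FIREWALL.ENFORCE_SQL,
      block    => TRUE);
  END;
  /

The agent can still choose among the permitted statements, A or B, but it cannot issue one that was never approved.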

That is the only way to build a secure, verifiable enterprise AI. It is part of our GenAI philosophy. You limit what the system can do, and you make those limits structural.

CIO&Leader: With the rise of vector databases and specialized AI data stores, do you think the traditional RDBMS model is fading away?

Tirthankar Lahiri: No, I think the opposite. My prediction is that vector databases will prove to be a flash in the pan created by generative AI. Companies quickly discovered that once you move data out of a relational database, you lose security and governance.

In the next five years, every major database will support vectors natively. We will not even discuss vector indexes in five years because they will be like the indexes everybody has.

When that happens, standalone vector databases will struggle. They are not bad products, but they are limited in scope. They can perform semantic similarity searches, but they do not support sophisticated relational business queries or filters. Once you export data to them, your security controls are no longer in place. You have to reinvent them, and those databases lack decades of enterprise security maturity.

Oracle, for example, has deep security capabilities, such as policies that let me see only the employees in my own organization when I query a table. A vector database lacks that sophistication. If I run a similarity search there, it may run across everyone.

I think the benefits of vector databases will move into the traditional relational databases and expand what they do. But I do not think vector databases will have a long-lived run as document databases did. I would be surprised if, five years from now, many of them still exist.

CIO&Leader: Looking at India specifically, where do you see maximum demand for AI cloud solutions coming from? What are you seeing among Oracle’s largest mission-critical customers and their key pain points?

Tirthankar Lahiri: My largest mission-critical customers are focused on core systems like core banking and core telecom, the systems their businesses depend on.

A lot of them are enabling RAG and interactive workflows on those systems to answer questions like, “Which customers are similar to this customer?” or “What service request is similar?” That is a very common use case. Before taking a new service request or ticket, you first check whether any existing tickets match the symptoms. You can do that on the production service request database.

Service request databases are huge. For a large telecom, you could have millions filed daily. Earlier, you would just accept the request and process it. Now you can do a first diagnosis: “This looks similar to that issue. It may be the same problem.”

That creates massive operational efficiency. Instead of routing it to another rep and duplicating work, you can flag likely duplicates and speed up resolution. These core systems will be augmented with AI to drive better operational efficiency.
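
A sketch of that duplicate check as a similarity query over a hypothetical service_requests table, using the same VECTOR_DISTANCE primitive described earlier:

  -- Before opening a new ticket, find open tickets whose symptom embeddings
  -- are closest to the new description's embedding (:new_ticket_vec).
  SELECT sr_id, summary,
         VECTOR_DISTANCE(symptom_embedding, :new_ticket_vec, COSINE) AS distance
    FROM service_requests
   WHERE status = 'OPEN'
   ORDER BY distance
   FETCH FIRST 5 ROWS ONLY;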

CIO&Leader: Is adoption of AI-enabled databases accelerating on a wave of trust, or are enterprises deploying cautiously because of the risks?

Tirthankar Lahiri: People are cautious, and not just in India. They are still exploring what they can do without harming anyone. That is the right way to start. Find use cases that improve efficiency and deliver measurable ROI without risky side effects and without taking over workflows.

Caution is the name of the game in enterprise, especially mission-critical systems. I do not see agentic workloads proliferating immediately on core systems. I see adjunct workflows that improve efficiency. Over time, agentic workflows will become more common as we get comfortable with them. In banking, especially given the legislation and regulations to follow, cautious adoption is the right approach.

CIO&Leader: What is the single biggest driver: cost, speed of innovation, efficiency, or something else?

Tirthankar Lahiri: Efficiency. Everybody wants to reduce costs and improve operations. That is a big driver.

As in the service request example, one customer saw a 60-70% reduction in service requests filed because they could detect duplicates quickly. That reduced the workload for the team managing them. These are the early use cases customers want: reduce costs and improve turnaround time.

CIO&Leader: What leadership or architectural trade-offs do CIOs face when introducing intelligence into mission-critical databases without compromising trust, uptime, or accountability?

Tirthankar Lahiri: Ideally, you should not make trade-offs. Trade-offs imply compromise, and most CIOs do not want to compromise on system integrity or verifiability. They want to add AI seamlessly without introducing risk or vulnerabilities.

The ideal outcome is adding AI without compromising integrity.

CIO&Leader: How critical is upskilling to manage risks like data leaks from employees using rogue AI tools like ChatGPT?

Tirthankar Lahiri: Upskilling is essential. AI won’t replace humans, but humans who master AI will replace those who don’t. You can’t stop large workforces from using ChatGPT independently; education on safe AI practices is absolutely key.
