AI Won’t Save Healthcare If the Data Feeding It Is Broken


India’s healthcare AI race is accelerating, but Ashissh Raichura, Founder & CEO of Scanbo, believes the industry is building on a cracked foundation. While boardrooms debate algorithms and funding rounds chase the next diagnostic model, Raichura is asking a more uncomfortable question: what if the real problem isn’t the AI at all?

Ashissh Raichura
Founder & CEO
Scanbo

At the heart of his argument is a diagnostic data crisis hiding in plain sight: noisy signals, inconsistent readings, and hardware that was never built to feed intelligent systems. In this conversation, Raichura dismantles the demo-to-deployment illusion, challenges India’s policy priorities, and makes the case that before healthcare AI can truly scale, the infrastructure beneath it must first be fixed.

CIO&Leader: Healthcare AI will not scale on models alone. It needs a reliable diagnostic data layer beneath it.

Ashissh Raichura: Healthcare AI does not usually fail first at the model layer. It fails much earlier — at the point where the patient signal is captured.

In healthcare, the model is only the final layer of interpretation. If the ECG trace, blood pressure reading, oxygen saturation value or any other diagnostic input is noisy, inconsistent or poorly captured, the AI layer is working on weak ground. It may still produce an output, but that output may not be clinically reliable.

This is why the harder question is not only whether the model is advanced. The harder question is whether the diagnostic signal below it is clean, repeatable and fit for medical use. In healthcare AI, bad input is not a technical inconvenience. It can directly affect trust, workflow and clinical decision-making.

CIO&Leader: What makes a diagnostic signal “clinically usable” versus just technically correct?

Ashissh Raichura: A signal can be technically correct and still not be clinically useful.

From a device or engineering perspective, a reading may look clean. But for a doctor, that is not enough. The signal has to be stable, repeatable and meaningful in the context of a patient’s condition. It should not work only in ideal settings. It should hold up when used across different patients, operators and real-world care environments.

Clinical usability begins when a signal can support a decision, not merely produce a number. A reading that cannot be compared over time, interpreted with confidence or connected to a patient’s broader health picture has limited value. The real test is not whether the device can capture data. It is whether that data can be trusted when care decisions are being made.

CIO&Leader: How do you ensure signal quality and reliability in real-world, point-of-care settings?

Ashissh Raichura: In point-of-care diagnostics, signal quality has to be designed into the system from the beginning. It cannot be corrected at the end by an AI model.

Real-world settings are not controlled like labs. A device may be used in a clinic, a health camp, a pharmacy, a corporate health check-up or a remote-care setting. The person using it may not always be a specialist. The patient may move. The environment may not be ideal. These are the conditions under which the system has to work.

That is why the device architecture matters: sensor quality, placement and calibration; isolation between different measurements; noise handling; and checks before interpretation. AI should sit on top of reliable signals. It should not be used as a patch for poor signal capture. For healthcare AI to be trusted, reliability has to begin at the hardware and diagnostic layer itself.
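The “checks before interpretation” idea can be made concrete. The sketch below is a minimal, hypothetical quality gate in Python — it is not Scanbo’s implementation, and every threshold and failure reason is an illustrative assumption — but it shows the principle of rejecting a weak signal before it ever reaches the AI layer.

```python
import statistics

def quality_gate(samples, min_len=500, max_flatline=0.2, max_jump=3.0):
    """Decide whether a captured signal is fit to hand to a model.

    Hypothetical checks (thresholds are illustrative assumptions):
      - enough samples to interpret at all,
      - not mostly flat (which can indicate a detached sensor),
      - no implausibly large jump relative to overall variability
        (which can indicate a motion artefact).
    Returns (passed, reason).
    """
    if len(samples) < min_len:
        return False, "too short"
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    flat_fraction = sum(1 for d in diffs if d == 0) / len(diffs)
    if flat_fraction > max_flatline:
        return False, "flatline / detached sensor"
    spread = statistics.pstdev(samples) or 1e-9
    if max(diffs) / spread > max_jump:
        return False, "motion artefact"
    return True, "ok"
```

In a deployment shaped like the one described above, a reading that fails such a gate would trigger a re-capture at the point of care rather than being passed downstream for interpretation.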

CIO&Leader: What are the biggest gaps between AI demos and deployment in live clinical environments?

Ashissh Raichura: The biggest gap is that demos show performance in controlled conditions. Healthcare deployment tests performance under pressure.

In a demo, the data is clean, the user journey is smooth and the system is usually shown in its best environment. In a live clinical setting, the situation is very different. Patients move, operators change, workflows are rushed and the quality of the captured signal can vary. That is where many AI tools face their real test.

The second gap is adoption. A tool may look impressive technically, but if it slows down the nurse, technician or doctor, it will not be used consistently. In healthcare, deployment is not just about model accuracy. It is about whether the system fits into the rhythm of care without increasing friction.

The third gap is validation. A model that performs well on structured datasets has to prove itself on messy, real-world diagnostic inputs before it can be trusted at scale.

CIO&Leader: How should diagnostics infrastructure evolve to support scalable healthcare AI in India?

Ashissh Raichura: India’s healthcare AI story cannot be built only from the software side. It needs a stronger diagnostic layer closer to the patient.

A large part of healthcare in India still depends on delayed, episodic or fragmented diagnostics. If AI has to support early detection, preventive care or chronic disease management, reliable health data must be captured where patients actually are — in clinics, primary-care centres, pharmacies, workplaces, community settings and remote locations.

The next step is standardisation. Diagnostic data should not remain trapped in isolated reports or disconnected devices. It has to be captured in a consistent format, stored securely and made useful over time. AI becomes more meaningful when it can study health patterns, not just single readings.
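One way to picture that standardisation is a single record shape shared by every device. The dataclass below is an illustrative sketch only — it is not a published standard such as HL7 FHIR, and all field names are assumptions — but it captures the point that consistent, comparable records are what let AI study patterns over time rather than single readings.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DiagnosticReading:
    """One hypothetical, device-agnostic diagnostic record.

    Every device emits the same shape, so readings from different
    devices and settings can be compared longitudinally.
    """
    patient_id: str
    measurement: str      # e.g. "spo2", "systolic_bp"
    value: float
    unit: str             # e.g. "%", "mmHg"
    device_id: str
    captured_at: str      # ISO 8601 timestamp, UTC
    quality_passed: bool  # result of a capture-time quality check

def make_reading(patient_id, measurement, value, unit,
                 device_id, quality_passed):
    """Stamp a reading with a UTC capture time in the shared shape."""
    return DiagnosticReading(
        patient_id, measurement, value, unit, device_id,
        datetime.now(timezone.utc).isoformat(), quality_passed,
    )
```

Because the record is a plain, serialisable structure, it can be stored securely and aggregated across clinics, camps and remote settings without per-device translation logic.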

For India, the opportunity is not only to build better healthcare algorithms. It is to build a more dependable diagnostic data layer beneath them.

CIO&Leader: Are current policy pushes like Startup India Fund 2.0 and SAHI/BODH addressing the right layer of the stack?

Ashissh Raichura: They are important steps, but healthcare AI policy also needs to go deeper into the infrastructure layer.

Capital support for deep-tech innovation is necessary. Responsible AI frameworks are also necessary. But in healthcare, the model is only one part of the stack. The quality of diagnostic data, validation of devices, interoperability between systems and real-world deployment conditions are equally important.

If policy focuses only on AI models and governance, the practical impact may remain limited. The real test will be whether India can support the full chain — from reliable data capture at the point of care to clinical validation, workflow integration and safe use by healthcare providers.

Healthcare AI will scale only when the system below the model is also ready.

CIO&Leader: What does a truly production-ready healthcare intelligence stack look like beyond pilots?

Ashissh Raichura: A pilot shows that a healthcare AI system can work. Production shows whether it can work repeatedly, safely and in different care settings.

A production-ready stack starts with reliable signal capture. The device has to generate consistent diagnostic inputs across different users, patients and environments. The next layer is data quality — the information must be structured, secure and usable over time.

Only after that does the AI layer become meaningful. The model has to be clinically validated, explainable enough for healthcare use and designed to support doctors rather than replace judgment. The system must also fit into existing workflows, because adoption depends on whether it makes care delivery easier, not more complicated.

In healthcare, readiness is not proven by a good demo. It is proven when the same system can perform in a clinic, a camp, a remote setting and a high-volume care environment with the same level of trust.
