Aravind Putrevu, Director of Developer Marketing at CodeRabbit, discusses the risks of open source.
Open-source AI is now standard in most tech teams. Startups use it to move fast, bigger companies use it to cut costs and avoid lock-in, and even governments look at it for “build in India” plans. The question is not “open or closed” anymore. The real question is: can we keep things open without it becoming everyone’s problem?
Here is the simple picture: most Indian AI startups are not training huge frontier models. They are wiring together open models, libraries, and cloud services to ship apps. Inside enterprises, it is the same story. If you try to block open source “for safety”, people will still download models and tools and run them on side projects or shadow IT. You do not reduce risk, you just lose visibility.
But risk is not imaginary. The same open models that power useful copilots can also be used for scams, fake videos, fraud, or targeted propaganda. Open source also makes it easier for bad actors to run these systems completely offline, away from any platform checks. Closing everything and trusting a few vendors does not solve that. It just means you are blind to how the system works and what data went in.
So the answer is not “open is good, closed is bad” or the reverse. The answer is design. You decide what you open, where you open it, and how you control it.
1. Be clear what you are opening
“Open-source AI” is a vague label. There are at least five different things you might be opening:
- Code
- Model weights
- Training data
- Evaluation sets
- Safety and policy layers
Open code and tooling are usually low risk. Open evaluation sets and test scripts are also helpful. Fully open weights are powerful and useful for local fine-tuning, but they need more control around how they are used. Open training data that includes personal or copyrighted content is where legal and privacy trouble starts.
Treat each of these as a separate choice, not one big switch.
2. Ship products, not raw weights
A lot of “open” models are basically some weights on a hub and a short README. That is closer to a lab dump than a product.
Take a base model and turn it into a distribution your company can stand behind. That means adding things like:
- Safety tuning and filters
- Clear usage policies
- Logging and basic abuse detection
- Versioning and an update story
Engineers can still access the lower layers when needed, but the default path should be the safer wrapped version. Anything that touches customers or sensitive data should go through this route.
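To make that concrete, here is a minimal sketch of such a wrapper in Python. The class name, the pinned version string, the blocked patterns, and the local /generate endpoint are all illustrative placeholders, not any specific vendor’s API; the point is simply that every call goes through policy checks, logging, and a pinned version by default.

```python
# Minimal sketch of an internal "distribution" wrapper around an open model.
# Names (ModelDistribution, the /generate endpoint, the blocked patterns) are
# illustrative placeholders, not a specific vendor's API.
import json
import logging
import re
import urllib.request

logging.basicConfig(filename="model_usage.log", level=logging.INFO)

class ModelDistribution:
    """Pins a model version and forces every call through policy checks and logging."""

    MODEL_VERSION = "llama-3.1-8b-instruct@2024-07"            # pinned, updated deliberately
    BLOCKED_PATTERNS = [r"\b\d{12}\b", r"password\s*[:=]"]     # e.g. Aadhaar-like numbers, secrets

    def __init__(self, endpoint: str = "http://localhost:8000/generate"):
        self.endpoint = endpoint

    def _violates_policy(self, text: str) -> bool:
        return any(re.search(p, text, re.IGNORECASE) for p in self.BLOCKED_PATTERNS)

    def generate(self, user_id: str, prompt: str) -> str:
        # Usage policy and abuse detection sit in front of the model, not after it.
        if self._violates_policy(prompt):
            logging.warning("blocked prompt user=%s", user_id)
            return "Request blocked by usage policy."
        req = urllib.request.Request(
            self.endpoint,
            data=json.dumps({"model": self.MODEL_VERSION, "prompt": prompt}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            # Assumes the endpoint returns {"text": ...}; adjust to your server's schema.
            output = json.loads(resp.read())["text"]
        logging.info("user=%s version=%s prompt_len=%d", user_id, self.MODEL_VERSION, len(prompt))
        return output
```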
3. Make governance part of the engineering work
Open-source AI also has licence and compliance issues. Different licences have different rules on commercial use, sharing changes, and patents. On top of that, you have data rules, sector rules, and whatever your board and regulators expect.
If you only look at this during an audit, you are already late. Bring basic governance into the engineering workflow:
- Model cards and simple documentation
- Data lineage: where did the core data come from
- Do-not-use lists for certain models or datasets
- Central logs for prompts and responses
- Playbooks for incidents and rollbacks
If you would not run payments without logs and limits, do not run powerful open models without the same mindset.
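As one concrete example, checks like these can run in CI alongside your tests. The file layout (models/&lt;name&gt;/model_card.md, data_lineage.md) and the do-not-use entries below are illustrative conventions, not a standard; the point is that missing documentation or a blocked model fails the build instead of surfacing in an audit months later.

```python
# Minimal sketch of a CI-style governance check. The directory layout and the
# do-not-use entries are illustrative conventions, not a standard.
import sys
from pathlib import Path

DO_NOT_USE = {"some-unvetted-model", "dataset-with-unknown-licence"}  # placeholder entries

def check_repo(repo_root: str) -> int:
    root = Path(repo_root)
    failures = []

    # Every model directory must ship a model card and a declared data lineage file.
    for model_dir in (root / "models").glob("*"):
        if not (model_dir / "model_card.md").exists():
            failures.append(f"{model_dir.name}: missing model_card.md")
        if not (model_dir / "data_lineage.md").exists():
            failures.append(f"{model_dir.name}: missing data_lineage.md")
        if model_dir.name in DO_NOT_USE:
            failures.append(f"{model_dir.name}: on the do-not-use list")

    for failure in failures:
        print("GOVERNANCE FAIL:", failure)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(check_repo(sys.argv[1] if len(sys.argv) > 1 else "."))
```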
A short playbook for AI leaders
If you own AI in an Indian company, whether you are a CIO, CDO, CTO or head of product, here is a simple starting point:
- Tier the risk: internal copilots and log cleanup are low risk; citizen-facing finance, health, and education are high risk. Match openness to that.
- Maintain a small list of “approved” open models or distributions with clear licences and benchmarks. Everything else stays in experiment land.
- Put a gateway in front of all models, open or closed. That layer handles safety checks, rate limits, PII controls, and logging.
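As an illustration, here is a minimal Python sketch of what that gateway layer might do. The rate limit, the PII patterns, and the route_to_model stub are placeholders for your real inference clients and policies; the design point is that every request, to open or closed models alike, passes through one choke point that redacts, limits, and logs.

```python
# Minimal sketch of a model gateway: one choke point for rate limits, PII
# redaction, and logging before any model (open or closed) sees the request.
# route_to_model is a placeholder for your actual inference client.
import logging
import re
import time
from collections import defaultdict

logging.basicConfig(filename="gateway_audit.log", level=logging.INFO)

RATE_LIMIT = 60                      # requests per user per minute (illustrative)
_request_log = defaultdict(list)     # user_id -> recent request timestamps

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{10}\b"),
}

def redact_pii(text: str) -> str:
    # Replace matched identifiers with typed placeholders before anything is stored or sent.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def route_to_model(model_name: str, prompt: str) -> str:
    # Stub: in practice this calls one of your approved open or closed models.
    return f"[{model_name}] response to: {prompt[:40]}"

def handle_request(user_id: str, model_name: str, prompt: str) -> str:
    # Rate limiting: drop requests beyond the per-minute budget.
    now = time.time()
    window = [t for t in _request_log[user_id] if now - t < 60]
    if len(window) >= RATE_LIMIT:
        logging.warning("rate_limited user=%s", user_id)
        return "Rate limit exceeded."
    window.append(now)
    _request_log[user_id] = window

    # PII controls: never let raw identifiers reach the model or the audit log.
    safe_prompt = redact_pii(prompt)
    logging.info("user=%s model=%s prompt=%r", user_id, model_name, safe_prompt)

    return route_to_model(model_name, safe_prompt)
```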
Open-source AI is not going away. The choice is not between open and safe. The real choice is between planned openness and chaos. Get the design right, and open source becomes an asset, not a liability.