Giving AI more autonomy? First, ask the right questions

LONG BEFORE Apple Card established a significant presence in the U.S. credit card market, its 2019 launch was marred by controversy. The digital-first credit card, integrated with the iPhone’s Wallet app, relied on an AI-driven algorithm to assess creditworthiness and set credit limits, and that algorithm operated with significant autonomy and minimal human oversight. Soon after its debut, it was accused of gender bias. Several users, including Danish entrepreneur and racing driver David Heinemeier Hansson, criticized the card after their wives received significantly lower credit limits despite having similar or even stronger financial profiles.

The above example highlights the limitations and unintended consequences of narrow, task-specific AI. I revisit it to pose a larger question: What if the Apple Card system had agentic characteristics? Could it have:
◼ Detected disparities in treatment and flagged them as potential bias?
◼ Initiated a self-assessment or escalated the issue to human oversight?
◼ Suggested alternative credit models that account for joint financial behavior or structural gender biases in credit scoring data?

Maybe yes. But agentic capabilities could also introduce new challenges. Imagine if the system weren’t just following fixed credit rules but also trying to maximize long-term customer profitability. It might start assigning lower limits to people it considers less profitable, based on patterns in the data that reflect past biases. In trying to be “smart,” the system could discriminate even without malicious intent. That’s the tricky part with more advanced, agentic AI: it doesn’t just follow instructions; it defines its own strategies to achieve goals. And if those goals aren’t clearly defined or closely monitored, the system may take actions that surprise us or even conflict with our values.

The Apple Card example shows that even simple automation can have real-world consequences. As AI systems become more autonomous, the risks and responsibilities only grow. The challenge is no longer just about making them smarter—it’s about making them fair, accountable, and aligned with human values. In our cover story this month, we’ve tried to find out where Indian enterprises are on this journey and what it will take to make agentic AI a success.