On-Device AI

Your personal smart assistant

Hyped-up technological phenomena are quite common, and most of them end up being a bubble, one that pops without mercy and takes out most of the companies in that domain. The few that survive have truly innovated or offered a little more than just a buzzword. AI is no different. There are several key players whose names we all know because they've become synonymous with AI; ChatGPT, for example. It's a Large Language Model (LLM)-based chatbot and the most popular of all AI services currently available to the public. There are many like it, but most people will only recall ChatGPT.

ChatGPT basically consists of a weighted AI model running on a server somewhere, with petabytes of information that it can draw upon at the drop of a hat. Anyone who has used ChatGPT will be all too familiar with the wait for each response. That's because it takes time for the model to work through all that information and generate an adequate response. The only reason it responds as quickly as it does is that the model runs on some of the most powerful hardware money can buy. So, does that mean AI will forever remain on the cloud? Not quite.

On-device AI

AI requires a lot of horsepower during the training stage, i.e., when it is learning. Once deemed worthy of being put to use, the model no longer requires powerful hardware; it can make do with a fraction of the processing power. This stage is termed inference. So once an AI is trained, it can function on a relatively low-power processor, and it becomes even more energy-efficient on a processor designed specifically for AI workloads. Say, something like your average smartphone. And if you haven't realized it, most smartphones have already incorporated some form of AI processor for a few generations now. A lot of companies call this specialized processor the Neural Processing Unit (NPU). Unlike the cloud-reliant AI systems of the past, on-device AI brings the power of artificial intelligence directly into the hands of users, offering a blend of personalization, security, and efficiency previously unimaginable. Earlier iterations of the NPU were quite subtle, running very small AI models for purposes such as polishing up that photo you just shot with your phone camera. With the advent of the AI arms race, smartphone hardware manufacturers have ramped up their efforts significantly, allowing much more capable models to run directly on your personal device.
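To make the training-versus-inference distinction a little more concrete, here is a minimal sketch of what the inference stage can look like in practice, using TensorFlow Lite, one of the common runtimes for running pre-trained, quantized models on phone-class chips. The model file name and the photo-enhancement use case are hypothetical placeholders, not something from any particular phone maker.

```python
# Minimal on-device inference sketch with TensorFlow Lite.
# "image_enhancer.tflite" is a hypothetical, already-trained model file.
import numpy as np
import tensorflow as tf

# Load the pre-trained model; no training happens on the device.
interpreter = tf.lite.Interpreter(model_path="image_enhancer.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A blank frame standing in for a real camera photo,
# shaped and typed to match whatever the model expects.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

# Inference: a single forward pass, cheap enough for a phone-class processor.
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
enhanced = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", enhanced.shape)
```

On an actual handset the same model would typically be handed off to the phone's NPU or GPU through a hardware delegate rather than run on the CPU, but the flow, load a finished model and run single forward passes, is the same idea the rest of this piece is describing.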

Tracing back the lineage of on-device AI, we uncover a history rich with innovation and breakthroughs. From the initial, simple algorithms to today's sophisticated models capable of running directly on devices, the journey has been nothing short of remarkable. Take, for example, the recent announcements by Samsung and Google. Their new devices can translate phone calls between entirely different languages in real time, ramp up image and video quality, transcribe conversations, generate minutes of meetings, and manipulate photos and videos like a movie studio from the '80s. I say the '80s because there are a lot of funny artifacts produced when you try out these features. While Google has its own Arm-based processor for its Pixel devices, other device manufacturers have to rely on the industry giants to bring the power of AI to their devices. Qualcomm, a frontrunner in this revolution, has been instrumental in pushing the boundaries of what's possible, pioneering the development of sub-10 billion parameter models that bring generative AI to small and large devices without compromising performance. This evolution underscores a significant trend: the movement towards making AI ubiquitous, yet unobtrusive, in our daily lives.

Today, on-device AI is not confined to smartphones and tablets; those are just the most visible applications. Its footprint extends across various form factors, including wearables, automotive systems, and IoT devices. This widespread adoption has been made possible by leaps in AI algorithms and hardware optimization, allowing even the most compact devices to boast AI capabilities. This democratization of AI technology signifies a future where intelligence is embedded in every digital interaction, from how we drive to how we live and communicate. 

The perks

The benefits of on-device AI extend far beyond mere convenience. Enhanced privacy, reduced latency, and offline functionality are hallmark benefits. By processing data locally, these AI models ensure that sensitive information never leaves the device, offering a fortress of privacy in an era of increasing data breaches. Moreover, eliminating cloud dependency means instant responses to user commands, transforming the user experience into something seamless and intuitive. As it stands, truly instant responses are still some way off, since we're looking at the first iterations of AI models that respond directly to the user; 'near-instant' is the more appropriate term. Nevertheless, these benefits are not just theoretical; they manifest tangibly across all devices, making technology faster, safer, and more aligned with human needs.

True democratization cannot happen with proprietary AI models. Every manufacturer wants to keep its AI refinements close to its chest. The rise of open-source AI models will be the true game-changer in the democratization of AI technology. By making cutting-edge models accessible to all, they lower the barrier to entry for developers and small companies, sparking a wave of innovation that was previously confined to tech giants. This open ecosystem fosters a collaborative environment where advancements are shared, not hoarded, propelling the entire industry forward and ensuring that the benefits of AI are widespread and equitable.

The challenges

Despite its promise, the road to ubiquitous on-device AI is fraught with challenges. Hardware limitations, energy consumption, and ethical dilemmas such as bias and privacy concerns are significant hurdles. Not a day goes by without a news article showcasing how certain AI models were trained on copyrighted information. In fact, several major corporations are holding back on the mass adoption of popular AI models for this very reason. You wouldn't want to issue a recall or patch an AI feature because you later discovered that it was trained on data obtained illegally, especially not when you've charged a premium for the 'AI features'. Industry giants such as Google, Qualcomm, Samsung, and OpenAI are leading several initiatives in this domain, from collaborating with industry partners to investing in ethical AI research, a comprehensive approach to overcoming these obstacles and ensuring that on-device AI can scale responsibly and sustainably. You'll still come across the occasional article stating that even these giants have inadvertently used data they shouldn't have, but such instances should dwindle, especially since artists have started fighting back with poisoning tools that embed tiny pieces of data designed to corrupt the AI models trained on them.

Where else?

From healthcare monitoring via wearable devices to smart home automation and real-time language translation on smartphones, the applications of on-device AI are as diverse as they are transformative. No two bodies are the same; the polymorphism of certain genes across human populations results in different responses to certain medicines. So healthcare is one domain with huge potential for personalization, and your wearables will become much smarter in the coming years to enable personalized care.

While self-driving cars have been a reality for a while, not all of them come with the same smarts. That's changing thanks to the advanced driver-assistance systems (ADAS) that companies such as Qualcomm have pioneered. Real-time decision-making, automatic braking, lane-keeping, and pedestrian detection are some of the smart features creeping into premium cars. It won't be long before they're present even in the cheapest cars you can buy. After all, the hardware enabling these features isn't that cost-prohibitive anymore.

Smart home devices such as thermostats, lights, and security cameras have also started incorporating AI models, albeit not as smart as the ones on our phones. Portable diagnostic devices are also seeing AI models built into them. Patients with chronic conditions can be alerted in advance, and ailments can be detected well before they become chronic. Our life expectancies can only go up with such smart health monitoring devices. Whether it's providing critical health insights on the go or breaking down language barriers in real time, on-device AI is reshaping our world in profound ways.

The new normal

The impact of on-device AI extends far beyond mere technological innovation; it heralds a new chapter in our relationship with technology – a new normal where devices act as extensions of ourselves, understanding and anticipating our needs while safeguarding our privacy. As we stand on the brink of this new era, it’s clear that on-device AI is not just transforming our devices; it’s reshaping our world, making technology more personal, intelligent, and, ultimately, human. 

Mithun Mohandas is a Tech expert at Digit. 

Image Source: Freepik
