Oracle announced the general availability of HeatWave GenAI, which includes the industry's first in-database large language models (LLMs), an automated in-database vector store, scale-out vector processing, and the ability to have contextual, natural-language conversations informed by unstructured content. These new capabilities enable customers to bring the power of generative AI to their enterprise data without requiring AI expertise or having to move data to a separate vector database. HeatWave GenAI is available immediately in all Oracle Cloud regions, Oracle Cloud Infrastructure Dedicated Region, and across clouds, at no extra cost to HeatWave customers.
With HeatWave GenAI, developers can create a vector store for enterprise unstructured content with a single SQL command, using built-in embedding models. Users can perform natural language searches in a single step using either in-database or external LLMs. Data doesn’t leave the database and, due to HeatWave’s extreme scale and performance, there is no need to provision GPUs. As a result, developers can reduce application complexity, increase performance, improve data security, and lower costs.
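To make the two-step workflow above concrete, the following is a minimal sketch in SQL. The sys.VECTOR_STORE_LOAD and sys.ML_RAG routine names follow HeatWave GenAI's in-database stored procedures, but the object storage URI, table name, sample question, and option keys shown here are illustrative assumptions, not a verbatim reference.

    -- Illustrative sketch only; the bucket URI, table name, and option keys are assumptions.

    -- Step 1: create a vector store from unstructured files with a single command.
    -- HeatWave ingests the documents, chunks them, and generates embeddings with a built-in model.
    CALL sys.VECTOR_STORE_LOAD(
        '["oci://my-bucket@my-namespace/docs/"]',         -- hypothetical source location
        '{"table_name": "demo_db.docs_vector_store"}'     -- hypothetical target vector store table
    );

    -- Step 2: ask a natural language question in a single step.
    -- Retrieval-augmented generation searches the vector store and answers with an in-database LLM.
    SET @options = JSON_OBJECT('vector_store', JSON_ARRAY('demo_db.docs_vector_store'));
    CALL sys.ML_RAG('What does our documentation say about enterprise support tiers?', @response, @options);
    SELECT JSON_PRETTY(@response);

Because both the embedding generation and the LLM inference run inside HeatWave in this flow, no data leaves the database and no separate vector database or GPU provisioning is involved.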
“HeatWave’s stunning pace of innovation continues with the addition of HeatWave GenAI to existing built-in HeatWave capabilities: HeatWave Lakehouse, HeatWave Autopilot, HeatWave AutoML, and HeatWave MySQL,” said Edward Screven, chief corporate architect, Oracle. “Today’s integrated and automated AI enhancements allow developers to build rich generative AI applications faster, without requiring AI expertise or moving data. Users now have an intuitive way to interact with their enterprise data and rapidly get the accurate answers they need for their businesses.”
“HeatWave GenAI makes it extremely easy to take advantage of generative AI,” said Vijay Sundhar, chief executive officer, SmarterD. “The support for in-database LLMs and in-database vector creation leads to a significant reduction in application complexity, predictable inference latency, and most of all, no additional cost to us to use the LLMs or create the embeddings. This is truly the democratization of generative AI and we believe it will result in building richer applications with HeatWave GenAI and significant gains in productivity for our customers.”