Digital transformation with generative AI and semantic models

“Today, technology allows farmers to create more with fewer resources,” the CEO of John Deere, a global agri-solutions manufacturing company, said in his CES 2023 keynote. Since 2019, the organization has been building solutions that use technologies like computer vision, advanced sensing, machine learning, and data analytics, embracing a smart industrial strategy based on Industry 4.0 to drive its transformation journey.

One of the ways John Deere helps farmers with technology solutions is through its Operations Center platform. The platform gives farmers access to data and insights from their equipment, which helps them make better decisions about their operations. The “See & Spray” innovation exemplifies the successful collaboration of technology, enterprise, and consumers. See & Spray is a precision spraying technology that uses cameras, computer vision, and machine learning to identify weeds and spray only them, leaving crops unharmed. This can help farmers reduce herbicide use by up to 77% and save millions of gallons of water.

This is not the only example of innovative use of the latest technology trends. Many organizations, particularly in manufacturing, are putting serious effort into embracing new technologies to benefit end consumers.

Many of these use cases, which were never thought possible before, are becoming a reality due to accelerated technological development and affordability. 

The transformation model

[Figure: The transformation model, a layered diagram of software components]

The top layer comprises the applications and insights that give consumers greater autonomy. It relies heavily on the enterprise data housed in a modern data stack, which can take any form: structured, unstructured, historical, real-time, and so on.

The third layer provides connectivity between the enterprise’s various hardware, machinery, equipment, sensors, processes, and software, and harnesses the enormous volumes of data generated by multiple functions into the modern data stack.


 

The foundation layer is the enterprise knowledge generated from ERP software, warehouse management, order fulfillment, and consumer usage and behavior, along with the differentiation created through standard operating procedures, best manufacturing practices, quality control processes, and various documents.

The gap (Information Technology + Operational Technology)

While enterprises strive to digitalize and automate every aspect of the shop floor, operational technology is undergoing a huge transformation.

At every step, the real-time data emitted by sensors and processes can generate crucial insights to boost productivity. The volume of data generated in manufacturing processes is enormous, and more than 90% of it is unstructured.

Over the last few decades, enterprises have matured in digitalizing business applications through information technology.


However, there is a strong need in the market to bring information technology and operational technology together to generate better insights.

Enter Generative AI

The recent manifestation of generative AI through ChatGPT, Bard, DALL-E, and the like has triggered the ‘art of the possible’ in generating insights from unstructured data. With the help of large language models (LLMs), enterprises can harness the knowledge layer to inform critical business decisions. Through summarization, Q&A, findability, and similar capabilities, enterprises can augment their expertise with the help of machines.

Retrieval-Augmented Generation

Retrieval-augmented generation (RAG) is a technique in natural language processing (NLP) that combines the strengths of retrieval-based models and generative models to improve the quality and relevance of generated text.

Retrieval-based models are good at finding relevant information in large datasets, while generative models are good at creating new text. RAG first uses a retrieval model to find the information most relevant to a query and then feeds it to a generative model, which produces new text grounded in the retrieved content.

RAG is effective for various NLP tasks, including question answering, summarization, and machine translation. For example, in question answering, RAG can generate more comprehensive and informative answers by giving the generative model access to relevant information within the enterprise or from the web.
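
To make the retrieve-then-generate loop concrete, here is a minimal sketch in Python. It assumes an embedding model and an LLM completion endpoint are available; the embed and generate callables are hypothetical stand-ins, not the API of any platform named in this article.

    # Minimal RAG sketch (illustrative only). It assumes an embedding model and an
    # LLM endpoint exist; `embed` and `generate` are hypothetical stand-ins, not
    # the API of any platform mentioned in this article.
    import math
    from typing import Callable, List, Tuple

    def cosine(a: List[float], b: List[float]) -> float:
        # Cosine similarity between two embedding vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def retrieve(query_vec: List[float],
                 index: List[Tuple[str, List[float]]],
                 k: int = 3) -> List[str]:
        # Rank stored document chunks by similarity to the query embedding.
        ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

    def answer(question: str,
               index: List[Tuple[str, List[float]]],
               embed: Callable[[str], List[float]],
               generate: Callable[[str], str]) -> str:
        # 1. Retrieval: find the chunks most relevant to the question.
        context = retrieve(embed(question), index)
        # 2. Generation: ground the LLM in the retrieved context.
        prompt = ("Answer using only the context below.\n\nContext:\n"
                  + "\n---\n".join(context)
                  + "\n\nQuestion: " + question)
        return generate(prompt)

In practice, the index would be built offline by chunking enterprise documents and storing their embeddings in a vector store; the sketch keeps it as a simple in-memory list for clarity.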

The use case

Imagine a troubleshooting scenario for an injection molding machine that would otherwise require a company expert to visit the site and diagnose the problem.

What if the expert is not available at the time? What if, instead, a support engineer took the help of generative AI and carried out the following steps (a minimal orchestration sketch follows the list):

  • Uses computer vision to identify the machine’s make and model.
  • Accesses the product catalog and retrieves the troubleshooting manual.
  • Follows the steps suggested by the app to carry out inspection and diagnosis.
  • Views troubleshooting videos retrieved from the knowledge database using an AR/VR tool on site.
  • Accesses the service history and known-error database to arrive at concise root causes and fix the issue.
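
A hedged sketch of how such a chained workflow might be orchestrated is shown below. Every callable passed in (identify_machine, fetch_manual, and so on) is a hypothetical placeholder for an enterprise service such as a vision model, product catalog, or knowledge base; none is a real API.

    # Hypothetical orchestration of the troubleshooting steps above. Every callable
    # passed in is a placeholder for an enterprise service (vision model, product
    # catalog, knowledge base, service-history system); none is a real API.
    from typing import Callable, Dict, List, Tuple

    def troubleshoot(machine_photo: bytes,
                     symptom: str,
                     identify_machine: Callable[[bytes], Tuple[str, str]],
                     fetch_manual: Callable[[str, str], str],
                     diagnose: Callable[[str, str], str],
                     find_videos: Callable[[str, str], List[str]],
                     match_known_errors: Callable[[str, str, str], List[str]]) -> Dict[str, object]:
        make, model = identify_machine(machine_photo)              # computer vision step
        manual = fetch_manual(make, model)                         # product catalog lookup
        diagnosis = diagnose(manual, symptom)                      # LLM-guided inspection and diagnosis
        videos = find_videos(make, model)                          # AR/VR troubleshooting videos
        root_causes = match_known_errors(make, model, diagnosis)   # service history + known-error DB
        return {"diagnosis": diagnosis, "videos": videos, "root_causes": root_causes}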

This is just one sample; the use cases for generative AI are numerous and vary from industry to industry. We have only started scratching the surface of this innovative new tool, which is still maturing.

Bring method to madness!

Every enterprise has realized the potential of generative AI. The technology is more than hype: it makes everyone think differently about problems that contemporary technologies could not solve earlier.

When traditional (predictive) AI started gaining a foothold in enterprises, it was largely confined to the IT department and the data analytics function within it. The enterprise data management group defined the AI strategy, created the common infrastructure, and built data lakes to harness data and deploy AI models for use. The applications depended largely on structured data, which was in the custody of the data analytics units and had to be governed to avoid redundancies.


Generative AI, by contrast, has permeated every aspect of the enterprise: not only data management teams but also app development, integration, DevOps, customer experience, and infrastructure units, as well as business functions such as CRM, customer support, and marketing. Every unit is bubbling with innovative ideas for applying generative AI to its own use cases and participating in the transformation journey. The results will be tangible and will directly impact business outcomes; hence, everyone is queuing up.

If not handled with maturity, this will very shortly lead to a chaotic situation. While generative AI matures, enterprises are still experimenting with different proprietary and open-source GPTs, unsure which to use. Redundancy in creating data products and in fetching pre-validated unstructured information are some of the challenges already surfacing.

There is immense pressure on IT departments to be hyper-agile and to build governance models with design patterns, reusability, and service discovery capabilities, all within the bounds of uncompromised information security inside and outside the organization and with ease of provisioning. The euphoria and the parallel rush to build products that bridge this gap are taking a toll on enterprise software product and platform providers, as no single tool or platform can satisfy every enterprise’s aspirations.

Enterprise strategy to build a Gen AI platform

There is no magic wand that will solve the challenges in this space. The good news, however, is that the technology is fast evolving towards maturity.

Enterprises need the following capabilities in their enterprise AI platforms:

  • Model repository and configuration management
  • Automation in model training/retraining and deployment
  • Model observability for drift
  • Model security, explainability, and auditability
  • Composable apps/services
  • Workflow/chaining

The top-tier players in this space have complemented their Gen AI platforms with most of the capabilities listed above. Amazon Bedrock, IBM watsonx, Microsoft Fabric, Azure OpenAI Service, and Google Vertex AI are some of the key platforms that help build and manage generative AI apps at the enterprise level. Frameworks like LangChain are gaining popularity as they aid in building templatized AI applications.
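
As a hedged, framework-agnostic illustration of what “templatized” means here (deliberately not using LangChain’s actual API), a reusable prompt template might look like the following; the template text and the generate callable are assumptions for the sake of the example.

    # A minimal, framework-agnostic sketch of a reusable prompt template, in the
    # spirit of frameworks like LangChain (this is not LangChain's API). The
    # template text and `generate` callable are illustrative assumptions.
    from string import Template
    from typing import Callable

    SUMMARY_TEMPLATE = Template(
        "You are an assistant for the $department team.\n"
        "Summarize the following document in $max_words words or fewer:\n\n"
        "$document"
    )

    def summarize(document: str, department: str, max_words: int,
                  generate: Callable[[str], str]) -> str:
        # The same template is reused across departments and use cases.
        prompt = SUMMARY_TEMPLATE.substitute(
            department=department, max_words=max_words, document=document)
        return generate(prompt)

The same template can then be reused by different teams with different parameters, which is the kind of composability and reuse a governance model should encourage.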

Enterprises must embrace the fundamental principles of digital product engineering:

  • Solve business problems with a productized approach.
  • Build solutions like digital products that can be extended beyond the current use case, within and outside the organization.
  • The solution must be built using scalable, modular, cloud-native, microservices-based, event-driven architecture and serverless capabilities.
  • The architecture must support multi-tenancy.
  • Follow agile methodology equipped with DevOps/CI-CD and automated testing for scaling purposes.

The challenges of generative AI and how to overcome them

The technology is still fresh and constantly evolving, so it is no surprise that it comes with caveats.

  • Hallucination – occasionally, the model may return irrelevant or even fabricated responses that appear true but are not.
  • Generic/bookish knowledge – the insights are generated from a large but finite and generic training data set.
  • No guardrails – enterprises want to apply guardrails governing who accesses what information, what to dispose of, and what not to (a minimal access-filtering sketch follows this list).
  • Lack of context – generative AI works largely on unstructured data, which lacks context. The next thing an enterprise looks for is how to narrow the response using concise knowledge supplied as context.
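
One common mitigation for the guardrail and context gaps above is to filter retrieved knowledge against the caller’s entitlements before it ever reaches the model. The sketch below is illustrative only; the roles, clearance levels, and sensitivity labels are assumptions, not features of any specific product.

    # Illustrative guardrail: filter retrieved chunks by the caller's role before
    # they are passed to the LLM. The roles, clearance levels, and 'sensitivity'
    # labels are assumptions, not features of any specific product.
    from typing import Dict, List

    ROLE_CLEARANCE: Dict[str, int] = {"operator": 1, "engineer": 2, "quality_lead": 3}

    def apply_guardrails(chunks: List[dict], user_role: str) -> List[str]:
        # Each chunk carries a 'sensitivity' level assigned at ingestion time;
        # only chunks at or below the caller's clearance reach the prompt.
        clearance = ROLE_CLEARANCE.get(user_role, 0)
        return [c["text"] for c in chunks if c.get("sensitivity", 99) <= clearance]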


The RRR of the New Frontier

As enterprises mature in experimenting with and implementing generative AI solutions, there are three R’s from which every enterprise would benefit:

  • Relevance – new solution architectures will combine structured data with insights generated from unstructured data and feed both to generative AI models, so the output is “relevant” rather than generic.
  • Reasoning – most use cases around unstructured knowledge are about understanding entities and concepts, their attributes, and the complex relationships among them. A semantic model that captures this network of relationships across the enterprise will help explain the “reasoning” behind certain events, defects, faults, failures, results, and achievements, and help take corrective actions before they occur.
  • Recommendation – once the enterprise brings unstructured and structured data together, the new frontier will be building machine learning models that forecast the next link in a relationship, whether structured or unstructured, and generate fairly accurate recommendations.

Importance of semantic models

Using generative AI alone to extract insights from unstructured data is like looking for a needle in a haystack. Enterprises must start building semantic models for their business and data.

The next crucial step is to converge generative AI-driven insights from unstructured data with the structured relationship between business entities and enterprise data. 

Relevant structured data will provide the much-needed “context” to the problem statement and allow the enterprise to generate intelligent insights through in-context learning.
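
As a hedged sketch of what such in-context learning could look like, facts held in the semantic model as subject-predicate-object triples can be rendered into plain language and prepended to the prompt so the model answers against enterprise context rather than generic knowledge. The example triples, wording, and generate callable below are illustrative assumptions.

    # Illustrative only: render semantic-model facts (subject, predicate, object
    # triples) into plain-language context for the prompt. The example wording
    # and the `generate` callable are assumptions for the sake of the sketch.
    from typing import Callable, List, Tuple

    Triple = Tuple[str, str, str]

    def triples_to_context(triples: List[Triple]) -> str:
        # e.g. ("Line-3 press", "hasComponent", "hydraulic pump")
        #   -> "Line-3 press hasComponent hydraulic pump."
        return "\n".join(f"{s} {p} {o}." for s, p, o in triples)

    def grounded_answer(question: str,
                        triples: List[Triple],
                        generate: Callable[[str], str]) -> str:
        prompt = ("Known facts from the enterprise semantic model:\n"
                  + triples_to_context(triples)
                  + "\n\nUsing only these facts, answer: " + question)
        return generate(prompt)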

The ultimate path will lead to a unified analytics platform using LLMs and Semantic Models to achieve hyper-personalization.

Ajay Malgaonkar is a Next100 2023 awardee and heads engineering for Prolifics.
