Opinion

“The potential for LLMs being plugged into enterprise platforms is phenomenal” – Dataiku

Kurt Muehmel, Everyday AI Strategic Advisor at Dataiku, has penned an exclusive op-ed for September’s edition of CNME, in which he makes the case for introducing large language models (LLMs) into enterprise platforms, describing their potential as ‘phenomenal’.

The potential for large language models (LLMs) that can be plugged into enterprise platforms is phenomenal.

Not surprisingly, a recent global Dataiku-Databricks study showed that almost two in three (64%) organizations were at some stage of evaluating generative AI for adoption in the next 12 months. Some 45% were already experimenting with it.

So, how do we approach adoption of these powerful technologies so they can become part of our Everyday AI culture?

There are two main ways to accomplish this.

The first would be APIs (application programming interfaces, which allow bespoke code to call an externally hosted model at runtime) exposed by cloud-native services. The second would be self-managed open-source models.

Let’s chat

Providers such as OpenAI, AWS, and GCP already offer public model-as-a-service APIs.

They have low entry barriers, and junior developers can get up to speed with their code frameworks within minutes. API-based models tend to be the largest and most capable LLMs, allowing more sophisticated and accurate responses on a wider range of topics.
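
For a sense of how low that barrier is, here is a minimal sketch of a call to one such hosted API, using OpenAI’s Python SDK. The model name and prompts are illustrative assumptions rather than recommendations, and authentication is reduced to an environment variable.

```python
# Minimal sketch: calling a hosted model-as-a-service LLM API.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute your provider's offering
    messages=[
        {"role": "system", "content": "You summarize customer support tickets."},
        {"role": "user", "content": "Ticket: customer cannot reset their password."},
    ],
)
print(response.choices[0].message.content)
```

A handful of lines like these is the whole integration, which is precisely why the entry barrier is so low.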

However, the hosted nature of the API may mean that data residency and privacy problems arise — a significant issue for privately owned GCC companies when it comes to regulatory compliance.

There are also cost premiums to an API, as well as the risk of a smaller provider going out of business and the API ceasing to operate.

So, what about an open-source model, managed by the organization itself?

There is a wide range of such models, each of which can be run on premises or in the cloud. Enterprise stakeholders have full control over the availability of the system.

But while costs may be lower for the LLM itself, setting up and maintaining one necessitates the onboarding of expensive talent, such as data scientists and engineers.
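
As a rough sketch of what self-managing can look like, the following loads an open-source model with the Hugging Face transformers library. The model name is an assumption for illustration; a production deployment would add serving, scaling, and monitoring layers, which is where that expensive talent comes in.

```python
# Minimal sketch: running a self-managed open-source LLM locally.
# Assumes transformers and accelerate are installed, and that the
# chosen model's weights and license suit your environment.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed model; swap as needed
    device_map="auto",  # spread weights across available GPUs/CPU
)

result = generator(
    "Summarize this ticket: customer cannot reset their password.",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```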

In the end, different use cases within a single organization may require different approaches. Some may end up using APIs for one use case and self-managed, open-source models for another. For each project, decision makers must look to a range of factors.

They must consider risk tolerance when using the technology for the first time, choosing a business challenge where the department can absorb the consequences of a misstep.

Looking to apply LLM tech in an operations-critical area is ill-advised. Instead, look to provide a convenience or efficiency gain to a team.

Finally, traditional NLP techniques that don’t rely on LLMs are widely available and can be well adapted to specific problems.
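
As an illustration, a routing or tagging problem can often be handled by a classic text classifier. The sketch below uses scikit-learn with invented toy data; a real project would train on a properly labelled dataset.

```python
# Minimal sketch of a traditional NLP approach: TF-IDF features
# feeding a logistic regression classifier, no LLM involved.
# The texts and labels are toy data invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund my order", "reset my password",
         "cancel my subscription", "login keeps failing"]
labels = ["billing", "account", "billing", "account"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I forgot my password"]))  # expected: ['account']
```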

The importance of moderation

Following on from the risk issue, every LLM product should be subject to human review. In other words, the technology should be seen as an extraordinary time-saver for first drafts, but organizations should retain their review structure to ensure accuracy and quality.

Let LLMs work to their strengths: they are best used for generating sentences or paragraphs of text. It is also necessary to have a clear definition of what success looks like.

What business challenges are being addressed and what is the preferred — and preferably, measurable — outcome? Does LLM technology deliver this?

Discussions of business value bring us neatly to a further consideration that applies to the entire field of artificial intelligence and to matters of ESG (environmental, social, and governance) — responsible use.

Organizations that build or use LLMs are duty-bound to understand how the model was built.

Every machine-learning and neural-network model that has ever existed was only as accurate, equitable, and insightful as the data used in its construction.

If there was bias in the data, then there will be bias in the LLM products.

Responsible AI does not just cover the general public. What of the employee? LLM builders must have an appreciation of the model’s impact on end users, whether these are customers or employees.

For example, ensuring that users know they are interacting with an AI model is critical. It is helpful to be very plain with users on how and where models are used and be open with them about drawbacks, such as those regarding accuracy and quality.

The principles of responsible AI dictate that users have the right to full disclosure so that they can make informed decisions on how to treat the product of a model.

Governance and accountability

Many of these issues are addressed through a robust governance framework. Processes for approving which applications are appropriate uses for each technology are an indispensable part of an Everyday AI culture.

The rules of responsible AI make it plain that individual data scientists are not the right decision makers for which models to apply to which use cases.

Their technical expertise is invaluable input, but they may not have the right mindset to account for wider concerns. Those who do make the decisions should set up policies that can be followed without laborious consultation, and they should be held accountable for the results.

As with all business decisions, it is important not to run and join the LLM procession just because you hear the band playing.

Wait, watch, evaluate. And then make the moves that are right for your organization. LLM technology has a place in the modern enterprise. Make sure you place it well.

