For many organisations, governance risks arise when deploying multiple AI agents across teams and systems – so what are the options and best practices? Veronica Martin spoke to Sid Bhatia, Area Vice President & General Manager, Dataiku, about how the Dataiku LLM Mesh is unique – and how it’s breaking new ground when it comes to compliance, accountability, and transparency.
What are the most common organisational challenges when deploying Agentic AI at scale?
“The most common challenges we’ve seen in the industry, as we speak to many CTOs and CIOs – particularly around the agent AI side – are as follows:
“Challenge number one is reliability. How reliable are these agents in the first place? Once you start implementing one agent, then two, and eventually thousands, the accuracy of the output often takes a backseat. These agents require constant monitoring and maintenance, and as a result, trust in the agents themselves can diminish”.
“Challenge number two is unification. With different providers, tools, models, and teams, every team tends to operate independently. There’s often no unified approach across the organisation, which hinders collaboration. For scaling AI efforts, a collaborative, well-governed environment is critical – and that’s frequently missing”.
“Challenge number three is legacy system integration. You can build a fantastic agent, but organisations still have CRMs, ERPs, HRMS, and other IT systems. Integrating agents with these systems is a major hurdle. Many companies are piloting and testing, but few move to full production. The engineering effort required to truly operationalise agents is significant”.
“Challenge number four is the cost. It’s expensive! Implementing the right infrastructure requires substantial investment. AI models also experience drift, requiring ongoing maintenance, which adds to costs. A recent MIT study highlighted that 95% of AI initiatives are not yet generating tangible value. That’s understandable for a new technology, but it does make it challenging to demonstrate direct ROI to senior management”.
“I would also add that the siloed nature of organisations poses a major challenge. Different departments and personas progress at different rates – some excel while others lag. There’s a lack of standardisation, and when running thousands of agents, the attack surface grows. This increases the risk of data breaches and underscores the need for robust global security and governance policies, which are often missing.”
What governance risks arise when deploying multiple AI agents across teams and systems?
“The answer to that starts with permissions. You need to ensure that permissions are managed effectively at scale because with so many agents, each requires access. Imagine a situation where an agent has unauthorised access: this could expose corporate or personal data, which would be a major problem. That’s point number one”.
“From a governance standpoint, the second key issue is accountability. Agent AI frameworks automate entire processes, but if an agent misfires, who is responsible? Different processes, departments, and people may be affected, making accountability difficult to establish. This remains an open challenge without clear solutions”.
“The third challenge is traceability. Many regulated industries are adopting agent AI, but automation can make it difficult to ensure traceability and auditability. This could result in significant compliance risks if not addressed properly”.
“Finally, security is critical. Security must be robust and uncompromising. One misstep could expose corporate IP or personal data, potentially causing a major PR disaster and irreparable damage”.
“All of these are real challenges that many companies face today when implementing agent AI, particularly from a governance perspective.”
How does the LLM Mesh help enforce compliance, accountability, and transparency across multiple AI agents?
“LLM Mesh is not a new concept; we actually introduced it at GITEX two years ago, but what’s happened recently is that we’ve taken LLM Mesh to the next level”.
“Currently, LLM workloads or customer use cases have two main phases: development and deployment. In the development phase, you typically start with one LLM. Because these setups are often hardwired, once a model is created and deployed, it becomes difficult to make changes. With LLM Mesh, you can now choose between multiple LLM services during development, which solves a key flexibility problem”.
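The flexibility described here – developing against an abstraction rather than hardwiring one LLM – can be pictured as a thin registry in front of the model providers. The sketch below is purely illustrative; the interface, provider names, and pricing are assumptions for the example, not Dataiku’s actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class LLMService:
    name: str
    cost_per_1k_tokens: float          # placeholder pricing for comparison
    complete: Callable[[str], str]     # provider-specific completion call

class LLMMesh:
    """Toy registry that decouples a use case from any single provider."""

    def __init__(self) -> None:
        self._services: Dict[str, LLMService] = {}

    def register(self, service: LLMService) -> None:
        self._services[service.name] = service

    def complete(self, prompt: str, service_name: str) -> str:
        # One call site: switching providers becomes a configuration
        # change at deployment time, not a rewrite of the use case.
        return self._services[service_name].complete(prompt)

mesh = LLMMesh()
mesh.register(LLMService("provider_a", 0.03, lambda p: f"[A] {p}"))
mesh.register(LLMService("provider_b", 0.0003, lambda p: f"[B] {p}"))

# The same use case can run against either backend unchanged.
print(mesh.complete("Summarise this contract.", "provider_a"))
print(mesh.complete("Summarise this contract.", "provider_b"))
```

Because each registered service carries its own cost figure, this same structure is also the natural place to hang the per-use-case cost monitoring mentioned next.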
“Another benefit is cost monitoring. You can track the cost of delivering a particular use case and compare it against other LLM providers, which might be a hundred times cheaper or a thousand times faster – giving real-time visibility into efficiency and spending”.
“We’ve also added several new features. Recently, Dataiku introduced build capabilities within LLM Mesh, enabling non-coders – who make up 95% of most organisations – to create modules. These are often business analysts who understand the business logic but previously lacked coding ability”.
“Additionally, we’ve incorporated orchestration tools to manage multiple workloads and LLMs across business processes. We’ve also implemented smart switching, which automatically routes a business problem to the most appropriate model or agent, without the user needing to know the underlying details”.
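Smart switching of this kind can be thought of as a routing function sitting in front of the model pool: the caller describes the problem, and the router picks the backend. The heuristic and model names below are made-up assumptions to illustrate the idea, not how Dataiku implements it.

```python
# Toy illustration of "smart switching": a router inspects the request
# and selects a model tier, so the user never names a model directly.
ROUTES = {
    "small": "fast-cheap-model",     # placeholder model names
    "large": "slow-capable-model",
}

def route(prompt: str) -> str:
    """Pick a model tier from simple, cheap-to-compute request features."""
    analytical = any(k in prompt.lower() for k in ("analyse", "compare", "why"))
    tier = "large" if analytical or len(prompt) > 500 else "small"
    return ROUTES[tier]

print(route("Translate 'hello' to French"))          # simple task -> small tier
print(route("Compare these two vendor contracts"))   # analytical task -> large tier
```

A production router would use richer signals (cost budgets, latency targets, model accuracy on the task type), but the shape is the same: routing logic centralised in one place, invisible to the user.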
“Finally, we’ve added a central library – what I’d call the ‘holy grail’ – containing the right agents, LLM services, and models. This allows businesses to implement agent use cases at scale through a well-defined interface, reducing shadow IT and creating an enterprise-grade environment. Essentially, this provides the foundation for scaling AI across the organisation efficiently and securely.”
How does Dataiku’s platform simplify integration and orchestration of multiple LLMs to deliver business outcomes?
“Dataiku’s positioning in this space, from an integration perspective, has really pushed the envelope. Ten years ago, Dataiku was already an end-to-end data science platform, catering to every persona, from data scientists and business analysts to data engineers, line-of-business managers, and cloud architects, all within a collaborative environment”.
“That essence hasn’t changed, but we’ve expanded the envelope by offering one unified platform that serves all these personas in a collaborative, well-governed environment, complete with robust security and governance policies”.
“The value proposition has also expanded: from competitive analytics to machine learning, generative AI, and now the broader AI game. All of this can be managed from a single interface, which is what differentiates us in terms of integration.”
“From a customer perspective, this makes life much easier. With thousands of out-of-the-box integrations, teams can focus on what truly matters – creating business value and solving real-world problems – rather than spending time on technical integration challenges.”
What makes the Dataiku LLM Mesh so unique?
“What makes Dataiku truly unique is its business-first approach. From the beginning, we’ve been focused on creating tangible business value for our customers. Our founders envisioned a platform that any company could embrace; after all, not every organisation has thousands of data scientists, and this is where Dataiku’s user-friendly, collaborative platform comes to the fore”.
“The value proposition is straightforward. Customers don’t need to worry about complex integrations. Unlike other solutions where you must stitch together hundreds of tools, Dataiku comes pre-integrated, reducing the total cost of ownership. Teams can focus entirely on driving business outcomes rather than technical overhead”.
“Another key differentiator is that the platform caters to every persona in the organisation. From visual agents for no-code users to coding agents for developers, everyone can leverage the same platform to deliver value”.
“In short, Dataiku is the ideal platform for customers who want to scale AI and drive business impact at scale.”
Image Credit: Dataiku