Video
Overview: The Perforce Delphix MCP Server
An MCP Server is a game-changer for data operations, turning tedious, manual processes into seamless, intelligent workflows. Imagine asking a simple question, like "Is my test data current?", and getting an instant, actionable answer. With AI-powered automation, the Perforce Delphix MCP Server can help users achieve new data efficiencies.
In this video, Delphix expert Jatinder Luthra walks through example use cases to show how the MCP Server can help you and your teams in four ways:
- Simplify Data Management: Use natural language queries to access and manage data effortlessly, without needing specialized knowledge or navigating complex interfaces.
- Automate Workflows: Rely on AI-powered automation to handle tasks like checking data readiness and troubleshooting refresh jobs, thereby saving time and reducing errors.
- Enhance Collaboration: Generate shareable, formatted reports and insights to align teams and streamline communication across development, QA, and operations.
- Ensure Compliance: Perform compliance checks, like verifying data masking and readiness, to maintain secure and protected environments.
Experience the Delphix MCP Server for Yourself
Want to see how the Delphix MCP Server could fit into your data operations? Get a custom demo from a Delphix expert.
Full Transcript
Hello, everyone. This is Jatinder Luthra. And today, I want to show you something that fundamentally changes how we think about data operations at scale. What you are about to see isn't just a new interface for Delphix.
It's a transformation in how teams interact with their data infrastructure. We are moving from a world where data operations require specialized knowledge and manual navigation through UIs to one where natural language unlocks the full power of the Delphix platform. The subtitle here, "from clicks to conversations," perfectly captures something really important. Right now, provisioning a VDB, checking compliance status, or refreshing test environments requires knowing exactly where to click, understanding Delphix terminology and data relationships, manually coordinating multiple steps across different screens, and tribal knowledge about policies, naming conventions, and operational procedures.
This creates bottlenecks. Data operations teams become gatekeepers, not by choice, but by necessity. They are the only ones who know how to navigate these systems effectively.
The Delphix MCP server changes this equation. It democratizes data operations knowledge by making Delphix's powerful capabilities accessible through natural language. But more importantly, and this is what makes it transformative, it doesn't just let you ask questions. It enables intelligent, data-aware automation that can reason about your environment and orchestrate complex workflows autonomously. This isn't about replacing clicks with chat. It's about moving from manual operations to intelligent orchestration.
Let's first talk about some of the challenges we are facing currently. Let me start with the very first problem: fragmented data context. This is what we call the scattered puzzle problem. Here's what happens in organizations using Delphix today. Information about your data state exists, but it's scattered everywhere across your Delphix landscape. The VDB configuration lives in Delphix. Job status requires logging into the UI. And whether the data is current or not?
That's usually a Slack conversation. Which environment uses which snapshot? Someone on your team probably knows, but it's tribal knowledge. So when an engineer needs to answer a very simple question, is my staging environment using current production data?
They become detectives, manually assembling the pieces from five different sources, hoping they didn't miss something critical. This fragmentation creates our second problem: constant context switching. Let me show you what it really looks like in practice.
An engineer asks a simple question: is my test data current? Seems straightforward, right? Here's what actually happens.
First, open the Delphix UI to log in and navigate. Then check snapshot details by clicking through. Then verify the refresh job actually worked by filtering through the job history. And now switch to GitHub to compare timestamps with your last deployment. The time to answer a simple question, accumulated over weeks, turns into a real productivity challenge. And this constant context switching leads directly to a third problem: we simply don't know if our environments are ready. And here's where it can get really expensive. We have automated everything in our deployment pipelines except knowing if we can actually test.
Look at this modern CI/CD pipeline. It's pretty beautiful. Everything is automated, from code checkout to application deployment. And then we get to: is the data ready? The pipeline just hopes for the best. Let me show you three scenarios we see constantly. The very first is stale data: your pipeline runs tests against a two-week-old snapshot. Failed refresh: the overnight refresh job fails silently, but the pipeline proceeds assuming fresh data. Schema drift is the third one: you deploy new schema changes, but the VDB wasn't refreshed afterward, so your application expects new columns that don't exist. Production-like testing becomes impossible.
So you see the pattern here: deploy fast, wait hours, and debug data issues. I call this sophisticated automation that flies blind. We automate everything except knowing if we can actually test. Data readiness is still a guessing game, and that's a problem we need to solve.
So let's recap: fragmented data context, constant context switching, and mystery readiness. Together, these three problems cost organizations productivity. But what if there is a different way?
Thanks to advancements in the AI/ML field, we have MCP servers. Let me quickly explain what MCP is and how it works. MCP stands for Model Context Protocol. It's an open standard that connects AI systems to external tools like Delphix.
Let me give you a very common example. Let's say you want your LLM, maybe through ChatGPT or Claude or whatever you're using, maybe locally, to manage your Google Calendar. That LLM doesn't have any idea what your Google Calendar looks like. So what do you really do? How can you connect your LLM to your Google Calendar?
You get the Google Calendar MCP server, and the LLM uses that MCP server to look into your Google Calendar and help you manage it. That's pretty much the context around MCPs. In Delphix terms, let's take this flow. A user or automated system asks a question in natural language: is the payments VDB using current data?
In this case, an AI agent like Claude, ChatGPT, Windsurf, or any other LLM you're using understands the intent and plans what data it needs. The MCP server acts as a bridge, translating that intent into specific Delphix API calls. It queries those virtual databases, snapshots, jobs, whatever is needed. That data flows back to the AI agent, which analyzes, correlates, and reasons about it, and you get the result, which is an intelligent response with full context.
Here is the key difference from traditional APIs. Instead of needing to know the exact endpoints and parameters, you just ask in natural language and the AI figures it out. MCP transforms Delphix from a platform you manually query into an intelligent layer that understands the context. So let's look at these different data operations and how MCP becomes that intelligent layer.
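To make that bridge concrete, here is a minimal sketch of how an MCP server can expose a read-only Delphix-style query as a tool, using the open-source MCP Python SDK. The tool name, the Data Control Tower endpoint path, and the authorization header are assumptions for illustration, not the actual code of the Delphix MCP server:

```python
# Minimal, illustrative MCP server exposing one read-only Delphix-style tool.
# Assumes the official "mcp" Python SDK and httpx are installed, and that
# DCT_BASE_URL / DCT_API_KEY point at a Data Control Tower instance.
# The endpoint path and auth scheme below are placeholders, not the real server.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("delphix-demo")

@mcp.tool()
def list_vdbs() -> list[dict]:
    """List virtual databases known to Data Control Tower."""
    resp = httpx.get(
        f"{os.environ['DCT_BASE_URL']}/v3/vdbs",  # hypothetical endpoint
        headers={"Authorization": f"apk {os.environ['DCT_API_KEY']}"},
    )
    resp.raise_for_status()
    # The AI agent receives this structured result and reasons over it.
    return resp.json().get("items", [])

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP client can launch and call it
```

An MCP client such as Claude Desktop discovers tools like this automatically, and the model decides when to call them based on the user's question.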
Instead of all that context switching, we now work with Delphix directly from our natural workflow. We get conversational discovery: ask questions in plain English about data availability. We get ambient intelligence: you receive data insights without leaving your development environment.
We get proactive awareness: you understand the data state before making decisions. And it enables team collaboration: you can share data context easily with developers and testers, also in simple natural language.
Let's start with a demo. Let's see a day in the life of our QA lead, Sarah. Sarah starts her day with a morning stand-up and ends with a compliance audit. A pretty full day. In the demo, we will show one prompt per scene, and we will also do a live MCP interaction and see how it flows into the system. I already have the MCP server configured in my Claude Desktop, so we're going to use that MCP server to walk through a day in Sarah's life. One thing to understand: to use this MCP server, you need Delphix Data Control Tower, which I already have configured, with multiple infrastructure connections across multiple Delphix engines.
I also have virtual databases, dSources, and all the other things you need in Delphix. So I have an environment that simulates what a real environment looks like, and we're going to use it for the demo.
And when it comes to the MCP server, as I said, MCP servers are open source. We have this MCP server available on GitHub, and you can configure it in your Claude Desktop or in any LLM client you want to use with an MCP server. You will find more details about the configuration on the GitHub page.
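For reference, Claude Desktop registers MCP servers in its claude_desktop_config.json file, and an entry generally has the shape below. The command, package name, and environment variables here are placeholders; use the values documented on that GitHub page:

```json
{
  "mcpServers": {
    "delphix": {
      "command": "npx",
      "args": ["-y", "<delphix-mcp-server-package>"],
      "env": {
        "DCT_BASE_URL": "https://dct.example.com",
        "DCT_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Let's start with the demo.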
The very first thing: Sarah has a morning stand-up, and she needs to get a list of the test environments that are running. She needs to build a report out of it. Since Sarah has the MCP server configured for Data Control Tower in Delphix, we'll simply put the information she needs into the prompt and wait for the response.
As you can see, the prompt is very specific: looking for QA environments related to credit applications and when they were last refreshed. You can be as specific as you want, or very generic: give me a list of virtual databases and when they were last refreshed. The LLM will handle it for you with the MCP server, getting all the details from your Data Control Tower and across your Delphix landscape. We found two virtual databases in the QA environment. We get the names of those virtual databases, and they are for credit applications.
One ends with QA, one with QAR, and we get a lot of good detail around when each was last refreshed. So that's good. Sarah has the report she needs for today's morning stand-up. But what we really see here is cross-object visibility.
Virtual databases, environments, snapshots: you get everything with one single prompt, plus time-based freshness context about when each was last refreshed. We didn't do any UI login for that. We just got the answer from our LLM.
Let's move to the next scenario. Sarah needs to set up testing for a new payment processing feature. She wants to find out what data is available for the banking module from the last thirty days. So let's give the prompt to the MCP server in our Claude Desktop and wait for the response.
And if you think about it: if you needed to do the same task across your Delphix landscape, with multiple Delphix engines across different modules, you would have to go through a different set of steps to get what you're trying to get out of it. But with this, you get the answer by simply passing the prompt. That, in general, is the power of MCP servers: how you can get information faster out of any system they are connected to. In this case, it's Delphix.
Alright. From the payments database, it starts showing us all the different available snapshots, along with the name of the source and the associated virtual databases. That's a good chunk of information. It's good context around what we're really looking for, and Sarah can move ahead with confidence to plan the critical task she's working on.
So what it really does: snapshot filtering by natural-language dates, and data lineage, which you can see here. You get exactly what's needed, saving a lot of time in pulling that information from your Delphix systems. Another scenario: troubleshooting, and it's very common. Show me the refresh jobs for the QA VDB in the last twenty-four hours.
Were there any issues? Did the refresh fail or succeed? It can be answered easily. By checking the job history, using the monitoring logs, and doing error detection, we'll get the root cause in minutes.
If it's a failure, we'll see why the failure happened. In this case, luckily, we don't have any failures. So we see that in the last twenty-four hours, the refreshes were successful. And we even get context around when the last refresh happened before that twenty-four-hour window, and we see it was at ten thirty.
So we get a good summary of what is needed. That brings us to scenario four: cross-team collaboration. Say the dev team needs to know what virtual databases are available for a specific environment. A developer asks which data version to use, and Sarah has to get that information.
And it has to be collaborative. It has to be something that can be shared as a document, and that's what this prompt does. We are asking for the data lineage and for the different snapshots with last refresh times, but at the same time, we are asking it to format this for sharing with the development team.
Let the LLM do the job. Let it use the MCP server, get what's needed from the Delphix system, and then curate it the way it's needed. In our case, we need to share it with the team. Let's see what it ends up with.
So we see the document is being created. Alright, it's ready. Hmm, it seems to have failed to load the document, so we can tell the LLM: let's make it more interactive.
So we got our virtual database status report. That looks pretty impressive.
We got the executive summary, statistics, QA environments, development environments. We got the data lineage and different connection details. And that's a nice touch: for questions or refresh requests, contact the database administration team.
We see how we can build this document just by using the MCP server. That's pretty cool. So what it really delivers: multi-environment visibility, instantly shareable documentation, and team alignment. It introduces so much collaboration, where teams can build, create, and share materials to pass information across different teams or within a team.
And the last scene comes to compliance. Sarah, being a good citizen, goes the extra mile to make sure that when she starts testing, she's using a golden copy of the VDB that is compliant for any kind of lower environment.
So for her compliance audits and her testing, she passes this information and looks for what's been masked. As we can see, she also put in the intent: I would like to make sure VDBs are compliant before I create child VDBs out of them.
That's where the intent is important. The LLM understands what you're really trying to get out of it. The more expressive you are with LLMs, the better the results you get. So we are getting a report back.
It found twelve masked virtual databases and compared their last refresh times with masking completions. So it's recommending which ones are safe to use as compliant VDBs and which ones are not safe, the ones that are not masked. There might be some quick hotfix environments for production or other use cases, but those are not really for development work. So we see it really gives us the answers we are looking for without investigating, without going through multiple sources.
Alright, so let's move back. We just got compliance visibility at our fingertips by simply passing the prompt and using the power of LLMs with the MCP server.
So moving forward: what we just witnessed is a change in the mode of work, not an incremental tweak. We see zero context switches for data discovery, less than five minutes to answer complex data questions, informed decisions in real time, and data knowledge truly democratized across different teams. Let's summarize everything.
What problems do we solve? MCP exposes Delphix context conversationally: ask, and get your precise answers in one single place. That turns into faster cycles, fewer failures, and predictability. You have the data context where you work, not where you have to go, and we convert this whole process from clicks to conversations.
When we talk about ROI altogether, with an MCP server and Delphix capabilities we get time savings, productivity gains, and quality improvements, which lead to faster releases and lower defect leakage. And in the end, these capabilities build on the read-only operations of the MCP and Delphix foundations shown today.
We will have a complete MCP server soon to manage all operations in Delphix. What's really next? What's the future of this? This can turn into automated workflows, where AI agents monitor your nightly production snapshots, refresh your QA environments, and notify you when they're ready for testing. It can be predictive insights based on current VDB usage patterns: which environment should you bookmark before the weekend?
It can be policy enforcement, where you ensure no VDBs in production environments are using snapshots older than seven days. So if you remember that mystery of environment readiness in the modern CI/CD pipeline, the MCP server will allow us to make real-time decisions as part of your DevOps pipelines, which we will see in upcoming demos very soon. Follow Perforce for future updates. Thanks for watching.
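As a closing illustration of the seven-day freshness policy mentioned in the transcript, a pipeline gate could look something like the sketch below. This is purely illustrative: the endpoint paths, the creation_date field, and the authorization scheme are assumptions, not the shipped Delphix automation.

```python
# Illustrative CI gate: fail the pipeline step if any VDB's newest snapshot
# is older than seven days. Endpoint paths, field names ("creation_date"),
# and the auth scheme are assumptions for this sketch, and timestamps are
# assumed to be timezone-aware ISO-8601 strings.
import os
import sys
from datetime import datetime, timedelta, timezone

import httpx

BASE = os.environ["DCT_BASE_URL"]
HEADERS = {"Authorization": f"apk {os.environ['DCT_API_KEY']}"}
MAX_AGE = timedelta(days=7)

def newest_snapshot_age(vdb_id: str) -> timedelta:
    """Return the age of the most recent snapshot for one VDB."""
    resp = httpx.get(f"{BASE}/v3/vdbs/{vdb_id}/snapshots", headers=HEADERS)
    resp.raise_for_status()
    newest = max(
        datetime.fromisoformat(snap["creation_date"])
        for snap in resp.json()["items"]
    )
    return datetime.now(timezone.utc) - newest

vdbs = httpx.get(f"{BASE}/v3/vdbs", headers=HEADERS).json()["items"]
stale = [v["name"] for v in vdbs if newest_snapshot_age(v["id"]) > MAX_AGE]

if stale:
    print(f"Blocking deploy, stale VDBs (snapshot older than 7 days): {stale}")
    sys.exit(1)  # non-zero exit fails the pipeline stage
print("All VDBs meet the 7-day freshness policy.")
```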