MLOps Weekly Podcast

Episode 25
AI Security & Governance with Sahil Agarwal
CEO, Enkrypt AI

February 1, 2024

Description

In this episode, we explore the evolving landscape of AI with Sahil Agarwal, CEO of Enkrypt AI, focusing on security, privacy, and governance. We discuss the strategies and challenges organizations face in safeguarding data, the ethical considerations in AI deployment, and the need for robust frameworks to navigate the complexities of AI integration, balancing innovation with ethical responsibility.

Listen on Spotify

Transcript

[00:00:06.120] - Simba Khadder
Hey, everyone. Simba Khadder here and you're listening to the MLOps Weekly Podcast. Today, I'm speaking to Sahil, who's the co-founder and CEO of Enkrypt AI.

[00:00:13.330] - Simba Khadder
He holds a PhD in Applied Mathematics from Yale. Before starting Enkrypt AI, he led the AI team at a startup that developed and deployed AI models for major clients, including the Department of Defense.

[00:00:23.370] - Simba Khadder
His vision for Enkrypt AI stemmed from his desire to bridge the gap between enterprise data and AI models, focusing on creating software that addresses compliance, security, and governance in enterprise AI deployments. Hey, Sahil, it's great to have you today.

[00:00:38.990] - Sahil Agarwal
Thank you. It's good to be here.

[00:00:40.690] - Simba Khadder
I'd love to get started by learning more about your journey. We gave a bit of an introduction on you, but I'm always really interested in learning, especially people who went far in academia, who then transitioned into industry, and in your case, being a founder. I'd love to hear your story, learn more about that transition.

[00:00:59.330] - Sahil Agarwal
Of course. My background is in maths and computer science. I was in academia for almost a decade. I did my PhD in applied maths, developing data algorithms with a focus on astrophysics and geophysics, not necessarily the work that we're doing today.

[00:01:19.050] - Sahil Agarwal
My PhD work essentially allowed me to transition to the field of machine learning, especially security and privacy, looking at noise characteristics. How can we enable privacy-preserving machine learning? How can we enable security for anyone who wants to use these technologies?

[00:01:37.780] - Sahil Agarwal
After I graduated, I worked at a very early startup leading their AI team. That's where I got to work with large enterprises, including the Department of Defense, Warner Music, JPMorgan, and other financial services institutions. That's where we saw the problem: one party has all the valuable data, and especially today, data is the new gold.

[00:02:01.430] - Sahil Agarwal
These early-stage startups or model providers who are building these models have the models to analyze that data. What we're seeing with LLMs, our take, is that while data is the new gold, models themselves are becoming the digital crown jewels of model providers, as well as of enterprises who want to use these models to improve productivity and efficiency in many different areas.

[00:02:26.260] - Sahil Agarwal
The point is, how do we secure that transaction where companies are bringing together their sensitive data and machine learning models, especially LLMs? That's where we started Enkrypt last year.

[00:02:41.410] - Simba Khadder
Before we go any further, since we've started talking about what you're working on there, maybe you can give a quick elevator pitch. For those who don't know, what is Enkrypt?

[00:02:50.820] - Sahil Agarwal
Enkrypt is an AI visibility and security platform that provides enterprises with a complete view of all AI assets across the enterprise so that they can use these models responsibly, securely, and in a compliant manner. Our technology allows all of these different challenges around security, privacy, and compliance to be solved in one unified way.

[00:03:17.330] - Sahil Agarwal
This also enables external model providers to do private deployments of their models within enterprise environments, where they can protect their IP and can manage and effectively monetize their models. The same software is opening up new revenue streams for model providers as well.

[00:03:37.540] - Simba Khadder
I want to unpack some of the things that you've laid out, because I think there's a lot of nuance here. For example, one statement you made is that data is the new gold, which I've heard a lot: data is the new gold, data is the new oil. Pick a commodity; data is that now.

[00:03:55.610] - Simba Khadder
There was even this time when there was a huge focus on the data moat. Your data is your moat, and a lot of startups were funded on this premise. My sense is that the idea of data as a moat is no longer, I think, obvious and clear. In practice, it's a little more nuanced and complicated.

[00:04:20.110] - Simba Khadder
When you say things like data is the new gold, what does that mean beyond data being obviously valuable? Where are you seeing that come true? Are there counterexamples? Are there places where you actually feel there's more value today, and places where maybe there isn't?

[00:04:39.090] - Sahil Agarwal
Yes. One example is when you're looking at foundation models, which are very generic. Any specific data point by itself is not that valuable. These foundation models are being used for very generic tasks, where there is no specificity.

[00:05:04.650] - Sahil Agarwal
What we're seeing in enterprises is models being used for very specific tasks to solve very specific needs, whether you're using them in finance or in a product to do a very specific task. There, a lot of enterprises are fine-tuning these models. You see this being promoted both by closed-source models and open-source models: that you can fine-tune your model on your dataset to improve performance. That's where we're seeing data play a huge part. That's where data is essentially really needed to get that performance.

[00:05:57.350] - Simba Khadder
Because there are fine-tuned models versus RAG or other techniques to get your data into the context window of a model, and then versus straight-up zero-shot learning and just prompt engineering to get it to do what you want. Prompt engineering and zero-shot work pretty well; otherwise, I think ChatGPT would not be as popular as it is.

[00:06:19.300] - Simba Khadder
Obviously, having more specificity, adding more of your specialized data, in theory could get you way more performance out of your models. Do you think the future is more... I guess one last piece is that historically we've had specialized models. We would build models for specific tasks. That was how machine learning was historically done.

[00:06:41.310] - Simba Khadder
The idea of using generic models existed in NLP, or at least the same concepts did, applied specifically to NLP, but now it seems like that space of generic model use has really expanded. I guess, help me unpack that.

[00:06:57.390] - Simba Khadder
Maybe firstly, do you think that there's a future of specialized models, or do you think it's all going to be generic models with a handful of fine-tuned generic models? Do you think that traditional modeling, like building a random forest, is still something that's going to happen in five years?

[00:07:12.460] - Sahil Agarwal
I think, yes, it'll still happen, especially where the tasks are clearly defined, where the choices are clearly defined. If you don't want to worry about hallucinations, if you don't want to worry about the model straying away from what you need it for, that's where traditional machine learning models will still be used and are still being used inside industries.

[00:07:34.940] - Simba Khadder
Then with the generic models and LLM use cases, where does it make sense to fine-tune? Should every model be fine-tuned? What is the future of how we think about LLMs? Is it going to be a set of these specialized models, or are people going to really focus on building and using that generic model?

[00:07:54.450] - Sahil Agarwal
I think it'll be a mixture of both. What I mean by that is, today, the way I see this happening, a lot of enterprises or businesses are using these closed-source model APIs to test out use cases. What are the potential use cases I can solve with LLMs? Figure out whether or not it'll give a good enough ROI or business value, a productivity boost, and so on and so forth.

[00:08:21.050] - Sahil Agarwal
When it comes time for production, I think that's where they start to go for specialized models or fine-tuned models so that they can continually improve the performance, the efficiency, improve user experience, improve productivity, and things like that. I think that's my view on how this field is evolving, how people are using foundation models today.

[00:08:45.530] - Simba Khadder
Let's dive into privacy and encryption. I guess the concept that you shared is this idea of having... is it the data being encrypted and sent to the model, or the model being encrypted and sent to the data, or both?

[00:08:58.460] - Sahil Agarwal
Both. We're working on, rather than bringing data to models, bringing models to data. That's the key point. Encryption is one way to do it. The name is probably a misnomer from that perspective; it's not necessarily just encryption-focused. We're using other technologies, but the business value is enabling models to go to the data itself.

[00:09:23.740] - Sahil Agarwal
Any enterprise data doesn't have to leave its environment. It can all be processed inside the enterprise's own cloud, on-prem, or private cloud, and the model is still secure, and the enterprise still knows everything about their AI assets and their models, where they're being processed, having that complete control.

[00:09:44.030] - Simba Khadder
I know encryption is not exactly the right word, but the idea is that you are bringing the model to the data. I would assume that the assumption is, "Hey, I train this model, I give it to you. I don't want you to have just unlimited access to the weights." Essentially, what you're doing is finding a way to keep the weights hidden so that you can only use the model when I let you. Is that how I should think about it?

[00:10:09.410] - Sahil Agarwal
For external model vendors, yes. For anyone who's giving the model to an enterprise, that's the value proposition to them, or that's the business proposition to them. For enterprises themselves, it could be these third-party models that they're getting, but also internal models that they develop.

[00:10:27.840] - Simba Khadder
Why would that be relevant there?

[00:10:29.640] - Sahil Agarwal
For internal models, it's not necessarily encryption of the model itself, but controls on the model. If you think about compliance, the use case there is, if a model has been trained on, let's say, European data and the law is that the data cannot move outside the EU, then by extension, the model should also not be moved outside the EU.

[00:10:52.470] - Sahil Agarwal
I think Meta was fined for this a few years ago. We can put in controls at the model level where you can have AI sovereignty or model sovereignty enforced on the models. Even if developers have access to the model artifacts, the model will only run when it's physically deployed inside a data center that's within the EU and not anywhere else.

[00:11:16.870] - Sahil Agarwal
Those are the compliance cases. From a security perspective as well, you want to ensure that only authorized people have access to these models, internal or external. That's the value proposition to enterprises. It doesn't have to be only for external model providers; it applies to any internal model as well.
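
To make the kind of control Sahil describes concrete, here is a minimal sketch of a region-locked model loader, assuming a policy of EU-only execution. It is illustrative only, not Enkrypt AI's implementation; `ALLOWED_REGIONS`, `resolve_region`, and the environment-variable lookup are assumptions for the example.

```python
# Illustrative sketch only -- not Enkrypt AI's implementation.
# Idea: refuse to load (or decrypt) model weights unless the host is
# provably running in an allowed region, e.g., an EU data center.

import os

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # hypothetical policy

def resolve_region() -> str:
    """Determine where this host is running.

    A real system would rely on a trusted signal (cloud instance
    metadata, a hardware attestation report), not an env variable.
    """
    return os.environ.get("DEPLOY_REGION", "unknown")

def load_model(weights_path: str) -> bytes:
    region = resolve_region()
    if region not in ALLOWED_REGIONS:
        # Fail closed: the artifact is useless outside the fence.
        raise PermissionError(
            f"Model sovereignty violation: region {region!r} is not "
            f"in {sorted(ALLOWED_REGIONS)}"
        )
    with open(weights_path, "rb") as f:
        return f.read()  # in practice: decrypt and deserialize
```

The design point is that the check fails closed: possession of the weights file alone is not enough to run the model.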

[00:11:33.810] - Simba Khadder
Got it. In that situation, is the model, I guess, is it more of a central hub, like a compliance tool? Or is the IP much more around the encryption piece? Or is the encryption piece just like an add-on?

[00:11:48.850] - Sahil Agarwal
For enterprises, encryption is not necessarily needed. It's more of a centralized hub for knowing all their AI models and the compliance hub or security hub.

[00:12:00.860] - Simba Khadder
I would love to expand on that more. Can you walk me through, for a large company you know of or have worked with, how is that done today? How are they keeping track of what models exist and where they are? Can you illustrate the problem space that companies are facing?

[00:12:20.970] - Sahil Agarwal
Sure. I'll give you a use case that we're working on with a customer. They're a public company, and they're looking at FedRAMP compliance. What FedRAMP compliance means is that only a specific set of people who are authorized to work with the federal government should be able to access the specified models. They need to be physically present inside the US to access those models. Any dataset that's being used with the models has to have a correct signature, or be compliant, to be used with those models. There's that, along with a lot of other things.

[00:13:00.120] - Sahil Agarwal
The use case that we're working on with this customer is building these controls into the model so that if any user or employee is traveling outside the US, the company knows they're not in the right place if the models are being accessed, because these models can be deployed on their laptops as well when they're working. The model stops working as soon as they're physically outside the US. All these assets only work when they're inside the US.

[00:13:38.330] - Sahil Agarwal
Today, these are fireable offenses, and they're looking for ways to actually enforce them. To your earlier point about how these things are being solved today: from what they've told us, it's mostly by trust. But new regulations are coming for AI systems specifically, not just for data, and CISOs' necks are on the line now, with the SEC investigating CISOs to ensure the right practices are being followed. These challenges are coming from enterprises that really need to know their security posture or compliance posture before these models and systems go into production. That's where we're coming in.

[00:14:29.760] - Simba Khadder
It's really interesting to hear how much of it is location-based and how much of it is based on the regulation. Is your sense that this will continue to expand? The regulation around what can be used where? It seems like it's expanded a lot recently.

[00:14:45.470] - Simba Khadder
I'm curious where you think things are going in that realm. Regulation, obviously, has become a really hot topic around AI, especially with the EU taking a very different stance than other places. No one knows. What's your take on where things are going in that sense?

[00:15:03.290] - Sahil Agarwal
I think people are worried. The biggest thing is that people generally don't understand how these models work. It's more out of fear that regulations are coming in. That's my personal view: people think these LLMs will become artificial general intelligence, and I don't think we're there yet, nor will we be in the next five years.

[00:15:25.500] - Sahil Agarwal
Out of fear, people are thinking about regulations or bringing regulations. In case those things come up, they'll have a way to regulate the technology. That's my view.

[00:15:38.090] - Sahil Agarwal
Yeah, different countries and regulatory bodies are discussing it around the globe, not just the EU or US, but India, China, Brazil, South Africa. Within the EU, it's not just the EU AI Act; regulations within member nations are being discussed as well, just so that they can control, or at least know, what's happening.

[00:16:01.170] - Simba Khadder
Do you have any thoughts on how companies can prepare for that? Is there anything they can do other than just being ready to set up compliance rules?

[00:16:10.530] - Sahil Agarwal
Based on our customer conversations, there are two camps. One set of CISOs and chief compliance officers are starting with a very restricted sandbox environment, saying: you'll only pursue very minimal use cases with very specific models within the sandbox, and as regulations become clearer, we'll allow you to expand the sandbox environment.

[00:16:38.040] - Sahil Agarwal
Whereas other companies we're talking to are saying, "Go berserk, and we'll figure out what happens or how to bring in regulations once those come in." The thing with regulations is they take a long time to be discussed and even longer to enforce. That, in my view and based on our conversations, is another thing keeping enterprises from pushing these models into production, and the reason everyone's saying it will be one to two years before these models go fully into production.
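
As a concrete illustration of the sandbox approach described above, here is a hypothetical policy sketch. Every model name, use case, and field below is an assumption for illustration, not a real customer's configuration.

```python
# Hypothetical LLM sandbox policy -- illustrative values only.
SANDBOX_POLICY = {
    "allowed_models": ["gpt-4", "llama-2-13b"],
    "allowed_use_cases": ["internal-summarization", "code-assist"],
    "permitted_data_classes": ["public", "internal"],  # no PII, no secrets
    "network_egress": "blocked",  # nothing leaves the environment
}

def is_permitted(model: str, use_case: str) -> bool:
    """Gate every LLM call against the sandbox policy."""
    return (model in SANDBOX_POLICY["allowed_models"]
            and use_case in SANDBOX_POLICY["allowed_use_cases"])

# As regulations clarify, the lists above are widened rather than
# the enforcement mechanism being rewritten.
print(is_permitted("gpt-4", "code-assist"))       # True
print(is_permitted("gpt-4", "customer-chatbot"))  # False
```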

[00:17:14.960] - Simba Khadder
Coming back to compliance and governance: we've had the problem of data compliance and data governance for a lot longer, and we've had model governance. Maybe starting there first: are these things different? Is it different to govern a model than it is to govern data? How come you can't use the same methodologies and tools? Why does the model need a special layer?

[00:17:38.620] - Sahil Agarwal
They're similar, but here's the difference that's emerging. Data was being used not just for machine learning models but in "regular, normal software," algorithmic software. With LLMs, the models are essentially replacing that software; rather than being an add-on to your product, they're becoming the product. That's where regulating models is becoming different from regulating data. That's one view.

[00:18:14.040] - Sahil Agarwal
From another perspective, when you talk about hiring or education, the use of models or AI in these scenarios is very different because they directly affect human lives. You need to understand how these models work; you need to understand whether there was bias or unfairness, things like that. People have already been thinking about that, but from an LLM perspective, you need to ensure that you're not sending any private data and you're not letting the model decide how the world works. You have humans behind the models who can think about how the world works.

[00:18:52.270] - Simba Khadder
How tied do you think governance and compliance are to explainability and hallucinations and things like that?

[00:18:59.160] - Sahil Agarwal
I think they're very intertwined, especially governance. One aspect of governance is fairness and bias and those metrics. With LLMs, hallucinations have become a huge problem.

[00:19:11.130] - Sahil Agarwal
Just in December, there were memes floating around. I think it was Chevrolet or some company that put up a chatbot, and it was recommending a Tesla, telling people to buy a Tesla, or to buy a car for one dollar or whatever. These are real reputational and legal risks for these enterprises, and a challenge that needs to be solved.

[00:19:35.690] - Simba Khadder
Is the solution here compliance tools, understanding what's being used where, et cetera, some of the stuff you're talking about? How much of it do you think is going to come down to explainability? Do you think we need explainability in some form?

[00:19:51.540] - Simba Khadder
Because I guess I have the assumption that hallucinations are an unsolvable problem, for LLMs as they are. Do you buy that, first? I don't think you can necessarily solve them unless you essentially rip out 90% of the LLM's brain.

[00:20:08.940] - Sahil Agarwal
Yeah, that's a very interesting question. If you think about why LLMs were built, or are being built, they're essentially mimicking humans. And with humans, you can't stop them from saying what they will. You're absolutely right that it's not necessarily possible to stop hallucinations from an LLM either. They can generate whatever.

[00:20:38.560] - Sahil Agarwal
You're not building internal mechanisms to stop hallucinations. The point is, can you build things around the model, guardrails, that can detect hallucination with respect to the context you're providing it, or with respect to the view of the world you're providing it with? I think that's where hallucinations would be stopped, detected, or prevented.

[00:21:05.990] - Sahil Agarwal
It's like you start in kindergarten, graduate high school, then undergrad and graduate studies, and so on and so forth, making it more and more specialized.
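
As a concrete illustration of the guardrail idea Sahil describes, here is a minimal sketch that scores how grounded an answer is in the context it was given, using simple lexical overlap. The 0.6 threshold and the return format are assumptions; production guardrails typically use an entailment (NLI) model rather than token overlap.

```python
# Minimal sketch of a hallucination guardrail: score how grounded a
# model answer is in the provided context and flag low-overlap answers
# for blocking or human review. Illustrative only.

import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the context."""
    a, c = tokens(answer), tokens(context)
    return len(a & c) / len(a) if a else 1.0

def guard(answer: str, context: str, threshold: float = 0.6) -> dict:
    score = groundedness(answer, context)
    if score < threshold:  # threshold is an assumed tuning knob
        return {"allowed": False, "score": score,
                "reason": "answer not grounded in provided context"}
    return {"allowed": True, "score": score}

# A brand's chatbot recommending a competitor scores poorly against a
# context that never mentions that competitor.
ctx = "Our dealership sells Chevrolet trucks and SUVs with financing."
print(guard("You should buy a Tesla for one dollar.", ctx))
# -> {'allowed': False, 'score': 0.0, 'reason': ...}
```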

[00:21:17.480] - Simba Khadder
I guess that's maybe part of what I'm getting at. Based on this conversation so far, and maybe projecting a bit from things I've learned, it seems hallucinations are unsolvable; you can only continuously push the problem down. It's almost like a race condition.

[00:21:34.960] - Simba Khadder
If you have a complex enough piece of software, it has race conditions, period. There's almost no chance that you're completely... Even the Linux kernel surely has some edge cases where you can make it do things it shouldn't do. The whole idea is like, can you almost catch those things? Can you make those things so that when they are caught, they can be easily fixed? Can you maybe limit the pain caused by those sorts of things?

[00:22:01.170] - Simba Khadder
You will never fully fix it, but you can maybe put enough armor around it that it's not as painful, and the most obvious cases, like recommending a Tesla when you're Chevy, you can catch. It sounds like there are two parts to it. One is the guardrails themselves: putting together rules or logic to catch these things before they cause issues. The other piece is having some form of basic compliance and auditing: knowing what's being used, where you're allowed to use LLMs, for what use cases, and what regions or situations you should not be using them in, et cetera.

[00:22:43.010] - Simba Khadder
I guess if you tie both of those together, it provides the armor you need to iterate on the actual performance of the LLM without having to worry about what happens if... Not necessarily not worrying about what happens, but having some more protection against doing really dumb things in edge cases, or against actors who are trying to screw you over.

[00:23:05.750] - Sahil Agarwal
Yeah, I completely agree with that, because one of our customers, their words were, "We're trying to make the system idiot-proof, where people know that they shouldn't be entering credit card numbers or SSNs, but they still do." It's a very simplistic example, but it extends to hallucinations and all the different challenges that come with LLMs.
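
A hedged sketch of that "idiot-proofing" layer: scrub obvious PII, such as credit card numbers and SSNs, from a prompt before it ever reaches the LLM. The regex patterns are deliberately simple assumptions; real deployments use dedicated PII detectors with validation (for example, Luhn checks on card numbers) and much broader entity coverage.

```python
# Sketch of an input guardrail that redacts obvious PII before a
# prompt is sent to an LLM. Patterns are simplistic and illustrative.

import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with PII masked, plus the PII types found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, found

clean, hits = redact("My card is 4111 1111 1111 1111, SSN 123-45-6789.")
print(hits)   # ['SSN', 'CREDIT_CARD']
print(clean)  # "My card is [CREDIT_CARD REDACTED], SSN [SSN REDACTED]."
```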

[00:23:33.440] - Simba Khadder
Yeah, there are two parts to it. One is what you're saying, which is the idiot-proofing, I'll just use your words. The other piece of it is, I've worked in recommender systems, and there's one statement I've always loved. There was a time when GANs were the coolest thing ever.

[00:23:52.830] - Simba Khadder
Someone I knew in recommender systems said, "Yeah, the greatest adversarial network ever created is a million human minds that are financially incentivized to break your system." Google Search is probably the greatest GAN of all time, except it's not generative; it's just constantly dealing with adversaries trying to game its results, as in recommender systems. I think we're falling into the same problem with LLMs: you can do a lot to catch the basic things, but humans, the many people who are incentivized to make your LLM look dumb, are the worst adversarial opponent to go up against, because there's always something you didn't think about, and if it can be done, it will be found. It's insane.

[00:24:44.790] - Sahil Agarwal
Yeah.

[00:24:45.930] - Simba Khadder
What things do you yourself read to keep track of what's going on? It just feels like there's also a fire hose of information about things. How do you keep on top of it?

[00:24:58.040] - Sahil Agarwal
Multiple things. I try to spend a couple of hours a day reading up on what new regulations are coming and how people from different areas are thinking about them, not just from a technology perspective but also from a business one, keeping up to date with how thought leaders from politics and technology, and small business owners, are thinking about using LLMs and AI within their systems. That's mostly it.

[00:25:25.330] - Simba Khadder
Could you drop some names or channels? If someone's listening right now and they're like, "Hey, who should I be following that I'm not following?" or, "What should I be reading?" Is there maybe even just one that you really admire and think is a good way to keep up with things?

[00:25:39.170] - Sahil Agarwal
Yeah. People like Andrew Ng or Yann LeCun, those are obvious names, but I also have a couple of people I know in the Indian government's cybersecurity arm who keep sharing articles, things like that. Those are the names I regularly follow, but otherwise, social media.

[00:26:03.930] - Simba Khadder
Yeah, whatever social media you use is great. It's how I keep up with things; there's really no other way. Otherwise, it feels like the information is just lost. It feels like most of the breakthroughs are happening and being shared in small communities and networks way before they bubble up to big news.

[00:26:24.830] - Simba Khadder
If you're trying to keep up with the latest and greatest, I feel like you have to find the creators who are actually building things. I think the CISOs and executives also offer a different perspective. You just want to get all the different perspectives on LLMs, because everyone's thinking about it. There's not one part of a company that hasn't at least thought, "Hey, how does this change things for us?"

[00:26:46.570] - Sahil Agarwal
One actually great resource that I read, again an obvious one, is Hacker News, where more and more developers are thinking about integrating these LLMs into different applications. What different applications are being considered for LLMs? What are the challenges, both on the infrastructure side and the data side? What performance metrics are they seeing for specific use cases? A lot of people discuss that on Hacker News.

[00:27:14.300] - Simba Khadder
Awesome. This has been great. I've really enjoyed talking. It's a perspective we don't talk about a ton on the podcast, but I think it's so important: governance and compliance. For anyone working at a big company, unfortunately, you sometimes think about it more than you think about actually building models.

[00:27:31.190] - Simba Khadder
Thanks again for hopping on and sharing. We'll include a link to your socials and to the company so that anyone who wants to follow along or reach out will be able to. Thanks for hopping on.

[00:27:42.370] - Sahil Agarwal
Thank you.
