
Episode 20

MLOps Weekly Podcast

Reducing Friction on Data Teams with Harry Glaser
Harry Glaser, Co-founder and CEO at Modelbit

Listen on Spotify
Listen on Apple Podcasts


On Episode 20 of the MLOps Weekly Podcast, Simba sits down with Modelbit Co-founder and CEO Harry Glaser to discuss the evolving landscape of data science roles, the friction between different data teams, and the importance of prioritizing organizational and workflow problems when building MLOps tools with impact.

Transcript

[00:00:05.360] - Simba Khadder

Hey, everyone. Simba Khadder here, and you're listening to the MLOps Weekly Podcast. This week I'm speaking with Harry Glaser. Harry is the Co-founder and CEO of Modelbit, the easiest way for data scientists to deploy ML models. He was previously the Co-founder and CEO of Periscope Data, which was acquired by Sisense in 2019. He started his career A/B testing the search results page at Google, and in his spare time, he still commits a little bit of code to Modelbit, much to the frustration of his co-founder.

 

[00:00:31.100] - Simba Khadder

Harry, great to have you on today.

 

[00:00:33.680] - Harry Glaser

Thank you, Simba. It's great to be here.

 

[00:00:35.920] - Simba Khadder

We talked a bit about how your journey… I guess, lastly, you were working more on the BI side. Now you're obviously working more on the ML side and the deployment, MLOps side. I want to go back to the BI days. I think there's this question I've had that I don't think is a new question, and I don't think it's a resolved question, funny enough, which is the democratisation of data.

 

[00:00:57.210] - Simba Khadder

Part of the pitch of BI was, "Hey, anyone can go and build their own dashboards and reports." We could argue there's a strong countermovement to that of, "Hey, we have a ton of broken reports and people using data that's incorrect because data is tough." What's the perfect world look like? What's the state? How should, if I manage a large… How do I think about that dichotomy?

 

[00:01:18.620] - Harry Glaser

I think one of the first things to understand about this is that this is not a new question or a new problem. BI as a category has existed in some form since the '60s. I mean, in some form, since the East India Company and handwritten reports. You can go look at those ledgers, and they're doing business intelligence. They're calculating their cost of customer acquisition.

 

[00:01:36.810] - Harry Glaser

But on computers since at least the '60s, and it really got going in the '80s with Cognos and then MicroStrategy. In those days, there were data analysts who worked directly on Cognos and on their Vertica warehouse to deliver those reports to sales managers. Then sales managers would try to do the reports themselves and get it wrong or something. That tension existed through Tableau. Then fast forward to what we now call the modern data stack, which typically includes a BI product like Looker, or if I'm flattering myself, Periscope Data, which was my company, or Mode, or any of those companies.

 

[00:02:08.850] - Harry Glaser

I think to answer your question directly, the ideal space is just provide a… Or the ideal outcome is to provide a suite of capabilities. Provide a drag-and-drop measures-and-dimensions interface, Tableau style, for people who are comfortable working with a data model that you defined and who understand how to make measures and dimensions and facts and how to get their own report.

 

[00:02:34.620] - Harry Glaser

They'll be happy, and they'll be able to do that. Then provide a data mart and a SQL editor on top of it for people who want to work with only correct data but know their SQL and can understand whether the join they're writing might or might not be valid. Then provide something under the hood for the people who want to go under the hood, and let people grow and let people learn. But understand that self-service for a sales manager is going to look like, "Let me change the dimension or the filter on this report that you made for me."

 

[00:03:01.230] - Simba Khadder

It makes sense. The way you put it, this is the pattern I think that a lot of companies have tried to emulate. The number of companies that have actually succeeded in emulating it is another question. It feels like it should be easy. Like you said, [crosstalk 00:03:16] something we've been doing forever. But there have been many generations of companies that have gone public, that have come and gone, trying to make data easier. We're still here.

 

[00:03:29.090] - Harry Glaser

God bless BI as a category. Since those days of Cognos and MicroStrategy, every generation there's a couple of new winners and a new way to do it. One thing I will say about MicroStrategy is nobody was opening it up and writing SQL. The idea that you could write SQL live to get reports was only possible with the advent of cloud data warehouses like Redshift and Snowflake. Before that, it was impossible. That's great. It gave rise to a new generation of winners, just like before that, the rise of the internet gave rise to a new generation of winners like Tableau, where you could have a client-server model for the first time.

 

[00:04:00.670] - Harry Glaser

All good stuff. But yeah, this idea that your sales manager should be able to write their own SQL report, or on the flip side, this idea that, no, the data analyst is the only one who will access data, and the way you get data should be to ask data analysts. That's not realistic. It's like Ford Motor Company. The size of the data team at Ford Motor Company or Pizza Hut is just not equal to the task of writing individual reports for everybody who wants reports.

 

[00:04:27.130] - Harry Glaser

Yeah, you're going to have to live in a world where you build some data marts and some business intelligence and some drag-and-drop for those companies. Then on the flip side of that, if you're a really hyper-modern company, if you're OpenAI, where you think probably the capability of the average employee to write SQL is high, then yeah, you want a different toolchain and you'll be able to do different kinds of data-driven decision making.

 

[00:04:49.230] - Simba Khadder

In software, I know you were at Google, I was at Google for a little bit, I feel like we knew how to do software pretty well.

 

[00:04:57.120] - Harry Glaser

Yeah, sure.

 

[00:04:57.800] - Simba Khadder

In data, I actually have a hard time even thinking of a gold-standard company, especially for ML. When I think of MLOps, it's hard to even think of, oh, we all should be like this company. But with DevOps, I feel like there were much clearer examples, like, hey, in a perfect world, if we could all do something that looked like Google, we would all be amazing at this. Why is it so different? From your perspective, why are DevOps and MLOps so different?

 

[00:05:21.450] - Harry Glaser

It's just newer. It wasn't that long ago. If you joined Sun Microsystems when they were a hot company, when they were in the Facebook campus before Facebook was there, DevOps was: FTP your build to the build server. Netscape famously, when they launched, this was not that long ago, 20 years ago, 30 years ago when Netscape launched, they were uploading the latest binaries to the FTP server and then people were downloading it. That was DevOps. "Works on my machine" was literally a phrase that you would use. Seems like it's good. I don't know. I'll upload a readme.txt with what I did in case it doesn't work for you.

 

[00:05:55.060] - Harry Glaser

Then you had client-server models, and then you had source control merged with your build process to produce CI/CD and Heroku and all these things. Now we've got it down to, at least for this generation, a pretty good science. ML, as a practice at all these companies… It's not just happening in research labs; it's happening as part of people's production environments. It's just so new that best practices are still being worked out. You still get "works on my box." You get, well, I trained the model with Torch 1.4.1, so is it going to work with 2.0.1? No, I'm sorry. Install an older version of Torch or you're SOL. We're just still learning.
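The Torch version mismatch Harry mentions is easy to guard against explicitly rather than discovering at runtime. A minimal sketch, assuming the trained-with version was recorded at training time (the `TRAINED_WITH` constant here is illustrative):

```python
import torch

# Version recorded at training time (illustrative value from the example above).
TRAINED_WITH = "1.4.1"

def load_checkpoint(path: str):
    """Fail loudly on a major-version mismatch instead of debugging it in production."""
    running_major = torch.__version__.split(".")[0]
    trained_major = TRAINED_WITH.split(".")[0]
    if running_major != trained_major:
        raise RuntimeError(
            f"Model was trained with torch {TRAINED_WITH}, but this environment "
            f"runs {torch.__version__}; pin the version or retrain."
        )
    return torch.load(path, map_location="cpu")
```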

 

[00:06:36.160] - Simba Khadder

What's the current state of teams? We mentioned… We all know there's this idea of data scientists. There are ML engineers, data engineers. There are now even product people who live on the ML side and the data side. There are data analysts. There are so many, let's call them, players in the game of data. Maybe we can start there. I'd love to get your take on what an average company looks like from your perspective.

 

[00:07:02.610] - Harry Glaser

Well, I don't know if an average company is doing much ML at all, but I think… Or yet. But the average company that's doing this stuff, I would say, you have a head of data, director of data, VP of data running the data team, and that person will typically report into the back office somewhere, the CFO, the COO, something like that.

 

[00:07:22.270] - Harry Glaser

They will have three groups: the data analysts who do the BI that we were just talking about, the data engineers who get the data into the source of truth, maybe a Snowflake warehouse, maybe somewhere else, and ensure that it's true. Then data scientists who are training and building ML models, and they report into the data org today, typically.

 

[00:07:40.770] - Harry Glaser

Then you'll see an organization of ML engineers who report to the VP of engineering, so a different org, and they are responsible for deploying the models to production. You'll obviously see some tension, and we can talk more about this, between the ML engineers and the data scientists, for the simple root cause that they're supposed to collaborate closely together, but they're in completely different orgs with different incentives and a totally different skill set.

 

[00:08:03.380] - Harry Glaser

That, I think, is probably your state of the art right now. Super advanced companies might layer on additional roles, like a data product manager like you talked about, or an ML product manager. But those four teams inside of two orgs are where I think we see the typical company today.

 

[00:08:18.390] - Simba Khadder

If you were running an org like that and you had the option to change it, would you change it? Do you think this is a good state to work from?

 

[00:08:27.170] - Harry Glaser

I would buy an MLOps product that lets my data scientists deploy models without ML engineers, or I should say, maybe more accurately, supervised by ML engineers, but without them directly in the loop of each model. I personally think Modelbit is a great choice, but there are others.

 

[00:08:41.730] - Harry Glaser

I would keep the ML engineers, though, to administer that and integrate it. Also, you'll have the ML engineers more tightly in the loop for your most important models. If you think about yourself as… I'll take a random example. If you're Stripe, you have a fraud model, probably multiple fraud models, that are critical to the company and are built entirely custom for the company's business. The ML engineers will be deeply involved in deploying that model and also in building that model and training it. Probably, it will be an ensemble of many things.

 

[00:09:11.430] - Harry Glaser

But then you will have many other models supporting many other parts of the business. Search rankers for your customer search bar and some back-office things like customer churn predictors. For those, I would get something. I would have the ML engineers buy something for the data scientists to use so that they can just rock and roll in a Jupyter Notebook really fast and then deploy those models right out of the Jupyter Notebook.

 

[00:09:35.840] - Simba Khadder

I guess, why can't data scientists learn, I guess, MLOps or ML engineering? Do you think it's too hard? Do you think that in the end, the roles will combine, or do you think there will always be a separation of the roles?

 

[00:09:49.870] - Harry Glaser

I think it's moving a little bit fast right now for us to make confident predictions about where the roles will land. But I think most people can learn most things. I don't think there's a reason why they couldn't, but they're different jobs. Can most product managers learn to code and become software engineers? Sure. Should they all do that? I don't know. I think there's a need for product managers.

 

[00:10:07.850] - Harry Glaser

I've certainly known data scientists who get really excited about the DevOps part of it, are unsatisfied by simply handing off their models, and want to know what it is the ML engineer is doing. But I think that's not most data scientists that I know. Most of them are very into the experimental and research and math part of it, which is cool because it's such a novel area. It's such a new area of research. It's so interesting and they want to push the boundaries of that. It's good that they're supported by ML engineers who can handle a lot of the DevOps and software engineering side of things.

 

[00:10:38.590] - Simba Khadder

I've even seen a separation between data scientist, ML engineer, and MLOps engineer. The MLOps engineer is like the platform engineer, which is another term we see. Then the ML engineer is more like, I can put things in production, but the data scientists are very much like, I know the state of the art of what is possible to do with the data. I guess, where are you seeing the most friction in this? Is the friction in each silo itself, or is the friction across the teams, or how do you view it?

 

[00:11:09.150] - Harry Glaser

The biggest friction, I think, is between the data scientist and the ML engineer. I think it's because the data scientist wants to rock and roll and move fast, and they are not generally responsible for whether the thing holds up in production. Then the ML engineer is incentivised to make sure the thing works well in production and doesn't crash, and is not incentivised on data science iteration speed.

 

[00:11:34.380] - Harry Glaser

You just have this tension where the ML engineer has a day job that they love. Maybe it's working on the core infrastructure for the company, or the most important models or something. "Let me take your PyTorch thing that you made this week and put it into production on a Friday afternoon" is not what they had in mind for their Friday afternoon. Then also, really perniciously, let's say the data scientist makes a model that better predicts customer churn and it saves the company a bunch of money, and they give it to the ML engineer to deploy, and the ML engineer deploys it, and it saves the company so much money that the CFO is going to give out a bonus at the end of the quarter.

 

[00:12:11.460] - Harry Glaser

They're going to give the bonus to the data scientist who made the model. There's no upside here for the ML engineer. It's all downside, because if the model crashes, well, who's responsible for preventing crashes? The ML engineer. You have that tension. I think however you design an organization, you're going to have different people in different orgs, and you will always have tension at the boundaries of organizations. It's not simply a matter of moving people into a different organization.

 

[00:12:36.780] - Harry Glaser

Let's say you move the ML engineers under the head of data. Well, now there's tension between the ML engineers and the software engineers. Or you move the data scientists into software engineering. Well, now there's tension between them and the data engineers who are getting them their data. You just have to pick your tension and manage to it. Understand that it's going to be there, that some amount of it is healthy, and actively manage it.

 

[00:12:57.880] - Simba Khadder

Yeah, I think it makes a lot of sense. I'm curious, when we think about the parts of the pipeline, when I break down the ML lifecycle or the model lifecycle, let's call it four pieces in my head. One is, let's call it, data. Everything from getting the right data, cleaning it, feature engineering, the whole feature stores, the whole bit.

 

[00:13:18.650] - Simba Khadder

I would say there's experimentation, which is the training, hyperparameter tuning. It's where MLflow lives. Deployment, which products like yours, like Modelbit, are quite good at. Then there's monitoring and evaluation. There's also a slew of companies in that space. Obviously, every company is different. When you think of those four, from what you're seeing, one, is that split correct? And do you think there's a part of that split that you see as one of the more common friction points?

 

[00:13:48.950] - Harry Glaser

I think that that split is correct. I would probably visualise it as a cycle. What really characterises ML and data science from a product point of view, relative to other product disciplines, is the fact that it's this continuously evolving thing. You want to really reduce the iteration time on your model. You want to be retraining and redeploying as often as possible. If you think about that from the get-go, you can really minimise the friction, which otherwise tends to occur, I think, post-deployment, to your point.

 

[00:14:23.610] - Harry Glaser

We have a lot of customers who… What they really want to optimise for is not having to call the software engineering team again. Data scientists want to deploy their model, and they want to deploy it in such a way that they can redeploy their model over and over and over again to the same endpoint so that they don't have to call the software engineers back. Which is a little bit crazy, because you would think in a healthy culture, you would want to involve the software engineers. Maybe asking for another code push is unnecessary, but there's going to be a fundamentally new thing behind this endpoint, and you, as the ones who are responsible for this stack overall, ought to be collaborating with us on that decision.

 

[00:14:58.920] - Harry Glaser

But the tension has gotten to a point where they just don't want to communicate post-deployment. They want to feel like, okay, I finally got my sandbox where I can put my models, and I'm going to play in it as much as I can without calling the software engineering team back.

 

[00:15:11.310] - Harry Glaser

I think post-deployment is the biggest source of friction, and I think the way to mitigate it is for everybody to be bought into the notion that this isn't like feature development, feature as in a product feature, where we build the feature, we ship it, and we're done until we decide to come back to it. This is going to be a cycle of constant evolution and iteration. Let's talk about what that means and let's all buy into it.

 

[00:15:34.620] - Simba Khadder

I think what makes data and ML especially unique is that it's very iterative and there's a lot of… It's not a straight path. We're not always completely sure where we're going. Sometimes, when we're lucky, we have a sense of what the requirements of the end state are, but sometimes we don't have that. Sometimes it's entirely like, hey, this would be cool. I think this is a good idea. Let's try it.

 

[00:15:54.780] - Simba Khadder

A lot of times it's unlike product, where I think in software engineering it's very rare to work on a product feature that is completely experimental, where I don't even know if this is possible. Usually, it's like, "Yeah, we think this is possible. How can we do it?" Whereas now, it's quite common to be like, "This might literally just not be possible with what we have. We might not have the right data."

 

[00:16:15.680] - Harry Glaser

Yeah, right. It's very experimental. Will the thing even work? Will it successfully predict what we wanted to predict? Will it change business outcomes if it does? All these things should be viewed as an experiment. Absolutely.

 

[00:16:26.590] - Simba Khadder

Even we mentioned cycles and iterations. When we were talking earlier about data scientists handing things over to ML engineers, it almost feels a little more, let's call it, waterfall-y, which is like, I finish the model, I pass it-

 

[00:16:40.430] - Harry Glaser

Yeah, right. I think waterfall has become a dirty word even in software engineering. But software engineering certainly looks more waterfall; even a very agile software engineering process would look very waterfall compared to an ML process. Yeah, this expectation that, "Okay, I deployed your model. Now I'm done. Why are you calling me again tomorrow?" That's the root of the issue.

 

[00:17:01.740] - Simba Khadder

Do you think that ML engineers and data scientists should be talking? Should the data scientists be aware and be thinking about, "Hey, this is what it's going to take to deploy this thing?" Because I've seen ML engineers where they're like, "Yeah, the data scientists have it working and they're mad at me, but I can't make this magical thing."

 

[00:17:15.820] - Harry Glaser

I think the ML engineers not appreciating the iterative, experimental nature of the thing is one side of the coin. The other side of the coin is the data scientists not appreciating how big of a delta there is between what they made and what's ready for production. It comes from them not being software engineers, typically.

 

[00:17:32.680] - Harry Glaser

You're like, "Okay, here's my X 100 megabyte PyTorchasset. Please take it." The software engineer or the ML engineer is like,"Okay, what Python environment was this in? What version of PyTorch? Whatare all these dependencies? Do the dependencies have dependencies that aren'twell defined, and I have to go figure it out for myself? Also, we don't runPython in production. Am I rewriting this thing, or are we containerizing it insome way? That's not really how we deploy other things at this company."

 

[00:18:00.000] - Harry Glaser

Yeah, that's all an appreciation that at the end of the day, to put the model into the product, the model is going to have to meet you halfway and get to a place where it can work inside of a production infrastructure that does many other things.

 

[00:18:11.840] - Simba Khadder

Going to a slightly different topic. Obviously, you're now working in MLOps. You're working in the space. You had built a successful company before. You grew it to over 150 people, over 1,000 customers. You sold it. What's the motivation? What made you, one, go back to the masochism of starting a company over again? And two, why choose this space?

 

[00:18:35.320] - Harry Glaser

I think there's something addictive about starting and running your own company, especially in the early growth phase, when you get that product-market fit and then you're scaffolding the whole company and then you're hiring people as fast as you can and bringing on systems and bringing on new customers, and you raise that Series A. I'm old enough that raising a Series A was what you did after you had a bunch of customers and product-market fit. You're putting a board together for the first time, and all that happens really fast.

 

[00:19:05.220] - Harry Glaser

I think we all end up chasing the dragon. We want to get back to that. I'm fortunate that I have a co-founder who I love, and we've worked together for a couple of decades now, and we know each other really well. I think probably the biggest factor is that he was willing to go again. It's like, "All right, yeah, let's do it."

 

[00:19:22.110] - Harry Glaser

We briefly experimented with things in a lot of different spaces. I had a lot of startup ideas. Just building Periscope, we would always jot down other startup ideas, and we thought about them. You wonder constantly, why does my finance team need to employ a small army of junior humans to reconcile spreadsheets with each other?

 

[00:19:40.990] - Harry Glaser

There's all these whys as you're building a company, but we did end up gravitating back to the data space because we know it very well and we like it. There are lots of little quality-of-life benefits in the data space. The customers tend to be really smart and positive and kind people. We just really like that. We like building technical products. We think we're good at it.

 

[00:19:59.600] - Harry Glaser

Then, yeah, one of the problems we had observed was we saw the formation of these modern data teams. Snowflake and Redshift launched. That catalysed the formation of the data team as we now think about it. You have this head of data managing data analysts and data engineers and data scientists. The data analysts and data engineers were collaborating on this modern data stack that they would build together, where you get Snowflake, and then we're going to get maybe dbt or we'll use LookML, and then we're going to get Looker or Periscope, and then we're going to get Hightouch or Census, and we'll all collaborate together on the stack.

 

[00:20:34.490] - Harry Glaser

Meanwhile, the data scientists are still off in these single-player user experiences where they're either in a Jupyter Notebook or they're in a cloud notebook like Hex or Deepnote, but still by themselves. They're not part of the collaboration that the rest of the team is going through, and they're stuck. They're doing all this cool stuff, but they're stuck.

 

[00:20:51.120] - Harry Glaser

We observed that because it was adjacent to what we were doing. We were mainly serving these data analysts, but we met many data scientists, and we were confused about why they were stuck and dissatisfied, and sometimes the organisation was dissatisfied with them.

 

[00:21:05.690] - Harry Glaser

When we got back together, we just started calling lots of data scientists we knew and asking and interviewing. It led us down this path where we built a prototype of Modelbit to see if they liked it. Then they really did, and they started using it. They started using it a lot. We were off to the races.

 

[00:21:21.160] - Simba Khadder

What makes a good MLOps product?

 

[00:21:23.610] - Harry Glaser

How much time do you have? I think it's actually not so different from building any other product. What makes a great one is true empathy for the user and an understanding of not just what the user needs from a nuts-and-bolts point of view, but where they are in their employee lifecycle and how they are feeling. Something that's very important to Modelbit's product design is just this tension that we've talked about between data scientists and ML engineers, and therefore how everybody is feeling when they're using the product.

 

[00:21:58.220] - Harry Glaser

To use a concrete example, a lot of Modelbit's settings and administration is designed with the fact in mind that the ML engineer who's setting those things up is concerned about a loss of control. They used to hand-deploy all the models, and now they are no longer going to be doing that; Modelbit is going to be doing that automatically. They're worried that they no longer have control of the process.

 

[00:22:22.070] - Harry Glaser

Giving them a feeling of control, like they can micromanage every aspect of it, they can configure it, they can shape it to their particular production constraints, is really important to that buy-in.

 

[00:22:33.100] - Harry Glaser

I think you could probably tell an analogous story about every product on Earth, but an empathy for how the user is feeling while they're using the product, I think, separates good and great. Yeah, maybe I'll stop there. I think that's probably the main thing.

 

[00:22:45.430] - Simba Khadder

I think one thing I've noticed, and I think BI fits this too, is that companies that are successful in categories like MLOps and BI are solving organisational problems. It's just barely-

 

[00:22:59.960] - Harry Glaser

I think most generational companies solve organisational problems. I think Salesforce solves the problem of: the VP of sales needs to know what his sales reps are doing and whether they're going to make the quarter and the status of their pipeline. Snowflake solves an organisational problem of: we failed to log and ingest our data correctly. Snowflake is the first company that lets you just solve the problem with money. We're just going to write a really inefficient query on a really inefficiently organised data set that's too large, and Snowflake is still going to return me a result in 30 seconds. It's just going to charge me for it. Whereas previous warehouses would have been like, "No, you should have done this in your ETL step."

 

[00:23:41.330] - Simba Khadder

That's a really funny way to put it. It's true. I think it's been interesting to see some of the pushback toward the modern data stack as of late.

 

[00:23:49.920] - Harry Glaser

Oh, yeah. Well, I think once the market turned, it was like, wow, remodeling all the company's historical data since the dawn of time every night was a zero-interest-rate phenomenon. It should no longer be how we organise our data.

 

[00:24:02.940] - Simba Khadder

Looking forward, I'd be remiss not to bring up LLMs and the new, let's call it, paradigm of machine learning. I guess first question: we spent a lot of this time talking first about data analysts, then we talked about data scientists and ML. We talked about both, let's call it, metrics and machine learning. Now there's this new thing, which I guess we're calling AI as of now. Where does it fit in? Is this a new paradigm? Is this replacing all things? Is this augmenting ML and analysts, all of it?

 

[00:24:36.860] - Harry Glaser

I think it's just net new. To use a toy example, an LLM is not a classifier. It will not classify. It'll make shit up about classification if you want, because it's basically a text generator. But it's an augmentation. We see companies where…

 

[00:24:51.960] - Harry Glaser

Let's take an example. They were using ML already to look at customer support chat requests and classify the kind of chat request it was. This person is asking for a product feature; this person is asking to churn. If this person is asking to churn, then therefore, what is the rules-based system or decision-tree type of system or whatever we're going to use to decide whether they're going to churn? We might want to do some text extraction to learn why they want to churn and, therefore, whether we want to offer them a downgrade instead or whatever. That's all ML that already exists.

 

[00:25:22.540] - Harry Glaser

But then on top of that, we're going to layer on new ML, which is: we're actually going to talk back to them without a human in the loop, because we can take the response that we've decided to apply, whether it be offering a downgrade, granting the churn, whatever, and we're going to talk back to them and communicate that back to them using an LLM.
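A minimal sketch of the pipeline Harry is describing, with hypothetical stand-ins for every piece (`classify_intent` for the existing classifier, `call_llm` for whatever LLM client the company uses):

```python
def classify_intent(message: str) -> str:
    """Stand-in for the existing ML classifier (e.g., a churn-intent model)."""
    return "churn" if "cancel" in message.lower() else "feature_request"

def decide_action(intent: str) -> str:
    """The rules/decision-tree layer: classical ML plus rules decide *what* to do."""
    if intent == "churn":
        return "offer a downgrade to a cheaper plan"
    return "log the feature request and thank the customer"

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM client; swap in a real API call here."""
    return f"[LLM drafts a reply for: {prompt}]"

def respond(message: str) -> str:
    # The net-new LLM layer only phrases the decision that was already made.
    action = decide_action(classify_intent(message))
    prompt = (
        f"Customer wrote: {message!r}. We have decided to: {action}. "
        "Write a short, polite reply communicating exactly that decision."
    )
    return call_llm(prompt)

print(respond("I want to cancel my subscription, it's too expensive."))
```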

 

[00:25:40.350] - Harry Glaser

I think this is a classic example of it just being net new. Certainly, it unlocks new things, where companies who didn't have a use case before now have a use case for an LLM, but you also have companies that have use cases for other forms of ML also having a use case for an LLM. I think it's net new. We haven't seen it replace other types of ML, just because the other types of ML are so different. But it's still very young. We'll see where it goes.

 

[00:26:04.760] - Simba Khadder

What makes the other types of ML so different? You mentioned obviously that it's not a classifier, but I'm curious to dig into that more.

 

[00:26:10.560] - Harry Glaser

Well, I think it's really more of a spectrum than discrete buckets, but you can think of it as discrete buckets with spectrums in between. Okay, I want to predict the probability of a binary event, like a regression. I want to classify things into different groups. I want to classify things into different groups without knowing ahead of time what the groups are. Then we can move on from that to creating things. I want to create an image. I want to create a movie. I want to create a piece of text. These are all different things that we might want to do. What the LLMs have done is create a net new type of thing that was not previously possible, or at least not previously possible at this level of sophistication.
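In scikit-learn terms, the classic buckets he lists might look like this toy sketch (illustrative data; the generative bucket has no classical analogue):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1], [0.4], [0.6], [0.9]])
y = np.array([0, 0, 1, 1])

# Predict the probability of a binary event:
p = LogisticRegression().fit(X, y).predict_proba([[0.7]])[0, 1]

# Group things without knowing the groups ahead of time:
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

print(f"p(event)={p:.2f}, clusters={clusters}")
# Creating images, movies, or text is the net-new bucket that LLMs and
# other generative models added on top of these.
```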

 

[00:26:55.080] - Simba Khadder

Yeah, I think that's spot on. I've framed it as: there are almost four categories of models. There are types of models that have to be really cheap, fast, and explainable. We see this in first-pass fraud detection, and those will almost always be traditional ML because LLMs are way too slow. Could you imagine every single credit transaction?

 

[00:27:16.080] - Harry Glaser

We also have no… It's literally a text predictor. That is one way to predict fraud. It might be better than some other ways. It might be better than certain humans, but it's certainly not better, to your point, than a real classifier.

 

[00:27:28.530] - Simba Khadder

Totally. I think there's that category, which is most models in production. You mentioned fraud detection, and it's such a… Probably the most commonly deployed ML model is some flavour of [inaudible 00:27:40].
 

[00:27:41.680] - Harry Glaser

I think the most profitable models in the world are probably still rankers, which is another form that we didn't talk about.

 

[00:27:49.300] - Simba Khadder

There's this middle layer, which I would call recommender systems and other things.

 

[00:27:57.080] - Harry Glaser

Collaborative filters, these kinds of things.

 

[00:27:59.080] - Simba Khadder

We're seeing it happen already, where you're seeing embeddings, which aren't a new concept necessarily, but I think they're getting supercharged with all the LLM stuff. I think what will be interesting is seeing… I guess that's where we'll see LLMs mix into traditional models. My sense is they'll be going into this grey area where there are some models that might be better if you augment them with an LLM, or vice versa: you have an LLM and you augment the context with more traditional features. You want to give someone a recommendation on how to think about their finances? Well, you probably want to know how much money is in their bank account. How old are they? A mix of other things.

 

[00:28:37.300] - Simba Khadder

I think where we really see the true net new things we were never able to do before are these Copilot-esque experiences, which are more like augmenting humans, which is honestly one of the most profitable things you can… I joke, we spend 30 bucks a month or something for everyone on our team for GPT-4, and then I definitely make way more, save way more.

 

[00:29:03.230] - Harry Glaser

Oh, 100%. The models, once they're really cruising, can really be a huge efficiency improvement. The GitHub Copilot… The first LLM I saw, I think, was the sentence-long autocomplete in Gmail, where I was like, "Whoa, what's this?" I think that was even before OpenAI started making a lot of noise.

 

[00:29:22.480] - Harry Glaser

Then I think people talk about what happens when LLMs get hooked up to real sources of truth. I think there's no difference between that and putting an LLM in a pipeline with a bunch of other stuff. What that's going to look like is: first run your classifier or your regression or your ranker or whatever you're doing, then use that output as a constraint in a prompt to an LLM, and then give the answer. Now you're not asking the LLM to answer the question; you're asking the LLM to take this piece of information and frame it as an answer. That constrains the outputs that you get in a really positive way.
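A minimal sketch of that ordering, with `llm` standing in for any text-generation client and `churn_probability` for the classical model (both hypothetical, not a specific vendor SDK):

```python
from typing import Callable

def churn_probability(customer_id: str) -> float:
    """Stand-in for the classical model that actually answers the question."""
    return 0.83

def framed_answer(customer_id: str, llm: Callable[[str], str]) -> str:
    p = churn_probability(customer_id)
    # The model's output becomes a constraint in the prompt: the LLM is asked
    # to phrase a known fact, not to compute the answer itself.
    prompt = (
        f"Our churn model scores customer {customer_id} at {p:.0%} risk. "
        "Explain this risk level to an account manager in two sentences. "
        "Do not introduce any numbers other than the one given."
    )
    return llm(prompt)

# Usage: framed_answer("cus_123", llm=my_llm_client)
```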

 

[00:29:51.830] - Simba Khadder

Yeah, exactly. We see a lot of that. I've also seen the opposite, where you use the LLM first and then you constrain the LLM's possible output with a traditional model. It's like, if the traditional model feels like it would never guess something like this, it's safer to just not play with it.

 

[00:30:08.650] - Harry Glaser

I think a lot of it, to the point you made earlier, a lot of it is efficiency improvements, where it's like, okay, how many support tickets can a support person handle in parallel? Let's say five, I don't know. Then, well, what happens if an LLM is writing all the answers and all you're doing is glancing at each answer and making sure it's right, because one in X will be wrong? Well, now it's 50 or 500. The efficiency improvement is massive, and because there's still a human in the loop, we're much less stressed about the accuracy or the hallucination or what have you.

 

[00:30:36.580] - Simba Khadder

Yeah, exactly. I think that's where we'll see, I think, the most interesting set of new use cases, which are net new. But in an interesting way, I think it also causes an explosion of traditional ML, because it's just needed to make a lot of these applications even make sense. We'll see, in my opinion, a mix of everything. But we've been on this really long wave, you mentioned since even the '80s, where there's been this push that data is everything. We need to be focused on data. Data is unlocking new levels of productivity. Whereas I think first there was software. Every company is a software company now, but it's like, duh.

 

[00:31:17.260] - Simba Khadder

Then now I think we see almost, let's call it, parabolic efficiency growth, and it comes with data. It's the only way to pull it off. Software is more like, I can write one line of code and apply it to every single user of Google, as if I were at Google. Now it's like, with one model, not only is that true, but it also continuously gets better. As I create new models, it's like the compounding effect is… I guess-

 

[00:31:43.570] - Harry Glaser

Yeah, this lets you model the economic output of all these things, where if you understand the lump of labour fallacy, then you're looking at a set of people who are in the workforce that are able to produce X dollars of economic output.

 

[00:31:56.520] - Harry Glaser

Well, now we're going to give them this augmenter, this copilot. To be completely cliché for a moment, to borrow the Steve Jobs "bicycle for your mind," this is more like a rocket ship for your mind now. Your efficiency goes up by 10X or 100X.

 

[00:32:10.720] - Harry Glaser

Well, now we can simply model the economic output of these things as taking the economic output that we had and multiplying it by that coefficient. If that's true, then we can look at, okay, the biggest impact we could have is to apply these accelerators somehow to very large groups of people. Let's get out of the tech industry. How can we apply this to agriculture? How can we apply this to transit? Where are the big area… Manufacturing. Where are the big areas where we could make a big impact here?

 

[00:32:37.610] - Simba Khadder

Yeah, I think it will be really fascinating to watch. We're at this really perfect time to be an early-stage company. It's like, we know enough, we've done enough, or we have something ready to go. At the same time, we're at the start of a huge explosion.

 

[00:32:54.650] - Harry Glaser

Absolutely.

 

[00:32:55.050] - Simba Khadder

It's almost like being dot-com. As long as you don't get stamped out in the hype, or you don't get too positive.

 

[00:33:00.690] - Harry Glaser

Well, it's a cyclical industry. There will be a popping of a bubble, but it'll be less a popping of a bubble and more a retrenchment, like what we have seen in the last year in other areas of the tech industry. That's normal. This happens every 10 years or so.

 

[00:33:14.870] - Simba Khadder

For those listening, we've talked obviously a lot about the problem space of MLOps. I'm curious, and I'm sure a lot of people listening are curious, what is Modelbit?

 

[00:33:23.780] - Harry Glaser

Modelbit is a really fast and easy way for data scientists to deploy ML models. Classically, for us, a data scientist will be in some notebook experience, a Jupyter Notebook or a Hex or a Deepnote or a Colab, or something like that. They'll have trained and built, over many days, weeks, months, a model that they're ready to deploy, and they're now encountering this friction with ML engineering.

 

[00:33:46.830] - Harry Glaser

From inside their notebook, they can run modelbit.deploy, and we will capture the model itself, all the code that calls the model, the entire Python environment, which Python packages are required and what versions, and which system packages those Python packages depend on. All that gets put into a Docker container and delivered behind a REST API, backed by a Git repo, with logging and load balancing and everything else that you might want. It takes about 10 seconds.
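In code, that flow looks roughly like the sketch below. The `modelbit.login()` / `mb.deploy()` calls follow Modelbit's published examples, but treat the exact API as something to confirm against their current docs:

```python
import modelbit
from sklearn.linear_model import LogisticRegression

mb = modelbit.login()  # authenticate the notebook session

# Toy model standing in for whatever was trained over days or weeks.
model = LogisticRegression().fit([[0], [1], [2], [3]], [0, 0, 1, 1])

def predict_churn(feature: float) -> float:
    # The function, the model object it closes over, and the packages and
    # versions it needs are all captured at deploy time.
    return float(model.predict_proba([[feature]])[0, 1])

mb.deploy(predict_churn)  # exposed behind a REST endpoint
```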

 

[00:34:13.320] - Harry Glaser

You get an automatically deployed ML model right out of your notebook that you can then hand to your engineering team. It's battle-tested, it's bulletproof. We find that to be really popular with data scientists who are doing this work.

 

[00:34:24.810] - Simba Khadder

How do you think of it in comparison to SageMaker, Vertex, or some of the more [inaudible 00:34:29]?

 

[00:34:30.490] - Harry Glaser

Yeah, I think SageMaker and Vertex tend to be a really good fit if you are, let's say, a bank or a Fortune 500. You might have an army of IT people who can help you with the kinds of VPC configuration and IAM roles and everything else that goes into deploying with AWS. Maybe you have AWS services people who can also help you with that. But if you are a team of six data scientists just trying to rock and roll, Modelbit is much faster and easier than those products.

 

[00:35:00.870] - Simba Khadder

That's awesome. You can use it now? Is it available?

 

[00:35:04.210] - Harry Glaser

Yeah. I hope you'll talk to us. We would value everybody's feedback. But if you do not want to do that, you can simply go to modelbit.com and click the big button that says free trial and try it out.

 

[00:35:14.580] - Simba Khadder

This is a question I wasn't even planning to ask, but you mentioned governance and some of the stuff around big banks. I'm sure you've dealt with this at scale with Periscope and Sisense. We mentioned the friction around data scientists and ML engineers. There's definitely a lot of friction when you start thinking about adding to that. If you're a bank, you sprinkle governance and regulation on top of that, and it becomes 10 times worse. I'm sure BI was very similar. How does that look?

 

[00:35:40.790] - Harry Glaser

I think the problem is much harder in ML. At least, yes, to your point, governance is big business, basically making people feel better that they have some way to audit where all the data is coming from and where it's going and how it's being used. But at least in BI and data engineering, the data that we're talking about is countable and quantifiable. It's hard, but we can go through and identify every data source and every piece of data. With ML, especially with these LLMs now, can we quantify every piece of data it was trained on? Can we find the lineage for all this stuff? No, we can't.

 

[00:36:15.920] - Harry Glaser

I think it's going to be a while as these technologies start to get figured out and we start to figure out what governance we would even want for a system like that. I think it's super early. I think it's super unclear how that will shake out.

 

[00:36:29.520] - Simba Khadder

That makes a ton of sense. I feel like I have so many more questions I want to ask you. I know we're coming up on time. I just want to thank you again for taking the time to come on to the interview.
