MLOps Weekly Podcast

Episode 
The Business Impact of MLOps with Liran Hason
Co-Founder & CEO, Aporia

Transcript

[00:00:06.320] - Simba Khadder

Hey, it's Simba Khadder here with the MLOps Weekly Podcast. Today, I'm speaking with Liran Hason, the CEO and Co-Founder of Aporia. He's a veteran of the IDF's Elite 81 Intelligence Unit. He was one of the first employees of Adallom, which was acquired by Microsoft, and he led the ML production architecture that served millions of users.

[00:00:28.400] - Simba Khadder

Before starting Aporia, he was part of Vertex Ventures' investment team and was involved in over 30 investments, including Axonius, Spot.io, and others. Hey, Liran, thanks for joining me today.

[00:00:42.970] - Liran Hason

Hey, Simba. Thanks for having me.

[00:00:46.780] - Simba Khadder

I gave a little intro on you, but I'd love to hear a bit about how you got to where you are today, running Aporia.

[00:00:52.460] - Liran Hason

Sure. I spent over 20 years in software engineering across different roles. At one of the last companies I worked at, I was leading and responsible for the architecture of our ML infrastructure, really everything from training, packaging, deployment, serving, and monitoring as well. We were running models at scale, and we got to a point where we had to have some monitoring solution.

[00:01:22.890] - Liran Hason

Really, what we did is just implement some Python scripts running on a cron job. Nothing very fancy, to be honest, but it was good. I just realised that there has to be one centralised solution to track your models, to have dashboards, to get proactive alerts, so you shouldn't really have to implement these Python scripts you're scraping together yourself. That's how Aporia came to life.

[00:01:49.400] - Simba Khadder

It's something you built before, and now you're taking it to market as a vendor. Maybe for some timeline: when did you start Aporia? When did you build the original monitoring solution internally, and when did Aporia begin?

[00:02:02.140] - Liran Hason

The monitoring solution we built internally, that was a long time ago. It was 2015. But it was at a startup company that was later acquired by Microsoft. Then, to be honest, I moved to the evil side, the VC side, investing in companies; I spent about three years there. Then I just realised that more and more data science and ML teams are facing these issues. I realised that it might be a good time to build this product and make it live.

[00:02:33.420] - Liran Hason

This was 2018. That's when I left my role at the VC to start working on Aporia. By 2019, that's when we raised our first funding round.

[00:02:47.220] - Simba Khadder

Glad to have you back from the dark side. Since starting in 2019, especially over the last couple of years, I'd love to understand how you see ML changing: how people do ML, who's doing ML. Let's start there. I really want to understand how things have been changing over the last couple of years and also where things are going.

[00:03:06.880] - Liran Hason

Sure. Looking back three years ago, I think few people used the term MLOps. But in the last three years, I think we clearly see more and more companies building their MLOps practice. I think a lot of companies have invested so much time and effort in collecting data and also building data science teams, but they have yet to realise the full value of their models. In the last few years, they realised that the core missing part is really having the proper infrastructure for that.

[00:03:43.530] - Liran Hason

That's when they started hiring ML engineers and building their ML platform teams to create this centralised infrastructure. This is one thing we see. One more interesting trend I see is that, as the ML market is maturing, we see more categories being formed.

[00:04:06.160] - Liran Hason

If beforehand you could find some tools trying to do all sorts of things in one platform, now you can see specialised tools like Feature Stores, ML Observability platforms, experiment tracking. I think it's a great sign for the market because, at the end of the day, similar to the software engineering space, every company has different needs.

[00:04:29.050] - Liran Hason

When it comes to building and deciding on your tech stack, whether it's machine learning or not, you have to be able to choose what's best for your needs. In that case, having the ability to choose the best Feature Store, the best Observability platform, really allows organisations to build a great ML platform.

[00:04:48.940] - Simba Khadder

Do you think that trend of unbundling... Because if you zoom back, the companies that came before us, companies like DataRobot, Domino, et cetera, they all started really by building the end-to-end MLOps platform. Like you said, over this last generation of companies, which both Aporia and Featureform fit into, there has been this trend towards unbundling: best-in-class components that you fit together based on your needs.

[00:05:19.280] - Simba Khadder

Do you think that will continue? Do you think there'll be more unbundling? Do you think, especially now that the market is looking a little shakier, there'll be more consolidation? How do you see things changing in the future?

[00:05:29.790] - Liran Hason

Oh, definitely. I think we can just take a look at the development or software engineering space. You won't find the term 'dev platform'. Maybe GitLab is trying to become one, and GitHub, but they didn't start there. I definitely see the same thing in our world of MLOps.

[00:05:50.600] - Liran Hason

Obviously, there's room for companies like DataRobot and Domino and others, but I think what we're starting to see is a segregation between which companies, teams, and stakeholders are using and buying platform solutions versus best of breed. I think when you look at specialised data science teams that really take the data, build their own models with their bare hands, and understand exactly what they're doing, yes, they have to have specialised tools.

[00:06:23.260] - Liran Hason

I definitely see this trend of best-of-breed tooling catching on and growing over time. I do expect to see, by the way, more categories than we have today.

[00:06:33.620] - Simba Khadder

I think that's true. I love to use the DevOps market as an analogy to the MLOps market. I think my sense of why this is happening... Because if you look at the early days of DevOps, there were more attempts. You could argue App Engine, Parse.com, all those sorts of companies were really trying to build the all-in-one DevOps platform. Most of those have pivoted away or don't exist anymore.

[00:07:02.820] - Simba Khadder

I think a lot of what I've seen, especially in ML, is everyone has such different pain points. If you're doing computer vision at a large enterprise or fraud detection at a startup, the problems you face are different. The MLOps vendors you need, or let's say Observability versus Feature Store: based on what you're doing, you might eventually need both.

[00:07:26.960] - Simba Khadder

But in the early days, maybe you hit problems where Observability is required first. It's like, "Hey, we can get away with our hacky thing for the Feature Store, but we need Observability now," or vice versa, where it's like, "Hey, the data processing, our data pipelines are all screwed up. We need the Feature Store now. Observability, based on what we're doing, we can get away with something ad hoc for another six months or whatever."

[00:07:53.060] - Simba Khadder

Do you buy that? Is that how you see it? Is it just based on the complexity of use cases or do you think there's more to it than that?

[00:07:59.680] - Liran Hason

I agree with you. At the end of the day, it's a lot about the different use cases. Yes. If I'm developing a recommender system as part of an ecommerce website, I need to deal with real-time, large-scale data. That's one thing. If, on the other hand, I'm developing offline models running in batch, let's say once every six months, well, I need a whole different infrastructure for that. I think priorities will change accordingly. As you said, is it the Feature Store first or Observability?

[00:08:29.720] - Liran Hason

But also, when you think about it, there is so much depth in each of these areas. In a Feature Store, there is time travel. There's how you do collaboration, how you do security and access, how you serve the same features for training. There are so many aspects just in a Feature Store.

[00:08:49.860] - Liran Hason

The same thing happens with Observability. How do I create dashboards? How do I create an effective monitoring and alerting system? How do I actually allow users to learn and gain insights from production data and improve their models? It's a whole world, and there is so much depth to it. I don't believe there could be one platform that does everything greatly.

[00:09:16.740] - Liran Hason

That's the way I think about it.

[00:09:20.690] - Simba Khadder

One thing I'd love to zoom in on, a question I've been asked before on the Observability side, and I'm sure you're going to have a better answer than me because it's what you think about all the time. People ask me, "Hey, I use, let's say, Prometheus and Grafana for my DevOps monitoring and Observability or whatever. I have this DevOps tooling." They're trying to understand why models are so different, because it's not obvious to them.

[00:09:47.410] - Simba Khadder

But I'd love to hear it in your words. Why does this category need to exist? Why does it differ so much from traditional monitoring that it requires a specialised tool?

[00:09:58.380] - Liran Hason

It's a great question. First and foremost, as an engineer, I tend not to reinvent the wheel if there is no need to do so. I can share with you, Simba, that even at Aporia, we were thinking, maybe we can use Grafana internally. Let's not waste time on things that were already implemented very, very well.

[00:10:19.860] - Liran Hason

Why is it so different monitoring models versus monitoring traditional application workloads? When you think about Grafana, essentially what it's monitoring with Prometheus is metrics. It could be CPU, memory, and so on and so forth.

[00:10:35.100] - Liran Hason

With machine learning, when you want to monitor your model, what you essentially want to do is, one, track performance. But usually it takes a lot of time until we get ground truth in production. It could be a few weeks, it could be a few months.

[00:10:52.720] - Liran Hason

In the meantime, until you get this data, the ground truth and the labels, you need to have some proxy metrics. What you do is calculate data drift and other proxy metrics. Now, just the logic of calculating these metrics, even calculating accuracy: let's say you do have ground truth and you want to track accuracy or F1 score on your production data. You need to set up some ETL that will take this data, take the ground truth that is received with delay, calculate accuracy historically, put it in one place, and then send it to Grafana.
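To make that concrete, here is a rough, hypothetical sketch of the kind of ETL described here, not Aporia's implementation: logged predictions joined with ground truth that arrives late, accuracy and F1 computed historically, plus a simple PSI drift score as a proxy metric while labels are still missing. The table and column names are made up for illustration.

```python
# Rough sketch, not Aporia's implementation. Assumes two logged datasets:
#   predictions: prediction_id, timestamp (datetime), prediction, feature columns...
#   labels:      prediction_id, label (binary, arrives days or weeks later)
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score


def population_stability_index(reference: pd.Series, production: pd.Series, bins: int = 10) -> float:
    """Simple PSI drift score between a reference (training) feature and its production values."""
    edges = np.histogram_bin_edges(reference.dropna(), bins=bins)
    ref_pct = np.histogram(reference.dropna(), bins=edges)[0] / max(len(reference.dropna()), 1)
    prod_pct = np.histogram(production.dropna(), bins=edges)[0] / max(len(production.dropna()), 1)
    ref_pct = np.clip(ref_pct, 1e-6, None)    # avoid log(0)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))


def weekly_performance(predictions: pd.DataFrame, labels: pd.DataFrame) -> pd.DataFrame:
    """Join late-arriving ground truth onto predictions and compute accuracy/F1 per week."""
    joined = predictions.merge(labels, on="prediction_id", how="left")
    labeled = joined.dropna(subset=["label"])  # only rows whose ground truth has arrived
    return (
        labeled.assign(week=labeled["timestamp"].dt.to_period("W"))
               .groupby("week")
               .apply(lambda g: pd.Series({
                   "accuracy": accuracy_score(g["label"], g["prediction"]),
                   "f1": f1_score(g["label"], g["prediction"]),
                   "n_labeled": len(g),
               }))
    )
```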

[00:11:31.860] - Liran Hason

It's not as easy as you might think, especially when dealing with data at large scale. This is just one aspect of it. Once you've done that, and in reality there are plenty more edge cases that I'm not getting into right now, what you'll find is you just end up with some pretty dashboard in Grafana, but it only looks at and analyses the data at a very high level.

[00:11:58.240] - Liran Hason

In reality, a lot of the time the issues surface more in specific segments of your data. This is something you can't really do with Grafana. How is my model performing for people in the state of California versus people in New York, for example? All these things become much, much more complex.
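A small, equally hypothetical sketch of the segment-level view mentioned here, assuming the labeled production predictions carry a 'state' column:

```python
# Illustrative only: the kind of per-segment breakdown that's hard to get from Grafana.
# Assumes a dataframe of labeled production predictions with a 'state' column.
import pandas as pd
from sklearn.metrics import accuracy_score


def accuracy_by_segment(df: pd.DataFrame, segment_col: str = "state") -> pd.DataFrame:
    """Accuracy and volume per segment, sorted so the weakest segments surface first."""
    return (
        df.groupby(segment_col)
          .apply(lambda g: pd.Series({
              "accuracy": accuracy_score(g["label"], g["prediction"]),
              "volume": len(g),
          }))
          .sort_values("accuracy")
    )

# e.g. accuracy_by_segment(labeled_predictions) would surface a California vs New York gap.
```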

[00:12:20.740] - Liran Hason

One more point here: we're not tracking metrics anymore, we're not tracking logs, we are tracking data. We are tracking the inputs, the feature vector getting into the model, as well as the predictions. I hope it makes more sense why it's different. You can use Grafana, but you're very limited compared to a dedicated Observability tool for machine learning.

[00:12:46.280] - Simba Khadder

Yeah, it makes sense. The way I think about it is you're trying to understand why the model is doing what it's doing, and understand and observe when it's doing something strange. It's a lot more nuanced than, let's say, a server, where it's like, "Hey, it's not responding to requests. It's not working."

[00:13:06.730] - Simba Khadder

I think it's also that the complexity of what correct means changes. For an application, it's pretty obvious when it's correct. You can go through the code. If it's not doing stuff correctly, there's probably a bug. What a bug even means in a model is a whole different question, where I think Observability becomes a core part as you move closer and closer to truly understanding what the models are doing.

[00:13:31.630] - Simba Khadder

I want to zoom out a little bit. You mentioned a bit about the creation of MLOps as a category. I have a similar story where I built a whole MLOps platform for my last company. It wasn't called MLOps. I don't even know what we called it.

[00:13:45.290] - Simba Khadder

People ask me. It was just like, "Oh yeah, our data platform," which is now a Feature Store and Observability. I think we just called it that, Observability. It wasn't model Observability; it was just Observability, but for the models.

[00:13:58.000] - Simba Khadder

Part of that is, like you said, there's been an explosion of data scientists. People understood and saw the promise of ML. They hired a ton of data scientists. They were building models. They'd never put a model in production before.

[00:14:09.200] - Simba Khadder

The companies and organisations themselves hadn't really done that before. This grew into the explosion of MLOps: "Cool, we know how to build models. How do we actually build a system around models and operationalise them, literally MLOps?"

[00:14:25.000] - Simba Khadder

I guess one thing we've talked a little bit about before, and I've seen you talk about, is this idea of an ML product and getting to that next level of, "Cool, now we can get models into production." Well, there are almost all these questions of, "Well, why, and what models do we put in production?"

[00:14:39.450] - Simba Khadder

I'd love to first get your definition of how to think in terms of an ML product, what that means. Yeah, let's start there.

[00:14:46.600] - Liran Hason

You mentioned before that companies are now really focusing on operationalising their models. I'll use another word that is similar but yet different, which is how do we productionise these models?

[00:15:00.440] - Liran Hason

I think a lot of data science teams and organisations find themselves developing models. Probably they have artifacts lying somewhere; maybe they even run them on inference. But are the business stakeholders familiar with the results, with the business impact that these models are making?

[00:15:20.780] - Liran Hason

This is a huge question. These data science teams and these ML engineering teams were built in order to achieve some goals for the company. Now, they might be doing extremely well, but how well is this information, this success, being shared with other stakeholders?

[00:15:41.140] - Liran Hason

That's where I see a lot of our clients, and more and more ML teams, shifting their mindset, really, as you said, from building machine learning models to building machine learning products. When I talk about the difference, it's not just terminology or buzzwords. I'm not a huge fan of buzzwords.

[00:16:03.820] - Liran Hason

Let's talk about what an ML product is. An ML product, the broad definition, is a software product that facilitates an end goal of a user base through the use of machine learning. The machine learning model is a core part of the ML product. But now, in addition to the model, we've added a couple more parts: one, it's a software product; number two, there is an end goal; and number three, there is a base of users which we want to satisfy.

[00:16:35.470] - Liran Hason

This means we need to think more about the big picture when it comes to building ML products instead of building ML models.

[00:16:47.420] - Simba Khadder

I guess there's this concept of a data scientist, and what a data scientist is today depends on the company, but arguably they used to just do everything. They put things in production, they were working in notebooks, they did everything. Do you think that this ML product thinking is going to be something that maps onto traditional roles, like, "Hey, you still have software engineers, you still have PMs, everyone does ML everywhere"? Or do you think there'll be specialisation, like MLPMs? What's your take on that?

[00:17:19.380] - Liran Hason

Let's start with today. Today, in most cases, well, there are some MLPMs, and I'm really jealous of the teams that have this role. I think that's really amazing. But in most organisations, what you'll find is that it's either the head of data science or the data science lead who is actually performing this role in practice.

[00:17:44.680] - Liran Hason

Going forward, I do believe we'll see more and more MLPMs, just as we see more ML engineers; the ML engineer position is one of the most sought-after jobs in the US today. Over time, I expect to see at least more demand for MLPMs.

[00:18:04.880] - Liran Hason

What is the exact definition, or what is the exact background and skill set needed for that role? It's going to be a very interesting question, because it's a combination of having people skills as a product manager, being able to look at the big picture, as well as understanding data scientists and really translating business needs into data science metrics.

[00:18:30.120] - Liran Hason

Quick example: let's say I'm a data scientist at Uber Eats, and I was tasked with predicting how long a delivery is going to take. An important question to ask from a product management perspective is, what are we trying to solve?

[00:18:45.580] - Liran Hason

Do we want an exact time frame, down to the second? Is it going to take 700 seconds to get the delivery, and then we're talking about a regression problem? Or are we talking about classification, where we just want to know it's between 5 and 10 minutes? That framing can reduce the whole complexity of building the model.
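As a quick hypothetical illustration of that framing choice, the same delivery-time label can be kept as a regression target in seconds or bucketed into the coarse classes a user actually cares about; the thresholds and names here are invented, not from the episode:

```python
# Hypothetical illustration of the framing choice described above: the same
# delivery-time label treated as a regression target (seconds) vs. a
# classification target (coarse buckets like "5-10 min").
import pandas as pd

deliveries = pd.DataFrame({"delivery_seconds": [290, 430, 700, 1150, 2400]})

# Regression framing: predict the exact number of seconds.
regression_target = deliveries["delivery_seconds"]

# Classification framing: predict a bucket, which may be all the user needs.
buckets = [0, 300, 600, 1200, float("inf")]
labels = ["<5 min", "5-10 min", "10-20 min", "20+ min"]
classification_target = pd.cut(deliveries["delivery_seconds"], bins=buckets, labels=labels)

print(classification_target.tolist())
# ['<5 min', '5-10 min', '10-20 min', '10-20 min', '20+ min']
```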

[00:19:07.180] - Liran Hason

But asking these questions and then translating them into machine learning related metrics, like what F1 score we're aiming for or what the accuracy should be, having this ability requires a lot, and it's going to be interesting to see which people take on these roles.

[00:19:28.720] - Simba Khadder

Yeah, it makes sense. The product managers, in general, are almost like glue. They combine different parts of the company. They typically spend a lot of time speaking different languages.

[00:19:39.540] - Simba Khadder

To a data scientist, you might need to specify it one way; when you go talk to the head of whatever, or the chief revenue officer, you'll frame it a different way; for the CFO, you frame it a different way; you talk to an end user a different way. I think it's someone who can glue all those things together, speak all those different languages, and then translate as needed to make sure that everyone focuses on their specialties and that it all fits together. Is that how to frame or think about the MLPM role?

[00:20:07.680] - Liran Hason

Yeah, absolutely. Now, the main question is, until we have plenty of them, if we ever get there, what should we do? What should data science teams do? Just last week I was talking with a group of other lead data scientists about how we can better connect the work we're doing with the business value, and how we can better communicate it to other stakeholders. It could be the marketing manager, it could be other stakeholders. I think it's really an interesting problem that needs to be solved.

[00:20:41.300] - Simba Khadder

I agree. I think it's really interesting to see how org structures change. Even if you look back at early ML in an enterprise, it used to all live under one VP. It used to be very siloed. There was the ML team.

[00:20:57.440] - Simba Khadder

Over time, I've been seeing more and more data scientists just get thrown in different organisations. We're still a little bit siloed. I think historically, ML used to exist in silos. There was like one VP, and they had their ML project, and it was almost like a proof of concept.

[00:21:15.050] - Simba Khadder

Over time, a lot of those have been successful. We've seen data scientists spread out recently. When I go talk to larger organisations, they have data scientists in lots of different places. Every org has a data science team.

[00:21:28.900] - Simba Khadder

I think more and more, you will see data scientists thrown into these teams. A software engineering team typically will have a DevOps person or a few DevOps people, and I think data scientists will also begin to fit into these pods. It will be, let's say, N software engineers per M DevOps engineers per X data scientists. I think as that happens, yeah, there will need to be this familiarity in context switching and just thinking.

[00:21:58.860] - Simba Khadder

I just think ML, in general, is going to become such a huge part of... It makes sense, as productivity per head count increases when people are really doing ML well. Because the value used to be software. Now that software is something everyone can do, the next value to be created is taking data and building, I guess, an engine to continuously drive more end user value.

[00:22:24.480] - Simba Khadder

Do you see ML continuing to grow? I guess, given the ML product thinking, do you think there needs to be more ML products? Do you think that we need to be more clear about what we're doing with the products today? How do you see, I guess, the growth of ML in the data science world over the next few years?

[00:22:46.520] - Liran Hason

I believe there is a huge space to fill with the mindset and thinking of ML products. A few weeks ago, we shared our framework for great ML products. Really, it's about how to be user-focused and goal-driven. One of the things that I mentioned is how you visualise from the very beginning.

[00:23:10.140] - Liran Hason

At the beginning of a machine learning project, I strongly believe that you should be thinking about the end result and visualising it, not only in your mind, but in a shareable place with other people and colleagues. Just to give an example, let's say we're tasked with building a recommendation system for an ecommerce website.

[00:23:32.950] - Liran Hason

What does the end result look like? Where exactly on the website are my recommendations going to be presented? What exactly is going to be presented? Is it just an image of the product? Does it include the price? And so on.

[00:23:49.280] - Liran Hason

I highly recommend using simple tools. I like simplicity. There's a great tool called Excalidraw, for example. It's an online whiteboard, very simple to use, and it's free. Just mock up some high-level wireframe of how it's going to look.

[00:24:07.200] - Liran Hason

In other cases, when we build machine learning models, the end result is not really a web interface or something visual, and still, I believe you can visualise it. What do I mean? Maybe there's another microservice consuming predictions from my model by calling a REST API.

[00:24:25.860] - Liran Hason

Then you should be asking yourself, what does that REST API look like? What are the endpoints? What parameters can it get? What are the edge cases that my model can expect? And how do I deal with all of that?
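For illustration only, here is a minimal sketch of what designing such a prediction API up front might look like, using FastAPI; the endpoint, fields, and edge-case handling are assumptions, not anything specified in the episode:

```python
# Hypothetical sketch of a prediction REST API designed up front, before the
# model exists, in the spirit of visualising the end result. Names are made up.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class PredictionRequest(BaseModel):
    order_id: str
    restaurant_id: str
    distance_km: float
    hour_of_day: int


class PredictionResponse(BaseModel):
    order_id: str
    eta_bucket: str        # e.g. "5-10 min"
    model_version: str


@app.post("/v1/predict/delivery-eta", response_model=PredictionResponse)
def predict_eta(req: PredictionRequest) -> PredictionResponse:
    # Edge cases the consuming microservice should agree on up front:
    # missing features, out-of-range values, model unavailable, etc.
    if not (0 <= req.hour_of_day <= 23):
        raise HTTPException(status_code=422, detail="hour_of_day out of range")
    # Placeholder response until a real model is wired in.
    return PredictionResponse(order_id=req.order_id, eta_bucket="5-10 min", model_version="0.0.1")
```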

[00:24:41.620] - Liran Hason

Really, by thinking about the end result, you can make sure you are aligned on the right goal from the very beginning of the project. This is just one example of one principle from the framework, and we've seen it so many times: once you follow these steps and these principles, you end up with more success at the end of the project.

[00:25:03.760] - Simba Khadder

This applies also to something like, let's say, fraud detection. In that case, for the PM or whoever, the product thinking would go into the API, like you were saying.

[00:25:13.440] - Liran Hason

Yeah, exactly. Fraud detection is also a good case. When we think about a fraud detection model, and we think about the end result and how our fraud detection ML product is going to look and operate at the end of the day: what happens, for example, when our model identifies or classifies a transaction as fraud? Does that mean the transaction gets blocked? Does that mean the transaction is getting... Maybe the user gets notified.

[00:25:44.840] - Liran Hason

This is important because that also allows us... When thinking about goals and metrics, one of the principles is measure. I like to set three different types of goals for these projects. One is business-related metrics. In terms of fraud, it could be chargebacks; that's one.

[00:26:07.130] - Liran Hason

Number two is usage. When we build a model, it usually takes a long time. How many people are using it? How many transactions are going through my fraud detection model?

[00:26:18.480] - Liran Hason

And the last part, also very important: deriving from the business goals, what are the data science KPIs? Yeah, this definitely also ties to credit risk, fraud detection, and plenty of other use cases.

[00:26:34.800] - Simba Khadder

Yeah, that's super interesting. I really like that framework of understanding the model beyond just, "Hey, this is what it predicts," but truly, how does it fit into everything and what does success look like.

[00:26:45.920] - Simba Khadder

I want to take the conversation somewhere else, because a lot of this is making me think of the current big focus and hype, especially around ML, which has been all the new generative models, foundational models. I'm curious whether this concept fits into your framework. Do you think it will change anything about how we think about ML? How do you think it will shake up the ecosystem of how companies do ML?

[00:27:11.340] - Liran Hason

Are you asking about the framework or the generative AI? Sorry, I missed that.

[00:27:16.320] - Simba Khadder

Yeah, specifically generative AI, foundational models. I'm curious how... Because when I think of ML product, something else that comes to mind is that a lot of companies are taking, let's say, an OpenAI API, and they're essentially trying to build a product around it. It seems very ML product heavy. Is that true? Do you think that all these new foundational models are going to change how we think about ML, how enterprises do ML? Or how do you think it's going to change things going forward?

[00:27:50.140] - Liran Hason

That's a super interesting question. I don't think that I have the answer; I have only my humble opinion here. But I do think that these generative models are pretty much changing the world that we are living in, in so many ways.

[00:28:05.500] - Liran Hason

I think ChatGPT, for example, is pretty much one of the first great examples of that. Also, when you look at ChatGPT, it's not the first of its kind. You can find other examples from other companies that came up with very similar models.

[00:28:24.380] - Liran Hason

But what made ChatGPT so successful, other than the pretty much crazy results and performance that they have, is also the accessibility of it as a product. It's as easy as going to chat.openAI.com, signing up with your Google account, writing down your prompt, and you'll get an answer immediately.

[00:28:49.640] - Liran Hason

With another solution like that, you're required to understand exactly how the API works. You need to feed it some parameters. Yes, even when you reach an amazing machine learning model, it's really important how you expose it to the world and what the interface looks like. That's, in a nutshell, ChatGPT as an example, connecting it with the ML product framework. I hope it makes sense.

[00:29:17.530] - Simba Khadder

Yeah, it definitely does. I think that's almost the purest form of it. It's like, "Hey, let's say you have an amazing model that was just given to you. Now what? What do you do with that?" That's where I think the product thinking really kicks in.

[00:29:32.240] - Simba Khadder

You could argue a lot of these new companies built around those models are purely just ML product companies. The model is just given. How do you build a product around it? How do you derive end user value from that?

[00:29:44.580] - Simba Khadder

How does Observability fit into this ML product framework? How does Aporia begin to collect business metrics and fit that in? Or does Aporia live in the ML stack, and then it's the PM's role to take that and compare it to the business stack? How do you think about that going forward?

[00:30:03.720] - Liran Hason

I like to think about Aporia as the Mixpanel for AI. If you think about Mixpanel, this is the place for a product manager to get in and see who's using our product, how they are using it, where we are lacking, and where we need to develop features or improve.

[00:30:21.900] - Liran Hason

The same thing happens with Observability and ML products. When it comes to starting a new ML product, you should be thinking with the end in mind. We talked beforehand about measurements and KPIs and metrics. You should be able to measure them as your model, as your ML product, is running in production.

[00:30:45.260] - Liran Hason

This is where Aporia fits into the picture. Our users, whether they're data scientists, ML engineers, or product managers, can see what models are running currently in production, what predictions they're making for users, and how well they are performing. Not only that, they can track usage, they can track business KPIs, and really make this connection, which a lot of times is very difficult to make, between the predictions the model is making in production and the bottom-line business results. This is what I see, and what we see, our users using Aporia for.

[00:31:23.820] - Simba Khadder

I love it. This Mixpanel analogy is great because it really shows that you're trying to break down this ML silo of, "Yeah, we have data scientists; they live over there; they build models over there; they deploy them," to breaking that down and "Hey, ML is just directly interconnected into our organisation everywhere. It's one of the many abilities we have to drive business value." That's how data scientists think. That's how everyone in the org thinks.

[00:31:55.020] - Simba Khadder

This has been great. I would love it if you could give maybe a tweet-length takeaway. If someone's listening to this and they're explaining it to one of their friends, "Hey, you should go listen to this great podcast with Liran," what should they say? What's the takeaway that you think they should leave with?

[00:32:09.920] - Liran Hason

The main takeaway, I think, is we as technical people... I'm an engineer. I like writing cool stuff. I like challenges. I like cool algorithms.

[00:32:20.220] - Liran Hason

But at the end of the day, we should be asking ourselves, what is the impact we're making and what is the impact of what we're building? Can we answer this question?

[00:32:32.800] - Liran Hason

I think this is really, really crucial, Simba, I guess, also when you started Featureform. At the very beginning, you just want to build something really, really cool. But later on, you realise that you really need to connect this cool thing that you built with value to users. Machine learning, in that sense, is no different. I think that will be the takeaway.

[00:32:55.000] - Simba Khadder

That's great. Thanks so much for joining me today. This has been a really, really great conversation. I love your perspective on, I guess, ML in general and how it will change. Yeah, we'll put this out soon, and we'll put lots of the links that you mentioned in the summary below.

[00:33:08.920] - Liran Hason

Thank you, Simba. That has been fun for me.
