LinkedIn's Feature Store & MLOps Workflows with David Stein

MLOps Weekly Podcast, Episode 9

David Stein, Senior Staff Software Engineer, Machine Learning Infrastructure, LinkedIn


Transcript:

[00:00:03.000] - Simba Khadder

Hey, I'm Simba Khadder and you're listening to the MLOps Weekly podcast. This week, I'm chatting with David Stein, the tech lead of the ML feature infrastructure team at LinkedIn. Most recently, he's been working on Feathr, LinkedIn's recently open-sourced feature store. David, it's so great to have you on the show today.


[00:00:20.830] - David Stein

Thank you, Simba. It's good to be here.


[00:00:22.630] - Simba Khadder

Maybe we can start by just talking about your journey to MLOps. What got you into MLOps? What did that journey look like?


[00:00:29.230] - David Stein

I've been at LinkedIn for about 10 years. Pretty much all of that time has been spent working on problems relating to MLOps. We didn't call it MLOps back then. The team I joined when I was new at LinkedIn was focused on the recommender systems stack and the experimentation stack.


[00:00:46.690] - David Stein

As you probably know, and probably many of the listeners know, LinkedIn has a lot of use cases of machine learning in production to improve the personalization and the quality of a lot of different aspects of the product.


[00:00:59.950] - David Stein

I was working on this team as a new hire that was building a bunch of the different recommender components, and there was definitely an awareness, going back 10 years, that there are a lot of common pieces, and a real utility in having good abstractions to minimize the redundant infrastructure work needed to maintain and support any one of those vertical areas.


[00:01:24.990] - David Stein

Now, that's probably obvious stuff to most people who are thinking about MLOps now, but I've really just been working on these projects to support multiple different recommender and ML systems over this time at LinkedIn.


[00:01:39.060] - David Stein

It's been a cool space to be in, definitely with all of the increasing interest in the industry in this space around how we streamline putting machine learning in production and what tools should be in place. That's how I got here.


[00:01:55.290] - Simba Khadder

You mentioned how you've been doing MLOps since before it was called MLOps. That makes a lot of sense; machine learning has been around a lot longer than the term MLOps, and it's been in production a lot longer, too. Let's start with what's stayed consistent. What has remained the same in how you think about MLOps and ML infrastructure over your career?


[00:02:15.130] - David Stein

Good question. What has been consistent, I would say, is a lot. Fundamentally, I think that machine learning development, applying machine learning to solve business problems for a company like LinkedIn, is a very experimentation-driven process. We have teams of researchers and engineers searching for the best algorithm, the best models, the best feature engineering and feature definitions.


[00:02:43.800] - David Stein

Searching to find the best algorithm to do the best possible job at helping LinkedIn members find job postings that are a good fit for them, or helping people find relevant search results, problems like that. We need to be able to search this space efficiently.


[00:02:59.720] - David Stein

I think there has been an awareness for a while about the obvious importance of making that as efficient as possible: to be able to productively search the space of solutions, to be able to try new things effectively and safely, to be able to get them into production, to be able to understand the impact.


[00:03:21.300] - David Stein

I would really say that it's about being able to try things easily and also being able to understand the current state so that you know what the next things to try might be. There's a lot more I could say. If you have any questions about what I said, we can go in any direction with that.


[00:03:35.500] - Simba Khadder

Yeah. One thing you've actually said a few times is "MLOps" and "ML infrastructure," and I think you have a distinction in how you think about them, which I think I do, too. We made up the term virtual feature store, which exists around the whole MLOps idea as opposed to the ML infrastructure idea. It creates a separation. How would you define, maybe first, MLOps, and then ML infrastructure in your own words?


[00:04:01.750] - David Stein

Sure. I think of MLOps as being about operations, the ability to move models to production, similar to how I think about things like CICD pipelines for general software engineering and development.


[00:04:16.940] - David Stein

I think we could probably describe a subdomain of machine learning infrastructure and tools that is specifically focused on being able to try things quickly, safely put models in production, and define new features easily, readily, predictably, reliably, with transparency into their metrics and things like that.


[00:04:36.990] - Simba Khadder

Yeah. I think another way to say what you're getting at is that MLOps is the workflow part, and ML infrastructure is maybe the underlying layer, all the tools that come into play, like the serving infrastructure and training infrastructure. That's where the CICD analogy maybe comes in. CICD doesn't do anything by itself; it coordinates everything.


[00:04:59.230] - David Stein

Yeah, I would say that's right. Practically speaking, we've been able to benefit a lot from tools maturing: basic ML infrastructure, tools and frameworks and platforms that let you define and author models, and machinery for training models.


[00:05:15.460] - David Stein

Over the span of time I've been talking about, the past 10 years, we've seen things like TensorFlow and PyTorch become prominent. I think the infrastructure pieces that we invest in now have probably changed a little bit, because we're not really trying to reinvent things like that where good standards have really emerged in the industry.


[00:05:37.530] - David Stein

Instead, it's like you were saying: integration, making it so that people can do their experimentation and then take it and evaluate what it's going to mean to run it in production. Then safely try running it in production, and have that CICD-but-for-machine-learning development pipeline built around that standard, golden set of industry tools.


[00:05:58.510] - David Stein

I think that to some extent, the game has changed in the past few years. It's less about inventing an entire set of tools and more about finding the good ones that have really built a lot of momentum around themselves, that are well understood in the industry.


[00:06:13.150] - David Stein

Tools that are really well built and employed at many of the biggest places, that are also well plugged into the education system and that people are using in their graduate programs and things like that. Then building the CICD pipeline to make it so that you can put those tools to use efficiently and effectively for that ML prospecting.


[00:06:34.390] - Simba Khadder

It's so interesting how the DevOps versus MLOps analogies come up a lot. CICD obviously is a great example of this. It's something that I talk about a lot. I think even beyond just the problem to be solved, it's a similar problem applied to different people. There's also... even the markets, which I won't quite get into in this conversation yet.


[00:06:55.340] - Simba Khadder

But even the market looks the same. In the early days of DevOps, everyone built their own things in-house. Borg existed at Google. All these people just built everything from scratch, because that's what you did, and then some standards got built and blessed in the early days. There were a billion tools that people used, and now it's like, yeah, you use Kubernetes, you use Terraform, you use whatever CICD tool.


[00:07:15.820] - Simba Khadder

It's finally become a little more standardized. One big difference, though, between DevOps and MLOps is experimentation, like you said. In DevOps, it feels like you know where you're going. It's rare that you're just saying, I'm going to try this and see if that works, and if that doesn't work, I'll try this other thing.


[00:07:34.610] - Simba Khadder

It doesn't happen as often. It's definitely not a constant part of the process. Is that fair? How do you think about the difference? Maybe, what is your view of the difference between DevOps and MLOps? Why are they different fields? Why don't we just expand our CICD to have a model training step?


[00:07:52.680] - David Stein

Yeah, that's a really interesting question. You mentioned that with DevOps, there's a difference in terms of the focus on experimentation?


[00:07:59.440] - Simba Khadder

Yeah, I feel like MLOps is much more experimental. A lot of what I'm doing as a data scientist is exploring data, trying different models out. There are a lot more, I guess, random walks, like a random walk algorithm, and just more research, I guess, on the data science side.


[00:08:18.840] - Simba Khadder

Whereas in DevOps, as a developer working on software, I know what the next thing I need to build is. It's rare that I'm like, well, I'm going to try to build this feature, then I'll build this feature and this feature and see what happens.


[00:08:29.580] - David Stein

That is interesting. I think that is true to an extent. My understanding of DevOps is mostly from the machinery within LinkedIn, since I've been at LinkedIn for a long time. I do see it as being, to some extent, about experimentation, though not exactly in the same way that you experiment to decide what direction to go in your ML project.


[00:08:51.500] - David Stein

But there is a system in place where you make modules prove that they are safe to deploy, by validating them with metrics and measuring what happens when you deploy them on just one piece of the cluster. Things like feature flags exist as another space where you're able to do that. I don't know if this is really considered CICD.


[00:09:09.830] - David Stein

But I consider all these things part of the same group of things that lets us manage changes. There might be a risk that a change is not safe, that it may cause some failure when things are pushed into production. Having automated testing machinery makes sure we can safely evaluate how all of that works, and that we can understand it and push a button and revert to a previous safe state.


[00:09:31.150] - David Stein

Not fully trusting any change that's about to be made, needing to measure some things about it before deciding to put it in front of everyone: that is actually, from my point of view, common to a large extent. A lot of those pieces are maybe less about experimentation in one sense, but you still need to measure and validate before pushing things out.


[00:09:56.690] - David Stein

There are definitely differences. I think a lot about not just DevOps tools, but other tools that we use to solve various productivity challenges in software engineering, in terms of trying new things or moving them into production. There are, as I think you said, a lot of analogies that we as an industry should be exploiting more.


[00:10:16.250] - David Stein

What are the lessons we can learn from collaboration problems or rapid artifact deployment, things like that, where good solutions have already been built for software engineering? Taking those lessons and applying them in the ML domain is a big part of what MLOps is, in my opinion.


[00:10:34.120] - Simba Khadder

I think there's so much here still to unpack, and I think a lot of it just hasn't really been figured out in the space yet. I think there are just a lot of open questions. I think the reason that's true is that I don't know if anyone really has a perfect MLOps workflow, unlike in DevOps.


[00:10:50.790] - Simba Khadder

Even in the early days of DevOps, people were like, well, this is what Google does, we'll do it like that, or this is what name-any-top-technical-company does, that's the gold standard. In ML, I've worked with and talked to a lot of companies, and I don't know if I've ever met anyone who's like, yeah, we do it perfectly here. We've got it down. This is how it should work. Everyone should do it like us.


[00:11:12.550] - Simba Khadder

Not even from an ego point of view; I don't think anyone's even close to that yet. A lot of people are like, yeah, we're still very much in the early days. It would be really interesting to understand: let's say you are a data scientist at LinkedIn working on machine learning. What does their workflow look like?


[00:11:25.340] - Simba Khadder

You talked about the analogy of it, but I'd also be interested to understand what they're interacting with. Like, do they use DataHub or something else to understand the features and data that exist?


[00:11:36.060] - David Stein

Good question. On things I can comment about publicly: there are some blog posts that our machine learning infrastructure team has released over the past few years about our productive machine learning and machine learning infrastructure initiatives.


[00:11:52.970] - David Stein

I encourage listeners to go check those out. If you search for LinkedIn Pro-ML, Pro-hyphen-ML, it's a project that we've publicly described before, and there are blog posts that talk about some aspects of these things. The pillars of this are around giving users, who are LinkedIn engineers working in the applied ML space, tools for exploring data sets and trying model authoring, running modeling ideas and feature definition ideas.


[00:12:26.690] - David Stein

Being able to do that with open standard tools: TensorFlow is one example that LinkedIn is definitely using heavily; I'm pretty sure that's been publicly described. Moving along in the workflow, going step by step, it depends on what you're doing. If you're developing a new model pipeline for a new problem versus iterating on an existing one, the workflow could look a little bit different.


[00:12:47.640] - David Stein

But I guess the notional thing is that you are able to explore data sets. LinkedIn does have tools like DataHub, for example. DataHub, of course, is an open source solution which originally came from LinkedIn. We have solutions like this. We have heavy use of Spark.


[00:13:01.120] - David Stein

We have model training workflow definition systems that are in-house solutions, which I think are described in some of these public engineering blog posts, that our engineers are able to use to define a training pipeline and identify which features from the feature store to pull in.


[00:13:16.770] - David Stein

A feature store is obviously a whole other piece here: being able to define what the features are in terms of the raw data sets that the company has. Anyway, you're able to pull them into the model, run the training, see some of the metrics, see the impact, and find an idea that you want to deploy because it looks like it's having a positive benefit.


[00:13:33.520] - David Stein

Then you use the other parts of the workflow and the platform, which are LinkedIn's model deployment and registration systems that push the model into LinkedIn's production environment for running inference. There are more details I could give on some of those areas. I obviously have the most focus on the feature definition, productionization, and feature exploration side of this.


[00:13:57.780] - David Stein

Yeah, that is the high-level workflow, vaguely described, and folks can look at those blog posts to see a few more details.


[00:14:05.010] - Simba Khadder

Is there any design decision that you all made on the ML platform? If you can't share about the whole platform, it could just be about Feathr. Maybe a design decision that was made that you think is unique, something that's uncommon that really worked out for you?


[00:14:21.600] - David Stein

I'm thinking about whether to try to answer this for the whole machine learning platform or to focus more on my subdomain around the feature infrastructure and the feature store. I'll probably talk first about Feathr, and we could talk more about other aspects of the ML platform if I come up with ideas I'd like to share more generally.


[00:14:41.980] - David Stein

Something with Feathr that we prioritized in the early days, a design principle, was a focus on collaborative machine learning feature definition, being able to share work. We didn't just want a system that lets you define some features, ship them to production, and use them.


[00:15:04.360] - David Stein

We wanted to be able to define features based on raw data sets in a way that the definitions one engineer writes can be used right alongside definitions other engineers write. That may sound like a simple or obvious thing, but there's a set of low-level, important details at the core of how Feathr works that required us to lean hard on that assumption in order to work efficiently.


[00:15:30.900] - David Stein

For example, only loading certain source data sets once, even if many different engineers decided to base feature definitions on those common sources. It required us to make design decisions in the details of how Feathr works to allow it to be this collaborative system, and it would have been really hard to try to add that on as a feature down the road.


[00:15:54.610] - David Stein

We leaned in pretty hard on that, and I think it paid off, because it is one of the cool things we provide: you can use feature definitions that go against common sources. The definitions don't need to know anything about each other. They can be written in different places, and they can be consumed and used in a common place, computed efficiently together, in a way that facilitates this collaborative setup.


[00:16:20.710] - David Stein

That also enables simple feature definitions. It's not as though every feature that's defined on certain data sources needs to be jammed into one place. There are other details like that which I think have led to a better ecosystem than we would have had if we hadn't focused on that as a principle.
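
To make that load-once idea concrete, here is a minimal Python sketch of the concept. It is not Feathr's actual API; all names here are hypothetical. Two engineers define features against the same named source, and the engine groups definitions by source so each raw data set is scanned only once.

    from collections import defaultdict

    # Each engineer registers (source_name, feature_name, transform) independently.
    feature_defs = [
        ("member_activity", "login_count", lambda rows: sum(r["logins"] for r in rows)),
        ("member_activity", "avg_session_mins", lambda rows: sum(r["mins"] for r in rows) / len(rows)),
    ]

    def load_source(name):
        # Stand-in for one expensive scan of a raw data set.
        print(f"loading {name} once")
        return [{"logins": 3, "mins": 12.0}, {"logins": 1, "mins": 4.5}]

    def compute_all(defs):
        by_source = defaultdict(list)
        for source, feature, transform in defs:
            by_source[source].append((feature, transform))
        results = {}
        for source, features in by_source.items():
            rows = load_source(source)              # single load per source
            for feature, transform in features:
                results[feature] = transform(rows)  # every team's transform reuses it
        return results

    print(compute_all(feature_defs))  # -> {'login_count': 4, 'avg_session_mins': 8.25}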


[00:16:41.330] - Simba Khadder

I want to dig into it. We both know this space very well, so I can assume what you're getting at, but just for people who maybe don't: in our case, one thing that we're leaning into is immutability; that's one way we're able to do feature sharing and other things. You mentioned one thing you decided in designing the API is that feature sharing is a first-class thing that always needs to be possible.


[00:17:03.880] - Simba Khadder

It's not something that can be added later; like you said, you have to have it as a priority from the beginning for an implementation to be possible. Otherwise, there's no way to do it without a full rewrite, more or less. What are some of the things you actually did to allow you to implement feature sharing? Maybe, what did you have to limit, is how I think of it.


[00:17:23.690] - David Stein

Sure. The first thing that comes to mind is that Feathr actually controls the loading of the underlying data. It doesn't put the ability to use Spark APIs to load the underlying data directly into the hands of the engineer who is writing the feature definition.


[00:17:42.150] - David Stein

Users specify the location of the underlying data, but then they're subject to some API constraints that Feathr provides in terms of defining what transformation and aggregation operations can happen on that data. It's not as though users are able to provide the loading of the data and the transformation of the data all in one bundle.


[00:18:03.770] - David Stein

I'm saying that a little simply: basically, you load the data in one place, where it can actually be managed independently, and then run all of the related pieces of transformation logic that may have been written by various engineers.


[00:18:16.370] - David Stein

This is a limitation in the API, and it was there from the early days. This may give rise to other questions; I'm painting with broad strokes, and there are some details that may differ slightly from the main rule that I described. But hopefully, as a general rule for designing a platform, there are certain things you need to be careful about if you want to get the API design, the abstraction design, right.


[00:18:43.110] - David Stein

Forcing the transformation to be specified separately, or at least in a separate UDF or a separate definition module from where the data gets loaded, is just one example of something that enables that. It's one small thing.
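
As a rough illustration of that constraint, here's a hedged Python sketch, again with made-up names rather than Feathr's real classes. The user declares where data lives and a transformation UDF, but the UDF only ever receives rows the platform has already loaded; there is no hook for the definition to read arbitrary data itself.

    class Source:
        """A named pointer to raw data; the platform owns the loading."""
        def __init__(self, name, path):
            self.name, self.path = name, path

    class FeatureDef:
        """A transformation declared separately from any data loading."""
        def __init__(self, name, source, transform):
            # transform is a pure function over already-loaded rows;
            # it cannot open files or call spark.read itself.
            self.name, self.source, self.transform = name, source, transform

    job_views = Source("job_views", path="/data/job_views/daily")
    views_count = FeatureDef("views_count", job_views,
                             transform=lambda rows: len(rows))

    # The platform, not the user, performs the load and applies the UDF:
    rows = [{"job_id": 1}, {"job_id": 2}]  # pretend this came from job_views.path
    print(views_count.transform(rows))     # -> 2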


[00:18:56.980] - David Stein

There are other things, too. We built machinery early on to allow teams to import feature definition modules that were written by other teams. We basically built that capability early in the lifespan of the project. I'm trying to think what other lower-level details or even higher-level insights to offer on the question.


[00:19:18.960] - David Stein

But hopefully that's making some sense, or I don't know if you have any specific angles you want to go further into.


[00:19:23.980] - Simba Khadder

I'd actually love to zoom back out now. I think one thing everyone can take from this, whether you're building a feature store or an ML platform team, is that a lot of the hard parts of MLOps nowadays, I think, are API problems. How do you design an API that actually works?


[00:19:42.060] - Simba Khadder

Implementing the API is hard, obviously, it's not easy, but I feel like that part is just better understood. I think you can get a team of smart data engineers or MLOps people and you can build anything. But deciding what the right abstraction is means coming up with the right API, and every API comes with costs and benefits.


[00:20:01.950] - Simba Khadder

There's no perfect API. There's always going to be some trade-off. What do you think of that? Is that a fair way to think about MLOps and building platform tools?


[00:20:12.750] - David Stein

Yeah, I definitely agree with that. Defining the right APIs is huge, because it's related to defining the clean concepts that people are supposed to work with in order to use a system. I personally think that a lot of the biggest innovations and breakthroughs, the most important things in fields like big data over the past 10 or 20 years, have been related to APIs that really got it right.


[00:20:37.630] - David Stein

Things like MapReduce are a really obvious example. That's basically an API. It all previously existed, obviously, because of functional programming and all those fundamentals. But the application of a good API that lets you couch the problem in a specific way can be extremely powerful, because it can help people think about how to formulate their problem in a way that a good platform can actually serve.
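
Since MapReduce is such a canonical example of an API that couches a problem, here's a toy word count in Python showing the shape of the contract: users write only a map and a reduce function, and the framework owns everything else. This is a sketch of the idea, not any real framework's interface.

    from collections import defaultdict

    # Users supply just these two functions; the framework owns partitioning,
    # shuffling, distribution, and fault tolerance.
    def map_fn(line):
        for word in line.split():
            yield word, 1

    def reduce_fn(word, counts):
        return word, sum(counts)

    def run_mapreduce(lines, map_fn, reduce_fn):
        shuffled = defaultdict(list)
        for line in lines:                   # "map" phase
            for key, value in map_fn(line):
                shuffled[key].append(value)  # shuffle: group values by key
        return dict(reduce_fn(k, v) for k, v in shuffled.items())  # "reduce" phase

    print(run_mapreduce(["a b a", "b a"], map_fn, reduce_fn))  # -> {'a': 3, 'b': 2}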


[00:21:00.640] - David Stein

Finding the right abstractions, the right APIs, is the name of the game. I could go a little bit into one facet of this that was important in Feathr's design, and that is an idea that may even seem obvious now, I guess, now that we have feature stores and we know what they are and we talk about them.


[00:21:20.460] - David Stein

But I would say that early on, when we proposed and built this out originally, it was not a settled, obvious idea that a feature should be an abstraction, an entity, a concept that exists outside of a specific model, for example, or a specific feature transformation program.


[00:21:44.280] - David Stein

The idea that a feature exists as an entity outside of the big data lake environment or outside of the online system it lives in, that we could have a registry for features, that they should have names and types, and that we should be able to reason about them separately from the code that produces them.


[00:22:03.020] - David Stein

I think this is an innovation that many of us collectively figured out while trying to solve the problems that we had at these businesses in terms of being able to organize the work. We had smart engineers designing features before that, but it was entangled in the model logic or other parts of the data pipeline.


[00:22:20.900] - David Stein

The core piece in Feathr's conceptual model for features is: you define named features, and then you get those features by their names in the right context.


[00:22:37.690] - David Stein

That was the simple abstraction, the API, basically. It's really a meta-API, actually, because then you have a couple of different APIs that actually make that happen in different contexts: one for training, one for bulk data generation, and one in the online setting for inferencing and things like that.


[00:22:55.480] - David Stein

But the heart of the abstraction, that was the nut of it: define features that are named, get those features by their names. That's a big part of what makes feature stores useful, giving people a platform that cuts the problem down in that way.
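
A minimal sketch of that "define named features, get them by name" abstraction might look like the following Python; the registry shape and function names are assumptions for illustration, not Feathr's API. In a real system, the same names would resolve through different paths for training joins, bulk generation, and online lookups.

    registry = {}

    def define_feature(name, dtype, compute):
        # Features are first-class named entities with a type and producing logic.
        registry[name] = {"dtype": dtype, "compute": compute}

    def get_features(names, entity):
        # Consumers only ever ask for features by name; they never see
        # the code that produces the values.
        return {n: registry[n]["compute"](entity) for n in names}

    define_feature("num_connections", "int", lambda m: m["connections"])
    define_feature("profile_completeness", "float", lambda m: m["filled"] / m["total"])

    member = {"connections": 512, "filled": 9, "total": 10}
    print(get_features(["num_connections", "profile_completeness"], member))
    # -> {'num_connections': 512, 'profile_completeness': 0.9}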


[00:23:08.550] - Simba Khadder

Yeah, I love that. What I've found is that when you come up with a good design, and that's a great example of this, all these use cases that you didn't think about start to work naturally. Like, oh, I never thought of that, but yeah, it just works by design.


[00:23:25.810] - Simba Khadder

The design is clean enough that the way you're logically thinking about, let's say, a feature is so well encapsulated by the API. I'll give you an example. One thing that came up quickly for us was access control, governance, all this stuff. Having that be a first-class part of a feature, you can set not just table-wise or row-wise governance or whatever; you can actually set feature-wise governance. That's possible now.
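
For instance, once features are named entities, a governance layer can attach rules to the names themselves. The Python sketch below is purely illustrative, a hypothetical ACL check, not any product's actual policy model.

    feature_acl = {
        "salary_band": {"trust_team"},
        "num_connections": {"trust_team", "growth_team"},
    }

    def read_feature(name, team):
        # Access is decided per feature name, not per table or row.
        if team not in feature_acl.get(name, set()):
            raise PermissionError(f"{team} may not read {name}")
        return f"<value of {name}>"

    print(read_feature("num_connections", "growth_team"))  # allowed
    # read_feature("salary_band", "growth_team")           # raises PermissionError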


[00:23:54.100] - Simba Khadder

Now, to be honest, we built this at... At our last company, we weren't exactly an enterprise that had the kinds of requirements a bank would need. But because the abstraction was so clean, it was very easy to layer all these other things on top. Then there's one that's come up recently that I'll share the same way.


[00:24:12.000] - Simba Khadder

We have the separation of the application layer, the feature store application, and then the infrastructure that people bring. One thing that's come up is, hey, you can actually have different clusters. Like, you have a Spark cluster in the US and a Spark cluster in Europe, or, hey, we use Cassandra here and Redis there.


[00:24:27.240] - Simba Khadder

That's not something that, when I was designing all of this stuff, I thought, oh, this is a use case that's going to come up all the time, we should definitely design for it. It just worked out that way by design. I think there's a lot to be said for that, and I think it's a thing a lot of people miss, because it's so much easier, every single time a use case comes up, to say, oh, we can build that.


[00:24:46.220] - Simba Khadder

You build it, and you build it, and you build it, and you end up with this really opinionated, deeply bolted-together platform that works if you use it exactly as designed, but doesn't really have any flexibility. I think the good thing, like CICD, going back to DevOps, is that some of the best tools there, Terraform is a good example, are pretty generic.


[00:25:08.440] - Simba Khadder

You can almost abstract it into something generic: it's a config file that creates metadata, and there's a coordinator that makes it so. That's actually a description of a feature store, in a funny way. But that concept, the abstraction, the style they took, the architecture, it worked so well.


[00:25:25.690] - Simba Khadder

If you're a bank, you can use Terraform. If you're working on a small project on an AWS cluster, you can use Terraform. Anyway, I think there's just so much to be said here, and I think that people don't start with the API. They think about functionality, and I think that's a huge mistake.
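
That "config file creates metadata, a coordinator makes it so" pattern can be sketched as a tiny reconcile loop in Python. This is an assumption-laden illustration of the idea, not Terraform's or any feature store's actual engine, and the store names are invented.

    desired = {  # parsed from a declarative config file
        "views_count": {"store": "cassandra-eu"},
        "login_count": {"store": "redis-us"},
    }
    actual = {"views_count": {"store": "cassandra-eu"}}  # current world state

    def reconcile(desired, actual):
        # The coordinator diffs desired vs. actual and applies the difference,
        # so the same config works against whatever infrastructure is plugged in.
        for name, spec in desired.items():
            if actual.get(name) != spec:
                print(f"materializing {name} into {spec['store']}")
                actual[name] = spec

    reconcile(desired, actual)  # -> materializing login_count into redis-us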


[00:25:39.710] - David Stein

Yeah, clean concepts and APIs are really essential. I like your story; that totally makes sense, that having a registry of named features gives you a foundation on which to add things like you mentioned, governance support and things like that. That would be very difficult to implement in an ad hoc way if you didn't have the benefit of this named abstraction.


[00:26:02.500] - David Stein

That should be a tweet or something: names are really powerful. Having a system of naming things is the foundation for being able to organize them in a meaningful way and do things with them. Names, symbols, identifiers, they really matter. They're very important things.


[00:26:17.860] - David Stein

When I think about feature stores and Feathr, I tell people sometimes, you mentioned Terraform, but I also think there's an analogy to how package management works for software builds. Being able to define the dependencies of your module on whatever packages and whatever versions are required.


[00:26:35.220] - David Stein

That is a foundational piece of being able to do software engineering, and it relies on an idea that we all take for granted, which is that these artifacts actually have names and exist on their own as entities. Someone needed to invent that. Going back to computer engineering classes, having symbol tables and linkers and loaders and libraries with names, I should learn more about the history of computer engineering.


[00:26:59.400] - David Stein

I'd love to read about the folks who actually invented the library abstraction for compilers and linkers and things like that. Because it's not totally obvious, especially if you know how machine code and things like that work at low levels, at the level people were dealing with decades ago.


[00:27:13.260] - David Stein

The idea that these modules should exist as named entities called libraries that have contracts is, I assume, based on what I understand, revolutionary, because it lets you actually manage these relationships and link things together without needing to do it manually.


[00:27:29.290] - David Stein

I try to imagine what it might have been like to work in software engineering 40 years ago. Two laboratories on the same street probably had different architectures, different ways of writing code. It would have been impossible for them to share a library. That is how I have seen a lot of machine learning development over the past several years.


[00:27:45.280] - David Stein

If you have totally different systems, and you don't have a common set of names for things like features, or a registry for features and models, you cannot collaborate. It's hard to even collaborate with yourself; what you did last year may be really opaque now. Inter-team collaboration and cross-org collaboration rely on named concepts that are well organized in a registry. It's a foundational piece.


[00:28:07.380] - Simba Khadder

I love that. I worked on an astrophysics project back in school and had to learn some Fortran. I can tell you, it was not pretty. We take a lot of things for granted nowadays, and it's funny that we're always learning. We always look back and say, man, that was really silly.


[00:28:23.560] - Simba Khadder

I can't believe that's how we used to do things. It's so obvious we should have had Python back then. But back then, Fortran was a step up from writing assembly directly. There's just so much here. I want to make sure that we talk about Feathr. Obviously, you open-sourced Feathr; it's a feature store that LinkedIn built and uses, as listeners have probably gathered by now. But tell us about Feathr.


[00:28:46.650] - David Stein

Yes. Feathr is LinkedIn's feature store solution. We have a blog post about it describing the problems we're solving, and I can briefly give the quick explanation. A lot of it has probably already come through in some of the things I've been saying.


[00:29:01.120] - David Stein

But before Feathr, before the feature store and feature registry abstractions, we saw a pattern on many different teams where feature preparation was the most complicated part of the workflow for developing a model. You have huge workflows that get more complicated over time. As more researchers and interns join a project and add pieces, people look for, where can I insert a little bit of extra data here, how can I add another facet there?


[00:29:31.660] - David Stein

These systems are hard to keep organized over time without a common named registry for things. So, in order to save these different applications from having to do all the management of feature preparation themselves, we replaced the need to do most of that with a common abstraction for defining features and then getting the features by their names.


[00:29:57.370] - David Stein

We deployed this. There was a talk about it a few years ago, I think it was 2018 or 2019, at the ML Platforms meetup. I think there's a link to the slides for that talk on our blog post, but the project had a different name at that time, which was Frame.


[00:30:12.560] - David Stein

We've built this thing, deployed it for many of the LinkedIn machine learning use cases, and optimized various parts of it over time for various large workflows. We have seen that there's been a big interest in feature stores and feature engines in the industry. We thought this would be a cool thing to put out there in terms of showing the great work that we do at LinkedIn and potentially collaborating with the community.


[00:30:40.450] - David Stein

We have been able to get some exciting traction there. We have some colleagues at Microsoft on the Azure team who are working with us. They also wrote a blog post about this collaboration, where we have made Feathr Azure-native, with very nice getting-started guides on how to start playing around with Feathr on Azure.


[00:31:02.440] - David Stein

If you search for Feathr, spelled F-E-A-T-H-R, with no second E in the word, you can find our GitHub page and get started with that stuff. We definitely welcome any input from the community. It's been exciting to see what folks have said so far. These are exciting days for this project.


[00:31:22.750] - Simba khadder

Where's Feathr headed? I guess it could be like the next big thing you say about Feathr. Could this become the long term vision for it?


[00:31:30.070] - David Stein

This could be a huge topic. The things that I most want to see solved, well, it's like what you mentioned, going from Fortran to Python. Seeing how far we've come, but there's still room to grow in terms of making the tools that we use even better and clearer, solving the next round of issues and finding the next round of opportunities to make these things better.


[00:31:52.900] - David Stein

I think that now we have these foundations: the named feature registry, the ability to define features based on raw sources, and the ability to do it in a collaborative way. There are other pieces that we have some support for, but we're going to need to enrich the support for them, especially around making it easy for any use case that involves a highly dynamic or real-time feature.


[00:32:19.460] - David Stein

This is something that I watch, and I look at various solutions that support this to varying degrees. Filling in the support for real-time signals, I think, is going to be really important in the industry. Having some level of support is where a lot of tools are at today.


[00:32:36.950] - David Stein

But where I see feature stores going, and a direction I think Feathr is going to go, is solidifying that support so that there's a very easy experimental workflow a person can follow to try a real-time feature tracking something. To be able to get that ultra-fresh signal seamlessly, easily computed going back, and then instantly available with near-zero latency.


[00:32:59.620] - David Stein

That is supported, but not as well as it will be, in terms of covering a variety of cases. I'm going to be focusing on that area and making sure that we hit it really well, because I think it's going to be the thing that everybody's going to want to be doing in a few years.


[00:33:16.390] - David Stein

Everyone's going to want real-time signals for all their models, because why would you want stale signals? Every signal that you're producing, you might as well be able to get it in a really real-time way, and once the feature abstractions make that easy to do, everyone will do it.
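
One hedged sketch of what that real-time path tends to look like: streaming events update a windowed aggregate that is pushed to an online store, so inference reads a fresh value by feature name. Every component below is a stand-in, not Feathr's implementation.

    import time
    from collections import deque

    WINDOW_SECS = 60
    events = deque()   # (timestamp, value) pairs arriving from a stream
    online_store = {}  # feature name -> freshest value, read at inference time

    def on_event(value, now=None):
        now = time.time() if now is None else now
        events.append((now, value))
        while events and events[0][0] < now - WINDOW_SECS:
            events.popleft()                           # expire stale events
        online_store["clicks_last_60s"] = len(events)  # refresh the feature

    def get_online_feature(name):
        return online_store.get(name, 0)               # near-instant lookup

    on_event(1)
    on_event(1)
    print(get_online_feature("clicks_last_60s"))       # -> 2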


[00:33:30.190] - Simba Khadder

I love that. That's awesome. I think there's almost a hierarchy of needs for MLOps, and for feature stores in particular. I think a lot of people learn this too, because it differs by company; I'm sure LinkedIn was much more focused on the real-time piece first, and all the naming and versioning and that stuff came along in parallel.


[00:33:47.170] - Simba Khadder

For a lot of companies I talk to, step one is, can we actually have a feature exist as an abstraction? Step two is, yeah, make that feature real-time, especially for a case where it's as easy to do as batch; that would be a huge plus. I love that that's the direction you're going and that the space is moving in, in general. I think we could keep talking all day, but I do want to be sensitive of your time. We will have to bring you back on one of these days.


[00:34:13.540] - David Stein

Yeah. Thanks a lot, Simba. It's been my pleasure to be here talking to you. This has been a great conversation. I'd be happy to talk more. These are exciting times for this work, making it easy for people to use whatever data is available in order to improve the quality of their products using machine learning.


[00:34:29.630] - David Stein

It's a really good area to be in. Happy to talk further, and hopefully we'll get some other folks who are interested in what I've been talking about. I'm happy to talk to folks about these things.
