MLOps Weekly Podcast

Episode 10
Vector Embeddings and AI-First Databases with Bob van Luijt
Co-Founder and CEO, SeMI Technologies


August 30, 2022


Transcript:

[00:00:03.120] - Simba Khadder

Hey, I'm Simba Khadder, and you're listening to the MLOps Weekly podcast. Today, I'm chatting with Bob van Luijt, the CEO of SeMI Technologies, the creators of Weaviate. Bob spent the majority of his career working on open source, and now he's fully focused on building an open source vector database. Bob, it's so great to have you on the show today.


[00:00:23.890] - Bob van Luijt

Thanks for having me, Simba. It's great to be here.


[00:00:26.690] - Simba Khadder

I gave a little intro on you, but I'd love to hear your journey. How did you get into MLOps?


[00:00:31.730] - Bob van Luijt

Yeah. It depends on how far you want to go back. I've been working in software for a very long time, so the story that I always like to tell is that I'm an early millennial. I grew up with the Internet. When I was born, it wasn't really there yet, but when I was about 10 years old, we got it in the home.


[00:00:54.860] - Bob van Luijt

At some point, when I was in high school actually, and this is just the standard story, I made a few websites, those kinds of things. Somebody said, "Hey, I need a website." Then you can charge some money for that. That is how it started and that is how my career basically grew. Now, fast forward a few years, I was very intrigued by everything that was happening from an NLP perspective. It was Word2Vec and a little bit later fastText, but before transformers.


[00:01:28.690] - Bob van Luijt

It was that time in between. I looked at it really from a software perspective, or dare I say with a hacker mindset, so I was like, "How can I use this? How can I solve problems with this?" One of the problems I was struggling with was: how do I deal with data objects that are similar in nature, but don't describe things in the same way?


[00:01:53.990] - Bob van Luijt

I was back then actually working on an IoT-related project where I got data in from different vendors that talked about the same stuff; it was similar from a semantic perspective, but it was described differently. I was playing around with calculating centroids back then, from just individual word vectors. Out of that, the idea was born: "Hey, wait a second, maybe we can actually store data based on these vectors."


[00:02:22.670] - Bob van Luijt

Actually, in the early days, those who go back to the Weaviate repo and look at its history can see that we tried to do that in other databases. Then I met my co-founder, Etienne, and Etienne was like, "Wait a second. I actually think we need a database that is really optimised for doing this well."


[00:02:45.540] - Bob van Luijt

Specifically for vectors, and maybe later in the conversation we can talk about why that is the case. I started to become more interested in everything that surrounds it, maybe a little bit more from a product perspective: go-to-market, marketing, those things.


[00:03:02.290] - Bob van Luijt

With a small team, we started to work on Weaviate, and that's actually how it was born. We came from this software engineering background, believing that this whole, depending on what you call it, ML-first or AI-first approach to solving problems with software was going to be the next big thing. That's how we ended up in this space. It's not a data science background, but really a software background. I guess that's the almost-short answer to your question.


[00:03:36.280] - Simba Khadder

I want to jump in to talk about vectors and embeddings. I remember when I first learned about Word2Vec and used it for the first time, I really dug in. My background being in recommender systems, I remember creating user embeddings, item embeddings, different types of embeddings, and just finding these techniques magical. But I think a lot of people, even in data science, maybe don't fully understand what an embedding, or a vector as we're calling it, is. How would you define that?


[00:04:09.400] - Bob van Luijt

There's one word that resonates very well with me, and that is the word you used: magical. I sometimes use the exact same word, because there's something innately beautiful about working with embeddings, in my opinion. There's this famous example with these word embeddings where you do that calculation: I believe it goes like king minus man plus woman is queen, or something. That's the example that they give.


[00:04:40.530] - Simba Khadder

It does analogies. It's able to take analogies and fit them into spatial coordinates. I remember we had one. The famous one is king is to man as queen is to woman; there's also Paris is to France as DC is to the US. It does a lot of these really interesting things. Actually, I've been able to see it do that myself for really interesting analogies. One analogy we built once was Diet Coke is to Coke as Coca-Cola Zero is to Cherry Coke. I saw that and I was like, "Oh, that makes sense. But I can't explain why."
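
As a side note for readers who want to try this themselves, here is a minimal sketch of that analogy arithmetic, assuming the gensim library and one of its downloadable GloVe vector sets; the model name and the exact neighbours returned are illustrative, not from the conversation.

```python
# Minimal sketch: word-vector analogy arithmetic ("king - man + woman ~ queen").
# Assumes the gensim package and its downloadable "glove-wiki-gigaword-100" vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained word vectors (~130 MB download)

# Start at "king", subtract the "man" direction, add the "woman" direction,
# then ask for the nearest words in the embedding space.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "queen" typically appears at or near the top of the list.
```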


[00:05:25.360] - Bob van Luijt

That, exactly. What you just described, that was something I hadn't had for a long time in working with software. I remember the first time I had it, when I was 10 years old or something. That's also almost like the cliche story. You have an old machine; we had QBasic. The first time, you have it ask for input: what is your name? Then I typed Bob and it said, "Hi, Bob."


[00:05:54.140] - Bob van Luijt

I was like, "Whoa." I felt like that 10-year-old kid again when I started to work with these embeddings. Because I didn't come from the data science background, I had to reverse engineer in my mind how these things work and where they're coming from, and learn what they could do not only for NLP, but also for images and just data in general.


[00:06:17.930] - Bob van Luijt

When I got that understanding, I was like, "Wow, I really believe that this is an amazing opportunity." Because back then I was very much interested in the semantic web, and there was just one thing, I wouldn't say annoying is the word, but one problem that kept popping up, and that was that people couldn't agree on how to call stuff. I was like, "Oh, these embeddings, they're going to help us solve that problem." That was how I looked at it back then.


[00:06:56.570] - Bob van Luijt

Of course, now that is more sophisticated and that grew further. But it was just so clear to me that placing stuff, whatever data you have, regardless of whether it's NLP-related, images, whatever, in a vector space, and then having a search engine that is good at helping you navigate, re-rank, reorganise, fetch, whatever, from that vector space, I think that's the future. That's how I look at it. That is why it just grabbed my attention.


[00:07:30.250] - Simba Khadder

Totally. Yeah, it makes a lot of sense. When I talk about embeddings with people who don't know what an embedding is, one thing I try to do is help them imagine: "Hey, think of a movie, or think of your buying behavior on Amazon, or wherever you buy things." You can't describe it in a sentence or a word.


[00:07:53.840] - Simba Khadder

Typically it's like, "Yeah, sometimes I buy a lot of clothes, but sometimes I buy stuff for my laptop, or I buy a speaker." There's just a lot of noise, a lot of randomness. One way is to just call it randomness, call it noise, and reduce it to something simple like whether this person buys clothes. But embeddings, I think, are so magical because they're able to capture the whole of the person and the semantics: who is this person? How do they do things? How do they think about this? And somehow put that into a vector.


[00:08:31.940] - Simba Khadder

The vector alone is useless. But the vector in a space of other vectors is where it becomes interesting, because the analogies and the similarities come up. The other thing is you can start to train models with embeddings. Like multimodal, we used to do a ton of that, where it's like, "Hey, take the user embedding, plug it into a ranking model, and just use the user embedding as opposed to using the user ID or whatever else you were going to do."


[00:08:56.870] - Simba Khadder

I've seen a lot of big companies actually create embedding teams to try to centralise embeddings and actually use embeddings. Same embeddings, like have one user embedding or a handful of user embeddings that they use across the organisation just to pepper onto models that use user data and see if it makes it better.


[00:09:13.970] - Bob van Luijt

No, actually what you're saying is super interesting. If I may, there are two things that I would like to add to that. I really like the analogy that you're using. The metaphor that I often use when I explain them is a supermarket. If you have a shopping list and the shopping list says I need apples, bananas, and washing powder, then intuitively you know, once you've found the apples in that space, the three-dimensional space that is a supermarket...


[00:09:41.040] - Bob van Luijt

You know intuitively that bananas are going to be closer by than the washing powder. The more you move towards the washing powder, the further you would move away from the food section. Basically, one, we're doing that in a digital space. Secondly, because it's digital, we don't have to limit ourselves to three dimensions.


[00:09:58.660] - Bob van Luijt

We can give more context in these hyper-dimensional spaces. And the last thing is, we have these models that help us learn these representations and place them in space. The second thing that I wanted to say, which I find very interesting and which I think we will also be seeing, is that right now we reason a lot through the model.


[00:10:20.450] - Bob van Luijt

We say, "Okay, we have a piece of data. We run it, we do inference on a model, and we store these vector representations. We want to do similarity search through them, et cetera. Done." That's definitely going to stay. But I think we're actually going to see a lot of algorithms and computations on the vectors in the database.


[00:10:41.440] - Bob van Luijt

For example, calculating real-time centroids to bias certain search results, those kinds of things; something that you can't per se do with the model, but that you can do in the database. I really believe that we're going to move to this world where people just start to play around with, manipulate, and organise the data from the perspective of the embedding. It may start from inference on a model, but then it can just live a life of its own.
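
A rough illustration of the in-database computation Bob describes: compute a real-time centroid over a user's recent embeddings and re-rank candidates by similarity to it. All the data below is synthetic and the dimensions are arbitrary.

```python
# Sketch: bias/re-rank candidate results by their similarity to a real-time centroid.
# The embeddings are random stand-ins; in practice they would come from the database.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.normal(size=(1000, 384))        # embeddings of candidate search results
recent = rng.normal(size=(5, 384))               # embeddings the user just interacted with

centroid = recent.mean(axis=0)                   # real-time centroid of recent interactions
centroid /= np.linalg.norm(centroid)

# Cosine similarity of every candidate to the centroid, best match first.
cand_norm = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
reranked = np.argsort(-(cand_norm @ centroid))
print(reranked[:10])                             # indices of the ten most "on-topic" candidates
```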


[00:11:09.780] - Simba Khadder

It's a really interesting point. It's funny, because it's actually almost exactly what we did at my last company. What we did was we created user embeddings. We'd cluster them, and we called those clusters personas. We had many millions of users, and we were trying to figure out why users would subscribe. If you subscribe to the New York Times, why do you subscribe to the New York Times?


[00:11:32.930] - Simba Khadder

If you look at individual users, it's not really useful. You can look at all the users at once, but that doesn't work either because there's just so many different types of users and there's not really one thing that's true across them, typically. There are some, but it's not enough. The trick was, well, we need a better microscope.


[00:11:50.130] - Simba Khadder

We would just cluster embeddings, create all these personas, and then learn off of the personas. That was where our breakthroughs came from. I can totally see that happening. I think the future of most machine learning is going to be multi-modal and embeddings-based.
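
A hedged sketch of that persona idea: cluster user embeddings with k-means and treat each cluster center as a persona. The embedding matrix, dimensionality, and cluster count below are placeholders.

```python
# Sketch: cluster user embeddings into "personas" and use the clusters downstream.
# user_embeddings is synthetic here; in practice it comes from your embedding pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
user_embeddings = rng.normal(size=(10_000, 128))   # one embedding per user

kmeans = KMeans(n_clusters=20, n_init=10, random_state=42).fit(user_embeddings)

personas = kmeans.cluster_centers_                 # each center acts as a "persona" vector
assignments = kmeans.labels_                       # which persona each user belongs to

# Downstream you can analyse or train models per persona, or feed the persona
# vector into other models as a feature.
print(np.bincount(assignments))                    # rough size of each persona
```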


[00:12:07.410] - Bob van Luijt

I 100% agree with that. It's so interesting that you bring this up, because that use case you're describing, and similar use cases, is really where we see the power of the combination of the models and, in this case, the vector search engine that we're working on; that's where it shines. On our website and in our documentation, every example has a real-life demo, which is based on news articles.


[00:12:35.290] - Bob van Luijt

We have one example in the documentation where you can say, show me articles about travelling. That's it, just show me articles about travelling. But what you now can do is say, "Well, the previous article that I read was about Japan." Then you say, "Okay, I do inference on the model on the query for travelling."


[00:12:54.480] - Bob van Luijt

I get an embedding back. Then I take the embedding from the database for this article about Japan, and I bias the search results in vector space towards that article. What happens is that now news articles about travelling in Japan, or to other countries in Asia, pop up first. Doing that in real time for users at a large scale, that's basically the problem we solve. I really believe that that's going to be the future of search.
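
A small sketch of that biasing step, assuming the query embedding comes from model inference and the article embedding is fetched from the database; the blend weight and data are made up for illustration.

```python
# Sketch: bias a query embedding towards a stored article embedding before the similarity search.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
query_vec = unit(rng.normal(size=384))             # embedding of the query "travelling"
japan_article = unit(rng.normal(size=384))         # embedding of the article the user just read

alpha = 0.7                                        # how strongly the original query dominates (arbitrary)
biased_query = unit(alpha * query_vec + (1 - alpha) * japan_article)

# The biased vector is then used for nearest-neighbour search over all article embeddings.
articles = rng.normal(size=(5000, 384))
articles /= np.linalg.norm(articles, axis=1, keepdims=True)
top_10 = np.argsort(-(articles @ biased_query))[:10]
print(top_10)
```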


[00:13:25.320] - Simba Khadder

I want to jump into vector databases and all that. But before we jump off the embeddings topic, we went over it at a high level, and there's so much there. If someone's listening and they're like, "Oh, this sounds super cool, how do I learn more?", what would you recommend they do to learn about embeddings?


[00:13:41.950] - Bob van Luijt

That depends a little bit on what your background is. If your background is in the data science space and you're coming out of that, then I can't really speak to that. But if you have my background, so you come more from the software engineering space, I would highly recommend just starting to play around with these embeddings. Just learn by doing.


[00:14:03.640] - Bob van Luijt

Regardless of whether you use a service and get a bunch of embeddings from OpenAI, or you use your own model from Hugging Face, just load them in your language of choice so you have an array of these embeddings based on a sentence or an image. Start to play around with them, start to compare them like we just described, with single words, with sentences, those kinds of things. Then just get that feeling of how it works, because it's different from traditional keyword-based search.


[00:14:34.030] - Bob van Luijt

That is what I would suggest, because you don't have to train these models anymore. You can just get a bunch of them off the shelf. I would argue: go to Hugging Face, go to OpenAI, go to Cohere, wherever you want to get your embeddings from. Load a bunch of them into a little piece of software and start to work with them, start to play around with them, because it's just distance calculations that you need to do, and within 10, 20 minutes you're up and running. You can experience that same magic that we just described.
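
In that spirit, here is a minimal sketch of the 10-to-20-minute version, using the sentence-transformers package from the Hugging Face ecosystem; the model name is one commonly used small model and the sentences are just examples.

```python
# Minimal sketch: grab off-the-shelf sentence embeddings and compare them with cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")    # small, widely used embedding model

sentences = [
    "A database optimised for vector search",
    "A search engine built around embeddings",
    "A recipe for banana bread",
]
embeddings = model.encode(sentences)               # one vector per sentence

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings[0], embeddings[1]))        # high: both sentences are about vector search
print(cosine(embeddings[0], embeddings[2]))        # low: unrelated topics
```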


[00:15:06.740] - Simba Khadder

Yeah, that's awesome. Actually, your analogy of the grocery store reminds me of one of my first projects that dealt with embeddings. It would let you put in two songs, and it would try to find a song in the middle of those two songs. It was really funny, because if you put in songs that are similar, you would find that it was cool, that it would be a good recommender. But sometimes it would do some really weird things, like, "Oh, there's a pop song and a metal song. What will we hear?" It wasn't that accurate, but it was a really fun project.


[00:15:37.240] - Bob van Luijt

Yeah. Actually, if I may add one more thing. Of course, a lot of this stuff is born in research. If you dive into the research first, you get into this world of, you need, I don't know, re-rankers, or you get into certain model benchmarks, or you get into fine-tuning, and then you're not experiencing that magic first. The example you just gave, I love that. Build a simple music recommendation tool or something like that, because if you do that, you're experiencing the beauty of it. Then you can go into the research and learn more about it. But I would highly recommend putting on your hacker hat and just starting to build.


[00:16:22.270] - Simba Khadder

Totally, yeah. Getting that magic feeling is going to let you make it through the research because it can get pretty heavy.


[00:16:28.870] - Bob van Luijt

Yeah.


[00:16:29.920] - Simba Khadder

Well, let's talk about vector databases. We talked about embeddings, and embeddings are great. Definitely similarity search, which obviously is a key component of it. But I guess maybe the first question is just: what is a vector database?


[00:16:40.800] - Bob van Luijt

Sure. First, let's look at the second word: database. A database comes in many flavors. You have databases that are good at doing time series, you have databases that are good at doing graphs, making graph relations and those kinds of things, and you have databases that are good at search. We focus on search. Weaviate is a database of the search engine flavor. Then we specifically focus on working with vector embeddings, so we call it a vector search engine.


[00:17:13.260] - Bob van Luijt

Weaviate is definitely a database, but we position it as a vector search engine because that's very descriptive of what it does. I think that, for example, feature stores are also a form of database; you could argue that they are also a form of vector database. That is why I very much like the term feature store, because it's very descriptive of what it does. Long answer to your question, but a vector database is often actually a search engine focusing on vectors first. It's a vector search engine.


[00:17:49.280] - Simba Khadder

I know the answer to this, but just for those listening. When I was working with embeddings, it was before any vector database existed, at least to my knowledge. We used to use Faiss, Faiss being Facebook's ANN index, and Annoy, Spotify's. What's the difference? When do you go for a database, and when do you use a library? Is there a difference?


[00:18:11.910] - Bob van Luijt

That's a great question, actually. A database has a specific definition. One thing that you want to have for a database is CRUD support: create, read, update, delete. Not all of these tools have that; they're often called ANN libraries. They're not databases, they're libraries.


[00:18:29.740] - Bob van Luijt

Sometimes these libraries get their optimisation, for example, from being built as an immutable index. Now you have a problem if you have real-time data. Think, for example, of a website with reviews: if you want to add a review or delete a review, you have to rebuild the whole index, which might be problematic.


[00:18:49.520] - Bob van Luijt

Another way to look at it is that you want to scale these databases. You want to have horizontal scalability. You want to have replication. You want to guarantee uptime. If they go down, you want them to restore themselves, those kinds of things.


[00:19:04.600] - Bob van Luijt

I sometimes say you could compare it with how Lucene sits at the heart of something like Solr or Elasticsearch, but now from the perspective of ANN. One thing that we've learned, and not only us, others in the space learned it too, is that you probably don't want to take an ANN library, because then you can't solve things like production readiness and scalability down the line.


[00:19:30.590] - Bob van Luijt

You want to take an ANN algorithm, because certain ANN algorithms are very well suited to building ANN databases with. That is the big difference: if you need a database that can run in production and do database-like operations, then you want to go for a vector database. If you have a cool research project or whatever and you just want to quickly load and embed something in your Python script and get it up and running, by all means use a great ANN library.
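
For that quick-research-project case, here is a hedged sketch using Faiss as the ANN library; the vectors are random and the index is the simplest exact one, so this shows the library workflow rather than anything Weaviate-specific.

```python
# Sketch: a quick in-process vector index with the faiss library.
# Note what it does not give you: CRUD semantics, replication, persistence, filtering.
import numpy as np
import faiss

d = 128                                                # embedding dimensionality
rng = np.random.default_rng(0)
xb = rng.normal(size=(10_000, d)).astype("float32")    # vectors to index
xq = rng.normal(size=(5, d)).astype("float32")         # query vectors

index = faiss.IndexFlatL2(d)                           # exact L2 index; ANN variants (HNSW, IVF) also exist
index.add(xb)

distances, neighbours = index.search(xq, 10)           # top-10 nearest vectors per query
print(neighbours[0])
```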


[00:20:01.250] - Simba Khadder

Yeah, the way I thought about it when our project bumped up against this was: it's an index versus a database. If I give you a B-tree, that's not Postgres. Yeah, exactly. That's how I think of it.


[00:20:19.420] - Bob van Luijt

If I may add something to that as well, and this is really our vision of where this is going, because everybody in the space solves problems in a different way. We have learned that purely doing vector search is a great starting point for solving a use case, but by no means the end of the use case.


[00:20:41.910] - Bob van Luijt

For example, mixing vector search with keyword search or BM25 search, or helping re-rank these search results coming from vector space; we believe that those are operations that should be part of the database. That is how we build Weaviate. Weaviate doesn't simply build a vector index.


[00:21:02.200] - Bob van Luijt

That's just one of the things it does. It also works with inverted indexes and mixes them together. It makes sure that you can do regular pre- and post-filtering. One of the big tricks here, and this is important for people to know, is that the first iterations of this with more traditional databases were such that you could only do vector search as a post-filter. You first had to get your candidates in a traditional way from the database, and then apply vector search in a post-filter.


[00:21:31.520] - Bob van Luijt

The problem was, how do you know what you need to get from vector space? Because you don't know that. All these things combined are solved inside the database, and then wrapped in a great UX for how people can add and retrieve data.
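
A rough, self-contained sketch of that filtering trade-off, using plain numpy and a synthetic brand attribute per document: post-filtering applies the structured filter after the vector search and can leave you with too few results, while pre-filtering builds an allow-list first and searches only within it.

```python
# Conceptual sketch: post-filtering vs pre-filtering around a vector search.
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 64))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
brands = rng.choice(["adidas", "nike", "puma"], size=1000)
query = rng.normal(size=64)
query /= np.linalg.norm(query)

def post_filtered(k=10, brand="adidas"):
    # Vector search first, structured filter afterwards:
    # you don't know how many of the k candidates survive the filter.
    top = np.argsort(-(docs @ query))[:k]
    return [i for i in top if brands[i] == brand]

def pre_filtered(k=10, brand="adidas"):
    # Structured filter first (an allow-list, e.g. from an inverted index),
    # then vector search restricted to it: always up to k valid results.
    allowed = np.flatnonzero(brands == brand)
    order = np.argsort(-(docs[allowed] @ query))[:k]
    return allowed[order].tolist()

print(len(post_filtered()), len(pre_filtered()))   # post-filtering typically returns fewer hits
```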


[00:21:46.660] - Bob van Luijt

That is where a lot of the value of the database comes from. Yes, these ANN libraries have been super important, because they were the first to implement these algorithms, but we really believe that the next step is the actual full-fledged database that comes with everything you would expect from a database, as opposed to just a library.


[00:22:09.860] - Simba Khadder

Could you go into that more? You talked a little bit about it, but when you say the next step is more database functionality, can you maybe even give us a story or something, like, "Oh, this is something that the database did that changed how people worked with embeddings"?


[00:22:25.010] - Bob van Luijt

Yeah, sure. Let me answer this based on an example, because I think that's easiest. Let's say that I have a database with products. I have a million products. I create vector embeddings for these products and I load them into the database. The first thing is that products change. You want to update a product, you want to remove products, add more products, those kinds of things. You want to do that in real time.


[00:22:51.210] - Bob van Luijt

This is the first thing where the database adds value: just in storing the data and having your traditional operations on the product data as well. Secondly, if we start to query, then initially you might say, "Okay, let's say I have a query that goes like, show me Adidas shoes for the summer. If I vectorise the query Adidas shoes for the summer and do my candidate selection in vector space, then there will probably also be Nike shoes in there for the summer, or there might be Pumas in there for the summer as well."


[00:23:22.750] - Bob van Luijt

Purely doing that vector search and presenting these results to the end users is great, because you now might also make matches where there are no keyword matches: these Adidas shoes are great during spring, even if you searched for the summer. But there also might be Nike shoes in there.


[00:23:39.900] - Bob van Luijt

Now, the second thing that you want from such a database is to be able to say, "Wait a second, I want to mix in other ways of filtering, like traditional where filters or BM25 filters. Adidas is actually very important, so I'm going to do a hybrid search. In the candidate selection, I'm going to filter out everything that says Nike, or I'm going to bias up everything that says Adidas." That is the second bit of added value you get from it.
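
A hedged sketch of that hybrid idea, mixing BM25 keyword scores with vector similarities; the rank_bm25 package is one easy way to get BM25 scores, the vector scores here are stand-ins, and the 50/50 weighting is arbitrary.

```python
# Sketch: hybrid search that blends BM25 keyword scores with vector similarity.
import numpy as np
from rank_bm25 import BM25Okapi

products = [
    "adidas running shoes lightweight summer",
    "nike running shoes breathable summer",
    "adidas winter boots waterproof",
    "puma sneakers casual summer",
]
bm25 = BM25Okapi([p.split() for p in products])

query = "adidas shoes for the summer"
keyword_scores = bm25.get_scores(query.split())

# Stand-in vector similarities; in practice these come from the vector index.
vector_scores = np.array([0.82, 0.79, 0.40, 0.65])

def minmax(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

alpha = 0.5                                        # keyword vs vector weighting (arbitrary)
hybrid = alpha * minmax(keyword_scores) + (1 - alpha) * minmax(vector_scores)

for idx in np.argsort(-hybrid):                    # brand matches ("adidas") float to the top
    print(round(float(hybrid[idx]), 3), products[idx])
```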


[00:24:06.570] - Bob van Luijt

Then thirdly, you might have certain real-time operations where you say, "Well, Simba, you might have a cart that is full of running gear and I might have a cart that is full of basketball gear. If we now both issue the exact same query, with the exact same vector, for Adidas shoes for the summer, and we get results from vector space and include, for example, our BM25 search engine, we might actually want to bias your results for Adidas shoes for the summer towards running gear and mine towards basketball gear."


[00:24:41.320] - Bob van Luijt

All these operations, people want to do in real time and at high scale. We see QPS well into the thousands with the users that we have. The fourth thing is that you need to operationalise all of this. All of that combined, and probably more in the future, is what comes together inside the database, and not in a library or a pure ANN algorithm.


[00:25:08.390] - Simba Khadder

I love that example. It's really interesting to see how... Yeah, the basic use of embeddings is one thing; I thought of it as similarity search. But the power of embeddings comes in when you start using them and combining them with other types of models or other types of algorithms.


[00:25:22.500] - Simba Khadder

You talked about how everything will keep growing in that direction. We talked about the past, we've talked a little bit about how it used to be just Elasticsearch, and then it became indices, and now there's the vector database boom. What's next? What does the future look like for the space in, let's say, a few years?


[00:25:41.620] - Bob van Luijt

There's one big assumption, and when I say the assumption, I think both of us are going to be very biased towards what the outcome of that assumption is. But there's one big assumption, and that assumption is that the world, developers, data scientists, even people working on digital products, everybody, is moving towards a place where we want to do stuff in what I like to call an AI-first way, which practically means we work with these embeddings and these models and these vectors.


[00:26:13.750] - Bob van Luijt

We need to have the tooling that helps us easily have a great [inaudible 00:26:19] and allows us to easily work with these tools. The first part of my answer is that I believe that future will be ML first. That's one thing. Then the second thing, and we talked about this earlier in the conversation, is that I think a lot of operations on the embeddings will be done without inference on any models.


[00:26:46.910] - Bob van Luijt

The example that I just gave, the e-commerce example: if I want to bias results for you towards running and for me towards basketball, that's an operation we can do without touching a model at all. Why is that important? It's important because it's fast. We can scale that. Now you can have real-time search and recommendations for big e-commerce stores just out of the box. That's one thing. Then secondly, I'm a big believer in innovation through engineering.


[00:27:17.890] - Bob van Luijt

One thing that we're seeing is that we hit certain, I wouldn't say limitations, but more like challenges: for really big data sets, these databases are still a little bit memory intensive, those kinds of things. We currently have a research team that's working on everything you would expect, working from disk and those things, optimised inference, and what we can do in the database there as well.


[00:27:40.260] - Bob van Luijt

We will also see that these databases will get significantly better and, I wouldn't say easier to operate, but cheaper to operate. At some point it just gets relatively cheap to have a high-quality vector search engine like Weaviate up and running, and I think we're going to see that soon; not in 10 years, but like a year from now.


[00:28:05.650] - Simba Khadder

One thing you've written about before is the AI-first database ecosystem. You just touched on it for a second, but could you elaborate? What does that mean? What does that look like?


[00:28:15.590] - Bob van Luijt

Yeah. That's an article I wrote. The reason I wrote it is because, probably like yourself, we often need to answer the question: where do you sit in this whole landscape? If you, for example, come up with a great idea to compete with, I don't know, Snowflake or any other data warehouse, or Cassandra, whatever.


[00:28:37.680] - Bob van Luijt

Then everybody instantly goes, "We know, you're a data warehouse. You do something more efficient or something faster, or a combination of these things." You don't have to explain the space. People go, "Okay, I get what a data warehouse does. Okay, great, good for you."


[00:28:54.110] - Bob van Luijt

In our case, that's a little bit different, because that space is only now starting to emerge. We need to have the right words and the right syntax and the right nomenclature to talk about that space. I wrote that article to describe how I see it. I basically divide the space into four sections. The first section is what I call the embedding providers.


[00:29:18.150] - Bob van Luijt

Those who make sure that we have these embeddings: that can be OpenAI, that can be Hugging Face, that can be Cohere. All these players that do that; it can be open source, it can be SaaS-based, it can be a combination of both, it doesn't matter.


[00:29:31.220] - Bob van Luijt

It's just those who make sure that we get the models and that we can do inference on those models as well. Then the second group that I identify is the companies that call themselves neural search frameworks. Those are the hammers and scissors to work with these models.


[00:29:50.960] - Bob van Luijt

If you maybe want to do fine-tuning, or if you want to have a specific pipeline to do something when you do inference, those kinds of things; those are the neural search frameworks. Then we have something which is the data warehouse equivalent: the feature stores. What if I want to store a bunch of them?


[00:30:08.870] - Bob van Luijt

What if I want to make them available in my company for people to use, so they don't have to train these models all by themselves, or whatever you do on a large scale with them? These are the feature stores. Then lastly, there is what I call the vector search engines, which are a flavor of database.


[00:30:23.740] - Bob van Luijt

But again, I prefer to call them search engines because it's very descriptive, and that's, for example, where we sit. We make sure that you can do search on the vectors. You index data, you can make it available in real time. Those are the four groups I've identified. Together, they make up this AI-first ecosystem. That's what I mean when I talk about the AI-first ecosystem.


[00:30:47.300] - Simba Khadder

Yeah, I love it. It's almost like a lot of MLOps and a lot of machine learning: when people think about MLOps, it's relatively simple, and embeddings don't come in. We both know that if you go to many companies, it's almost like it used to just be the most cutting-edge companies.


[00:31:02.600] - Simba Khadder

Nowadays, most companies do something with embeddings, and there almost needs to be this other pipeline for embeddings. Feature stores are a good example of something that hits both sides, and it creates that eventual combination where it all comes together, like you said, in the AI-first ecosystem. I really like that.


[00:31:22.590] - Simba Khadder

When I read that, I was like, "This is really a new way for me to think about it." We'll link that article for anyone who wants to dig in. Well, last thing: if you had to give a tweet-length takeaway, someone listening to this is like, "Man, I want to know what to tell my team about this in a tweet." What do you think that takeaway would be, or should be?


[00:31:42.370] - Bob van Luijt

I think this fits exactly in a tweet: go to weaviate.io, spin up the Docker container, give it a go, and experience that magic.


[00:31:48.650] - Simba Khadder

We'll have a lot of links for things for people to look at through this. Bob, thanks so much for hopping on and having this great conversation with me. It's always great to be able to chat with you about this stuff.


[00:31:57.090] - Bob van Luijt

Yeah. Again, thanks for having me and I love what you guys are working on. Just keep up the good work and let's build that ecosystem together.


[00:32:04.610] - Simba Khadder

Sounds great.
