How to survive with 1 billion vectors and not sell a kidney: our low-cost cluster [ukr]
Let's talk about our journey. We started the project with a small vector database of fewer than 2 million records. Then came a request for +100 million records, then another +100 million... and so, step by step, we reached almost 1 billion. Standard tools quickly ran out of steam: we hit limits on performance, index size, and our very modest hardware resources. After a long series of trials and errors, we built our own low-cost cluster, which today reliably serves thousands of queries against more than 1B vectors.
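The abstract doesn't reveal the cluster internals, but the usual way to push past single-node limits at this scale is scatter-gather sharding: partition the vectors across nodes, search each shard in parallel, and merge the per-shard top-k lists. Below is a minimal, runnable sketch of that pattern in Python; the brute-force distance scan stands in for a real ANN index, and all sizes and names are illustrative assumptions, not the cluster described in the talk.

```python
import heapq
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Toy setup: 4 "shards", each holding a slice of the full vector set.
# In a real cluster each shard would be a separate node with its own
# ANN index (HNSW/IVF); brute force here just keeps the sketch runnable.
rng = np.random.default_rng(0)
DIM, N_SHARDS = 64, 4
shards = [rng.standard_normal((100_000, DIM)).astype(np.float32)
          for _ in range(N_SHARDS)]

def search_shard(shard_id: int, query: np.ndarray, k: int):
    """Local top-k on one shard: returns (distance, shard_id, row) tuples."""
    data = shards[shard_id]
    dists = np.linalg.norm(data - query, axis=1)
    idx = np.argpartition(dists, k)[:k]  # k smallest, unordered
    return [(float(dists[i]), shard_id, int(i)) for i in idx]

def search_cluster(query: np.ndarray, k: int = 10):
    """Scatter-gather: query every shard in parallel, merge partial top-k.

    Each shard returns its own k best candidates, so keeping the k
    smallest distances out of the N_SHARDS * k partials yields the
    global top-k.
    """
    with ThreadPoolExecutor(max_workers=N_SHARDS) as pool:
        partials = pool.map(lambda s: search_shard(s, query, k),
                            range(N_SHARDS))
    return heapq.nsmallest(k, (hit for part in partials for hit in part))

print(search_cluster(rng.standard_normal(DIM).astype(np.float32)))
```

The merge step is exact with respect to whatever the shards return, which is why this layout scales out cleanly: adding nodes shrinks each shard's index instead of growing one monolithic one.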

Maksym Mova
MacPaw, Engineering Manager
- Over a decade of experience in backend development
- Proven commercial experience with the PHP, Python, and NodeJS stack
- Actively engaged in AI/ML projects for the past 3 years
- Passionate about tackling challenges and solving complex problems