Raj talks tech

Got my hands dirty with the Unsplash dataset and built a CLIP-based semantic image search.

This is what I did:
- Embed 5,000 Unsplash images with openai/clip-vit-large-patch14 and store the vectors in Pinecone (sketch 1 below)
- Run the CLIP model in a Docker container that exposes an API for converting text into a text embedding (sketch 2 below)
- Run a Next.js app that calls that API to turn the search query into a text embedding
- Use the text embedding to query Pinecone for the closest image embeddings (sketch 3 below)
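
The indexing step looks roughly like this. A minimal sketch, assuming the Hugging Face transformers CLIP checkpoint and the current Pinecone Python client; the index name and the local image folder are made up, not my exact script:

```python
# Sketch 1 — embed the images and upsert the vectors to Pinecone.
from pathlib import Path

import torch
from PIL import Image
from pinecone import Pinecone
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("unsplash-clip")  # hypothetical 768-dim index, cosine metric

def embed_image(path: str) -> list[float]:
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)  # shape (1, 768)
    features = features / features.norm(dim=-1, keepdim=True)  # unit-normalize for cosine search
    return features[0].tolist()

image_paths = sorted(str(p) for p in Path("unsplash-lite").glob("*.jpg"))  # hypothetical local copy
vectors = [(f"img-{i}", embed_image(p), {"path": p}) for i, p in enumerate(image_paths)]

# Upsert in batches so one failed request doesn't lose the whole run
for start in range(0, len(vectors), 100):
    index.upsert(vectors=vectors[start:start + 100])
```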
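
The Dockerized text-embedding service is essentially a thin wrapper around CLIP's get_text_features. Here's a sketch using FastAPI; the /embed route and payload shape are assumptions, not necessarily what I shipped:

```python
# Sketch 2 — text-embedding API served from the Docker image.
# Run inside the container with e.g.: uvicorn app:app --host 0.0.0.0 --port 8000
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import CLIPModel, CLIPTokenizer

app = FastAPI()
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

class Query(BaseModel):
    text: str

@app.post("/embed")
def embed(query: Query) -> dict:
    inputs = tokenizer(query.text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        features = model.get_text_features(**inputs)  # (1, 768), same space as the image vectors
    features = features / features.norm(dim=-1, keepdim=True)
    return {"embedding": features[0].tolist()}
```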
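
Search itself is then a single Pinecone query with that text embedding. The sketch below shows the path in Python for brevity; in the real app a Next.js route handler makes the equivalent calls. The API URL and index name follow the assumptions above:

```python
# Sketch 3 — turn a query into a text embedding and find the nearest images.
import requests
from pinecone import Pinecone

EMBED_API = "http://localhost:8000/embed"  # the Dockerized CLIP service from sketch 2

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("unsplash-clip")

def search(query: str, top_k: int = 10):
    # Text and image embeddings live in the same CLIP space, so cosine
    # similarity against the stored image vectors ranks the matches.
    embedding = requests.post(EMBED_API, json={"text": query}, timeout=10).json()["embedding"]
    results = index.query(vector=embedding, top_k=top_k, include_metadata=True)
    return [(m.id, m.score, m.metadata["path"]) for m in results.matches]

print(search("a dog running on a beach at sunset"))
```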

Works surprisingly well ✨

x.com/rajeshdavidbabu/status/1848104914570121644
