Using OpenAI’s CLIP model + Streamlit to deploy an image recognition app. But make it yummy. It distinguishes my favourite comfort food (hotdogs) from anything else.
The motivation behind this project is to apply one of the many AI tools we have nowadays and see it running in a nice web app with the least amount of code possible.
FYI
CLIP stands for Contrastive Language-Image Pretraining, and it’s more than just a mouthful of technical jargon—it’s the mustard on this hotdog: the secret sauce behind the app’s ability to understand the relationship between images and their corresponding descriptions.
In this case, it is simply prompted to decide whether an image is a hotdog or not 🌭🔥!
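The prompting idea above can be sketched in a few lines. This is a minimal zero-shot example using Hugging Face’s `transformers` implementation of CLIP — the model checkpoint, prompt wording, and helper names here are my assumptions for illustration, not necessarily the app’s actual code:

```python
# Two text prompts define the two "classes" -- no hotdog-specific training needed.
LABELS = ["a photo of a hotdog", "a photo of something that is not a hotdog"]

def pick_label(probs):
    """Turn the two class probabilities into a human-readable verdict."""
    return "Hotdog! 🌭" if probs[0] >= probs[1] else "Not a hotdog"

def classify(image):
    """Score a PIL image against the two prompts with CLIP (zero-shot)."""
    # Heavy dependencies are imported lazily so pick_label stays dependency-free.
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Encode the image and both prompts in one batch.
    inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        # logits_per_image holds one image-text similarity score per prompt.
        logits = model(**inputs).logits_per_image
    probs = logits.softmax(dim=1).squeeze().tolist()
    return pick_label(probs)
```

Calling `classify(Image.open("lunch.jpg"))` would return the verdict string. The nice part is that CLIP never saw a hotdog-labelled training set for this task: the two prompts alone define what counts as a hotdog.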
Try it out here.
Or run it locally using the instructions from my GitHub here.