It did a whole-ass analysis of how to identify snakes, only to read the image wrong
1 year ago | 218
Is GPT incapable of discerning the colors, and that's why it failed? Or is it incapable of following the simple rules it just stated? If it's the latter, OpenAI is so cooked.
1 year ago (edited) | 82
If you're gonna fail, make sure to do it in such a manner that people ask how it was even achieved in the first place.
1 year ago | 15
In 10 years, we will either be showing things like this as an example of why we should push through and not give up even when things seem really broken (if AI improves), or as an example of why throwing ever more computing power at a fundamentally broken approach doesn't make it not broken (if AI doesn't improve). Only time will tell.
1 year ago | 18
It's first and foremost a language model, with image recognition crudely bolted on as a sort of bonus feature. I'm not sure how they're doing it, but it doesn't seem fully fleshed out. I think it might just be getting a text description of the image produced by another model.
1 year ago | 21
It said the snake was non-venomous when, according to its own description, it actually was venomous.
1 year ago | 12
Space Kangaroo
Can ChatGPT tell if a snake is venomous? 🐍
1 year ago | [YT] | 303