HighlyRegardedArtist@lemmy.world to Lemmy Shitpost@lemmy.world • How to clean a rescued pigeon (11 days ago)

You can play with words all you like, but that's not going to change the fact that LLMs fail at reasoning. See this Wired article, for example.
I have to disagree with that. To quote the comment I replied to:
AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued
Where's the "turn of phrase" in this, lol? It could hardly read any more clearly: they assume this "AI" can "figure" stuff out, which is simply false for LLMs. I'm not trying to attack anyone here, but spreading misinformation is not ok.
Or, hear me out, there was NO figuring of any kind, just some magic LLM autocomplete bullshit. How hard is this to understand?
Ok, great to know. Nuance doesn't travel well over the internet, so your intention wasn't clear, given all the uninformed hype and grifters around AI. Being somewhat blunt helps get the intended point across better. ;)