This doesn’t mean LLMs aren’t useful; this is just a funny example, and the title is from the YT video.

  • brucethemoose@lemmy.world · 20 days ago

    This is proof of why OpenAI’s… opaqueness is so dangerous.

    Chat LLMs tend to treat everything like an exam question or essay prompt, as a direct consequence of how the base models are finetuned. The hand is like a pivot point in a physics problem. But more importantly:

    • The chat context is essentially their whole world, again due to the training format. So they tend to stubbornly adhere to what has already been said, and have no real means of self-correction.

    • While we have no idea what OpenAI actually does, in basically every other open model the vision component is trained separately from the pure text input. The point being, these models are alright at the very specific set of vision tasks they’re trained for, but the “coupling” of image input to the bulk of the LLM is very weak. The reasoning they can do over text does not carry over well.
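    To illustrate what that weak coupling looks like, here is a minimal numpy sketch of the common open-weight VLM pattern (LLaVA-style): a frozen vision encoder produces patch features, and a small learned projection maps them into the LLM’s token-embedding space, where they are simply spliced in front of the text tokens. All dimensions and names below are illustrative, not any specific model’s.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: a small vision encoder and LLM embedding width.
    VISION_DIM = 64    # output dim of the (frozen) vision encoder
    LLM_DIM = 128      # the LLM's token-embedding dim
    NUM_PATCHES = 16   # image patches emitted by the encoder

    # Stand-in for the frozen vision encoder's output for one image.
    patch_features = rng.standard_normal((NUM_PATCHES, VISION_DIM))

    # The "coupling" is often just this one learned projection; the encoder
    # upstream and the LLM downstream were trained on separate objectives.
    W_proj = rng.standard_normal((VISION_DIM, LLM_DIM)) / np.sqrt(VISION_DIM)
    image_tokens = patch_features @ W_proj  # shape (NUM_PATCHES, LLM_DIM)

    # Text tokens, as embedded by the LLM's own embedding table.
    text_tokens = rng.standard_normal((8, LLM_DIM))

    # What the LLM actually sees: image tokens spliced before the text.
    llm_input = np.concatenate([image_tokens, text_tokens], axis=0)
    print(llm_input.shape)  # (24, 128)
    ```

    The LLM has no deeper hook into the image than those projected tokens, which is one way to see why text-side reasoning transfers poorly to vision.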


    The point I’m trying to make is that Altman’s biggest lie is pitching ChatGPT as a general intelligence… It’s not. It’s a dumb, narrow tool, like a drill with specific changeable bits. But they package and market it like it’s “smart”, which is a big fat lie.

    Go to any of the smaller AI vendors/models (like Minimax, which released a new model today) and they do the opposite: they show specific uses in specific harnesses, and hyper-optimize for that.