• 3 Posts
  • 2.35K Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • My observation is that doctors are getting squeezed, and other staff even more so. They’re being pushed harder and harder for more and more productivity.

    A doctor in my family quit and retired early because (basically) his group got more corporate and burned him out. I heard of a dentist who quit over ethics issues once their group was acquired by private equity.

    Not that they aren’t well off, but I’d be careful about blaming working professionals like doctors and engineers so much.




  • “If I wish for three more wishes, you will grant them with no catches.”

    Keep doing this, over and over, trying different strategies. It’s a way to test and validate genie loopholes (the premise being that you’d be unable to state the wish if it were false).

    Though if the genie were smart, “wrong” would presumably be judged against your own internal knowledge. E.g., you can’t knowingly lie.


  • On a technical level, that makes zero sense.

    AI “agents” are basically just fancy prompts with a tool-calling harness. They’re infinitely replicable at zero marginal cost, with no intrinsic value; the real cost comes from the generic CPU host and the API calls to GPU servers, databases, or whatever else, all of which are centralized anyway.
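
    To illustrate, here’s a minimal sketch of the whole “agent” pattern: a system prompt plus a loop that feeds tool results back in. call_llm() and the stub tool are placeholders for whatever completion endpoint you’d use, not any real vendor’s API:

    ```python
    # A sketch of the "agent" pattern: a system prompt plus a tool-calling
    # loop. call_llm() is a placeholder for any chat-completion endpoint
    # (OpenAI-compatible, Ollama, a llama.cpp server, etc.).
    import json

    TOOLS = {
        # Stub tool; a real harness would have web search, file I/O, etc.
        "get_time": lambda args: "2025-01-01T00:00:00Z",
    }

    SYSTEM = ('You may call a tool by replying with JSON like '
              '{"tool": "<name>", "args": {}}. Otherwise, answer normally.')

    def call_llm(messages):
        """Placeholder: swap in whatever completion endpoint you like."""
        raise NotImplementedError

    def run_agent(user_msg, max_steps=5):
        messages = [{"role": "system", "content": SYSTEM},
                    {"role": "user", "content": user_msg}]
        reply = ""
        for _ in range(max_steps):
            reply = call_llm(messages)
            messages.append({"role": "assistant", "content": reply})
            try:
                call = json.loads(reply)  # did the model request a tool?
                result = TOOLS[call["tool"]](call.get("args", {}))
            except (ValueError, TypeError, KeyError):
                return reply              # not a tool call; final answer
            messages.append({"role": "tool", "content": result})
        return reply
    ```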


    Wanna hear a dirty secret?

    “AI” cost is going to zero.

    Model capabilities aren’t really scaling anymore, but inference efficiency is exploding, thanks to resource-constrained labs and a steady stream of published breakthroughs. The endgame of the current bubble is mediocre but useful tools anyone can host themselves, dirt cheap. Maybe a bit more reliable and refined than what we have now, but about as “intelligent.”
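
    And self-hosting those tools already looks something like this. A minimal sketch, assuming a local Ollama server (https://ollama.com) with a quantized model already pulled; the model tag and prompt are just examples:

    ```python
    # Query a locally hosted model over Ollama's REST API.
    # Assumes `ollama serve` is running and the model has been pulled.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3.1:8b",  # any local model tag
            "prompt": "Say hi in five words.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```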

    And guess what?

    Microsoft can’t profit off that. None of the Tech Bros can.

    Point being, this exec is either delusional or jawboning, so the world doesn’t realize that “AI” is a dumb utility/aid they can’t make any profit off of.


  • Actually, this makes perfect sense.

    Starfield is… trying to be part Mass Effect with big-budget cutscenes, but it has less charisma than Wrex has in his toe.

    I’d argue it’s a bad “Bethesda wandering RPG,” without the quirky, charming side areas Oblivion or even Fallout 76 have.

    But it’s an alright No Man’s Sky-like.

    You want some crafting? Looting? A vast amount of chill exploration space? Reasonable “I’m in space” fidelity and tasks to tickle your brain? Starfield’s got it in droves. BGS games already scratched that NMS-style “looting exploration sandbox” itch for some players back when there was no big-budget alternative, and I think Starfield leans into it even more.


    Hence my hypothesis: gamers who love No Man’s Sky like Starfield, while those looking more for “Mass Effect 2” loathe it. You and @[email protected] seem to be further data points supporting that observation.

    The problem is that, for most of us internet dwellers, the expectation for Starfield was “Skyrim but Mass Effect.” And that’s kind of Bethesda’s fault for setting that expectation instead of leaning into Starfield’s real niche (and for wasting cash on what BGS isn’t very good at).





  • I dub it “induced astroturfing”.

    I.e., something artificially popular becoming organically viral through sheer critical mass. It is said, therefore it is believed (and reposted).

    …And it seems reposters don’t stop to consider “Should I really amplify a blue-checkmark Polymarket tweet?”


  • This is proof of why OpenAI’s… opaqueness is so dangerous.

    Chat LLMs tend to treat everything like an exam question or essay prompt, as a direct consequence of how the base models are finetuned. The hand is like a pivot point in a physics problem. But more importantly:

    • The chat context is sort of their whole world, again due to the training format. So they tend to stubbornly adhere to whatever has already been said, and have no real means of self-correction.

    • While we have no idea what OpenAI actually does internally, in basically every other open model the vision component is trained separately from the pure-text side. The point being, these models are alright at the very specific set of vision tasks they’re trained for, but the “coupling” of image input to the bulk of the LLM is very weak (see the sketch below). The reasoning they can do over text does not carry over well.
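
    For reference, here’s roughly what that coupling looks like in open LLaVA-style models. An illustrative sketch with typical dimensions, not OpenAI’s (unknown) architecture:

    ```python
    # Illustrative LLaVA-style bridge between a frozen vision encoder and
    # an LLM. Dims are typical values, not any specific model's.
    import torch
    import torch.nn as nn

    class VisionToTextProjector(nn.Module):
        """Maps vision-encoder patch embeddings into the LLM's token space.
        Often this small MLP is the only trained bridge between modalities."""
        def __init__(self, vision_dim=1024, llm_dim=4096):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, patch_embeds):
            # patch_embeds: (batch, num_patches, vision_dim), e.g. from a CLIP ViT
            return self.proj(patch_embeds)

    # The projected "image tokens" just get prepended to the text embeddings;
    # the LLM itself was pretrained on text only, hence the weak coupling.
    projector = VisionToTextProjector()
    image_tokens = projector(torch.randn(1, 576, 1024))  # 576 patches @ 336px CLIP
    print(image_tokens.shape)  # torch.Size([1, 576, 4096])
    ```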


    The point I’m trying to make is that Altman’s biggest lie is pitching ChatGPT as a general intelligence… It’s not. It’s a dumb, narrow tool, like a drill with specific changeable bits. But they package and market it like it’s “smart,” which is a big fat lie.

    Go to any of the smaller AI vendors/models (like Minimax, with a new model out today) and they do the opposite: they show specific uses in specific harnesses, and hyper-optimize for that.