

Right? Qwen 3.6 27B is fantastic, for example. Data centers need to stop hoarding all the hardware so we can afford to run models like that.


Spanking the code monkey.
Gabe doesn’t do a lot of interviews and mostly keeps out of the public eye. Elon Musk also had a lot of fans until he started running his mouth too much and revealing who he really is.


I think the key point is that you’re not outsourcing critical thinking to LLMs, but instead using them as a tool to do grunt work that you could’ve done yourself, just slower. That means staying constantly critical of everything the LLM does: asking questions, asking for links to credible sources, and asking for info to weigh the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files especially helpful for making the LLM follow my desired practices without me constantly making it refactor.
Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.
Duvets are a pain in the ass and both the duvet and cover need to be washed anyway. Why not just stick with comforters with sheets and save yourself the trouble?


Are all these data centers really going to be running at full capacity when open models like Qwen 3.6 27B approach frontier performance but can run on consumer hardware? Sure, it’s slow for now, though there are tweaks to optimize it. How long until we see open models that run reasonably fast and give frontier models a run for their money? My company MacBook can already run models like this, so will there be a point where companies stop paying hundreds per user per month for cloud AI and have devs run open models on the laptops they already have? I definitely won’t be surprised if that’s the case.


An official member of the US military industrial complex is making a phone with a proprietary OS that hoovers up your data and shoves AI slop in your face 24/7. What’s not to like?


Maybe try old.reddit.com?
Lots of fond memories watching Planet Earth in the evenings with a cup of herbal tea.


I got an espresso machine a few years ago and learned to make a proper latte with it. At this point, a $9 cup of charry sugar water made by a teenager in a fast food restaurant doesn’t really appeal to me.
Listening to a group just jamming in any genre is boring. Selecting the best parts and composing them into a refined piece of music with a lot of deliberate thought behind it goes a long way towards creating something worthy of a listener’s time.


I’d be curious about stats on downvoting as well. Anecdotally, downvote frequency seems to vary by community. Personally, I don’t downvote for differences of opinion, and instead withhold upvotes, with downvotes reserved for blatantly toxic behavior. The etiquette across Lemmy seems varied, though.
This is the bee’s knees.


Interesting read. Here’s a link without the paywall: https://archive.ph/20260507223322/https://www.bloomberg.com/news/articles/2026-04-27/why-china-s-deepseek-qwen-and-moonshot-are-a-worry-for-us-ai-rivals
Edit:
Anthropic says:
So open weight models are a national security risk? Guess we’d all better pay subscriptions, let the data centers be built, and let a couple companies building proprietary models have a monopoly to protect our security, then. 🙄