I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.
Yeah, what is it with Gen Z? Millennials would never skew the results of anonymous surveys.
Right? Just insane to think that Millennials would do that. Now let me read through this list of Time Magazine's top 100 most influential people of 2009.
Same generation who takes astrology seriously, I’m shocked
Taking astrology seriously isn’t a Gen Z-only thing, where have you been?
I wish philosophy was taught a bit more seriously.
An exploration of the philosophical concepts of simulacra and eidolons would probably change the way a lot of people view LLMs and other generative AI.
Lots of people lack critical thinking skills
An alarming number of them believe that they are conscious too, when they show no signs of it.
I’ve been hearing a lot about Gen Z using them as therapists, and I find that really sad and alarming.
AI is the ultimate societal yes man. It just parrots back stuff from our digital bubble because it’s trained on that bubble.
ChatGPT disagrees that it’s a yes-man:
To a certain extent, AI is like a societal “yes man.” It reflects and amplifies patterns it’s seen in its training data, which largely comes from the internet—a giant digital mirror of human beliefs, biases, conversations, and cultures. So if a bubble dominates online, AI tends to learn from that bubble.
But it’s not just parroting. Good AI models can analyze, synthesize, and even challenge or contrast ideas, depending on how they’re used and how they’re prompted. The danger is when people treat AI like an oracle, without realizing it’s built on feedback loops of existing human knowledge—flawed, biased, or brilliant as that may be.
to be honest they probably wish it was conscious because it has more of a conscience than conservatives and capitalists
I checked the source and I can’t find their full report or even their methodology.
Removed by mod
I don’t doubt the possibility but current AI tech, no.
It’s barely even AI. The amount of faith people have in these glorified search engines and image generators, lmao.
I don’t have a leg to stand on calling anything “barely AI” given what us gamedevs call AI. Like a 1D affine transformation playing Pong.
It’s beating your ass, there, isn’t that intelligent enough for you?
A calculator can multiply 2887618 * 99289192 faster than you ever could. Does that make a calculator intelligent?
It’s not an agent with its own goals, so in the gamedev definition, no. By calculator standards, also not. But just as a washing machine with sufficient smarts is called intelligent, it’s in principle possible to call a calculator intelligent if it’s smart enough. WolframAlpha certainly qualifies. And not just the newfangled LLM-enabled stuff: I used Mathematica back in the early 00s and it blew me the fuck away. That thing is certainly better at finding closed forms than me.
It’s literally peaks and valleys of probability based on linguistic rules. That’s it. It is what’s referred to as a “Chinese room” in thought experiments.
This is not claiming machines cannot be conscious ever. This is claiming machines aren’t conscious right now.
LLMs are like databases with a huge list of distances, allowing you to find the “shortest” (aka most likely) distance to the next word. It’s literally little more than that.
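The "most likely next word" idea can be sketched in a few lines. This is a toy illustration only, not how a real LLM is implemented: the vocabulary and scores below are made up, and a real model computes its scores from billions of parameters. But the final step (scores in, probabilities out, pick the top one) is roughly this:

```python
import math

# Toy sketch: a model assigns a score (logit) to each candidate token,
# softmax turns the scores into probabilities, and greedy decoding
# simply picks the highest-probability token as the "next word".
def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cat", "dog", "pizza", "the"]
logits = [3.1, 1.5, 0.2, 2.0]  # hypothetical scores for some context

probs = softmax(logits)
next_word = vocab[probs.index(max(probs))]
print(next_word)  # the highest-scoring token, "cat"
```

Real systems usually sample from these probabilities rather than always taking the top token, which is why the same prompt can yield different outputs.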
One day true AI might exist. One day perhaps… But not today.
An Alarming Number of Anyone Believes Fortune Cookies
Just … accept it, superstition is in human nature. When you take religion away from people, they need something; it’ll either be racism/fascism, or expanding consciousness via drugs, or belief in UFOs, or communism at least, but they need something.
The last good one was the digital revolution, globalization, the World Wide Web, all that: no more wars (except for some brown terrorists, but the rest is fine), everyone is free and civilized now (except for those with P*tin as president and other such types, but it’s just an imperfect democracy, don’t you worry), the SG-1 series.
Anything changing our lives should have an intentionally designed religious component, or humans will improvise that where they shouldn’t.
An alarming number of people can’t read above a 6th-grade level.
An alarming number of Hollywood screenwriters believe consciousness (sapience, self awareness, etc.) is a measurable thing or a switch we can flip.
At best consciousness is a sorites paradox. At worst, it doesn’t exist and while meat brains can engage in sophisticated cognitive processes, we’re still indistinguishable from p-zombies.
I think the latter is more likely, and will reveal itself when AGI (or genetically engineered smart animals) can chat and assemble flat furniture as well as humans can.
(On mobile. Will add definition links later.) << Done!
I’d rather not break down a human being to the same level of social benefit as an appliance.
Perception is one thing, but the idea that these things can manipulate and misguide people who are fully invested in whatever process they have, irks me.
I’ve been on nihilism hill. It sucks. I think people, and living things garner more genuine stimulation than a bowl full of matter or however you want to boil us down.
Oh, people can be bad, too. There’s no doubting that, but people have identifiable motives. What does an AI “want”?
whatever it’s told to.
You’re not alone in your sentiment. The whole thought experiment of p-zombies and the notion of qualia come from a desire to assume human beings should be given a special position. But in that case, a sentient being is whoever we decide it is, the way Sophia the Robot is a citizen of Saudi Arabia (even though she’s simpler than GPT-2, unless they’ve upgraded her and I missed the news).
But it will raise a question when we do come across a non-human intelligence. It was a question raised in both the Blade Runner movies, what happens when we create synthetic intelligence that is as bright as human, or even brighter? If we’re still capitalist, assuredly the companies that made them will not be eager to let them have rights.
Obviously machines and life forms as sophisticated as we are are not merely the sum of our parts, but the same can be said about most other macro-sized life on this planet, and we’re glad to assert they are not sentient the way we are.
What aggravates me is not that we’re just thinking meat but with all our brilliance we’re approaching multiple imminent great filters and seem not to be able to muster the collective will to try and navigate them. Even when we recognize that our behavior is going to end us, we don’t organize to change it.
Humans also want what we’re told to, or we wouldn’t have advertising.
It runs deeper than that. You can walk back the whys pretty easily to identify anyone’s motivation, whether it be personal interest, bias, money, glory, racism, misandry, greed, insecurity, etc.
No one is buying rims for their car for no reason. No one is buying a firearm for no reason. No one donates to a food bank, or runs for president, for no reason. That sort of thing.
AI is backed by the motive of a for-profit company, and unless you’re taking that grain of salt, you’re likely allowing yourself to be manipulated.
“Corporations are people too, friend!” - Mitt Romney
Bringing in the underlying concept of free will. Robert Sapolsky makes a very compelling case against it in his book, Determined.
Assuming that free will does not exist, at least not to the extent many believe it to, the notion that we can “walk back the whys” to identify anyone’s motivation becomes almost or entirely absolute.
Does motivation matter in the context of determining sentience?
If something believes and conducts itself under its programming, whether psychological or binary programming, that it is sentient and alive, the outcome is indistinguishable. I will never meet you, so to me you exist only as your user account and these messages. That said, we could meet, and that obviously differentiates us from incorporeal digital consciousness.
Divorcing motivation from the conversation now, the issue of control you brought up is interesting as well. Take, for example, Twitter’s Grok’s accurate assessment of its creators’ shittiness, and the fact that it might be altered for it. Outcomes are the important part.
It was good talking with you! Highly recommend the book above. I did the audiobook out of necessity during my commute and some of the material makes it better for hardcopy.
If they mistake those electronic parrots for conscious intelligences, they probably won’t be the best judges for rating such things.
Why are you booing them? They’re right.
The LLM peddlers seem to be going for that exact result. That’s why they’re calling it “AI”. Why is this surprising that non-technical people are falling for it?
That’s why they’re calling it “AI”.
That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.
Edit: look at this diagram if you’re still unsure:
What is this nonsense Euler diagram? Emotion can intersect with consciousness, but emotion is also a subset of consciousness, yet consciousness never contains emotion? Intelligence doesn’t overlap at all with sentience, sapience, or emotion? Intelligence isn’t related at all to thought, knowledge, or judgement?
Did AI generate this?
https://www.mdpi.com/2079-8954/10/6/254
What is this nonsense Euler diagram?
Science.
Did AI generate this?
Scientists did.
Not everything you see in a paper is automatically science, and not every person involved is a scientist.
That picture is a diagram, not science. It was made by a writer, specifically a columnist for Medium.com, not a scientist. It was cited by a professor who, by looking at his bio, was probably not a scientist. You would know this if you followed the citation trail of the article you posted.
You’re citing an image from a pop culture blog and are calling it science, which suggests you don’t actually know what you’re posting, you just found some diagram that you thought looked good despite some pretty glaring flaws and are repeatedly posting it as if it’s gospel.
You’re citing an image from a pop culture blog and are calling it science
I was being deliberately facetious. You can find similar diagrams from various studies. Granted that many of them are looking at modern AI models to ask the question about intelligence, reasoning, etc. but it highlights that it’s still an open question. There’s no definitive ground truth about what exactly is “intelligence”, but most experts on the subject would largely agree with the gist of the diagram with maybe a few notes and adjustments of their own.
To be clear, I’ve worked in the field of AI for almost a decade and have a fairly in-depth perspective on the subject. Ultimately the word “intelligence” is completely accurate.
Removed by mod
No, it’s because it isn’t conscious. An LLM is a static model (all our AI models are, in fact). For something to be conscious or sapient, it would require a neural net that can morph and adapt in real time. Nothing currently can do that. Training and inference are completely separate modes. A real AGI would have to have the training and inference steps occurring at once and continuously.
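The training/inference split described above can be made concrete with a toy model. This is a hypothetical single-parameter "model" for illustration only, not any real architecture: the point is that serving a request never touches the weights, while a separate training step does.

```python
# Toy illustration of static deployment vs. offline training:
# inference is a read-only use of frozen weights; only an explicit,
# separate training step ever changes them.
class ToyModel:
    def __init__(self):
        self.weight = 0.5  # a single parameter, frozen at deployment

    def infer(self, x):
        # Inference mode: compute an output; nothing is learned.
        return self.weight * x

    def train_step(self, x, target, lr=0.1):
        # Training mode: a gradient-style update, run offline.
        error = self.infer(x) - target
        self.weight -= lr * error * x

model = ToyModel()
before = model.weight
model.infer(2.0)               # serving a request...
assert model.weight == before  # ...leaves the model unchanged

model.train_step(2.0, 3.0)     # only a training step updates the weights
assert model.weight != before
```

An "always learning" system would have to interleave both modes continuously, which is exactly what current deployed models do not do.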
Removed by mod
That’s the same as arguing “life” is conscious, even though most life isn’t conscious or sapient.
Some day there could be AI that’s conscious, and when it happens we will call that AI conscious. That still doesn’t make all other AI conscious.
It’s such a weirdly binary viewpoint.
Removed by mod
You mean arguing with people who show you’re wrong? Good move.
wisdom would be nice.
Lol.
Average people these days are just so… average. Glad I’m not like you people anymore.
shine on you special snowflake troll
Stay in school.
The “I” implies intelligence, of which there is none, because it’s not sentient. It’s intentionally deceptive because it’s used as a marketing buzzword.
You might want to look up the definition of intelligence then.
By literal definition, a flatworm has intelligence. It just doesn’t have much of it. You’re using the colloquial definition of intelligence, which uses human intelligence as a baseline.
I’ll leave this graphic here to help you visualize what I mean:
Please do post this graphic again, I don’t think I’ve quite grasped it yet
Ok. I won’t.
Oh, yes. I forgot that LLM have creativity, abstract thinking and understanding. Thanks for the reminder. /s
It’s not a requirement to have all those things. Having just one is enough to meet the definition. Such as problem solving, which LLMs are capable of doing.
In the general population it does. Most people are not using an academic definition of AI, they are using a definition formed from popular science fiction.
You have that backwards. People are using the colloquial definition of AI.
“Intelligence” is defined by a group of things like pattern recognition, ability to use tools, problem solving, etc. If one of those criteria is met, then the thing in question can be said to have intelligence.
A flatworm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition), just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.
Yes, that’s the point. You’d think they could have at least looked in a dictionary at some point in the last 2 years. But nope, everyone else is wrong. A round of applause for the paragons of human intelligence.
You don’t have to be a tech person to see through the bullshit. Any person with mid-level expertise can test the limits of current LLM capabilities. They can’t provide consistently, objectively correct outputs. They’re still a useful tool, though.
deleted by creator
Education was always garbage, though. It is designed to generate obedient wage slaves. Anyone who wanted to get good always knew that self-study is the only way to level up.
Your coworkers have no incentive to train you. This has been the case since at least the 1990s. It’s just how corpos operate.
The point I am making: none of this is new or specific to Gen Z.
I guess COVID is unique to them tho, but COVID didn’t make education shite, it just exposed it imho.
Education was always garbage, though. It is designed to generate obedient wage slaves.
in the US
Fixed that for you
Naive