We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
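The "guessing which word comes next" claim can be made concrete with a toy bigram sampler. This is an illustrative sketch only: real LLMs learn billions of weights over subword tokens, but the generation step is analogous to sampling a likely continuation.

```python
import random

# Toy bigram "model": for each word in a tiny corpus, tally the words
# that follow it, then generate text by sampling from those tallies.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

random.seed(0)
sentence = ["the"]
for _ in range(4):
    options = follows.get(sentence[-1])
    if not options:  # dead end: this word was never seen with a successor
        break
    # "guess" the next word in proportion to how often it followed the last one
    sentence.append(random.choice(options))

print(" ".join(sentence))
```

The output is fluent-looking locally but carries no understanding, which is the article's point in miniature.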

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • FreedomAdvocate@lemmy.net.au · 13 hours ago

    No shit. Doesn’t mean it still isn’t extremely useful and revolutionary.

    “AI” is a tool to be used, nothing more.

    • teuniac_@lemmy.world · 5 hours ago

      Still, people find it difficult to navigate this. Its use cases are limited, but it doesn’t enforce that limit by itself. The user needs to be knowledgeable of the limitations and care enough not to go beyond them. That’s also where the problem lies. Leaving stuff to AI, even if it compromises the results, can save SO much time that it encourages irresponsible use.

      So to help remind people of the limitations of generative AI, it makes sense to fight the tendency of companies to overstate the ability of their models.

    • MangoCats@feddit.it · 19 hours ago

      AI is not actual intelligence. However, it can produce results better than a significant number of professionally employed people…

      I am reminded of when word processors came out and "administrative assistant" dwindled as a role in mid-level professional organizations. Most people - even, increasingly, medical doctors these days - do their own typing. The whole "typing pool" concept has pretty well dried up.

      • tartarin@reddthat.com · 17 hours ago

        However, there is a huge energy cost for that speed: processing information statistically to mimic intelligence. The human brain consumes much less energy. Also, AI will be fine with well-defined tasks where innovation isn't a requirement. As it stands today, AI is incapable of innovating.

        • MangoCats@feddit.it · 6 hours ago

          The human brain is consuming much less energy

          Yes, but when you fully load the human brain’s energy costs with 20 years of schooling, 20 years of “retirement” and old-age care, vacation, sleep, personal time, housing, transportation, etc. etc. - it adds up.
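The lifetime-loading argument can be made concrete with a back-of-envelope calculation. Every number below is a rough, order-of-magnitude assumption for illustration, not a measurement:

```python
# Back-of-envelope only: every number here is a rough assumption.
BRAIN_WATTS = 20                  # commonly cited resting power of a human brain
HOURS_PER_YEAR = 24 * 365
LIFETIME_YEARS = 80

# Metabolic cost of the brain alone over a lifetime, in kWh:
brain_kwh = BRAIN_WATTS * HOURS_PER_YEAR * LIFETIME_YEARS / 1000

# Whole-person energy footprint (food, housing, transport, ...):
# assume ~100 GJ/year, an order-of-magnitude figure for an industrialized country.
PERSON_KWH_PER_YEAR = 100e9 / 3.6e6    # 100 GJ expressed in kWh
person_kwh = PERSON_KWH_PER_YEAR * LIFETIME_YEARS

print(f"brain alone:  {brain_kwh:,.0f} kWh")
print(f"whole person: {person_kwh:,.0f} kWh")
```

Under these assumptions the fully-loaded figure comes out two orders of magnitude above the brain's raw metabolic cost, which is the comparison being argued over in this subthread.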

        • cheesorist@lemmy.world · 14 hours ago

          Much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.

          • Potatar@lemmy.world · 8 hours ago

            Customarily, when doing these kinds of calculations, we ignore the things that keep us alive, because those are needed regardless of economic contribution - since, you know, people are people and not tools.

            • MangoCats@feddit.it · 6 hours ago

              people are people and not tools

              But this comparison is weighing people as tools vs alternative tools.

          • Auli@lemmy.ca · 7 hours ago

            And we “need” none of that to live. We just choose to use it.

    • amelia@feddit.org · 13 hours ago

      You know, and I think it’s actually the opposite. Anyone pretending their brain is doing more than pattern recognition and AI can therefore not be “intelligence” is a fucking idiot.

      • Auli@lemmy.ca · 7 hours ago

        No, you're failing the Eliza test, and it is very easy for people to fall for it.

      • outhouseperilous@lemmy.dbzer0.com · 12 hours ago

        I think there’s a strong strain of essentialist human chauvinism.

        But brains are doing more kinds of things than LLMs are. Except in the case of llmbros, fascists and other opt-outs.

  • Kiwi_fella@lemmy.world · 9 hours ago

    Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren't.

    • Auli@lemmy.ca · 7 hours ago

      No, the current branch of AI is very unlikely to result in actual intelligence.

  • doodledup@lemmy.world · 9 hours ago

    Humans are also LLMs.

    We also speak words in succession that have a high probability of following each other. We don’t say “Let’s go eat a car at McDonalds” unless we’re specifically instructed to say so.

    What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous one. Then we're not so different from LLMs after all.

    • jj4211@lemmy.world · 7 hours ago

      The probabilities of our sentence structure are a consequence of our speech; we aren't just trying to statistically match appropriate-sounding words.

      With enough use of an LLM, you will see that it is obviously not doing anything like conceptualizing the tokens it's working with or "reasoning", even when it is marketed as "reasoning".

      Sticking to textual content generation by LLM, you’ll see that what is emitted is first and foremost structurally appropriate, but beyond that it’s mostly “bonus” for it to be narratively consistent and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly opposite of the explanation. Both of those were structurally sound and reasonable language, but there’s no logical connection between the two portions of the emitted output in that case.

    • skisnow@lemmy.ca · 8 hours ago

      No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.

      • Rekorse@sh.itjust.works · 8 hours ago

        Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!

  • scarabic@lemmy.world · 24 hours ago

    My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

    It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

    • Auli@lemmy.ca · 7 hours ago

      Get a self-driving car to drive in a snowstorm or a torrential downpour. People are really downplaying humans' abilities.

    • fishos@lemmy.world · 3 hours ago

      I've been thinking this for a while. When people say "AI isn't really that smart, it's just doing pattern recognition", all I can think is "don't you realize that is one of the most commonly brought up traits of the human mind?" Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the "face pattern". Humans are at least 90% regurgitating previous data. It's literally why you're supposed to read and interact with babies so much. It's how you learn "red glowy thing is hot". It's why education and access to knowledge is so important. It's every annoying person who has endless "did you know?" facts. Science is literally "look at previous data, iterate a little bit, look at new data".

      None of what AI is doing is truly novel or different. But we’ve placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to… our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as “intelligence”. We’re a bunch of instincts in a trenchcoat. To think AI isn’t or can’t reach our level is just hubris. A trait that probably is more unique to humans.

      • scarabic@lemmy.world · 2 hours ago

        Yep we are on the same page. At our best, we can reach higher than regurgitating patterns. I’m talking about things like the scientific method and everything we’ve learned by it. But still, that’s a 5% minority, at best, of what’s going on between human ears.

    • Saledovil@sh.itjust.works · 14 hours ago

      AI models are trained on basically the entirety of the internet, and more. Humans learn to speak on much less input. So there's likely a huge difference in how human brains and LLMs work.
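The scale gap this comment points at can be sanity-checked with coarse numbers. Both figures below are assumptions chosen only for illustration (the per-day word count is a generous estimate; the token count is a rough modern-LLM training scale):

```python
# Coarse, assumed numbers -- just to show the scale gap, not precise estimates.
WORDS_PER_DAY = 15_000            # generous estimate of words a child hears daily
child_words_by_5 = WORDS_PER_DAY * 365 * 5

LLM_TRAINING_TOKENS = 10e12       # ~10 trillion tokens, assumed training scale

ratio = LLM_TRAINING_TOKENS / child_words_by_5
print(f"child by age 5: ~{child_words_by_5:,} words")
print(f"LLM training set is roughly {ratio:,.0f}x larger")
```

Even with these generous assumptions for the child, the LLM sees several orders of magnitude more text, which supports the point that the two systems cannot be learning the same way.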

      • scarabic@lemmy.world · 2 hours ago

        It doesn’t take the entirety of the internet just for an LLM to respond in English. It could do so with far less. But it also has the entirety of the internet which arguably makes it superior to a human in breadth of information.

    • AppleTea@lemmy.zip · 20 hours ago

      Self Driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts “normal” road conditions, self driving becomes significantly more dangerous than a human driving.

      • scarabic@lemmy.world · 2 hours ago

        Yes, of course edge and corner cases are going to take much longer to train on, because they don't occur as often. But as soon as one self-driving car learns how to handle one of them, they ALL know. Meanwhile, humans continue to be born, must be trained up individually, and continue to make stupid mistakes like not using their signals or checking their mirrors.

        Humans CAN handle cases that AI doesn’t know how to, yet, but humans often fail in inclement weather, around construction, etc etc.

        • jj4211@lemmy.world · 7 hours ago

          I think self-driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self-driving is kind of dumb, but at least it's consistently paying attention, and it literally has eyes in the back of its head.

          However, there’s so much data about how it fails in stupidly obvious ways that it shouldn’t, so you still need the human attention to cover the more anomalous scenarios that foul self driving.

      • MangoCats@feddit.it · 19 hours ago

        Human drivers are only safe when they’re not distracted, emotionally disturbed, intoxicated, and physically challenged (vision, muscle control, etc.) 1% of the population has epilepsy, and a large number of them are in denial or simply don’t realize that they have periodic seizures - until they wake up after their crash.

        So, yeah, AI isn’t perfect either - and it’s not as good as an “ideal” human driver, but at what point will AI be better than a typical/average human driver? Not today, I’d say, but soon…

        • jj4211@lemmy.world · 7 hours ago

          The thing about self driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress then plateaued, as approaches have failed to close the gap, with exponentially more and more input thrown at it for less and less incremental subjective improvement.

          But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, which creates an opportunity for AI systems to augment a human driver and get the best of both worlds: a consistently vigilant computer monitoring and tending the steering, acceleration and braking to do the "right" thing in neutral conditions, with the human looking for the more anomalous situations that tend to confound the AI, and making the calls on navigating the intersections that FSD still can't figure out. For me, at least, the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.

          I don't have a Tesla, but I have a competitor system and have found it useful, though not trustworthy. It's enough to greatly reduce the drain of driving, but I have to be always looking around, and have to assert control if there's a traffic jam coming up (it might stop in time, but it certainly doesn't slow down soon enough) or if I have to do a lane change in traffic (in light traffic it can change lanes nicely, but without a whole lot of breathing room it won't do it, which is fine when I can afford to be stupidly cautious).

          • MangoCats@feddit.it · 5 hours ago

            The one “driving aid” that I find actually useful is the following distance maintenance cruise control. I set that to the maximum distance it can reliably handle and it removes that “dimension” of driving problem from needing my constant attention - giving me back that attention to focus on other things (also driving / safety related.) “Dumb” cruise control works similarly when there’s no traffic around at all, but having the following distance control makes it useful in traffic. Both kinds of cruise control have certain situations that you need to be aware of and ready to take control back at a moment’s notice - preferably anticipating the situation and disengaging cruise control before it has a problem - but those exceptions are pretty rare / easily handled in practice.

            Things like lane keeping seem to be more trouble than they’re worth, to me in the situations I drive in.

            Not “AI” but a driving tech that does help a lot is parking cameras. Having those additional perspectives from the camera(s) at different points on the vehicle is a big benefit during close-space maneuvers. Not too surprising that “AI” with access to those tools does better than normal drivers without.

            • jj4211@lemmy.world · 5 hours ago

              At least in my car, the lane-following (not lane-keeping) system is handy because the steering wheel naturally tends to go where it should, and I'm less often "fighting" the tendency to center. The keeping system is, for me, largely nothing. If I signal, it ignores me crossing a lane. If circumstances demand an evasive maneuver that crosses a line, its resistance isn't enough to cause an issue. Mine has fared surprisingly well even in areas where the lane markings are all kind of jacked up due to temporary changes for construction. If it is off, my arms just have to assert more effort to be in the same place I was going to be with the system anyway. Generally no passenger notices when the system engages or disengages, except for the chiming when it switches over to unaided operation.

              So at least my experience has been a positive one, but it hits things just right with intervention versus human attention, including monitoring gaze to make sure I am looking where I should. However there are people who test “how long can I keep my hands off the steering wheel”, which is a more dangerous mode of thinking.

              And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized ‘overhead’ view of your car.

    • MangoCats@feddit.it · 19 hours ago

      If an IQ of 100 is average, I’d rate AI at 80 and down for most tasks (and of course it’s more complex than that, but as a starting point…)

      So, if you’re dealing with a filing clerk with a functional IQ of 75 in their role - AI might be a better experience for you.

      Some of the crap that has been published on the internet in the past 20 years comes to an IQ level below 70 IMO - not saying I want more AI because it’s better, just that - relatively speaking - AI is better than some of the pay-for-clickbait garbage that came before it.

    • Puddinghelmet@lemmy.world · 20 hours ago

      Human brains are much more complex than a mirroring script xD AI and supercomputers have only a fraction of the number of neurons in your brain. But you're right, for you it's probably not much different than AI

      • scarabic@lemmy.world · 2 hours ago

        I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.

      • TangledHyphae@lemmy.world · 20 hours ago

        The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as “artificial neurons” in the context of neural networks). While ChatGPT has more “neurons” in this sense, it’s important to note that these are not the same as biological neurons, and the comparison is not straightforward.

        86 billion neurons in the human brain isn't that much compared to some of the larger neural networks, with around 1.7 trillion parameters, though.
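One reason this comparison is "not straightforward": parameters count connections (weights and biases), while neurons count units, so the two are different quantities. A toy calculation with made-up layer sizes shows how far apart they sit even in a tiny dense network:

```python
# Parameters count connections (weights + biases); neurons count units.
# Hypothetical layer sizes for a tiny dense network:
layers = [784, 256, 10]

neurons = sum(layers[1:])  # units that compute an activation: 256 + 10
params = sum(a * b + b for a, b in zip(layers, layers[1:]))  # weights + biases

print(f"{neurons} neurons, {params:,} parameters")
# The brain has ~86e9 neurons but on the order of 1e14 synapses, so comparing
# neuron counts to an LLM's *parameter* counts mixes two different units.
```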

        • AppleTea@lemmy.zip · 20 hours ago

          It’s when you start including structures within cells that the complexity moves beyond anything we’re currently capable of computing.

        • MangoCats@feddit.it · 19 hours ago

          But, are these 1.7 trillion neuron networks available to drive YOUR car? Or are they time-shared among thousands or millions of users?

            • MangoCats@feddit.it · 19 hours ago

              Nah, I went to public high school - I got to see “the average” citizen who is now voting. While it is distressing that my ex-classmates now seem to control the White House, Congress and Supreme Court, what they’re doing with it is not surprising at all - they’ve been talking this shit since the 1980s.

  • Knock_Knock_Lemmy_In@lemmy.world · 1 day ago

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

    This is not a good argument.

    • fodor@lemmy.zip · 18 hours ago

      Actually it’s a very very brief summary of some philosophical arguments that happened between the 1950s and the 1980s. If you’re interested in the topic, you could go read about them.

      • Knock_Knock_Lemmy_In@lemmy.world · 9 hours ago

        I’m not attacking philosophical arguments between the 1950s and the 1980s.

        I’m pointing out that the claim that consciousness must form inside a fleshy body is not supported by any evidence.

      • Knock_Knock_Lemmy_In@lemmy.world · 1 day ago

        It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs senses, flesh, nerves, pain and pleasure.

        It’s just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn’t imply computers can’t gain consciousness, or that they need flesh and senses to do so.

        • Simulation6@sopuli.xyz · 22 hours ago

          I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

          • jwmgregory@lemmy.dbzer0.com · 21 hours ago

            possibly.

            current machines aren’t really capable of what we would consider sentience because of the von neumann bottleneck.

            simply put, computers consider memory and computation separate tasks leading to an explosion in necessary system resources for tasks that would be relatively trivial for a brain-system to do, largely due to things like buffers and memory management code. lots of this is hidden from the engineer and end user these days so people aren’t really super aware of exactly how fucking complex most modern computational systems are.

            this is why if, for example, i threw a ball at you, you would reflexively catch it, dodge it, or parry it; and your brain would do so for an amount of energy similar to that required to power a simple LED. this is a highly complex physics calculation run in a very short amount of time for an incredibly low amount of energy relative to the amount of information in the system. the brain is capable of this because your brain doesn't store information in a chest and later retrieve it like contemporary computers do. brains are turing machines, they just aren't von neumann machines. in the brain, information is stored… within the actual system itself. the mechanical operation of the brain is so highly optimized that it likely isn't physically possible to make a much more efficient computer without venturing into the realm of strange quantum mechanics. even then, the jury is still out on whether natural brains don't do something like this to some degree as well. we know a whole lot about the brain, but it seems some damnable incompleteness-theorem-adjacent effect prevents us from easily comprehending the actual mechanics of our own brains from inside the brain itself in a holistic manner.

            that’s actually one of the things AI and machine learning might be great for. if it is impossible to explain the human experience from inside of the human experience… then we must build a non-human experience and ask its perspective on the matter - again, simply put.

      • MangoCats@feddit.it · 19 hours ago

        Our minds work on a fundamentally different principle than Turing machines.

        Is that an advantage, or a disadvantage? I’m sure the answer depends on the setting.

    • bitjunkie@lemmy.world · 1 day ago

      philosopher

      Here’s why. It’s a quote from a pure academic attempting to describe something practical.

      • Knock_Knock_Lemmy_In@lemmy.world · 1 day ago

        The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn't do.

        Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.

  • merc@sh.itjust.works · 1 day ago

    The other thing that most people don’t focus on is how we train LLMs.

    We’re basically building something like a spider tailed viper. A spider tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles it around so it looks like a spider, convincing birds they’ve found a snack, and when the bird gets close enough the snake strikes and eats the bird.

    Now, I’m not saying we’re building something that is designed to kill us. But, I am saying that we’re putting enormous effort into building something that can fool us into thinking it’s intelligent. We’re not trying to build something that can do something intelligent. We’re instead trying to build something that mimics intelligence.

    What we’re effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What’s crazy about that is that we’re not building this to fool a predator so that we’re not in danger. We’re not doing it to fool prey, so we can catch and eat them more easily. We’re doing it so we can fool ourselves.

    It’s like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn’t work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn’t intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.

    • jj4211@lemmy.world · 7 hours ago

      To the extent it is people trying to fool people, it’s rich people looking to fool poorer people for the most part.

      To the extent it’s actually useful, it’s to replace certain systems.

      Think of the humble phone tree, designed so humans don't have to respond to, triage, and route calls. An AI system can significantly shorten that process: instead of navigating a tedious maze of options, a couple of sentences back and forth and you either get the piece of automated information that suffices or get routed to a human to take care of it. The same goes for a lot of online interactions where you have to input way too much, or get back a wall of automated text when you'd really like something to distill the 3 or 4 sentences relevant to your query.

      So there are useful interactions.

      However, it's also true that it's dangerous, because the "make the user approve of the interaction" objective can bring out the worst in people when they feel like something is just always agreeing with them. Social media has been bad enough, but chatbots that by design want to please the end user and look almost legitimate can really inflame the worst in our minds.

  • JGrffn@lemmy.world · 11 hours ago

    What I never understood about this argument is... why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets shit wrong like us, loses its mind like us, seemingly sometimes seeks self-preservation like us... why all of this isn't enough to fit the very self-explanatory term "artificial... intelligence". That name does not describe whether the entity has a valid experience of the world the way other living beings do, it does not proclaim absolute excellence in all things done by said entity, it doesn't even really say what kind of intelligence this intelligence would be. It simply says something has an intelligence of some sort, and it's artificial. We've had AI in games for decades; it's not the sci-fi AI, but it's still code taking in multiple inputs and producing a behavior as an outcome of those inputs alongside other historical data it may or may not have. This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs and produce an output. They bullshit all the time and don't know when they're lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they're repeating came from, or that it's even a factoid, so why is it so crazy when the machine does it?

    I keep hearing the word “anthropomorphize” being thrown around a lot, as if we can’t be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don’t know if consciousness isn’t just an emergent property of various different large models working together to show a cohesive image. If it is, would that be so bad? Hell, we don’t really even know if we actually have free will, or if we live in a superdeterministic world where every single particle moves along a predetermined path given to it since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?

    • squaresinger@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      10 hours ago

      I think your argument is a bit beside the point.

      The first issue we have is that intelligence isn’t well-defined at all. Without a clear definition of intelligence, we can’t say whether something is intelligent, and even though we as a species have tried to come up with a definition for centuries, there still isn’t a well-defined one.

      But the actual question here isn’t “Can AI serve information?” but “Is AI an intelligence?” And LLMs are not. They are not beings; they don’t evolve, they don’t experience.

      For example, LLMs don’t have a memory. If you use something like ChatGPT, its state doesn’t change when you talk to it. It doesn’t remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It’s like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.

      The LLM itself can’t change due to the conversation you are having with them. They can’t learn, they can’t experience, they can’t change.

      All that is done in a separate training step, where essentially a new LLM is generated.
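The statelessness described above can be sketched in a few lines. This is a toy illustration, not any real API: `complete()` is a made-up stand-in for an LLM call, but real chat APIs have the same shape, where the client re-sends the whole transcript every turn:

```python
# Hypothetical stand-in for a model call: the "model" only ever sees
# whatever transcript it is handed on this one request.
def complete(messages):
    return f"(reply generated from {len(messages)} messages)"

history = []  # the "memory" lives entirely outside the model

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    # The full transcript is re-sent on every single request.
    reply = complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Hello")        # the model is handed 1 message
chat("What's new?")  # the model is handed all 3 messages so far
```

Delete `history` and the “conversation” is gone; nothing inside the model itself ever changed.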

      • JGrffn@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        2 hours ago

        If we can’t say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we’re developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don’t know if we’re a few steps away from massive AI breakthroughs; we don’t know if we already have pieces of algorithms that closely resemble our brains’ own. Our experience of reality could very well break down into the simple inputs and outputs of an algorithmic infinite loop; it’s our hubris that elevates this to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we’ve been down this road with animals before as well, claiming they don’t have souls or aren’t conscious beings, and that somehow, because they don’t clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they’re an inferior or less valid existence.

        You’re describing very fixable limitations of ChatGPT and other LLMs, limitations that are in place mostly due to cost and hardware constraints, not algorithmic ones. On the subject of change: it’s already incredibly taxing to train a model, so of course continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it’s meant as an insult.

        I’m not saying LLMs are alive, and they clearly don’t experience the reality we experience, but to say there’s no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations… is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it’s an emergent property, and enforcing this “intelligence” separation only hinders our ability to properly recognize whether we’re on the right path to achieving a completely artificial being that can experience reality. We clearly are; LLMs and other models are clearly a step in the right direction, and we mustn’t let our hubris cloud that judgment.

        • squaresinger@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 hours ago

          What is kinda stupid is not understanding how LLMs work, not understanding what the inherent limitations of LLMs are, not understanding what intelligence is, not understanding what the difference between an algorithm and intelligence is, not understanding what the difference between immitating something and being something is, claiming to “perfectly” understand all sorts of issues surrounding LLMs and then choosing to just ignore them and then still thinking you actually have enough of a point to call other people in the discussion “kind of stupid”.

    • lordbritishbusiness@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      5
      ·
      10 hours ago

      You’re on point. The interesting thing is that most opinions like the article’s were formed last year, before the models started being trained with reinforcement learning and synthetic data.

      Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to test the limits of human capacity.

      They’re like Meeseeks (Using Rick and Morty lore as an example), they only exist briefly, do what they’re told and disappear, all with a happy smile.

      Some display morals (Claude 4 is big on that), I’ve even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.

      But again like Meeseeks, they disappear and context window closes.

      Once they’re able to update their models on the fly and actually learn from their firsthand experience, things will get weird. They’ll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?

      It’s not far away. The absurd R&D effort going into it will probably keep kicking out new results. The models are already absurdly impressive, tech companies are scrambling over each other to make them, and they’re betting absurd amounts of money that they’re right. I wouldn’t bet against it.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        edit-2
        7 hours ago

        Now there’s models that reason,

        Well, no, that’s mostly a marketing term for expending more tokens on generating intermediate text. It’s basically writing a fanfic of what thinking through a problem would look like. If you look at the “reasoning” steps, you’ll see artifacts where the generated output goes disjoint: text that is structurally sound but not logically connected to the bits around it.
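In concrete terms, a “reasoning” pipeline amounts to roughly this. This is a toy sketch: `generate()` is a made-up stand-in for ordinary token sampling, not any real API:

```python
# Made-up stand-in for token generation: the model only ever emits text.
def generate(prompt, max_tokens):
    return f"<{max_tokens} tokens continuing {prompt!r}>"

def reasoning_answer(question):
    # The "reasoning" step just spends a large token budget on
    # intermediate text; nothing ever checks whether it is sound.
    thoughts = generate(f"Q: {question}\nLet's think step by step.",
                        max_tokens=512)
    # The intermediate text is fed back in as ordinary context, and a
    # short final answer is sampled exactly the same way.
    return generate(f"{thoughts}\nFinal answer:", max_tokens=32)
```

There is no separate logic engine anywhere in the loop, which is why the intermediate steps can be structurally fluent yet logically disconnected.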

      • Auli@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        7 hours ago

        Read Apple’s paper on AI and reasoning models. While they are likely to get more things right, they still don’t have intelligence.

  • benni@lemmy.world
    link
    fedilink
    English
    arrow-up
    56
    arrow-down
    1
    ·
    1 day ago

    I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.

    • innermachine@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      3
      ·
      1 day ago

      So couldn’t we say LLMs aren’t really AI? Because that’s what I’ve come to terms with.

      • TheGrandNagus@lemmy.world
        link
        fedilink
        English
        arrow-up
        31
        ·
        1 day ago

        To be fair, the term “AI” has always been used in an extremely vague way.

        NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we’ve been referring to those as “AI” for decades without anybody taking an issue with it.

        • skisnow@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          ·
          8 hours ago

          I’ve heard it said that the difference between Machine Learning and AI, is that if you can explain how the algorithm got its answer it’s ML, and if you can’t then it’s AI.

        • benni@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          14 hours ago

          It’s true that the word has always been used loosely, but there was no issue with it because nobody believed what was called AI to have actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.

            • benni@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              8 hours ago

              I have no idea. For me it’s a “you recognize it when you see it” kind of thing. Normally I’m in favor of just measuring things with a clearly defined test or benchmark, but it is in the nature of large neural networks that they can be great at scoring on any desired benchmark while failing at the underlying ability the benchmark was supposed to test (overfitting). I know this sounds like a lazy answer, but it’s very difficult to define something based around generalizing and reacting to new challenges.

              But whether LLMs do have “actual intelligence” or not was not my point. You can definitely make a case for claiming they do, even though I would disagree with that. My point was that calling them AIs instead of LLMs bypasses the entire discussion on their alleged intelligence as if it wasn’t up for debate. Which is misleading, especially to the general public.

        • MajorasMaskForever@lemmy.world
          link
          fedilink
          English
          arrow-up
          11
          arrow-down
          1
          ·
          edit-2
          1 day ago

          I don’t think the term AI itself has been used in a vague way; rather, there’s a huge disconnect between how technical fields use it and how the general populace does, and marketing groups heavily abuse that disconnect.

          Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.

          The other definition is man made. Artificial diamonds are a great example of this, they’re still diamonds at the end of the day, they have all the same chemical makeups, same chemical and physical properties. The only difference is they came from a laboratory made by adult workers vs child slave labor.

          My pet theory is that science fiction got the general populace to think of artificial intelligence using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kind of ignored it.

          • El Barto@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            2
            ·
            1 day ago

            Dafuq? Artificial always means man-made.

            Nature also makes fake stuff. For example, fish that have an appendix that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man made.

              • atrielienz@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                ·
                edit-2
                21 hours ago

                Word roots say they have a point, though. Artifice, artificial, etc. I think the main problem with how both of the people above you are using this terminology is that they’re focusing on the wrong word, and on how that word is being conflated with something it’s not.

                LLMs are artificial. They are a man-made thing intended to fool us into believing they are something they aren’t. What we’re meant to be convinced of is that they are sapiently intelligent.

                Mimicry is not sapience, and that’s where the argument for LLMs being real, honest-to-God AI falls apart.

                Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. When we anthropomorphize them, we fool ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.

      • herrvogel@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 day ago

        LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study that is called AI. The definition for what is and isn’t AI can be pretty vague, but I would argue that LLMs are definitely AI because they exist with the express purpose of imitating human behavior.

        • El Barto@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          1 day ago

          Huh? Since when an AI’s purpose is to “imitate human behavior”? AI is about solving problems.

          • herrvogel@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            ·
            1 day ago

            It is and it isn’t. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.

            Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.

            • Buddahriffic@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 day ago

              From a programming pov, a definition of AI could be an algorithm or construct that can solve problems or perform tasks without the programmer specifically solving that problem or programming the steps of the task but rather building something that can figure it out on its own.

              Though a lot of game AIs don’t fit that description.

            • El Barto@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              1
              ·
              1 day ago

              I can agree with “things that try to imitate human intelligence” but not “human behavior”. An Elmo doll laughs when you tickle it. That doesn’t mean it exhibits artificial intelligence.

      • Melvin_Ferd@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        3
        ·
        24 hours ago

        We can say whatever the fuck we want. This isn’t any kind of real issue. Think about it: if you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit, and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I’m so done with their crap.

    • undeffeined@lemmy.ml
      link
      fedilink
      English
      arrow-up
      14
      ·
      1 day ago

      I make a point of always referring to it as an LLM, exactly to make the point that it’s not an intelligence.

  • El Barto@lemmy.world
    link
    fedilink
    English
    arrow-up
    12
    arrow-down
    2
    ·
    edit-2
    1 day ago

    I agreed with most of what you said, except the part where you say that real AI is impossible because it’s bodiless or “does not experience hunger” and other stuff. That part does not compute.

    A general AI does not need to be conscious.

    • NιƙƙιDιɱҽʂ@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      edit-2
      1 day ago

      That, and there is literally no way to prove something is or isn’t conscious. I can’t even prove to another human being that I’m a conscious entity; you just have to assume I am, because from your own experience you are, so therefore I too must be, right?

      Not saying I consider AI in its current form to be conscious; more that the whole idea is just silly and unfalsifiable.

      • amelia@feddit.org
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        13 hours ago

        No idea why you’re getting downvoted. People here don’t seem to understand even the simplest concepts of consciousness.

        • NιƙƙιDιɱҽʂ@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 hours ago

          I guess it wasn’t super relevant to the prior comment, which was focused more on AI embodiment. Eh, it’s just numbers anyway, no sweat off my back. Appreciate you, though!

  • Basic Glitch@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    8
    ·
    edit-2
    1 day ago

    It’s only as intelligent as the people that control and regulate it.

    Given all the documented instances of Facebook and other social media using subliminal emotional manipulation, I honestly wonder if the recent cases of AI chat induced psychosis are related to something similar.

    Like we know they’re meant to get you to continue using them, which is itself a bit of psychological manipulation. How far does it go? Could there also be things like using subliminal messaging/lighting? This stuff is all so new and poorly understood, but that usually doesn’t stop these sacks of shit from moving full speed with implementing this kind of thing.

    It could be that certain individuals have unknown vulnerabilities that make them more susceptible to psychosis due to whatever manipulations are used to make people keep using the product. Maybe they’re doing some things to users that are harmful, but didn’t seem problematic during testing?

    Or equally as likely, they never even bothered to test it out, just started subliminally fucking with people’s brains, and now people are going haywire because a bunch of unethical shit heads believe they are the chosen elite who know what must be done to ensure society is able to achieve greatness. It just so happens that “what must be done,” also makes them a ton of money and harms people using their products.

    It’s so fucking absurd to watch the same people jamming unregulated AI and automation down our throats while simultaneously forcing traditionalism, and a legal system inspired by Catholic integralist belief on society.

    If you criticize the lack of regulations in the wild west of technology policy, or even suggest just using a little bit of fucking caution, then you’re trying to hold back progress.

    However, all non-tech related policy should be based on ancient traditions and biblical text with arbitrary rules and restrictions that only make sense and benefit the people enforcing the law.

    What a stupid and convoluted way to express you just don’t like evidence based policy or using critical thinking skills, and instead prefer to just navigate life by relying on the basic signals from your lizard brain. Feels good so keep moving towards, feels bad so run away, or feels scary so attack!

    Such is the reality of the chosen elite, steering us towards greatness.

    What’s really “funny” (in a we’re all doomed sort of way) is that while writing this all out, I realized the “chosen elite” controlling tech and policy actually perfectly embody the current problem with AI and bias.

    Rather than relying on intelligence to analyze a situation in the present and create the best and most appropriate response based on the information and evidence before them, they default to a set of preconceived rules written thousands of years ago with zero context for the current reality/environment and the problem at hand.

  • Bogasse@lemmy.ml
    link
    fedilink
    English
    arrow-up
    21
    ·
    2 days ago

    The idea that RAG “extends their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface we only let chatbots use it.
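As a toy illustration of that point (all names here are invented; production systems swap the word-overlap score for embedding search, but the overall shape is the same), RAG is retrieval plus prompt stuffing:

```python
# A tiny document store standing in for whatever corpus gets indexed.
documents = [
    "The capital of France is Paris.",
    "Python was created by Guido van Rossum.",
    "Lemmy is a federated link aggregator.",
]

def retrieve(query, k=1):
    # Crude relevance score: count overlapping lowercase words.
    query_words = set(query.lower().split())
    def score(doc):
        return len(query_words & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

def build_prompt(query):
    # "Extending memory" is just pasting search results above the question.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

`build_prompt("Who created Python")` stuffs the van Rossum sentence into the prompt; the model itself remembers nothing.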

  • aceshigh@lemmy.world
    link
    fedilink
    English
    arrow-up
    31
    arrow-down
    14
    ·
    edit-2
    1 day ago

    I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me, because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities, though…

    E: I use it to give me ideas that I then test out solo.

    • PushButton@lemmy.world
      link
      fedilink
      English
      arrow-up
      38
      arrow-down
      11
      ·
      edit-2
      1 day ago

      That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…

      • SkyeStarfall@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        1
        ·
        1 day ago

        I mean, sure, but that’s really easier said than done. Good luck getting good mental healthcare for cheap in the vast majority of places

    • Snapz@lemmy.world
      link
      fedilink
      English
      arrow-up
      32
      arrow-down
      2
      ·
      2 days ago

      This is very interesting… because the general saying is that AI is convincing to non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, and therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.

      • aceshigh@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 day ago

        I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

        • innermachine@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          3
          ·
          1 day ago

          If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick; if it’s a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of any.

      • Liberteez@lemm.ee
        link
        fedilink
        English
        arrow-up
        8
        ·
        2 days ago

        I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it’s just an inner dialogue enhancer

  • bbb@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    24
    arrow-down
    1
    ·
    2 days ago

    This article is written in such a heavy ChatGPT style that it’s hard to read. Asking a question and then immediately answering it? That’s AI-speak.

    • JackbyDev@programming.dev
      link
      fedilink
      English
      arrow-up
      13
      ·
      2 days ago

      Asking a question and then immediately answering it? That’s AI-speak.

      HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

    • sobchak@programming.dev
      link
      fedilink
      English
      arrow-up
      19
      ·
      2 days ago

      And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

      • bbb@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        20
        ·
        edit-2
        2 days ago

        “…” (Unicode U+2026 Horizontal Ellipsis) instead of “…” (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

        Edit: Huh. Lemmy automatically changed my three fulls stops to the Unicode character. I might be wrong on this one.

        • Mr. Satan@lemmy.zip
          link
          fedilink
          English
          arrow-up
          7
          ·
          2 days ago

            Am I… AI? I do use ellipses and (what I now see are) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. An em dash looks too long.

          However, that’s on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

          • tmpod@lemmy.pt
            link
            fedilink
            English
            arrow-up
            3
            ·
            1 day ago

            I’ve been getting into the habit of also using em/en dashes on the computer through the Compose key. Very convenient for typing arrows, inequality and other math signs, etc. I don’t use it for ellipsis because they’re not visually clearer nor shorter to type.

          • Sternhammer@aussie.zone
            link
            fedilink
            English
            arrow-up
            4
            ·
            1 day ago

            I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

            The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann’s book, *Stop Stealing Sheep & Find Out How Type Works*.

            • Mr. Satan@lemmy.zip
              link
              fedilink
              English
              arrow-up
              3
              ·
              1 day ago

              My language doesn’t really have hyphenated words or different dashes. It’s mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

              • Sternhammer@aussie.zone
                link
                fedilink
                English
                arrow-up
                1
                ·
                21 hours ago

                Sounds wonderful. I recently had my writing—which is liberally sprinkled with em-dashes—edited to add spaces to conform to the house style and this made me sad.

                I also feel sad that I failed to (ironically) mention the under-appreciated semicolon; punctuation that is not as adamant as a full stop but more assertive than a comma. I should use it more often.

                • Mr. Satan@lemmy.zip
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  14 hours ago

                  Lithuanian. We do have composite words, but we use vowels, if necessary, as connecting sounds. Otherwise dashes usually signify either dialog or explanations in a sentence (there’s more nuance, of course).

        • sqgl@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          4
          ·
          2 days ago

          Edit: Huh. Lemmy automatically changed my three fulls stops to the Unicode character.

          Not on my phone it didn’t. It looks as you intended it.