• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 13th, 2024



  • There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

    Incorrect. You might want to take an information theory class before speaking on subjects like this.
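
    To make the information-theory point concrete (my toy example, nothing from the thread): the same “action” can be written as code or stored as pure data, so the code/data distinction isn’t fundamental:

    ```python
    # The "action" XOR implemented two ways: as code, and as pure data.
    def xor_fn(a: int, b: int) -> int:
        return a ^ b

    # The exact same operation as a lookup table -- nothing but data.
    xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    for a in (0, 1):
        for b in (0, 1):
            # The "algorithm" and the "data" are interchangeable here.
            assert xor_fn(a, b) == xor_table[(a, b)]
    ```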

    I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago.

    Lmao yup totally, it’s not like this type of research currently gets huge funding at universities and institutions or anything like that 😂 it’s a dead research field because it’s already “settled”. (You’re wrong 🤭)

    LLMs are just tools not sentient or verging on sentient

    Correct. No one claimed they are “sentient” (you actually mean “sapient”, not “sentient”, but it’s fine because people commonly mix these terms up. Sentience is about the physical senses: if you can respond to stimuli from your environment, you’re sentient; if you can “I think, therefore I am”, you’re sapient). And no, LLMs are not sapient either, and sapience has nothing to do with a neural network’s ability to mathematically reason or use logic; you’re just moving the goalpost. But at least you moved it far enough to actually be correct?



  • To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with “grab it”), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.

    Instead, we found that Claude plans ahead. Before starting the second line, it began “thinking” of potential on-topic words that would rhyme with “grab it”. Then, with these plans in mind, it writes a line to end with the planned word.

    🙃 actually read the research?
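
    To make the finding concrete, here’s a toy sketch (mine, not Anthropic’s method; the word lists are made up) contrasting the two strategies: forcing a rhyme only at the last word vs. planning the rhyme word first and writing toward it:

    ```python
    import random

    # Made-up candidate pool of words rhyming with "grab it"; illustrative only.
    RHYMES = ["rabbit", "habit"]

    def improvise_then_rhyme():
        # The circuit the researchers *expected*: write word-by-word,
        # and only force a rhyme into the final slot.
        line = ["and", "quickly", "went", "to", "feed", "the"]
        line.append(random.choice(RHYMES))  # rhyme chosen only at the end
        return " ".join(line)

    def plan_then_write():
        # What the tracing suggested instead: pick the rhyme word first,
        # then build the rest of the line toward it so it also makes sense.
        target = random.choice(RHYMES)
        openers = {
            "rabbit": ["his", "hunger", "was", "like", "a", "starving"],
            "habit":  ["eating", "carrots", "was", "his", "favorite"],
        }
        return " ".join(openers[target] + [target])

    print(improvise_then_rhyme())
    print(plan_then_write())
    ```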




  • I don’t want to brigade, so I’ll put my thoughts here. The linked comment is making the same mistake about self-preservation that people make when they ask an LLM to “show its work” or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.

    Just as it’s not actually an AI assistant, but is trained and prompted to output the text an AI assistant would be expected to respond with, if it’s expected that it would pursue self-preservation, then it will output text that matches that. Its output is always “fake”.

    That doesn’t mean there isn’t potentially a real element of self-preservation, but you’d need to dig and trace through the network to show it, not rely on the text output.
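
    For a rough idea of what “digging through the network” looks like in practice, here’s a minimal sketch (illustrative only; real interpretability work like Anthropic’s attribution graphs goes far deeper) that captures a model’s hidden activations with PyTorch forward hooks instead of trusting its text output:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    activations = {}

    def save(name):
        def hook(module, inputs, output):
            # GPT-2 blocks return a tuple; the hidden states are element 0.
            activations[name] = output[0].detach()
        return hook

    for i, block in enumerate(model.transformer.h):
        block.register_forward_hook(save(f"block_{i}"))

    inputs = tok("He saw a carrot and had to grab it,", return_tensors="pt")
    with torch.no_grad():
        model(**inputs)

    # One (batch, seq_len, hidden_dim) tensor per layer -- the raw material
    # interpretability work analyzes, instead of the model's text output.
    print(activations["block_0"].shape)
    ```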


  • No, you’re misunderstanding the findings. It does show that LLMs do not explain their reasoning when asked, which makes sense and is expected. They do not have access to their inner workings and just generate a response that “sounds” right, but tracing their internal logic shows they operate differently from what they claim when asked. You can’t ask an LLM to explain its own reasoning. But the article shows the progress they’ve made with tracing under the hood, and the surprising results they found about how it is able to do things like plan ahead, which defeats the misconception that it is just “autocomplete”.
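
    As a taste of what such tracing can look like, here’s a minimal sketch using the well-known “logit lens” trick (nostalgebraist’s technique, not the paper’s method) that reads out the model’s intermediate “guess” at each layer, rather than asking it anything:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)

    # Project each layer's hidden state (last position) through the final
    # layer norm and the unembedding to see the "guess in progress".
    for layer, h in enumerate(out.hidden_states):
        logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
        print(layer, repr(tok.decode(logits.argmax(-1))))
    ```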






  • It’s true that LLMs aren’t “aware” of what internal steps they are taking, so asking an LLM how it reasoned out an answer will just produce text that statistically sounds right based on its training set, but saying something like “they can never reason” is provably false.

    It’s obvious that you have a bias and desperately want reality to confirm it, but there’s been significant research and progress in tracing the internals of LLMs, which shows logic, planning, and reasoning.

    EDIT: lol you can downvote me, but it doesn’t change evidence-based research

    It’d be impressive if the environmental toll of making the matrices and using them wasn’t critically bad.

    Developing a AAA video game has a higher carbon footprint than training an LLM, and running inference uses significantly less power than playing that same video game.
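
    Back-of-envelope, with loudly assumed numbers (don’t quote these; they’re only there to show the orders of magnitude involved, and real figures vary widely by model, hardware, and study):

    ```python
    # All numbers below are assumptions for illustration, not measurements.
    gpu_watts_gaming = 350          # assumed: high-end GPU under full load
    hours_played = 2
    gaming_kwh = gpu_watts_gaming * hours_played / 1000   # 0.70 kWh

    wh_per_query = 3                # assumed: one chat query (estimates ~0.3-3 Wh)
    equivalent_queries = gaming_kwh * 1000 / wh_per_query

    print(f"{gaming_kwh:.2f} kWh of gaming ~= {equivalent_queries:.0f} chat queries")
    ```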