• anarchrist@lemmy.dbzer0.com
    2 months ago

LLMs do not reason; they probabilistically determine the next word based on the words you prompt them with. The most perfect implementation of “AI” was the T9 predictive text system for dumb phones, CMV.
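The next-word mechanism described above can be sketched as a toy bigram model (a deliberately simplified illustration with an invented corpus; real LLMs use neural networks over tokens, not word counts, but the "sample the next word from a probability distribution" idea is the same):

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which,
# then sample the next word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Probabilistically pick the next word given the previous one.
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # one of: "cat", "mat", "fish"
```

T9 worked on the same principle at the level of keypresses and dictionary frequencies; the difference with LLMs is scale and context length, not the basic prediction loop.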

    • doodledup@lemmy.world
      2 months ago

      And you’re just a fancy electro-chemical reaction.

      Who says that an LLM with complete access to the sensory world could not pass the Turing Test?

      • MonkderVierte@lemmy.ml
        2 months ago

It’s already established that the Turing Test only measures how well something can simulate human behavior. It has nothing to do with intelligence.

        • sugar_in_your_tea@sh.itjust.works
          2 months ago

          Exactly. You could ask a human a lot of questions, build an “AI” that literally just looks up answers to common questions, and have it pass the Turing test, provided the pre-answered questions cover whatever the human proctoring the “test” asks.
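The lookup-table “AI” described above can be sketched like this (a hypothetical example; the canned answers and fallback line are invented for illustration):

```python
# Canned question -> answer table. This "passes" a Turing test only if
# the proctor happens to ask questions that are already in the table.
canned_answers = {
    "what is your name?": "I'm Alex, nice to meet you.",
    "how are you?": "A bit tired, but fine. You?",
    "what is 2+2?": "4, unless this is a trick question.",
}

def respond(question):
    # Normalize the question and look it up; fall back to a vague
    # conversational dodge for anything not in the table.
    return canned_answers.get(question.strip().lower(),
                              "Hm, interesting. What do you think?")

print(respond("What is 2+2?"))    # "4, unless this is a trick question."
print(respond("Explain qualia"))  # falls back to the dodge
```

This is essentially ELIZA-style chatbot design, which is exactly why fooling a human judge measures the judge's questions as much as the machine's intelligence.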

          If we take it a step further and ask why an LLM can’t be “conscious,” there are plenty of studies by experts that address that, so I’ll refer OP to those.