• 2 Posts
  • 11 Comments
Joined 9 months ago
Cake day: March 2nd, 2024



  • LLMs are basically just good pattern matchers. But just like how A* search can find a better path than a human can by breaking the problem down into simple steps, so too can an LLM make progress on an unsolved problem if it’s used properly and combined with a formal reasoning engine.

    I’m going to be real with you: the insight behind almost every new mathematical idea builds on the math that came before. Nothing is as truly original as AI detractors seem to believe.

    By “does some reasoning steps,” OpenAI presumably just means invoking the LLM iteratively so that it can review its own output before giving a final answer. It’s not a new idea.
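
    That kind of iterative self-review loop can be sketched in a few lines. This is a toy illustration, not OpenAI’s actual method; `ask_llm` is a hypothetical stand-in for a single model call, not any real API:

    ```python
    def ask_llm(prompt: str) -> str:
        """Hypothetical stand-in for one LLM call; a real system would hit a model endpoint."""
        return "draft answer to: " + prompt

    def answer_with_review(question: str, rounds: int = 2) -> str:
        """Draft an answer, then repeatedly ask the model to critique and revise its own draft."""
        draft = ask_llm(question)
        for _ in range(rounds):
            critique = ask_llm(f"Find mistakes in this answer:\n{draft}")
            draft = ask_llm(
                f"Question: {question}\nDraft: {draft}\n"
                f"Critique: {critique}\nWrite an improved answer."
            )
        return draft
    ```

    The only moving part is that the model’s output is fed back in as input before anything is shown to the user.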


  • I do agree that grad students don’t exactly live in luxury, and frequently develop mental health crises. But their contributions and insight are what power their labs. Profs often have to spend so much time teaching and chasing grants that they can’t do much real research. Academia overall is in a sad state.

    But Tao is a superstar, and a charismatic blogger. I’d be disappointed to learn he mistreats his grad students. (I don’t know if he even has any tbh)






  • jsomae@lemmy.ml OP to Privacy@lemmy.ml · “I'm losing faith”
    3 months ago (edited)

    Where did you get the idea that GPT-4 is capable of this? These are concerns for 10+ years from now, assuming AI makes the same strides it has in the past 10 years, which is not guaranteed at all.

    I think there are probably 3-5 big leaps still required, on the order of the invention of transformer models, deep learning, etc., before we have superintelligence.

    Btw, humans are also bad at arithmetic; that’s why we have calculators. If you don’t know that LLMs are combined with RAG, LangChain (or similar), and so on, you clearly don’t understand the scope of the problem. A superintelligence doesn’t need access to anything in particular to destroy the world except, say, email or chat.
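
    The core of RAG is simpler than it sounds: retrieve the documents most relevant to the query and prepend them to the prompt. Here’s a toy sketch; word overlap stands in for a real embedding model, and none of these names are any real library’s API:

    ```python
    def score(query: str, doc: str) -> int:
        """Count words shared between query and document (toy relevance metric)."""
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
        """Return the k documents sharing the most words with the query."""
        return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

    def build_prompt(query: str, docs: list[str]) -> str:
        """Stuff retrieved context into the prompt the LLM actually sees."""
        context = "\n".join(retrieve(query, docs))
        return f"Context:\n{context}\n\nQuestion: {query}"
    ```

    Production systems swap the overlap score for vector embeddings and a proper index, but the retrieve-then-prompt shape is the same.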



  • jsomae@lemmy.ml OP to Privacy@lemmy.ml · “I'm losing faith”
    3 months ago

    AI could kill everyone, though IMO it most likely won’t; I’d put it at maybe a 10% chance. That’s still very bad, though. Despite the fact that Ilya Sutskever, Geoff Hinton, MIRI, heck even Elon Musk have expressed varying degrees of concern about this, the risk is largely dismissed because it sounds too much like science fiction. If only science fiction writers had avoided the topic!