Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • utopiah@lemmy.world · 6 months ago

    “scientific accuracy is anathema to AI marketing”

    Even though I agree with the sentiment, in this context “hallucination” actually is the scientific term. It may be poorly chosen, but in LLM circles, if you use the word hallucination, the vast majority of people will understand precisely what you mean: not a programming error, not a bad dataset, but a language model working exactly as designed, generating sentences that are syntactically correct and roughly thematically coherent, yet factually incorrect.
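    To make that concrete, here’s a minimal toy sketch (plain Python, made-up probabilities, not any real model or API) of why this happens: next-token generation optimizes for likelihood, not truth, so the most probable continuation can be a perfectly fluent falsehood.

    ```python
    # Toy bigram "language model" with hypothetical probabilities.
    # It only knows which words tend to follow which; it has no notion of facts.
    bigram = {
        ("The", "capital"): {"of": 1.0},
        ("capital", "of"): {"France": 1.0},
        ("of", "France"): {"is": 1.0},
        ("France", "is"): {"Lyon": 0.6, "Paris": 0.4},  # fluent, coherent... wrong
    }

    def generate(prompt: str, steps: int = 4) -> str:
        tokens = prompt.split()
        for _ in range(steps):
            dist = bigram.get(tuple(tokens[-2:]))
            if not dist:
                break
            # Greedy decoding: always pick the highest-probability next token.
            tokens.append(max(dist, key=dist.get))
        return " ".join(tokens)

    print(generate("The capital"))
    # -> "The capital of France is Lyon"
    # Syntactically correct, thematically coherent, factually incorrect:
    # the model did exactly what it was built to do.
    ```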

    So I obviously don’t want to support marketing BS, in AI or elsewhere, but here, sadly, the word matches the scientific naming.

    PS: FWIW I believe I made a similar critique a few months, or maybe even years, ago. IMHO what’s more important, arguably, is questioning the value of LLMs themselves, but that might not be as evident to the many people who are benefiting from the current buzz.