

It’s kind of indirectly related, but adding the query parameter udm=14
to the URL of your Google searches removes the AI summary at the top, and there are plugins for Firefox that do this for you. My hope for this WM project is that similar plugins will be possible for Wikipedia.
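If you want to try this without installing a plugin, here’s a rough sketch of what such a URL rewrite amounts to (the function name is mine, just for illustration):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def add_udm14(url):
    """Append udm=14 to a search URL; Google then shows the plain 'Web' results."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["udm"] = "14"
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_udm14("https://www.google.com/search?q=wikipedia"))
# → https://www.google.com/search?q=wikipedia&udm=14
```

The Firefox plugins essentially do this same rewrite on every search request before it leaves the browser.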
The annoying thing about these summaries is that even for someone who cares about the truth and about gathering actual information, rather than the fancy-autocomplete word salad that LLMs generate, it is easy to “fall for it” and end up reading the LLM summary anyway. Usually I catch myself, but I often waste some time reading the summary first. Recently the non-information was so egregiously wrong (it called a certain city in Israel non-apartheid) that I finally installed the udm=14 plugin.
In general, I think the only use cases for fancy autocomplete are the ones where you have a way to verify the answer. For example, if you need to write an email and can’t quite find the words, and an LLM generates something, you can tell whether it conveys what you’re trying to say just by reading it. Or, in the case of writing code, if you’ve written a bunch of tests beforehand expressing what the code needs to do, you can run those against the code the LLM generates and see if it works (if there’s a Dijkstra quote that comes to your mind reading this: high five, I’m thinking the same thing).
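The test-first idea looks something like this in miniature; the tests are written first, and the function body stands in for whatever an LLM might hand back (slugify is a made-up example, not anything from the original discussion):

```python
# Tests written *before* asking for an implementation, expressing what
# the code must do. The body of slugify stands in for LLM output.
def slugify(title):
    # pretend this implementation came from the model
    return "-".join(title.lower().split())

# Running the pre-written tests is our verification step.
assert slugify("Hello World") == "hello-world"
assert slugify("  extra   spaces ") == "extra-spaces"
print("all tests passed")
```

The point is that the human specifies the behaviour up front; the generated code only earns trust by passing checks the human already understood.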
I think it can be argued that Wikipedia articles satisfy this criterion. All you need to do to verify the summary is read the article. Will people do this? I can only speak for myself, and I know that, despite my best intentions, sometimes I won’t. If that’s anything to go by, I think these summaries will make the world a worse place.
Paraphrasing Dijkstra: “testing can show the presence of bugs, but never their absence”