Francois Heinderyckx<p>“[E]ven if an L.L.M. was trained exclusively on the best peer-reviewed science, it would still be capable only of generating plausible output, and ‘plausible’ is not necessarily the same as ‘true.’ And now A.I.-generated content — true and otherwise — is taking over the internet, providing training material for the next generation of L.L.M.s, a sludge-generating machine feeding on its own sludge,” writes Zeynep Tufekci.</p><p>GIFT LINK:<br><a href="https://www.nytimes.com/2025/07/11/opinion/ai-grok-x-llm.html?unlocked_article_code=1.V08.uU3v.2SwLPT_JhmKi&smid=url-share" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">nytimes.com/2025/07/11/opinion</span><span class="invisible">/ai-grok-x-llm.html?unlocked_article_code=1.V08.uU3v.2SwLPT_JhmKi&smid=url-share</span></a></p><p><a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/Enshitification" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Enshitification</span></a></p>