On LLMs and passphrases ...
The thought has occurred that given that large language models are trained on texts, which one presumes include not only Internet sources but also scanned-in copies of published books and articles ...
... there's a strong probability that any given published word sequence appears within such a corpus ...
... and that given even a small sampling of a passphrase which is itself drawn from a similar corpus ... LLMs should be really good at guessing the rest of it.
(How might it get a small sampling? Oh, say, shoulder-surfing, or acoustic signatures of typed characters, or leaks from inadvertently-entered phrases in the wrong dialogue, or other cues from context.)
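To make that concrete, here's a minimal sketch of the guessing step, in Python, assuming a local causal language model via the Hugging Face transformers library. The model choice, the leaked prefix, and the candidate list are all illustrative assumptions ... a toy ranking demo, not a working cracker:

```python
# Sketch: given a leaked prefix of a passphrase drawn from published
# text, rank candidate continuations by how probable a language model
# finds the full sequence. Assumes `pip install torch transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: any local causal LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability the model assigns to a word sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(input_ids=ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token
    return -out.loss.item() * (ids.shape[1] - 1)

leaked_prefix = "it was the best of times it was the"  # e.g. shoulder-surfed
candidates = [                                          # hypothetical guesses
    "worst of times",
    "rain in spain",
    "quick brown fox",
]
ranked = sorted(candidates,
                key=lambda c: sequence_logprob(leaked_prefix + " " + c),
                reverse=True)
print(ranked[0])  # the memorized continuation scores far above the rest
```

The point is simply that a memorized line scores orders of magnitude above unrelated word salad, so a guesser armed with a few leaked words can prioritize a very short candidate list.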
Upshot: if you're relying on a single phrase from any published set of works ... as a long secret key ... you might want to reassess your threat model.
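As rough back-of-the-envelope arithmetic on why ... the numbers below are illustrative assumptions (the ~1 bit/word conditional-surprisal figure especially), not measurements:

```python
# A random Diceware phrase carries full per-word entropy; a phrase a
# model has effectively memorized retains only its conditional
# surprisal, which for well-known text may be a bit or so per word.
import math

WORDS = 6
DICEWARE_LIST = 7776                              # standard Diceware list size

random_bits = WORDS * math.log2(DICEWARE_LIST)    # ~77.5 bits
memorized_bits = WORDS * 1.0                      # assumed ~1 bit per word

print(f"random phrase:    {random_bits:.1f} bits")
print(f"memorized phrase: {memorized_bits:.1f} bits")
# ~2**77 guesses versus ~2**6: the latter falls in milliseconds.
```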
(I don't know whether combining phrases from multiple sources would be an improvement ... though there are reasons to suspect that might also be at increased risk.)
(Oh, and by "you", I also mean "all the systems you're relying on, directly or indirectly". That would include, say, corporate, institutional, or governmental systems where someone's previously relied on what they'd thought would be a long and hence difficult-to-crack phrase.)
(I also suspect that state-level actors will have this capability first, but that the threshold will rapidly fall to far less-capable entities.)
(Many moons ago discussing security issues with a corporate user, I suggested that phrases from, oh, say, Alice in Wonderland would not be especially secure. Their passphrase was based on, of course, Jabberwocky.)