Regarding AI and LLMs – I’ve recently been banging on about how there really isn’t anything in an AI system based on LLMs and transformers which can ‘understand’ – there isn’t a black box with ‘understanding’ written on the outside. Understanding isn’t a thing they can do.
The big push in the near future is into world models, and I see this as a huge, expensive diversion with disappointing results. This is largely because we ourselves don’t understand the world using world models that are anywhere near impressive, or that actually work well, and they’re certainly not complete. If you see an organisation embarking on building a world model (of some domain, or even worse, of everything), you can basically count them out: they’ll be distracted for the foreseeable future, heading toward becoming skint, and at the end of it all, well, there’s no end. It’s basically the same trap as doing an ontology: endless revision of how things connect, which things sit on top of other things, which outside things formerly thought not to be included should now be shoehorned in, and so on. It never ends, it’s a distraction, and it’ll never pay off.
We don’t use world models that are in any way complete or reliable; any world model in our heads is subject to frequent revision (not frequent enough, in my opinion). And the very name gives you the picture of something literally shaped like a world: spherical, solid, complete, with the whole arrangement of the world’s things inside it.
I don’t think it works that way. It’s more like tree roots growing into the ground: they don’t touch all of the soil, only bits of it. Or like the veins in blue cheese, growing along the fault lines but not everywhere. Or like mycelia, covering a lot of space but not everywhere, never filling in all the gaps.
Humans building a world model will want to make it complete. Worse, if it’s left to technical people like computer programmers, they simply won’t stop until it’s complete or they die, whichever comes first (spoiler: it’s die). It will tie up so much thinking, resource, time and money that you may as well count out any organisation that embarks on such a voyage into idiocy: they’re no longer a factor, they no longer compete, they’re distracted.
Our internal world models are fine threads of connection which form our ‘understanding’, or at least facilitate it; but so often we understand something incorrectly and have to revise it later in life. A hard-coded world model won’t offer that flexibility.
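To make that contrast concrete, here’s a toy sketch in Python. It is purely illustrative, nothing like how any real AI system works, and every name in it is made up: the point is only that a ‘wispy’ world model is a sparse store of weak beliefs that covers just the bits it has actually touched and can be overwritten when experience contradicts it, whereas a hard-coded ontology fixes its relations up front.

```python
# Toy sketch only -- not a real system. A "wispy" world model here is just a
# sparse set of weak, revisable beliefs, like roots that only touch some of the soil.

class WispyWorldModel:
    """Sparse store of revisable beliefs: (subject, relation) -> (value, confidence)."""

    def __init__(self):
        self.beliefs = {}  # only the bits we've actually encountered; everything else is a gap

    def observe(self, subject, relation, value, strength=0.3):
        key = (subject, relation)
        old = self.beliefs.get(key)
        if old is None or old[0] == value:
            # First contact, or a confirmation: strengthen the thread a little.
            conf = min(1.0, (old[1] if old else 0.0) + strength)
            self.beliefs[key] = (value, conf)
        else:
            # Contradiction: weaken the old thread, and replace it if it snaps.
            conf = old[1] - strength
            self.beliefs[key] = (value, strength) if conf <= 0 else (old[0], conf)

    def recall(self, subject, relation):
        return self.beliefs.get((subject, relation))  # None for all the gaps


m = WispyWorldModel()
m.observe("tomato", "is_a", "vegetable")   # an early, wrong understanding
m.observe("tomato", "is_a", "fruit")       # revised later in life
m.observe("tomato", "is_a", "fruit")
print(m.recall("tomato", "is_a"))          # ('fruit', 0.6) once revised and reinforced
print(m.recall("tomato", "colour"))        # None -- the model never filled that bit in
```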
Anyway, we don’t deliberately build our own world models; they just happen. AI computers should be building their own world models in the same way: weak and tenuous, wispy and revisable, grown through real-world interaction. We’re definitely not there yet. #AI #LLM #WorldModel #Understanding