@dpwiz@qoto.org nothing you've said seems to contradict what I've said, no? :)
The really interesting question (and the one I am not smart enough to formally answer) is in what space it does its interpolation. My layman understanding is that all the recent advancements are thanks to the fact that the new architectures are able to coax the math into learning in a higher-level space than just the examples seen. So yeah, it does apply learned patterns to examples that fit them.
Problems begin when there is no known pattern that fits the task, which is exactly what innovation and creativity usually deal with :)
@me There is one, thanks for focusing on it in the reply ((=
My claim is that the model training induces meta-learning...
> That was the goal all along - even before LLMs were a thing. OpenAI and DeepMind were on the hunt for making a thing that can learn on the go and adapt. And it looks like we've got this by now.
... and that makes the exact content of its pre-training corpus irrelevant. As long as it can pick up knowledge and skills on the go, it is intelligent. And the notion of "interpolation" (even in an insanely high-dimensional space) is irrelevant.
Can we please collectively shut up about stochastic parrots, just regurgitating the data, following the training distribution, interpolation, etc etc?
@me > as long as those tasks are within the scope of what we, humans, normally do
This is what I'm trying to contest.
> Where I don't expect AI to succeed, at least not in its current form, is creating new knowledge ... Simply because there is no pattern to apply here, it would be "the first ever" kind of thing.
But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds against finding those by chance are astronomical.
> (a list of things that a model can't do)
That would not age well
What's really missing from your model (haha) is that the models don't work simply by unfolding the prompt ad infinitum. They're in a feedback loop with reality. What they lack in executive function we complement (for now) with the environment. And from what I've seen, the agents are getting closer to actually running as `while True: model.run(world)`. Just as you don't solve math with your cerebellum, the agents don't do "mere interpolation".
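Something like this, as a minimal sketch (every name here - `World`, `Model`, `observe`, `act` - is a hypothetical stand-in, not any real framework):

```python
# Minimal sketch of the agent loop above; all classes and methods
# are made up for illustration, not a real agent framework.

class World:
    """The environment the agent gets its feedback from."""
    def observe(self) -> str:
        return "current state, as text"

    def act(self, action: str) -> None:
        print(f"executing: {action}")

class Model:
    """An LLM plus a growing context: its stand-in for executive function."""
    def __init__(self) -> None:
        self.context: list[str] = []

    def run(self, world: World) -> None:
        self.context.append(world.observe())   # feedback from reality
        action = self.decide()                 # apply a learned pattern
        world.act(action)                      # close the loop

    def decide(self) -> str:
        return f"action conditioned on {len(self.context)} observations"

model, world = Model(), World()
for _ in range(3):   # in spirit: while True: model.run(world)
    model.run(world)
```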
> This is what I'm trying to contest.
Noted. Even if we don't agree in the end, a discussion is a learning tool.
> But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds against finding those by chance are astronomical.
To my knowledge, in all those examples humans were heavily involved. It's not like someone fed Knuth's books into an LLM and it suddenly started raining brand-new algorithms. Even DeepMind's AlphaZero and friends don't just do magic by themselves; rather, the humans put a lot of effort into creating an environment that makes solving a particular task possible. I wouldn't call it brute force, more like guided computation.
Machine learning has been a useful tool in all sorts of scientific fields for decades, so it only makes sense for the tool to get sharper over time.
> That would not age well :blobcoffee:
I mean... If I were making predictions, it wouldn't, but I am simply describing what I see today ;)
I feel like you are under the mistaken impression that I am trying to put AI into one neat pigeonhole, once and for all, thereby defining its boundaries forever. Which I'm not. I wouldn't dare extrapolate my already limited knowledge in this area into the unpredictable future (see what I did there?).
What I am really trying to do is to make sense of what the different flavors of AI really are today, before I even bother doing any predictions. I am exploring different mental models, thinking out loud, and this thread reflects one of them. Judging by your reaction, it's not a great one, or at least it's a controversial one. But that's fine, I'll just keep on thinking and polluting your feed with the results :P
@me Polluting the feeds is what we're here for 🥂
That, and the thinking ofc.
> What's really missing from your model (haha) is that the models don't work simply by unfolding the prompt ad infinitum. They're in a feedback loop with reality.
Hard to argue with that. I am aware that agents are a thing, but, quite honestly, I don't understand them well enough to have a useful opinion. From first principles it does seem like having a feedback loop from the universe is a very useful advantage we humans rely on in our quest for knowledge, so it makes sense that granting it to AI agents would produce something noteworthy. But that's about all I can say for now.
Well, that and that online learning seems like an underexplored technique in relation to LLMs.
@me The feedback loop is important, as it is the thing that makes multi-pass iterative improvement possible. An LLM-like model is a closed system, and sure, I'll grant that it will bounce around the middle of its probability landscape.
But giving it at least a scratchpad *allows* it to leverage the more powerful and abstract higher-level patterns it has learned. And *this* has no limits on novelty, just as being Turing-complete elevates a system from the level of a thermostat to all the complexity you can eat.
Of course, "allows" does not guarantee it would be used effectively. But at least it liberates the system from the tyranny of "mere interpolation".
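A toy sketch of what the scratchpad buys, with `llm` as a hypothetical stand-in for a real model call:

```python
# Toy sketch of multi-pass refinement via a scratchpad; `llm` is a
# made-up placeholder for an actual model call, not any real API.

def llm(prompt: str) -> str:
    return "draft based on: " + prompt[-40:].replace("\n", " ")  # pretend model call

def solve(task: str, passes: int = 3) -> str:
    scratchpad = ""   # external state that a closed, single-pass system lacks
    answer = ""
    for _ in range(passes):
        answer = llm(f"Task: {task}\nScratchpad:{scratchpad}\nRevise the answer.")
        critique = llm(f"Critique this:\n{answer}")
        scratchpad += f"\n- {answer}\n- {critique}"   # each pass builds on the last
    return answer

print(solve("invent a sorting algorithm"))
```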
@me > what is "intelligence"?
Intelligence is the ability to 1) learn new skills and 2) pick a fitting skill from your repertoire to solve a task.
Rocks don't have this. Thermostats don't have this. Cats have a little. Humans have this. AIs are starting to have it. ASIs would have it in spades.
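A toy illustration of that two-part definition (everything here is made up for the sake of the example):

```python
# Toy model of the definition: (1) learn new skills, (2) pick a
# fitting one for the task. All names are hypothetical.

from typing import Callable

class Agent:
    def __init__(self) -> None:
        self.skills: dict[str, Callable[[str], str]] = {}

    def learn(self, name: str, skill: Callable[[str], str]) -> None:
        self.skills[name] = skill                 # (1) acquire a new skill

    def solve(self, task: str) -> str:
        for name, skill in self.skills.items():   # (2) pick a fitting skill
            if name in task:
                return skill(task)
        return "no fitting skill: learn one first"

agent = Agent()
agent.learn("reverse", lambda t: t.split()[-1][::-1])
print(agent.solve("reverse abc"))   # -> cba

# A thermostat, by contrast, is one hard-wired rule: it can neither
# add to its repertoire nor choose among alternatives.
```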
@dpwiz@qoto.org but, at the end of the day, I'm just a random guy on the internet without any particular qualifications to talk about AI, besides the fact that I've been hanging around people who do have such qualifications and picked some stuff up along the way.
So, ignoring my opinions as uneducated ones is perfectly legitimate.
@me I don't buy this.
SWT appears to claim only that an LLM *can* do interpolation. But even if I'm wrong here and interpolation is the only thing LLMs do, it doesn't matter: they are capable of systematically using learned patterns to perform in-context learning and then produce solutions for unseen tasks. And this is a hallmark of intelligence.
Yes, novelty is hard. No, LLMs aren't just replicating old distributions.
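To make "solutions for unseen tasks" concrete, here's a toy few-shot probe with an invented transformation (reverse the word, then double the last letter) that is vanishingly unlikely to appear verbatim in any training corpus:

```python
# Toy in-context learning probe: the rule (reverse, then double the
# last letter) is shown only by example. The task is made up here,
# so completing it correctly can't be mere lookup.

prompt = """\
cat -> tacc
dog -> godd
sun -> nuss
moon ->"""

print(prompt)  # a model that answers "noomm" inferred the rule in-context
```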