One thing to remember about machine learning (and, by extension, AI) is that it is, at the end of the day, a technique for complex function approximation. No more, no less. Think back to the Stone–Weierstrass theorem from your mathematical analysis course, just on a different scale.
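
For reference, the relevant special case (the classical Weierstrass approximation theorem) says that polynomials can approximate any continuous function on a closed interval arbitrarily well:

```latex
% Weierstrass approximation theorem: polynomials are dense in C([a,b])
\forall f \in C([a,b]),\ \forall \varepsilon > 0,\
\exists\, p \in \mathbb{R}[x] \ \text{such that}\
\sup_{x \in [a,b]} \lvert f(x) - p(x) \rvert < \varepsilon
```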

It is hard to imagine writing down an analytical definition of the "human speech" function, but, amazingly, we can computationally arrive at something that behaves very similarly, and we call our latest take on it "Large Language Models". The impressive thing about this is how unimpressive it really is for what it does.

When looking through that lens, it feels kind of silly to ascribe real intelligence to such models, since they are merely imitating the original phenomenon. But it does provoke some reflection on what the existence of such an approximation tells us about the original.

I think it also indicates the limitations of the current generation of AI techniques: they can achieve great (perhaps arbitrarily great) accuracy when interpolating, that is, when working within the region of information space that is well represented in the training dataset.
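
A toy sketch of what I mean, in plain numpy (the function, sample range, and polynomial degree are all arbitrary illustration choices, nothing LLM-specific):

```python
import numpy as np

# "Training data": samples of an unknown function on [0, 2*pi]
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train)

# Fit a degree-9 polynomial -- a stand-in for a flexible approximator
coeffs = np.polynomial.polynomial.polyfit(x_train, y_train, deg=9)

def model(x):
    return np.polynomial.polynomial.polyval(x, coeffs)

# Interpolation: inside the training range, the fit is excellent
x_in = np.linspace(0, 2 * np.pi, 100)
print("max error inside training range:",
      np.max(np.abs(model(x_in) - np.sin(x_in))))

# Extrapolation: just outside the range, the error blows up,
# because nothing constrained the polynomial out there
x_out = np.linspace(2 * np.pi, 3 * np.pi, 100)
print("max error outside training range:",
      np.max(np.abs(model(x_out) - np.sin(x_out))))
```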

However, it's much harder to make assertions about extrapolation accuracy: ideas and knowledge the model hasn't seen before, never mind ideas completely novel to humanity as a whole. To me this is a hint as to why AI is actually pretty bad at creativity. It's not so much that it can't produce novel output; it's that its extrapolations are rather unlikely to match what humans consider creative.

Does this make AI useless for art, or novel research, or other forms of innovation? Not at all, I don't think. For one, all innovation consists of 1% actually new ideas and 99% hard and boring implementation/testing/experimentation work, and any help with that 99% could still be a massive help. And even within the 1%, the random flailing of AI models can inspire humans toward actually useful ideas :)

All of that is to say: AI is just a better brush, and it's silly to pretend it doesn't exist.

I should say that being curious and open-minded about AI also means being mindful and curious about its negative sides: the ethics of training on "public" datasets of ambiguous legality, the displacement of real art by low-quality slop, the environmental impact of the datacenters required to run the models, and so on.

Like many things in life, neither extreme of "AI is an absolute evil" and "AI is the answer to all problems" is a valid position, and the real challenge is in finding the balance.

I think I'm slowly coming around on the whole generative AI thing. Much like most of the folks in my feed, my first reaction was "hype is bad for your brain" and "this is a solution looking for a problem". Both of which remain true. However...

I caught myself mentally defending the "AI is a dumb trend" position just because that was my instinctive reaction, which is as much of a fallacy as the opposite. So I think, before I restore my "criticize AI bros" privileges, I should learn first-hand what AI can and can't do.

More than that, I think it is becoming self-evident that AI tools can be valuable productivity boosters, just not in the ways that marketers would have you believe. In the same way that an LSP plugin lets me spend more cognitive power on the semantics of the code than on hunting down missing semicolons, AI-based completion can help with the "boring" parts and let me focus on the high-level design and the problem space. It's a smarter paint brush, but the result is still determined by whoever wields it.

Mind you, I am still responsible for making sure that the semicolons are in all the right places, and that the code is good enough for me to put my name next to it.

I realized that I was so burned out by all the FOMO marketing that I had almost forgotten how to be curious about things. So yeah, picking up this new paint brush and learning first-hand what it can and can't do is by far not the worst way to spend my time.