Every AI innovation creates the feeling that it will keep going right up to the singularity. But every time in the past, there has been good progress that eventually petered out.
It happened with symbolic AI, expert systems, fuzzy logic, the neural networks of the '80s and '90s, Lisp machines, and "fifth-generation computers".
I think we will see this with LLMs.
They are amazing and very useful for a lot of things. But they have issues that will be hard to get around: learning new things quickly, hallucinations, and not having as much useful creativity as the more creative humans.
Also, LLMs are already trained on nearly the entire collection of recorded human knowledge. At some point, we may run into the problem of not being able to make LLMs larger, or of not having enough data to determine the parameters well. For example, I can take 10 data points and fit them perfectly with a 9th-degree (or higher) polynomial, but the fit will have no predictive ability. The LLM equivalent would just be more hallucinations.
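A minimal sketch of that overfitting analogy, assuming hypothetical data and using numpy's polyfit (none of this is from the original post, just an illustration):

```python
import numpy as np

# Hypothetical example: 10 noisy samples of a simple underlying trend.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=10)

# A 9th-degree polynomial has 10 parameters, so it can pass through
# all 10 training points almost exactly (training error near zero).
coeffs = np.polyfit(x_train, y_train, deg=9)
fit = np.poly1d(coeffs)
print("max training error:", np.max(np.abs(fit(x_train) - y_train)))

# But the fit chases the noise rather than the trend, and even slightly
# outside the training range it diverges badly, so it has essentially
# no predictive ability on new data.
x_test = np.linspace(-0.2, 1.2, 101)
y_test = np.sin(2 * np.pi * x_test)
print("max test error:", np.max(np.abs(fit(x_test) - y_test)))
```

The training error is tiny while the test error is enormous: perfectly memorized data, no generalization.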
We are assuming the step after LLMs will come with no time gap. I'm not sure that will be the case.