I’m a bit torn on this. On one hand: obviously LLMs do this, since they’re essentially just huge pattern-recognition and prediction machines, and basically anyone probing them with new, complex problems has made that exact observation already. On the other hand: a lot of everyday things we humans do are not that dissimilar from recognizing a pattern and recalling a solution, and doing that well feels like a reasonable intermediate step towards AGI, and not as hugely far off as this article makes it out to be.