Contra LeCun on “Autoregressive LLMs are doomed”
This is a linkpost for https://www.lesswrong.com/posts/zyPaqXgFzqHkQfccq/contra-lecun-on-autoregressive-llms-are-doomed.

I answer LeCun's arguments against LLMs as presented in this lesswrong comment. I haven't searched thoroughly or double-checked LeCun's writings on the topic in detail. My argument is at the suggestive hand-waving stage.

Introduction

Current large language models (LLMs) like GPT-x are autoregressive. "Autoregressive" means that the core of the system is a function \(f\) (implemented […]
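The autoregressive loop can be sketched as follows. This is a minimal illustration only: the toy predictor `f` below is hypothetical, whereas in a real LLM \(f\) is a large neural network producing a probability distribution over next tokens.

```python
def f(tokens):
    """Toy next-token predictor (hypothetical): maps the sequence so far
    to a single next token. A real LLM's f outputs a distribution instead."""
    return (tokens[-1] + 1) % 10

def generate(prompt, n_steps):
    """Autoregressive generation: repeatedly append f's prediction,
    so each new token is conditioned on all previously generated tokens."""
    tokens = list(prompt)
    for _ in range(n_steps):
        tokens.append(f(tokens))
    return tokens

print(generate([3], 4))  # -> [3, 4, 5, 6, 7]
```

The key structural point is that `f` is applied to its own past outputs: errors, if any, feed back into subsequent inputs, which is the property LeCun's argument turns on.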