Many of the arguments about LLMs seem to involve us talking past one another. The insistence that they are “just autocomplete” is demonstrably true, but it often remains too abstract to persuade people; I have tried to be less abstract here. Meanwhile, most proponents at some point break down in frustration and say “just try it and you’ll see!” Their argument is phenomenological: Doesn’t it feel smart and capable? Don’t you feel like you’re getting value out of it? Working faster? This, too, is demonstrably true: many people do feel that way. The problem comes when we mistake the feeling of using a chatbot (writing this input and getting that output feels like talking to an intelligent person) for its actual inner mechanism (next-word prediction). As with calculators, many different internal mechanisms can produce indistinguishable output.
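To make “next-word prediction” a little less abstract, here is a minimal sketch of the loop an autoregressive model runs: look at the words so far, predict a likely next word, append it, and repeat. The toy corpus and bigram lookup table below are illustrative assumptions standing in for a real model’s learned parameters; only the shape of the loop is the point.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for web-scale training data (illustrative only).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count how often each word follows each other word (a bigram model).
# A real LLM replaces this lookup table with a neural network, but the
# generation loop below has the same shape: predict, append, repeat.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt, length=8):
    """Greedily extend `prompt` one word at a time."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        # Pick the single most likely next word (real systems usually sample).
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the cat"))
```

Running it produces plausible-looking but repetitive text (“the cat sat on the dog sat on the dog”); a real model differs enormously in scale and output quality, not in the basic shape of this loop.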