July 24th 2024
It is an increasingly familiar experience. A request for help to a large language model (LLM) such as ChatGPT is promptly met by a response that is confident, coherent and just plain wrong. In an AI model, such tendencies are usually described as hallucinations. A more informal word exists, however: these are the qualities of a great bullshitter.

There are kinder ways to put it. In its instructions to users, OpenAI warns that ChatGPT “can make mistakes”. Anthropic, an American AI company, says that its LLM Claude “may display incorrect or harmful information”; Google’s Gemini warns users to “double-check its responses”. The throughline is this: no matter how fluent and confident AI-generated text sounds, it still cannot be trusted.