Oddly, "bullshit" qualifies as a technical term in this context. The authors argue that ChatGPT (and similar systems) emit bullshit.
They don't lie or hallucinate, because they don't know or believe anything. It's all just text modeling.
This type of AI is built to produce text that looks convincing, but it has no concept of truth or falsehood, fact or fiction.
When a person talks this way, we say they're bullshitting us. So it is with ChatGPT.