not sure what you're saying here. are you claiming it can't do any sort of reasoning or open-ended problem solving?
i think we're fairly confident at this point that they can do structured reasoning to some degree. it's not flawless, in that it won't give you real or accurate information every time, but we're also figuring out the contexts in which that happens. as for spreading misinformation: anything it's intentionally prompted to get wrong is irrelevant to gauging intelligence, and unintentional errors don't necessarily mean it's unintelligent either.
there's a really good write-up on this as well.
https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post
there are a lot of ethical and technical aspects of LLMs that are severely underdeveloped, but that shouldn't surprise anyone. none of that makes it reasonable to disregard the absurd pace of progress this past decade, and the last few years especially. it's a good thing there's suddenly such a surge of attention toward developing these things.