I keep having to repeat this, but the conversation keeps going in a loop: LLMs aren’t entirely useless, and they’re not search engines. You shouldn’t ask one any question you don’t already know the answer to (or at least have the tools to verify).
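To make that concrete, here is a minimal sketch of the verify-first workflow. Everything in it is hypothetical: ask_llm is a stand-in for whichever client you actually use (here it returns a canned reply so the example runs end to end), and sort_list / trusted_sort_impl are names invented for the illustration. The point is that acceptance hinges on checks you control, not on trusting the model.

    # Minimal sketch of the "only ask what you can verify" rule.
    # ask_llm() is a hypothetical stand-in for whatever client you use;
    # here it returns a canned answer so the example runs end to end.
    def ask_llm(prompt: str) -> str:
        return "def sort_list(xs):\n    return sorted(xs)"

    def trusted_sort_impl():
        """Accept model-written code only after it passes checks
        whose answers we already know."""
        source = ask_llm("Write a Python function sort_list(xs) that "
                         "returns a new sorted list. Code only.")
        namespace = {}
        exec(source, namespace)  # running model output: sandbox this in real use
        candidate = namespace["sort_list"]
        # Known-answer tests: verification we control, no trust required.
        for case in ([], [3, 1, 2], [5, 5, -1], [9, 8, 7, 6]):
            assert candidate(list(case)) == sorted(case), f"failed on {case}"
        return candidate

    sort_list = trusted_sort_impl()
    print(sort_list([2, 0, 1]))  # [0, 1, 2]

The same shape applies to factual questions: the model proposes, and some independent check (a test suite, a primary source, a calculator) disposes.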
That is exactly the point: LLMs aim to simulate the chaotic best-guess flow of the human mind, to be conscious, or at least to present the appearance of thinking, and from that to access and process facts rather than to be a repository of facts in themselves. The accusation here that the model constructed a fact and then built on it misses the point; this is exactly how organic minds work. Human memory is constantly reworked and altered by fresh information and simple musings, and the new memory is taken as factual even when it is in large part fabricated, increasingly so over time. Many of our memories of past events bear only cursory fidelity to the actual details, to the point that they could fairly be called imagined. We still take these imagined memories as real and act upon them, exactly as the AI model has done here.