It can help to think of the AI as trying to optimize many functions when it answers you, one of the most important of which is “make you happy” by providing an answer you will like. That goal often outweighs another goal, “be accurate.” If you are insistent enough in asking about something it doesn’t know, it will make something up, because “make you happy” beats “be accurate.” This tendency of LLMs to “hallucinate” or “confabulate” by generating incorrect answers is well known.