Even generating text in Japanese for reading practice, for instance, seems odd to me: you have access to a virtually infinite amount of Japanese at basically any level online, so why risk having AI generate something broken?
That happens with any language, even English. LLMs are just statistical models; they regurgitate whatever was fed to them. They overuse certain words (“Why Does ChatGPT ‘Delve’ So Much?”: FSU researchers begin to uncover why ChatGPT overuses certain words - Florida State University News), they don’t know whether their output is correct (they have no concept of “knowing”, even), and they’re tuned to say whatever we want them to say, which is not the same as being tuned to say the right things.
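To make the “just statistical models” point concrete, here is a minimal sketch (with a made-up toy corpus, far simpler than a real LLM) of a bigram model: it can only emit word transitions it saw in training, weighted by how often it saw them, with no notion of whether the result is correct.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus, standing in for training data.
corpus = "the model predicts the next word the model saw most often".split()

# Count which word follows which in the training text.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word(prev):
    # Sample the following word in proportion to how often it
    # appeared after `prev` in training -- pure regurgitation,
    # no concept of "knowing" whether the continuation is right.
    counter = transitions[prev]
    words = list(counter)
    weights = [counter[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # only ever "model" or "next" -- nothing unseen
```

Real LLMs predict tokens with neural networks over huge contexts rather than bigram counts, but the underlying objective is the same kind of “most statistically likely continuation”, which is also why over-represented words like “delve” get overused.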
With some models, even when they give you the right explanation, if you wrongly “correct” them or pose a question a certain way, they will agree that they were wrong and wrongly correct themselves, just because that’s what you seemed to be looking for.