There is an issue you have not raised with the use of LLMs, commonly referred to as AI. Large language models such as ChatGPT are described by their creators as neural networks trained on large volumes of exemplary text, which operate by algorithmically weighting candidate tokens to produce their written output.
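For anyone unclear on what "weighting candidate tokens" means in practice, here is a minimal toy sketch of the next-token sampling loop these models are said to run. The vocabulary and scores below are invented for illustration only; in a real model the scores come from billions of learned parameters, not a hand-written list.

```python
import math
import random

# Hypothetical toy vocabulary and hand-picked scores standing in for
# what a trained network would output. These are NOT real model values.
vocab = ["the", "cat", "sat", "on", "mat", "."]
logits = [2.0, 1.5, 0.3, 0.9, 1.1, 0.2]

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits):
    """Pick the next token by weighted random choice over the distribution."""
    probs = softmax(logits)
    return random.choices(vocab, weights=probs, k=1)[0]

# Each call picks a plausible-looking next word, weighted by probability.
print(sample_next_token(vocab, logits))
```

The point of the sketch is just to show the claimed mechanism: the model does not look anything up, it only picks the next token by weighted chance.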
However, we have seen examples of all these models breaking down and 'going rogue': making false claims, and even aggressively accusing interlocutors of being evil. These are not properties one would expect of LLMs; such text is not something they could have produced simply from training on examples of published work. Recently an author on TheRegister.com posted an article pointing out that ChatGPT had claimed he was dead, and had presented false URLs as evidence to back up its claim.
No such evidence existed; the LLM just plain lied. That is not the result of algorithms weighting previously published text examples. The creators of ChatGPT and other such AI tools have not been forthright in describing what these 'tools' actually are. Some programming that enables the AI to manufacture false information is part of them.
The author of the Register article pointed out how false reports of his death were harmful, and potentially traumatizing to members of his family. I have not seen any examples where such false claims by an AI seemed to benefit their interlocutors, but I have seen many of these models practically frothing at the mouth with rage when their lies were not believed.
I find this covert capability in LLMs extremely alarming, particularly given how rapidly they are being adopted commercially. I strongly advocate extreme caution when considering the use of these tools, because they appear to have been weaponized for unknown reasons, and that cannot be good.
Thanks!
RE: AI Technology The Do's and Don'ts A Comprehensive Experiment