ChatGPT is great, but it's not foolproof. One example of this concerns disease treatments. I wrote some time ago about my Dupuytren's disease. Radiation therapy is one form of treatment available to those experiencing the disease in its early stages. However, since this is not a common form of treatment, ChatGPT is not able to discuss it.
If a professor were to ask students for forms of treatment for Dupuytren's disease and a student used ChatGPT, then radiation therapy would likely not appear among their answers on a test.
I found an oncology center in the town where I live that can provide such treatment. Yet when I discussed this with ChatGPT, it would only insist that the therapy was not an option.
Citing references in essays is also not currently possible for ChatGPT. Hive users will downvote articles that appear to be plagiarized or fraudulent when sources aren't available for viewing. However, I believe that ChatGPT will eventually be able to provide references, and in virtually every citation format available.
How then could professors combat plagiarism once ChatGPT reaches its final form? You could require students to write their essays during class time, block access to the ChatGPT website, and monitor their keystrokes. Unfortunately, I think you'd have to employ some fairly draconian measures.
RE: ChatGPT vs. University Professors -- Not a Fair Fight (Not Even Close)