[AI tools] are a great starting point for many tasks but not the endpoint.
- Dr Sean van der Merwe, Mathematical Statistics and Actuarial Science
Always remember that when using AI, you still need to adhere to the ethical standards and policies of the UFS. Submitting work generated entirely by AI without permission is considered academic misconduct. Here are a few tips on how to use AI responsibly:
If you are a lecturer, consider including information in your course guide on how your students may (or may not) use AI. For example, if you want them to cite AI tools, communicate this and provide the citation format you want them to use. If you use AI detection tools, do not rely on the percentage score alone: a score of 100% AI-generated content most likely indicates a problem, and a score of 0% most likely indicates none, but AI detection tools are not always consistent or accurate (yet).
If you are an author, you are solely responsible for ensuring the authenticity, validity, and integrity of the content of your manuscript. ASSAf recommends the following: "Because it is not the work of the authors, any use of content generated by an AI application must be appropriately referenced. To do otherwise is the equivalent of plagiarism. Authors are called upon to avoid including misinformation generated by an AI application, as this could have adverse consequences for them personally, and impact the quality of future research and global knowledge."
Authors may use tools and resources that aid in the preparation, methodology, data analysis, writing, review, and translation of their work, but only humans can be considered authors. Importantly, concealing the use of AI tools is unethical and violates the principles of transparency and honesty in research.
Transparency and reproducibility: The use of AI in research can reduce the reproducibility of results. AI tools may give different outputs for the same prompt at different times, and the underlying algorithms may be proprietary and not transparently documented, so it is not clear how decisions are made. Using AI with transparent and explainable algorithms can help mitigate these issues.
Privacy: In addition to general concerns about the privacy of personal user data when using AI, researchers should not input any confidential information or research data into AI tools without fully understanding the tool's privacy policy.
Here are some suggested best practices from Buriak et al.:
Further reading:
Buriak, J., Akinwande, D., Artzi, N., Brinker, C. J., Burrows, C., Chan, W. C. W., Chen, C., Chen, X., Chhowalla, M., Chi, L., Chueh, W., Crudden, C. M., Di Carlo, D., Glotzer, S. C., Hersam, M. C., Ho, D., Hu, T. Y., Huang, J., Javey, A., ... Ye, J. (2023). Best practices for using AI when writing scientific manuscripts. ACS Nano, 17(5), 4091–4093.
There are tools that help detect AI-generated text, but studies have found discrepancies between them, highlighting the growing challenge of detecting AI-generated content and its implications for plagiarism detection. A holistic approach to academic integrity issues is a better option than depending on AI detection tools alone: also include manual review and contextual considerations when plagiarism is suspected.
If you are looking for AI detection tools, here are a few: