
AI-assisted Technology in Education

Responsible use of AI

[AI tools] are a great starting point for many tasks but not the endpoint.

- Dr Sean van der Merwe, Mathematical Statistics and Actuarial Science

Always remember that when using AI you still need to adhere to the ethical standards and policies of the UFS. Submitting work generated entirely by AI without permission is considered academic misconduct. Here are a few tips on how to use AI responsibly:

  • Stay updated on advancements in AI technology
  • Safeguard sensitive information and ensure that personal data shared with AI complies with data protection regulations
  • Critically evaluate and verify information generated by AI before using it in academic or administrative contexts

If you are a lecturer, consider including information in your course guide on how your students may (or may not) use AI. For example, if you want them to cite AI tools, communicate this and provide the citation format you want them to use. If you are using AI detection tools, don't rely on the percentage alone: a report of 100% AI-generated content most likely indicates a problem, and a report of 0% most likely does not, but AI detection tools are not always consistent or accurate (yet).

If you are an author, you are solely responsible for ensuring the authenticity, validity, and integrity of the content of your manuscript. ASSAf recommends the following: "Because it is not the work of the authors, any use of content generated by an AI application must be appropriately referenced. To do otherwise is the equivalent of plagiarism. Authors are called upon to avoid including misinformation generated by an AI application, as this could have adverse consequences for them personally, and impact the quality of future research and global knowledge."

Authors may use tools and resources that aid preparation, methodology, data analysis, writing support, review, and translation of their work. But only humans can be considered authors. Importantly, concealing the use of AI tools is unethical and violates the principles of transparency and honesty in research.

Transparency and reproducibility

The use of AI in research can reduce the reproducibility of results. AIs may give different outputs to the same prompts at different times. The algorithms used may be proprietary and not transparently documented, so it isn't clear how decisions are being made. Using AI with transparent and explainable algorithms can help mitigate these issues.
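
One practical way to support this is to keep a record of every AI exchange you rely on: the tool and model, the settings used (if the tool exposes them), the prompt, and the output. The sketch below is a minimal illustration of such a log in Python, using only the standard library; the file name, field names, and example values are placeholders rather than a prescribed UFS format.

    # Minimal sketch: record each AI exchange so it can be reported and re-checked later.
    # The field names and file name are illustrative placeholders, not a UFS standard.
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def log_ai_exchange(model, settings, prompt, output,
                        logfile=Path("ai_exchange_log.jsonl")):
        """Append one prompt/response pair, with its settings, to a JSON Lines log."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "settings": settings,  # e.g. temperature and seed, if the tool exposes them
            "prompt": prompt,
            "output": output,
        }
        with logfile.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    # Hypothetical example: store an exchange used while drafting a summary.
    log_ai_exchange(
        model="example-chat-model",
        settings={"temperature": 0, "seed": 1234},
        prompt="Summarise the limitations of method X in two sentences.",
        output="(paste the tool's response here)",
    )

A log like this makes it possible to show exactly what was asked and what was produced, even if the tool itself cannot guarantee the same output twice.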

Privacy

In addition to other concerns about the privacy of personal user data when using AI, researchers should be sure not to input any confidential information or research data into AI tools without fully understanding the privacy policy of the tool.

Recommended best practices

Here are some suggested best practices from Buriak et al.:

  • Acknowledge, in the Acknowledgements and Experimental Sections, your use of an AI bot/ChatGPT to prepare your manuscript. Clearly indicate which parts of the manuscript used the output of the language bot, and provide the prompts and questions, and/or transcript in the Supporting Information.
  • Remind your co-authors, and yourself, that the output of the ChatGPT model is merely a very early draft, at best. The output is incomplete, might contain incorrect information, and every sentence and statement must be considered critically. Check, check, and check again. And then check again.
  • Do not use text verbatim from ChatGPT. These are not your words. The bot might have also reused text from other sources, leading to inadvertent plagiarism.
  • Any citations recommended by an AI bot/ChatGPT need to be verified against the original literature, since the bot is known to generate erroneous citations (a minimal verification sketch follows this list).
  • Do not include ChatGPT or any other AI-based bot as a co-author. It cannot generate new ideas or compose a discussion based on new results, as that is our domain as humans. It is merely a tool, like many other programs, for helping with the formulation and writing of manuscripts.
  • ChatGPT cannot be held accountable for any statement or ethical breach. As it stands, all authors of a manuscript share this responsibility.
  • And most importantly, do not allow ChatGPT to squelch your creativity and deep thinking. "Use it to expand your horizons, and spark new ideas!" (Buriak et al., 2023, p. 4092).
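
As a concrete illustration of the citation check above, the sketch below queries the public Crossref API to see whether a DOI suggested by a chatbot actually exists and what title it is registered under. It is only a first filter, not a UFS-endorsed tool; the DOI shown is a placeholder, and you still need to read the original paper to confirm that it supports the claim being cited.

    # Minimal sketch: check whether a DOI suggested by a chatbot exists in Crossref.
    # A hit only confirms the DOI is real; verifying that the paper supports the
    # claim being cited still requires reading the original literature.
    import json
    import urllib.error
    import urllib.parse
    import urllib.request

    def lookup_doi(doi):
        """Query the public Crossref REST API for a DOI; return its metadata or None."""
        url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return json.load(response)["message"]
        except urllib.error.HTTPError:
            return None  # a 404 means Crossref has no record of this DOI

    record = lookup_doi("10.1000/placeholder-doi")  # replace with the DOI the bot suggested
    if record is None:
        print("DOI not found in Crossref - treat the suggested citation as suspect.")
    else:
        # Compare the registered title and journal with what the chatbot claimed.
        print(record.get("title"), record.get("container-title"))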

Further reading:

Buriak, J., Akinwande, D., Artzi, N., Brinker, C. J., Burrows, C., Chan, W. C. W., Chen, C., Chen, X., Chowalla, M., Chi, L., Chueh, W., Crudden, C. M., Di Carlo, D., Glotzer, S. C., Hersam, M. C., Ho, D., Hu, T. Y., Huang, J., Javey, A., ... Ye, J. (2023). Best practices for using AI when writing scientific manuscripts. ACS Nano, 17(5), 4091–4093.

AI detection tools

There are tools that can help you detect AI-generated text, but studies have found discrepancies between these tools, underscoring the growing challenge of detecting AI-generated content and its implications for plagiarism detection. Rather than depending on AI detection tools alone, take a holistic approach to academic integrity issues that also includes manual review and contextual considerations when plagiarism is suspected.

If you are looking for AI detection tools, here are a few:
