Guidance Note: Using Generative Artificial Intelligence in Research
Creators
- Macquarie University
Description
This is the university policy/guidance note for the use of Generative AI and LLMs at Macquarie University, Australia. The original source is https://policies.mq.edu.au/download.php?associated=1&id=768&version=1
Here is the summary:
Generative AI (Artificial Intelligence, Large Language Models) offers the unprecedented ability to
manipulate and generate text and media in response to arbitrary instructions. These new capabilities offer
opportunities and risks to researchers. This document will discuss responsible use, risk mitigation, and
appropriate use of these tools. The technological landscape changes quickly and new tools are released
almost weekly – this guide offers general advice which should be applied thoughtfully.
Regardless of any tools or technologies used, now or in the future, everyone at Macquarie University is
responsible for ensuring their research meets the expectations of the Australian Code for the Responsible
Conduct of Research and the Macquarie University Code for the Responsible Conduct of Research
(2018).
Generative AI must be used with caution, and its use is currently inappropriate in some research
processes because Generative AI services, including ChatGPT:
• cannot meet the requirements for authorship
• can create authoritative-sounding outputs that may be incorrect, incomplete, or biased
• could inappropriately capture sensitive data (including, but not limited to, personal information).
Researchers must not use Generative AI:
• to perform peer review activities
• to generate substantive content of research outputs, including HDR theses
• for writing the critical components of human ethics, animal ethics, or biosafety applications.
Researchers must exercise care in the use of Generative AI in other aspects of their research and should:
i. only do so with the written agreement of their research collaborators (and HDR supervisors)
ii. review and consider the terms of service/license of the platforms used and any models used
iii. consider the current issues and understandings around copyright and intellectual property
iv. mitigate risks around the insecure storage or unauthorised re-use of sensitive data
v. exert oversight and control when using the technology
vi. carefully and critically review the output and results created by Generative AI
vii. take responsibility for the integrity of the content altered or created using Generative AI
viii. disclose the use of Generative AI to potential publishers and in disseminated research outputs
ix. read and follow the policies of publishers and funders regarding the use of Generative AI.
Files
- Generative AI in Research Guidance Note_v1.0_Oct 2023.pdf (316.3 kB, md5:e4925e59669d8760967ed00c8bf18cfe)
Additional details
Dates
- Issued: 2023-10 (Version 1 on Policy Central)