Artificial Intelligence including generative AI

Welcome

What is Generative AI?

Generative Artificial Intelligence (AI) is a machine learning technology that interprets natural language inputs (called prompts) and generates natural language outputs (called completions). Users interact with generative AI tools and systems in a question-and-answer style “conversation”.

Australian National University (2023) Chat GPT and other generative AI tools: What ANU academics need to know (PDF, 103KB), and AI Essentials

Academic Skills and the University Library have updated their guidance for students in Best Practice When Using Generative Artificial Intelligence (AI). The document outlines how generative AI should be used with integrity and supports students in using it appropriately within their learning. It is a major update containing very valuable information.

What Generative AI systems are publicly available?

There are many Generative AI systems available.

The University does not recommend any specific system. The University will continue to “help students develop skills around the appropriate and responsible use of AI tools as part of an ongoing conversation about academic integrity, ethics and professional practice”.

Australian National University (2023) Chat GPT and other generative AI tools: What ANU academics need to know (PDF, 103KB)

Generative AI for teaching and learning

Are students allowed to use ChatGPT?

Individual course convenors will guide students on what uses of ChatGPT may or may not be permitted in a specific course.

“ChatGPT is one of various AI language models and other AI tools that students can access. It would be ineffectual to ban access to ChatGPT. We also recognise that the use of AI tools by students can support their learning. The application of AI tools in some professions is growing and students need to be able to use them effectively. The ANU plans to work with staff so they can help students develop skills around the appropriate and responsible use of AI tools as part of an ongoing conversation about academic integrity, ethics and professional practice”.

Australian National University (2023) Chat GPT and other generative AI tools: What ANU academics need to know (PDF, 103KB)

What guidance does the University provide about generative AI for teaching and learning?

For more information, see the Centre for Teaching and Learning blog and AI Essentials, which explains how to make use of supported artificial intelligence tools at ANU while speaking to students about best practice.

Turnitin

Turnitin recently turned on a preview of its AI writing detection tool.

The tool is not part of the University's regular contract with Turnitin and will not remain visible beyond 1 January 2024 unless we decide to upgrade our licence to include it.

The tool is available and visible to staff, but not to students, who only see the regular originality score.

As the Turnitin AI writing detection tool was introduced part-way through a teaching semester, we had no opportunity to evaluate it or consider how we might properly prepare staff and students around its use. Hence, the Turnitin AI writing detection tool will not be used at ANU for the purposes of pursuing academic integrity matters at this time.

While the tool remains available, we can learn more about it and assess its efficacy, talk to our peers across the sector, and make an informed judgement about whether it has a place in our academic integrity toolkit.

Privacy and generative AI

Under the University's Privacy policy and the Privacy Act 1988, personal information such as names should not be provided to third parties, including generative AI tools such as ChatGPT.

Different tools have different privacy policies, and it is important to understand a tool's privacy policy before using it.

Generative AI systems are generally developed outside Australia and may therefore be subject to the privacy law of the country in which they were developed rather than Australian law.

Further information

Further information about OpenAI's data use policies, and how data is used to improve model performance, is available on the OpenAI website. This includes details about how to opt out of data use for model training for OpenAI tools.

For any questions or concerns, please contact the Senior Privacy Officer on (02) 6125 4679 or privacy@anu.edu.au

Example privacy policies

Below are examples of privacy policies from a selection of Generative AI tools.

We encourage you to review the privacy policy of any tool you use.

Not all providers will have the same degree of transparency in how they use your data. If you are unable to find a privacy policy or a usage agreement for the specific tool, it may be best to find another tool with clearer terms around their data usage, storage, and retention.

ChatGPT

If an individual signs up for and uses ChatGPT or other systems, their information is recorded.

The ChatGPT Privacy policy outlines the use that will be made of personal information and the rights that you have.

Will you use my conversations for training?

Yes. Your conversations may be reviewed by our AI trainers to improve our systems.

Can you delete my data?

Yes, please follow the data deletion process.

Can you delete specific prompts?

No, we are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations.

This information is from the ChatGPT website and is current as of 3 July 2023.

Bard

Please do not include information that can be used to identify you or others in your Bard conversations.

What data is collected? How is it used?

When you interact with Bard, Google collects your conversations, your location, your feedback and your usage information. That data helps us to provide, improve and develop Google products, services and machine-learning technologies, as explained in the Google Privacy Policy.

Read our privacy and security principles to better understand how we keep our users' data private, safe and secure.

Who has access to my Bard conversations?

We take your privacy seriously and we do not sell your personal information to anyone. To help Bard improve while protecting your privacy, we select a subset of conversations and use automated tools to help remove personally identifiable information. These sample conversations may be reviewed by trained reviewers and kept for up to three years, separately from your Google Account. 

Can I delete my data from my Google Account?

Yes, there's a link to managing your data in Bard. You can always turn off saving your Bard activity and you can always delete your Bard activity from your account at myactivity.google.com/product/bard. Even when Bard activity is off, your conversations will be saved with your account for a short period of time to allow us to provide the service and process any feedback. This activity will not show up in your Bard activity.

This information is from the Bard website and is current as of 3 July 2023.

Wordtune 

If an individual signs up for and uses Wordtune, their information is recorded.

What information we collect, why we collect it, and how it is used

The Privacy Policy includes a detailed breakdown of what details are collected, when and how they are used by Wordtune. 

How to delete your account

You have the right to request the erasure/deletion of your Personal Data (e.g. from our records). 

Log files and information collected automatically 

We use log files. The information inside the log files includes internet protocol (IP) addresses, type of browser, Internet Service Provider (ISP), date/time stamp, referring/exit pages, clicked pages and any other information your browser may send to us. We use such information to analyze trends, administer the Website, track users’ movement around the Website, and gather demographic information. 

This information is from the Wordtune website and is current as of 6 July 2023. 

Textero 

If an individual signs up for and uses Textero, their information is recorded.

Information we collect about you 

When using our Services and paying for them, and/or visiting our website, or member communities, we may automatically collect information about you, for example: 

  • Public information from your Facebook account 
  • Information from the payment provider  
  • Information about what type of device you use 
  • Cookies 

Why do we collect personal data? 

  • To provide services to you under the Terms 
  • To provide information about related or other products or services of ours, or our partners’, that you might be interested in within a reasonable time afterward, if you are already an existing user 
  • To provide information to you about services you have purchased from us, or related products or services 
  • For legal reasons, for example, if you have entered into a contract with us 
  • To provide information to you about our services if you have consented to receive it. 

Your rights 

The policy also includes an extensive list of users’ rights, including:  

‘The right to Erasure: This is sometimes called ‘the right to be forgotten’. If you want us to erase all your personal data, and we do not have any legal reason to continue to process and hold it, please contact our Data Protection Officer: dpo@nuovostep.com’ 

This information is from the Textero website and is current as of 6 July 2023. 

Jasper 

If an individual signs up for and uses Jasper, their information is recorded.  

Will the content generated in Jasper Chat be saved? 

Yes! Like all other AI outputs with Jasper, chat history is available in your AI Outputs History section. You can also separate your content using multiple Chat threads. 

Request erasure of your personal data  

You have the right to ask Us to delete or remove Personal Data when there is no good reason for us to continue processing it. 

This information is from the Jasper website and is current as of 3 July 2023.

Writesonic 

Retention 

Processor will retain Controller Data as long as Customer deems it necessary for the Permitted Purpose or as required by Applicable Data Protection Law. Upon the termination of this DPA, or at Customer's written request, Processor will either destroy or return the Controller Data to Customer, unless legal obligations necessitate the storage of Controller Data. 

Confidentiality 

Processor shall limit access to Controller Data to its personnel who need access to fulfil Processor's obligations under the Agreement. Processor will take commercially reasonable steps to ensure the reliability of any Processor personnel engaged in the Processing of Controller Data. 

This information is from the Writesonic website and is current as of 3 July 2023.

FAQs

How did the system get my personal data? 

Training sets used by Generative AI systems will include material scraped from the internet. 

This could include personal information on ANU websites or from places such as LinkedIn. When registering for those systems, personal information is collected, and the data entered by individuals is generally recorded and placed into the training data set. 

When you sign up to use a generative AI system, it will generally require you to provide some personal details like name and contact information. Depending on the company’s privacy policy, this information may be accessible to third parties. 
 

Can I ask for my data to be deleted? 

This will depend on the privacy policy of a specific AI tool you have used. Before signing up to use any system, read the privacy policy in detail to see how your information will be used. 

If a company does not state whether they will be willing to delete your data, it is best to assume that they will not and take steps to protect yourself. 


How can I prevent my personal data appearing in an AI system? 

At present there are limited controls on offer to remove or prevent your information from being accessed by large data mining AI systems. In countries where data protection laws already apply, such as in Europe, there are some mechanisms in place to make data removal requests but these are not guaranteed.  

Read more at Tech Crunch, accessed 7 July 2023. 

Publishing and generative AI

Publishing

Publishing

Generative AI is becoming an increasingly significant issue for scholarly publishing. While AI brings great opportunities, there are many issues to consider. Publishers have adopted a range of policy approaches and practices. When submitting manuscripts, authors should consider both compliance with publisher guidelines and the permissions they grant publishers (and others) to reuse their scholarly content in AI applications.

Key issues

Key issues include:

1. Ethical and legal concerns affecting the authorship of works. Publishers are concerned about whether manuscripts are genuinely the product of the claimed authors. While AI cannot hold copyright, acknowledging the use of AI tools is critical for an ethical approach to publishing. Increasing emphasis is being placed on tools that seek, as far as possible, to ensure genuine human authorship and ethical behaviour.

2. Quality and integrity: AI may also be used in peer review. Ethical issues include concerns about whether using AI for review introduces biases or favours popular scholarship over originality. Publishers are increasingly seeking to use AI to manage processes such as review. Because the technical workings of AI are not transparent, this introduces risks that may affect the integrity of scholarly publishing.

3. Equity and accessibility: While AI has the potential to democratise access to and participation in scholarly publishing through translation and other software that assists with manuscript writing, bias and unintended misinterpretation of text remain significant risks.

4. Transparency and reproducibility: The creation of manuscripts using AI may result in misinterpretation or incorrect translation of data, reducing the transparency and reproducibility of research and potentially affecting its accuracy and impact.

5. Economic and rights impact: Authors sign licences with publishers that often pass all rights to the publisher, or make content available under a Creative Commons licence where reuse and derivatives are permitted without the author's permission or knowledge. Publishers can sell or supply content to AI companies (for example, https://www.booksandpublishing.com.au/articles/2024/08/05/256559/wiley-oup-confirm-ai-partnerships/) without authors' knowledge.

Recommendations

 

  • If you intend to use generative AI tools in work you wish to publish, ensure your target journal or publisher allows the use of AI-generated text and images in manuscript submissions. 
  • If you do not wish your work to be included in a generative AI system carefully read publisher advice and research the options to retain your rights.

Relevant resources

  • Australian Research Council Policy on Use of Generative Artificial Intelligence in the ARC’s grants programs
    Summary:
    The policy outlines the acceptable use of generative AI tools in the preparation and submission of grant applications to ensure integrity and transparency.
    • Disclosure: Applicants must disclose the use of generative AI in their applications.
    • Accuracy: Information generated by AI must be verified for accuracy and reliability.
    • Originality: AI-generated content must be original and not infringe on intellectual property rights.
    • Ethical Use: The use of AI should align with ethical guidelines and not mislead or deceive.
    • Compliance: Non-compliance with these guidelines may result in the rejection of the application or other penalties.

The policy aims to balance the innovative use of AI with the need for maintaining high standards of research integrity.

Industry standards websites and policies

  • Committee on Publication Ethics (COPE)  
    COPE is committed to educating and supporting editors, publishers, universities, research institutes, and all those involved in publication ethics. Members include editors, publishers, universities and research institutes, and related organisations and individuals involved in publication ethics.
    They advise “Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.”

     
  • Australian Publishers Association (APA) 

The Association urges governments to ensure that any legislative or policy developments in relation to AI have regard to the following core principles, outlined in detail below:

    • Policies must be underpinned by a clearly defined ethical framework
    • Transparency is key
    • Ensure appropriate incentives and protections for creators and rights-holders
    • Policy settings must be balanced.

The STM (International Association of Scientific, Technical and Medical Publishers) released a white paper in 2023 titled “Generative AI in Scholarly Communications: Ethical and Practical Guidelines for the Use of Generative AI in the Publication Process.”

It covers:

    • Ethical Considerations: Emphasises the importance of maintaining integrity, transparency, and accountability when using GenAI tools.
    • Intellectual Property: Addresses concerns about copyright and the use of AI-generated content, ensuring that intellectual property rights are respected.
    • Practical Applications: Discusses how GenAI can be used to enhance various aspects of the publication process, including manuscript preparation, peer review, and editorial workflows.
    • Best Practices: Offers recommendations for authors, editorial teams, reviewers, and vendors to ensure responsible and ethical use of GenAI.

Recommendations:

    • Transparency: Clear disclosure of AI use in the publication process.
    • Verification: Ensuring the accuracy and reliability of AI-generated content.
    • Ethical Use: Aligning AI applications with ethical standards to avoid misleading or deceptive practices.

Publisher guidelines and policies

Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was utilized to generate sections of this Work (including text, tables, graphs, code, data, citations, etc.). If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.

Basic word processing systems that recommend and insert replacement text, perform spelling or grammar checks and corrections, or systems that do language translations are to be considered exceptions to this disclosure requirement and are generally permitted and need not be disclosed in the Work. As the line between Generative AI tools and basic word processing systems like MS-Word or Grammarly becomes blurred, this Policy will be updated.

An FAQ is available at https://www.acm.org/publications/policies/frequently-asked-questions.

The guidance outlines the responsibilities of reviewers and authors, and a detailed FAQ is included on the page. Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should only be used to improve the readability and language of the work.

Authors should disclose in their manuscript the use of AI and AI-assisted technologies and a statement will appear in the published work.

Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author.

First, AI tools/large language models cannot be credited with authorship of any Emerald publication. Second, any use of AI tools within the development of an Emerald publication must be flagged by the author(s) within the paper, chapter or case study.

The submission of content created by generative AI is discouraged, unless it is part of formal research design or methods. Examples of content creation include writing the manuscript text, generating other content in the manuscript, as well as using the AI to generate ideas that are presented in the submitted manuscript. Software that checks for spelling, offers synonyms, makes grammar suggestions or is used to translate your own words into English does not generate new content, and we do not consider it generative AI. 

The use of content generated by artificial intelligence (AI) in a paper (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section of any paper submitted to an IEEE publication. The AI system used shall be identified, and specific sections of the paper that use AI-generated content shall be identified and accompanied by a brief explanation regarding the level at which the AI system was used to generate the content.

The use of AI systems for editing and grammar enhancement is common practice and, as such, is generally outside the intent of the above policy. In this case, disclosure as noted above is recommended.

Natural language processing tools driven by artificial intelligence (AI) do not qualify as authors, and the Journal will screen for them in author lists. The use of AI (for example, to help generate content, write code, or process data) should be disclosed both in cover letters to editors and in the Methods or Acknowledgements section of manuscripts. Please see the COPE position statement on Authorship and AI for more details.

Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Use of an LLM should be properly documented in the Methods section (and, if a Methods section is not available, in a suitable alternative part) of the manuscript. The use of an LLM (or other AI tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone.

1. Authorship:

AI does not qualify as an author and should not be used to undertake primary authorial responsibilities, such as generating arguments and scientific insights, writing analysis, or drawing conclusions.

Authors must receive written permission from OUP to deliver AI-generated content (including the collection and analysis of data or the production of graphical elements of the text) as part of their submission and are obliged to replace AI-generated content with human generated content should OUP deem that appropriate.

2. Accountability:

Authors are responsible for the accuracy, integrity, and originality of their works, as well as any AI generated content these may include. Any use of AI must be consistent with the Press’s mission and publishing values, with all that entails in terms of quality, integrity, and trust.

3. Disclosure:

The use of any AI in content generation or preparation must be disclosed to your commissioning editor. It must also be appropriately cited in-text and/or in notes (such as footnotes, endnotes) according to the guidelines of the relevant manual of style.

PLOS expects that articles should report the listed authors’ own work and ideas. Any contributions made by other sources must be clearly and correctly attributed.

Contributions by artificial intelligence (AI) tools and technologies to a study or to an article’s contents must be clearly reported in a dedicated section of the Methods, or in the Acknowledgements section for article types lacking a Methods section. This section should include the name(s) of any tools used, a description of how the authors used the tool(s) and evaluated the validity of the tool’s outputs, and a clear statement of which aspects of the study, article contents, data, or supporting files were affected/generated by AI tool usage.

In cases where Large Language Model (LLM) AI tools or technologies contribute to generating text content for a PLOS submission, the article’s authors are responsible for ensuring that:

  • the content is accurate and valid,
  • there are no concerns about potential plagiarism,
  • all relevant sources are cited, and
  • all statements in the article reporting hypotheses, interpretations, results, conclusions, limitations, and implications of the study represent the authors’ own ideas.

The use of AI tools and technologies to fabricate or otherwise misrepresent primary research data is unacceptable.

The use of AI tools that can produce content such as generating references, text, images or any other form of content must be disclosed when used by authors or reviewers. Authors should cite original sources, rather than Generative AI tools as primary sources within the references. If your submission was primarily or partially generated using AI, this must be disclosed upon submission so the Editorial team can evaluate the content generated. 

Authors are required to follow Sage guidelines, and in particular to:

  • Clearly indicate the use of language models in the manuscript, including which model was used and for what purpose. Please use the methods or acknowledgements section, as appropriate.
  • Verify the accuracy, validity, and appropriateness of the content and any citations generated by language models and correct any errors, biases or inconsistencies.
  • Be conscious of the potential for plagiarism where the LLM may have reproduced substantial text from other sources. Check the original sources to be sure you are not plagiarising someone else’s work.
  • Be conscious of the potential for fabrication where the LLM may have generated false content, including getting facts wrong, or generating citations that don’t exist. Ensure you have verified all claims in your article prior to submission.
  • Please note that AI bots such as ChatGPT should not be listed as an author on your submission.   

AI-assisted technologies [such as large language models (LLMs), chatbots, and image creators] do not meet the Science journals’ criteria for authorship and therefore may not be listed as authors or coauthors, nor may sources cited in Science journal content be authored or coauthored by AI tools. Authors who use AI-assisted technologies as components of their research study or as aids in the writing or presentation of the manuscript should note this in the cover letter and in the acknowledgments section of the manuscript. Detailed information should be provided in the methods section: The full prompt used in the production of the work, as well as the AI tool and its version, should be disclosed. Authors are accountable for the accuracy of the work and for ensuring that there is no plagiarism. They must also ensure that all sources are appropriately cited and should carefully review the work to guard against bias that may be introduced by AI. Editors may decline to move forward with manuscripts if AI is used inappropriately. Reviewers may not use AI technology in generating or writing their reviews because this could breach the confidentiality of the manuscript.

AI-generated images and other multimedia are not permitted in the Science journals without explicit permission from the editors. Exceptions may be granted in certain situations—e.g., for images and/or videos in manuscripts specifically about AI and/or machine learning. Such exceptions will be evaluated on a case-by-case basis and should be disclosed at the time of submission. The Science journals recognize that this area is rapidly developing, and our position on AI-generated multimedia may change with the evolution of copyright law and industry standards on ethical use.

Authors are accountable for the originality, validity, and integrity of the content of their submissions. In choosing to use Generative AI tools, journal authors are expected to do so responsibly and in accordance with our journal editorial policies on authorship and principles of publishing ethics and book authors in accordance with our book publishing guidelines. This includes reviewing the outputs of any Generative AI tools and confirming content accuracy.  

Taylor & Francis supports the responsible use of Generative AI tools that respect high standards of data security, confidentiality, and copyright protection in cases such as: 

  • Idea generation and idea exploration 
  • Language improvement 
  • Interactive online search with LLM-enhanced search engines 
  • Literature classification 
  • Coding assistance 

Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research and validation, and is created by the author. 

Generative AI tools must not be listed as an author.

Authors must clearly acknowledge within the article or book any use of Generative AI tools.

If an author has used a GenAI tool to develop any portion of a manuscript, its use must be described, transparently and in detail, in the Methods section (or via a disclosure or within the Acknowledgements section, as applicable). The author is fully responsible for the accuracy of any information provided by the tool and for correctly referencing any supporting work on which that information depends. GenAI tools must not be used to create, alter or manipulate original research data and results. Tools that are used to improve spelling, grammar, and general editing are not included in the scope of these guidelines. The final decision about whether use of a GenAI tool is appropriate or permissible in the circumstances of a submitted manuscript or a published article lies with the journal’s editor or other party responsible for the publication’s editorial policy.

GenAI tools should be used only on a limited basis in connection with peer review. 

Referencing and generative AI

How should generative AI be cited?

Generative AI content presents unique citation challenges: it is not created by a person, and the content is generally not recoverable by the reader.

Different referencing styles have different requirements for citing generative AI content. Some provide no specific requirements, in which case authors should follow the format for a personal communication.

As a general guideline, you should:

  • cite a generative AI tool whenever you paraphrase, quote, or incorporate into your own work any content (whether text, image, data, or other) that was created by it
  • acknowledge all functional uses of the tool (like editing your prose or translating words) in a note, your text, or another suitable location
  • take care to vet the secondary sources it cites.

MLA Style Center, 2023, https://style.mla.org/citing-generative-ai/

 

The guidance provided is current as of 3 May 2023.

What to do if AI has provided references you can’t locate?

Citations from any source should always be checked and verified, as this is part of the practice of Academic Integrity.

You should verify any references provided by any generative AI system.

If references can’t be located please ask the Library or your course convenor or supervisor.

AGLC

In text

Two options are available:

  1. Output from ChatGPT, OpenAI to [First name Surname], 3 May 2023.
  2. Output from ChatGPT, OpenAI to Marshall Lee, 23 April 2023. The output was generated in response to the prompt, ‘[Detail question/prompt here]’: see Appendix A. 

Reference list

Placed in 'Other' category in bibliography.

OpenAI, ChatGPT to [First name Surname], Output, 3 May 2023

Appendix

Appendix created for the detail of the prompt.

APA

In text

(OpenAI, 2023).

Reference list

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

Further detail is provided by the APA in the blog post How to cite ChatGPT.

Chicago

In text

Three options are available:

  1. Numbered Footnote
    Text generated by ChatGPT, May 3, 2023, OpenAI, https://chat.openai.com/chat.
     
  2. Notation
    ChatGPT, response to “Specific Question asked,” May 3, 2023, OpenAI
     
  3. Author-date 
    (ChatGPT, May 3, 2023).

Reference list

Do not cite ChatGPT in a bibliography or reference list.

Further detail is provided in the Chicago Manual of Style online FAQ.

Harvard

To be referenced using private communication format.

In text

(OpenAI ChatGPT, personal communication, 3 May 2023) 
or
OpenAI ChatGPT (personal communication, 3 May 2023)

Reference

Entry not required in reference list.

IEEE

To be referenced using private communication format.

In text

Specific question asked (OpenAI's ChatGPT, private communication, 3 May 2023).

Reference

Entry not required in reference list.

MLA

The MLA Style Center has developed guidelines for the citation of a generative AI interaction.

The guidelines state you should:

  • cite a generative AI tool whenever you paraphrase, quote, or incorporate into your own work any content (whether text, image, data, or other) that was created by it
  • acknowledge all functional uses of the tool (like editing your prose or translating words) in a note, your text, or another suitable location
  • take care to vet the secondary sources it cites.

In text

(specific question asked)

Reference list

“Specific question asked” prompt. ChatGPT, OpenAI, 3 May. 2023, chat.openai.com/chat.

Visit the MLA website for specific examples and guidance.

Vancouver

To be referenced using personal communication format.

In text

(OpenAI’s ChatGPT, response to input from author, 3 May, 2023).

Reference

Entry not required in reference list.

Further reading

Support and guidance

Contact us

Getting in touch with us is easy. We're the information specialists, and we're here to help you!

Centre for Learning and Teaching (CLT)

ANU staff can get support from the ANU Centre for Learning and Teaching (CLT).

CLT provides expert advice and support in delivering innovative learning and teaching services through collaborative partnerships across the University.

For more information and support, browse the CLT website, subscribe to their newsletter, read blog posts, or email clt@anu.edu.au

Page Contact: ANU Library Communication Team