AI-driven language processing tools (Large Language Models) such as ChatGPT have attracted a great deal of attention from the general public and the scientific community in a short period of time. Their areas of application are already numerous and will continue to grow. At the same time, these tools also confront scientists with new challenges in terms of research integrity. With this position paper, the Directorate informs PSI employees about the framework conditions for the use of AI-driven language processing tools at PSI. This position is explicitly limited to the use of AI in text generation and does not refer to the use of AI and ML technologies in scientific projects.
The general idea behind these guidelines for AI-based language processing tools is essentially no different from the existing guidelines for research integrity: as an author, you are responsible for your scientific research, your information gathering and your scientific text. This is also stated in the PSI position paper: “When using language processing tools/text generators in a scientific context, e.g., when producing publications, employees must comply with the applicable rules on research integrity at PSI and the respective research funding institution (in this context, in particular the rules on producing research results and citing sources).” Since AI-based language processing tools are not only becoming increasingly powerful but are also used in scientific publications to create content that may originate from unknown sources, the ETH Zurich and EU guidelines recommend a high degree of transparency regarding the use of AI in the creation of texts, e.g., in the form of a declaration of use.
Further information on the responsible use of ChatGPT for scientific work can be found on the Lib4RI website, the ETH Library website and in the Living Guidelines of the European Research Area.
Contact
This expert can be contacted for all research integrity issues in the ETH Domain.