Artificial Intelligence
The potential uses and limitations of artificial intelligence (AI) are a topic of discussion at the university library. Of the many possible applications, some operate in the background of library work (e.g. the use of AI to improve searches (German) or to create accompanying texts for digital cultural objects (German)), while others are directly visible to library users. This page summarizes the activities and services relating to artificial intelligence at the University of Kassel as well as other, supra-regional services.
Recommendations for action
These recommendations are excerpts from the "Guidelines for the use of generative AI systems in studying and teaching at the HAWK (V1)" (German), created by the AI task force of the HAWK - University of Applied Sciences and Arts Hildesheim/Holzminden/Göttingen, as well as from the website of the HAWK Service Center for Quality in Teaching.
All users of AI tools should ask themselves the following questions:
- Does it need to be labeled (e.g. in text production)?
The labeling of AI-generated content is common and is explicitly required in some contexts. In scientific work, good scientific practice requires that the support provided by AI be declared transparently.
- Could plagiarism have been inadvertently created or copyrighted content used?
It is possible that the content generated by an AI is too similar to an existing original. An example can be found in "The AI Index 2024 Annual Report" by the AI Index Steering Committee of Stanford University (Chapter 3, Section 5. "LLMs can output copyrighted material").
- Could the information passed on to the AI theoretically be published?
With regard to information security, data protection and privacy, university members must respect the privacy of others when using generative AI and comply with all applicable data protection regulations, including EU legislation on artificial intelligence. The use of personal and proprietary data as well as sensitive information of third parties in prompting must be avoided and is only permitted with the express consent of the persons concerned. Examples of personal data and sensitive information include real names, dates of birth, contact details, health data and sensitive research data, which may also be contained in theses, reports or assessments.
- Can responsibility be assumed for the result?
Regardless of whether a scientific paper was created with or without generative AI, the person creating it is responsible for its accuracy and quality. Accordingly, as with scientific sources, the adoption of AI-generated content is the responsibility of the person using it, who must label such adopted content accordingly. Assuming responsibility ensures that academic integrity is maintained and that generative AI tools serve as a supporting resource in the research and teaching/learning process without replacing the critical reflection and judgment of the user, which are essential for academic work.
- How can the results of AI be verified?
It is well known that AI tools can produce erroneous results ("hallucinating", or rather "bullshitting"; see Hicks, M. T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5). It is therefore necessary to examine the results critically and, if necessary, consult other sources.
- Could there be a systematic bias?
Most providers of LLMs (Large Language Models) do not disclose which data they used to train their models, as stated in "The AI Index 2024 Annual Report" by the AI Index Steering Committee of Stanford University (Chapter 3, Section 6, "AI developers score low on transparency, with consequences for research"). It can therefore neither be ruled out that AI produces biased results, nor can this be easily evaluated. Particular attention is therefore required in scientific research, as potential biases can not only distort research results but also foster illusions of understanding (see e.g. Messeri, L., Crockett, M. J. Artificial intelligence and illusions of understanding in scientific research. Nature 627, 49–58 (2024). https://doi.org/10.1038/s41586-024-07146-0).
Further links
- AI Act Explorer of the EU
- AI report - By the European Digital Education Hub's Squad on artificial intelligence in education
- Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research (OECD)
- Virtual Competence Center for Artificial Intelligence and Scientific Work (VK:KIWA)