Torsten Hiltmann (Berlin)
Till Grallert (Berlin)
Gerd Graßhoff (Berlin)
Peter Haslinger (Gießen)
Torsten Hiltmann (Berlin)
Elena Kalogeropoulos (Berlin)
Wulf Kansteiner (Aarhus)
Sita Steckel (Frankfurt/Main)
So-called 'artificial intelligence' (AI) or machine learning (ML) approaches, which use statistical models trained on gigantic data sets to generate all types of media, from text and images to sound and film, that are deceptively real in form and content, have been the talk of the town ever since the release of OpenAI's chatbot ChatGPT in November 2022. Developments in applied AI are so rapid that one could easily get the impression that a reign of the machines, of whatever kind, is just around the corner. Assessments of the current situation range from boundless enthusiasm for the new possibilities opening up to widely publicised dystopian doomsday scenarios. In academia, it is primarily the challenges AI poses for research and teaching that are being discussed. Yet both public and professional debates often leave unclear what AI can actually achieve at present and what a realistic horizon of expectations is for developments over the next three to five years. This is where the event organised by Task Area 5 "Data Culture" of the NFDI consortium 4Memory comes in, with the aim of taking stock and clarifying the situation.
In fact, AI-based methods have long since found their way into historical research and its methodological apparatus. One need only think of automatic handwriting recognition and OCR, whose use is now considered almost routine, or of procedures such as topic modelling and named entity recognition, which have already found wider use in the historical research process.
As colleagues working in digital and computational history, we regard the developments associated with generative AI as a singular moment of shock that calls the very foundations of our discipline into question. On the one hand, the entire cultural production of human communities, insofar as it is available in digital form, becomes accessible to research in a completely new way. On the other hand, the archive of this cultural production is being flooded with an unmanageable amount of entirely plausible yet machine-generated multimedia data, which can considerably influence our historical culture. These parallel developments of history(s) from the machine place fundamentally new methodological demands on the historical sciences.
Historians face the challenge of assessing the epistemological implications of the "new" AI-based methods of source indexing and of using them meaningfully: from computer vision for recognising layouts, symbols and structures in images, through new methods of natural language processing (NLP) and models of semantic enrichment with authority data, to the automatic summarisation of individual sources and entire source corpora, or even their direct dialogical querying in natural language. At the same time, we as a discipline must find answers to the major societal challenge that AI poses for engaging with the past within our historical culture, especially since refuting falsehoods is at least an order of magnitude more costly than creating and disseminating them (Brandolini's law).
The event will first provide an overview of these new technologies and basic insights into their fields of application in historical studies. This will be followed by a panel discussion in which we will explore, together with the audience, the potentials, challenges and dangers of 'artificial intelligence' as well as its consequences for historical culture.