AI Lab


The AI Lab forms part of the Electronic Media teaching area. It supports students both in developing Deep Learning algorithms adapted to their aesthetic ideas and in integrating existing data sets and trained models into their artistic practice.

Deep Learning is the branch of AI research that uses so-called 'Deep Neural Networks' to automatically find significant patterns in data. These networks enable, among other things, the autonomous navigation of vehicles; they beat the best human players in popular board and computer games and already produce almost photorealistic portraits of non-existent people. Their versatility implies a largely unexplored potential for generative art, AI-assisted creative work processes, automation, and human-machine interaction.

The AI Lab is a forum for reflecting on the implications of technology for society and artistic creation. Building a solid basic understanding of the workings of current algorithms is intended to lay the foundation for fostering a mature engagement with the subject area.

Funded by the German Federal Ministry of Education and Research as part of the joint project "KITeGG KI greifbar machen und begreifen: Technologie und Gesellschaft verbinden durch Gestaltung (Making AI tangible and understandable: Connecting technology and society through design)". Further information: https://gestaltung.ai/

Leon Etienne Kühr

Research Associate

Co-Lead AI Lab
Westflügel, Raum 307

Mattis Kuhn

Research Associate

Co-Lead AI Lab

Westflügel, Raum 307

Johanna Teresa Wallenborn

Research Associate
Algorithms in Context

wallenborn@hfg-offenbach.de

T +49 (0)69.800 59-220

Wintersemester 2024/25

Text Synthesizer. Playing with Language.

In the course "Text Synthesizer. Playing with Language" we develop various synthesizers for generating texts. We start by using Python libraries to analyze language in order to generate texts. We calculate with words in the form of word embeddings. We use Large Language Models (LLMs) to generate code, but also literature. We personalize LLMs through system prompts and template designs. We experiment with image-to-text models to write texts from images. We translate texts into speech with text-to-speech models. Following these workshops, we will use the last weeks of the semester to individually develop our own text synthesizer, an experimental or literary text, or song lyrics.

Wednesday, 14:00 - 16:30
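The idea of a text synthesizer can be illustrated even without a neural network. Below is a minimal sketch of a Markov-chain text generator in plain Python; the function names and the toy corpus are invented for illustration and are not the course's actual material:

```python
import random

def build_model(text, order=1):
    """Map each word n-gram to the words observed after it."""
    words = text.split()
    model = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model.setdefault(key, []).append(words[i + order])
    return model

def synthesize(model, length=8, seed=0):
    """Walk the chain to generate a new word sequence."""
    rng = random.Random(seed)
    key = rng.choice(list(model))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:  # reached an n-gram with no known continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("we play with language we play with code "
          "we generate texts we generate images")
model = build_model(corpus, order=1)
print(synthesize(model, length=8, seed=42))
```

A higher `order` makes the output stick more closely to the source text; word embeddings and LLMs, as used in the course, replace this simple counting with learned representations.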

Generating Surveillance | from vision to synthesis

The ability to generate images or even videos is closely linked to the tasks of machine vision. In the course “Generating Surveillance | from vision to synthesis” we want to understand this step from seeing to generating through practical exercises in Python. Starting from existing video material, we will use computer vision methods such as classification, segmentation, mask generation, object recognition and the creation of image descriptions to modify existing moving images or create completely new compositions. On this basis, we combine these techniques with generative AI to replace objects, generate new frames or manipulate existing images. In addition to the practical exercises, we will deal with topics such as anti-surveillance art, activism and the problem of deepfakes. We will shed light on questions of privacy and on how our everyday data - for example from social media, smartphones or public cameras - ends up, often without our knowledge, in training data sets for AI systems. The aim of the course is for each participant to develop a small program that automatically generates a video, to be presented as part of an AI evening.

Thursday, 14:30 - 17:00
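The core compositing step described above, replacing pixels selected by a segmentation mask with generated content, can be sketched with NumPy. The flat black fill below merely stands in for a generative model's output, and all names are illustrative:

```python
import numpy as np

def replace_masked(frame, mask, generated):
    """Composite generated pixels into a frame wherever the mask is set.

    frame: (H, W, 3) uint8 image
    mask:  (H, W) boolean mask, e.g. from a segmentation model
    generated: (H, W, 3) uint8 image standing in for generative output
    """
    out = frame.copy()
    out[mask] = generated[mask]  # boolean indexing selects masked pixels
    return out

# Toy 4x4 frame: gray background with a white "object" in the top-left.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
frame[:2, :2] = 255
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True  # pretend a segmentation model found the object here
generated = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in "inpainted" content

result = replace_masked(frame, mask, generated)
print(result[0, 0], result[3, 3])  # masked pixel replaced, rest untouched
```

In practice the mask would come from a model such as a segmentation network, and `generated` from an inpainting or image-to-image pipeline; the compositing itself stays this simple.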

KI-Abend (AI Evening)


In addition to courses and an open workshop, we organize AI evenings at irregular intervals to talk about art and design projects, current developments and papers over drinks and snacks. The evenings are open to everyone, including interested people from outside the university, and serve as a platform for discussing AI topics. Their content can be shaped in a participatory way: for example, papers, tools or projects can be presented and discussed. If you have any suggestions, please get in touch: kuhn@hfg-offenbach.de, kuehr@hfg-offenbach.de

Thu, 2 May 2024, 6 pm – The Lab Says Hello!

Thu, 23 May 2024, 6 pm – AI and Literature

Thu, 20 June 2024, 6 pm – AI and War

Student work

j00n — ELIZA bot (2024)

“What I had not realised is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” – Joseph Weizenbaum – Computer Power and Human Reason (1976)

The interactive installation »bl00t - ELIZA bot« explores interaction with a talking chatbot. An adaptation of Joseph Weizenbaum's 1966 ELIZA chatbot speaks with a real-time AI voice and responds to typed user input. The simple ELIZA algorithm enables latency-free interaction, and the robotic voice amplifies the ELIZA effect.

Raspberry Pi 5, Python, Piper (text-to-speech), light sensor
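Weizenbaum's ELIZA works by simple pattern matching and pronoun reflection rather than by learning. A minimal sketch of that mechanism in Python; the rules and names here are invented for illustration, not the installation's actual code:

```python
import re

# A few rules in the spirit of Weizenbaum's 1966 ELIZA: match a pattern,
# reflect pronouns, and echo the user's words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # fallback
]

def reflect(fragment):
    """Swap first/second person so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input):
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my computer"))
```

Because the rules are fixed and require no model inference, responses are effectively instantaneous, which is what makes the latency-free interaction mentioned above possible.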

Elise Olenina — Draw Me a Circle (2024)

In the creative process we often depend heavily on the tools we use. They can influence the process to the point where the original idea is lost. To examine this field, Elise Olenina formulated a series of commands corresponding to typical tasks in the design process, such as creating an object, rearranging it, adding further elements, positioning it in specific shapes, and so on. These prompts were processed by a Large Language Model trained on code and text, but not on images. The outputs take the form of SVG code, which is shown as static and animated vector graphics.

With these prompts, Elise Olenina attempts to imitate her own creative design process. How do we structure things? How do we create order, and how do we deal with chaos? What criteria distinguish human work from machine work? And can language make our intentions visible during the design process?

LLM, Paper, Display
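Since SVG is plain text, a language model trained on code can emit it directly. As a point of reference, here is a hand-written sketch of the kind of SVG document a prompt like "draw me a circle" might yield, assembled in Python; the function name and parameters are invented for illustration:

```python
def svg_circle(cx, cy, r, stroke="black"):
    """Assemble a minimal SVG document containing one circle."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">'
        f'<circle cx="{cx}" cy="{cy}" r="{r}" '
        f'fill="none" stroke="{stroke}"/></svg>'
    )

# A 50-unit circle centered in a 200x200 canvas.
print(svg_circle(100, 100, 50))
```

The resulting string can be saved as an `.svg` file and rendered by any browser, which is what makes text-only models capable of producing the static and animated vector graphics shown in the work.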


Zhichang Wang — Stuhl (2024)

Photo: Cheesoo Park


Computer, monitor, graphics tablet, Stable Diffusion (SDXL), Custom Model, ComfyUI, Python

The interactive installation Stuhl (chair) uses real-time AI image generation to expand the design process for chairs modeled on Thonet chairs. Visitors can draw or sketch freely on a graphics tablet. Whether spontaneous lines or simple sketches, the system stimulates the imagination and helps to design individual Thonet chairs. A ComfyUI workflow ensures that the generation process runs smoothly and quickly and that the AI system can process image requests efficiently. It is based on the SDXL base model from Stability AI and uses a specially trained LoRA model together with an LCM model for real-time image generation. Especially with drawings that come close to the shapes of chairs, the AI system manages to design a chair that matches the style of the drawing by interpreting the lines.

...