Background to the Luther AI project

A blog post by Ralf Peter Reimann

❗️Learning with virtual contemporary witnesses in the metaverse through artificial intelligence ❗️

Luther AI at Didacta 2024

Unfortunately, Germany is increasingly seen as a "developing country for education", and recent studies seem to confirm this.

But what is often forgotten is that we also have a very strong entrepreneurial culture: there are still many people who are courageous, think outside the box and dare to try something new, very often with an uncertain outcome.

In my opinion, these are precisely the skills we need to teach and promote in students today.

Performance comparisons between pupils based on content that has only changed marginally in the last 100 years lead nowhere!

I also see our Luther AI project, which we recently presented at Didacta, against this backdrop.

I'm really proud of our team: Vladimir Puhalac, Jakow Smirin, Sascha Cramer and Ralf Peter Reimann.

All background information on the project status can be found in the latest blog post by Ralf Peter Reimann 👇

https://theonet.de/2024/03/04/interaktionen-mit-dem-ki-xr-martin-luther-im-klassenraum/

Special kudos also to Telekom TechBoost "for making this happen" 👍 and the #OpenTelekomCloud for the platform support❗️

#Metaverse #AI #VirtualHuman #FutureOfEducation #DigitalEducation #PisaIsNotAll #Metainnovator

How human can AI really become?

An essay on a possible philosophical classification

When can we consider an AI to be "human"?

INTRODUCTION

Erich Fromm, an outstanding thinker of the 20th century, left us a reflection on the essential aspects of being human in his work "To Have or to Be". This essay explores the question of whether artificial intelligence (AI) can ever be human by combining Fromm's philosophy with Richard David Precht's thoughts from "AI and the Meaning of Life". The fundamental question is to what extent human decisions are made logically or emotionally.

I. Fromm's "To Have or to Be" in the age of AI:

Fromm held the view that true fulfillment lies in being and not in having: the true identity of man is primarily to be found in non-material "being" rather than in material "having". If we consider AI as a tool, the question arises as to whether machines can ever understand and replicate this "being" of human beings.

II. The humanity of AI according to Precht:

In his book, Precht argues for an ethical approach to AI development. He recognizes the superiority of AI in many areas, but doubts that machines will ever reach the consciousness and depth of human life. The integration of moral principles and values remains a challenge for the creation of human-like AI.

III. Emotions and intuition in humans:

Fromm emphasized the role of emotions in humans, which often underlie intuitive decisions. The question of the humanity of AI leads to considerations of whether machines can ever understand emotions and make intuitive decisions. Precht emphasizes that it is unlikely that AI can develop true empathy, which is an essential part of human intuition.

IV. Logic vs. intuition in human decisions:

The question of how much human decision-making is based on logic and how much on intuition is complex. Psychological studies show that many decisions are not purely logical, but are also influenced by emotional factors. This poses a challenge for the development of AI, which must not only think logically but also understand emotional nuances.

V. The future of human-like AI:

Against the background of Fromm's philosophy and Precht's views, the question of the possibility of a human-like AI remains open. The challenges lie not only in technological development, but also in the integration of moral and emotional aspects. An AI can make logical decisions, but humanity requires more than logic - it requires empathy, love and a deeper connection to life.

Conclusion:

Erich Fromm's "To Have or to Be" provides a critical lens through which to view humanity in the context of AI. Richard David Precht's views emphasize the ethical aspects of AI development and the unlikelihood of a fully human-like AI. The question of the relationship between logical and intuitive decisions in human action extends the discussion into the field of psychology. All in all, the human remains a complex being that goes beyond purely logical thinking and presents AI developers with a challenging, perhaps unsolvable, task.

Martin Luther answered questions live on Reformation Day

Luther speaks from the pulpit

On October 31, 2023, the YouTube channel EKiRInternet premiered a three-dimensional, AI-controlled Luther avatar. Using modern AI algorithms, a painting of Martin Luther was converted into a photorealistic representation, and the avatar answered questions as if the reformer were speaking today.¹ The result is an avatar that behaves like a real person and can interact in space.

These interactions took place on the XRHuman platform in the Metaverse and were broadcast live on YouTube². Using ChatGPT technology, the Luther avatar was able to answer questions from the audience in real time; the AI was programmed to answer in the manner of Martin Luther¹.
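The project has not published its implementation, but a minimal sketch of how such a persona can be driven with the OpenAI chat API might look like the following. The model name, the system prompt and the ask_luther helper are illustrative assumptions, not the project's actual code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative persona prompt; the prompt actually used live is not public.
SYSTEM_PROMPT = (
    "You are Martin Luther (1483-1546), speaking from the pulpit. "
    "Answer in the first person, grounded in your writings and the "
    "theology of the Reformation. If asked about events after 1546, "
    "say that you can only judge them from the perspective of your time."
)

def ask_luther(question: str) -> str:
    """Send one audience question to the persona and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed; the post only says "ChatGPT technology"
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(ask_luther("What did you mean by 'sola scriptura'?"))
```

In the live event, the answers were then rendered by the photorealistic avatar; this sketch covers only the question-answering step.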

Ralf Peter Reimann (Internet Officer of the Protestant Church in the Rhineland) and I initiated and implemented the cooperation project with the Metaverse platform XRHuman.

We see great potential in making historical figures accessible to a broad target group through the use of AI and providing new impetus, including for the church.

Up to 150 people took part in the live chat and over 100 questions were answered.

Sources:
(1) Ask Martin Luther your questions on Reformation Day - presse.ekir.de: https://presse.ekir.de/presse/D89F4DDA59924D37B70A56665710E09C/stell-martin-luther-am-reformationstag-deine-fragen
(2) Live video of the event with Luther on YouTube: https://www.youtube.com/live/uBwCHNYvgRY?si=fO_aXVeSNdpPzC4R

Trauma therapy with AI support: opportunities and challenges

Traumatic experiences can lead to serious psychological consequences, such as post-traumatic stress disorder (PTSD). The treatment of trauma disorders requires individual and professional support by psychotherapists. But how can artificial intelligence (AI) support or complement trauma therapy? What are the benefits and risks involved?

AI in the prevention of trauma sequelae

AI could also help identify traumatized individuals at an early stage and offer preventive measures. For example, AI-supported apps or chatbots could provide victims with information, tips or exercises to deal with traumatic symptoms. Such digital interventions could provide a low-threshold and anonymous means of accessing psychological help.
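To illustrate, the triage logic of such a chatbot could look like the following sketch. The keyword lists and canned responses are invented for the example and are not clinical guidance:

```python
# Minimal triage sketch for a preventive support chatbot.
# Keywords and responses are illustrative, not clinically validated.
CRISIS_TERMS = {"suicide", "hurt myself", "can't go on"}
SYMPTOM_TERMS = {"flashback", "nightmare", "panic", "can't sleep"}

def triage(message: str) -> str:
    """Route a user message to crisis referral, an exercise, or information."""
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Potential crises are always escalated to human help.
        return "Please contact a crisis line or emergency services right away."
    if any(term in text for term in SYMPTOM_TERMS):
        # Low-threshold grounding exercise for acute symptoms.
        return ("Try a grounding exercise: name five things you can see, "
                "four you can hear and three you can touch.")
    return "Here is some general information about common trauma reactions ..."

print(triage("I keep having nightmares and panic at night"))
```

Even in such a simple scheme, the crisis branch comes first: anything that hints at acute danger must bypass the self-help logic and point to human help.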

AI in the diagnosis and therapy of trauma sequelae

One possible field of application for artificial intelligence is to support the diagnosis of mental illnesses. For example, AI models drawing on various parameters could indicate the direction in which more in-depth diagnostics might be useful and thus facilitate diagnosis. This could be done, for example, by analyzing speech patterns, facial expressions, gestures or physiological data.
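As a toy illustration of this idea, a screening model could combine a few such parameters into a single indication. The features and data below are synthetic, and the model is not a validated instrument:

```python
# Toy screening model over hypothetical features
# (speech rate, pause ratio, mean heart rate); all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Synthetic feature matrix: [speech_rate, pause_ratio, heart_rate]
X = rng.normal(loc=[3.0, 0.2, 75.0], scale=[0.5, 0.05, 8.0], size=(n, 3))
# Synthetic labels loosely tied to the features, for illustration only
y = (X[:, 1] + (X[:, 2] - 75.0) / 40.0 + rng.normal(0, 0.1, n) > 0.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The output is an indication for further diagnostics, not a diagnosis.
print("screening accuracy:", model.score(X_test, y_test))
print("flag probability for one case:", model.predict_proba(X_test[:1])[0, 1])
```

The crucial point is the last comment: such a model can at best flag where closer, human diagnostics might be useful.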

AI could also be used in the therapy of trauma sequelae, e.g. as a complement or alternative to conventional psychotherapy. Different methods could be used, such as:

- Virtual Reality (VR): VR makes it possible to recreate traumatic situations in a controlled and safe environment to provide exposure therapy. The VR environment could be adapted by AI to the individual needs and reactions of the patient (see the sketch after this list).

- Avatar therapy: Avatar therapy is a form of conversational psychotherapy in which patients interact with a virtual counterpart controlled by AI. This could, for example, represent a person from the traumatic experience with whom the patient can enter into a dialogue in order to process what happened.

- AI-based software: AI-based software could support therapy for trauma sequelae, for example, by providing personalized feedback, recommendations, or reminders. It could also facilitate documentation and evaluation of therapy.
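To make the VR point from the list concrete: an adaptive exposure controller could couple scene intensity to a physiological signal such as heart rate. The thresholds and the adjust_intensity helper below are illustrative assumptions, not clinically validated values:

```python
# Sketch of an adaptive exposure controller, assuming the VR platform
# exposes a scene "intensity" dial and a live heart-rate feed.
TARGET_HR_LOW = 80    # below this, exposure may be intensified slightly
TARGET_HR_HIGH = 110  # above this, the scene is de-escalated

def adjust_intensity(intensity: float, heart_rate: float) -> float:
    """Return a new scene intensity in [0, 1] based on patient arousal."""
    if heart_rate > TARGET_HR_HIGH:
        intensity -= 0.10  # de-escalate: patient is over-aroused
    elif heart_rate < TARGET_HR_LOW:
        intensity += 0.05  # intensify gently within the tolerable window
    return max(0.0, min(1.0, intensity))

# Simulated session: as heart rate rises, the controller backs off.
intensity = 0.5
for hr in [85, 95, 112, 118, 104, 90]:
    intensity = adjust_intensity(intensity, hr)
    print(f"heart rate {hr:3d} -> intensity {intensity:.2f}")
```

The asymmetric step sizes are a deliberate design choice in the sketch: de-escalation reacts faster than intensification, so the controller errs on the side of patient safety.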

Ethical issues and challenges

However, the use of AI in trauma therapy also raises ethical issues and challenges that need to be considered. Some of these are:

- Data protection and security: Processing sensitive data about traumatic experiences requires a high level of protection against misuse or unauthorized access. Both technical and legal measures must be taken to protect the privacy and autonomy of patients (a minimal technical sketch follows after this list).

- Quality and effectiveness: The quality and effectiveness of AI-based interventions must be scientifically tested and evaluated before they can be applied in practice. This must also take into account possible side effects or harm that could result from faulty or inappropriate AI.

- Trust and relationship: The relationship between patient and therapist is an essential factor for the success of trauma therapy. Trust, empathy and respect play an important role. How can such a relationship be established and maintained with an AI? How can an AI complement human interaction without replacing or endangering it?
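On the data protection point above, one basic technical measure is encrypting records at rest. Here is a minimal sketch using the Python cryptography package; the session note is invented, and in practice the key would live in a key-management service rather than next to the data:

```python
# Encrypt a therapy note at rest with symmetric (Fernet) encryption.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production: store in a key-management service
cipher = Fernet(key)

note = "Session 12: patient reported fewer nightmares this week."  # invented example
token = cipher.encrypt(note.encode("utf-8"))   # ciphertext, safe to store
print(cipher.decrypt(token).decode("utf-8"))   # readable only with the key
```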

Conclusion

AI offers many opportunities to enhance or expand trauma therapy. However, the ethical aspects and challenges associated with the use of AI in this sensitive area must also be considered. Interdisciplinary collaboration and critical discourse are therefore needed to explore and responsibly shape the opportunities and risks of AI in trauma therapy.