IAV

03.03.2023

AI – more science than fiction 

With the “AI in the Loop” project, DLR, HS Mittweida and IAV want to jointly find solutions for developing AI-based software products in an agile manner in the future. This will involve the use of robots equipped with AI algorithms, which will make it easier to optimize the software and give humans a better understanding of what an AI system is. In this interview, Prof. Dr. Frank Köster, founding director of the DLR Institute for AI Safety, talks about why the project is so important.

Viktoria Hoffmann

Students at Mittweida University of Applied Sciences are field testing the robots. | © Hochschule Mittweida 

Why does the AI in the Loop project exist?

The project exists because we can no longer create future products, especially AI-based products, in one step. We no longer proceed according to the classic phases, such as design, implementation and roll-out, but instead carry out successive product enhancement and improvement in the field. This is known as a DevOps approach, in which we integrate development (“Dev”) and operations (“Ops”) with the goal of continuous product improvement and further development.

At the DLR Institute for AI Safety, we are particularly focused on the topics of safety and security for AI-based solutions. Therefore, we concentrate on safety-relevant issues in the “AI in the Loop” project.

We also need innovative methods to be able to perform safety-oriented proofs for iterative system development and to substantially address the topic of security. The project is a vehicle for our research, which is why it is strategically important for us to set up and operate the lab as close to the real world as possible.

What is the major goal of the “AI in the Loop” project?

DLR would like to build the solutions generically so that we can apply them in different industrial domains. In the future, this will be done for applications in robotics, aviation and automotive, for example.

We intend to apply the findings practically in the foreseeable future. That’s why it makes sense for us to have partners who come from the real world and don’t view the project in purely academic terms. The idea is also to continue the cooperation, which started with the laboratory setup “AI in the Loop”, in the longer term.

How far advanced is the project? What can the robots already do?

The first steps have already been taken. The service robots are already driving around at DLR, picking up guests from the elevator, moving through crowds of people and mastering their first interactions with humans.

The development of service robots is not actually our main topic, but they make it easier to understand what an AI system is. The robots are primarily there to make our work visible and understandable. From their physical form, you can see that AI algorithms are not only relevant in science fiction movies, but are already very close to us today. So it’s not something that will have to be dealt with in 20 years, but a topic that will reach us all relatively quickly.

Do the robots recognize people they have seen before?

Not yet, but as a next step it would be desirable for the robots to recognize people who are frequent guests at the institute and to start an interaction with them. Among other things, they should then also know where to find the person the guest would like to visit.
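
As a purely illustrative sketch of what such recognition could involve technically: a returning visitor might be matched by comparing a freshly computed face or appearance embedding against a gallery of known embeddings. The embedding source, gallery contents and threshold below are assumptions for illustration, not the project’s actual approach.

    # Illustrative sketch only: matching a visitor embedding against known guests.
    # The upstream embedding model, gallery contents and threshold are assumed.
    from __future__ import annotations

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(embedding: np.ndarray, gallery: dict[str, np.ndarray],
                 threshold: float = 0.8) -> str | None:
        """Return the best-matching known visitor, or None if nobody matches."""
        best_name, best_score = None, threshold
        for name, reference in gallery.items():
            score = cosine_similarity(embedding, reference)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # Dummy example: a two-person gallery and a query close to the first entry.
    rng = np.random.default_rng(0)
    gallery = {"guest_a": rng.normal(size=128), "guest_b": rng.normal(size=128)}
    query = gallery["guest_a"] + 0.05 * rng.normal(size=128)
    print(identify(query, gallery))  # -> "guest_a"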

How does AI work in “AI in the Loop”?

AI gives us the ability to process even large amounts of data comprehensively with intelligent algorithms. This is also a key functionality in the robots. For example, they should be able to perceive their environment with a camera, recognize the objects in it and classify them. In addition, they should know how to behave when interacting with people and objects. It is also important that future developments in the environment or in a system can be predicted.

To realize this, we use AI algorithms. Especially in perception, i.e. in recognizing and interpreting the environment and deriving actions from it, AI-based components are indispensable building blocks of the solution.
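
To make the perception step more concrete, here is a minimal sketch of camera-based object detection and classification using a generic, COCO-pretrained detector from torchvision. The choice of model, the confidence threshold and the dummy frame are illustrative assumptions, not the project’s actual stack.

    # Minimal perception sketch, assuming a COCO-pretrained torchvision detector.
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # generic pretrained detector
    model.eval()

    def perceive(frame: torch.Tensor, score_threshold: float = 0.7):
        """Detect and classify objects in one camera frame.

        frame: float tensor of shape (3, H, W) with values in [0, 1].
        Returns (label_id, score, box) tuples above the confidence threshold.
        """
        with torch.no_grad():
            detections = model([frame])[0]
        return [
            (int(label), float(score), box.tolist())
            for label, score, box in zip(
                detections["labels"], detections["scores"], detections["boxes"]
            )
            if score >= score_threshold
        ]

    # Dummy frame standing in for a real camera image from the robot.
    objects = perceive(torch.rand(3, 480, 640))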

What are the challenges in doing so?

It is a great challenge to bring a deep domain-specific, semantic level into technical systems. This means that a robot should not only recognize objects and people but understand exactly which object it has in front of it. It should know that a chair has fallen over, or that a person is lying on the floor because he or she might have a health problem. We are trying to figure out how to elegantly link technical algorithms and semantic information in order to reliably plan and initiate adequate actions in each case.
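
One way to picture the link between detections and semantics is a small rule layer that turns enriched detections into actions. The states, rules and action names below are hypothetical illustrations of the idea, not the project’s method.

    # Hedged sketch: mapping semantically enriched detections to robot actions.
    # Labels, postures and action names are illustrative placeholders.
    from __future__ import annotations

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str    # e.g. "person", "chair" (from an object detector)
        posture: str  # e.g. "upright", "lying", "fallen" (from a pose estimator)

    def interpret(detections: list[Detection]) -> list[str]:
        """Derive high-level actions from detections plus semantic context."""
        actions = []
        for d in detections:
            if d.label == "person" and d.posture == "lying":
                # A person lying on the floor may have a health problem: escalate.
                actions.append("alert_staff")
            elif d.label == "chair" and d.posture == "fallen":
                actions.append("report_obstacle")
        return actions or ["continue_patrol"]

    print(interpret([Detection("person", "lying")]))  # -> ['alert_staff']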

A second challenge is to design the recognition and the algorithms robustly, with an eye to safety and security, in order to be able to handle attack scenarios. Currently, for example, a special pattern on a T-shirt could be used to prevent objects from being recognized correctly. We need to figure out what this potential for deception is and how it can be compensated for by other sensors or data sources.
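
A simple way to illustrate compensating for such deception is to cross-check sensor modalities: if a depth or lidar sensor registers a physical obstacle where the camera sees nothing, the disagreement itself is a warning sign. The interface below is a hypothetical sketch of that idea, not a description of the project’s safeguards.

    # Hypothetical sketch: flagging camera/depth disagreement as possible deception.

    def cross_check(camera_sees_person: bool, depth_sees_obstacle: bool) -> str:
        """Compare two sensor modalities and flag suspicious disagreement."""
        if camera_sees_person and depth_sees_obstacle:
            return "confirmed_person"
        if not camera_sees_person and depth_sees_obstacle:
            # e.g. an adversarial T-shirt pattern suppressed the camera detection,
            # but the depth sensor still registers a physical obstacle.
            return "possible_adversarial_input"
        if camera_sees_person and not depth_sees_obstacle:
            return "possible_camera_false_positive"
        return "clear"

    print(cross_check(camera_sees_person=False, depth_sees_obstacle=True))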

What else can the technology be used for later?

For us, the focus is on the “AI in the Loop” lab as a tool. The technology will be relevant for various robotic systems, but also in the fields of automated flying and automated driving. Research work will initially focus on the perception part – on deriving models of the environment and understanding situations.

Are there overlaps with other sciences?

Beyond the technology-oriented questions, we are also interested in ethical, legal, and societal issues as they arise with strong human-technology integration.

If we again look at the use case of service robots, it is exciting for us to observe how people react to them when they come into contact with them: Do they push the robots aside as mere technology components, or do they interact with them normally and make room for them?

Do we need more collaborations like this?

I think we need more long-term collaborations between research institutions and practical partners. Given the complexity of today’s IT-driven solutions, we can no longer expect to jump into a new topic area and, after a very short time, find a scalable solution that will still work in the long term.

Therefore, we have to accept that many problems in the area of safety and security of AI systems can only be solved with scientifically based approaches. Here, we need much more intensive cooperation between business and scientific partners that is geared toward the long term.

Contact:

Jan Gacnik
jan.gacnik@iav.de

Falk Langer
falk.langer@iav.de

Christian Steiner
christian.steiner@iav.de