
IAV

10.08.2023

How safe are robots?

Robots are an important part of our everyday lives these days, but how safe are they really? Dr.-Ing. Alexander Joos, an expert in the field of robotics and product safety, talks about ethical issues and standards that are up for discussion in connection with the safety of mobile robots and AI.

Robots help people, but there are risks in using them, too. ©Jason Leung/Unsplash

What is product safety for robots? What is it about?

To put it simply: technical devices must be designed in such a way that no harm can come from them. More specifically, this is about safety, which becomes important when we use partially automated, automated or autonomous robots or vehicles, i.e. when they do our work for us. It's about mobile companions that do things on their own. Take our harvesting robot, for example: the goal is to make the harvesting of strawberries or other fruit safe.

So product safety does not only concern autonomous systems? 

No, in principle it affects every technical product, from non-automated devices all the way to autonomous systems. So the questions are not new; the topic is as old as technology itself. The pioneers of rocket science, for example, were already asking them. Back then, people started thinking in terms of redundancy and fail-safety. So the question does not arise only with the use of AI.

The biggest challenge is ... ?  

The challenge is always to design systems in such a way that they do not pose any danger, especially in the automated operation of these systems. Let's stay with our picking robot as an example: it must be safe enough not to cut off the finger of a worker or visitor to a plantation with its scissors instead of the strawberry. It does not matter whether this misbehaviour occurs because of a component failure or a faulty function (e.g. detection of the berry).

No matter what a robot does, whether it grasps something with its arm, sorts it or cuts it - the whole thing involves forces and sharp objects, which is why danger emanates from it or its function. And this danger must be managed and kept in check.

What has to be considered in terms of mechanics, electrics, software and environment for product safety?

Basically all of these points. We have to introduce safety measures for each of them so that a certain level of safety is achieved overall. A system needs to be looked at as a whole in order to make an assessment.

In the case of the IAV picking robot, we have to consider how it is mechanically designed so that we can, for example, constructively limit the length or extension of the arms so that the robot cannot swing so far that it could hit a human.

In terms of the environment, we could build a cage around it or install an emergency stop on the device itself that can be triggered if the robot heads in the wrong direction and, to use a somewhat exaggerated and drastic image, rolls towards an open pedestrian zone.

Regarding the software, we can design it so that it monitors itself, e.g. with multiple redundant software components that are developed slightly differently, do not make the same mistakes, and "synchronize" with each other. This way we can achieve a higher level of safety.
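The redundancy idea described here can be illustrated as a majority vote among independently developed components. This is a minimal, hypothetical sketch (the detector functions and their "berry"/"finger" labels are invented for illustration, not IAV's actual software):

```python
from collections import Counter

def majority_vote(results):
    """Return the value a strict majority of channels agrees on,
    or fail into a safe state if the redundant channels disagree."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("redundant channels disagree - enter safe state")
    return value

# Three diversely implemented detectors for the same task (hypothetical):
# because they are built differently, they should not share the same bugs.
def detector_a(frame): return "berry"
def detector_b(frame): return "berry"
def detector_c(frame): return "finger"   # one channel misbehaves

frame = object()  # stand-in for a camera frame
decision = majority_vote([d(frame) for d in (detector_a, detector_b, detector_c)])
# Two of three channels agree, so the vote yields "berry" and the
# faulty channel is outvoted rather than driving the actuator.
```

The key design choice is that disagreement never silently picks a winner without a majority; the system stops instead, which is the conservative behaviour for a machine holding scissors.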

It can happen that humans and robots share a workspace, for example during harvesting. How do we ensure that no employee is injured at work?

In certain scenarios, these systems operate alongside humans. This is possible if safety permits it. Basically, you have to think about the technology, software and system right at the start. Then I can decide what to do: can I send the robot off safely and accept that my workers will walk around alongside it, or do I have to safely cordon off the space around it?

In manufacturing, for example, a cage is often built around a robot that assembles parts.

Mobile robots would need a much larger cage. This could be a whole warehouse or a whole greenhouse that is locked off. This locked room would be even safer if I built in an automatic mechanism that cuts off the power when the door opens.

Don't mobile robots already recognise humans and know that they have to be careful around us?

In principle, yes, robots can be technically upgraded to recognize people or even other objects that could damage them. But equipping the robot with this capability is always a question of money. This can be done, for example, with cameras, radar, lidar or heat sensors. The latter would then recognize us, for example, as an object at 37.8 degrees Celsius, i.e. as a human being.
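The heat-sensor idea can be sketched as a temperature-window check: objects whose surface temperature falls in the human body-temperature range are treated as potential people. The range bounds below are an assumption for illustration; a real system would fuse several sensor types rather than rely on temperature alone:

```python
# Assumed window of human skin/body surface temperatures, in Celsius.
HUMAN_TEMP_RANGE_C = (34.0, 39.0)

def looks_like_human(object_temp_c: float) -> bool:
    """Flag objects within the human body-temperature window
    as potential people, so the robot slows down or stops."""
    low, high = HUMAN_TEMP_RANGE_C
    return low <= object_temp_c <= high

looks_like_human(37.8)   # True: treat as a person, be careful
looks_like_human(20.0)   # False: an ambient-temperature obstacle
```

A single threshold like this is deliberately conservative: false positives merely slow the robot, while a false negative could endanger a person, which is why the window is chosen wide.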

A robot has to recognize obstacles, doesn't it?

It always depends on what a robot is designed for. If it cannot detect obstacles (e.g. because it is not equipped with the appropriate sensors), it will knock them down. Then there is property damage. That is more acceptable than if a robot were to drive into a group of people. That is assessed quite differently.

It sounds cynical, but ultimately it is always a matter of weighing up probabilities. And it is a question of social acceptance. One aircraft is allowed to crash per 10⁹ flight hours. Even if it's on the news in the evening, many of us would still board a plane the next day, because crashes happen very rarely and this frequency is accepted by people.
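The 10⁹-hour figure can be made concrete with a little arithmetic. Assuming, purely for illustration, a constant catastrophic failure rate of 10⁻⁹ per flight hour and a frequent traveller who flies 3,000 hours in a lifetime (both numbers are assumptions, not from the interview):

```python
rate_per_hour = 1e-9            # accepted catastrophic failures per flight hour
lifetime_flight_hours = 3_000   # assumed lifetime flying of a frequent traveller

# Probability of experiencing at least one catastrophic failure,
# treating each hour as an independent trial.
p_failure = 1 - (1 - rate_per_hour) ** lifetime_flight_hours
# p_failure is approximately 3e-06: about three in a million
# over an entire lifetime of flying.
```

Numbers of this size are what the interview means by social acceptance: the residual risk is non-zero, but small enough that people board the next plane anyway.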

What does a user have to prove if they want to use mobile robots?

There are standards for product safety and standards for robotics. Basically, there are no clear-cut instructions for action; how to apply the standards is always a matter of consideration and interpretation. Standards are designed in such a way that they do not provide a recipe for every specific robot and every case. They are very abstract, but apply to everything. The detailed interpretation and implementation are then usually up to the developing or implementing company and the approving authorities. A product must be safe, full stop.

Who is liable if something happens?

That depends on the constellation. The standard is not specific here. It is mostly a question of who deploys the system, who has the mandate to train, who develops and implements the safety concepts, and who monitors operation.

If the manufacturer recommends fencing off the system and putting up warning signs, a higher residual risk from the device itself is accepted, because dangerous behaviour of the device is less surprising than it would be without a fence.

In complicated cases, courts ultimately decide who is to blame in the event of damage.

How often does something happen in connection with robotics?

In our cultural environment, product safety is very important. It has a high priority, especially in technical development. There are detailed and strict regulations, and safety measures are monitored and enforced. With trains, for example, great attention is paid to safety, so that very rarely anything happens. The accident near Eschede in 1998 was a very rare exception.

What about industrial robots that assemble parts?

Here, too, something safety-critical very rarely happens, at least as long as everyone involved adheres to the specifications. In many cases, it is not the technology that is the problem, but very often simply human error.

How does IAV deal with product safety?

With the utmost care. We think very carefully about which standard applies, which technical safety features we need to include, and so on.

In addition to considerations regarding the technology to be developed, we also think about the necessary usage regulations and sensitization or training for users.

Basically, however, I am completely serious when I say that safety must always be ensured, even if that restricts flexibility or means the business case cannot be realized.

What does that mean in programming?

There are rules about which tools, languages and procedures are used for development. Redundancy and testing are enormously important.

We have talked a lot about standards. Do innovative ideas and product safety go together?

A resounding yes! Standards must be taken into account in innovations, too. Often they are even the seedbed for new ideas, because they partly force you to be creative and find new solutions. We first work with a prototype in a secured or virtual environment. Then we quickly see what can and cannot be implemented, what may behave dangerously, and where further analyses and measures are necessary to develop a safe, functional product in the end.

 

If you would like to learn more about product safety, please contact our expert directly:

Dr.-Ing. Alexander Joos
Senior Vice President (SVP) – Digital Solutions Powertrain, TD-X | Future Powertrain
alexander.joos@iav.de
linkedin.com/in/dr-alexander-joos-91b64712