SP 7 Interaction with vulnerable road users

Traffic situations are often resolved by road users interacting with each other and signalling their intentions, or the reaction they are seeking, through particular actions or gestures. Typical examples include the eye contact between drivers and pedestrians at a zebra crossing, the hand signal a cyclist uses to indicate an intention to turn, or the gestures a police officer makes to regulate the flow of traffic.


The automation of vehicles will turn technical systems into protagonists in what was previously a road traffic system characterised exclusively by human activity. In this context, automated vehicles face the challenge of safely mastering highly varied and complex situations involving other road users. One key prerequisite is that automated vehicles are capable of identifying and understanding the behaviour and intentions of other road users. This applies to a particularly high degree in situations involving vulnerable road users.


Analysing behaviour and identifying intentions

Vulnerable road users communicate in road traffic predominantly through poses and gestures. On the one hand, this is a deliberate activity that consciously signals an intention by means of a gesture: cyclists, for example, use hand signals to indicate that they intend to change lanes or turn at an intersection. On the other hand, vulnerable road users also make unconscious gestures from which other road users can draw conclusions about what they intend to do. One example is the direction a pedestrian is looking in, which indicates whether the pedestrian has seen an approaching vehicle. To anticipate the behaviour of vulnerable road users, automated vehicles must be able to reliably identify these kinds of actions and gestures and interpret them correctly. These are the objectives pursued by the 'Interaction with vulnerable road users' subproject.

The starting point for a more exhaustive behavioural analysis is the reliable and robust detection of vulnerable road users in road traffic. Stereo cameras, laser scanners and high-resolution radar sensors will be used for this. Sensor fusion will combine the information arriving from the different sensor sources into a consistent picture. In particular, the detection of vulnerable road users will also be implemented with deep learning methods, such as deep neural networks, and should reliably identify vulnerable road users even under adverse conditions such as partial occlusion.

A focal point of the subproject is a detailed analysis of vulnerable road users' behaviour. On the basis of a field study, relevant interaction characteristics and a large number of elementary gestures must first be identified and evaluated for their relevance. Using a range of sensor modalities, characteristics that are relevant for identifying intentions and gestures are then extracted from the sensor data. These characteristics form the basis for identifying the poses (e.g. position of the head and body, line of sight or position of the legs) and gestures (such as a hand signal made by a cyclist) exhibited by vulnerable road users.
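One common way to combine detection evidence from several sensor sources is so-called late fusion of per-sensor confidences. The following is a minimal sketch of that idea only; the sensor names, weights and the noisy-OR rule are illustrative assumptions, not the fusion architecture actually used in the subproject:

```python
# Hypothetical late-fusion sketch: each sensor reports an independent
# detection confidence for the same candidate object; a vulnerable road
# user is "missed" only if every sensor misses it (noisy-OR fusion).

def fuse_detections(confidences: dict) -> float:
    """Fuse per-sensor detection probabilities into one confidence."""
    p_miss = 1.0
    for p in confidences.values():
        p_miss *= (1.0 - p)   # probability that this sensor missed the object
    return 1.0 - p_miss       # probability that at least one sensor detected it

# Example: a partially occluded pedestrian seen weakly by three sensors.
fused = fuse_detections({
    "stereo_camera": 0.7,
    "laser_scanner": 0.6,
    "radar": 0.4,
})
```

The fused confidence is higher than any single sensor's, which is the motivation the text gives for fusing several modalities: individually weak cues can still yield a reliable combined detection.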


Context information

The context in which vulnerable road users act is often highly significant for an exact interpretation of their behaviour or intentions. This is why other relevant elements of the traffic situation (e.g. objects, occupied or vacant spaces, kerbs, zebra crossings) must also be detected as contextual information and interpreted together with the characteristic gestures.

Using the identified poses and gestures, along with the relevant contextual information, the intentions of vulnerable road users can be identified and their behaviour modelled. Deep learning methods in particular will be used for this. By means of a subsequent situational analysis and behavioural planning, strategies for action can be derived and implemented for automated driving.
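To make the combination of pose cues and context cues concrete, here is a deliberately simple rule-based sketch. The cue names and hand-set weights are hypothetical; in the subproject these inputs would feed a learned (deep learning) model rather than fixed scores:

```python
# Hypothetical sketch: estimating a pedestrian's crossing intention from
# pose cues (gaze, body orientation) plus context cues (zebra crossing).
from dataclasses import dataclass

@dataclass
class Observation:
    head_towards_vehicle: bool   # gaze cue: has the pedestrian seen the vehicle?
    body_facing_road: bool       # pose cue: is the body oriented towards the road?
    near_zebra_crossing: bool    # context cue from the detected environment

def crossing_intention(obs: Observation) -> float:
    """Return an illustrative crossing-intention score in [0, 1]."""
    score = 0.0
    if obs.body_facing_road:
        score += 0.4
    if obs.near_zebra_crossing:
        score += 0.4
    if not obs.head_towards_vehicle:
        # A pedestrian who has not seen the vehicle is treated as riskier.
        score += 0.2
    return score
```

The point of the sketch is the structure, not the numbers: intention recognition only becomes reliable when pose and gesture cues are interpreted together with contextual information, exactly as the text describes.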
