<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Domain-centric ADAS datasets</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Václav Diviš</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tobias Schuster</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marek Hrúz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Siemens Technology, Software Systems &amp; Processes, Research in Verification &amp; Test</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of West Bohemia, Faculty of Applied Sciences, Department of Cybernetics and New Technologies for the Information Society</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of West Bohemia</institution>
          ,
          <addr-line>Sedláčkova 214, Pilsen 3 301 00</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Since the rise of Deep Learning methods in the automotive field, multiple initiatives have been collecting datasets in order to train neural networks on different levels of autonomous driving. This requires collecting relevant data and precisely annotating objects, which should represent uniformly distributed features for each specific use case. In this paper, we analyze several large-scale autonomous driving datasets with 2D and 3D annotations in regard to their statistics of appearance and their suitability for training robust object detection neural networks. We discovered that despite the huge effort spent on driving hundreds of hours in different regions of the world, hardly any attention is paid to analyzing the quality of the collected data from an operational domain perspective. The analysis of safety-relevant aspects of autonomous driving functions, in particular trajectory planning in relation to the time-to-collision feature, showed that most datasets lack annotated objects at further distances and that the distributions of bounding boxes and object positions are unbalanced. We therefore propose a set of rules which help find objects or scenes with inconsistent annotation styles. Lastly, we question the relevance of mean Average Precision (mAP) without relation to the object size or distance.</p>
      </abstract>
      <kwd-group>
<kwd>Advanced Driver-Assistance Systems</kwd>
        <kwd>Trajectory Planning</kwd>
        <kwd>Domain-centric Datasets</kwd>
        <kwd>Object Detection</kwd>
        <kwd>mean Average Precision</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and Motivation</title>
<p>SafeAI 2023: The AAAI's Workshop on Artificial Intelligence Safety, Feb 13-14, 2023, Washington, D.C., US.
* Corresponding author.
divisvaclav@gmail.com (V. Diviš); tobias.schuster@siemens.com (T. Schuster); mhruz@ntis.zcu.cz (M. Hrúz)
https://gitlab.com/divisvaclav/ (V. Diviš)
ORCID: 0000-0001-9935-7824 (V. Diviš); 0000-0002-9421-8566 (M. Hrúz)</p>
<p>© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.</p>
      <p>Our contributions are the following:
• We define a minimum safe distance d(v, a) for a variety of scenarios (ego speed v and weather-dependent deceleration a). We calculate the safe distance based on German legislation, but the process can easily be adapted to any other legislation.
• We analyze the distribution of objects' bounding boxes' (BB) relative sizes, distances to the ego vehicle, and positions in the datasets.
• We define a set of standardizable sanity checkers which help verify the quality of the collected data and mark ambiguously labeled data.
• We highlight the concrete missing information which is not part of the datasets and diagnose the cause.
• We propose an automotive mean Average Precision (amAP) metric, which is related to the distance to the object or the relative BB size.</p>
      <p>2. Datasets and Related Work
2.1. Automotive Datasets
In contrast to the predominantly urban datasets, A2D2 and ONCE also contain highways, country roads, tunnels, etc. On top of that, the sensor setups vary as well, e.g. different camera resolutions. For the KITTI dataset, the authors used a LIDAR sensor and two stereo cameras (left and right), whereas for Waymo five LIDARs (restricted to 75 m) and five cameras were used. nuScenes used six cameras and one LIDAR sensor as well as five radars, ONCE uses one LIDAR and seven cameras, and A2D2 five LIDARs and six cameras. However, for the ACC, only the front cameras are taken into account, resulting in a smaller amount of usable images. The KITTI dataset is the smallest in terms of scenes and the least diverse, containing only sunny and cloudy daytime scenes. For a short period of time, the Waymo and nuScenes datasets provided the largest variety and amount of data and annotations; they are among the most widely-used autonomous driving datasets. Although the ONCE dataset recently set a new benchmark for the amount of driving hours and frames, Waymo contains the highest amount of 3D bounding boxes because ONCE focuses on self-supervised learning without labels. Table 1 gives an overview of important general information per dataset.</p>
<p>
        As mentioned in Section 1, the prerequisite for the correct functionality of the ACC is information about the object classes, sizes (used in case of an overtaking or evasive maneuver), and distances of the objects to the ego vehicle. Several common automotive datasets are therefore not suitable for such a task, despite some of them providing LIDAR data, namely BDD100K [<xref ref-type="bibr" rid="ref3">9</xref>] (no LIDAR data), CityScapes [<xref ref-type="bibr" rid="ref4">10</xref>] (no bounding box annotations), Perl [<xref ref-type="bibr" rid="ref5">11</xref>] (no 3D annotations) or Apollo Scape [12] (no images, only LIDAR). The other group of datasets contains images and LIDAR point clouds, including the 2D as well as the 3D annotations. It is for this reason that we decided to take the following large-scale automotive datasets into account: KITTI [13], Audi Autonomous Driving Dataset (A2D2) [14], Lyft Level 5 dataset [15], nuScenes dataset [<xref ref-type="bibr" rid="ref12">16</xref>], Waymo Open Dataset [<xref ref-type="bibr" rid="ref13">17</xref>] and the ONCE dataset [<xref ref-type="bibr" rid="ref14">18</xref>].
2.2. Dataset Analysis
The authors of the above-mentioned datasets compared their works based on the common aspects of the datasets, as shown in Table 1. General properties like the number of driving hours are often used to compare datasets and to state an improvement. Moreover, the number of scenes, images, or annotations is often used to determine the quality of the datasets. For instance, the authors of the ONCE [<xref ref-type="bibr" rid="ref14">18</xref>] and A2D2 [14] datasets focus on the number of annotations, the amount of driving hours, the adverse weather conditions, the time (day/night) and different locations (urban, highway, country roads), as well as the countries/cities where the data was captured. However, specific requirements of driver assistance systems were not considered while creating or evaluating any of those datasets.
      </p>
<p>
        Each dataset contains a varying number of labeled objects like small vehicles (vehicle, ego vehicle, SUV, motorcycle, etc.), large vehicles (truck, bus, tram), pedestrians, and cyclists. Especially KITTI and nuScenes show a high class imbalance for some classes due to fine-grained classes. Furthermore, the selected datasets contain labeled camera images with 2D and 3D bounding boxes and the corresponding LIDAR point cloud information, which provides distance information for each object. However, the size of the datasets, in terms of the number of labeled frames and captured ambient conditions, varies. For instance, A2D2 provides a dataset of 2D labeled images, but only a small part contains 3D bounding boxes. KITTI, nuScenes, LyftLevel5 and Waymo reflect only urban areas.
The intent was rather to generate general datasets for a wide range of supervised and unsupervised learning tasks as well as driving functions. The authors of A2D2, as well as nuScenes, focused on statistics relevant to the ACC and other assistance systems. Their work provides information about the distribution of the object distances for different classes, as well as the absolute number of objects within the dataset. Additionally, the authors of nuScenes analyzed the distributions of the velocities of common objects like vehicles and bikes, as well as bounding box dimensions. The KITTI benchmark [13] is based on the performance analysis of neural networks with respect to the size of BBs in pixels as a proxy for the distance of the ego vehicle to the object. Their work in general follows the COCO evaluation methodology [<xref ref-type="bibr" rid="ref15">19</xref>], but no physical distance information is used. The authors of nuScenes and Waymo did set a baseline for various detection tasks, yet without considering the distances to the different objects explicitly. The analysis closest to ours is done by the authors of the ONCE dataset. They analyzed the collected data regarding the distance-wise mean Average Precision performance for 3D object detection using only point clouds. However, their distance thresholds were selected rather intuitively, whereas we specifically derive the distance from the domain safety requirements. Additionally, we analyze the spatial distribution of objects within the images, as well as the bounding box/object size compared to the image size.
Let us define the first object to be an obstacle (anything other than the ego vehicle) and the second object to be the ego vehicle. We consider the velocity of an obstacle to be equal to 0 km/h (representing a stand-still object and therefore the worst-case scenario), and the ego vehicle's deceleration to be 7 m/s² (it can vary within a range from 7 m/s² to 10 m/s² on dry roads [<xref ref-type="bibr" rid="ref17">21</xref>], [<xref ref-type="bibr" rid="ref18">22</xref>]). Vehicle deceleration can be seen as a function of the adhesion between the tires and the road, which depends on the material used in the tires, the material of the road, the temperature, the weather conditions, the mounted braking system, and the mass of the vehicle. Based on the definition of TTC, let us consider three driving scenarios:
• highway (recommended speed 130 km/h ≈ 36 m/s)
• country road (maximum speed 100 km/h ≈ 28 m/s)
      </p>
    </sec>
    <sec id="sec-2">
      <title>3. Background</title>
<p>• city (maximum speed 50 km/h ≈ 14 m/s)</p>
      <sec id="sec-2-1">
<p>Since the majority of related works only analyze datasets from a general ML perspective, omitting the data-centric paradigm, we decided to verify the SOTA automotive datasets in regard to a trajectory planning task. One part of our motivation is that trajectory forecasting is conditioned by the ego vehicle's velocity, which at higher values takes objects at further distances into account. In order to be able to evaluate the datasets' sufficiency and the quality of the annotated objects within, we have chosen the Time-To-Collision (TTC) as an instance to calculate the minimal safe distance from the ego vehicle. For the sake of simplicity, we do not consider any obstacles heading from the opposite direction (on the collision course), since we are working with static images and thus do not have the information about the relative motion of the objects. As described in [<xref ref-type="bibr" rid="ref16">20</xref>]: "The TTC value at a given instant is defined as the time for two objects to collide if they continue at their present velocity and on the same path".</p>
        <p>We now compute the minimal safe distance which needs to be ensured in order to brake in time (without initiating any evasive maneuver) for the following case: the ego vehicle is driving on the highway, the possible deceleration is equal to 7 m/s², and the reaction delay is 0.0 s. Based on the kinematic equations of a linearly decelerating object, the distance which the vehicle will travel is a function of time, d(t) = d₀ + v₀t − ½at², where the time to standstill is a function of the deceleration, t = (v₀ − v)/a. With a linear deceleration of 7 m/s², the vehicle, moving within the legal limits, will reach its standstill state in time t = (36 − 0)/7 = 5.14 s. Within this time, the ego vehicle will travel a distance of d = 0 + 36 · 5.14 − ½ · 7 · 5.14² = 185.04 − 92.46 = 92.58 m. For completeness, the braking distance under the same weather conditions on country roads is 55.11 m and in the city 14 m, as can be seen in Table 2. As mentioned earlier, this process can be generalized and repeated for any ambient conditions, type of vehicle, and speed limitations.</p>
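<p>The safe-distance computation above can be sketched in a few lines of Python (a minimal illustration of the kinematics; the function name is ours, not from the paper):
```python
def stopping_distance(v0, a, reaction_s=0.0):
    """Distance travelled until standstill for linear deceleration a,
    starting at speed v0 (m/s): d = v0*t_r + v0*t - 0.5*a*t**2 with t = v0/a."""
    t = v0 / a                      # time to reach standstill, e.g. 36/7 = 5.14 s
    return v0 * reaction_s + v0 * t - 0.5 * a * t ** 2

# Scenarios from Section 3 with a = 7 m/s^2 and zero reaction delay:
for name, v in (("highway", 36.0), ("country road", 28.0), ("city", 14.0)):
    print(f"{name}: {stopping_distance(v, 7.0):.2f} m")
```
Small deviations from the values quoted above (92.58 m, 55.11 m) come from rounding the speeds to whole metres per second and from rounded intermediate steps.</p>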
<p>Furthermore, relating the AP to the objects' relative BB sizes or distances highlights detailed discrepancies in the performance of models trained on the same dataset. We therefore incorporate the minimal safe distance d of each scenario as a threshold for the creation of test subsets 𝒜 ⊂ ℬ ⊂ 𝒟 from the original test dataset 𝒟. For instance, 𝒜 contains only objects at distances greater than d(highway). For each subset the average precision can be calculated, representing a concrete value for a specific operational domain, e.g. driving in the city. Equation 1 represents a different perspective on the average precision metric, which we call the automotive mean AP.</p>
<p>
        It is noticeable that the highway's maximum foresight boundary will in reality be limited by the physical properties of the camera or by the maximum speed difference between the ego vehicle and the object. But most importantly, objects within those safe ranges must be part of a dataset (training and testing); otherwise, the system will deal with an epistemic uncertainty [<xref ref-type="bibr" rid="ref19">23</xref>]. In order to be able to investigate the statistics of objects' appearances, we need a dataset that provides information about the object's distance. As mentioned in Section 2, some datasets such as Perl and BDD100K are not suitable for this task. Consequently, we have chosen the following large-scale automotive datasets: KITTI, Waymo Open Dataset, A2D2 from Audi, nuScenes, LyftLevel5 and ONCE, which contain 3D annotations and distances to the objects.
amAP = (1/N) Σ_{i=1}^{N} AP_i, (1)
where N is the number of domain-specific test subsets.
4. Analysis of datasets
In this chapter we analyze all datasets from Table 1 for the characteristics mentioned in Section 3. In order to evaluate the model's generalization ability, the collected data has to be divided into two parts, namely training and validation.
      </p>
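<p>Equation 1 and the subset construction can be sketched as follows (a hypothetical illustration; the helper names and example thresholds are ours, and a real evaluation would plug a standard AP computation into each subset):
```python
def split_by_safe_distance(objects, thresholds):
    """Build domain-specific test subsets: for each scenario keep only the
    objects farther away than that scenario's minimal safe distance."""
    return {name: [o for o in objects if o["dist_m"] > d]
            for name, d in thresholds.items()}

def amap(ap_per_subset):
    """Automotive mean AP (Eq. 1): the plain mean of AP over the N subsets."""
    return sum(ap_per_subset) / len(ap_per_subset)

# Thresholds taken from the safe distances derived in Section 3:
subsets = split_by_safe_distance(
    [{"dist_m": 30.0}, {"dist_m": 60.0}, {"dist_m": 120.0}],
    {"city": 14.0, "country": 55.1, "highway": 92.6},
)
# A standard AP metric would now be evaluated on each subset and averaged.
```
</p>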
<p>
        The training part is used for extracting the relevant features and finding a reasonable combination in order to build a high-level feature representation, whereas the validation part is used to evaluate the loss after each training cycle (epoch). The training loop usually ends when the loss on a validation set stagnates for several epochs [<xref ref-type="bibr" rid="ref20">24</xref>]. Logic demands that both the training and validation parts should have uniformly distributed object densities and properties (e.g. size, distance). We therefore decided to evaluate the agreement between the theoretical uniform distribution and the observed one by calculating the Wasserstein distance [<xref ref-type="bibr" rid="ref21">25</xref>].
In regard to the functionality of trajectory planning, we focused on the following in-dataset object characteristics:
• distribution of a BB's relative size: in order to verify that a variety of object sizes is captured within the dataset,
• distribution of the distance between obstacles and the ego vehicle (with relation to the minimum safe distances): in order to verify that objects at further distances are incorporated within the dataset,
• relation between a BB's relative size and the object's distance from the ego vehicle: to discover abnormalities within the dependency,
• heatmap of an object's appearance density: to visualize the potentially asymmetrical appearance of objects with relation to the ego vehicle,
• optical flow between consecutive images: in order to identify series of static images, which can lead to a class-imbalanced dataset.
      </p>
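<p>The uniformity check can be illustrated with a small, dependency-free sketch of the 1-D Wasserstein-1 distance between a normalized histogram and the uniform distribution over the same bins (the paper's wdist values come from a library implementation [25], so the normalization may differ):
```python
def wasserstein_to_uniform(hist):
    """Wasserstein-1 distance between the empirical distribution given by
    'hist' and the uniform distribution on the same bins, via the CDF
    formulation; the support is normalized to [0, 1]."""
    n = len(hist)
    total = float(sum(hist))
    dist, cdf_p, cdf_u = 0.0, 0.0, 0.0
    for count in hist:
        cdf_p += count / total       # empirical CDF
        cdf_u += 1.0 / n             # uniform CDF
        dist += abs(cdf_p - cdf_u)
    return dist / n                  # bin width = 1/n

print(wasserstein_to_uniform([1, 1, 1, 1]))  # 0.0: already uniform
print(wasserstein_to_uniform([1, 0, 0, 0]))  # all mass piled into one bin
```
</p>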
      </sec>
      <sec id="sec-2-2">
<title>4.1. Analysis of outliers</title>
        <p>As originally presented in [19] and further explored in [18], it seems reasonable to observe the mean Average Precision in relation to a specific object's size. Since the original authors clustered the object groups rather generally (small, middle, big), we propose to have clearly specified operational domain dependencies and to incorporate the minimum safe distances as thresholds.</p>
        <p>There is no reason to assume that all data are flawlessly annotated, since the task is usually done by several people, which increases the risk of an inconsistent annotation style. We therefore designed plausibility check functions which allow us to flag potentially wrongly annotated data and analyze it later on:
• f1: flags a bounding box whose relative size lies outside the range (0.0, 1.0],
• f2: flags a bounding box whose size deviates from the expected decay of BB size with distance to the ego vehicle,</p>
<p>• f3: flags bounding boxes whose mutual overlap exceeds a threshold th,
• f4: flags a frame whose dense optical-flow magnitude with respect to the previous frame falls below a threshold th.</p>
        <p>(Figure 2: magnitude of the dense optical flow.)</p>
      </sec>
      <sec id="sec-2-3">
<p>Exemplary results of the analysis of the objects' distance distributions, as well as the relation between the relative BB size and the object distance, can be seen in Figures 3 and 4. Moreover, we show an example of object appearance variation (heatmap) for the class Vehicle in the A2D2 dataset in Figure 5.</p>
      </sec>
      <sec id="sec-2-4">
<p>The function f1 returns a bounding box whose relative size is out of the range (0.0, 1.0]. The outliers of the otherwise exponentially decaying objects' sizes with respect to the distance to the ego vehicle are marked by the function f2. This function covers the quadratic dependency of the BB area and the mapping of the captured object in the real world to pixel coordinates. The parameters of the decay curve were found by least-squares optimization on the analyzed data and are therefore unique for each dataset and each class. The function f3 highlights objects whose bounding boxes significantly overlap. The threshold of the relative intersection area can be defined by th.</p>
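<p>Minimal sketches of the checks f1-f3, assuming axis-aligned boxes (x1, y1, x2, y2); the parameter names and the simple outlier criterion in f2 are our illustration, not the paper's exact fitted model:
```python
def f1_relative_size(bb_area, img_area):
    """f1: flag a BB whose relative size falls outside (0.0, 1.0]."""
    r = bb_area / img_area
    return not (1.0 >= r > 0.0)

def f2_size_decay(rel_size, dist_m, a, b, tol=3.0):
    """f2: flag an outlier of the fitted size-over-distance decay
    a / (dist + b)**2; a and b come from a per-class least-squares fit."""
    expected = a / (dist_m + b) ** 2
    return abs(rel_size - expected) > tol * expected

def f3_overlap(box_a, box_b, th=0.8):
    """f3: flag two boxes whose intersection area, relative to the smaller
    box, exceeds the threshold th."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    smaller = min((box_a[2] - box_a[0]) * (box_a[3] - box_a[1]),
                  (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]))
    return ix * iy / smaller > th
```
</p>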
<p>
          Examples of the outcomes for functions f1-f3 are given in Figure 1. We further found that most of the datasets contain sequences of "stop and go" in a traffic jam, or idling at crossroads. These situations result in the recording of many similar images without object or surrounding variation. Therefore we added f4, which calculates a dense optical flow [<xref ref-type="bibr" rid="ref22">26</xref>] from the previous and current images in the sequence and returns a positive flag in case the magnitude sinks below an empirically defined threshold, as seen in Figure 2. The higher the value of the magnitude, the more objects were moving from frame to frame. With this method, we could even identify whether the data were recorded repeatedly in the same place [<xref ref-type="bibr" rid="ref23">27</xref>] (when recorded in one session), but we only used it to discover static scenarios.
        </p>
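<p>A sketch of f4, under the assumption that a dense flow field (one (dx, dy) vector per pixel, e.g. from a Farneback-style estimator [26]) is already available; the threshold is an empirical placeholder:
```python
def f4_static_scene(flow, th=1.0):
    """f4: flag a frame pair as static when the mean magnitude of the
    dense optical flow sinks below the empirical threshold th."""
    mags = [(dx * dx + dy * dy) ** 0.5 for row in flow for dx, dy in row]
    return th > sum(mags) / len(mags)

# A nearly motionless 4x4 flow field is flagged, a fast-moving one is not:
print(f4_static_scene([[(0.1, 0.0)] * 4] * 4))  # True
print(f4_static_scene([[(3.0, 4.0)] * 4] * 4))  # False
```
</p>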
<p>(Figure: panels (a)-(d); minimum safe distance in [m] for 130 km/h, 100 km/h and 50 km/h, plotted over the distance to the object [m].)</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Concluding Remarks</title>
      <p>the annotation style (objects under a certain pixel area
were excluded from the annotation process), and the
ambient conditions in which the dataset was recorded. In
addition, a system trained on such a dataset would have
to deal with epistemic uncertainty and look for additional
sources of information (namely LIDAR).</p>
<p>
        Furthermore, all datasets contain predominantly small-sized objects (the highest MEAN value of the relative BB size of the class Person was 0.091 in the case of the KITTI dataset). For comparison, the same can be stated for the well-known COCO dataset [<xref ref-type="bibr" rid="ref15">19</xref>], where the class Person has a MEAN relative BB size equal to 0.089. By generating heatmaps, we discovered that 99.8% of the objects appear only in the two lower quadrants of the image. Such information can lead to a significant downsizing of the field of view and thereby an acceleration of the detectors' inference time. The majority of overlapping BBs, with potentially wrong annotation styles, were extracted from sequences of streams at crossroads. Such static data sequences (9.55% in the case of the nuScenes dataset) contain a lot of similar features (the majority of surrounding objects are not moving) and could be removed from the dataset.
      </p>
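<p>The quadrant statistic behind the heatmap finding can be reproduced with a simple count (a sketch; image coordinates with the origin at the top-left, so a larger y means lower in the image):
```python
def lower_quadrant_fraction(centers, img_h):
    """Fraction of BB centers (x, y) lying in the two lower image quadrants,
    i.e. below the horizontal midline of the image."""
    lower = sum(1 for x, y in centers if y >= img_h / 2.0)
    return lower / len(centers)

# Three example centers in a 100-pixel-high image; two lie in the lower half:
print(lower_quadrant_fraction([(10, 80), (50, 90), (30, 10)], 100))
```
</p>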
<p>Finally, we defined and evaluated a reasonable set of rules, described in Section 4.1, which automatically verifies the quality of the collected data from a domain-related perspective. We encourage the community to use our "domain-centric" approach in order to create datasets under concrete functional constraints and train detectors on them. Our code and additional results are published on GitLab and can be publicly accessed.¹</p>
<p>This work deepened our vision of a domain-centric ML approach in the automotive industry. To conclude, we outline some research directions which we are currently investigating: (a) analysis of the automotive mAP based on the relative BB size or the distance to the object with SOTA object detectors; (b) object detector performance analysis on cleaned data (without outliers); (c) dataset creation with respect to our domain-centric approach; (d) combination of datasets in order to achieve a more uniform data distribution; (e) data augmentation to compensate for weak aspects in the datasets.</p>
      <p>¹ https://gitlab.com/arrk-fi/ObjectDetectionCriticality/-/tree/dependency_branch</p>
<p>(Figure: distribution of relative BB sizes for objects of class Small_Vehicle. A2D2 n_samp = 25043, w_dist = 0.96; Kitti n_samp = 28742, w_dist = 0.98; ONCE n_samp = 33292, w_dist = 0.98; nuScenes n_samp = 139575, w_dist = 0.99; Waymo n_samp = 1280729, w_dist = 0.99; LyftLevel5 n_samp = 190029, w_dist = 0.99.)</p>
      <p>(Figure: distribution of distances for objects of class Pedestrian. LyftLevel5 n_samp = 8751, w_dist = 0.60; nuScenes n_samp = 2686, w_dist = 0.82; Waymo n_samp = 767542, w_dist = 0.65; A2D2 n_samp = 4388, w_dist = 0.80; ONCE n_samp = 3006, w_dist = 0.86; Kitti n_samp = 4487, w_dist = 0.89.)</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <sec id="sec-4-1">
<p>This work is partly funded by ARRK Engineering GmbH. The work has also been supported by the grant of the University of West Bohemia, project No. SGS-2022-017, and by the Technology Agency of the Czech Republic, project No. CK03000179.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>SAE</given-names>
            <surname>International</surname>
          </string-name>
          ,
          <article-title>Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems</article-title>
          , volume
          <volume>J3016</volume>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gasparetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Boscariol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lanzutti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vidoni</surname>
          </string-name>
          ,
<article-title>Path planning and trajectory planning algorithms: A general overview</article-title>
          ,
          <source>Motion and operation planning of robotic systems</source>
          (
          <year>2015</year>
          )
          <fpage>3</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Xian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madhavan</surname>
          </string-name>
          , T. Darrell,
          <article-title>BDD100K: A diverse driving video database with scalable annotation tooling</article-title>
, CoRR abs/1805.04687 (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/1805.04687. arXiv:1805.04687.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cordts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Omran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ramos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rehfeld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Enzweiler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Benenson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Franke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schiele</surname>
          </string-name>
          ,
          <article-title>The cityscapes dataset for semantic urban scene understanding</article-title>
          ,
          <source>in: Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>3213</fpage>
          -
          <lpage>3223</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Pandey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>McBride</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Eustice</surname>
          </string-name>
,
          <article-title>Ford campus vision and lidar data set</article-title>
          ,
          <source>The International Journal of Robotics Research</source>
          <volume>30</volume>
          (
          <year>2011</year>
          )
          <fpage>1543</fpage>
          -
          <lpage>1552</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Schwalb</surname>
          </string-name>
,
          <article-title>Analysis of safety of the intended use (sotif)</article-title>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
<ref id="ref7">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>W. W.</given-names>
            <surname>Royce</surname>
          </string-name>
          ,
          <article-title>Managing the development of large software systems: concepts and techniques</article-title>
          ,
          <source>in: Proceedings of the 9th International Conference on Software Engineering</source>
          ,
          <year>1987</year>
          , pp.
          <fpage>328</fpage>
          -
          <lpage>338</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7b">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          , D. Manocha,
          <article-title>Trafficpredict: Trajectory prediction for heterogeneous traffic-agents</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>33</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>6120</fpage>
          -
          <lpage>6127</lpage>
          .
        </mixed-citation>
      </ref>
<ref id="ref8">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wohlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Baca</surname>
          </string-name>
          ,
          <article-title>The waterfall model in large-scale development</article-title>
          ,
          <source>in: International Conference on Product-Focused Software Process Improvement</source>
          , Springer,
          <year>2009</year>
          , pp.
          <fpage>386</fpage>
          -
          <lpage>400</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8b">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Geiger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stiller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Urtasun</surname>
          </string-name>
          ,
          <article-title>Vision meets robotics: The kitti dataset</article-title>
          ,
          <source>The International Journal of Robotics Research</source>
          <volume>32</volume>
          (
          <year>2013</year>
          )
          <fpage>1231</fpage>
          -
          <lpage>1237</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [4]
          <source>ISO</source>
          <volume>26262</volume>
          :
          <year>2018</year>
          ,
          <article-title>Road vehicles - Functional safety</article-title>
          <source>(ISO 26262)</source>
          , Standard, International Organization for Standardization,
          <year>2018</year>
          . [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Geyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kassahun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mahmudi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ricou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Durgesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hauswald</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. H.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mühlegg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Fernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jänicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mirashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Savani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sturm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vorobiov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Oelker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Garreis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schuberth</surname>
          </string-name>
          ,
          <article-title>A2D2: Audi autonomous driving dataset</article-title>
          ,
          <year>2020</year>
          . URL: https://arxiv.org/abs/2004.06320. doi:10.48550/ARXIV.2004.06320.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [5]
          <source>ISO/PAS 21448</source>
          :
          <year>2019</year>
          ,
          <article-title>Road vehicles - Safety of the intended functionality</article-title>
          , Standard, International Organization for Standardization,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ng</surname>
          </string-name>
          ,
          <article-title>Deep Learning AI data-centric AI competition</article-title>
          , https://https-deeplearning-ai.github.io/data-centric-comp/,
          <year>2021</year>
          . Accessed: 2022-08-16. [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kesten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Usman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Houston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pandya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nadhamuni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Low</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ondruska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Omari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kazakova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Platinsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shet</surname>
          </string-name>
          ,
          <article-title>Level 5 perception dataset 2020</article-title>
          , https://level-5.global/level5/data/,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>H.</given-names>
            <surname>Caesar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Bankiti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Lang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. E.</given-names>
            <surname>Liong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Krishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Baldan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Beijbom</surname>
          </string-name>
          ,
          <article-title>nuScenes: A multimodal dataset for autonomous driving</article-title>
          ,
          <source>in: CVPR</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>11621</fpage>
          -
          <lpage>11631</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kretzschmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Dotiwalla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chouard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Patnaik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tsui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Caine</surname>
          </string-name>
          , et al.,
          <article-title>Scalability in perception for autonomous driving: Waymo open dataset</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>2446</fpage>
          -
          <lpage>2454</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Niu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xu</surname>
          </string-name>
          , et al.,
          <article-title>One million scenes for autonomous driving: Once dataset</article-title>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>T.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Belongie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hays</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Perona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ramanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dollár</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Zitnick</surname>
          </string-name>
          ,
          <article-title>Microsoft COCO: Common objects in context</article-title>
          , in: ECCV, Springer,
          <year>2014</year>
          , pp.
          <fpage>740</fpage>
          -
          <lpage>755</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Van Der Horst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hogema</surname>
          </string-name>
          ,
          <article-title>Time-to-collision and collision avoidance systems</article-title>
          (
          <year>1993</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Institut für Unfallanalysen</surname>
          </string-name>
          ,
          <article-title>Bremstabelle Hamburg - Institut für Unfallanalysen - Bremstabelle</article-title>
          , https://unfallanalyse.hamburg/index.php/ifu-lexikon/bremsen/bremstabelle-a/,
          <year>2022</year>
          . Accessed: 2022-09-14.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Erd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jaśkiewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Koralewski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rutkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stokłosa</surname>
          </string-name>
          ,
          <article-title>Experimental research of effectiveness of brakes in passenger cars under selected conditions</article-title>
          ,
          <source>in: 2018 XI International Science-Technical Conference Automotive Safety, IEEE</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kendall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gal</surname>
          </string-name>
          ,
          <article-title>What uncertainties do we need in bayesian deep learning for computer vision?</article-title>
          , in: NIPS,
          <year>2017</year>
          , pp.
          <fpage>5574</fpage>
          -
          <lpage>5584</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>L.</given-names>
            <surname>Prechelt</surname>
          </string-name>
          ,
          <article-title>Early stopping-but when?</article-title>
          ,
          <source>in: Neural Networks: Tricks of the trade</source>
          , Springer,
          <year>1998</year>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>69</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>C.</given-names>
            <surname>Villani</surname>
          </string-name>
          ,
          <article-title>Optimal transport, old and new. notes for the 2005 saint-flour summer school</article-title>
          ,
          <source>Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer</source>
          <volume>3</volume>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>G.</given-names>
            <surname>Farnebäck</surname>
          </string-name>
          ,
          <article-title>Two-frame motion estimation based on polynomial expansion</article-title>
          ,
          <source>in: Scandinavian conference on Image analysis</source>
          , Springer,
          <year>2003</year>
          , pp.
          <fpage>363</fpage>
          -
          <lpage>370</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Davison</surname>
          </string-name>
          ,
          <article-title>Real-time simultaneous localisation and mapping with a single camera</article-title>
          ,
          <source>in: Computer Vision</source>
          , IEEE International Conference on, volume
          <volume>3</volume>
          , IEEE Computer Society,
          <year>2003</year>
          , pp.
          <fpage>1403</fpage>
          -
          <lpage>1403</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>