=Paper=
{{Paper
|id=Vol-3647/SemIIM2023_paper_11
|storemode=property
|title=Towards Semantic Modeling of Camera from Image Quality Testing Perspective: Valeo Vision Systems Case
|pdfUrl=https://ceur-ws.org/Vol-3647/SemIIM2023_paper_11.pdf
|volume=Vol-3647
|authors=Muhammad Yahya,Aedan Breathnach,Faisal Khan,Iman Abaspur,Rajkumar Ranganathan
|dblpUrl=https://dblp.org/rec/conf/semiim/YahyaBKAR23
}}
==Towards Semantic Modeling of Camera from Image Quality Testing Perspective: Valeo Vision Systems Case==
Muhammad Yahya1, Aedán Breathnach1, Faisal Khan1, Iman Abaspur1 and Rajkumar Ranganathan1
1 Valeo Vision Systems, Dunmore, Tuam, Galway, Ireland
Abstract
Advanced Driving Assistance Systems (ADAS) have significantly enhanced the modern driving experience
by integrating state-of-the-art technology to bolster vehicle safety and driver comfort. Cameras, serving
as the eyes of these systems, are pivotal in capturing real-time visual data. This data is processed and
analyzed to make instantaneous decisions, such as object detection and lane departure warnings. Each
manufactured camera has to pass certain tests to qualify for production. These tests generate a huge
amount of data, which is stored in different storage locations, and it takes the image quality team
considerable effort and time to digest such scattered data. To solve this data integration issue, we
propose the Camera Ontology (CamOnt), whose scope is to represent the camera testing domain
knowledge. The ontology is built using knowledge gathered from domain experts and the ISO12233
document, and is evaluated with the provided catalogue of SPARQL queries.
Keywords
ADAS, Ontology modeling, Image Quality Measurement
1. Introduction
Advanced Driving Assistance Systems (ADAS) have revolutionized the modern driving experience by integrating cutting-edge technology to improve vehicle safety and driver comfort [1]. By
offering features such as adaptive cruise control, lane departure warnings, automatic emergency
braking, and parking assistance, ADAS reduces the likelihood of accidents and alleviates driver
fatigue and stress [2]. As a result, drivers enjoy a more relaxed and confident driving experience.
Moreover, the widespread adoption of ADAS has the potential to significantly reduce traffic
accidents and fatalities, making roads safer for everyone [3]. The integration of these systems
is gradually shifting the responsibility of driving from humans to machines, paving the way for
the future of autonomous vehicles and transforming the way society perceives and interacts
with transportation.
Cameras play a pivotal role in the functionality and efficiency of Advanced Driving Assistance
SemIIM’23: 2nd International Workshop on Semantic Industrial Information Modelling, 7th November 2023, Athens,
Greece, co-located with 22nd International Semantic Web Conference (ISWC 2023)
muhammad.yahya@valeo.com (M. Yahya); aedán.breathnach@valeo.com (A. Breathnach);
faisal.khan@valeo.com (F. Khan); iman.abaspur@valeo.com (I. Abaspur); rajkumar.ranganathan@valeo.com
(R. Ranganathan)
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
Systems (ADAS) [4]. Serving as the eyes of the system, cameras capture real-time visual data,
which is then processed and analyzed to make split-second decisions, such as object detection,
lane departure warnings, and pedestrian alerts. The precision and reliability of these cameras
directly influence the system’s ability to prevent potential accidents and ensure driver safety.
However, if an ADAS camera fails, the system may fail to detect obstacles or may misinterpret visual
data, leading to compromised safety features and an increased risk of accidents [5, 6]. Causes of
camera failure range from physical obstructions such as dirt or debris on the lens, through software
glitches and malfunctioning hardware components, to adverse environmental conditions such
as extreme temperatures. Ensuring the consistent performance and reliability of ADAS cameras
is thus paramount for the overall safety and effectiveness of the system.
Valeo1, an Original Equipment Manufacturer (OEM) and a global automotive supplier, is at
the forefront of developing advanced vision systems that play a crucial role in the realm of
ADAS. Their state-of-the-art cameras are designed to enhance vehicle safety by providing a
comprehensive view of the vehicle’s surroundings, enabling features such as lane departure
warnings, traffic sign recognition, and pedestrian detection. Valeo’s vision systems leverage
sophisticated image processing algorithms and high-resolution sensors to ensure accurate
real-time data interpretation, even in challenging lighting or weather conditions. As ADAS
technologies continue to evolve towards fully autonomous driving, Valeo’s commitment to
innovation and quality ensures that its cameras remain an integral component in driving safety
advancements, helping to reduce accidents and save lives on the road.
Each camera undergoes rigorous testing to meet specified standards in the Valeo production
lines. During these tests, a huge amount of data is produced, which is then stored across various
storage locations. The Image Quality (IQ) team is responsible for evaluating this scattered data
to ensure that testing criteria are met. The challenge, however, arises from the dispersion of data
across multiple locations, resulting in the IQ team spending an extended duration on this task.
To address this issue, ontologies have emerged as a significant tool for integrating the data [7]. Ontologies2 represent the domain knowledge of the manufacturing domain and thereby support data integration and interoperability [10, 11].
In this work, we propose the Camera Ontology (CamOnt) to integrate the data residing in different places in the Valeo vision system. CamOnt is built with knowledge gathered from Valeo experts, with its scope defined to help the image quality team access the integrated data via a uniform model. At present, the ontology is evaluated with an expert-defined query catalogue.
2. Related Work
In the evolving landscape of semantic web technologies, ontologies have emerged as pivotal
techniques for representing domain-specific knowledge [12, 13, 14]. The domain of camera
1 https://www.valeo.com/en/catalogue/cda/surround-view-camera-solutions/
2 Gruber 1993 [8] defines an ontology as a formal, explicit specification of a shared conceptualization. The basic elements of an ontology are concepts, the relations between them, and its axioms. When instances of concepts are populated into the developed ontology, it becomes a knowledge base, also known as a knowledge graph [9]. An ontology has two key components: the terminological component (TBox) and the assertional component (ABox), which specify the concepts and their instances, respectively.
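The TBox/ABox distinction described above can be sketched with plain (subject, predicate, object) triples. The following minimal Python illustration uses hypothetical names (Camera, hasTest, cam1, and so on) rather than the actual CamOnt data:

```python
# TBox: terminological (schema-level) statements about classes and properties.
tbox = [
    ("Camera", "rdf:type", "owl:Class"),
    ("Test", "rdf:type", "owl:Class"),
    ("hasTest", "rdfs:domain", "Camera"),
    ("hasTest", "rdfs:range", "Test"),
]

# ABox: assertional (instance-level) statements about individuals.
abox = [
    ("cam1", "rdf:type", "Camera"),
    ("test1", "rdf:type", "Test"),
    ("cam1", "hasTest", "test1"),
]

# TBox plus populated ABox together form a knowledge base (knowledge graph).
knowledge_base = tbox + abox
```

In a real setting the same statements would be serialized in RDF/OWL; the tuples here only make the two-layer structure explicit.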
Figure 1: Class hierarchy of the Camera testing ontology
technology and testing is no exception, with several attempts made to formalize and structure
the vast array of concepts and relationships inherent to it.
Wei et al. explored sentiment analysis within the context of digital camera reviews [15].
They constructed a Sentiment Ontology Tree (SOT) specifically for digital cameras, which
contained nodes representing various attributes of the camera and their associated sentiments.
This work provides a unique perspective on camera ontology, emphasizing the sentiment aspect
of camera attributes rather than just the technical specifications. In the visual domain, the
Spatio-Temporal Visual Ontology was introduced [16]. While not directly related to cameras,
it offers insights into the semantic representation of visual data, such as images produced by
cameras. A notable contribution in the broader realm of product families comes from Nanda
et al. [17]. They showcased the practicality of the Product Family Ontology Development
Methodology (PFODM) by constructing an ontology for a family of one-time-use cameras.
Existing ontologies such as those mentioned above touch on cameras, but they do not specifically address the complexity of camera testing for automotive use, even though camera technology in the automotive sector is crucial for enhancing safety. Our work fills this gap, focusing on this niche yet vital area. The rapid evolution of camera technology underscores the need for such a specialized ontology, especially since the current literature does not adequately cover this domain.
3. Ontology Development Methodology
We now discuss the ontology development process, which we have adopted from [18]. In the pursuit of a semantic model for camera image quality testing, the Valeo Vision Systems case study emphasizes the significance of a structured ontology development approach. The initial phase involves defining the functional and non-functional requirements for an ontology tailored to the complexity of camera image quality. Collaborating with domain experts in line with the ISO12233 document, specific use cases related to image quality are described. This is strengthened by a thorough examination of the ISO12233 standard document and datasets, laying the groundwork for the ontology's purpose and scope.
The next phase is the formalization of concepts. By scrutinizing existing ontologies, relevant terminologies are identified, ensuring alignment with the camera domain. This formalization is characterized by establishing relationships between concepts; for instance, a concept like ImageSensor might be linked to ImageProcessingAlgorithm through specific properties. Leveraging tools such as Protégé, these formalized concepts are translated into RDF/OWL formats. After the development of the ontology, the next step is its evaluation. In this preliminary work, we have evaluated our ontology based on a set of competency questions.
4. Ontology Overview
This section presents an overview of the CamOnt ontology. The ontology focuses on the domain
of camera testing and visual resolution measurements (see Figure 1 for class hierarchy). The
ontology encapsulates various aspects of camera testing, from the settings and conditions
under which cameras are tested to the specific features of test charts used in visual resolution
measurements.
The key classes in the ontology include Camera, which represents the primary entity being tested, and Test, which captures the various tests a camera might undergo. The ontology also delves into the nuances of test conditions, such as white balancing (AutomaticWhiteBalancing and ManualWhiteBalancing), gamma correction (GammaCorrection), and camera focusing (CameraFocusing). Test charts, essential tools in camera testing, are represented with classes like TestChart, DimensionalSpecification, and HyperbolicWedgeTestPattern. Table 1 shows some of the classes with their definitions.
Table 1
Classes and their definitions in the ontology
Camera: The primary entity being tested, representing any camera device.
Test: Captures the various evaluations or procedures a camera may undergo.
AutomaticWhiteBalancing: Represents the automatic method of adjusting the colour balance in images taken by the camera.
ManualWhiteBalancing: Denotes the manual method of adjusting the colour balance in images.
GammaCorrection: Refers to the adjustment of luminance or colour values in the image.
CameraFocusing: Represents the techniques or methods used by the camera to focus on subjects.
TestChart: A tool or pattern used in camera testing to evaluate various parameters.
DimensionalSpecification: Specifies the dimensions or size-related attributes of a test chart.
HyperbolicWedgeTestPattern: A specific pattern on a test chart used for certain evaluations.
Table 2
Object properties with their respective domains and ranges in the camera testing ontology.
Object property (domain, range):
hasTest (Camera, Test)
hasTestConditions (Test, TestConditions)
hasChartSpecifications (Test, TestChartSpecifications)
hasResolutionMeasurements (Test, ResolutionMeasurements)
usesWhiteBalancing (Camera, WhiteBalancing)
employsTestChart (Test, TestChart)
hasCameraSetting (Test, CameraSettings)
measuresWithPattern (VisualResolutionMeasurement, HyperbolicWedgeTestPattern)
hasMaterialType (TestChart, Material)
adjustsGammaWith (Camera, GammaCorrection)
The ontology defines several object properties to capture the relationships between these classes. For instance, hasTestConditions can link a Test to its specific conditions, while usesWhiteBalancing might specify the type of white balancing a camera employs. Table 2 shows some of the object properties with their domain and range.
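As a rough illustration of how the domain and range declarations in Table 2 constrain the data, the sketch below checks a triple against a declared (domain, range) pair. The property names come from Table 2; the instance data (cam1, test1, awb1) and the helper function are hypothetical, not part of CamOnt:

```python
# Declared (domain, range) pairs for a few properties from Table 2.
DOMAIN_RANGE = {
    "hasTest": ("Camera", "Test"),
    "hasTestConditions": ("Test", "TestConditions"),
    "usesWhiteBalancing": ("Camera", "WhiteBalancing"),
    "employsTestChart": ("Test", "TestChart"),
}

# Toy type assertions for a handful of made-up individuals.
types = {
    "cam1": "Camera",
    "test1": "Test",
    "awb1": "WhiteBalancing",
}

def conforms(subject, prop, obj):
    """Check one triple against the property's declared domain and range."""
    domain, rng = DOMAIN_RANGE[prop]
    return types.get(subject) == domain and types.get(obj) == rng

print(conforms("cam1", "hasTest", "test1"))             # True
print(conforms("test1", "usesWhiteBalancing", "awb1"))  # False: domain is Camera
```

Note that a strict check like this treats domain/range as constraints; in OWL semantics a reasoner would instead infer the subject's and object's types from them.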
To exemplify the domain knowledge, consider axiom 1, which states that there exist some cameras that use the automatic white-balancing method. Axiom 2 represents that there exist some tests that employ the Hyperbolic Wedge Test Pattern in their test charts. Moreover, axiom 3 states that there exist some cameras that have a specific lens setting of type Camera Framing and Lens Focal Length Setting and also use a focusing method of type Camera Focusing.
1. Camera ⊓ ∃usesWhiteBalancing.AutomaticWhiteBalancing
2. Test ⊓ ∃employsTestChart.HyperbolicWedgeTestPattern
3. Camera ⊓ ∃hasLensSetting.CameraFramingandLensFocalLengthSetting ⊓ ∃usesFocusingMethod.CameraFocusing
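Operationally, an existential restriction such as the one in axiom 1 can be read as: an individual qualifies if it is typed as Camera and has at least one usesWhiteBalancing filler typed as AutomaticWhiteBalancing. A minimal sketch over a toy triple set, with all instance names (camA, awb, and so on) made up for illustration:

```python
# Toy ABox as (subject, predicate, object) triples.
triples = [
    ("camA", "rdf:type", "Camera"),
    ("camB", "rdf:type", "Camera"),
    ("awb", "rdf:type", "AutomaticWhiteBalancing"),
    ("mwb", "rdf:type", "ManualWhiteBalancing"),
    ("camA", "usesWhiteBalancing", "awb"),
    ("camB", "usesWhiteBalancing", "mwb"),
]

def instances_of(cls):
    """All individuals asserted to be of type cls."""
    return {s for (s, p, o) in triples if p == "rdf:type" and o == cls}

def satisfies_existential(ind, prop, filler_class):
    """True iff ind has some prop-filler typed as filler_class."""
    fillers = instances_of(filler_class)
    return any(s == ind and p == prop and o in fillers
               for (s, p, o) in triples)

# Individuals in Camera ⊓ ∃usesWhiteBalancing.AutomaticWhiteBalancing:
matches = [c for c in sorted(instances_of("Camera"))
           if satisfies_existential(c, "usesWhiteBalancing",
                                    "AutomaticWhiteBalancing")]
print(matches)  # ['camA']
```

This only checks asserted triples; a DL reasoner would additionally account for inferred class memberships.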
5. Evaluation
According to Gomez-Perez et al. 1995, ontology evaluation is a technical judgment of the
ontology in relation to a frame of reference [19]. This frame of reference can encompass
requirement specifications, competency questions, and its real-world applications. A crucial
aspect of ontology evaluation is the formulation of competency questions from the user’s
perspective to ascertain whether the ontology meets its intended purpose. Hammar et al. 2010 proposed the creation of usage examples in natural language to highlight the significance of the ontology's concepts [20]. Commonly termed competency questions, these user-centric queries are instrumental in gauging the ontology's scope. Essentially, they represent the questions stakeholders aim to answer using the ontology and its linked knowledge base. Hence, designing a comprehensive set of competency questions that encapsulates most real-world scenarios is imperative. These questions necessitate thorough scrutiny to eliminate any that are irrelevant.
Table 3: Competency questions and their SPARQL representation with patterns. CE: class expression, OP: object property, DP: datatype property
Q1. Which cameras use Automatic White Balancing? Pattern: [CE1][OP1][CE2]. SPARQL: select ?camera where { ?camera cam:usesWhiteBalancing cam:AutomaticWhiteBalancing . }
Q2. What are the test charts used in a specific test? Pattern: [CE1][OP1][CE2]. SPARQL: select ?testChart where { ?test cam:employsTestChart ?testChart . }
Q3. Which cameras have a specific lens setting? Pattern: [CE1][OP1][CE2]. SPARQL: select ?camera where { ?camera cam:hasLensSetting cam:CameraFramingandLensFocalLengthSetting . }
Q4. What are the different gamma corrections used in camera tests? Pattern: [CE1][OP1][CE2]. SPARQL: select ?gammaCorrection where { ?test cam:employsGammaCorrection ?gammaCorrection . }
Q5. Which cameras use a specific focusing method? Pattern: [CE1][OP1][CE2]. SPARQL: select ?camera where { ?camera cam:usesFocusingMethod cam:CameraFocusing . }
5.1. Competency Questions: Valeo Use-case
In the context of CamOnt, the competency questions were sourced from Camera IQ domain experts, eliminating the need for an in-depth analysis of their effectiveness. It is worth noting that the experts who provided domain knowledge are different from those who provided the SPARQL queries.
These questions are tabulated in Table 3. The table’s first column presents the competency
questions in natural language, the second column delineates the triple pattern addressed by
each question, and the final column showcases the corresponding SPARQL query designed
to extract the requisite knowledge. Furthermore, we incorporated instances of camera data to validate the capability of CamOnt to represent them. The SPARQL queries are tailored to examine the cameras and their test results. Each test is conducted with specific parameters, such as the lens setting, white balancing method, gamma correction, and test chart used. These parameters are connected to the test via specific relations. For instance, query 1 reveals that an XYZ3 camera utilized automatic white balancing, query 2 returns a test chart with a hyperbolic wedge test pattern, and query 4 shows that a gamma correction of 2 is used in a particular test.
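To illustrate how a single-triple-pattern query such as Q1 is answered, the sketch below matches the pattern from Table 3 against a handful of dummy triples (dummy instance names, in keeping with the data policy noted in the footnote; the matcher is a simplified stand-in for a real SPARQL engine):

```python
# Dummy instance data; "cam:" prefixes mimic the ontology namespace.
triples = [
    ("cam:XYZ", "cam:usesWhiteBalancing", "cam:AutomaticWhiteBalancing"),
    ("cam:ABC", "cam:usesWhiteBalancing", "cam:ManualWhiteBalancing"),
    ("cam:XYZ", "cam:hasTest", "cam:test42"),
]

def match(pattern):
    """Match one triple pattern; terms starting with '?' are variables.

    Returns one variable-binding dict per matching triple.
    """
    results = []
    for triple in triples:
        binding = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Q1: select ?camera where { ?camera cam:usesWhiteBalancing
#                            cam:AutomaticWhiteBalancing . }
q1 = match(("?camera", "cam:usesWhiteBalancing",
            "cam:AutomaticWhiteBalancing"))
print(q1)  # [{'?camera': 'cam:XYZ'}]
```

All five queries in Table 3 follow this one-pattern shape; queries with several patterns would additionally require joining the bindings on shared variables.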
6. Conclusion and Future work
In this paper, we have developed and presented the CamOnt ontology, a semantic model tailored
for the domain of camera testing and visual resolution measurements. Our ontology stands as a
testament to the interplay between camera technology and semantic modeling, offering a struc-
tured approach to understanding and analyzing camera tests. The ontology is developed with the
knowledge acquired from ISO12233 documents and domain experts. Furthermore, the compe-
tency questions, curated with insights from domain experts, underscore the real-world relevance
and robustness of CamOnt. The catalogue of SPARQL queries, specifically designed for CamOnt,
showcases its ability to extract detailed insights and highlights its potential as a valuable tool for
Valeo end users. In the future, we will incorporate more domain knowledge of cameras beyond testing. Its harmonization with DOLCE or BFO will also be carried out.
References
[1] M. A. Farooq, P. Corcoran, C. Rotariu, W. Shariff, Object detection in thermal spectrum
for advanced driver-assistance systems (adas), IEEE Access 9 (2021) 156465–156481.
[2] W.-Y. Chung, T.-W. Chong, B.-G. Lee, Methods to detect and reduce driver stress: a review,
International journal of automotive technology 20 (2019) 1051–1063.
[3] S. Barakoti, Enhancing driving safety using artificial intelligence technology (2023).
3 We used dummy data instead of actual values due to Valeo's data policy. The data cannot be shared in any form outside the organization; due to the policy, we are unable to share the figures of the real queries.
[4] J. S. Murthy, G. Siddesh, W.-C. Lai, B. Parameshachari, S. N. Patil, K. Hemalatha, Objectdetect: A real-time object detection framework for advanced driver assistant systems using yolov5, Wireless Communications and Mobile Computing 2022 (2022).
[5] A. Ebrahimi, E. Akbari, Design and implementation of an affordable reversing camera
system with object detection and obd-2 integration for commercial vehicles, 2023.
[6] A. Wahid, M. Yahya, J. G. Breslin, M. A. Intizar, Self-attention transformer-based architecture for remaining useful life estimation of complex machines, Procedia Computer Science 217 (2023) 456–464.
[7] B. Zhou, Z. Tan, Z. Zheng, D. Zhou, Y. He, Y. Zhu, M. Yahya, T.-K. Tran, D. Stepanova, M. H.
Gad-Elrab, et al., Neuro-Symbolic AI at Bosch: Data Foundation, Insights, and Deployment,
Technical Report, 2022.
[8] T. R. Gruber, A translation approach to portable ontology specifications, Knowledge
acquisition 5 (1993) 199–220.
[9] M. Yahya, J. G. Breslin, M. I. Ali, Semantic web and knowledge graphs for industry 4.0,
Applied Sciences 11 (2021) 5110.
[10] M. Yahya, B. Zhou, Z. Zheng, D. Zhou, J. G. Breslin, M. I. Ali, E. Kharlamov, Towards
generalized welding ontology in line with iso and knowledge graph construction, in:
European Semantic Web Conference, Springer, 2022, pp. 83–88.
[11] M. Yahya, A. Ali, Q. Mehmood, L. Yang, J. G. Breslin, M. I. Ali, A benchmark dataset with
knowledge graph generation for industry 4.0 production lines, Semantic Web (????) 1–19.
[12] D. Rincon-Yanez, M. H. Gad-Elrab, D. Stepanova, K. T. Tran, C. C. Xuan, B. Zhou, E. Kharlamov, Addressing the scalability bottleneck of semantic technologies at bosch, arXiv preprint arXiv:2309.10550 (2023).
[13] Z. Zheng, B. Zhou, A. Soylu, E. Kharlamov, Towards a visualisation ontology for data
analysis in industrial applications (2022).
[14] A. Iqbal, A. Shahid, M. Roman, M. T. Afzal, M. Yahya, Exploiting contextual word embedding for identification of important citations: Incorporating section-wise citation counts and metadata features, IEEE Access 11 (2023) 114044–114060. doi:10.1109/ACCESS.2023.3320038.
[15] W. Wei, J. A. Gulla, Sentiment learning on product reviews via sentiment ontology tree, in:
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics,
2010, pp. 404–413.
[16] J. I. Olszewska, Spatio-temporal visual ontology, in: BMVA/EACL EPSRC Workshop on
Vision and Language (VL’11), 2011.
[17] J. Nanda, T. W. Simpson, S. R. Kumara, S. B. Shooter, A methodology for product family
ontology development using formal concept analysis and web ontology language (2006).
[18] M. Yahya, B. Zhou, J. G. Breslin, M. I. Ali, E. Kharlamov, Semantic modeling, development
and evaluation for the resistance spot welding industry, IEEE Access (2023).
[19] A. Gómez-Pérez, N. Juristo, J. Pazos, Evaluation and assessment of knowledge sharing
technology, Towards very large knowledge bases (1995) 289–296.
[20] K. Hammar, K. Sandkuhl, The state of ontology pattern research: a systematic review of
iswc, eswc and aswc 2005–2009, in: The Workshop On Ontology Patterns (WOP 2010) At
The 9th International Semantic Web Conference (ISWC 2010), 2010, pp. 5–17.