<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CILC</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>ASP Chef for Water Waste Monitoring</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mario Alviano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luis Angel Rodriguez Reiners</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DeMaCS, University of Calabria</institution>
          ,
          <addr-line>87036 Rende (CS)</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>40</volume>
      <fpage>25</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>Water quality monitoring is a critical task in environmental protection and climate change adaptation. In this paper, we present the use of ASP Chef, a lightweight web-based environment for exploring and transforming ASP answer sets, in the context of the Tech4You project. ASP Chef enables intuitive pipelines over arrays of interpretations, supporting data-driven analysis without requiring low-level programming or complex tooling. Inspired by CyberChef, ASP Chef has recently been extended with a novel mechanism for content generation based on Mustache templates. This feature allows answer sets to be transformed into JSON and other structured formats, enabling seamless integration with JavaScript-based visualization frameworks such as @vis.js/Network, Tabulator, and ApexCharts. We demonstrate how ASP Chef is used to visualize and analyze water quality data collected by multisensory buoys, which monitor a wide range of chemical and physical parameters. Raw data is preprocessed using Python libraries such as NumPy, Pandas, and TensorFlow for cleaning and neural network-based modeling. Selected portions of the cleaned data are then explored via ASP Chef recipes, launched directly through dumbo-asp, a tool that opens the browser and executes the recipe with the given input. Our results show that ASP Chef enables effective visual exploration of parameter trends, detection of critical values through logic queries, and the creation of interactive dashboards for domain experts. This work illustrates how declarative logic programming can be combined with modern front-end technologies to build practical tools for environmental monitoring and decision support.</p>
      </abstract>
      <kwd-group>
        <kwd>Answer Set Programming</kwd>
        <kwd>ASP Chef</kwd>
        <kwd>visualization</kwd>
        <kwd>data analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Answer Set Programming (ASP) is a declarative programming paradigm rooted in nonmonotonic logic
and stable model semantics [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5">1, 2, 3, 4, 5</xref>
        ], and it has been successfully applied in a wide range of
knowledge-intensive domains [
        <xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6, 7, 8, 9</xref>
        ]. ASP is particularly suitable for problems involving
combinatorial search, reasoning with incomplete or uncertain information, and constraint satisfaction.
Despite its strong theoretical foundations and expressiveness, the practical integration of ASP into
real-world applications has traditionally posed nontrivial challenges, especially for those applications
requiring interaction with modern web technologies or rich data visualizations [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15">10, 11, 12, 13, 14, 15</xref>
        ].
To address this gap, ASP Chef [16] has emerged as a lightweight framework designed to facilitate
the generation of structured output from ASP computations. ASP Chef offers a unique approach to
problem-solving through the concept of ASP recipes. These recipes consist of chains of ingredients
or operations combining computational tasks typical of ASP addressed by the clingo ASP solver [17]
with other operations like data manipulation and visualization.
      </p>
      <p>Originally inspired by CyberChef (https://gchq.github.io/CyberChef/), ASP Chef is a simple and
intuitive web application designed to facilitate the analysis and manipulation of answer sets produced
by ASP programs [18]. Unlike traditional ASP development environments, ASP Chef does not aim
to be an IDE or code editor. Instead, it focuses on providing a user-friendly interface for building
pipelines of operations over collections of answer sets, enabling users to inspect, transform, filter,
and visualize interpretations without the need to integrate complex external tools or write auxiliary
code in general-purpose programming languages. Recently, ASP Chef has been extended with support
for template-driven content generation [19], allowing users to define output formats using Mustache
templates (a popular web templating system; [20]). These templates are dynamically expanded using
the atoms projected via #show directives, enabling answer sets to be transformed into structured data
formats such as JSON, CSV, or Markdown. This makes it particularly effective for integrating ASP
reasoning with modern web-based visualization libraries, without requiring users to write glue code or
deal with low-level data wrangling. Among the recently integrated front-end frameworks are the
following:
• @vis.js/Network (https://visjs.org/) for visualizing graphs and network structures, useful for
understanding relationships and dependencies;
• Tabulator (https://tabulator.info/), an interactive table library that supports sorting, filtering,
pagination, and inline editing—ideal for inspecting structured datasets;
• ApexCharts (https://apexcharts.com/), a comprehensive charting library capable of producing
time series graphs, scatter plots, radar charts, box plots, heatmaps, and more.</p>
      <p>In this paper, we illustrate the application of ASP Chef within the context of the Tech4You project
(Technologies for climate change adaptation and quality of life improvement; https://iia.cnr.it/project/
tech4you/), where one of the goals is to develop intelligent tools for analyzing environmental data,
particularly focusing on water quality monitoring. As climate change and pollution increasingly affect
natural water sources, real-time monitoring and intelligent analysis of water parameters have become
essential tools for environmental protection, public health, and sustainable resource management.</p>
      <p>The data considered in our study was collected using multisensory buoys, which continuously
monitor a diverse set of chemical and physical water quality parameters. These include:
• chemical indicators such as ammonia (NH3), nitrate ions (NO3−), ammonium ions (NH4+), and
fluorescent dissolved organic matter (fDOM),
• physical and electrochemical properties such as specific conductance (SpCond), dissolved oxygen
(DO), redox potential (ORP), salinity (Sal), total dissolved solids (TDS), pH, temperature (T), and
turbidity.</p>
      <p>In addition to the in-water measurements, meteorological and hydrological observations were recorded
over the same time period, including rainfall, atmospheric pressure, hydrometric level, and humidity.
This multi-source dataset allows for rich, multi-dimensional analysis, uncovering relationships between
environmental conditions and water quality metrics.</p>
      <p>The complete sensor and meteorological dataset collected by the multisensory buoys, comprising
both water quality and meteorological parameters, is initially processed using standard Python data
science libraries, including NumPy and Pandas for data wrangling, cleaning, and transformation. These
tools are employed to handle missing values, normalize sensor readings, and align time series across
multiple sources. For predictive analysis and pattern recognition, selected features are fed into neural
network models implemented with TensorFlow, allowing the identification of anomalies and trends
through supervised and unsupervised learning techniques. Once this initial processing is complete,
selections of the cleaned and structured data are passed to ASP workflows using ASP Chef recipes. This
integration is made seamless through the use of dumbo-asp (https://github.com/alviano/dumbo-asp), a
lightweight Python module that provides a convenient interface to launch the ASP Chef environment.
With a single command, dumbo-asp opens the browser and loads a specified recipe along with the
relevant input data, enabling users to interactively explore results, apply logic-based filters, and visualize
findings without manual setup or scripting.</p>
      <p>Using ASP Chef, we were able to construct a suite of interactive visualizations that aid experts in
interpreting the data, identifying anomalies, and formulating hypotheses. ASP logic is used not only to
preprocess and filter the data, but also to declaratively express conditions of interest, such as threshold
exceedance, joint parameter deviations, or domain-specific constraints. These logic-based selections are
then rendered visually via the integrated frameworks. Among the visualizations, here we present the
following:
• Time series charts that compare the evolution of multiple parameters over time, helping identify
trends and correlations;
• Scatter plots with customizable axes to explore relationships between any pair of parameters,
including meteorological vs. chemical indicators;
• Box plots and radar charts that summarize the distribution and spread of parameters, highlighting
outliers or unusually high variability;
• Network diagrams (via @vis.js/Network) used to visualize correlation graphs or logical
dependencies encoded in ASP;
• Interactive tables (via Tabulator) for sorting, filtering, and browsing subsets of the dataset in a
user-friendly format.</p>
      <p>A particularly notable feature of ASP Chef is that the logic and the presentation remain decoupled but
aligned: the reasoning layer determines what information should be displayed, and the templating
system handles how it is rendered. This modularity encourages the reuse of logic components across
different visualizations, supports rapid prototyping, and empowers domain experts to explore the data
through different lenses without changing the underlying data processing pipeline.</p>
      <p>In summary, ASP Chef provides a novel and practical bridge between declarative logic programming
and web-based data visualization, offering a powerful toolset for environmental monitoring and decision
support. The case study in this paper demonstrates its effectiveness in the water quality domain, but
its architecture is general and can be readily applied to other fields where structured reasoning and
interactive exploration of complex datasets are needed.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>This section provides an overview of essential Python libraries and methodologies commonly employed
in data science, machine learning, and artificial intelligence, along with Answer Set Programming (ASP).</p>
      <sec id="sec-2-1">
        <title>2.1. Time-Series Forecasting</title>
        <p>Let us fix a set P = {p_1, ..., p_m} of parameters sampled at regular intervals Δt for n time steps. The
resulting n-length time-series associated with parameter p_i (1 ≤ i ≤ m) is denoted
x_i = (⟨v_{i,1}, t_1⟩, ..., ⟨v_{i,n}, t_n⟩), with Δt = t_j − t_{j−1} for every 1 &lt; j ≤ n, (1)
where v_{i,j} is the value of parameter p_i at observation time t_j (1 ≤ j ≤ n).</p>
        <p>In time-series forecasting, the goal is to predict the future value(s) of a variable based on its past
observations and, optionally, the historical data from other related variables. Formally, let p_i be the
variable we want to predict. The prediction at the next time step t + 1 can be defined as
v̂_{i,t+1} = f(v_{i,1}, ..., v_{i,t}, v_{a_1,1}, ..., v_{a_1,t}, ..., v_{a_h,1}, ..., v_{a_h,t}) (2)
where v̂_{i,t+1} is the predicted value of the target parameter p_i at time step t + 1; v_{i,1}, ..., v_{i,t} are the past
t observations of the target parameter p_i; the remaining arguments are the past t observations of the
h auxiliary variables {p_{a_1}, ..., p_{a_h}} ⊂ P. This formulation applies to both univariate and multivariate
time series forecasting: in the univariate case, only the history of the target variable is used; in the
multivariate case, the model also uses data from other related variables.</p>
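        <p>To make equation (2) concrete, the following sketch (purely illustrative, not part of the project's codebase; the window length and toy data are assumptions) builds the supervised pairs used to train a forecaster: each input collects the last few observations of all variables, and the target is the next value of the variable to predict.</p>

```python
import numpy as np

def make_windows(series, window):
    """Turn an (n, m) multivariate time-series into supervised pairs:
    X[t] holds the previous `window` observations of all m variables,
    y[t] is the next value of the target variable (column 0)."""
    X, y = [], []
    for t in range(window, len(series)):
        X.append(series[t - window:t])   # shape (window, m)
        y.append(series[t, 0])           # the value to predict
    return np.array(X), np.array(y)

# toy series: a target variable plus one auxiliary variable
series = np.column_stack([np.arange(10.0), 2 * np.arange(10.0)])
X, y = make_windows(series, window=3)
print(X.shape, y.shape)  # (7, 3, 2) (7,)
```

        <p>Arrays of this shape are exactly what a recurrent model such as an LSTM expects as input.</p>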
        <p>Long Short-Term Memory (LSTM) networks [21] are a specialized type of recurrent neural network
(RNN) designed to handle sequential data and capture long-term dependencies. They achieve this
through the use of memory cells and gating mechanisms; specifically, the input gate, output gate, and
forget gate (as shown in Figure 1). These components allow LSTMs to selectively retain or discard
information over time, making them well-suited for tasks that involve patterns spread across long
sequences. Thanks to these capabilities, LSTM networks are widely used in time-series forecasting
applications such as speech recognition, language modeling, and environmental data analysis.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Python Libraries</title>
        <p>Pandas (https://pandas.pydata.org/) is a Python library for data manipulation and analysis. Its core data
structures, DataFrame and Series, provide efficient handling of tabular and labeled one-dimensional
data, respectively. Pandas supports powerful indexing, reshaping, and aggregation operations, and
integrates with matplotlib for data visualization [22]. TensorFlow (https://www.tensorflow.org/) is an
open-source framework for developing and deploying machine learning models. It abstracts low-level
computation through high-level APIs and represents data as multidimensional arrays called tensors.
TensorFlow supports training and inference on various hardware accelerators, making it suitable for
both research and production [23].</p>
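        <p>A minimal sketch of the two Pandas data structures just mentioned (toy values of ours, not data from the project):</p>

```python
import pandas as pd

# DataFrame for tabular data; Series for labeled one-dimensional data
readings = pd.DataFrame({"pH":  [8.29, 8.29, 8.30, 8.28],
                         "TDS": [341, 333, 335, 339]})
temp = pd.Series([10.369, 10.335, 10.345, 10.350], name="Temp")
readings["Temp"] = temp          # a Series aligns by index when added

summary = readings.describe()    # count, mean, std, quartiles per column
print(summary.loc["count", "pH"], summary.loc["max", "TDS"])
```

        <p>The describe() call shown here is the same summary-statistics method used on the Fitterizzi dataset in Section 3.</p>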
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Answer Set Programming</title>
        <p>An ASP program is a finite set of rules. Each rule typically has a head, representing a conclusion (which
may be atomic or a choice), and a body, representing a set of conditions that must hold (a conjunction
of literals, aggregates and inequalities). Formally, an ASP program Π induces a collection (zero or more)
of answer sets (also known as stable models), which are interpretations that satisfy all the rules in Π
while also fulfilling the stability condition (i.e., the models must be supported and minimal in a specific
formal sense [24]). The intended output of a program can be specified using #show directives of the
form</p>
        <p>#show p(t) : conjunctive_query.</p>
        <p>Here, p denotes an optional predicate symbol, t is a (possibly empty) sequence of terms, and
conjunctive_query is a conjunction of literals serving as a condition for displaying instances of p(t).
Answer sets are then projected accordingly. For a detailed specification of syntax and semantics,
including #show and other directives, we refer to the ASP-Core-2 standard format [25].
Example 1. Consider a scenario where we want to choose a subset of water monitoring stations that
jointly cover a required set of chemical parameters. The following ASP program models this selection
problem:
1 : 1 &lt;= {selected(S) : station(S)} &lt;= N :- limit(N).
2 : :- required(P), #count{S : selected(S), covers(S,P)} = 0.</p>
        <p>3 : #show S : selected(S).</p>
        <p>Rule 1 is a choice rule that selects between 1 and N stations (as determined by the fact limit(N)). Rule
2 is a constraint that enforces full coverage: every required parameter P must be covered by at least
one selected station. The #show directive in rule 3 ensures that only the selected stations appear in the
output (projecting the answer sets accordingly). Suppose the following input facts define the stations,
the parameters they monitor, and the desired coverage:
parameter(ph). parameter(nitrate). parameter(lead). parameter(arsenic).
required(ph). required(nitrate). required(lead).
station(s1). station(s2). station(s3).
covers(s1, ph). covers(s1, nitrate).
covers(s2, lead). covers(s2, arsenic).
covers(s3, nitrate). covers(s3, lead).
limit(2).</p>
        <p>The program has the projected answer set s1 s2, indicating that selecting stations s1 and s2 ensures
full coverage of all required water quality parameters using at most two stations. ■</p>
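        <p>Example 1 can be double-checked outside ASP with a brute-force enumeration (a plain-Python sketch of ours, not the paper's encoding). Note that, besides {s1, s2}, the instance admits a second cover {s1, s3}: the ASP program has more than one answer set, and the text reports one of them.</p>

```python
from itertools import combinations

# Facts from Example 1, re-encoded in plain Python
covers = {"s1": {"ph", "nitrate"},
          "s2": {"lead", "arsenic"},
          "s3": {"nitrate", "lead"}}
required = {"ph", "nitrate", "lead"}
limit = 2

# every subset of at most `limit` stations covering all required parameters
solutions = [sel
             for k in range(1, limit + 1)
             for sel in combinations(sorted(covers), k)
             if required <= set().union(*(covers[s] for s in sel))]
print(solutions)  # [('s1', 's2'), ('s1', 's3')]
```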
      </sec>
      <sec id="sec-2-4">
        <title>2.4. ASP Chef</title>
        <p>An operation is a function receiving in input a sequence of interpretations and producing in output a
sequence of interpretations. Operations may produce side outputs (e.g., a graph visualization) and accept
parameters to influence their behavior. An ingredient is an instantiation of a parameterized operation
with side output. A recipe is a tuple of the form (encode, Ingredients, decode), where Ingredients is
a (finite) sequence o_1⟨p_1⟩, ..., o_k⟨p_k⟩ of ingredients, and encode and decode are Boolean values.
If encode is true, the input of the recipe is mapped to [[__base64__("s")]], where s = Base64(in)
(i.e., the Base64-encoding of the input string in). After that, the ingredients are applied one after
another. Finally, if decode is true, every occurrence of __base64__(s) is replaced with (the ASCII string
associated with) Base64⁻¹(s). Among the operations supported by ASP Chef there are Encode⟨p, s⟩, to
extend every interpretation in input with the atom p("b"), where b = Base64(s); Search Models⟨Π, n⟩,
to replace every interpretation I in input with up to n answer sets of Π ∪ {a. | a ∈ I}; and Show⟨Π⟩,
to replace every interpretation I in input with the projected answer set of Π ∪ {a. | a ∈ I} (where
Π comprises only #show directives).</p>
        <p>Example 2. The problem from Example 1 can be addressed in ASP Chef by a recipe comprising a single
Search Models⟨{1, 2, 3}, 1⟩ ingredient, where 1, 2, 3 denote the rules of Example 1. Alternatively, a recipe separating computational and presentational
aspects would comprise two ingredients, namely Search Models⟨{1, 2}, 1⟩ and Show⟨{3}⟩. ■</p>
        <p>Several operations in ASP Chef support expansion of Mustache templates [19]; among them, there are
Expand Mustache Queries, @vis.js/Network (to visualize graphs), Tabulator (to arrange data in interactive
tables), and ApexCharts (to produce different kinds of charts). A Mustache template comprises queries
of the form {{ Π }}, where Π is an ASP program with #show directives—alternatively, {{= p(t) :
conjunctive_query }} as a shorthand for {{ #show p(t) : conjunctive_query. }}. Intuitively, queries are expanded
using one projected answer set of Π ∪ {a. | a ∈ I}, where I is the interpretation on which the
template is applied. Separators can be specified using the predicates separator/1 (for tuples of
terms) and term_separator/1 (for terms within a tuple). The variadic predicate show/* extends a
shown tuple of terms (its first argument) with additional arguments that enable repeating tuples in
output and can be used as sorting keys (using predicate sort/1). Moreover, Mustache queries can use
@string_format(format, ...) to format a string using the given format string and arguments, and
floating-point numbers are supported with the format real("NUMBER"). Format strings can also be
written as (multiline) f-strings of the form {{f"..."}}, using data interpolation ${expression:format}
to render expression according to the given format.</p>
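        <p>The separator mechanism can be illustrated with a toy re-implementation (a simplified reading of ours; the real engine first computes a projected answer set with an ASP solver and supports show/*, sort/1, and f-strings, none of which are modeled here):</p>

```python
def expand(tuples, separator=", ", term_separator=" "):
    """Render projected tuples the way a {{= ...}} query would:
    terms within a tuple are joined by term_separator,
    tuples are joined by separator."""
    return separator.join(term_separator.join(str(t) for t in tpl)
                          for tpl in tuples)

# tuples as projected by e.g. {{= row(DT, PH) : row(DT, PH) }}
rows = [("2023-01-17", 8.29), ("2023-01-18", 8.30)]
out = expand(rows, separator="; ", term_separator=": ")
print(out)  # 2023-01-17: 8.29; 2023-01-18: 8.3
```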
        <p>Example 3. Recipes from Example 2 can be further extended by including ingredients to visualize input
and computed solution graphically. To this aim, the following Mustache template can be combined with
@vis.js/Network to obtain the graph shown in Figure 2:
{ data: { nodes: [ ... ] } }</p>
        <p>
Mustache queries define nodes and links in the graph starting from facts in the computed answer set. A
recipe addressing the selection problem and producing the visualization shown in Figure 2 is available
at https://asp-chef.alviano.net/s/CILC2025/station-selection. ■</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. The Fitterizzi Dataset</title>
      <p>Several physicochemical parameters were recorded continuously by a multiparametric probe located in
Fitterizzi (in the province of Cosenza, Southern Italy), among them ammonia (NH3), specific
conductance (SpCond), fluorescent dissolved organic matter (fDOM), nitrate ions (NO3−), dissolved oxygen
(DO), ammonium ions (NH4+), redox potential (ORP), salinity (Sal), total dissolved solids (TDS), pH,
temperature (T), and turbidity. Moreover, meteo-hydrological observations were available for the same
period of time, such as rainfall, atmospheric pressure, hydrometric level and humidity.</p>
      <p>Given their nature, water quality and meteorological data should be represented as time-series;
however, the provided dataset consists of two files (not necessarily aligned): a CSV file containing
data from the sensors; an Excel (XLSX) file containing meteorological and hydrological data. In fact,
once we loaded them into Pandas dataframes using the functions read_csv() and read_excel(), the
summary statistics reported in Tables 1–2 and obtained with the describe() method reveal that not
all sensor variables have the same number of observations. As shown by the plots in Figure 3, the
dataset misses several values. We also observed that some parameters exhibit extreme or unexpected
negative values, indicating calibration problems or malfunctions within the instruments (e.g., NO3−
and fDOM). Moreover, the time periods covered by the two datasets do not match perfectly, and the
infer_freq() function reveals that while weather data are collected regularly at hourly frequency
(Δt = 1h), sensor data has irregular intervals (no fixed Δt), meaning that the time between readings
is not consistent (i.e., the time-series do not conform to the format of equation 1). This is a problem
for time-series analysis, which usually assumes that data points are collected at regular intervals. To
fix this, the sensor data is resampled to a fixed 1-hour frequency using the resample() method. After
that the datasets are filtered to include only the overlapping time period, and merged together. Finally,
missing values are handled to ensure the final dataset is clean and usable.</p>
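        <p>The cleaning steps just described can be sketched as follows (synthetic stand-ins for the two sources; the real pipeline operates on the full Fitterizzi files):</p>

```python
import pandas as pd

# irregular sensor readings vs. regular hourly weather observations
sensor = pd.DataFrame(
    {"pH": [8.29, 8.30, 8.28]},
    index=pd.to_datetime(["2023-01-17 00:12", "2023-01-17 01:47",
                          "2023-01-17 03:05"]))
weather = pd.DataFrame(
    {"rain_mm": [0.0, 0.2, 0.0, 0.1]},
    index=pd.date_range("2023-01-17 01:00", periods=4, freq="1h"))

sensor_h = sensor.resample("1h").mean()       # fixed 1-hour frequency
merged = sensor_h.join(weather, how="inner")  # overlapping period only
merged = merged.interpolate()                 # handle missing values
print(merged.shape)  # (3, 2)
```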
      <p>Given the presence of sensor failures and missing values for some variables, we employed time-series
forecasting to predict the missing values and possibly correct extreme and unexpected values, so that
the data remains complete and reliable. We therefore use the readings from other sensors that are still
working to predict missing values. Here we report the case of the target variable pH, for which we
implemented a prediction model using an LSTM network and the architecture shown in Figure 1. In our
model, the LSTM layer captures long-term dependencies and patterns in our sequential data. A dropout
layer is inserted after the LSTM layer to prevent overfitting by randomly dropping a fraction of the
connections during training, which encourages the model to improve generalization. Finally, two dense
layers are used to produce the final prediction. We used the following hyperparameters: 64 neurons for
the LSTM layer; dropout rate of 0.5; 10 neurons for the first Dense layer.</p>
      <p>The definition, training, and evaluation of the model were all implemented using TensorFlow. In
training the LSTM model, we set the number of epochs to 80, with the objective of predicting the
next pH value based on the observations from the previous 12 hours. The Mean Square Error (MSE)
was selected as the loss function. For model optimization, we employed Adaptive Moment Estimation
(Adam) with a learning rate of 0.01, a well-established algorithm that has proven effective in practical
applications. Figure 4 reports MSE values across each epoch iteration, and the performance of the
trained model on the test dataset (RMSE). We observe that RMSE achieved a value of 0.014.</p>
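      <p>A sketch of the model and training configuration described above, in TensorFlow/Keras (the number of input features and the hidden activation are our assumptions; everything else follows the stated hyperparameters):</p>

```python
import tensorflow as tf

WINDOW, FEATURES = 12, 4   # 12-hour history; feature count is illustrative

# LSTM (64 units) -> Dropout (0.5) -> Dense (10) -> Dense (1), as in Figure 1
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="relu"),  # activation assumed
    tf.keras.layers.Dense(1),                      # next pH value
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="mse")  # MSE loss, Adam with learning rate 0.01
print(model.output_shape)  # (None, 1)
```

      <p>Training would then call model.fit(X, y, epochs=80) on supervised windows built as in Section 2.1.</p>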
    </sec>
    <sec id="sec-4">
      <title>4. Filtering and Visualizing Data with ASP Chef</title>
      <p>This section presents how we performed further data analysis using ASP Chef. In particular, we
showcase how the Fitterizzi dataset is loaded in ASP Chef and a few recipes to obtain interactive
visualization. An interactive recipe is available at https://asp-chef.alviano.net/s/CILC2025/fitterizzi.</p>
      <sec id="sec-4-1">
        <title>4.1. Data Loading</title>
        <p>In ASP Chef, CSV data files can be easily loaded using the ingredient Parse CSV. The result is a set of
__cell__ predicate instances, which in our case represent parameter values. After that, the parameters
can be grouped by data collection time using a Search Models ingredient.</p>
        <p>Example 4. In our case, for every data collection time (DT), collected parameters are conductivity (C),
fDOM (FD), oxidation-reduction potential (ORP), salinity (SAL), total dissolved solids (TDS), turbidity
(TUR), pH (PH) and temperature (TEMP). Once these parameters are represented as instances of
__cell__/3, the grouping by DT is achieved by Search Models using the following program:
row(DT,C,FD,ORP,SAL,TDS,TUR,PH,TEMP) :- __cell__(R,2,DT), __cell__(R,3,C),
__cell__(R,4,FD), __cell__(R,5,ORP), __cell__(R,6,SAL), __cell__(R,7,TDS),
__cell__(R,8,TUR), __cell__(R,9,PH), __cell__(R,10,TEMP), R &gt; 1.
■</p>
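        <p>The __cell__ representation produced by Parse CSV can be mimicked in a few lines of Python (a sketch under our reading of Example 4: 1-based row and column indexes, with row 1 holding the header, which explains the R &gt; 1 guard; the exact quoting of values in the real operation may differ):</p>

```python
import csv, io

def cell_facts(csv_text):
    """One __cell__(R,C,"V") fact per CSV field, with 1-based indexes;
    row 1 is the header row (hence the R > 1 guard in Example 4)."""
    facts = []
    for r, row in enumerate(csv.reader(io.StringIO(csv_text)), start=1):
        for c, value in enumerate(row, start=1):
            facts.append(f'__cell__({r},{c},"{value}")')
    return facts

facts = cell_facts("Id,DateTime,pH\n1,2023-01-17 00:10,8.29\n")
print(facts[1])  # __cell__(1,2,"DateTime")
```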
        <p>After loading the data, we can use the Tabulator operation to represent the dataset in a table, where
each row corresponds to a specific time point or observation, providing a clear and organized view of
the information for further analysis.</p>
        <p>Example 5. Continuing with the set of facts representing the raw data obtained in Example 4, we
create a table representation of the dataset. Using the Tabulator ingredient we encode a configuration
JSON object within a __tab__ predicate. This configuration specifies the input data and presentation
options. The following Mustache template renders the table shown in Figure 5:
{ data: [{{= {{f"{ DateTime: "${DT}",
Cond: ${C},
fDOM_QSU: ${FD},
ORP: ${ORP},
Sal_psu: ${SAL},
TDS: ${TDS},
Turbidity: ${TUR},
pH: ${PH},
Temp: ${TEMP},
}"}} : row(DT,C,FD,ORP,SAL,TDS,TUR,PH,TEMP) }} ],
layout: "fitColumns",
pagination: "local",
paginationSize: 10,
columns: [ {title: "Datetime", field: "DateTime" },
{title: "Cond", field: "Cond" },
{title: "fDOM QSU", field: "fDOM_QSU" },
{title: "ORP mV", field: "ORP" },
{title: "Sal psu", field: "Sal_psu" },
{title: "TDS mg/L", field: "TDS" },
{title: "Turbidity", field: "Turbidity"},
{title: "pH", field: "pH" },
{title: "Temp ∘ C", field: "Temp" } ]
}
It is worth noting how the data field is obtained using a Mustache query. When evaluated inside
Tabulator, the Mustache query expands to produce the following result:
[ { DateTime: "2023-01-17 00:10:00+01:00", Cond: 377.5, fDOM_QSU: 20.3, ORP:
333.3, Sal_psu: 0.25, TDS: 341, Turbidity: 556.91, pH: 8.29, Temp: 10.369 },
{ DateTime: "2023-01-17 00:20:00+01:00", Cond: 369, fDOM_QSU: 26.39, ORP:
333.1, Sal_psu: 0.25, TDS: 333, Turbidity: 398.02, pH: 8.29, Temp: 10.335 },
{ DateTime: "2023-01-17 00:30:00+01:00", Cond: 371, fDOM_QSU: 25.89, ORP: 332.1,
Sal_psu: 0.25, TDS: 335, Turbidity: 398.5, pH: 8.3, Temp: 10.345 }, ... ]
■</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Exploratory Data Analysis</title>
        <p>Raw data, in isolation, offers limited insight, and the sheer volume can be overwhelming to process.
Visual description of the data, or turning information into graphs, charts, and other visuals, helps people
understand the meaning of the values in the data set, revealing trends and patterns. We took advantage
of ApexCharts to obtain several interactive charts, and used Tabulator to compare the quartile and mean
values collected in Fitterizzi with those originating from a different location.</p>
        <p>[Figure 6: (a) Time Series of Water Quality Parameters (Conductivity and pH); (b) Anomalous Values
in Turbidity Measurements Over Time; (c) Scatter Plot of Conductivity, Salinity, and Total Dissolved
Solids (TDS).]</p>
        <p>Example 6. We used the data from Example 4 to obtain the charts reported in Figure 6 and investigate
water quality metrics. Figure 6a focuses on selected variables (conductivity and pH levels) to narrow the
focus of statistical analysis, and is obtained using the following Mustache template for the ApexChart
ingredient:
{ series: [{ name: 'Cond',
data: [{{= show({{f"${C}"}}, DT) : row(DT,C,_,_,_,_,_,PH,_) }}] },
... ] }</p>
        <p>[(a) Comparison of quartile values for measures from two locations. (b) Comparison of weekly mean
values between two locations.]</p>
        <p>The chart is useful to spot trends and variations over specific dates (in our case, November 2023).
Figures 6b and 6c are also created using Mustache templates and ApexCharts: Figure 6b identifies
anomalous turbidity values and time-delayed measurements, suggesting potential data collection issues
or environmental fluctuations; Figure 6c illustrates the correlation between conductivity, salinity, and
TDS, revealing a linear relationship that aligns with expected physicochemical behavior in aquatic
systems. Together, these visualizations provide foundational insights into data quality, temporal patterns,
and parameter interdependencies, guiding further statistical or domain-specific analysis. ■</p>
        <p>We compared the Fitterizzi dataset with similar data obtained in a different location in Cosenza,
namely San Nicola Arcella, with the aim of identifying common trends and patterns. Such common
trends are useful for detecting deviations from expected values, so that authorities can be informed and
take action. Here we report a comparison of aggregated data using groups in Tabulator, and
specialized charts in ApexCharts. Interactive recipes are available at https://asp-chef.alviano.net/s/CILC2025/
fitterizzi-vs-sannicola-1 and https://asp-chef.alviano.net/s/CILC2025/fitterizzi-vs-sannicola-2.
Example 7. Let us consider the mean and quartile values for the data of the two locations to compare.
Aggregated data can be loaded as presented in Example 4, and Tabulator can be configured to obtain
the tabular representation shown in Figure 7. With respect to the table presented in Example 5, the
main difference is how values are grouped by place. We used the following Mustache template:
{ data:[{{= {{f"{ Quart: "${Q}", Place: "${PLA}", Cond: ${COND},
fDOM_QSU: ${FDOM}, ORP: ${ORP}, Sal_psu: ${SAL},</p>
        <p>TDS: ${TDS}, Turbidity: ${TUR}, pH: ${PH}, Temp: ${TEMP}
}"}} : q(Q, R,PLA,COND,FDOM,ORP,SAL,TDS,TUR,PH,TEMP) }} ],
groupBy:"Quart", groupValues:[["min", "25%", "50%", "75%", "max"]],
... }</p>
        <p>Additionally, we can create a graphical representation of the aggregated data, allowing quicker insights
and comparison between the two locations. ApexCharts can be configured to obtain boxplots and radar
charts as shown in Figure 8. ■</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Related Work</title>
      <p>
        Efforts to improve the accessibility and interpretability of ASP have led to the development of various
visualization tools that help users understand the content and structure of answer sets. Early tools
such as ASPViz [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], IDPD3 [12], and Kara [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] introduced the idea of using ASP facts to describe
visual layouts, efectively allowing logic programs to define how their output should be displayed
graphically. These tools typically rely on an auxiliary logic program that encodes visual elements, such
as shapes, colors, and positions, as logical atoms, which are then interpreted by an external visualization
engine. This approach makes it possible to generate customized visualizations directly from ASP output,
without the need for post-processing or manual design. More recent tools, including clingraph [14]
and ASPECT [15], build on this idea by ofering more expressive and user-friendly declarative languages
for visualization. These systems aim to make visualization a first-class citizen in the ASP ecosystem,
enabling users to specify graphical representations in a concise and intuitive manner, using logic rules to
define visual features. The common underlying principle of all these approaches is to treat visualization
as a logic-based task, where a dedicated logic program produces a set of special atoms that encode
graphical primitives. These atoms are then interpreted by a rendering component to produce visual
outputs, bridging the gap between symbolic reasoning and human-understandable representations.
      </p>
      <p>ASP Chef [16] introduced its first visualization capability in [ 18], through the inclusion of the Graph
ingredient, a dedicated component designed to generate graph visualizations as a side output of
logicbased recipes. This approach allows users to embed visual instructions directly within ASP programs,
producing interactive graph representations that reflect the structure or behavior of the underlying
answer sets, following the same idea of ASPViz, IDPD3 and Kara. Building upon this initial efort,
recent versions of ASP Chef have significantly expanded their visualization support by integrating
modern JavaScript libraries with a diferent idea: adapt Mustache templates to ASP in order to configure
third-party libraries by querying the processed answer set. These include @vis.js/Network, which enables
the rendering of dynamic, interactive network diagrams; Tabulator, a flexible and highly customizable
table library for displaying structured data; and ApexCharts, a powerful library for rendering a wide
range of charts such as line, bar, and area plots. These enhancements aim to provide users with a richer,
more versatile visualization experience, enabling the creation of tailored graphical outputs that align
with the structure of ASP solutions. By supporting these libraries, ASP Chef not only improves its
usability for teaching and demonstration purposes, but also empowers domain experts to interpret
complex answer sets through visually appealing and interactive dashboards.</p>
      <p>Traditional machine learning techniques such as Random Forest (RF), Decision Trees (DT), Support
Vector Regression (SVR), Linear Regression (LR), and Gradient Boosting Regression (GBR) have been
widely employed for water quality prediction tasks, ofering interpretable and relatively eficient
solutions for estimating key water quality indicators [26, 27, 28]. These models typically rely on
handcrafted features derived from environmental sensors and historical records, often requiring careful
preprocessing and domain knowledge to achieve accurate results. In parallel, the field has seen growing
interest in deep learning approaches, which have demonstrated strong performance in capturing
nonlinear patterns and temporal dependencies inherent in water quality data. Notably, architectures based
on Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks have been
applied successfully to model sequential water-related variables [29, 30, 31]. For example, in [29], an
LSTM-based model was developed to forecast pH and water temperature in mariculture environments,
where water quality is strongly influenced by natural ecological conditions. More specialized models
have also emerged. In [27], the authors introduce a prediction framework based on Principal Component
Regression (PCR) to estimate the Water Quality Index (WQI) by reducing dimensionality and mitigating
multicollinearity among input variables. Another notable contribution is presented in [32], where
the authors leverage the strengths of multi-task learning [33] in combination with deep architectures.
Their model integrates Convolutional Neural Networks (CNNs) with LSTM layers in a hybrid structure,
enabling simultaneous prediction of multiple water quality indicators. These works demonstrate the
efectiveness of machine learning and deep learning in environmental monitoring.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this work, we demonstrated how ASP Chef can be effectively used to support the exploration
and interpretation of environmental time-series data through a suite of interactive and logic-driven
visualizations. By combining ASP with modern JavaScript visualization libraries, we enabled domain
experts to go beyond static data inspection and engage in dynamic hypothesis generation, anomaly
detection, and multivariate analysis. ASP logic played a central role not only in preprocessing and
filtering the data, but also in declaratively expressing domain-specific conditions, such as threshold
violations, co-occurring deviations across parameters, or user-defined constraints. These conditions
are translated into structured visual outputs, offering a clear and customizable interface to inspect the
reasoning results.</p>
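      <p>For intuition, the kind of threshold condition mentioned above can be sketched in plain Python; the limits below are invented for illustration, whereas the actual recipes express such conditions as ASP rules over the answer sets:</p>

```python
# Hypothetical allowed ranges; real limits depend on the monitored basin.
THRESHOLDS = {"pH": (6.5, 8.5), "Turbidity": (0.0, 5.0)}

def violations(sample):
    """Return the parameters whose values fall outside their allowed range.
    This mirrors the intent of an ASP rule such as
      alert(P) :- value(P,V), range(P,L,U), V < L.
    (rule shown for intuition only; names are illustrative)."""
    return sorted(
        p for p, v in sample.items()
        if p in THRESHOLDS and not (THRESHOLDS[p][0] <= v <= THRESHOLDS[p][1])
    )

# A reading with an out-of-range pH triggers exactly one alert.
print(violations({"pH": 9.1, "Turbidity": 3.2, "Temp": 18.5}))  # → ['pH']
```

      <p>Co-occurring deviations across parameters are handled the same way: a condition over several atoms of the answer set, rendered visually only in the presentation layer.</p>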
      <p>A particularly compelling aspect of our approach is the decoupling of logic and presentation: the ASP
rules define what to show, while the visualization templates control how to show it. This modularity
promotes the reuse of logic components across different visual formats, facilitates rapid prototyping, and
empowers domain experts to explore their data from multiple perspectives without modifying the core
data pipeline. Overall, the integration of declarative logic programming with interactive visualization
offers a powerful paradigm for data-driven decision-making, especially in domains like environmental
monitoring, where expert knowledge, interpretability, and adaptability are key.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was supported by the Italian Ministry of University and Research (MUR) under PRIN project
PRODE “Probabilistic declarative process mining”, CUP H53D23003420006, under PNRR project FAIR
“Future AI Research”, CUP H23C22000860006, under PNRR project Tech4You “Technologies for climate
change adaptation and quality of life improvement”, CUP H23C22000370006, and under PNRR project
SERICS “SEcurity and RIghts in the CyberSpace”, CUP H73C22000880001; by the Italian Ministry of
Health (MSAL) under POS projects CAL.HUB.RIA (CUP H53C22000800006) and RADIOAMICA (CUP
H53C22000650006); by the Italian Ministry of Enterprises and Made in Italy under project STROKE 5.0
(CUP B29J23000430005); under PN RIC project ASVIN “Assistente Virtuale Intelligente di Negozio” (CUP
B29J24000200005); and by the LAIA lab (part of the SILA labs). Mario Alviano is member of Gruppo
Nazionale Calcolo Scientifico-Istituto Nazionale di Alta Matematica (GNCS-INdAM).</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT-4o for grammar and spelling checks.
After using this tool, the authors reviewed and edited the content as needed and take full responsibility
for the publication’s content.</p>
      <p>[12] R. Lapauw, I. Dasseville, M. Denecker, Visualising interactive inferences with IDPD3, CoRR abs/1511.00928 (2015).</p>
      <p>[13] L. Bourneuf, An answer set programming environment for high-level specification and visualization of FCA, in: S. O. Kuznetsov, A. Napoli, S. Rudolph (Eds.), FCA4AI 2018, Stockholm, Sweden, July 13, 2018, volume 2149 of CEUR Workshop Proceedings, CEUR-WS.org, 2018, pp. 9–20. URL: https://ceur-ws.org/Vol-2149/paper2.pdf.</p>
      <p>[14] S. Hahn, O. Sabuncu, T. Schaub, T. Stolzmann, Clingraph: A system for ASP-based visualization, Theory Pract. Log. Program. 24 (2024) 533–559. doi:10.1017/S147106842400005X.</p>
      <p>[15] A. Bertagnon, M. Gavanelli, ASPECT: Answer set representation as vector graphics in LaTeX, J. Log. Comput. 34 (2024) 1580–1607. doi:10.1093/LOGCOM/EXAE042.</p>
      <p>[16] M. Alviano, D. Cirimele, L. A. Rodriguez Reiners, Introducing ASP recipes and ASP Chef, in: ICLP Workshops, volume 3437 of CEUR Workshop Proceedings, CEUR-WS.org, 2023.</p>
      <p>[17] M. Gebser, R. Kaminski, B. Kaufmann, T. Schaub, Multi-shot ASP solving with clingo, Theory Pract. Log. Program. 19 (2019) 27–82. doi:10.1017/S1471068418000054.</p>
      <p>[18] M. Alviano, L. A. Rodriguez Reiners, ASP Chef: Draw and expand, in: P. Marquis, M. Ortiz, M. Pagnucco (Eds.), Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning, KR 2024, Hanoi, Vietnam, November 2–8, 2024. doi:10.24963/KR.2024/68.</p>
      <p>[19] M. Alviano, W. Faber, L. A. Rodriguez Reiners, ASP Chef grows Mustache to look better, 2025. URL: https://arxiv.org/abs/2505.24537. arXiv:2505.24537.</p>
      <p>[20] J. S. Mittapalli, M. P. Arthur, Survey on template engines in Java, in: ITM Web of Conferences, volume 37, EDP Sciences, 2021, p. 01007.</p>
      <p>[21] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (1997) 1735–1780.</p>
      <p>[22] S. Molin, Hands-On Data Analysis with Pandas: A Python data science handbook for data collection, wrangling, analysis, and visualization, Packt Publishing Ltd, 2021.</p>
      <p>[23] N. Shukla, K. Fricklas, Machine Learning with TensorFlow, volume 7, Manning, Greenwich, 2018.</p>
      <p>[24] M. Gelfond, V. Lifschitz, Logic programs with classical negation, in: D. Warren, P. Szeredi (Eds.), Logic Programming: Proc. of the Seventh International Conference, 1990, pp. 579–597.</p>
      <p>[25] F. Calimeri, W. Faber, M. Gebser, G. Ianni, R. Kaminski, T. Krennwallner, N. Leone, M. Maratea, F. Ricca, T. Schaub, ASP-Core-2 input language format, Theory Pract. Log. Program. 20 (2020) 294–309. doi:10.1017/S1471068419000450.</p>
      <p>[26] M. Y. Shams, A. M. Elshewey, E.-S. M. El-Kenawy, A. Ibrahim, F. M. Talaat, Z. Tarek, Water quality prediction using machine learning models based on grid search method, Multimedia Tools and Applications 83 (2024) 35307–35334.</p>
      <p>[27] M. S. I. Khan, N. Islam, J. Uddin, S. Islam, M. K. Nasir, Water quality prediction and classification based on principal component regression and gradient boosting classifier approach, Journal of King Saud University-Computer and Information Sciences 34 (2022) 4773–4781.</p>
      <p>[28] A. H. Haghiabi, A. H. Nasrolahi, A. Parsaie, Water quality prediction using machine learning methods, Water Quality Research Journal 53 (2018) 3–13.</p>
      <p>[29] Z. Hu, Y. Zhang, Y. Zhao, M. Xie, J. Zhong, Z. Tu, J. Liu, A water quality prediction method based on the deep LSTM network considering correlation in smart mariculture, Sensors 19 (2019) 1420.</p>
      <p>[30] S. Aslan, F. Zennaro, E. Furlan, A. Critto, Recurrent neural networks for water quality assessment in complex coastal lagoon environments: A case study on the Venice Lagoon, Environmental Modelling &amp; Software 154 (2022) 105403.</p>
      <p>[31] L. Li, P. Jiang, H. Xu, G. Lin, D. Guo, H. Wu, Water quality prediction based on recurrent neural network and improved evidence theory: A case study of Qiantang River, China, Environmental Science and Pollution Research 26 (2019) 19879–19896.</p>
      <p>[32] X. Wu, Q. Zhang, F. Wen, Y. Qi, A water quality prediction model based on multi-task deep learning: A case study of the Yellow River, China, Water 14 (2022) 3408.</p>
      <p>[33] R. Caruana, Multitask learning, Machine Learning 28 (1997) 41–75.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] G. Brewka, T. Eiter, M. Truszczynski, Answer set programming at a glance, Commun. ACM 54 (2011) 92–103. doi:10.1145/2043174.2043195.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] E. Erdem, M. Gelfond, N. Leone, Applications of answer set programming, AI Mag. 37 (2016) 53–68.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] V. Lifschitz, Answer Set Programming, Springer, 2019.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] R. Kaminski, J. Romero, T. Schaub, P. Wanko, How to build your own ASP-based system?!, Theory Pract. Log. Program. 23 (2023) 299–361.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] M. Alviano, C. Dodaro, S. Fiorentino, A. Previti, F. Ricca, ASP and subset minimality: Enumeration, cautious reasoning and MUSes, Artif. Intell. 320 (2023) 103931.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] P. Cappanera, M. Gavanelli, M. Nonato, M. Roma, Logic-based Benders decomposition in answer set programming for chronic outpatients scheduling, Theory Pract. Log. Program. 23 (2023) 848–864. doi:10.1017/S147106842300025X.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] M. Cardellini, P. D. Nardi, C. Dodaro, G. Galatà, A. Giardini, M. Maratea, I. Porro, Solving rehabilitation scheduling problems via a two-phase ASP approach, Theory Pract. Log. Program. 24 (2024) 344–367. doi:10.1017/S1471068423000030.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] F. Wotawa, On the use of answer set programming for model-based diagnosis, in: H. Fujita, P. Fournier-Viger, M. Ali, J. Sasaki (Eds.), IEA/AIE 2020, Kitakyushu, Japan, September 22–25, 2020, Proceedings, volume 12144 of LNCS, Springer, 2020, pp. 518–529. doi:10.1007/978-3-030-55789-8_45.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] R. Taupe, G. Friedrich, K. Schekotihin, A. Weinzierl, Solving configuration problems with ASP and declarative domain specific heuristics, in: M. Aldanondo, A. A. Falkner, A. Felfernig, M. Stettinger (Eds.), Proceedings of the 23rd International Configuration Workshop (CWS/ConfWS 2021), Vienna, Austria, 16–17 September 2021, volume 2945 of CEUR Workshop Proceedings, CEUR-WS.org, 2021, pp. 13–20. URL: https://ceur-ws.org/Vol-2945/21-RT-ConfWS21_paper_4.pdf.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] O. Cliffe, M. D. Vos, M. Brain, J. A. Padget, ASPVIZ: Declarative visualisation and animation using answer set programming, in: ICLP, volume 5366 of Lecture Notes in Computer Science, Springer, 2008, pp. 724–728.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] C. Kloimüllner, J. Oetsch, J. Pührer, H. Tompits, Kara: A system for visualising and visual editing of interpretations for answer-set programs, in: INAP/WLP, volume 7773 of Lecture Notes in Computer Science, Springer, 2011, pp. 325–344.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>