Highlights

WILF 2021 offered a venue for summarising recently published work, in order to increase its visibility. In this contribution we collect the abstracts of four presentations, with topics ranging from theory to applications.

Highlight 1: Generalizing CF1F2-integrals. GP Dimuro, G Lucca, B Bedregal, R Mesiar, JA Sanz, J Fernandez, CT Lin and H Bustince.
Highlight 2: General interval-valued overlap functions and n-dimensional admissibly ordered interval-valued overlap functions and its influence in interval-valued fuzzy rule-based classification systems. T Asmus, G Pereira Dimuro, JA Sanz, B Bedregal, J Fernandez and H Bustince.
Highlight 3: Counting data in presence of possibilistic uncertainty. C Mencar.
Highlight 4: Using Fuzzy C-Means for Error Mitigation in Quantum Measurement. G Acampora, A Vitiello.

WILF'21: The 13th International Workshop on Fuzzy Logic and Applications, December 20–22, 2021, Vietri sul Mare, Italy
©️ 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)

Highlight 1
Generalizing CF1F2-integrals
Graçaliz Pereira Dimuro1,2, Giancarlo Lucca3, Benjamín Bedregal4, Radko Mesiar5, José Antonio Sanz1, Javier Fernandez1, Chin-Teng Lin6 and Humberto Bustince1
1 Departamento de Estadística, Informática y Matemáticas, Universidad Publica de Navarra, Pamplona, Spain
2 Centro de Ciências Computacionais, Universidade Federal do Rio Grande, Rio Grande, Brazil
3 Programa de Pós-Graduação em Modelagem Computacional, Universidade Federal do Rio Grande, Rio Grande, Brazil
4 Departamento de Informática e Matemática Aplicada, Universidade Federal do Rio Grande do Norte, Natal, Brazil
5 Slovak University of Technology, Bratislava, Slovakia, and Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague, Czech Republic
6 Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia

Here, we highlight the research presented in [1], concerning a theoretical framework for a generalization of CF1F2-integrals, a family of Choquet-like integrals used successfully in the aggregation process of the fuzzy reasoning mechanisms of fuzzy rule-based classification systems. The proposed generalization, called gCF1F2-integrals, is based on so-called pseudo pre-aggregation function pairs (F1, F2), which are pairs of fusion functions satisfying a minimal set of requirements guaranteeing that the gCF1F2-integrals are either aggregation functions or just ordered directionally increasing functions satisfying the appropriate boundary conditions. We propose a dimension reduction of the input space in order to deal with repeated elements in the input, avoiding ambiguities in the definition of gCF1F2-integrals. We study several properties of gCF1F2-integrals, considering different constraints on the functions F1 and F2, and state under which conditions gCF1F2-integrals present averaging behavior or not. Several examples of gCF1F2-integrals are presented, considering different pseudo pre-aggregation function pairs defined by means of, e.g., t-norms, overlap functions, copulas that are neither t-norms nor overlap functions, and other functions that are not even pre-aggregation functions.
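To make the Choquet-like construction concrete, the following sketch computes an integral of the form studied in [1]. It is an illustration under our own assumptions, not the authors' implementation: inputs are taken in [0,1], a simple cardinality-based (symmetric) fuzzy measure is used, the result is truncated to [0,1], and the specific choices of F1 and F2 are examples only; the precise conditions making (F1, F2) a pseudo pre-aggregation function pair are those of [1].

```python
# Illustrative sketch only: a Choquet-like C_{F1F2} construction over inputs in [0,1],
# assuming a cardinality-based fuzzy measure and truncation of the result to [0,1].

def cardinality_measure(k, n):
    """Symmetric fuzzy measure m(A) = |A| / n, used here as a simple example."""
    return k / n

def cf1f2_integral(x, F1, F2, measure=cardinality_measure):
    """Compute sum_i [ F1(x_(i), m(A_(i))) - F2(x_(i-1), m(A_(i))) ] with x_(0) = 0,
    where x_(1) <= ... <= x_(n) and A_(i) = {(i), ..., (n)}."""
    n = len(x)
    xs = sorted(x)                       # x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0               # prev holds x_(i-1), starting from x_(0) = 0
    for i, xi in enumerate(xs):
        m_Ai = measure(n - i, n)         # |A_(i)| = n - i for 0-based i
        total += F1(xi, m_Ai) - F2(prev, m_Ai)
        prev = xi
    return min(1.0, max(0.0, total))

prod = lambda a, b: a * b                # with F1 = F2 = product, this recovers
x = [0.2, 0.7, 0.5]                      # the standard discrete Choquet integral
print(cf1f2_integral(x, prod, prod))     # ~0.467 (the mean, for this symmetric measure)
print(cf1f2_integral(x, min, prod))      # a different pair (F1 = minimum, F2 = product)
```

With F1 = F2 = product the sum telescopes into the usual discrete Choquet integral, which is the starting point that the generalization relaxes.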
Acknowledgments
Supported by the Spanish Ministry of Science and Technology (PC093-094 TFIPDL, TIN2016-81731-REDT, TIN2016-77356-P (AEI/FEDER, UE)), the Spanish Ministry of Economy and Competitiveness (project PID2019-108392GB-I00 / AEI / 10.13039/501100011033), UPNA (PJUPNA1926), CNPq (311429/2020-3, 301618/2019-4) and FAPERGS (19/2551-0001660).

References
[1] G. P. Dimuro, G. Lucca, B. Bedregal, R. Mesiar, J. A. Sanz, C.-T. Lin, H. Bustince, Generalized CF1F2-integrals: From Choquet-like aggregation to ordered directionally monotone functions, Fuzzy Sets and Systems 378 (2020) 44–67. doi:10.1016/j.fss.2019.01.009.

Highlight 2
General interval-valued overlap functions and n-dimensional admissibly ordered interval-valued overlap functions and its influence in interval-valued fuzzy rule-based classification systems
Tiago Asmus1,2, Graçaliz Pereira Dimuro1,3, José Antonio Sanz1, Benjamín Bedregal4, Javier Fernandez1 and Humberto Bustince1
1 Departamento de Estadística, Informática y Matemáticas, Universidad Publica de Navarra, Pamplona, Spain
2 Instituto de Matemática, Estatística e Física, Universidade Federal do Rio Grande, Rio Grande, Brazil
3 Centro de Ciências Computacionais, Universidade Federal do Rio Grande, Rio Grande, Brazil
4 Departamento de Informática e Matemática Aplicada, Universidade Federal do Rio Grande do Norte, Natal, Brazil

Overlap functions are a class of aggregation functions that are not required to be associative, generally used to indicate the overlapping degree between two values. They have been used in several practical problems, e.g., image processing, decision making and fuzzy rule-based classification systems (FRBCSs). Some generalizations of overlap functions have been proposed, such as n-dimensional and general overlap functions, which allowed their application in n-dimensional problems. More recently, the concept of interval-valued (iv) overlap functions was introduced, mainly to deal with the uncertainty in defining membership functions. Here, we highlight the research presented in two papers. First, in [1], we introduced the concepts of n-dimensional iv-overlap functions and general iv-overlap functions, studying their representability, characterization and construction methods, with application to iv-FRBCSs. We then noticed that, in iv-FRBCSs, the choice of an appropriate total order for intervals can play an important role. However, neither the relationship between the interval order and the n-dimensional iv-overlap function (which may or may not be increasing with respect to that order) nor the impact of this relationship on the classification process had been studied in the literature. Moreover, there was no clearly preferred n-dimensional iv-overlap function to be applied in an iv-FRBCS. Hence, in [2], we presented some new results on admissible orders, which allowed us to introduce the concept of n-dimensional admissibly ordered iv-overlap functions, developing a width-preserving construction method derived from an admissible order and an n-dimensional overlap function. We analyzed the behaviour of several combinations of admissible orders and n-dimensional (admissibly ordered) iv-overlap functions when applied in iv-FRBCSs. Our main contribution resided in pointing out the effect of admissible orders and n-dimensional admissibly ordered iv-overlap functions, from both theoretical and applied points of view.
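As a rough illustration of the interval-valued notions involved, the sketch below builds an n-dimensional interval-valued overlap function by applying an ordinary overlap function (here, the product) endpoint-wise to the input intervals, and compares the resulting intervals with a lexicographic total (admissible-type) order. This is one simple construction chosen by us for illustration; the general and width-preserving constructions, and the admissible orders actually analyzed, are those of [1] and [2].

```python
# Illustrative sketch under our own assumptions (not the constructions of [1, 2]):
# an n-dimensional interval-valued overlap function obtained endpoint-wise from an
# ordinary overlap function, plus a lexicographic order for comparing intervals.

from math import prod

def product_overlap(values):
    """The product is a standard n-dimensional overlap function on [0,1]^n."""
    return prod(values)

def iv_overlap(intervals, O=product_overlap):
    """Apply O to the left endpoints and to the right endpoints separately,
    so the output is again a closed subinterval of [0,1]."""
    return (O([a for a, _ in intervals]), O([b for _, b in intervals]))

def leq_lex1(x, y):
    """Lexicographic order on the first endpoint, ties broken by the second:
    a total order on intervals refining the usual componentwise order."""
    return x[0] < y[0] or (x[0] == y[0] and x[1] <= y[1])

ivs = [(0.4, 0.6), (0.5, 0.7), (0.3, 0.9)]   # interval-valued membership degrees
agg = iv_overlap(ivs)
print(agg)                                    # (0.06, 0.378)
print(leq_lex1(agg, (0.1, 0.2)))              # True: agg precedes (0.1, 0.2) since 0.06 < 0.1
```

In an iv-FRBCS the chosen order decides which rule or class "wins", which is why the interplay between the order and the iv-overlap function matters in [2].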
Acknowledgments
Supported by the Spanish Ministry of Science and Technology (PC093-094 TFIPDL, TIN2016-81731-REDT, TIN2016-77356-P (AEI/FEDER, UE)), the Spanish Ministry of Economy and Competitiveness (project PID2019-108392GB-I00 / AEI / 10.13039/501100011033), the Public University of Navarre under the project PJUPNA1926, CNPq (307781/2016-0, 301618/2019-4, 311429/2020-3) and FAPERGS (19/2551-0001660).

References
[1] T. C. Asmus, G. P. Dimuro, B. Bedregal, J. A. Sanz, S. P. Jr., H. Bustince, General interval-valued overlap functions and interval-valued overlap indices, Information Sciences 527 (2020) 27–50. doi:10.1016/j.ins.2020.03.091.
[2] T. C. Asmus, J. A. A. Sanz, G. Pereira Dimuro, B. Bedregal, J. Fernandez, H. Bustince, N-dimensional admissibly ordered interval-valued overlap functions and its influence in interval-valued fuzzy rule-based classification systems, IEEE Transactions on Fuzzy Systems (2021) 1–1. doi:10.1109/TFUZZ.2021.3052342 (early access).

Highlight 3
Counting data in presence of possibilistic uncertainty
Corrado Mencar
Department of Computer Science, University of Bari Aldo Moro, Bari, Italy

Modern technology allows collecting huge amounts of data, which call for complex methodologies for understanding and analyzing them. A basic operation on data is counting, i.e. finding the number of data samples having a specific value. Counting is often a preliminary step for several types of analysis, such as descriptive statistics, comparisons, etc. This is quite a simple operation if objects are represented as precise data, but it becomes non-trivial when data observations are uncertain. In fact, uncertainty in data should propagate to counting; therefore, the results are granular rather than precise. When data uncertainty is due to incomplete information, i.e. when there is not enough information in the observations to determine the value of the data, Possibility Theory can provide a convenient way to represent and process this kind of uncertainty. In particular, counting uncertain data with Possibility Theory leads to granular counts that are represented as fuzzy intervals. The formula of the granular count is derived on the basis of two weak assumptions that can be applied in a wide variety of problems involving uncertain data. Furthermore, the formulation can be extended to introduce the granular sum of counts, by taking into account the interactivity of granular counts. Two algorithms have been proposed to compute granular counting: exact granular counting, with quadratic time complexity in the number of observations, and approximate granular counting, with linear time complexity. Also, it is possible to extend approximate granular counting by computing bounds for the exact granular count; in this way, the efficiency of approximate granular counting is combined with certified bounds whose width can be adjusted according to user needs. On the other hand, the algorithm for exact granular counting can be extended by an incremental version which provides an efficient and exact computation of the granular count without requiring all data to be available, thus being applicable in scenarios involving data streams. Future research is aimed at devising algorithms for summing granular counts with polynomial time complexity, which may open the door to a novel methodology for analyzing data in full accordance with the Granular Computing paradigm.

Keywords
Possibility Theory, Granular Counting, Fuzzy Intervals, Data Uncertainty
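To convey the flavour of counting under possibilistic uncertainty, the toy sketch below is our own simplification, not the exact or approximate algorithms of [1]: each observation carries a possibility distribution over candidate values, and for a query value and each cut level we derive an optimistic count (observations where the value is possible at least to that degree) and a pessimistic count (observations where the value is necessary at least to that degree). The resulting nested intervals play the role of the alpha-cuts of a granular, fuzzy-interval count.

```python
# Toy illustration only (not the granular counting formulas of [1]): interval counts
# per cut level from possibilistic observations, with necessity of a value taken as
# 1 minus the highest possibility assigned to any other value.

def granular_count(observations, v, levels=(0.25, 0.5, 0.75, 1.0)):
    counts = {}
    for alpha in levels:
        upper = sum(1 for pi in observations if pi.get(v, 0.0) >= alpha)
        lower = sum(
            1 for pi in observations
            if 1.0 - max((p for u, p in pi.items() if u != v), default=0.0) >= alpha
        )
        counts[alpha] = (lower, upper)   # interval count at level alpha
    return counts

# Three uncertain observations over the values 'a' and 'b' (1.0 = fully possible).
obs = [
    {"a": 1.0, "b": 0.2},   # almost certainly 'a'
    {"a": 1.0, "b": 1.0},   # completely unknown: both values fully possible
    {"a": 0.3, "b": 1.0},   # probably 'b'
]
print(granular_count(obs, "a"))   # e.g. at alpha = 0.5 the count of 'a' lies in [1, 2]
```

This naive version already shows why uncertainty must propagate: the count of 'a' is not a single number but an interval whose width reflects how incomplete the observations are.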
References
[1] C. Mencar, W. Pedrycz, Granular counting of uncertain data, Fuzzy Sets and Systems 387 (2020) 108–126. URL: https://linkinghub.elsevier.com/retrieve/pii/S0165011419302192. doi:10.1016/j.fss.2019.04.018.

Highlight 4
Using Fuzzy C-Means for Error Mitigation in Quantum Measurement
Giovanni Acampora1,2, Autilia Vitiello1,2
1 Department of Physics "Ettore Pancini", University of Naples Federico II, 80126 Naples, Italy
2 Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, 80126 Naples, Italy

This contribution discusses the work presented in [1] on the application of the Fuzzy C-Means algorithm to error mitigation in quantum measurement. Quantum computers are quantum-mechanical devices that are potentially able to outperform classical computers on specific tasks. However, a crucial engineering challenge must be faced to make quantum computation productive and operative in real-world scenarios: dealing with the vulnerability of quantum hardware to errors. As a consequence, a very active research area in quantum engineering is quantum error correction, which aims at introducing techniques able to minimize computation errors. Unfortunately, these techniques require a multiplicative increase in the number of resources to work; thus, alternative techniques belonging to the area of quantum error mitigation have been developed to compensate for computation errors without requiring additional quantum hardware. These approaches work by updating the output of quantum algorithms through a post-processing task aimed at removing the effect of the different kinds of quantum errors from that output. Among all the possible types of quantum errors, measurement error is certainly among those that can most alter the computation and, consequently, error mitigation techniques are mainly concerned with reducing the effects of this class of errors. In this context, the state-of-the-art method, used in well-known quantum frameworks such as IBM Qiskit, works by computing a so-called mitigation matrix that, suitably combined with the outcome of a quantum computation, tries to make this outcome as close as possible to the error-free value (an illustrative sketch of this matrix-based baseline is given after the references below). However, the uncertainty related to the stochastic nature of quantum computation does not allow this technique to compute a mitigation matrix able to reduce the error effect for all kinds of quantum computation. The goal of the work reported in [1] is to improve this technique by integrating it with the Fuzzy C-Means algorithm in order to compute a mitigation matrix more tolerant to the effect of stochasticity in computation. A comparative study shows that the fuzzy-based approach is able to better mitigate the effect of measurement error in quantum computation than the state-of-the-art mitigation method currently used by IBM in its quantum library Qiskit. To conclude, the work in [1] received the Best Paper Award at the 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2021).

References
[1] G. Acampora, A. Vitiello, Error mitigation in quantum measurement through fuzzy c-means clustering, in: 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2021, pp. 1–6. doi:10.1109/FUZZ45933.2021.9494538.
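For context, the sketch announced above shows the matrix-based measurement-error-mitigation baseline that [1] improves upon, not the fuzzy c-means variant itself. The two-qubit setting, the calibration matrix and the noisy distribution are hypothetical numbers chosen for illustration; real calibration matrices are estimated by preparing each basis state and recording the measured outcome frequencies.

```python
# Minimal sketch of matrix-based measurement-error mitigation (the baseline described
# in Highlight 4, not the fuzzy c-means approach of its reference [1]).

import numpy as np

# Hypothetical calibration matrix for 2 qubits: column j is the measured outcome
# distribution obtained when basis state j (|00>, |01>, |10>, |11>) is prepared.
M = np.array([
    [0.92, 0.05, 0.06, 0.01],
    [0.04, 0.90, 0.01, 0.05],
    [0.03, 0.01, 0.89, 0.06],
    [0.01, 0.04, 0.04, 0.88],
])

# Hypothetical noisy output for a circuit that ideally yields |00> and |11> equally.
p_noisy = np.array([0.47, 0.05, 0.06, 0.42])

# Mitigation: solve M @ p_ideal ~ p_noisy in the least-squares sense, then clip and
# renormalize so that the corrected vector is a valid probability distribution.
p_mitigated, *_ = np.linalg.lstsq(M, p_noisy, rcond=None)
p_mitigated = np.clip(p_mitigated, 0.0, None)
p_mitigated /= p_mitigated.sum()

print(np.round(p_mitigated, 3))   # closer to the ideal distribution [0.5, 0, 0, 0.5]
```

Because the calibration matrix is itself estimated from stochastic measurement shots, a single crisp matrix may not correct every circuit well, which is the limitation the fuzzy c-means approach of [1] targets.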