=Paper=
{{Paper
|id=Vol-3226/invited2
|storemode=property
|title=Towards Deep and Interpretable Rule Learning (invited paper)
|pdfUrl=https://ceur-ws.org/Vol-3226/invited2.pdf
|volume=Vol-3226
|authors=Johannes Fürnkranz
|dblpUrl=https://dblp.org/rec/conf/itat/Furnkranz22
}}
==Towards Deep and Interpretable Rule Learning (invited paper)==
Johannes Fürnkranz
Johannes Kepler University, 4040 Linz, Austria
Abstract
Inductive rule learning is concerned with learning classification rules from data. Learned rules are inherently interpretable and easy to implement, which makes them well suited for formulating learned models in many domains. Nevertheless, current rule learning algorithms have several shortcomings. First, with respect to the current practice of equating high interpretability with low complexity, we argue that while shorter rules are important for discrimination, longer rules are often more interpretable than shorter ones, and that the tendency of current rule learning algorithms to strive for short and concise rules should be replaced with alternative methods that allow for longer concept descriptions. Second, we think that the main impediment of current rule learning algorithms is that, unlike the successful deep learning techniques, they are not able to learn deeply structured rule sets. Both points are currently under investigation in our group, and we show some preliminary results.
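The contrast between conventional flat rule sets and the deeply structured rule sets mentioned above can be illustrated with a minimal sketch. All attribute names, concepts, and rules below are hypothetical examples, not taken from the paper: a flat rule set maps input attributes directly to the class, whereas a deep rule set first derives intermediate concepts by rules and then forms rules over those concepts.

```python
def flat_rules(x):
    """Flat rule set: every condition refers directly to an input attribute."""
    if x["legs"] == 4 and x["fur"] and x["barks"]:
        return "dog"
    return "other"

def deep_rules(x):
    """Deep rule set: rules are layered over intermediate concepts."""
    # Layer 1: intermediate concepts, each defined by a rule over the inputs.
    is_mammal = x["fur"] or x["gives_milk"]
    is_pet = x["domesticated"] and x["legs"] == 4
    # Layer 2: the target concept is defined over the intermediate concepts,
    # which can be reused across multiple target rules.
    if is_mammal and is_pet and x["barks"]:
        return "dog"
    return "other"

example = {"legs": 4, "fur": True, "barks": True,
           "gives_milk": True, "domesticated": True}
print(flat_rules(example), deep_rules(example))  # prints: dog dog
```

In this toy setting both classifiers agree, but the layered version names its intermediate concepts, which is one way longer, structured concept descriptions can remain interpretable.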
ITAT’22: Information technologies – Applications and Theory, September 23–27, 2022, Zuberec, Slovakia
∗ Corresponding author.
juffi@faw.jku.at (J. Fürnkranz)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org