       Preference Inference Through Rescaling
                Preference Learning

                       Nic Wilson and Mojtaba Montazery

                          Insight Centre for Data Analytics
                           University College Cork, Ireland



Summary
One approach to preference learning, based on linear support vector machines,
involves choosing a weight vector whose associated hyperplane has maximum
margin with respect to an input set of preference vectors, and using this to
compare feature vectors. However, as is well known, the result can be sensitive
to how each feature is scaled, so that rescaling can lead to an essentially different
vector. This gives rise to a set of possible weight vectors—which we call the
rescale-optimal ones—considering all possible rescalings. From this set one can
define a more cautious preference relation, in which one vector is preferred to
another if it is preferred for all rescale-optimal weight vectors. In this paper, we
analyse which vectors are rescale-optimal, and when there is a unique rescale-
optimal vector, and we consider how to compute the induced preference relation.
We illustrate the approach using a preference learning problem arising from a
ridesharing application [1].
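
To make the rescaling sensitivity concrete, the following is a minimal Python
sketch (not the code of [1]; the toy data, the scale factors and the helper
solve_max_margin are all hypothetical) of the max-margin step described above:
a minimum-norm weight vector is chosen subject to a unit margin on each input
preference difference vector, the features are then rescaled, and the weights
learned on the rescaled data are mapped back to the original units.

    import numpy as np
    from scipy.optimize import minimize

    def solve_max_margin(diffs):
        # Minimum-norm w with w . d >= 1 for every preference difference d
        # (hard-margin rank-SVM through the origin).
        n = diffs.shape[1]
        cons = [{"type": "ineq", "fun": lambda w, d=d: w @ d - 1.0} for d in diffs]
        return minimize(lambda w: 0.5 * w @ w, x0=np.ones(n), constraints=cons).x

    # Toy preferences: each row is x - y for a pair where x was preferred to y.
    diffs = np.array([[1.0, -0.2],
                      [0.3,  0.8]])

    # Two new alternatives to compare.
    x_new, y_new = np.array([0.0, 0.5]), np.array([0.1, 0.3])

    # Rescale feature 0 by a factor s, learn on the rescaled data, then map
    # the weights back to the original units (w -> w * scale).  The direction
    # of the resulting vector can change, so the induced comparison can flip.
    for s in (0.2, 1.0, 5.0):
        scale = np.array([s, 1.0])
        w_s = solve_max_margin(diffs * scale) * scale
        print("scale %.1f on feature 0: x preferred to y? %s"
              % (s, w_s @ (x_new - y_new) > 0))

Under the cautious relation sketched above, one vector is preferred to another
only if every rescale-optimal weight vector agrees; in this toy example the
sampled rescalings already disagree about x_new and y_new, so the cautious
relation would leave that pair incomparable.
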


References
1. Wilson, N., Montazery, M.: Preference inference through rescaling preference learn-
   ing. In: Proc. IJCAI-2016 (2016)