Roman Słowiński is a Professor and Founding Chair of the Laboratory of Intelligent Decision Support Systems at the Institute of Computing Science, Poznań University of Technology, Poland. Since 2002 he has also been a Professor at the Systems Research Institute of the Polish Academy of Sciences in Warsaw. He is a full member of the Polish Academy of Sciences and currently the elected president of its Poznań Branch. He is also a member of Academia Europaea. In his research, he combines Operations Research and Computational Intelligence. Roman Słowiński is renowned for his seminal research on using rough sets in decision analysis, and for his original contributions to preference modeling and learning in decision aiding. He is a recipient of the EURO Gold Medal, and Doctor Honoris Causa of the Polytechnic Faculty of Mons, the University Paris Dauphine, and the Technical University of Crete. In 2005 he received the Annual Prize of the Foundation for Polish Science, regarded as the highest scientific honor awarded in Poland.
Since 1999 he has been a principal editor of the European Journal of Operational Research, a premier journal in Operations Research. He is coordinator of the EURO Working Group on Multiple Criteria Decision Aiding, and past president of the International Rough Set Society.
Constructive learning of preferences with robust ordinal regression
The talk is devoted to preference learning in Multiple Criteria Decision Aiding. It is well known that the dominance relation established on the set of alternatives (also called actions, objects, or solutions) evaluated on multiple criteria is the only objective information that follows from the formulation of a multiple criteria decision problem (ordinal classification, ranking, or choice, with multiobjective optimization being a particular case). While the dominance relation permits the elimination of many irrelevant (i.e., dominated) alternatives, it does not compare all of them, so many alternatives remain incomparable. This situation can be addressed by taking into account the preferences of a Decision Maker (DM). Therefore, all decision-aiding methods require some preference information elicited from a DM or a group of DMs. This information is used to build a more or less explicit preference model, which is then applied to the non-dominated set of alternatives to arrive at a recommendation (an assignment of alternatives to decision classes, a ranking of alternatives from best to worst, or the best choice) presented to the DM. In practical decision aiding, the process composed of preference elicitation, preference modeling, and the DM's analysis of a recommendation loops until the DM accepts the recommendation or decides to change the problem setting. Such an interactive process is called constructive preference learning.
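To make the role of the dominance relation concrete, here is a minimal sketch (with hypothetical data, assuming all criteria are to be maximized) of how dominated alternatives are filtered out, leaving the incomparable non-dominated ones:

```python
def dominates(a, b):
    """True if alternative a dominates b: a is at least as good on every
    criterion and strictly better on at least one (higher = better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(alternatives):
    """Keep only the alternatives not dominated by any other one."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

# Hypothetical alternatives evaluated on two criteria (both maximized).
alts = [(3, 5), (4, 4), (2, 6), (1, 1), (5, 2)]
front = non_dominated(alts)  # (1, 1) is dominated; the rest are incomparable
```

The four surviving alternatives are mutually incomparable by dominance alone, which is exactly the gap that the DM's preference information is meant to fill.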
I will focus on processing a DM's preference information concerning multiple criteria ranking and choice problems. This information has the form of pairwise comparisons of selected alternatives. Research indicates that such preference elicitation requires less cognitive effort from the DM than direct assessment of preference model parameters (such as criteria weights or trade-offs between conflicting criteria). I will describe how to construct from this input information a preference model that reconstructs the pairwise comparisons provided by the DM. In general, construction of such a model follows logical induction, typical of learning from examples in AI. In the case of utility function preference models, this induction translates into ordinal regression. I will show inductive construction techniques for two kinds of preference models: a set of utility (value) functions, and a set of “if…, then…” monotonic decision rules. An important feature of these construction techniques is the identification of all instances of the preference model that are compatible with the input preference information; this permits drawing robust conclusions about the DM's preferences when any of these models is applied to the considered set of alternatives. These techniques are called Robust Ordinal Regression and the Dominance-based Rough Set Approach.
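The idea of working with all compatible model instances can be illustrated with a deliberately simplified sketch. Robust Ordinal Regression methods characterize the full set of compatible additive value functions via mathematical programming; the toy version below instead restricts the model to weighted sums and enumerates compatible weight vectors on a grid of the simplex (the alternatives, the DM statement, and the grid are all assumptions made for illustration):

```python
import itertools

def compatible_weights(alternatives, comparisons, steps=20):
    """Enumerate weight vectors w on a grid of the unit simplex such that the
    weighted-sum value function V(x) = sum_i w_i * g_i(x) reproduces every
    DM pairwise comparison (a, b), read as 'a is strictly preferred to b'."""
    n = len(next(iter(alternatives.values())))
    value = lambda w, x: sum(wi * gi for wi, gi in zip(w, alternatives[x]))
    compatible = []
    for iw in itertools.product(range(steps + 1), repeat=n):
        if sum(iw) != steps:
            continue
        w = tuple(i / steps for i in iw)
        if all(value(w, a) > value(w, b) for a, b in comparisons):
            compatible.append(w)
    return compatible

def necessarily_preferred(a, b, alternatives, weights):
    """a is necessarily (weakly) preferred to b if it scores at least as
    high under every compatible instance of the preference model."""
    value = lambda w, x: sum(wi * gi for wi, gi in zip(w, alternatives[x]))
    return all(value(w, a) >= value(w, b) for w in weights)

# Hypothetical alternatives on two criteria (both maximized) and a single
# DM statement: C is preferred to A.
alts = {"A": (8, 1), "B": (4, 6), "C": (6, 5), "D": (3, 3)}
W = compatible_weights(alts, [("C", "A")])
```

A conclusion such as `necessarily_preferred("C", "D", alts, W)` is robust in the sense of the talk: it holds no matter which compatible model instance the DM "really" has in mind, whereas a claim like "A is preferred to B" holds for only some compatible instances and is therefore merely possible.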
I will also show how these induction techniques, and their corresponding models, can be embedded into an interactive procedure of multiobjective optimization, particularly Evolutionary Multiobjective Optimization (EMO), guiding the search towards the most preferred region of the Pareto front.
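A minimal sketch of this embedding, under strong simplifying assumptions: interactive preference-based EMO methods interleave elicitation and search over a population, whereas the toy loop below fixes a single assumed DM value function (standing in for a model inferred from pairwise comparisons) and uses it to rank candidates in a small (mu + lambda) evolution strategy on a one-parameter bi-objective problem, so the search concentrates on the preferred region of the Pareto front rather than approximating the whole front:

```python
import math
import random

def objectives(x):
    """Toy bi-objective problem (both maximized): points on a concave
    Pareto front, the quarter circle f1^2 + f2^2 = 1, for x in [0, 1]."""
    return (x, math.sqrt(max(0.0, 1.0 - x * x)))

def dm_value(f):
    """Stand-in preference model: a weighted-sum value function that an
    ordinal-regression step would normally infer from the DM's pairwise
    comparisons (the weights here are assumed, not elicited)."""
    return 0.7 * f[0] + 0.3 * f[1]

def preference_guided_es(generations=100, mu=10, lam=20, sigma=0.05, seed=0):
    """A minimal (mu + lambda) evolution strategy whose selection ranks
    candidates by the DM's value function, steering the search toward
    the most preferred region of the Pareto front."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(mu)]
    for _ in range(generations):
        offspring = [min(1.0, max(0.0, rng.choice(pop) + rng.gauss(0, sigma)))
                     for _ in range(lam)]
        pop = sorted(pop + offspring,
                     key=lambda x: dm_value(objectives(x)),
                     reverse=True)[:mu]
    return pop[0]

best = preference_guided_es()
```

For these assumed weights the most preferred point of the front lies at x = 0.7 / sqrt(0.58), about 0.92, and the population collapses around it; an interactive method would instead re-elicit comparisons every few generations and update the compatible model set accordingly.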