A Short Introduction to Preferences: Uncertainty in Preference Reasoning




Publisher
Springer International Publishing
Copyright
© Springer Nature Switzerland AG 2011
ISBN
978-3-031-00428-5
Pages
29–39
DOI
10.1007/978-3-031-01556-4_3

Abstract

The formalisms for handling preferences described in the previous chapter support a powerful combination of compactness and expressive power. However, they still require a lot of information from the user. For example, if we represent an agent’s preferences with a soft constraint problem with 4 variables, each having 5 domain values, even if we assume only 4 constraints, each involving 2 variables, the user has to provide 100 preference values (one for each tuple of each constraint). Thus, it will often be the case that, in the problem specification, the preferences are provided in an imprecise or vague way, or some of them are missing. There are two main approaches to uncertainty in preferences. The first is to do the best we can with the available data without further bothering the user. This translates to looking for solutions that are of high quality with respect to the preferences that are known and “robust” with respect to the ones that are missing. The second is to resort to elicitation, that is, to ask the user for the missing preferences. Eliciting preferences takes time and effort, and users may be reluctant to provide their preferences due to privacy concerns or annoyance, especially when confronted with large combinatorial candidate sets. Thus, issues such as the appropriateness of the preference language made available by the system for elicitation, the identification of criteria that allow one to detect when elicitation is no longer necessary and can be stopped, and computational aspects of elicitation must be taken into account.
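As a quick check on the counting in the example above, the following minimal Python sketch (the variable names and constraint scopes are hypothetical, not taken from the chapter) enumerates the tuples of 4 binary constraints over 5-value domains and confirms that 4 × 5² = 100 preference values would have to be elicited.

    from itertools import product

    # Hypothetical soft constraint problem: 4 variables, 5 domain values each,
    # 4 binary constraints. One preference value is needed per tuple per constraint.
    domains = {v: range(5) for v in ["x1", "x2", "x3", "x4"]}
    constraints = [("x1", "x2"), ("x2", "x3"), ("x3", "x4"), ("x4", "x1")]

    # Enumerate every tuple in the scope of every constraint.
    tuples_per_constraint = {
        scope: list(product(*(domains[v] for v in scope))) for scope in constraints
    }
    total = sum(len(ts) for ts in tuples_per_constraint.values())
    print(total)  # 100: each binary constraint has 5**2 = 25 tuples, times 4 constraints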

Published: Jan 1, 2011
