A Short Introduction to Preferences: Uncertainty in Preference Reasoning
Rossi, Francesca; Venable, Kristen Brent; Walsh, Toby
2011
The formalisms for handling preferences described in the previous chapter support a powerful combination of compactness and expressive power. However, they still require a lot of information from the user. For example, if we represent an agent’s preferences with a soft constraint problem with 4 variables, each having 5 domain values, even if we assume only 4 constraints, each involving 2 variables, the user will have to provide 100 preference values (one for each tuple). Thus, it will often be the case that, in the problem specification, the preferences are provided in an imprecise or vague way, or some of them are missing. There are two main approaches to uncertainty in preferences. The first is to do the best that we can with the available data without further bothering the user. This translates to looking for solutions that are of high quality with respect to the preferences that are known and “robust” with respect to the ones that are missing. The second strategy is to resort to elicitation, that is, to ask the user for the missing preferences. Eliciting preferences takes time and effort, and users may be reluctant to provide their preferences due to privacy concerns or annoyance, especially when confronted with large combinatorial candidate sets. Thus, issues such as the appropriateness of the preference language made available by the system in terms of elicitation, the identification of criteria that allow one to detect when elicitation is no longer necessary and can be stopped, and computational aspects related to elicitation must be taken into account.
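The counting argument in the abstract can be sketched as follows: each constraint over k variables, where every variable has a domain of size d, requires one preference value per tuple, i.e. d**k values. This is a minimal illustration, not code from the book; the function name and uniform-arity assumption are ours.

```python
def preference_values_needed(num_constraints: int, arity: int, domain_size: int) -> int:
    """Total preference values a user must supply, assuming every
    constraint has the same arity and every variable the same domain size."""
    # Each constraint ranges over domain_size ** arity tuples,
    # and each tuple needs its own preference value.
    return num_constraints * domain_size ** arity

# The example from the text: 4 binary constraints (arity 2) over
# variables with 5-value domains -> 4 * 5**2 = 100 preference values.
print(preference_values_needed(num_constraints=4, arity=2, domain_size=5))  # 100
```

Even for this tiny problem the user must rate 100 tuples, which is why imprecise or missing preferences are the norm rather than the exception.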