Despite all the differences among theories of utility formation and of decisions from experience versus decisions from description, these accounts share a common assumption: decision makers have stable and coherent preferences, informed by the consistent use of a psychological strategy or process (computational or sampling) that guides their choices between alternatives varying in risk and reward. In contrast, we argue that stable risk preferences do not exist; we propose that risk preferences are constructed dynamically through strategy selection, which we model as a reinforcement-learning process. Accordingly, we found that decision context and associative learning predict strategy selection and thereby govern risk preferences: rather than having fixed preferences for risk, people select decision strategies on the basis of the current context and learn to select the strategies that are most successful (in terms of effort and reward) for that context.
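The idea that strategy selection can be learned by reinforcement admits a minimal computational sketch. The toy model below assumes a softmax choice rule over strategy values maintained per context, updated by a delta rule on net payoff (reward minus effort); the strategy names, contexts, payoff structure, and parameter values are illustrative placeholders, not the fitted model.

```python
import math
import random

# Illustrative strategies and learning parameters (assumptions, not fitted values)
STRATEGIES = ["maximize_expected_value", "minimize_risk"]
ALPHA = 0.3   # learning rate for the delta-rule update
TAU = 0.5     # softmax temperature governing exploration

def softmax_choice(values, rng):
    """Sample a strategy index with probability proportional to exp(value / TAU)."""
    weights = [math.exp(v / TAU) for v in values]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def toy_payoff(context, strategy, rng):
    """Hypothetical net payoff (reward minus effort): higher when the
    strategy 'fits' the context. Purely illustrative."""
    fits = (context == "gain") == (strategy == "maximize_expected_value")
    return 1.0 if fits else 0.2

def simulate(contexts, payoff, n_trials=1000, seed=1):
    """Learn one value per (context, strategy) pair from experienced payoffs."""
    rng = random.Random(seed)
    q = {c: [0.0] * len(STRATEGIES) for c in contexts}
    for _ in range(n_trials):
        c = rng.choice(contexts)               # context is given, not chosen
        s = softmax_choice(q[c], rng)          # select a strategy for it
        net = payoff(c, STRATEGIES[s], rng)    # observe reward minus effort
        q[c][s] += ALPHA * (net - q[c][s])     # delta-rule value update
    return q
```

Running `simulate(["gain", "loss"], toy_payoff)` yields context-specific strategy values: the simulated agent comes to favor different strategies in different contexts purely from feedback, with no fixed risk preference built in, which is the qualitative pattern the argument above describes.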