I. Introduction
Decision-making is essential for personal and professional development, problem-solving, and navigating the complexities of life. Improving decision-making skills can lead to better life choices and overall well-being. This is why Clearer Thinking developed a digital program aimed at improving decision-making.
Decision-making is a complex process influenced by a wide range of factors. There are cognitive factors like the quality and quantity of information available, as well as biases and heuristics. More information can lead to better decisions, but it can also lead to information overload. Different people may interpret the same information differently due to individual differences in perception.
II. Decision Advisor: The program overview
At a high level, the decision-making program was designed to aid with decision-making by helping users:
Brainstorm options for what to do for the current decision that they may not have considered
Consider other sources of information they may want to look into later that could help with their decision
Understand and reflect on cognitive biases that are relevant to and might negatively impact their decision
Go through a process of estimating the expected value of different options (relative to each other) to help evaluate which is best
The full description of the program can be found in Appendix A.
III. The theory of maximizing expected value
The theory of maximizing expected value is a fundamental concept in decision theory and probability theory. It provides a framework for making decisions when faced with uncertainty. In this theory, decision-makers choose the option that will yield the highest value outcome on average. Portions of our decision-making program are modeled on the expected value framework.
The theory involves:
1. Identification and clear definition of the decision: A decision typically involves choosing from a set of options, each with associated outcomes. In our program, the decision was selected at the beginning, and the potential options were brainstormed by the user shortly after that.
2. Assessment of probabilities: For each alternative, probabilities are assigned to different outcomes conditioned on each option being selected. These probabilities represent a degree of belief or expectation about the likelihood of each outcome occurring if each option is chosen. Participants were asked to assign such probabilities in the pro/con evaluation section of our program.
3. Calculation of expected values: For each alternative, the expected value is calculated by multiplying each possible outcome by its associated probability and then summing all of the products. This calculation provides a numerical representation of the average value that can be expected from choosing that alternative. Our tool carried out this calculation automatically using the inputs provided by the user.
4. Comparison of expected values: After calculating the expected values for each alternative, the alternatives can be compared to each other. The alternative with the highest expected value is considered the best choice according to the theory of maximizing expected value. In our program, the expected value of the options was displayed on the screen in a bar chart form.
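The four steps above can be sketched in a few lines of Python. This is an illustrative sketch with made-up options, outcome values, and probabilities, not the program's actual code:

```python
# Illustrative sketch of the expected-value steps; the options, outcome
# values, and probabilities below are invented for the example.

def expected_value(outcomes, probabilities):
    """Step 3: weight each outcome's value by its probability and sum."""
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return sum(v * p for v, p in zip(outcomes, probabilities))

# Step 2: hypothetical probability estimates for two options,
# each with a "goes well" and a "goes badly" outcome value.
ev_option_a = expected_value([8, -2], [0.6, 0.4])  # = 4.0
ev_option_b = expected_value([5, 1], [0.5, 0.5])   # = 3.0

# Step 4: the option with the highest expected value is preferred.
best_option = max([("A", ev_option_a), ("B", ev_option_b)],
                  key=lambda pair: pair[1])
```

With these made-up numbers, option A has the higher expected value, so the theory recommends choosing it.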
This decision-making approach is widely used in various fields, including economics, finance, engineering, and risk management. It assumes that individuals make decisions rationally by weighing the potential outcomes and their probabilities, aiming to maximize their expected gains or utility. Clearer Thinking prepared a tool based on this theory, and this report describes how we analyzed the effectiveness of that tool.
IV. The study outline and research design
Study participants were recruited on Positly (our platform facilitating fast research participant recruitment). They signed a consent form. During the screening process, they were asked whether they had a decision to make and whether they would know the decision's outcome in 6 months or less. Only participants who indicated that they did have a decision to make and that they would know its outcome within 6 months were invited to participate in the study. Participants were additionally asked about their ethnicity, political inclinations, and their SAT and ACT scores. (These were decoy questions, included so that participants would not realize what the screener was selecting for, since knowing this could have created an incentive to lie.)
After being admitted to the study, participants were randomized to be in either the control group or the intervention group. The intervention group was asked to make a decision using the decision-making program described above, while the control group was only asked (1) to indicate what decision they had to make, (2) to indicate how many options they were considering for this decision, and (3) to say which option they thought was best for them to take in the decision.
Before and after making a decision, both groups were given surveys to fill in. The surveys recorded various aspects of participants’ psychological profiles. We also recorded participants’ personalities measured by a standard Big5 IPIP test. All variables measured before and after the decisions were made are listed in Appendix B.
After making a decision, participants declared the date by which they would know how the decision had turned out. We followed up with them at that time and asked them to fill in a survey measuring their level of satisfaction and their level of regret related to their decision. We used a previously validated decision satisfaction scale by Lawama, Greenberg, Moore, and Lieder. It included 3 decision satisfaction items and 10 regret items, all recorded on Likert scales. The responses for these two dimensions (satisfaction and regret) were used to compute a combined decision score, which served as the primary dependent variable. We kept the two dimensions separate so that we could later see which one drove how participants felt about their decisions.
The total decision score was calculated as:
total_decision_score = decision_satisfaction - decision_regret
V. Study participants
We screened 994 people on Positly. Of those, 917 people were eligible to participate: 423 men, 490 women, and 2 nonbinary. Participants were only eligible to be included in the study if they (1) had a decision to make and (2) would know how the decision would turn out within 6 months. Six people reported technical problems, none of which affected the integrity of the study.
In total, 381 participants completed the study protocol up to the point of making a decision: 205 in the control group and 176 in the intervention group. Of these 381 people, 194 provided follow-up information: 95 from the control group and 99 from the intervention group.
VI. The primary outcome: total decision score
After collecting feedback from the study participants, we compared how satisfied and regretful they were about their decisions.
Much to our surprise, the total decision score (i.e., the combined satisfaction and regret scores: decision_satisfaction - decision_regret) turned out to be higher in the control group than in the intervention group. The control mean was 2.38, and the intervention mean was 2.00.
The difference in total decision scores between the intervention and the control was statistically significant (T(193)=2.01, p-value<0.05). While this result is statistically significant, a p-value just below 0.05 still leaves open the possibility that the difference is due to chance. However, given the evidence we have, we think the most likely conclusion is that the decision program caused people to be less happy with their decisions and that the result is not just a statistical fluke.
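For readers who want to reproduce this kind of group comparison, a two-sample t statistic can be computed as below. This is a minimal sketch using the pooled-variance (Student's) variant; the report does not state exactly which t-test formula was used, and the data here are invented for illustration:

```python
import numpy as np

def students_t(x, y):
    """Two-sample Student's t statistic with pooled variance.

    Returns (t, degrees_of_freedom). One common variant; the report
    does not specify which formula was actually used.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    t = (x.mean() - y.mean()) / np.sqrt(pooled_var * (1 / nx + 1 / ny))
    return t, nx + ny - 2

# Made-up scores for illustration (NOT the study data):
t_stat, df = students_t([2.6, 2.3, 2.5, 2.2], [2.1, 1.9, 2.0, 2.4])
```

The resulting t statistic can then be compared against the t distribution with the returned degrees of freedom to obtain a p-value.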
The difference between the groups is displayed on the graph below:
Why was the control group happier with their decision?
To find out what was responsible for such unexpected results, we examined our scales from a few different angles. We checked how groups differed on both of the sub-scales (for decision satisfaction and decision regret). We also examined every item on the scale independently to look for deeper insights into why the control group appeared to be happier with their decision than the intervention group.
We discovered that the main factor behind the difference was decision regret. The difference in decision satisfaction scores between the control and intervention groups was not statistically significant (T(193)=1.51, p-value=0.13). The mean satisfaction for the control was 3.22, and the mean satisfaction for the intervention was 3.04.
The difference between the decision regret in the control and the intervention groups, however, was statistically significant (T(193)=-2.29, p-value=0.02). The mean decision regret in the control was 0.84, and the mean decision regret in the intervention was 1.04.
Among the many variables recorded at the start that could affect the decision score was regret tendency. We checked whether people in the intervention group had a stronger prior tendency to experience regret than the control group (before they completed the program). The mean regret tendency in the intervention group was higher at the start, but the difference was not statistically significant (T(193)=-0.6, p-value=0.55, Control = 0.78, Intervention = 0.83).
To understand better which specific items on the total decision score survey indicated differences between groups, we also analyzed each item separately. The differences between the two groups’ responses to most of the survey’s 13 items (when examined individually) were not statistically significant except for the following:
“I postponed this decision for too long.”
(T(193)=-2.55, p-value=0.01, Control = 1.1, Intervention= 1.53)
As indicated by the responses to the above item, the control group procrastinated less about making their decisions, according to participants' self-assessments. This suggests that the decision-making program somehow caused intervention group participants to delay their decisions for longer than the control group and that this may have had detrimental effects.
VII. Other factors affecting the total decision score
Exploring other potential reasons why people in the intervention group were less happy with their decisions than the control group, we examined other variables in the program. Most of them didn't point us in any specific direction, although we did discover some interesting facts, presented in the sections below.
In the intervention group, 22% (39 people) selected an option they had not considered before doing the program, while the remaining 78% (137 people) selected previously considered options. We did not measure this for the control group. This suggests that the program may have led participants to consider options they would not otherwise have thought of. We are not yet sure whether this is a good thing.
CHANGING THEIR MIND DURING THE PROGRAM
In the intervention group, people indicated which option they were "leaning toward" just before working through the portion of the tool that asked them to consider the pros and cons of each option. 30 out of 95 people (32%) changed their minds during the pros-and-cons portion and decided (by the time they reached the end of the tool) to go with a different option than they had picked right before the pros-and-cons portion.
We tested whether people who changed their minds during the pro/con part of the program had lower total decision scores than people who stuck with their first choice, but there was no statistically significant difference between these two groups: T(99)=0.14, p-value=0.89. (The mean total decision score for people who changed their minds was 2.03, and for people who didn't change their minds it was 1.99.)
Whether or not they changed their minds also had a very low coefficient in the regression predicting total decision scores (0.006), which provides another reason to believe that changing one's mind during the pro/con part of the program didn't cause people to be less satisfied with their decisions.
DECISION CONFIDENCE
We ran a regression to test whether any of the variables recorded inside the program were predictive of participants' total decision score, but the regression's R² was very low (ridge R² < 0.006; lasso R² < 0.04). Only one variable was found to be meaningfully predictive of the total decision score: the level of confidence participants felt about their final decision at the end of using the tool (i.e., participants' responses to "On a scale of 0% to 100%, how confident do you feel now that you've chosen the best available option for this decision?"), with a standardized coefficient of 0.8. This means that people who felt confident about their decisions were happier about them later. One possible explanation is that people might be more confident in decisions where it is easier to figure out the best option. The variables included in this regression are listed in Appendix D.
In the intervention group, the reported confidence in the decision increased from mean=70.05 (before using the program) to mean=77.28 (t(94)=-2.82, p<0.005) after the intervention was completed. It seems that the program made people feel more confident about their decisions, even though, long-term, they were less satisfied with them. We later discovered that it was the people whose confidence did not increase who were responsible for the lower decision scores in the intervention group.
FINDING DECISIONS EMOTIONALLY UPSETTING, WITH HIGH INVESTMENT, AND MANY CONSEQUENCES
We also tested whether people who took different paths through the program differed in total decision score, but they did not. For example, we checked if people who said that their decisions were emotionally upsetting had different decision scores than those who said their decisions were not emotionally upsetting, but the scores were not different. (T(98)=1.53, p=0.13; mean for the upset group = 1.8, mean for the not-upset group = 2.2.) Similarly, people who declared that they invested a lot of resources didn't differ in their decision scores from those who said they did not invest much. (T(98)=0.55, p=0.59; mean for the invested group = 1.93, mean for the not-invested group = 2.08.) Participants' decision scores also didn't differ based on the number of consequences the decisions had. (T(98)=-0.61, p=0.54; mean for a single consequence = 2.11, mean for many consequences = 1.94.)
VIII. Correlations of specific items with the total decision score
MISSED OPPORTUNITIES
Besides our main dependent variable — the total decision score — we measured three extra variables. We wanted to know if the decision made caused any opportunities to be missed, so we added the item: “I missed out on opportunities because of my decision.” The Pearson correlation coefficient between this missed-opportunities item and the total decision score was -0.5, so it was (perhaps unsurprisingly) strongly negatively associated with the total decision score (p<0.00001).
HINDSIGHT
We also checked if participants felt that with hindsight it was obvious what the right decision was. It was measured by the item: “In retrospect, it should have been obvious what the best choice to have made was for this decision, even if it didn't feel obvious at the time.” The Pearson correlation coefficient for this item and total decision scores was -0.02; so the association of these two variables was negligible.
REVERSIBILITY
The last item that we added measured if the decision was reversible: “At this point, it would be very hard to reverse my decision.” The Pearson correlation coefficient for the reversibility of the decision and the total decision score was -0.003; so again, the association of this variable was negligible.
The Pearson correlation coefficient for the group assignment and total decision score was -0.14, which illustrates the fact that the intervention group was less happy with their decisions.
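The Pearson correlations reported in this section can be computed with a few lines of Python. This is a minimal, self-contained sketch; the data in the example are toy values, not the study data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Perfectly linear toy data correlates at exactly 1.0:
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # = 1.0
```

Applied to, e.g., the missed-opportunities item and the total decision score, this is the calculation behind the -0.5 coefficient reported above.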
IX. Factors predictive of the total decision score
As previously mentioned, we asked our participants to fill out a number of questions both before and after they made their decisions. We wanted to know if there were any psychological factors that could be predictive of who was happiest with their decisions and if any aspects of the decisions themselves were predictive of the total decision score.
We first ran a regression to check if any of the 36 factors describing our study participants and their decisions predicted the total decision score. The variables were first normalized by transforming them into z-scores (i.e., subtracting the mean of each column and dividing by its standard deviation). A few missing values (at most 7, i.e., 3.6% of the study sample) in the relevant columns were filled with the mean value of the column. We used this method because it is simple, and with so few missing values, the choice of imputation method has little impact. We tested the performance of both Ridge and Lasso regressions. Ridge scored R²=0.15 in predicting the total decision score (i.e., it accounted for 15% of the variance), and Lasso scored R²=0.12. The variables included in these models had the following coefficients for predicting total decision score (with higher numbers meaning a positive association with people rating their decision as going better):
Variable | Lasso Regression Coefficient (R²=0.12) | Ridge Regression Coefficient (R²=0.15) | Description (how it was measured) | Correlation with total decision score |
personalFreedom | 0.18 | 0.11 | “I have total personal freedom: I am free to do whatever I choose to do.” | 0.32 |
identityDecision | 0.18 | 0.13 | “The choice I picked for this decision reflects the kind of person I am better than the other choices.” | 0.26 |
agreeableness | 0.11 | 0.08 | Agreeableness measured by a standard Big 5 test | 0.28 |
optimism | 0.05 | 0.06 | “How much are you the sort of person who is typically optimistic about the future?” | 0.27 |
pragmatism | 0.05 | 0.06 | “Where would you rate yourself on the scale from being an idealistic person to being a pragmatic person?” | 0.09 |
selfEfficacy | 0.02 | 0.05 | “Do you believe you can do anything you set your mind to?” | 0.3 |
conscientiousness | 0.01 | 0.03 | Conscientiousness measured by a standard Big5 test | 0.24 |
stability | 0.01 | 0.04 | Emotional stability measured by a standard Big5 test | 0.25 |
statusquo | 0.003 | 0.05 | “Did the choice that you ended up picking for this decision involve sticking with the status quo or default option, or did it instead involve implementing a change?” | 0.25 |
educationScore | 0 | 0.01 | Education level | 0.03 |
numberOfOptions | 0 | -0.04 | “For this decision, how many options were you choosing between?” | -0.05 |
wantShould | 0 | -0.03 | “This decision involves a conflict between what I actually want to do and what I feel like I should do.” | -0.15 |
reversibility | 0 | -0.01 | “How reversible was the decision you made?” | -0.00001 |
extraversion | 0 | -0.03 | Extraversion measured by a standard Big5 test | 0.12 |
openness | 0 | 0.01 | Openness to experience measured by a standard Big5 test | 0.13 |
similarityOfOptions | 0 | -0.03 | “How similar to each other were all of the options you came up with for this decision?” | -0.04 |
daysMeditating | 0 | 0.001 | “How many days a week do you meditate for 5 minutes or more?” | 0.06 |
age | 0 | 0.02 | Age | 0.11 |
stressorLifeEventsSize | 0 | -0.02 | “In the last 12 months have you experienced a change that significantly affected your well-being?” | -0.15 |
intuitionReason | 0 | -0.06 | “Are you more the sort of person who trusts your intuition or that follows your reasoning?” (A higher score means they rely more on reason and less on intuition) | -0.11 |
femaleAs1MaleAs0 | 0 | 0.02 | Female gender | 0.05 |
householdIncomeScore | 0 | -0.03 | Income level | 0.02 |
conformism | 0 | -0.02 | “When you make decisions, how influenced are you by other people's opinions?; How much do you tend to not act like yourself when you are surrounded by other people?” | -0.12 |
selfConfidence | 0 | 0.02 | “How self-confident are you?” | 0.22 |
regretTendency | 0 | 0.01 | “When you make decisions, how much do you tend to regret your choice?” | -0.15 |
emotionalityRationality | 0 | 0.01 | “Where would you rate yourself on a scale from an emotional person to a logical person?” | 0.06 |
reflectiveness | 0 | 0.03 | “Are you the sort of person who tends to act quickly on decisions or deeply reflect on decisions?” | 0.01 |
minutesPerDaySmartphone | 0 | 0.02 | “How many minutes a day do you use your smartphone?” | -0.001 |
minutesPerDayGames | 0 | 0.02 | “How many minutes a day do you spend playing video games or mobile phone games?” | -0.009 |
minutesReading | 0 | -0.04 | “How many minutes a day do you spend reading?” | -0.12 |
negativeStressors | -0.02 | -0.04 | Negative items from the stressors’ list | -0.17 |
identifiesAsDepressed | -0.04 | -0.05 | “Do you believe that you are ‘depressed’?” | -0.29 |
GroupAssignement | -0.03 | -0.05 | They were assigned to be in the Intervention group (1) or the Control group (0) | -0.14 |
meYouHappy | -0.04 | -0.05 | “This decision involves a conflict between what makes me happy and what someone else wants me to do.” | -0.17 |
minutesListening | -0.07 | -0.06 | “How many minutes a day do you spend listening to podcasts, audiobooks, radio, or other audio content?” | -0.16 |
totalDecisionImportance | -0.08 | -0.06 | The number of important decisions they have to make in their life right now and their intensity measured by a test | -0.22 |
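The preprocessing and modeling behind the table above (mean imputation of missing values, z-scoring, then a penalized linear regression) can be sketched as follows. This is a minimal numpy-only illustration using closed-form ridge regression on toy data; the study's actual pipeline and regularization strengths are not specified in this report:

```python
import numpy as np

def zscore_impute(X):
    """Fill missing values (NaN) with the column mean, then z-score
    each column (subtract the mean, divide by the standard deviation)."""
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return (X - X.mean(axis=0)) / X.std(axis=0)

def ridge_coefficients(X, y, alpha=1.0):
    """Closed-form ridge regression, w = (X'X + alpha*I)^-1 X'y.
    Assumes X is already z-scored and omits the intercept."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

# Toy example with one missing value (NOT the study data):
X = zscore_impute([[1.0, float("nan")], [2.0, 1.0], [3.0, 2.0], [4.0, 3.0]])
y = X[:, 0]                              # target equals the first feature
w = ridge_coefficients(X, y, alpha=0.0)  # recovers w = [1, 0]
```

Lasso has no closed form and is typically fit iteratively (e.g., by coordinate descent), but the preprocessing is the same.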
We'll focus here on the variables that had non-negligible coefficients in both the ridge regression and lasso regression.
It seems that people ultimately tended to be happier with their decision if the option they chose was aligned with their identity (as reflected by greater agreement with the statement “The choice I picked for this decision reflects the kind of person I am better than the other choices.”). This is consistent with research on self-identity's role in the decision-making process, and it is one of the most interesting findings of our study.
People were also happier with their decisions if they had a higher degree of personal freedom (as reflected by higher levels of agreement to the statement “I have total personal freedom: I am free to do whatever I choose to do.”).
Other factors predictive of decision satisfaction included some positive personality traits like agreeableness, optimism, pragmatism, emotional stability, conscientiousness, and self-efficacy (faith in one’s abilities).
Choosing the status quo instead of a change was also slightly predictive of making satisfying decisions, but the coefficient was small (0.003).
Belonging to the intervention group turned out to be a negative predictor of the total decision score (as expected based on results discussed earlier). This further supports the findings reported in the sections above.
Other negative predictors of total decision score included depression, number of negative stressors, and making decisions out of a sense of obligation to someone else rather than out of a participant’s genuine desire.
We measured how many decisions in different aspects of life people had to make at the time of our study and how important these decisions were. We measured these by asking if they had a decision to make in each major aspect of life (health, family, relationships, finances, job, living arrangements); and then for each aspect, we asked participants to assess how important these decisions were on a 4-level Likert scale. A higher number of decisions to make was predictive of worse decision scores, supporting the idea that people need a certain amount of cognitive resources (energy and attention) to make satisfying decisions and that when they have too many decisions to make, they may decide less well.
The last negative predictor of the total decision score (which was not, however, as strong as the strongest positive predictors) was the number of minutes per day spent listening to audio content. It is hard to explain why this might be the case. This result is puzzling and should be examined in future research.
In summary, we can say that people with positive personality traits, a high degree of personal freedom, and less cognitive overload tended to be happier with their decisions. Completion of the decision-making program did not enhance their total decision score and appeared to have the opposite effect.
X. Qualitative analysis
In addition to the quantitative questions described above, we also asked participants qualitative (open-ended) questions, which collected information about the types of decisions people were making. The responses to these questions were coded and interpreted by one researcher, and from that work, 15 thematic categories emerged. (Each answer could belong to one or more categories depending on its content.) The number of open-ended responses that fell into each of the 15 categories is shown in this table:
Theme | Total count | Count for the intervention | Count for the control |
work choices | 48 | 25 | 23 |
relocation | 17 | 11 | 6 |
money / investments | 33 | 11 | 22 |
medical choices | 16 | 10 | 6 |
buying a house or renting | 21 | 9 | 12 |
family decisions | 24 | 9 | 15 |
home or car upgrade | 16 | 7 | 9 |
romantic relationships | 11 | 5 | 6 |
change of habits | 8 | 5 | 3 |
business decision | 8 | 4 | 4 |
lack of any decision | 7 | 4 | 3 |
education | 4 | 2 | 2 |
holidays | 6 | 2 | 4 |
personal values | 1 | 1 | 0 |
politics | 1 | 0 | 1 |
Among the themes we discovered in our qualitative material, work choices emerged as the most prevalent, with 48 instances where individuals considered different options of career paths and professional trajectories. Financial considerations were common, as 33 decisions revolved around money and investments. The prospect of relocation occurred in 17 instances and buying a house or renting in 21. 16 people made decisions about their medical treatment. Family considerations came into play 24 times, while romantic relationships only appeared 11 times. So it seems that our tool was used less often for more subjective decisions.
Our study participants also tended to use the tool more often to make decisions that had more objective rather than subjective outcomes. This may be because of the original phrasing when they were asked to write the decision they were considering, which was: "What's a decision you are considering where you'll know within 6 months how well it turned out?" Perhaps it’s more difficult in a limited amount of time to tell how “well” decisions with more subjective outcomes turned out.
The number of cases in each category for the intervention and the control differs somewhat, but they follow a similar pattern. Because participants chose what decision to focus on before any differences occurred between the control and intervention groups, we would not expect any systematic differences between the two.
We also qualitatively categorized responses each participant gave in response to the question "Please describe how this decision worked out for you" into "positive", "neutral" and "negative" based on how well it appeared that the decision turned out for the participant.
Outcome | Count for the intervention | % for the intervention | Count for the control | % for the control | Z test | p-value |
Positive outcomes according to the researcher | 50 | 50% | 59 | 62% | -1.687 | p=0.09 |
Neutral outcomes according to the researcher | 37 | 37% | 29 | 31% | 0.884 | p=0.39 |
Negative outcomes according to the researcher | 13 | 13% | 7 | 7% | 1.391 | p=0.17 |
Total | 100 | 100% | 95 | 100% |
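The Z tests in the table above compare the proportion of each outcome category across the two groups. A pooled two-proportion z statistic is one standard way to do this; small differences from the reported values can arise from using the unpooled formula, a continuity correction, or rounding:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic.

    One standard variant; the report does not specify which exact
    formula was used for its Z tests.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Positive outcomes: 50 of 100 in the intervention vs. 59 of 95 in the control.
z_positive = two_proportion_z(50, 100, 59, 95)
```

With the counts from the table, this yields a z statistic close to the reported -1.687 for the positive-outcomes row.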
Qualitatively, it appeared that participants' evaluations of how well their decisions turned out often depended on factors they couldn't control. But if these factors went in their favor, they were happy with the outcome; and if they went against them, they were unhappy. Some decisions involved the behavior of others, and that created additional factors beyond the control of participants. A potentially useful feature that the intervention could have included (but did not) is a submodule teaching participants the difference between things they can and cannot control, and the psychological benefits of focusing on what you can control rather than what you can't.
One challenge with evaluating outcomes is that it is hard to separate how well things turned out objectively (e.g., were the concrete outcomes what a person wanted) compared to how someone felt about how things turned out (e.g., were they looking for a silver lining in an otherwise bad situation). Some people tend to look at the worst in a good situation, while others look at the best in a bad situation; and these discrepancies between the objective and subjective outcomes made evaluating whether outcomes turned out positively, neutrally, or negatively especially difficult.
The results of the decisions, beyond just being positive, neutral, or negative, also differed in the intensity of outcomes, with some outcomes being very good and some very bad.
The decisions being made by participants also differed a great deal in how difficult those decisions were. Example decisions included: buying a house, upgrading an electrical panel at home, eating healthier, and getting a divorce.
XI. Exploration of the intervention program
We investigated all elements of the intervention program, trying to find out what might have caused the intervention group to be less satisfied with their decisions compared with the control group.
We ran a regression predicting the total decision score based on various elements of the program, and we obtained the following results:
Variable | Ridge Regression Coefficient (R²=0.02) | Lasso Regression Coefficient (R²=0.04) | Description | Correlation with total decision score |
finalconfidence | 0.075 | 0.09 | Confidence in the decision at the end of the intervention program | 0.36 |
confidenceInDecision | 0.06 | 0.04 | Increase in confidence, measured by a question asking participants at the end of the program if their confidence had changed, where 1 meant increased, -1 meant decreased, and 0 meant stayed the same | 0.29 |
change_in_confidence | 0.06 | 0.02 | The difference between the confidence prior to pros and cons and the final confidence | 0.23 |
hadNotPreviouslyConsideredTakingThisOption | 0.04 | 0 | Whether the selected option was first thought of during the program or had been considered earlier | 0.1 |
persontodiscusswith | 0.004 | 0 | Whether participants planned to discuss the decision with someone | 0.03 |
changedmind | 0.003 | 0 | At the start of the program, participants declared which option they were most likely to pick; this variable is a record of whether they ended up with that option as their final choice | 0.01 |
extrainfo | 0.001 | 0 | Whether participants planned to seek extra information | 0.01 |
invested_ornot | -0.0005 | 0 | Whether participants were highly invested in the decision | -0.06 |
a | -0.005 | 0 | The number of possible good qualities of a decision option | -0.03 |
b | -0.005 | 0 | The number of possible bad qualities of a decision option | 0 |
choseAsFinalOptionOneFromNarrowFramingExercise | -0.01 | 0 | Picking an option from the narrow framing exercise | 0.02 |
picked_recommended | -0.02 | 0 | If a participant picked the option recommended by the program | 0.03 |
manyconsequences_ornot | -0.02 | 0 | If the decision had many consequences | |
upsetting_ornot | -0.05 | 0 | If the decision was upsetting | -0.15 |
numberOfOptions | -0.06 | -0.05 | The number of options they chose from | 0.13 |
We obtained the optimal result for Lasso with R²=0.04. It showed that two factors were very slightly predictive of total decision satisfaction: final decision confidence and the number of options. It seems that the more options participants listed, the less likely they were to be satisfied with their decisions. This may be linked to final decision confidence (the degree to which participants believed that they made the right choice), because having a larger number of options to choose from may dilute a person's overall confidence that they've chosen the right one, a sort of post-decision form of decision paralysis. Given that R²=0.04 for this regression, we conclude that the predictive power of these factors is rather limited. If we remove most of the tested factors from this regression and leave only final decision confidence and change in confidence, the R² for both Ridge and Lasso is 0.08. (For Ridge: finalconfidence = 0.19, change_in_confidence = 0.12; for Lasso: finalconfidence = 0.24, change_in_confidence = 0.13.)
We also investigated whether people chose the option the program recommended to them. In the majority of cases (71 out of 99, or 72%), participants went with the program's recommendation. This might have led them to ignore their natural instincts and/or to feel a smaller sense of ownership or agency in the decision, which may in turn have resulted in lower decision satisfaction.
We believe that the final part of the program, which included comparing the pros and cons of different options, was not responsible for program users being less happy with their decisions than people in the control group. We came to this conclusion because final decision confidence was the main positive predictor of total decision satisfaction, and decision confidence went up during the pros and cons part of the program. (In the intervention group, the reported confidence in the decision increased from mean=70.05 prior to using the pro/con section of the program to mean=77.28 (t(94)=-2.82, p<0.005) after that section was completed.)
We additionally tested the hypothesis that the pro/con section could have caused people to be less happy with their decision by subtracting the prior decision confidence from the final decision confidence and correlating this change in confidence with the final decision score. We found a positive correlation (r2=0.23, p-value<0.02). So, the pro/con section of the tool increased people's confidence in their decision on average, and an increase in confidence was associated with ultimately being happier with the decision at the follow-up point. It is possible that the pro/con section lowered total decision scores for those people who became less confident in their decision after using it. However, even if this was the case, that group represents only a small portion of study participants (only 24 out of 99 participants in the intervention group ended up with lower confidence after using the pro/con portion than before using it), and is therefore very unlikely to explain the intervention group's lower total decision scores.
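The change-in-confidence analysis above can be sketched in a few lines. The per-participant numbers below are made up for illustration; only the procedure (subtract prior from final confidence, then correlate the difference with the decision score) mirrors the analysis described.

```python
import numpy as np

# Hypothetical values, not the study data: confidence (0-100) before and
# after the pro/con section, and the follow-up total decision score.
prior = np.array([60, 70, 80, 55, 65, 75, 90, 50])
final = np.array([72, 75, 78, 70, 80, 74, 95, 58])
score = np.array([2.5, 2.0, 1.5, 2.8, 3.0, 1.8, 2.6, 2.2])

change = final - prior                 # "change_in_confidence"
r = np.corrcoef(change, score)[0, 1]   # Pearson correlation coefficient
```

A positive `r` here would correspond to the study's finding that confidence gains during the pro/con section went along with higher follow-up satisfaction.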
In total, 75% of participants recorded an increase in their decision confidence. The mean change was 7.23 with std=14.28.
We wanted to check if there was a significant difference between the total decision scores of people whose confidence increased and people whose confidence didn't increase. To do that we divided the intervention group into two subgroups: one where people's confidence didn't increase by more than 5 points and one where it did increase by more than 5 points. (It would have been more intuitive to split the group at 0, but if we had done that, we would not have had enough observations in the no-change-of-confidence group to calculate a comparison.) We had 51 observations in the first group and 61 observations in the second. The mean total decision score in the first group was 1.67, and the mean total decision score in the second group was 2.26. The difference between these two groups was statistically significant with T(98)=2.58 and p-value=0.011. Hence, participants who increased their confidence in their decision by more than 5 points during the pro/con portion ended up happier with their decision at the follow-up point than those who didn't have at least a 5-point increase in confidence.
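The subgroup comparisons here and below are two-sample t-tests. As a transparent sketch, here is Welch's t statistic computed by hand on made-up subgroup scores (the real analysis may have used a pooled-variance test; the mechanics are similar):

```python
import numpy as np

def two_sample_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite df
    (does not assume equal variances)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Illustrative stand-ins for the two subgroups (total decision scores are
# composites; these particular numbers are invented for the sketch).
low_gain = np.array([1.2, 1.8, 1.5, 2.0, 1.6, 1.9])    # confidence gain <= 5
high_gain = np.array([2.1, 2.4, 2.6, 2.0, 2.3, 2.5])   # confidence gain > 5

t, df = two_sample_t(high_gain, low_gain)  # t > 0: high-gain group scored higher
```

The resulting t statistic is then compared against a t distribution with `df` degrees of freedom to obtain the p-value.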
We also compared both of these groups to the control. There was no difference between the group where confidence increased and the control: T(155)=-0.54, p-value=0.6. There was, however, a big difference between the group with no confidence increase and the control: T(145)= -3.39, p-value<0.001. It seems that people whose confidence in their decision didn’t increase were probably responsible for lower total decision scores in the intervention group than in the control group!
We also measured if people who changed their minds during the pro/con part of the program had lower total decision scores than people who stuck to their first choice; but there was no statistical difference between these two groups: T(99)= 0.14, p-value=0.89. (The mean total decision score for people who changed their minds was 2.03 and for people who didn’t change their minds was 1.99.)
Whether or not they changed their minds also had a very low coefficient in the regression predicting total decision scores (0.006), which suggests that changing people’s minds during the pro/con part of the program didn’t cause them to be less satisfied with their decisions. This suggests that the pro/con portion of the intervention was not causing participants to have lower total decision scores; because if it was, we would expect those who changed their mind between the beginning of the pro/con section and the end to be the ones who ended up with lower total decision scores.
XII. Conclusions
A. The reasons the program didn’t work
The Decision Advisor program included these four components:
brainstorming new options for the current decision
considering other sources of information that could help with the current decision
understanding and reflecting on cognitive biases that are relevant to the current decision
estimating the expected value of different options with the pro/con tool
In this study, we identified several factors related to decision satisfaction and decision regret. However, the decision-making program that we investigated did not improve them.
It seems that the four components included in the program made people somewhat more confident with their decisions at the time of making them, but did not make people happy with their decisions in the long term, and may have led to more missed opportunities in the future.
It is hard to be sure why the program didn’t work to improve people's happiness with their decisions and may have even made them less happy with their decisions. Here are a few speculative hypotheses about these results that could be investigated in future research:
Cognitive overload: The program was very extensive, and perhaps that created a cognitive overload in the study participants, which caused them to make worse decisions. (It took an average control participant 11.8 minutes to complete the program, and an average intervention participant 25.8 minutes. This means the intervention itself added about 14 minutes.)
Cognitive bias education backfiring: The intervention educated participants about cognitive biases that might be relevant to their situation by asking them detailed questions about the context surrounding their decision and then briefly discussing relevant cognitive biases based on their answers (e.g., if they said it was a decision where they had already invested substantially in one option, they were told about the sunk cost fallacy). Perhaps this cognitive bias education caused participants to make worse decisions, either because it nudged them away from good options (i.e., options that may have simply sounded like they related to cognitive biases), because it caused them to second-guess valid intuitions, or perhaps for some other reason.
A statistical fluke: While we found that the intervention performed worse than the control, the p-value was just under 0.05, right at the threshold of statistical significance. This means it's possible that this negative effect was random and that the decision program has no effect rather than a negative effect. Note that a p-value of 0.05 means that, if the program in fact had no effect, there would be a 5% chance of observing a difference at least this large purely by chance; it does not mean there is a 95% chance the result is meaningful.
Since p = 0.05 is the threshold for statistical significance and this p-value sits right on that border, we find it difficult to rule out the possibility that the difference in overall decision scores between the intervention and control groups was the result of a fluke. That being said, based on the data we have collected and our desire to err on the side of caution, we believe it is more likely that the decision program caused people to be less happy with their decisions than that our result is due to a statistical fluke.
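One way to build intuition for what sits behind this threshold is a simulation, assuming a simple two-sided z-test on normally distributed data (a made-up setup, not the study's actual test): when the null hypothesis is true, roughly 5% of experiments still cross p<0.05 purely by chance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 20,000 "experiments" in which the true effect is exactly zero:
# each experiment draws 30 observations from a standard normal distribution.
n_sims, n = 20_000, 30
samples = rng.normal(size=(n_sims, n))

# z statistic per experiment; |z| > 1.96 corresponds to a two-sided p < 0.05.
z = samples.mean(axis=1) * np.sqrt(n)
false_positive_rate = np.mean(np.abs(z) > 1.96)  # expected to be close to 0.05
```

This is why a single borderline p-value, like the one in this study, leaves real uncertainty about whether the observed difference reflects a true effect.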
We have investigated all aspects of the program to find out what might have caused people to be less happy with their decisions after using the program. After thorough analysis, we think it is less likely that it was the pro/con section of the program. In any new versions of the program, we will include the pro/con section but remove the parts of the program that are not clearly efficacious.
Considering too many options: The intervention included a section on "narrow framing", where participants were asked to think about what would make a decision option good or bad, and then to generate at least one more option in addition to those they had already listed. It is possible that this caused study participants to consider too many options, which reduced their ability to choose the best option or caused them to deliberate on their decision (rather than picking an option) for too long due to analysis paralysis or a "paradox of choice". This is potentially supported by the fact that those who received the intervention delayed their decision and were thus more likely to agree with the statement "I postponed this decision for too long" than the control group. We further tested this hypothesis by checking whether participants who ended up choosing as their final option (at the end of the tool) one of the options they came up with during the narrow framing exercise were more or less happy with their ultimate decision. Comparing the total decision scores of people who picked an option from the narrow framing exercise with those who didn't, the difference was not statistically significant (T(98)=-0.2, p-value=0.84, Mean1=1.99, Mean2=2.05). So it seems that people who considered more options had lower decision scores, but these options didn't have to come from the narrow framing exercise. (The number of options was a negative predictor of the total decision score: -0.05.)
Disrupting choice-supportive bias: The program may have interfered with choice-supportive bias and its usual role in the decision-making process. Specifically, the program asked participants to pick one of their favorite options and then justify why the option that was not picked might be better (in addition to why the picked option might be better). Reflecting on why the non-picked option might be better may have disrupted the usual processes that mitigate discomfort in decision-making: when people make a decision, they tend to justify the choice they made so they don't have to feel uncomfortable about not choosing the other option. By making participants compare the options, we may have stopped them from formulating these justifications. However, the part of the intervention that may have created this discomfort or prevented choice-supporting justifications was the pro/con portion, and we have evidence (discussed elsewhere in this article) that the pro/con portion did not cause participants with high confidence in their decisions to have lower total decision scores. Looking closer at what happened during the pros and cons tool, some participants increased their confidence in the decision, and their total decision scores were similar to the control. Participants whose decision confidence didn't increase had significantly lower total decision scores. Since factors related to emotional health were predictive of higher total decision scores, it is possible that these factors were mediating higher decision confidence.
Postponing to consider too much outside information: The intervention included a section where participants were asked to consider what other information they might want to collect that would be relevant to their decision and which people they might find useful to talk to about their decision. Since we have evidence that the intervention group felt they delayed their decision for too long, perhaps having them brainstorm what other information they should seek and what other people they could talk to caused this delay, ultimately leading to worse decision-making. We tested the difference between the total decision score of people who indicated they could "think of any additional information [they] could go out and acquire that would help [them] decide which option is best in this matter?" (the mean = 2.03) and those who didn’t (the mean = 1.99), and there was no statistically significant difference (T(98)=0.126, p=0.89). We also tested the difference between the total decision score of people who indicated that they could "think of a person with whom [they] have not discussed this decision, who could be useful to talk to?" (the mean = 2.04) and those who didn’t (the mean = 1.95), and there was no statistically significant difference (T(98)=0.334, p=0.74). This suggests that postponing to consider too much outside information was probably not the cause of intervention group participants having lower total decision scores. However, we can't rule out the possibility that merely thinking about whether or not you have others to talk to about the decision or other information to seek leads to lower total decision scores.
Here are some additional hypotheses we considered, though we think they are less likely because we have evidence against each of them based on the analyses we performed:
Accelerated decisions: Perhaps asking participants to select their choice for what they plan to do right at the end of the program caused participants to make a choice earlier than they really should have, leading to worse decisions. However, we have pretty strong evidence against this because the intervention group agreed more with "I postponed this decision for too long" (p=0.01) than the control group. If anything, the decision program seemed to cause people to postpone their decisions rather than make them decide too quickly. In the qualitative material, we did not spot a pattern of participants in the intervention group mentioning having waited too long to decide.
Being told what to pick: Perhaps due to the decision program carrying out an expected value calculation on behalf of the participants, participants felt pressured to select the option that came out the highest in the calculation, even if their intuition was that it was not the best choice (and perhaps their intuition incorporated valid factors that the expected value didn't weigh properly). If their intuition in these cases reflected important information (about the world or the person's goals and values), then this might have led to worse decision-making. However, the variables we collected related to individual differences in the use of intuition ("Are you more the sort of person who trusts your intuition or that follows your reasoning?", "Where would you rate yourself on a scale from an emotional person to a logical person?", and "Are you the sort of person who tends to act quickly on decisions or deeply reflect on decisions?") were not predictive in our regression predicting total decision score. This result is, perhaps, some evidence against this hypothesis. Additionally, if this hypothesis were true, one would expect that those who ended up (right after they completed the pro/con portion of the tool) changing what option they were leaning toward (relative to right before the pro/con section) would have a worse total decision score than those who didn't change their option during the pro/con tool. However, we found no difference in the total decision score for those who changed their mind compared to those who didn't. (The correlation between people who changed their minds and the total decision score in the intervention group was small: r2=0.014, p=0.89.)
Externalization: The program placed the responsibility for the decision outside of the decision-maker (by calculating an expected value of each option for them), though participants in our study proved to be happiest with their decisions when those decisions were aligned with their identity (in particular, agreeing that "The choice I picked for this decision reflects the kind of person I am better than the other choices" had a correlation of 0.26 with total decision score). That suggests that making decisions that feel like your own might lead to the highest decision satisfaction and lowest decision regret. However, since this expected value calculation occurred in the pro/con section of the tool, and we have evidence (discussed elsewhere in this article) that the pro/con section of the tool wasn't the cause of lower total decision satisfaction scores in the intervention group if they had high confidence in their decisions, this is probably not the explanation. It would be interesting to see if people are also less satisfied with their decisions after seeking advice from friends, but that exceeds the scope of our research.
B. Potential ways to improve the tool
It seems that the theory of maximizing expected value, used as the basis of the Clearer Thinking Decision Advisor program and combined with bias-reduction exercises, didn't address enough aspects of decision-making to be an effective support for this complex process.
Other aspects that affect the decision-making process include an individual's values, beliefs, ethical principles, personal and professional goals, and personal traits. Understanding all of these factors and their interplay is essential for making better decisions and for being aware of potential biases and influences that can affect the quality of choices made in various aspects of life.
Perhaps the program didn't take into consideration enough different aspects of decision-making, especially the emotional, social, and cultural aspects. Practical aspects of decision-making, like the amount of time and energy available and environmental circumstances, also matter. It is hard to overestimate the role of emotions in decision-making. The ability to recognize and manage one's own emotions, as well as the emotions of others, can impact decision-making in both personal and professional contexts. Even a person's current emotional state or mood can affect their risk tolerance and decision-making style. The opinions and actions of peers, family, and social groups can also have a powerful impact on decision-making. Cultural values and norms can shape the priorities and preferences of individuals and influence their decisions.
The Decision Advisor program didn’t include elements that we found predictive in our analysis. We found that emotional stability was a positive predictor of decision satisfaction, while negative stressors and making decisions to make someone else happy were negatively correlated with decision satisfaction. Future versions of the program could take these findings into consideration.
We hope that our findings in this study will contribute to the creation of more effective future tools for decision-making.
Appendix A: Steps of the Decision Advisor Program
Participants pick a decision that they want to make and describe it via predefined categories ("Work or employment", "Interpersonal Relationships", "Lifestyle (residence, diet, schedule, etc.)", "Education", and “Finances").
Participants list options that could be chosen for their decisions.
Participants learn about the narrow framing bias: a tendency to consider too few options. They list at least one more option to help avoid narrow framing.
Participants consider what qualities make a good option versus a bad option.
Participants are encouraged to seek additional information if it would be helpful.
Participants are encouraged to seek social support (i.e., a conversation about the decision with someone who will have useful thoughts or listen in a helpful way).
Participants are encouraged not to avoid an option that may be beneficial in the long term merely because that option would feel stressful or upsetting in the short term.
Participants learn about the sunk cost fallacy and are warned not to stick with a choice just because they have invested a lot of past resources into it; what matters is the future costs and benefits.
Participants pick at least two of the options to focus on.
Participants consider the duration of each outcome’s consequences (i.e., not just whether an outcome will have good or bad consequences, but also how long those consequences are likely to persist).
Participants are asked to consider different types of consequences their decisions might have. They are also asked to indicate the likelihood of these consequences, and how long each consequence is likely going to last.
Participants are encouraged to be honest with themselves when considering the pros and cons of each option.
Participants indicate how confident they are with the decision.
Participants describe which option they are most likely to choose and why. Then, for the other options, they describe how the chosen option might be worse than the ones not chosen.
Participants assess the likelihood and importance of each pro and con.
The program calculates an expected value score for each of the options based on the likelihood and importance ratings that the user assigned to the pros and cons.
Participants choose an option. They also indicate whether or not this option was considered before completing the program or if it's one they only thought of during their use of the program.
Participants indicate how confident they are about their decisions.
Participants select a follow-up date to check how the decision turned out (i.e., when they think they will know how well the decision went).
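The scoring step in the list above (computing an expected value from likelihood and importance ratings) can be sketched as follows. The exact formula Decision Advisor uses is not specified here; this assumes a simple sum of likelihood-weighted importance over pros minus the same sum over cons, with likelihoods expressed as probabilities in [0, 1]. The option and its pros/cons are invented for illustration.

```python
def option_score(pros, cons):
    """Expected-value-style score for one option.

    pros, cons: lists of (likelihood, importance) pairs, where likelihood
    is a probability in [0, 1] and importance is a user-assigned weight.
    """
    gain = sum(p * w for p, w in pros)
    loss = sum(p * w for p, w in cons)
    return gain - loss

# Hypothetical option "accept the job offer":
job_offer = option_score(
    pros=[(0.9, 8), (0.5, 6)],  # e.g. higher salary (likely), growth (maybe)
    cons=[(0.7, 5)],            # e.g. longer commute (likely)
)
# 0.9*8 + 0.5*6 - 0.7*5 = 7.2 + 3.0 - 3.5 = 6.7
```

The program would compute such a score for each focused option and surface the highest-scoring one as its recommendation, which is the "picked_recommended" comparison analyzed in the body of the report.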
Appendix B: Variables measured in the study
The survey filled in by both the control and intervention groups prior to thinking about their decision included:
conformism (a tendency to behave in a way that is expected by others)
a tendency to regret
optimism
pragmatism
a tendency to be more logical versus more emotional
a tendency to follow intuition versus a tendency to follow reason
reflectiveness
self-confidence
a sense of self-efficacy (a feeling that someone can achieve whatever they want)
smartphone use (minutes per day)
video games use (minutes per day)
reading habits (minutes per day)
music listening habits (minutes per day)
stressful life events
depression
meditation experience
mental health conditions
degree of personal freedom
the number of decisions pending
The survey filled in by both the control and intervention groups after thinking about their decision included:
if the decision was made out of a genuine desire or a sense of obligation
if the decision was made for their own happiness or to please someone else
if the decision made was coherent with their self-identity
if they considered various or similar options
how many options they considered
if the decision made was a change or an adherence to the status quo
if the decision made was reversible
Appendix C: Demographics
The ethnic make-up of the study population was as follows: 725 of the participants were Caucasian, 68 were Black, 65 were Asian, 62 were Latino, 15 were South East Asian, 19 were Native American, and 39 were East Asian.
The mean age of the studied group was 40, with a standard deviation of 11.6, a minimum of 20, a first quartile of 32, a third quartile of 46, and a maximum of 77.
Most of the participants finished their education at an undergraduate level (403), 188 completed high school, 129 had an associate degree, 112 had a Master's, 49 had a technical or vocational degree, 29 had a professional degree, 13 had a PhD, 1 finished after the 8th grade, and 1 had no education at all.
Appendix D
The variables from the program included in the additional exploratory regression are:
Having additional information that would help with the decision. It was measured with a question: Can you think of any additional information you could go out and acquire that would help you decide which option is best in this matter? (“extrainfo”)
Having a person to talk to who could be helpful with the decision. It was measured with the question: Can you think of a person with whom you have not discussed this decision who could be useful to talk to? (“persontodiscusswith”)
Number of good qualities listed for the decision. (“A”)
Number of bad qualities listed for the decision. (“B”)
Prior confidence in the decision subtracted from final confidence. It was measured with the question: On a scale of 0% to 100%, how confident do you feel that you're going to choose the best available option for this decision? (“change_in_confidence”)
Final confidence in the decision. It was measured with the question: On a scale of 0% to 100%, how confident do you feel that you're going to choose the best available option for this decision? (“finalconfidence”)
Whether the decision that seemed optimal at the start of the program was changed to a different option at the end. It was recorded as 0 or 1. (“changedmind”)
How many options were considered for the decision. (“numberOfOptions”)
If the option considered was considered before participation in the program. (“hadNotPreviouslyConsideredTakingThisOption”)
If participants had invested a lot of resources in the decision. (“invested_ornot”)
If the decision had many consequences. (“manyconsequences_ornot”)
If the decision was upsetting. (“upsetting_ornot”)
If participants reported an increase in their decision confidence. (“confidenceInDecision”)
If participants picked what was recommended by the program. (“picked_recommended”)
If participants picked an option from the narrow framing exercise. (“choseAsFinalOptionOneFromNarrowFramingExercise”)