Do people have a tendency to agree on anonymous surveys administered online?
The utility of self-report surveys depends on the degree to which research participants provide accurate and truthful responses. Unfortunately, a participant’s true beliefs do not appear to be the only factor influencing how they respond. Other things being equal, people are thought to be more likely to agree than to disagree with survey questions, a tendency called acquiescence bias. People are also thought to engage in satisficing: selecting the first response that is a ‘good enough’ reflection of their beliefs, rather than considering all response options and choosing the one that best reflects their beliefs. A third suspected contributor to participants’ response patterns is social desirability bias. An important task for survey designers is to understand these factors, how they interact, and how to correct for them, in order to minimize the degree to which participants' responses are systematically distorted by things other than their true beliefs.
For example, Kuru and Pasek (2016) estimated that 10 to 20% of research participants are affected by acquiescence bias. This effect is commonly believed to systematically distort research results. Understanding how it occurs, and under what circumstances it becomes more pronounced, can allow survey designers to plan for it and minimize its impact on survey validity. The social context of a survey seems to be especially important here: agreeing appears to be a social skill that we learn in an attempt to be more likable. Acquiescence bias has been found to increase when the person asking the questions is perceived to have a higher social status (Krosnick, 1999; Pasek & Krosnick, 2010).
We wanted to investigate the degree to which the human tendency to agree clouds survey responses. To do this, we took a popular personality test - the Big 5 SAPA Personality Test by David M. Condon - and administered it with two different presentations of the answer options. This study used data on 703 people in the U.S. recruited via Positly (https://positly.com/). All participants provided written informed consent and were reimbursed for their participation. Half of the participants were randomized to mark their answers on a scale from Strongly Agree (at the top) to Strongly Disagree (at the bottom). The other half answered the same set of questions but were presented with a reversed scale, from Strongly Disagree (at the top) to Strongly Agree (at the bottom).
[Figure: the two ways answer options were presented to participants. Answer set 1: the 'AgreetoDisagree' scale; answer set 2: the 'DisagreetoAgree' scale.]
We measured not only how much participants tended to agree on both scales but also how often they picked an answer near the top of the list (regardless of whether the top answer was 'agree' or 'disagree'). We tracked this second aspect because acquiescence (the tendency to agree) is sometimes discussed in the context of satisficing (the tendency to pick the first 'good enough' option, instead of seeking the answer that fits best). We suspected that satisficing might amplify the effect of acquiescence when the answer options start with 'agree' at the top of the page: as study participants read from top to bottom, they may simply stop at the first response they consider 'close enough' to their real answer.
Every time a participant picked an ‘agree’ answer, we added points to their acquiescence score: +3 for ‘Strongly agree,’ +2 for ‘Agree,’ and +1 for ‘Slightly agree.’ Likewise, we subtracted points for every ‘disagree’ answer: -3 for ‘Strongly disagree,’ -2 for ‘Disagree,’ and -1 for ‘Slightly disagree.’ Averaging these points across a participant’s answers gave their acquiescence score: our measurement of their tendency to agree.
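A minimal sketch of this scoring scheme in Python (the function name and input format are illustrative assumptions; the point values follow the description above):

```python
# Point values for each response option, as described above.
# (A neutral midpoint, if the scale had one, would score 0.)
SCORE = {
    "Strongly agree": 3, "Agree": 2, "Slightly agree": 1,
    "Slightly disagree": -1, "Disagree": -2, "Strongly disagree": -3,
}

def acquiescence_score(responses):
    """Average signed agreement across one participant's responses."""
    points = [SCORE[r] for r in responses]
    return sum(points) / len(points)

# Toy example: one participant's answers to three questions.
print(acquiescence_score(["Agree", "Slightly disagree", "Strongly agree"]))
```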
The majority of acquiescence scores were higher than 0, meaning that most answers fell on the ‘agree’ side of the scale (rather than the ‘disagree’ side). As shown in Table One, the minimum score was -0.7 and the 25th-percentile score was 0.07. The mean acquiescence score for the sample was 0.34, and the standard deviation (std hereafter) was 0.43. The histogram below shows the distribution of acquiescence scores in our study sample.
The picture below shows how acquiescence depended on the order in which answers were presented. Both presentation orders resulted in high acquiescence scores. Interestingly, presenting the 'disagree' option first didn’t change people’s inclination to agree, suggesting that satisficing was not a factor influencing the way in which participants answered questions.
The acquiescence scores showed that people tend to agree rather than disagree. It is worth noting that most of the average scores fell within the -0.5 to +1 range, which means that people tended to avoid extreme responses.
Table One. Summary statistics: acquiescence scores

  mean    0.34
  std     0.43
  min    -0.70
  25%     0.07
  75%     0.48
  max     2.68
As -1 corresponds to ‘Slightly disagree,’ the minimum score of -0.7 means that the participant with the lowest acquiescence score came close, on average, to slightly disagreeing with the questions. As shown in Table Two, the mean acquiescence score for the ‘AgreetoDisagree’ scale (0.36) was higher than for the ‘DisagreetoAgree’ scale (0.32), although the difference between the regular and the reversed scale was not statistically significant (t = -1.14, p = 0.25).
Table Two. Acquiescence scores: the two presentation orders

                    mean   std    variance
  AgreetoDisagree   0.36   0.46   0.21
  DisagreetoAgree   0.32   0.41   0.16
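The comparison between the two presentation orders can be sketched as an independent two-sample t-test. Everything here is illustrative: since the raw data aren't reproduced in this post, we simulate stand-in groups using the reported means and standard deviations (and an assumed ~350 participants per group, from the 703 total).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins for per-participant acquiescence scores
# (assumed group size ~350; means/stds as reported above).
agree_first = rng.normal(0.36, 0.46, size=350)
disagree_first = rng.normal(0.32, 0.41, size=350)

# Independent two-sample t-test comparing the two scale orders.
t_stat, p_value = stats.ttest_ind(agree_first, disagree_first)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```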
Since people tended to agree regardless of the order in which answer options were presented, participants chose options closer to the top of the list when the presentation order was ‘AgreetoDisagree,’ and options closer to the bottom when it was ‘DisagreetoAgree.’ Consequently, participants presented with the ‘AgreetoDisagree’ scale had significantly different satisficing scores than those presented with the ‘DisagreetoAgree’ scale (t = -20.92, p = 7.41e-76). However, this difference can be explained simply by the fact that people tended to agree with these questions rather than disagree, rather than by satisficing per se.
As we’ve shown above, people tended to answer ‘agree’ more often than ‘disagree’ when completing the personality test, on average. But we were also interested in whether the level of acquiescence depended on the social desirability of an affirmative answer to a question. We tested this by dividing the set of questions into three subsets, each with a different social desirability level. Based on previous work, we divided questions into three categories: socially desirable, neutral, and socially undesirable questions. (Here, a ‘socially desirable’ question is defined as a question to which an affirmative answer is perceived to be socially desirable.) The mean acquiescence score in response to socially undesirable questions was -0.2, the mean score in response to neutral questions was 0.18, and the mean score in response to socially desirable questions was 0.36. So, as expected, acquiescence was less pronounced for undesirable and neutral questions, although most of the distribution of acquiescence remained on the positive (‘agree’) side.
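This by-category comparison amounts to grouping scored responses by question category and averaging. A toy sketch with pandas (the long-format layout, column names, and the scores themselves are assumptions for illustration, not the study's data):

```python
import pandas as pd

# One scored response per row, on the -3..3 agreement scale,
# tagged with the question's social-desirability category.
responses = pd.DataFrame({
    "category": ["undesirable", "neutral", "desirable",
                 "undesirable", "neutral", "desirable"],
    "score": [-1, 0, 1, 0, 1, 0],
})

# Mean acquiescence per desirability category.
category_means = responses.groupby("category")["score"].mean()
print(category_means)
```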
After comparing means of acquiescence for positive, neutral, and negative questions, we wanted to verify these results from a different angle, so we examined a set of questions in which each socially desirable question was paired with a relevant socially undesirable question. For example: ‘I tell a lot of lies’ was paired with ‘I tell the truth.’ There are 15 items in the SAPA test that can be paired with items of opposite meaning, so we tested acquiescence on this subset of 15 pairs (30 questions).
The average acquiescence for the subset of socially desirable questions was 0.12, and the average for the subset of socially undesirable questions was -0.9. The mean of the whole subset of 30 questions was 0.03, which remains on the positive (‘agree’) side of the scale, but only very slightly so.
So, were responses really being driven by acquiescence (the tendency some people may have to agree, regardless of the question)? To investigate this, we used linear regression to check how much of people’s answers could be explained by their acquiescence (tendency to agree) if we account for social desirability and satisficing. In this context, satisficing can be understood as the differences in affirmative responses based on presenting the answer options from ‘agree’ down to ‘disagree,’ compared to the reverse order. With linear regression, we predicted the responses of all participants to all personality questions, using social desirability and presentation order as the independent variables. In this context, acquiescence corresponded to the constant term or intercept in the linear regression model: essentially, any leftover tendency to agree, once the other factors were accounted for.
To code the social desirability of each question, we divided them into three categories: ‘desirable’ traits (coded as +1), ‘undesirable’ traits (coded as -1), and ‘neutral’ traits (coded as 0).
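A minimal sketch of this regression setup, using statsmodels (an assumption: the post doesn't say which software was used). Since the raw responses aren't included in the write-up, the data here are simulated to be roughly consistent with the coefficients reported below; column names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000  # one row per (participant, question) response

df = pd.DataFrame({
    # Social desirability: -1 (undesirable), 0 (neutral), +1 (desirable)
    "desirability": rng.choice([-1, 0, 1], size=n),
    # Presentation order: 1 = 'AgreetoDisagree', 0 = 'DisagreetoAgree'
    "agree_first": rng.choice([0, 1], size=n),
})
# Simulated responses on the -3..3 agreement scale, built to roughly
# match the reported effects (0.9 per desirability step, 0.04 for
# order, intercept 0.2) plus noise.
df["response"] = (0.2 + 0.9 * df["desirability"]
                  + 0.04 * df["agree_first"]
                  + rng.normal(0.0, 1.0, size=n))

# Acquiescence corresponds to the intercept once the other
# factors are accounted for.
model = smf.ols("response ~ desirability + agree_first", data=df).fit()
print(model.params.round(2))
```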
On average, when a question went up a category in social desirability (e.g., from ‘undesirable’ to ‘neutral,’ or from ‘neutral’ to ‘desirable’), respondents’ average agreement increased by 0.9 units on the agreement scale, which ran from -3 to 3; moving up by 0.9 units corresponds to moving up by 15% of the range of the scale. In contrast, answers presented from ‘agree’ down to ‘disagree’ gained an additional average agreement of only 0.04 compared to ones presented from ‘disagree’ down to ‘agree.’ This suggests a lack of satisficing behavior in our study sample.
The constant (intercept) term in the linear regression - which corresponds to the average level of agreement across all study participants (on the -3 to 3 agreement scale) - was 0.2, reflecting 3.3% of the range of the scale.
[Table: changes in participant agreement (on the -3 to 3 agreement scale) due to different factors, according to the linear regression model.]
From these results, it appears that participants in our sample did engage in socially desirable responding. In contrast, we did not find evidence of satisficing, and acquiescence in our data appeared to be minimal, once socially-desirable responding was accounted for. This raises the question of whether socially desirable responding and acquiescence bias may be confused with each other in certain settings.
Consider a questionnaire that happens to have more questions asking people about whether they have positive traits than questions asking them if they have negative traits. If a person agrees to these statements more often than they disagree, this could be caused by any of the following three forces:
Acquiescence: the tendency to agree, regardless of what is asked
Socially-desirable responding: the tendency for people to say they have positive traits and don't have negative ones
Personality: in cases where participants actually are high in the personality traits they are providing affirmative responses to
When surveys have more positive statements than negative ones, acquiescence and socially desirable responding will both lead to more agreement, hence it is difficult to tell which (if either) is influencing responses! Acquiescence can easily be confused for what is actually socially desirable responding in these cases. In the case of our study, it appears that socially desirable responding is a much greater force than acquiescence, and there may not even be much acquiescence once social desirability is accounted for.
But why would our study lack acquiescence when other studies find it? In some cases, those studies may not be cleanly separating the effects of acquiescence and socially desirable responding. Another hypothesis for the relative lack of acquiescence is that we used an anonymous online administration of the survey. In such a context, survey participants may not perceive any social pressure to agree, whereas in a face-to-face or phone survey administered by a person (especially when the interviewer is perceived by a participant as an authority figure), acquiescence may be much stronger.
Below, we have provided links to all of the study materials for this study. If you run any analyses on our data, or use our study materials, and reach any interesting conclusions, please let us know! (firstname.lastname@example.org)
Krosnick, Jon A. (February 1999). ‘Survey Research’. Annual Review of Psychology. 50 (1): 537–567.
Kuru, Ozan; Pasek, Josh (2016-04-01). ‘Improving social media measurement in surveys: Avoiding acquiescence bias in Facebook research’. Computers in Human Behavior. 57: 82–92.
Pasek, Josh; Krosnick, Jon A. (2010-02-25). Leighley, Jan E. (ed.). ‘Optimizing Survey Questionnaire Design in Political Science’. The Oxford Handbook of American Elections and Political Behavior.