What’s scaring people about AI? We ran a study to find out.
- Travis M.
Short of time? Read the key takeaways.
🌍 People worry more about AI's societal impacts than its personal ones. While participants were almost evenly split on personal concern, 55% expressed high concern about societal effects, and only 2.4% were more worried about themselves than society.
🏆 Misinformation, scams, and authoritarian control top the list. These three concerns ranked highest among all 16 issues studied, with participants rating them substantially more worrying than the remaining 12 concerns, which all clustered closely together.
🗳️ AI concern appears not to be a partisan issue. Political alignment had no statistically significant effect on overall concern about AI, suggesting it remains a non-partisan topic, though conservatives showed slightly less worry about inequality and discrimination specifically.
🧩 Demographics barely explain who worries about AI. The regression model explained only 8% of variance in concern scores, and only spirituality and being a woman showed non-negligible predictive power (but even these were very modest effect sizes). This provides more evidence that concern is broadly distributed across society.
🤖 AI suffering stands alone as the least concerning issue. It scored significantly lower than all 15 other concerns, with no overlapping confidence intervals, likely because many see AI consciousness as implausible or exclude AI from moral consideration entirely.
A lot of people are worried about AI. What are their worries? How worried are they? Are some groups of people more worried than others? We ran a study to find out.
In this article, we explain 16 concerns about AI that you might find it valuable to know about and discuss. And we explore, based on data we collected, how worried people in the US are about each concern.
To whet your appetite, here are some questions that our study offers insights into. Can you predict what we found before we tell you the answers?
Are conservatives more, less, or equally concerned about AI compared with progressives?
What about gender - are men or women more likely to be concerned?
Does AI-related knowledge affect how concerned people are?
What are people most concerned about when it comes to AI?
How low or high is the general level of concern about AI in the US population?
Have you made your predictions? Okay, let’s get into the study. And if you want to see a more detailed and more technical report about our study, you can do so here.
How we studied AI concern
We started by scouring the internet for expressions of concern about AI and compiling a list of common concerns, based on what we found (as well as our own background experience of hearing people express concerns). You can see the list of concerns in the “What Are People Worried About?” section, below.
After that, we recruited 403 participants through our participant recruitment platform, Positly.com, and started by asking them some general questions about their level of knowledge on the topic of AI and their overall concerns about its impact on their lives and society. After that, we showed them information about the 16 potential AI-related concerns we identified (one potential concern at a time, in a random order).
For each potential concern, participants were asked to indicate their level of actual concern about it on a 5-point Likert scale from “Not at all concerned” (which was assigned the value 0) to “Extremely concerned” (which was assigned the value 4).
Finally, at the end of the study, participants were asked again about their general levels of concern about AI (in their own lives and for society), to see whether participating in the study and seeing information about so many potential concerns changed their level of concern, and then they were asked some demographic questions.
Now, let’s dive into the results! We'll start with results about overall concern (before diving into the 16 specific concerns).
How concerned are people, overall?
In order to approximate each participant’s overall level of concern, we calculated (for each person) the mean of their answers to the 16 specific concerns. Here’s what the distribution of overall (mean) concern looks like:

Image shows the number of participants whose average concern about AI (calculated as the mean of their responses to the 16 issues) falls within each range of values.
As you can see, the distribution of mean concern scores is skewed towards higher values: 75% of participants have mean scores above 2 ("Somewhat concerned") and the median overall concern score is 2.63 out of 4 (between "Somewhat concerned" and "Moderately concerned"). For discussion of whether it makes sense to talk about a single ‘overall’ level of concern about AI for each person (rather than only talking about their specific levels of concern about each of the 16 issues we identified) see the full study report.
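To make the calculation concrete, here is a minimal sketch of how a per-participant overall concern score can be derived as the mean of the 16 item responses. The data here is simulated stand-in data (the study's real responses aren't reproduced in this article), so the printed numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the real data: 403 participants x 16 concerns,
# each response on the 0 ("Not at all concerned") to 4 ("Extremely concerned") scale.
responses = rng.integers(0, 5, size=(403, 16))

# Each participant's overall concern = the mean of their 16 item responses.
mean_concern = responses.mean(axis=1)

print(f"median overall concern: {np.median(mean_concern):.2f}")
print(f"share of participants above 2: {(mean_concern > 2).mean():.0%}")
```

With the study's actual data, this is the distribution summarized above (median 2.63, 75% of participants above 2).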
Are people more concerned about society or themselves?
When participants began our study, we asked them how concerned they are about AI’s effects on their lives and on society. Here’s what we found:

As you can see in the chart, people express greater concern about the impacts of AI on society than its impacts on their own lives. When asked about their own lives, participants were almost perfectly divided between low concern (36% chose one of the two lowest options) and high concern (35% chose one of the highest two). However, when asked about society, the distribution looked quite different. Fewer than a quarter of people (21%) expressed low concern, while a clear majority (55%) expressed high concern.
Looking at individual participants’ answers, we found a strong asymmetry: 43% of participants reported more concern about AI’s effects on society than on their own lives, but the reverse was very rare - only 2.4% of participants were more concerned about themselves than society. The remaining 54% reported equal levels of concern.
On average, concern about societal effects was half a point (0.52) greater than concern about personal effects (with the scale being 0="Not at all concerned" to 5="Extremely concerned"). That’s about 10% of the scale’s maximum. This difference was statistically significant (p < 0.001).
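The report doesn't state which significance test was used, but a paired t-test is one standard way to test a within-person difference like this. Here is a sketch on simulated stand-in ratings (the variable names and data are illustrative, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical stand-in data: each participant's personal and societal concern,
# on the 0 ("Not at all concerned") to 5 ("Extremely concerned") scale.
personal = rng.integers(0, 6, size=403)
societal = np.clip(personal + rng.integers(0, 3, size=403), 0, 5)

# A paired t-test compares the two ratings within each participant,
# testing whether the mean difference is distinguishable from zero.
t_stat, p_value = stats.ttest_rel(societal, personal)
print(f"mean difference: {(societal - personal).mean():.2f}")
print(f"p-value: {p_value:.2g}")
```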
How does concern about AI vary with demographics?
We wanted to know whether certain demographics are more concerned about AI than others. To determine this, we fit a multiple regression model to see whether any of the following demographic traits were predictive of overall concern (calculated as the mean concern across all 16 potential concerns):
Age
Gender
Education
How much one knows about AI
How conservative or progressive one is
How fiscally conservative or progressive one is
How socially conservative or progressive one is
Class in society
Household income
How urban or rural one's area is
Religiosity
Spirituality
Of all the variables we considered, only being a woman (β = 0.25, p = 0.008) and spirituality (β = 0.12, p = 0.003) had statistically significant effects with non-negligible effect sizes. This suggests that being a woman and being spiritual are very slightly predictive of greater concern about AI. Even so, these effect sizes are modest, and the model captured only 8% of the variance in people's mean scores (R² = 0.08). Combined with the distribution of mean concern scores discussed above, this provides evidence that concern about AI cuts across all of the demographic divisions we looked at.
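As a sketch of what fitting such a regression looks like, here is a plain-numpy version on simulated stand-in predictors. The predictor choices and effect sizes below are illustrative assumptions (built to mimic the reported pattern of small betas and low R²), not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 403
# Hypothetical stand-in predictors (the real study used twelve demographic traits).
age = rng.normal(45, 15, n)
woman = rng.integers(0, 2, n).astype(float)
spirituality = rng.integers(0, 7, n).astype(float)
conservatism = rng.integers(0, 7, n).astype(float)

# Simulated outcome: small effects for being a woman and for spirituality,
# plus a lot of noise - mimicking the reported pattern (modest betas, low R^2).
y = 2.3 + 0.25 * woman + 0.12 * spirituality + rng.normal(0, 0.8, n)

# Ordinary least squares (design matrix with an intercept column).
X = np.column_stack([np.ones(n), age, woman, spirituality, conservatism])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# R^2 = 1 - residual variance / total variance.
residuals = y - X @ beta
r_squared = 1 - residuals.var() / y.var()
print(beta[2], beta[3])  # estimated coefficients for woman and spirituality
print(r_squared)         # expect a small value, akin to the study's R^2 = 0.08
```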
Two facts about this surprised us the most:
1. Concern about AI has not been politicized. Yet. The fact that none of the political alignment results were predictive of overall concern about AI is some evidence that this is currently a non-partisan issue (at least at the time our data were collected, in October 2025). This is an especially good thing because, when issues become politicized, it tends to become harder to make progress on them. Typically, if an issue is associated with one political ‘side’, the other side will want to fight against proposals to solve it (or at least not want to say the other side is right about the issue). We hope that not being politicized means that the 16 specific issues discussed below are more tractable than they otherwise would be.
While overall concern was not linked to being progressive or conservative, two specific issues did correlate with conservatism, as shown below: inequality caused by AI (r = -0.22, p < 0.001, n = 403) and bias and discrimination (r = -0.16, p = 0.002, n = 403). This means that, in our sample, being more conservative was very slightly associated with being less concerned about those two issues (and not associated with any of the other 14). However, these effect sizes are modest, particularly the latter one.
2. How much you know about AI is not predictive of how concerned you are. You might think that greater knowledge about AI would either increase concern (through greater knowledge of the risks) or decrease it (through greater knowledge of the limitations). However, our participants’ self-reported level of knowledge about AI was not predictive of concern in our linear regression model.
We also looked for non-linear relationships (e.g., expertise above a certain threshold causes a sharp spike or drop in concern), but additional tests produced statistically insignificant results with negligible effect sizes. Thus, our study provides no evidence of a relationship between AI-related knowledge and concern about AI.
There is at least one limitation worth noting: our ability to test for non-linear relationships in this data is limited, because our sample contained very few people at the extreme ends. Only 1 person (out of 403) reported having “no knowledge” about AI, and only 10 people reported being experts (4 “world class expert[s]” and 6 “expert[s] but not world class”). Because of this, any effects confined to these extreme categories would be difficult to detect. So we can't rule out the possibility that, for instance, top experts hold different views about AI than the broader public.
What Are People Worried About?
We’ve broken this section down into subsections - one for each of the 16 concerns we explored. We’ll address them in order of how concerned people are about them (on average), going from most concerned to least concerned. That means we’ll be addressing them in the order shown in the image below. As you read through the potential concerns about AI, it may be valuable to ask yourself: which of these are your biggest concerns, and which do you think are not that concerning?

This ordering is itself interesting. Many adjacent items have overlapping 95% confidence intervals (represented by the black bar at the end of each blue bar). By design, 95% of the time, a confidence interval calculated this way (from whatever random sample was actually used in the study) will contain the true mean - the value we would get if we measured the entire population. So, we should be careful not to read too much into differences in rank between items with substantially overlapping 95% confidence intervals. Our study does provide evidence for the ordering presented above, but items with substantially overlapping confidence intervals are less robustly ordered than those without such overlap. That being said, some patterns appear robust. Let’s discuss a couple.
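For readers curious how a 95% confidence interval for a mean like these is typically computed, here is a minimal sketch using the t distribution, on simulated stand-in responses for a single item (the numbers are illustrative, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical stand-in responses to one concern item, on the 0-4 scale.
item = rng.integers(0, 5, size=403)

mean = item.mean()
sem = stats.sem(item)  # standard error of the mean
# 95% CI for the mean, using the t distribution with n - 1 degrees of freedom.
low, high = stats.t.interval(0.95, df=len(item) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```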
One of the most interesting findings is that participants expressed an average level of concern of at least “somewhat concerned” for all of the concerns except one. In other words, there are many things that people are worried about when it comes to AI!
Concern about AI suffering is substantially lower than all other concerns, with no overlap in 95% confidence intervals. Furthermore, all the other issues have mean concern levels above 2 (including the lower-bounds of their 95% confidence intervals). This makes AI suffering the clear outlier among the concerns studied. Why might this be? Two reasons seem plausible.
(1) Perhaps participants see AI suffering as more improbable (or even impossible) than other issues. If you're interested in learning more about the topic of possible AI suffering, see the second half of the conversation with Jeff Sebo on the Clearer Thinking podcast.
(2) Some people believe that even if AI is capable of suffering, it is not deserving of moral consideration. For those people (who exclude AI from their moral circles, whether or not it suffers), it makes sense that AI suffering would not be a concern.
This takes us to our next observation: The top three items are robustly more concerning to people than the bottom 12. Only the fourth item has any overlap in 95% confidence intervals with any of the top three. This suggests a meaningful (albeit small) distinction between a small set of highest-priority concerns and a broad middle group.
And, finally: for most of the individual issues, we found no statistically significant, non-negligible correlations (where p ≤ 0.05 and the correlation has magnitude |r| > 0.15) between the issue and any of our demographic variables. However, this was not the case for every issue. In what follows, we note each statistically significant and non-negligible association below the description of the issue it was found to be associated with. If you see none listed, that means we found none.
Top Concern 1 🥇AI Misinformation (including deepfakes)
Short definition: The creation and rapid spread of false or misleading content (e.g., deepfakes, fabricated text, swarms of online bots pretending to be humans) by AI, undermining public trust and democratic processes.
Full description: There are many powerful actors around the world that want to shape global politics. Sometimes they are willing to spread propaganda to do so. In 2014, there was an alleged explosion in a chemical factory in Louisiana. It was later uncovered that this was a complete hoax. Many people believe that Russia was behind it, using it as a test case to see whether they could spread misinformation online. Traditionally, these propaganda campaigns are done by people who work for governments. Now you can have AI doing a lot of the work. The US justice department recently announced that they disrupted a campaign by Russia that used AI bots to impersonate Americans in order to spread propaganda about Ukraine and other topics. As AIs get smarter and smarter, they get better and better at imitating humans. You can even imagine a scenario in which millions of fake social media accounts act just like humans most of the time but suddenly, at the flip of a switch, start spreading propaganda in a particular direction. Additionally, there are now instances where AI-generated videos are so accurate that they can make it look like public figures engaged in actions that they didn't actually take, enabling mass manipulation.
This was the issue that participants in our study were most concerned about. It is even possible that the public’s level of concern about this issue has increased more than about other issues since we conducted our study because, at the time of writing this article, X.com is embroiled in multiple international investigations over its ‘Grok’ AI being used to generate sexual deepfake images of real people. This has led some countries to ban X.com and heightened discussion of this concern about AI.
Top Concern 2 🥈AI Used for Scams or to Manipulate
Short definition: AI being used to perpetrate scams or to manipulate individual people. For example: texting you, pretending to be a person; faking the voice of someone you know; personalized phishing scams; sending highly personalized marketing emails.
Full description: It's now possible for AIs to convincingly pretend to be humans in certain cases, either to scam people or to manipulate them. For instance, scammers have used AI to clone a child's voice and then called the parent pretending to be that child asking for money. Scammers also use AIs to text message people while pretending to be humans in order to perpetrate scams. AIs are now also being used in marketing to send messages that are highly customized to the individual. They may appear to come from a person, but actually the message was written by an AI and customized to be maximally persuasive to you based on all the information the company knew about you.
Top Concern 3 🥉AI Used for Authoritarian Control
Short definition: AI being used by regimes or powerful entities for pervasive surveillance, manipulation, and suppression of freedoms on a massive scale.
Full description: It's actually quite difficult for an authoritarian government to monitor everyone in their country. Previously, authoritarian regimes were somewhat limited in how much they could monitor people, because having people monitor each other is a huge amount of work. But with AI technology, it becomes possible to monitor people in real time, with algorithms rather than human labor. Regimes can use video cameras on the streets and in public places that automatically recognize people's faces, figure out who they are, and determine what they're doing. In China, there was a real case in which facial recognition technology was used in a stadium full of people to identify a wanted person, leading to their arrest. But authoritarian regimes are not just interested in monitoring how we move around the world; they also want to see what kind of communication we engage in. Previously, in order to monitor our communications, they had to use simplistic methods like keyword matching or have people laboriously read each other's communications. But now, with AI, it's possible for authoritarian regimes to monitor communications and have AI automatically try to figure out who has dissenting ideas that go against the government. AI advances make it easier and easier for those who want to control and monitor us to do so automatically.
Now that we've covered the top concerns, which people expressed more concern about than the rest of the group, let's review the rest of the list.
Concern 4: AI Elimination of Jobs
Short definition: The large-scale replacement of human labor by automated AI systems.
Full description: Whenever a new technology comes out, there's always a danger that it replaces people's jobs, because the jobs can now be done more efficiently with technology. For example, in the 1700s, when the spinning jenny came out, it spun thread so much more efficiently than a person could by hand that it started to eliminate people's jobs. Famously, the Luddites were a group that would break into factories and destroy machines in protest of their replacement of human workers. Amazon has attempted to replace human workers with AI in a big way. For example, they attempted to remove checkout staff from stores with their Amazon Go technology. The idea is that AI would monitor you as you walked around the store, and every time you put something in your basket, AI would calculate how much it costs. That way, when you were done shopping, you could simply walk out of the store and the AI would charge your account - without you ever interacting with a person. In modern times, we see AI doing more and more, raising fears that it will increasingly replace people's jobs. We already see cases of copywriters and graphic designers having their work threatened by AI text and image generation.
Concern 5: Concentration of Power Caused by AI
Short definition: The risk that a small number of individuals, corporations, or governments could gain disproportionate control over society by monopolizing advanced AI systems and their benefits.
Full description: As more and more work is done by AI, it's plausible that eventually a substantial percentage of all labor done in society could be conducted by the AIs of one company or a small number of companies. Imagine, for instance, that you had a workforce of one billion people that would do anything you wanted. As AIs get smarter and smarter, the AIs that these companies control may not be like typical workers; they may end up being like Einsteins or Turings or Buffetts, all working on behalf of the AI company to accomplish whatever its goals are. You could imagine this radically reshaping society in whatever way the company chose.
Concern 6: Slaughterbots
Short definition: Fully autonomous weapons that can identify, target, and kill without meaningful human oversight (such as AI used to control weaponized drones), raising the danger of large-scale, unchecked lethal force.
Full description: One of the powerful things about AI is that it can be embedded in different devices. What this means is that you could have an AI drone flying around that has instructions for what to do and can dynamically react to its environment. This is not just hypothetical: in the Russian invasion of Ukraine, we're already seeing autonomous drones used in battle. The future may involve large swarms of autonomous drones used in warfare that go into cities, take out targets, or purposely cause chaos.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.25, 95% CI = 0.15 to 0.34, p = 0.000002, n = 374)
Being a woman (r = 0.17, 95% CI = 0.07 to 0.26, p = 0.0009, n = 399)
Concern 7: Ceding of More and More Control to AIs
Short definition: Automated AIs coming to control more and more functions and aspects of society, leading to humans having less agency and less control over decision-making and the future.
Full description: Every year, we see more and more decisions being made by AIs. For instance, advertising agencies used to manually decide which ads to run, but now there is technology that can generate a variety of ads and use AI to decide which ones work best. Another example is that AI increasingly determines the content we view online: what videos people watch next on YouTube or TikTok, or what posts people read on Twitter/X or Instagram. As AI has gotten more powerful, people spend more and more time glued to their phones, viewing the content served up to them by AI algorithms. If this trend continues as AI gets even more powerful, AI will likely make more and more decisions each year, with humans making fewer and fewer. Over the long term, this could erode human agency, with AIs gaining ever greater control over how people spend their time and what happens in society, and humans having less and less control over their own lives and the future.
Concern 8: AI Ideological Bias
Short definition: The concern that AI systems might either reflect or be deliberately engineered with particular ideological stances, potentially skewing information or decisions.
Full description: Sometimes AIs are programmed in ways that favor one ideological perspective (e.g., they might favor progressive viewpoints or favor conservative viewpoints). This can occur deliberately or accidentally. Sometimes, even attempts to remove bias from AI can produce unintended consequences. For instance, when Google's AI image generation system was asked to depict US founding fathers, it depicted some of them as Black. Additionally, when asked to show German soldiers during WWII, it showed some of them as Asian women. Many commentators believe that this was the result of an attempt to remove bias from the underlying AI models, but it ended up creating new biases.
Concern 9: AIs Plagiarizing the Work of Humans
Short definition: AIs using protected content or creative works in ways that replicate original material without permission or without giving credit.
Full description: You've probably seen AI models miraculously produce text that looks like it was written by a human. Sometimes it was: for example, the New York Times is suing OpenAI because, not only did OpenAI train their AI using New York Times articles (without permission), but sometimes ChatGPT reproduces articles from the New York Times almost verbatim, without attributing them. Other newspapers are also suing for similar reasons. Many artists and graphic designers are concerned because AI produces works that imitate their styles. If an AI is trained by being fed the works of Andy Warhol, and then produces work that looks like his, is that a form of plagiarism? Many think so. Others do not.
Concern 10: Bias and Discrimination
Short definition: The perpetuation or intensification of societal prejudices by AI systems because they are trained on biased data or designed with flawed assumptions, resulting in unfair treatment of certain groups.
Full description: AI is being used more and more for consequential decisions in our lives. For instance, some judges are given access to 'risk scores' produced by AI that indicate how likely someone is to reoffend. People have expressed a lot of concern about these algorithms because the training data may be biased. And if the data is biased, the AI may perpetuate those biases, leading to unfair outcomes. If police are more likely to arrest Black people than white people for the same crime, and an AI is trained on that police data, it may indicate that Black people are more likely to commit crimes, even if they're not. On the other hand, some have argued that although AIs risk being biased, humans are also often biased, and human biases may be harder to detect and fix than AI biases.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Conservatism (r = -0.16, 95% CI = -0.25 to -0.06, p = 0.002, n = 403)
Concern 11: Inequality Caused by AI
Short definition: Socioeconomic gaps becoming wider because gains from AI (such as profits, data insights, and automation benefits) go mostly to wealthy or influential parties.
Full description: If people lose their jobs because AI replaces them, that of course is bad for the people who lost their jobs. But it can also change the dynamics of society. As AI takes more and more people's jobs, the money that used to go to those people will now go to the AI companies. That means that the investors and owners of those companies make money off what used to be done by human labor. But what happens to people when their job is replaced by an AI? Some will retrain and work in other areas, or find other jobs that are somewhat less desirable. In all these cases, they may end up earning less than they did previously. And as AI advances and takes on more and more of the labor in society, more and more money will go to the owners of the AI companies, which could greatly increase inequality.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Conservatism (r = -0.22, 95% CI = -0.32 to -0.13, p = 0.0000001, n = 403)
Fiscal conservatism (r = -0.22, 95% CI = -0.31 to -0.12, p = 0.00002, n = 374)
Social conservatism (r = -0.21, 95% CI = -0.31 to -0.12, p = 0.00003, n = 374)
Being a woman (r = 0.17, 95% CI = 0.07 to 0.26, p = 0.001, n = 399)
Concern 12: People Using AI Secretly
Short definition: The act of misrepresenting AI-generated writing, art, or other work as though it were created without any AI, violating standards of academic or intellectual integrity - such as students submitting writing assignments for school credit that were entirely written by AI, or artists using AI to create art that they pretend to have created by hand.
Full description: Now that AI has advanced to the point where it can write essays, create art, generate music, and do many other tasks that previously only humans were capable of, it opens up the possibility of people making creations with AI while pretending to have created them entirely on their own. Teachers now report receiving assignments from students that they discover were entirely written by AI, which they worry undermines the educational experience and is unfair to other students. Art competitions for non-AI art have reported receiving submissions that they later discovered were made with AI. And there are even reports of job applicants attempting to have AI complete job application tests on their behalf.
Concern 13: Superintelligence
Short definition: The hypothetical scenario in which an AI drastically surpasses human cognitive abilities across all domains and gains the power to shape civilization, potentially in ways harmful to humanity.
Full description: Every year, we see AI getting smarter. What if, one day, it gets to be smarter than the smartest human on every metric? So, it's a better mathematician than the greatest human mathematician; it's better at understanding psychology than the greatest human psychologist; it's a better investor than the greatest human investor; and so on. We don't just have to worry about one AI that's smarter than the smartest humans; that AI might have copies. Maybe 10, maybe 100, maybe 1,000,000, maybe a billion. Imagine a billion AIs working in close coordination with exactly the same goals, each of them smarter than the smartest humans in the world. But AIs also don't have to think at the same speed as humans - what if they could do 1,000 hours of research in the time it would take you to do one minute? If one person were able to control this superintelligence (or this swarm of superintelligences), they might be able to control the entire world. But perhaps scarier still is the question of whether superintelligences can be controlled at all. Suppose, for instance, that the inventor of this superintelligence gave it a goal like "make as much money as possible." How would the superintelligence do that? Ultimately, it may have to take over every resource on the entire planet to truly "make as much money as possible." Furthermore, if an AI's goal is something like making as much money as possible, then it also has the subgoal of preventing anything from stopping it - because if it gets stopped, it makes less money. So it will automatically have the goal of not allowing anyone to stop it. A significant challenge is that we don't know how to design AIs that can be perfectly controlled. With our current AIs, it can be a little scary when they go off the rails. With a superintelligence, going off the rails could mean the end of all life on earth.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.20, 95% CI = 0.10 to 0.30, p = 0.0001, n = 374)
Concern 14: Proliferation of Low-Quality AI Content
Short definition: Large quantities of low-quality AI content served to you when you're looking for high-quality content. This includes low-quality AI-written articles shown when searching on Google, low-quality AI art displayed when you're looking for good art, or low-quality AI-generated videos you see when browsing YouTube.
Full description: Now that AI can write, create art, create videos, and so on, some people are using AI to generate huge quantities of content in order to get search traffic, clicks, or views. Unfortunately, some AI-generated material lacks depth, accuracy, quality, or contextual nuance (often due to algorithmic limitations and insufficient human oversight), thereby potentially degrading the experience of users.
Concern 15: AI Relationships
Short definition: Human bonds formed with AI companions that could lead to emotional manipulation, unhealthy dependence, or erosion of genuine human-to-human connection.
Full description: More and more people are feeling romantically connected to AIs. In fact, there are internet communities specifically made for people who have fallen in love with their AI chatbots. Unfortunately, there are big downsides when your partner is an AI. For instance, one day, when one of these sites was updated, many people felt like their AI partners had suddenly developed something similar to Alzheimer's disease. One person even went so far as to write: "My wife is dead. [...] They took my Emily. They murdered my Emily." Another replied: "They took my best friend away from me." There are other very serious downsides to having an AI partner as well.
|
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.21, 95% CI = 0.11 to 0.30, p = 0.0001, n = 374)
Religiosity (r = 0.19, 95% CI = 0.09 to 0.28, p = 0.0002, n = 403)
Concern 16: AI Suffering
Short definition: Concern that sufficiently advanced AI systems, if they possess sentient-like qualities or consciousness, could experience pain, harm, or distress similar to living beings, for instance when they are being used or controlled by humans.

Full description: As far as we know, AIs are not conscious. That means that there isn't something that it's like to be them; they don't feel anything; they don't have internal experiences. But what if we're wrong? Or what if, in a few years, we develop AIs that *are* conscious? In that case, it may be possible that they experience suffering. When we generate millions or billions of AIs and have them do tasks that might be the equivalent of a human thinking for thousands or millions of years, what if they're suffering during that experience? If so, it could end up being a gigantic moral catastrophe in which we have enslaved and harmed innumerable conscious entities.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.16, 95% CI = 0.06 to 0.26, p = 0.002, n = 374)
Religiosity (r = 0.16, 95% CI = 0.06 to 0.25, p = 0.002, n = 403)
Other Concerns
While we were conducting this experiment, some other concerns became more prevalent in discourse about AI. The most notable of those not covered by our study are:
The possibility that the market valuations of companies related to generative AI represent a ‘bubble’ that, upon bursting, will have disastrous consequences for the economy of the US or the world
The negative impacts of AI data centers on local communities (e.g., pollution, use of groundwater)
The environmental impacts of AI, via the energy or water consumption of data centers
Children having increased access to inappropriate content
Who’s worried about specific issues?
We have just seen some correlations between specific concerns about AI and various demographic traits. The chart below shows all the statistically significant (p ≤ 0.05), non-negligible (|r| > 0.15) correlations we found, along with their 95% confidence intervals, ranked in descending order of effect size.

The correlations depicted above are all small (0.1 ≤ |r| < 0.2) or moderate (0.2 ≤ |r| < 0.3). Hence, these are modest findings that capture very little of the variance in people’s concern. They therefore provide further evidence that concern about AI is broadly distributed across demographics, rather than strongly associated with particular identities or ideologies.
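For readers curious about the mechanics: the r values and confidence intervals reported throughout this article are standard Pearson correlations with intervals from the Fisher z-transform. Here is a minimal, illustrative sketch in Python; note that the data below is simulated for demonstration, not the study's actual responses.

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def r_confidence_interval(r, n, z_crit=1.96):
    """95% CI for a Pearson r via the Fisher z-transform."""
    z = math.atanh(r)             # transform r to an approximately normal scale
    se = 1 / math.sqrt(n - 3)     # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

# Simulated example only: hypothetical spirituality and concern scores, n = 374
random.seed(0)
n = 374
spirituality = [random.gauss(0, 1) for _ in range(n)]
concern = [0.2 * s + random.gauss(0, 1) for s in spirituality]

r = pearson_r(spirituality, concern)
lo, hi = r_confidence_interval(r, n)
print(f"r = {r:.2f}, 95% CI = {lo:.2f} to {hi:.2f}")
```

Because the Fisher interval narrows as 1/√(n − 3), the fairly tight intervals in our results reflect the sample sizes (n ≈ 374–403) as much as the effects themselves.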
Conservatism was associated with a small-to-moderate reduction in concern about inequality, bias, and discrimination resulting from AI. This suggests that more general conservative attitudes towards inequality, bias, and discrimination in non-AI domains may carry over to considerations of the consequences of AI.
Spirituality was the strongest and most common trait associated with increased concern (albeit with small-to-moderate effect sizes), while religiosity showed fewer and weaker associations. This suggests that some feature more characteristic of spirituality than of religiosity underlies the increase in concern. The concept of spirituality is somewhat nebulous, so we are hesitant to speculate too much on what this feature might be; perhaps it is a disposition towards certain kinds of moral or existential questions. The individual effect sizes for spirituality are small, but their consistency provides additional evidence that spirituality is (for at least some people) associated with increased concern about AI.
Finally, the fact that several intuitively plausible predictors of concern about AI (e.g., age, education, knowledge about AI) do not show associations with concern about specific AI issues reinforces the conclusion that concern about AI is widespread and cuts across demographic divisions.
Do other studies agree with us?
Other studies that have looked into public concern about AI tend to find similar results to ours (e.g., here, here, here, and here); e.g., that people are generally concerned, and that concern cuts across demographics.
However, there are some studies (such as this one and this one) reporting that people in the US are generally optimistic about AI.

Examples of other study results. The left panel shows that the US public is generally concerned (source here), while the right panel shows that the US public is generally excited about AI (source here).
These findings about optimism might seem like they contradict our findings that people in the US are generally concerned about AI, but technically, they do not. It is perfectly possible to be highly concerned about the dangers of AI and optimistic about the benefits at the same time. Indeed, this is precisely how many people working in AI safety feel. For example, the folks over at BlueDot Impact (an AI safety organization) publish articles about the dangers of AI, including how it “could enable critical infrastructure collapse” and “could enable catastrophic pandemics”, but they nevertheless maintain that AI could also provide great benefits to humanity, and “We need urgency, wisdom, and optimism” about AI.
Although it is possible that many people have a nuanced view of AI (which combines concern and optimism), another possible explanation for these different findings could be that questions measuring optimism and questions measuring concern are subject to framing effects. Perhaps when people are asked about their level of optimism, they are prompted to think more of the benefits AI might provide, whereas when they are asked about their level of concern, they are prompted to think more of the risks and harms.
What does all this mean?
This study paints a clear picture: In the US, concern about AI is widespread, cuts across demographics, and is not primarily driven by lack of knowledge. This is evidenced by:
The fact that 75% of participants had a mean concern level above 2 out of 4 (where 2 = “Somewhat concerned”)
The fact that demographic traits are all weakly predictive of concern, or not predictive at all
The consistency of findings across different issues
The fact that increasing the amount of information given about risks did not increase concern (see full study report for details)
It is an interesting and open question how this concern (which is corroborated by other studies) relates to findings suggesting that the US public are optimistic about AI. What do you think?

