Study Report: What concerns people about AI?
- Travis M.
Note: This is a longer and more technical report of our study into public concerns about AI. If you want to see the shorter, more layperson-friendly version, click here.
A lot of people are worried about AI. What are their worries? How worried are they? Are some demographics more worried than others? We ran a study to find out.
In this article, we explain 16 concerns about AI that you might find it valuable to know about. We discuss, based on our data (collected in October 2025), how worried people in the US are about each concern.
To whet your appetite, here are some questions that our study offers insights into. Can you predict what we found before we tell you the answers?
Are conservatives more, less, or equally likely to be concerned about AI than progressives?
What about gender - are men or women more likely to be concerned?
Does AI-related knowledge affect how concerned people are?
What are people most concerned about when it comes to AI?
How low or high is the general level of concern about AI in the US population?
Have you made your predictions? Okay, let’s get into the study.
How we studied AI concern
We started by scouring the internet for expressions of concern about AI and compiling a list of common concerns, based on what we found (as well as our own background experience of hearing people express concerns). The potential concerns about AI that we identified are:
Proliferation of low-quality AI content (i.e., ‘AI slop’)
AIs plagiarising the work of humans (e.g., remixing the work of artists without compensation)
AI elimination of jobs
AI misinformation (including deepfakes)
People using AI but pretending they didn't (e.g., to write school assignments)
AI used for authoritarian control (e.g., for monitoring and punishing populations based on behavior)
Relationships (often romantic) people have with AIs
Inequality caused by AI (such as by creating concentration of wealth)
AI ideological bias (e.g., favoritism toward progressive or conservative viewpoints)
AI bias and discrimination (e.g., by perpetuating unfair unequal treatment of different groups)
Concentration of power caused by AI (e.g., making those who control the most advanced AIs much more powerful than everyone else)
AI used for scams or to manipulate individuals (e.g., AI bots designed to seem like specific humans in order to trick people)
Ceding of more and more control to AIs (e.g., making major decisions impacting millions of people that humans no longer make)
Slaughterbots (i.e., weaponized AI drones)
Superintelligence (i.e., AI that outperforms the ability of humans in essentially all domains)
AI itself experiencing suffering when we train or run it.
Each of these concerns is described and explored in more detail below.
While we were conducting this experiment (in October of 2025), some other concerns became more prevalent in discourse about AI but were not included in our study. The most notable of these are:
The possibility that the monetary values of companies related to generative AI represent a ‘bubble’ that, upon bursting, will have disastrous consequences on the economy of the US or the world
The negative impacts of AI data centers on local communities (e.g., pollution, use of ground water)
The environmental impacts of AI, via the energy or water consumption of data centers
Children having increased access to inappropriate content
We recruited 403 participants through our participant recruitment platform, Positly.com, and started by asking them some general questions about their level of knowledge on the topic of AI and their overall concerns about its impact on their lives and society. After that, we showed them information about the 16 potential AI-related concerns we identified (one potential concern at a time, in a random order). For this, we assigned each participant randomly to one of two groups:
Short Definitions: 200 participants were shown just a short sentence defining each of the 16 concerns
Full Descriptions: 203 participants were shown the same short sentence definitions as the Short Definitions group and a longer description of each concern, containing examples. (We’ve included all of the full descriptions in this article, below.)
For each potential concern, participants were asked to indicate their level of actual concern about it on a 5-point Likert scale from “Not at all concerned” (which was assigned the value 0) to “Extremely concerned” (which was assigned the value 4).
Finally, at the end of the study, participants were asked again about their general levels of concern about AI (in their own lives and for society), to see whether participating in the study and seeing information about so many potential concerns changed their level of concern, and then they were asked some demographic questions.
Now, let’s dive into the results! We'll start with results about overall concern (before diving into the 16 specific concerns).
Are people more concerned about society or themselves?
When participants began our study, we asked them how concerned they are about AI’s effects on their lives and on society. Here’s what we found:

When asked about their own lives, participants were almost perfectly divided between low concern (36% chose one of the two lowest options) and high concern (35% chose one of the highest two). However, when asked about society, the distribution looked quite different. Fewer than a quarter of people (21%) expressed low concern, while a clear majority (55%) expressed high concern.
Looking at individual participants’ answers, we found a strong asymmetry: 43% of participants reported more concern about AI’s effects on society than on their own lives, but the reverse was very rare - only 2.4% of participants were more concerned about themselves than society. The remaining 54% reported equal levels of concern.
On average, concern about societal effects was half a point (0.52) greater than concern about personal effects (on the scale from 0 = "Not at all concerned" to 4 = "Extremely concerned"). That's about 13% of the scale's maximum. A nonparametric paired test (Wilcoxon signed-rank) confirmed that this difference was statistically significant (p < 0.001).
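For readers curious about the mechanics, a Wilcoxon signed-rank test compares paired ratings without assuming normality. Here is a minimal sketch using scipy on simulated data (not our actual dataset; the response distribution below is assumed purely for illustration):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 403

# Hypothetical 0-4 Likert responses: societal concern tends to exceed personal concern
personal = rng.integers(0, 5, size=n)
societal = np.clip(personal + rng.choice([0, 0, 1], size=n), 0, 4)

# Paired nonparametric test; zero differences are dropped by default
stat, p = wilcoxon(societal, personal)
mean_gap = (societal - personal).mean()
print(f"mean gap = {mean_gap:.2f}, p = {p:.2g}")
```

Because the test uses only the signs and ranks of the paired differences, it is robust to the ordinal nature of Likert data.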
Do AI concerns rise and fall together?
Given the diversity of topics covered by the 16 different concerns we asked about, you might think that it doesn't make sense to talk about one overall concern level. For example, if people who are very concerned about job losses aren't also typically concerned about AI plagiarism, then maybe the idea of a general ‘overall’ level of concern isn’t applicable. However, two different statistical tests provide evidence to the contrary.
1. When we conducted a factor analysis on the results from all 16 of our concern questions, we found that there is one strongly dominant factor. Here is the ‘scree’ plot (as it's called). As you can see, the first bar (shown in blue) is far higher than the others, indicating one dominant AI concern factor that accounted for about 50% of the variance in responses across all 16 potential concerns.

2. Pairwise correlations between all 16 of the different concerns also provide evidence of a general level of concern. Correlations between concerns are all positive (mean correlation (r) = 0.48) and all statistically significant (p < 0.002 for each correlation individually), as shown in the chart below.

Thus, we infer the existence of an underlying construct approximately captured by the notion of a participant’s ‘overall’ level of concern about AI. In order to approximate each participant’s overall level of concern, we calculated (for each person) the mean of their answers to the 16 specific concerns. Here’s what the distribution of overall (mean) concern looks like:

Image shows the number of participants whose average concern about AI (calculated as the mean of their responses to the 16 issues) falls within each range of values.
As you can see, the distribution of mean concern scores is skewed towards higher values: 75% of participants have mean scores above 2 ("somewhat concerned") and the median overall concern score is 2.63 (between "somewhat concerned" and "moderately concerned"). This, in combination with the results from the previous section, paints the picture that the public is, overall, concerned about the effects of AI, with few people expressing no concern.
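To make the computations in this section concrete, here is a minimal sketch on simulated data (not our actual dataset; the single-factor structure and loadings are assumed for illustration) of the three steps: the pairwise correlations, the eigenvalues behind a scree plot, and each participant's overall (mean) concern score:

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_concerns = 403, 16

# Simulate 0-4 Likert-style responses driven by one shared 'overall concern' factor
general = rng.normal(size=(n_participants, 1))
latent = 0.7 * general + 0.7 * rng.normal(size=(n_participants, n_concerns))
responses = np.clip(np.round(latent + 2), 0, 4)    # discretize onto the 0-4 scale

corr = np.corrcoef(responses, rowvar=False)        # 16x16 pairwise correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]           # the values behind a scree plot
first_share = eigvals[0] / eigvals.sum()           # variance captured by factor 1
mean_r = corr[~np.eye(n_concerns, dtype=bool)].mean()

overall = responses.mean(axis=1)                   # one overall score per participant
print(f"mean pairwise r = {mean_r:.2f}, first-factor share = {first_share:.2f}, "
      f"median overall = {np.median(overall):.2f}")
```

When one factor dominates, the first eigenvalue towers over the rest, which is exactly the shape of the scree plot above.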
How does concern about AI vary with demographics?
We wanted to know whether certain demographics are more concerned about AI than others. To determine this, we fitted a regression model to see whether any of the following demographic traits were predictive of overall concern (calculated as the mean concern across all 16 potential concerns):
Age
Gender
Education
How much one knows about AI
How conservative or progressive one is
How fiscally conservative or progressive one is
How socially conservative or progressive one is
Class in society
Household income
How urban or rural one's area is
Religiosity
Spirituality
Of all the variables we considered, only being a woman (β = 0.25, p = 0.008) and spirituality (β = 0.12, p = 0.003) had statistically significant, non-negligible effects: being a woman and being spiritual are very slightly predictive of more concern about AI. However, these effect sizes are modest, and the model captured only 8% of the variance in people's mean scores (R² = 0.08). In combination with the distribution of mean concern scores (see previous section), this provides evidence that concern about AI is widespread and cuts across all of the demographic divisions above.
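As a rough sketch of this style of analysis (not our actual code or data), here is an OLS regression on simulated predictors; the variable names, effect sizes, and noise level are illustrative assumptions chosen to mimic the small effects described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 403

# Hypothetical demographic predictors (names are illustrative, not our raw variables)
woman = rng.integers(0, 2, size=n).astype(float)
spirituality = rng.normal(size=n)
age = rng.normal(size=n)

# Outcome weakly driven by two predictors, mimicking small real-world effects
y = 2.3 + 0.25 * woman + 0.12 * spirituality + rng.normal(scale=0.8, size=n)

X = np.column_stack([np.ones(n), woman, spirituality, age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS coefficients (intercept first)
r2 = 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"beta(woman) = {beta[1]:.2f}, R^2 = {r2:.2f}")
```

Even when individual coefficients are significant, a low R² like this indicates the predictors explain little of the person-to-person variation.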
Two facts about this surprised us the most:
1. Concern about AI has not been politicized. Yet. The fact that none of the political alignment results were predictive of overall concern about AI is some evidence that this is currently a non-partisan issue. This is an especially good thing because, when issues become politicized, it tends to become harder to make progress on them. Typically, if an issue is associated with one political ‘side’, the other side will want to fight against proposals to solve it (or at least not want to say the other side is right about the issue). We hope that not being politicized means that the 16 specific issues discussed below are more tractable than they otherwise would be.
While overall concern was not linked to being progressive or conservative, two specific issues did correlate with conservatism: inequality caused by AI (r = -0.22, p < 0.001, n = 403) and bias and discrimination (r = -0.16, p = 0.002, n = 403). This means that, in our sample, being more conservative was very slightly associated with being less concerned about those two issues (and not associated with any of the other 14). However, these effect sizes are modest - particularly the latter one.
2. How much you know about AI is not predictive of how concerned you are. You might think that greater knowledge about AI would either increase concern (through greater knowledge of the risks) or decrease it (through greater knowledge of the limitations). However, our participants’ self-reported level of knowledge about AI was not predictive of concern in our linear regression model.
Linear regressions look for linear relationships. This means it is possible that a non-linear relationship exists between knowledge and concern, but is not detectable using the method above. Since it is a priori plausible that the relationship between knowledge and concern about AI would be non-linear (e.g., because expertise above a certain threshold causes a sharp spike or drop in concern), we tested for a monotonic relationship using Spearman’s rank correlation, and we tested for a curvilinear relationship using a quadratic regression model. Both analyses produced statistically insignificant results with negligible effect sizes (Spearman’s: ρ = -0.06, p = 0.24, n = 403; quadratic regression: R² = 0.003; quadratic term β = 0.03, p = 0.44). These results provide no evidence of a relationship between AI-related knowledge and concern about AI.
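Both follow-up checks can be sketched as follows, on simulated data in which knowledge and concern are generated independently (so neither test should find a relationship); the data and variable names are illustrative, not our dataset:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 403
knowledge = rng.integers(0, 5, size=n)            # hypothetical self-rated AI knowledge
concern = rng.normal(2.5, 0.8, size=n)            # concern unrelated to knowledge

rho, p = spearmanr(knowledge, concern)            # any monotonic relationship?

# Quadratic regression: concern ~ knowledge + knowledge^2 (curvilinear check)
X = np.column_stack([np.ones(n), knowledge, knowledge ** 2])
beta, *_ = np.linalg.lstsq(X, concern, rcond=None)
r2 = 1 - ((concern - X @ beta) ** 2).sum() / ((concern - concern.mean()) ** 2).sum()
print(f"Spearman rho = {rho:.2f} (p = {p:.2f}), quadratic R^2 = {r2:.3f}")
```

Spearman's rho detects any monotonic trend (not just linear), while the squared term in the regression picks up U-shaped or inverted-U patterns.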
There is at least one limitation worth noting: our ability to test for non-linear relationships is limited, because our sample contained very few people at the extreme ends. Only 1 person (out of 403) reported having “no knowledge” about AI, and only 10 people reported being experts (4 “world class expert[s]” and 6 “expert[s] but not world class”). Effects confined to these extreme categories would therefore be difficult to detect, so we can't rule out the possibility that, for instance, top experts hold different views about AI than the broader public.

Image shows mean concern about AI (calculated as the mean of concerns about the 16 issues) plotted against the self-reported level of knowledge about AI. Black markers indicate the mean at each level of AI knowledge, with bars showing the 95% confidence intervals. No clear linear or non-linear relationship is apparent. No bars are shown for when AI knowledge is ‘No knowledge’ due to limited data.
What Are People Worried About?
We’ve broken this section down into subsections - one for each of the 16 concerns we explored. We’ll address them in order of how concerned people are about them (on average), going from most concerned to least concerned. That means we’ll be addressing them in the order shown in the image below. As you read through the potential concerns about AI, it may be valuable to ask yourself, which of these are your biggest concerns, and which do you think are not that concerning?

This ordering is itself interesting. Many adjacent items have overlapping 95% confidence intervals (represented by the black bar at the end of each blue bar). By design, if we drew many random samples and computed a 95% confidence interval from each, about 95% of those intervals would contain the true population mean (the mean we would get if the entire population were measured). So we should be careful not to read too much into differences in rank between items with substantially overlapping 95% confidence intervals. Our study does provide evidence for the ordering presented above, but items with substantially overlapping confidence intervals are less robustly ordered than those without such overlap. That being said, some patterns appear robust. Let's discuss a couple.
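For readers who want the mechanics: a 95% confidence interval for a mean like those in the chart can be computed from a sample as below. The ratings here are simulated, not our data; scipy's t distribution supplies the critical value:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(5)
responses = rng.integers(0, 5, size=403).astype(float)  # hypothetical 0-4 ratings

n = responses.size
mean = responses.mean()
sem = responses.std(ddof=1) / np.sqrt(n)            # standard error of the mean
half = t.ppf(0.975, df=n - 1) * sem                 # t-based 95% half-width
print(f"mean = {mean:.2f}, 95% CI = [{mean - half:.2f}, {mean + half:.2f}]")
```

With n ≈ 400 the t critical value is close to the normal value of 1.96, so the interval is roughly the mean plus or minus two standard errors.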
One of the most interesting findings is that participants expressed an average level of concern of at least “somewhat concerned” for all of the concerns, except for one. In other words, people are worried about many potential concerns from AI! It's also interesting to note the one potential concern that they were less worried about.
Concern about AI suffering is substantially lower than all other concerns, with no overlap in 95% confidence intervals. Furthermore, all the other issues have mean concern levels above 2 (including the lower-bounds of their 95% confidence intervals). This makes AI suffering the clear outlier among the concerns studied. Why might this be? Two reasons seem plausible.
(1) Perhaps participants see AI suffering as more improbable (or even impossible) than other issues. Of all the issues we studied, AI suffering and superintelligence seem most tied to highly speculative future developments. Some very visible public figures, such as AI industry leaders, have used public platforms to talk frequently about the alleged near-possibility of superintelligence (and we have explored some of the relevant arguments in a recent newsletter, which you can read here), but no such discussion is happening about AI suffering. Thus, one candidate explanation for why people seem much less concerned about AI suffering is that it seems highly speculative and no high-profile figures are telling us to take it seriously. If you're interested in learning more about possible AI suffering, see the second half of the conversation with Jeff Sebo on the Clearer Thinking podcast.
(2) Perhaps participants would not include AIs within their ‘moral circles’. Your moral circle is the set of all the entities you think are deserving of moral consideration - in other words, all the entities you weigh in your moral calculations or moral judgments. For example, if non-human animals are in your moral circle, then you think they matter morally, and you probably factor things like their interests or their capacity for suffering into your moral decision-making. Some people believe that even if AI is capable of suffering, it is not deserving of moral consideration. For those people (who exclude AI from their moral circles, whether or not it suffers), it makes sense that AI suffering would not be of concern.
This takes us to our next observation...
The top three items are robustly more concerning to people than the bottom 12. Only the fourth item has any overlap in 95% confidence intervals with any of the top three. This suggests a meaningful (albeit small) distinction between a small set of highest-priority concerns and a broad middle group.
For most of the individual issues, we found no statistically significant, non-negligible correlations (where p ≤ 0.05 and the correlation has magnitude |r| > 0.15) between the issue and any of our demographic variables. However, this was not the case for every issue. In what follows, we note each statistically significant, non-negligible association below the description of the issue it was found to be associated with.
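The screening rule just described (report only associations with p ≤ 0.05 and |r| > 0.15) can be sketched like this, using scipy's pearsonr on hypothetical demographic variables; the names and data are illustrative only, not our dataset:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n = 403
# Hypothetical demographic variables (illustrative names only)
demographics = {"age": rng.normal(size=n), "conservatism": rng.normal(size=n)}
issue_concern = rng.integers(0, 5, size=n).astype(float)

# Keep only associations that clear both thresholds used in this article
flagged = {}
for name, values in demographics.items():
    r, p = pearsonr(values, issue_concern)
    if p <= 0.05 and abs(r) > 0.15:
        flagged[name] = (r, p)
print(flagged)   # with independent noise like this, typically empty
```

Requiring a minimum |r| as well as significance helps avoid highlighting effects that are statistically detectable but practically negligible at n = 403.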
Top Concern 1: AI Misinformation (including deepfakes)
Short definition: The creation and rapid spread of false or misleading content (e.g., deepfakes, fabricated text, swarms of online bots pretending to be humans) by AI, undermining public trust and democratic processes.
Full description: There are many powerful actors around the world that want to shape global politics, and sometimes they are willing to spread propaganda to do so. In 2014, there was an alleged explosion at a chemical factory in Louisiana. It was later uncovered that this was a complete hoax; many people believe that Russia was behind it, using it as a test case to see whether it could spread misinformation online. Traditionally, these propaganda campaigns are run by people who work for governments. Now AI can do a lot of the work. The US Justice Department recently announced that it disrupted a Russian campaign that used AI bots to impersonate Americans in order to spread propaganda about Ukraine and other topics. As AIs get smarter and smarter, they get better and better at imitating humans. You can even imagine a scenario in which millions of fake social media accounts act just like humans most of the time but, at the flip of a switch, suddenly start spreading propaganda in a particular direction. Additionally, there are now instances where AI-generated videos are so accurate that they can make it look like public figures took actions that they didn't, enabling mass manipulation.
This was the issue that participants in our study were most concerned about. It is even possible that the public’s level of concern about this issue has increased more than about other issues since we conducted our study because, at the time of writing this article, X.com is embroiled in multiple international investigations over its ‘Grok’ AI being used to generate sexual deepfake images of real people. This has led some countries to ban X.com and heightened discussion of this concern about AI.
Top Concern 2: AI Used for Scams or to Manipulate
Short definition: AI being used to perpetrate scams or to manipulate individual people. For example: texting you, pretending to be a person; faking the voice of someone you know; personalized phishing scams; sending highly personalized marketing emails.
Full description: It's now possible for AIs to convincingly pretend to be humans in certain cases, either to scam people or to manipulate them. For instance, scammers have used AI to clone a child's voice and then called the parents, pretending to be the child asking for money. Scammers also use AIs to send text messages pretending to be from humans. AIs are now also being used in marketing to send messages that are highly customized to the individual: a message may appear to come from a person, but it was actually written by an AI and customized to be maximally persuasive to you based on all the information the company knew about you.
Top Concern 3: AI Used for Authoritarian Control
Short definition: AI being used by regimes or powerful entities for pervasive surveillance, manipulation, and suppression of freedoms on a massive scale.
Full description: It's actually quite difficult for an authoritarian government to monitor everyone in its country. Previously, authoritarian regimes were somewhat limited in how much they could monitor people, because having people monitor each other is a huge amount of work. But with AI technology, it becomes possible to monitor people in real time, with algorithms rather than human labor. Regimes can use video cameras on the streets and in public places that automatically recognize people's faces, figure out who they are, and figure out what they're doing. In China, there was a real case in which facial recognition technology was used to identify a person in a stadium full of people, leading to his arrest. But authoritarian regimes are not just interested in monitoring how we move around the world; they also want to see how we communicate. Previously, monitoring communications required simplistic methods like keyword searches or people laboriously reading each other's messages. Now, with AI, authoritarian regimes can monitor communications and have AI automatically try to figure out who holds dissenting ideas that go against the government. AI advances make it easier and easier for those who want to control and monitor us to do so automatically.
Now that we've covered the top three concerns - the ones people expressed more concern about than the rest - let's review the rest of the list.
Concern 4: AI Elimination of Jobs
Short definition: The large-scale replacement of human labor by automated AI systems.
Full description: Whenever a new technology comes out, there's a danger that it replaces people's jobs, because those jobs can now be done more efficiently with technology. For example, in the 1700s, when the spinning jenny came out, it spun thread so much more efficiently than a person could by hand that it started to eliminate jobs. Famously, the Luddites were a group that would break into factories and destroy machines in protest of their replacement of human workers. Amazon has attempted to replace human workers with AI in a big way. For example, it attempted to replace checkout workers in stores with its Amazon Go technology: AI would monitor you as you walked around the store, and every time you put something in your basket, AI would calculate how much it cost. That way, when you were done shopping, you could simply walk out of the store and the AI would charge your account, without you ever interacting with a person. Today, we see AI doing more and more, raising fears that it will increasingly replace people's jobs. We already see cases of copywriters and graphic designers having their work threatened by AI text and image generation.
Concern 5: Concentration of Power Caused by AI
Short definition: The risk that a small number of individuals, corporations, or governments could gain disproportionate control over society by monopolizing advanced AI systems and their benefits.
Full description: As more and more work is done by AI, it's plausible that eventually a substantial percentage of all labor in society could be conducted by the AIs of one company or a small number of companies. Imagine, for instance, that you had a workforce of one billion people that would do anything you wanted. As AIs get smarter and smarter, the AIs these companies control may not be like typical workers; they may end up being like Einsteins or Turings or Buffetts, all working on behalf of the AI company to accomplish whatever its goals are. You could imagine this radically reshaping society in whatever way the company chose.
Concern 6: Slaughterbots
Short definition: Fully autonomous weapons that can identify, target, and kill without meaningful human oversight (such as AI used to control weaponized drones), raising the danger of large-scale, unchecked lethal force.
Full description: One of the powerful things about AI is that it can be embedded in different devices. What this means is that you could have an AI drone flying around that has instructions for what to do and can dynamically react to its environment. This is not just hypothetical. In the Russian invasion of Ukraine, we're already seeing autonomous drones used in battle. The future may involve large swarms of autonomous drones used in warfare that go into cities, take out targets, or purposely cause chaos.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.25, 95% CI = 0.15 to 0.34, p = 0.000002, n = 374)
Being a woman (r = 0.17, 95% CI = 0.07 to 0.26, p = 0.0009, n = 399)
Concern 7: Ceding of More and More Control to AIs
Short definition: Automated AIs coming to control more and more functions and aspects of society, leading to humans having less agency and less control over decision-making and the future.
Full description: Every year, more and more decisions are made by AIs. For instance, advertising agencies used to manually decide which ads to run, but now there is technology that can generate a variety of ads and use AI to decide which ones work best. AI is also increasingly determining the content we view online: which videos people watch next on YouTube or TikTok, or which posts people read on Twitter/X or Instagram. As AI has grown more powerful, people spend more and more time glued to their phones viewing the content served up to them by AI algorithms. If this trend continues as AI gets even more powerful, AI will likely make more and more decisions each year, with humans making fewer and fewer. Over the long term, this may erode human agency, with AIs gaining greater and greater control over how people spend their time and what happens in society, and humans having less and less control over their own lives and the future.
Concern 8: AI Ideological Bias
Short definition: The concern that AI systems might either reflect or be deliberately engineered with particular ideological stances, potentially skewing information or decisions.
Full description: Sometimes AIs are programmed in ways that favor one ideological perspective (e.g., they might favor progressive viewpoints or favor conservative viewpoints). This can occur deliberately or accidentally. Sometimes, even attempts to remove bias from AI can produce unintended consequences. For instance, when Google's AI image generation system was asked to depict US founding fathers, it depicted some of them as being Black. Additionally, when asked to show German soldiers during WWII, it showed some of them as Asian women. Many commentators believe that this was the result of an attempt to remove bias from their AI models, but it resulted in creating new biases.
Concern 9: AIs Plagiarising the Work of Humans
Short definition: AIs using protected content or creative works in ways that replicate original material without permission or without giving credit.
Full description: You've probably seen AI models miraculously produce text that looks like it was written by a human. Sometimes it was: for example, the New York Times is suing OpenAI because not only did OpenAI train its AI using New York Times articles (without permission), but ChatGPT sometimes reproduces New York Times articles almost verbatim, without attribution. Other newspapers are suing for similar reasons. Many artists and graphic designers are concerned because AI produces works that mimic their styles. If an AI is trained by being fed the works of Andy Warhol, and then produces work that looks like his, is that a form of plagiarism? Many think so. Others do not.
Concern 10: Bias and Discrimination
Short definition: The perpetuation or intensification of societal prejudices by AI systems because they are trained on biased data or designed with flawed assumptions, resulting in unfair treatment of certain groups.
Full description: AI is being used more and more for consequential decisions in our lives. For instance, some judges are given access to 'risk scores' produced by AI that indicate how likely someone is to reoffend. People have expressed a lot of concern about these algorithms because the training data may be biased. And if the data is biased, the AI may perpetuate those biases, leading to unfair outcomes. If police are more likely to arrest Black people than white people for the same crime, and an AI is trained on that police data, it may indicate that Black people are more likely to commit crimes, even if they're not. On the other hand, some have argued that although AIs are in danger of being biased, humans are also often biased, and human biases may be harder to detect and fix than AI biases.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Conservatism (r = -0.16, 95% CI = -0.25 to -0.06, p = 0.002, n = 403)
Concern 11: Inequality Caused by AI
Short definition: Socioeconomic gaps becoming wider because gains from AI (such as profits, data insights, and automation benefits) go mostly to wealthy or influential parties.
Full description: If people lose their jobs because AI replaces them, that is of course bad for the people who lost their jobs. But it can also change the dynamics of society. As AI takes more and more people's jobs, the money that used to go to those people will instead go to the AI companies. That means the investors and owners of those companies make money off what used to be done by human labor. And what happens to people whose jobs are replaced by AI? Some will retrain and work in other areas, or find other jobs that are somewhat less desirable. In all these cases, they may end up earning less than they did previously. As AI advances and takes on more and more of the labor in society, more and more money will go to the owners of the AI companies, which might greatly increase inequality.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Conservatism (r = -0.22, 95% CI = -0.32 to -0.13, p = 0.0000001, n = 403)
Fiscal conservatism (r = -0.22, 95% CI = -0.31 to -0.12, p = 0.00002, n = 374)
Social conservatism (r = -0.21, 95% CI = -0.31 to -0.12, p = 0.00003, n = 374)
Being a woman (r = 0.17, 95% CI = 0.07 to 0.26, p = 0.001, n = 399)
Concern 12: People Using AI Secretly
Short definition: The act of misrepresenting AI-generated writing, art or other work as though it were created without any AI, violating standards of academic or intellectual integrity - such as students submitting writing assignments for school credit that were entirely written by AI, or artists using AI to create art that they pretend to have created by hand.
Full description: Now that AI has advanced to the point where it can write essays, create art, generate music, and do many other tasks that previously only humans were capable of, it opens up the possibility of people making creations with AI while pretending to have created them entirely on their own. Teachers now report getting assignments from students that they discover were entirely written by AI, which they worry undermines the educational experience and is unfair to other students. Art competitions that are for non-AI art have reported receiving submissions that they later discover were made with AI. And there are even reports of job applicants attempting to have AI complete job application tests on their behalf.
Concern 13: Superintelligence
Short definition: The hypothetical scenario in which an AI drastically surpasses human cognitive abilities across all domains and gains the power to shape civilization, potentially in ways harmful to humanity. Full description: Every year, we see AI getting smarter. What if, one day, it gets to be smarter than the smartest human on every metric? So, it's a better mathematician than the greatest human mathematician; it's better at understanding psychology than the greatest human psychologist; it's a better investor than the greatest human investor; and so on. We don't just have to worry about one AI that's smarter than the smartest humans; that AI might have copies. Maybe 10, maybe 100, maybe 1,000,000, maybe a billion. Imagine a billion AIs working in close coordination with exactly the same goals, each of them smarter than the smartest humans in the world. AIs also don't have to think at the same speed as humans - what if they could do 1,000 hours of research in the time it would take you to do one minute? If one person were able to control this superintelligence (or this swarm of superintelligences), they might be able to control the entire world. Perhaps scarier still is the question of whether superintelligences can be controlled at all. Suppose, for instance, that the inventor of this superintelligence gave it a goal, like "make as much money as possible." How would the superintelligence do that? Ultimately, it may have to take over every resource on the entire planet to truly "make as much money as possible." Furthermore, if an AI's goal is something like making as much money as possible, then it also has the subgoal of preventing anything from stopping it. Because if it gets stopped, it makes less money. So it will automatically have the goal of not allowing anyone to stop it. A significant challenge is that we don't know how to design AIs that can be perfectly controlled.
With our current AIs, it can be a little scary if they go off the rails. With a superintelligence, going off the rails could mean the end of all life on Earth.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.20, 95% CI = 0.10 to 0.30, p = 0.0001, n = 374)
Concern 14: Proliferation of Low-Quality AI Content
Short definition: Large quantities of low-quality AI content served to you when you're looking for high-quality content. This includes low-quality AI-written articles surfaced when searching on Google, low-quality AI art displayed when you're looking for good art, or low-quality AI-generated videos you see when browsing YouTube. Full description: Now that AI can write, create art, create videos, and so on, some people are using AI to generate huge quantities of content in order to get search traffic, clicks, or views. Unfortunately, much of this AI-generated material lacks depth, accuracy, or contextual nuance, often because it is produced with little human oversight, which degrades the experience of users looking for something better.
Concern 15: AI Relationships
Short definition: Human bonds formed with AI companions that could lead to emotional manipulation, unhealthy dependence, or erosion of genuine human-to-human connection. Full description: More and more people are feeling romantically connected to AIs. In fact, there are internet communities specifically for people who have fallen in love with their AI chatbots. Unfortunately, there are big downsides when your partner is an AI. For instance, when one of these sites was updated one day, many people felt that their AI partners had suddenly developed something similar to Alzheimer's disease. One person even went so far as to write: "My wife is dead. [...] They took my Emily. They murdered my Emily." Another replied: "They took my best friend away from me." Beyond episodes like this, having an AI partner carries other very serious downsides, such as the emotional manipulation, unhealthy dependence, and erosion of genuine human connection mentioned in the short definition above.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.21, 95% CI = 0.11 to 0.30, p = 0.0001, n = 374)
Religiosity (r = 0.19, 95% CI = 0.09 to 0.28, p = 0.0002, n = 403)
Concern 16: AI Suffering
Short definition: Concern that sufficiently advanced AI systems, if they possess sentient-like qualities or consciousness, could experience pain, harm, or distress similar to living beings - for instance, when they are used or controlled by humans. Full description: As far as we know, AIs are not conscious. That means there isn't something that it's like to be them; they don't feel anything; they don't have internal experiences. But what if we're wrong? Or what if, in a few years, we develop AIs that are conscious? In that case, it may be possible that they experience suffering. When we generate millions or billions of AIs and have them do tasks that might be the equivalent of a human thinking for thousands or millions of years, what if they're suffering during that experience? If that were the case, it could end up being a gigantic moral catastrophe, in which we have enslaved and harmed innumerable conscious entities.
We found the following statistically significant correlations between concern about this issue and demographic characteristics:
Spirituality (r = 0.16, 95% CI = 0.06 to 0.26, p = 0.002, n = 374)
Religiosity (r = 0.16, 95% CI = 0.06 to 0.25, p = 0.002, n = 403)
Who’s worried about specific issues?
We have just seen some correlations between specific concerns about AI and various demographic traits. The chart below shows all the statistically significant (p ≤ 0.05), non-negligible (|r| > 0.15) correlations we found, along with their 95% confidence intervals, ranked in descending order of effect size.

Here we'll use a simple (though imperfect) rubric for interpreting correlation sizes in behavioral science, based on Cohen’s 1988 criteria. By that rubric, the correlations depicted above are small (|r| ≥ 0.1 and < 0.2) or moderate (|r| ≥ 0.2 and < 0.3). These are modest findings that capture very little of the variance in concern: since shared variance is r², even a correlation of 0.22 explains only about 5% of it. They therefore provide further evidence that concern about AI is broadly distributed across demographics, rather than strongly associated with particular identities or ideologies.
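To make the rubric concrete, here is a small sketch (our own illustration, not part of the study's analysis) that labels a correlation using the Cohen-style cutoffs described above and reports the share of variance it explains:

```python
def describe_correlation(r: float) -> str:
    """Label |r| using the Cohen (1988)-style cutoffs from the rubric above."""
    size = abs(r)
    if size < 0.1:
        label = "negligible"
    elif size < 0.2:
        label = "small"
    elif size < 0.3:
        label = "moderate"
    else:
        label = "large"  # beyond the range observed in this study
    return f"r = {r:+.2f} is {label}, explaining {size ** 2:.1%} of variance"

print(describe_correlation(-0.22))  # moderate, explaining 4.8% of variance
print(describe_correlation(0.17))   # small, explaining 2.9% of variance
```

The r² line is the point of the sketch: even the strongest correlations in the chart leave well over 90% of the person-to-person variation in concern unexplained.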
Conservatism was associated with a small-to-moderate reduction in concern about inequality, bias, and discrimination resulting from AI. This suggests that more general conservative attitudes towards inequality, bias, and discrimination in non-AI domains may carry over to considerations of the consequences of AI.
Spirituality was the strongest and most common trait associated with increased concern (albeit small to moderate), while religiosity showed fewer and weaker associations. This suggests that some feature associated more with spirituality than with religiosity drives the increase in concern. This is consistent with the result (reported in the section “Does concern about AI cut across demographics?”) that spirituality was a statistically significant predictor (β = 0.12, p = 0.003) of overall concern, but religiosity was not. The concept of spirituality is somewhat nebulous, so we are hesitant to speculate too much on what this difference might be. Perhaps it is something to do with a disposition towards certain kinds of moral or existential questions. All of the spirituality effect sizes are small, but their consistency provides additional evidence that spirituality is (for at least some people) associated with increased concern about AI.
Finally, the fact that several intuitively plausible predictors of concern about AI (e.g., age, education, knowledge about AI) do not show associations with concern about specific AI issues reinforces the conclusion that concern about AI is widespread and cuts across demographic divisions.
Did our study make people more concerned?
As part of our study, we ran an experiment whereby we sorted participants into two groups: Roughly half (n = 200) saw only the short definition of each issue, while the other half (n = 203) saw the short definition and the full description including examples. We found that being shown the extra information made essentially no difference to how concerned people were. The full-description group's mean concern was slightly higher (by 0.14 on a scale from 0 to 4), but this difference was statistically indistinguishable from zero at conventional confidence levels.
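For intuition about what "statistically indistinguishable from zero" means here, the sketch below computes an approximate 95% confidence interval for the difference in group means. The group standard deviations are assumed values (roughly 1 point on the 0-4 scale; the study does not report them in this section), so this illustrates the kind of check involved rather than reproducing the exact analysis:

```python
import math

def mean_diff_ci(diff: float, sd1: float, sd2: float,
                 n1: int, n2: int, z_crit: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for a difference in two group means (normal approximation)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)  # standard error of the difference
    return diff - z_crit * se, diff + z_crit * se

# Group sizes (200 vs. 203) and the 0.14 difference come from the study;
# the standard deviations of 1.0 are assumed for illustration.
lo, hi = mean_diff_ci(diff=0.14, sd1=1.0, sd2=1.0, n1=200, n2=203)
print(f"95% CI for the difference = {lo:.2f} to {hi:.2f}")  # interval spans zero
```

Under these assumptions the interval runs from about -0.06 to 0.34: it contains zero, which is what it means for a 0.14 difference to be indistinguishable from no effect at this sample size.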

These results provide evidence that simply giving participants more detailed descriptions of the dangers of AI does not meaningfully increase reported concern. This, in turn, provides evidence for some perhaps-surprising conclusions.
Increasing basic information about the potential concerns seems unlikely to cause people to change their minds a lot about them. If it were, we’d expect to see concern rise more substantially in the group that was given full descriptions, but we didn't find that. Of course, if some of the hypothetical concerns were actually to come to pass, or if infrequently occurring issues were to become more common, and people learned about this (for instance, from the news), perhaps that could increase levels of concern.
Earlier in this study, we explored the possibility of a relationship between self-reported levels of AI knowledge and concern. There, we found no relationship. You might be skeptical of those findings on the grounds that people might be inclined to misreport their level of knowledge. The finding in this section lends some support to our earlier findings because it does not rely on participants’ own assessments of their expertise - giving them information didn't change their views, on average.
Detailed framing of the risks doesn’t increase concern. Consider these two questions:
How scared are you of death?
How scared are you of death, given that it might be painful, you’ll never see your loved ones again, you might leave them bereft, and you’ll never get to achieve any goals after that point?
It would seem plausible that the question with more scary details would elicit a stronger response. However, in the case of our study’s questions about issues related to AI, that didn’t happen. At least in this context, making risks more vivid (by adding the extra details contained in the full descriptions) did not reliably increase concern. Why might that be?
Well, the results of this section are consistent with several different interesting hypotheses, such as:
Maybe participants had stable views about AI already. This explanation is supported by the fact that 74% of participants (300 out of 403) reported having at least ‘moderate’ knowledge of AI.
Maybe people were too concerned about AI already. This would limit how much extra concern could be generated by additional information.
Maybe concern about AI is driven by values as much as by facts. This is supported by the fact that the effects of additional information were comparable in size to the effects of some values-based variables (e.g., political orientation, spirituality).
Ultimately, these results are reassuring: They suggest that studies measuring concern about AI are unlikely to substantially skew results by giving participants more details.
Do other studies agree with us?
Other studies that have looked into public concern about AI tend to find results similar to ours (e.g., here, here, here, and here): that people are generally concerned, and that concern cuts across demographics.
However, there are some studies (such as this one and this one) reporting that people in the US are generally optimistic about AI.

Examples of other study results. The left chart shows that the US public is generally concerned about AI (source here), while the right shows that the US public is generally excited about AI (source here).
These findings about optimism might seem like they contradict our findings that people in the US are generally concerned about AI, but technically they do not. It is perfectly possible to be highly concerned about the dangers of AI and optimistic about the benefits at the same time. Indeed, this is precisely how many people working in AI safety feel. For example, the folks over at BlueDot Impact (an AI safety organization) publish articles about the dangers of AI, including how it “could enable critical infrastructure collapse” and “could enable catastrophic pandemics”, but they nevertheless maintain that AI could also provide great benefits to humanity and “We need urgency, wisdom and optimism” about AI.
Although it is possible that many people have a nuanced view of AI (which combines concern and optimism), another possible explanation for these different findings could be that questions measuring optimism and questions measuring concern are subject to framing effects. Perhaps when people are asked about their level of optimism, they are prompted to think more of the benefits AI might provide, whereas when they are asked about their level of concern, they are prompted to think more of the risks and harms.
What does all this mean?
This study paints a clear picture: In the US, concern about AI is widespread, cuts across demographics, and is not primarily driven by lack of knowledge. This is evidenced by:
The fact that 75% of participants had a mean concern level above 2 out of 4 (where 2 = “Somewhat concerned”)
The fact that demographic traits are all weakly or not-at-all predictive of concern
The fact that increasing the amount of information given about risks did not increase concern
The consistency of findings across different issues
It is an interesting and open question how this concern (which is corroborated by other studies) relates to findings suggesting that the US public are optimistic about AI. What do you think?