
Clearer Thinking's Study: Can astrologers truly gain insights about people from entire astrological charts?

Updated: Aug 22


 

Astrology is very popular — both Gallup and YouGov report that about 25% of Americans believe that the position of the stars and planets can affect people's lives, with an additional 20% reporting uncertainty about astrology’s legitimacy.

 

Previously, we tested whether facts about a person's life can be predicted using their astrological sun signs (such as Pisces, Aries, etc.). A number of astrologers criticized this work, saying that of course we found that sun signs don't predict facts about a person's life, because that's baby or tabloid astrology. Real astrologers use people's entire astrological charts to glean insights about them and their lives. 


And they had a good point! Despite sun sign astrology being popular, most astrologers use entire astrological charts, not merely people's sun signs. Here are some examples of the feedback we received:



Inspired by these critiques, we enlisted the help of six astrologers, and with their feedback and guidance, we designed a new test to see whether astrologers can truly gain insights about people from entire astrological charts!


If it's true that a person's natal astrological chart contains lots of information about their character or life, then it stands to reason that astrologers should be able to match people to their charts at a rate that is at least moderately better than random chance. If they can do that, then that would provide substantial evidence that astrology really works! 


So, how did the study turn out? If you just want the quick highlights, we've provided a summary below. Or you can read on for all the details of the design of the study and the analyses we conducted on the study data. Even if you're not interested in astrology, per se, we think that designing a test of astrology serves as a nice example of the scientific method put into practice - that is, how to go from a claim to a scientific test of that claim. We'll explain how that process works.


If you believe you have astrological skill, you can also put yourself to the test by taking the same challenge that we used in the study! We're making this challenge permanently available so that anyone can test their own abilities at any time. At the same link, we also offer a practice test (based on matching celebrities and events to an appropriate chart), which is less scientifically rigorous but can be used to practice before taking the official test. Additionally, we've released the anonymized data from our study, so that anyone who wishes to can analyze the data for themselves.


Summary of Results: 


  • We tested and analyzed the ability of 152 astrologers to accurately match people to their natal charts. For our primary analyses, we excluded anyone who reported no prior astrology experience as well as anyone who believed they would not do better than random guessing at the task of matching people to their astrological charts.

  • The 152 astrologers largely believed that they were capable of doing this task with accuracy well above chance. Whereas a random guesser would, on average, only correctly answer 2.4 questions out of 12, astrologers with the least experience thought they had correctly answered 5 charts after completing the study tasks, and those with the most astrology expertise believed they had gotten 10 right. 

  • Despite their high degree of confidence in their performance, astrologers as a group performed no better than chance - that is, their distribution of results closely resembled what you'd see if they had all been guessing at random, and the average number of charts they matched correctly was not statistically significantly different from random guessing either.

  • Not a single astrologer got more than 5 out of 12 answers correct - even though, after completing the task, more than half of astrologers believed they had gotten more than 5 answers correct.

  • More experience with astrology had no statistically significant association with better performance, and the astrologers with the most experience didn't do any better than the rest. 

  • If astrologers as a group had been able to do meaningfully better than chance, this study design would have supported the conclusion that astrology works. But, as it turned out, astrologers in the study performed in a manner statistically indistinguishable from random guessing.

  • Despite astrologers' belief that they were performing well on the task, there was little agreement among astrologers about which natal chart belonged to each study subject. The astrologers who reported the greatest expertise had the highest level of agreement, but they still only agreed with each other 28% of the time - whereas if they had been selecting charts at random they would have agreed 20% of the time.  



How do you test a claim scientifically? 


Before we get into the details of our astrological charts study, let's talk about how to test a claim scientifically. The methods of science, while imperfect, are some of the most powerful ideas ever invented by humanity. Here's one approach for using scientific methods to test claims:


  1. Make the claim precise: usually an ambiguous claim can't be tested, or if a test is made, it's hard to tell whether it truly was a valid test of the claim. For instance, saying "Your month of birth says a lot about you" isn't testable because it's too ambiguous. But specifying "people born in February are more empathetic than those born in other months" makes the claim precise enough to be testable (note: this is not what astrologers actually claim; we're just using it as a simple example). Many claims fail at this step in the testing process because they are not precise enough to be testable.

  2. Choose a measurement: to test the claim, you must be able to find something you can measure - the result of which will vary if the claim is true compared to if the claim is false. If you were testing the claim that people born in February are more empathetic, you could measure the empathy levels of people born in February as well as those born in other months using a well-validated empathy questionnaire. 

  3. Design a study: once you know what measurement you want to make, you need to design a study to actually make that measurement (while ruling out alternative explanations). In this example, you might decide to recruit 1,200 people from the U.S., collect their month of birth, measure the empathy level of each person, and then use a statistical test to check whether those born in February test higher, on average, than the rest.

  4. Run the study and analyze the result: finally, you have to put the study into action by recruiting study participants, collecting the data, and analyzing it to see whether it supports or contradicts the original claim. For instance, if the empathy levels of people born in February were found to be no different from those born in other months (within what would be expected by chance), that is strong evidence against the claim. Conversely, if the empathy level of those born in February is much higher, that is strong evidence in favor of the claim.
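The four steps above can be sketched in code. This is a toy illustration with simulated data (the empathy scores, sample sizes, and the claim itself are all hypothetical), using Welch's two-sample t statistic with a normal approximation for the p-value:

```python
import random
import statistics as stats
from math import sqrt, erf

random.seed(0)

# Step 3's hypothetical design: 1,200 U.S. participants, empathy measured
# on a 0-100 scale. Here we simulate a world where the claim is FALSE:
# both groups are drawn from the same distribution.
feb_born = [random.gauss(50, 10) for _ in range(100)]
other_born = [random.gauss(50, 10) for _ in range(1100)]

def welch_t(a, b):
    # Welch's two-sample t statistic (does not assume equal variances)
    va, vb = stats.variance(a), stats.variance(b)
    return (stats.mean(a) - stats.mean(b)) / sqrt(va / len(a) + vb / len(b))

t = welch_t(feb_born, other_born)
# Step 4: normal approximation to the two-tailed p-value
# (reasonable at these sample sizes)
p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
print(f"t = {t:.2f}, p = {p:.3f}")
```

Because the simulated groups come from the same distribution, a large p-value here would correctly count as evidence against the (simulated) claim.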



How do we apply the scientific method to astrology? 


To apply scientific methods to astrology, we first have to figure out what astrology claims. Since there are many types of astrologers making somewhat different claims, it helps to focus on aspects that are core to many types of astrology. One of the most fundamental claims that astrology makes is: a person’s natal chart — reflecting the position of the celestial bodies at the time of their birth — offers insights about that person's character or life. 


If you're curious what natal charts look like, here's an example in two popular styles known as "Placidus" and "Whole Signs":


Adapted from charts provided by astro.com


If you've never seen an astrological chart before, you may find that it comes across as quite mysterious. Natal charts are generated based on a birth date, time and location. There are different opinions on the best ways to read such charts, but many astrologers agree on these basics:


  • Houses: the chart is divided into 12 houses, each representing different areas of life (e.g., some interpret the 2nd house as being related to finances)

  • Signs: each house has a zodiac sign, which influences the characteristics of that area of life (e.g., some interpret a "6th House Cusp in Virgo" as being related to a service-oriented approach to health). Note, though, that not all astrologers use the same method to assign signs to houses.

  • Planets: planets are placed in houses and signs, affecting specific life areas or traits (e.g., some interpret Mars in the 3rd house as influencing communication style)

Reading such a chart requires specialized skill. Someone with no experience is not going to be accurate, even if astrology turns out to be effective. An additional challenge, when it comes to study design, is that astrologers differ in how they interpret these charts.  


But since nearly all astrologers agree that natal charts can reveal insights about a person's life or character, this is the claim we designed our study around, asking astrologers to identify which chart belongs to a given person from a set of options. What's appealing about testing this core aspect is that mainstream science considers such predictions about a person to be impossible (since none of the known forces of physics could account for the relationships between an astrological chart and a person's life). So if astrologers can actually do this task successfully, that's a strong demonstration that they have a skill that science can't currently explain.



How did our test for astrologers work? 


Our test for astrologers consists of 12 multiple choice questions. For each, we show lots of information about one real person's life (50 such pieces of information, including some basic factual information like gender and education, as well as lots of answers to open-ended questions, such as how they would describe their personality, what their brief life story is, how lucky or unlucky they feel they've been, what their home life was like growing up, and so on). These questions were chosen by asking astrologers what they would ask someone if they wanted to be able to accurately guess that person's astrological chart. 


Here are a few examples of the 50 pieces of information we provided about each study subject:



After showing this information about a study subject, we showed each astrologer 5 astrological charts. Only one of these is the real natal chart of that person (based on their birth date, time, and location), and the other four are "decoy" charts that were generated based on random dates, times, and locations. The astrologer's task is to predict which one of the five charts is the person's real chart. 


Here's an example of the decision the astrologers had to make:



If astrologers were randomly guessing (i.e., if they had no skill whatsoever), they would get 20% of questions correct (an average of 2.4 out of 12). If, on average, astrologers can get at least, say, 33% correct (4 out of 12) that would provide substantial evidence that astrology works. And even if most astrologers don't do better than chance, but just one astrologer can get at least 11 out of 12 right, that would provide strong evidence that that astrologer has genuine skill. To increase interest and participation in our study, we offered a $1,000 prize to the first astrologer (if any) who could get at least 11 right during the study period. Just before starting the challenge, 25% of those with astrological experience believed they would win this prize, and right after finishing the challenge 15% believed that they had done well enough to win this prize.
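The chance-level numbers in this paragraph follow directly from the binomial distribution (a quick check, assuming each random guess is independent with a 1-in-5 chance of being right):

```python
from math import comb

n, p = 12, 0.2  # 12 questions, 1-in-5 chance per random guess

expected = n * p  # average score for a pure guesser: 2.4

# Probability a pure guesser scores at least 11 out of 12
p_at_least_11 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (11, 12))
# roughly 2 in 10 million -- so 11+ correct would be strong evidence of skill
```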



What do we mean by “astrologer”?


328 participants took our test, including 152 astrologers. We define an astrologer (for these analyses) as a participant with at least some astrology experience who also predicted, just before starting the task (but after the study design was explained to them), that they would perform better than chance (i.e., that they would get at least 3 out of 12 questions correct). The reason for these exclusion criteria is that if someone doesn't have astrological experience, their performance on this test obviously says nothing about astrology; and if someone with astrological experience does not believe they can do better than random guessing at the task, then it's not fair to use their inability to do the task as evidence that astrology itself doesn't work. 


We recruited astrologers through a variety of methods: 


  1. We reached out to dozens of notable, well-trusted and influential astrologers, telling them about the project and asking whether they would like to participate in the challenge.

  2. We promoted the challenge to our >200,000 newsletter subscribers as well as on our social media accounts.

  3. We posted the challenge to a variety of popular astrology Facebook groups.

We also attempted to post the challenge to the two largest astrology subreddits, but unfortunately in both cases the administrators would not allow us to post the study there.



What were the results of the study?


Did astrologers do better than chance (i.e., did they do better than the average of 2.4 questions out of 12 right expected from random guessing)?


No, astrologers did not perform better than chance. The statistical tests show no statistically significant difference between astrologer performance and what would be expected from random guessing.


Astrologers on average got 2.49 questions correct out of 12, with a 95% confidence interval of 2.29 to 2.7. This is extremely close to the 2.4 correct answers we would expect (under the null hypothesis that all of the astrologers were guessing completely at random), and 2.4 lies well within the confidence interval. 
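These summary statistics are enough to reproduce the t-test, at least approximately. A sketch (using a normal approximation in place of the exact t distribution, which is adequate at n = 152):

```python
from math import sqrt, erf

mean, sd, n = 2.49, 1.30, 152  # observed study results
mu0 = 12 * 0.2                 # 2.4 expected under random guessing

t = (mean - mu0) / (sd / sqrt(n))        # one-sample t statistic
phi = 0.5 * (1 + erf(abs(t) / sqrt(2)))  # standard normal CDF
p_two_tailed = 2 * (1 - phi)             # close to the reported 0.395
```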


| Measure | Value |
| --- | --- |
| Average number of answers astrologers got correct | 2.49 |
| 95% confidence interval for mean of correct answers | 2.29 to 2.70 |
| Two-tailed t-test p-value | 0.395 |
| One-tailed t-test p-value | 0.197 |
| Standard deviation of correct answers | 1.30 |
| Number of astrologers | 152 |


The following chart compares the actual performance of astrologers to what would be expected from random guessing. It shows:


  • The percentage of astrologers who got each number of correct answers (from 0 to 12) - as blue bars

  • The expected percentage if astrologers were just guessing randomly - as a green line.



As you can see, the results of all 152 astrologers as a group had a distribution of correct answers that is very similar to what we would have expected to see if none of them had skill and they were all guessing at random.


For example, 5.3% of astrologers got 0 questions correct, compared to 6.9% expected by chance. 28.3% of astrologers got 3 correct, versus 23.6% expected by chance. Importantly, no astrologer got more than 5 questions correct, while random chance predicts a small percentage would get 6 or more correct just by luck.


| Correct answers | % of astrologers that got this many correct (n = 152) | Expected % if all were guessing at random |
| --- | --- | --- |
| 0 | 5.3% | 6.9% |
| 1 | 20.4% | 20.6% |
| 2 | 23.0% | 28.3% |
| 3 | 28.3% | 23.6% |
| 4 | 17.1% | 13.3% |
| 5 | 5.9% | 5.3% |
| 6 | 0% | 1.6% |
| 7 | 0% | 0.3% |
| 8 | 0% | 0.05% |
| 9 | 0% | 0.006% |
| 10 | 0% | 0.0004% |
| 11 | 0% | 0.00002% |
| 12 | 0% | 0.0000004% |



Did astrologers with more experience believe they would perform better?


Just before starting the challenge (but after reading how it works) participants were asked "How many of the 12 official challenge questions do you think you will get right?" Additionally, immediately after finishing the challenge, participants were asked "How many of the 12 questions do you think you got right?" Recall that we are excluding from our analysis the "guessers" - participants who have no astrology experience or who believed (just before starting the challenge) they would get fewer than 3 questions right (since that means they predicted they would perform worse than chance on the challenge).


We see that there is a strong relationship between astrology experience and how many questions astrologers believed they would get right. All groups, on average, believed they would do much better than chance (i.e., better than 2.4 questions right out of 12) both before starting and after completing the challenge. The least confident group were those who reported only "a little experience" with astrology (estimating they'd get 6.4 questions right prior to starting, and 5.0 right after they completed it, both well above chance). In contrast, the most confident group were the self-described world-class experts (estimating they'd get 10.4 right before the challenge and 10.2 after). 


On average, confidence dropped by about one question across all experience levels after completing the test, except for "world-class experts", whose estimates fell only from 10.4 correct answers before the challenge to 10.2 after. So after finishing the challenge astrologers were a little less confident than they were just before starting, but even then, astrologers remained confident in their abilities.



| Astrology experience | Average number they believed they would get right (before the challenge) | Average number they thought they got right (after the challenge) | Average number of correct answers out of 12 |
| --- | --- | --- | --- |
| I have a little experience | 6.4 | 5.0 | 2.3 |
| I'm an experienced amateur | 7.6 | 6.3 | 2.7 |
| I'm between an amateur and an expert | 7.6 | 6.7 | 2.4 |
| I'm an expert but not world-class | 10.4 | 9.3 | 2.6 |
| I'm a world-class expert | 10.4 | 10.2 | 2.2 |


If we consider all participants in our study who said they had at least a little astrological experience, including those who didn't believe they would do better than chance (n=172), we see that before starting the test 61% of these participants believed they would get at least 6 questions right (and right after finishing the test 51% believed they had gotten at least 6 questions right). Before starting, 88% of these participants believed they would do better than chance (i.e., get more than 2.4 right), and right after finishing the challenge 80% believed they had done better than chance.


| Questions right | % who, just before starting, thought they'd get at least this many right | % who, after finishing, thought they had gotten at least this many right |
| --- | --- | --- |
| 0 | 100% | 100% |
| 1 | 98% | 96% |
| 2 | 95% | 94% |
| 3 | 88% | 80% |
| 4 | 77% | 67% |
| 5 | 69% | 59% |
| 6 | 61% | 51% |
| 7 | 47% | 39% |
| 8 | 40% | 31% |
| 9 | 34% | 24% |
| 10 | 30% | 19% |
| 11 | 25% | 15% |
| 12 | 10% | 6% |



Did astrologers with more experience actually perform better than those with less experience?


Experience level did not correlate with improved performance. For example, astrologers who reported having "a little experience" got an average of 2.3 questions right, while self-described "world-class experts" averaged just 2.2 questions right. The best-performing group were those with the second-lowest amount of experience (those who reported being "an experienced amateur"). They got 2.7 right, which was not statistically significantly different from random chance.


| Astrology experience | Average number of correct answers out of 12 | Number of participants in group | Standard deviation of correct answers | 95% confidence interval |
| --- | --- | --- | --- | --- |
| I have a little experience | 2.3 | 52 | 1.4 | 2.0 to 2.7 |
| I'm an experienced amateur | 2.7 | 47 | 1.3 | 2.3 to 3.1 |
| I'm between an amateur and an expert | 2.4 | 38 | 1.2 | 2.1 to 2.8 |
| I'm an expert but not world-class | 2.6 | 10 | 1.0 | 2.0 to 3.2 |
| I'm a world-class expert | 2.2 | 5 | 1.1 | 1.2 to 3.2 |



How did performance vary based on prior experience with specific types of astrology?


At the start of the challenge, we asked participants to indicate which types of astrology (e.g., Western, Chinese, Traditional, etc.) they were experienced with. The answers were in checkbox format, allowing them to select multiple options. After collecting the data, we analyzed whether astrologers reporting experience in different types of astrology achieved different results.


Considering each group independently, participants with experience in Hellenistic astrology got 2.9 correct answers on average (the most of any group), with a p-value of 0.036 in a two-tailed t-test and 0.0178 in a one-tailed t-test (relative to the 2.4 questions right expected from random guessing). None of the other astrology types performed statistically significantly better than random guessing. 


Since we tested 20 hypotheses (one for each of these 20 types of astrology), we would expect to have one false positive on average meeting a p < 0.05 threshold. If we correct for the number of hypotheses tested, we find that none of the groups are statistically significantly different from guessing at random. After adjusting for 20 hypotheses with a Bonferroni correction, the one-tailed p-value for Hellenistic astrology increases to 0.356, indicating no statistically significant deviation from random guessing.
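The adjustment described here is a plain Bonferroni correction; as a sketch:

```python
p_raw = 0.0178   # one-tailed p-value for the Hellenistic group
n_tests = 20     # one hypothesis per type of astrology tested

# Bonferroni: multiply each raw p-value by the number of tests
# (capped at 1.0, since a probability can't exceed 1)
p_adjusted = min(p_raw * n_tests, 1.0)   # 0.356 -- not significant
```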


Note that each participant could indicate being experienced with more than one type of astrology (so the same participant can appear in more than one group).

| Type of astrology they reported being experienced with | Average number of correct answers out of 12 | Number of participants in group | Standard deviation of correct answers | 95% confidence interval |
| --- | --- | --- | --- | --- |
| Hellenistic | 2.9 | 29 | 1.22 | 2.4 to 3.3 |
| Horary | 2.8 | 18 | 1.31 | 2.2 to 3.4 |
| Psychological | 2.8 | 45 | 1.25 | 2.4 to 3.1 |
| Mundane | 2.7 | 27 | 1.10 | 2.3 to 3.1 |
| Western | 2.5 | 131 | 1.21 | 2.3 to 2.7 |
| Evolutionary | 2.5 | 23 | 1.44 | 1.9 to 3.1 |
| Degree Theory | 2.5 | 10 | 1.18 | 1.8 to 3.2 |
| Medical | 2.5 | 12 | 1.45 | 1.7 to 3.3 |
| Humanistic | 2.5 | 12 | 1.51 | 1.6 to 3.4 |
| Uranian | 2.5 | 8 | 1.31 | 1.6 to 3.4 |
| Chinese | 2.4 | 27 | 1.55 | 1.9 to 3.0 |
| Electional | 2.4 | 14 | 1.28 | 1.8 to 3.1 |
| Mayan | 2.4 | 5 | 1.14 | 1.4 to 3.4 |
| Traditional | 2.4 | 94 | 1.27 | 2.1 to 2.6 |
| Vedic | 2.2 | 22 | 1.10 | 1.7 to 2.6 |
| Esoteric | 2.1 | 22 | 1.32 | 1.6 to 2.7 |
| Medieval | 2.0 | 11 | 1.34 | 1.2 to 2.8 |
| Cosmobiology | 2.0 | 8 | 1.41 | 1.0 to 3.0 |
| Renaissance | 1.9 | 10 | 1.29 | 1.1 to 2.7 |
| Egyptian | 1.6 | 10 | 1.51 | 0.7 to 2.5 |



Do astrologers at least agree with each other?


Each graph in the following chart represents a different question (1 through 12) in the astrology test, with the correct answer indicated in blue. The bars show the percentage of astrologers who chose each option (A through E) for that question.


If astrologers strongly agreed with each other, we would expect to see one bar that is much higher than the others in each graph, indicating consensus. However, what we actually see is:


  • A wide distribution of answers given across most questions.

  • In most cases (9 out of 12), the correct answer (in blue) is not even the most commonly chosen option.

  • There is a general lack of agreement among astrologers about which chart belongs to which person. 



To assess the level of agreement among astrologers, we calculated the average pairwise agreement rate for different experience levels. This rate represents the percentage of questions for which two randomly-selected participants in each group gave the same answer.


The agreement rates among astrologers are very low, ranging from about 21% to 28% depending on experience level. This suggests there is little consensus among astrologers when interpreting the same charts, even among those with high levels of experience.



Note: For each astrological experience group, we first calculated the percentage of answers in common between all possible pairs of group members. This percentage was then averaged across all pairs within each group to derive the average pairwise agreement rate displayed in the chart.
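The calculation described in this note can be sketched as follows (the participant answers below are made up purely for illustration):

```python
from itertools import combinations

def pairwise_agreement(answer_strings):
    """Average fraction of questions on which two group members gave
    the same answer, taken over all pairs within the group."""
    rates = [
        sum(x == y for x, y in zip(a, b)) / len(a)
        for a, b in combinations(answer_strings, 2)
    ]
    return sum(rates) / len(rates)

# Toy group: 3 participants, 4 questions each (hypothetical answers)
group = ["ABCA", "ABDA", "CBDA"]
rate = pairwise_agreement(group)   # (3/4 + 2/4 + 3/4) / 3 = 2/3
```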


| Experience level | Pairwise agreement rate | Number of people in group |
| --- | --- | --- |
| I've never done it before | 20.5% | 156 |
| I have a little experience | 22.2% | 66 |
| I'm an experienced amateur | 23.3% | 50 |
| I'm between an amateur and an expert | 21.2% | 39 |
| I'm an expert but not world-class | 20.8% | 12 |
| I'm a world-class expert | 28.3% | 5 |



What are the limitations of our study?



Possibility of Skill Existence


Even though we did not observe evidence of skill in our sample of 152 astrologers, this doesn’t rule out the possibility that a skilled astrologer might exist. However, if this skill exists at all, our study suggests it’s rare, and that greater skepticism is warranted toward claims of skill at reading astrological charts.



Inadequacy of Task for Astrologers


It’s possible that our task cannot be done by astrologers even if astrology has merit. To address this concern, we included in our analyses only the participants who believed their performance would exceed random chance. We also developed the study design with the help of astrologers, and focused on a central claim of astrology: that astrological charts can reveal insights about a person’s character and life. Additionally, astrologers largely believed that they had performed well in our study even after they had finished it, indicating confidence in their skills at these tasks (whether or not the task truly measured their abilities accurately).



Insufficient Information


We might not have given astrologers the right or enough information to accurately match study subjects with their astrological charts. To address this, we formulated our questions for the anonymous study subjects based on feedback from astrologers (asking what questions they would ask someone to guess that person’s astrological chart) and also structured the topics of the questions based on the 12 astrological houses. We also provided a very large number of question responses for each study subject (i.e., their responses to 43 different questions) to help ensure astrologers had enough information to answer. 


We have also heard from some astrologers that they believe self-reported information is not reliable, or that we should have asked study subjects multiple choice questions instead of open-ended questions. It is, of course, true that people can have misperceptions about themselves, and that, at times, they can report unreliable information about themselves, whether due to these misperceptions or to make themselves look good.


However, the astrologers we worked with when developing this test were the ones who suggested the questions we asked study subjects, in response to our asking what they would ask someone if they were trying to figure out that person's astrological chart. Additionally, it's clear that the astrologers participating in our study largely believed the information we provided was adequate, as they largely believed they had performed at a rate far above chance on the questions. If they had believed the information was inadequate for making these judgments, presumably they would not have predicted that they had gotten so many questions right.



Charts too Similar to Each Other


Perhaps the charts we showed were too similar to each other in each round - that is, perhaps the decoy charts were too similar to the correct answer. To help avoid this, we made sure that the 5 charts shown in each question all differed from each other in both sun sign and moon sign, and that they weren’t too close to each other in date (the charts were at least 21 days apart). This, however, does not guarantee that no two charts in the same round would have substantial similarities.


As noted, astrologers largely believed that they had done better than chance at the task after completing it, showing that they believed they had sufficient information and that the charts were sufficiently different from each other for them to be able to succeed at the task. Also note that, to demonstrate that astrology works, astrologers would have needed only to do better than chance on average — they did not need to get all or even most questions right.



Participant Skill Inadequacy


It’s possible that the astrologers who participated in our study were unskilled. To help mitigate this, we reached out to dozens of well-known and influential astrologers when the study began, inviting them to participate. While we had more participants with lower experience than with higher experience, we did have highly experienced astrologers in our study — and the astrologers with greater experience did not outperform the less experienced astrologers. Notably, not one of the 152 astrologers got more than 5 of the 12 questions right.


A related critique is that astrologers were self-reporting their own experience level. If they were not reporting this experience level accurately, that could make the results less reliable.



Limitation to Western Astrology


We only tested Western astrologers using Placidus and/or Whole Sign charts, the two most popular Western astrological chart systems. So we can’t rule out the possibility that other astrological systems (e.g., Vedic, Chinese, etc.) or other ways of representing charts work better.



Limitations of a Single Study


As always, no single study is definitive. We designed our study with the aim of conducting a fair test of astrology: one that would show support if astrology is valid, and a lack of support if it is not. If astrology works (to do what it claims), then we want to believe that it works; and if it doesn't work (e.g., its claims are all false), we want to believe that it doesn't. We did our best to make our study reflect a genuine search for the truth. That being said, at best, an individual study can only provide strong evidence related to a claim, not definitive proof. Every study, including this one, should be interpreted in the context of other evidence.



Areas for future research and post hoc analyses


While, as a group, astrologers in our study performed no better than chance, and no single astrologer performed especially well so as to stand out, we were interested to see the best case that could be made for astrology's effectiveness using our data, which we present here.


We received a request to do a post hoc analysis of just those astrologers whom we were able to manually verify as professionals (while still respecting their privacy) - in other words, those who sold their astrological services, had published a book on the subject of astrology, taught courses at an astrology school, or showed other strong evidence of professional involvement.


While post hoc analyses like these should be treated with caution - enough of them will inevitably produce seemingly positive results due to statistical flukes, as was often seen during the replication crisis in psychology - we conducted this analysis at their request. Its results suggest potential areas of future research that others may want to pursue.


In particular, there were 17 participants (of the 152 astrologers participating in our study) whom we were able to manually verify work professionally in the field. For the other 135 participants, we simply do not know whether they are professionals or not; all we can say about them is that we do not know that they are professionals.


While none of these 17 verified professionals got more than 5 questions right out of 12, they did, on average, perform slightly better than chance on the astrological challenge (3.29 questions right out of 12, which is equivalent to getting 27.4% correct, compared to 2.4 questions right, or 20% correct, if they had been guessing completely at random; p=0.003, one-sided t-test). Among these 17, there were 6 for whom we could confirm a higher level of credentials (such as authoring a book on astrology or teaching a course at an astrological institution) - they got 2.83 right (23.6% correct).


Looking at which of the 12 questions these 17 verified professionals did best on, we found three questions on which 47% of them gave the same (correct) answer - and no question with more than 47% agreement.


Since these analyses use small samples of just 17 and 6 participants, respectively, are post hoc (inflating the chance of false positives), and show small effect sizes (7 and 4 percentage points better than chance, respectively), it is hard to interpret them as much evidence of genuine skill. Still, they represent the most promising results from the whole study, as all other analyses completely failed to find astrological effects.


So, for anyone wishing to take up the mantle of conducting further research (which we would very much encourage), our recommendations for maximizing the chance of finding evidence that astrology works using a study design similar to ours would be to:


  1. Limit the participants to verified professional astrologers only.

  2. Power the study so as to be able to detect effects even if astrological skill only enables doing a bit better than chance (e.g., 27% right compared to 20% from random guessing).

  3. Use a larger number of study subjects (the people whose charts astrologers attempt to identify - we used just 12). With few subjects, a subject who by fluke chance alone is unusually easy or hard to match can skew results, so more subjects are desirable. With more subjects, each astrologer could either be assigned a subset of subjects to predict, or face fewer charts to choose from in each round but more rounds.
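To make recommendation 2 concrete, here is a rough normal-approximation power calculation using the 27%-vs-20% effect size from the text, 12 questions per astrologer, and conventional 5% significance (one-sided) and 80% power. It treats each astrologer's score as binomial and ignores clustering, so the answer is only a ballpark:

```python
from math import ceil, sqrt
from scipy.stats import norm

def astrologers_needed(p_alt=0.27, p_null=0.20, n_q=12, alpha=0.05, power=0.80):
    """Rough sample size (number of astrologers) for a one-sided test of the
    mean score, using a normal approximation to the per-astrologer score."""
    effect = n_q * (p_alt - p_null)            # expected extra questions right
    sd0 = sqrt(n_q * p_null * (1 - p_null))    # per-astrologer SD under the null
    sd1 = sqrt(n_q * p_alt * (1 - p_alt))      # per-astrologer SD under the alternative
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return ceil(((z_a * sd0 + z_b * sd1) / effect) ** 2)

print(astrologers_needed())  # roughly 19 astrologers with the defaults above
```

Even under these simple assumptions, a few dozen verified professional astrologers would suffice for this effect size; detecting a smaller edge over chance would require correspondingly more.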





Conclusion


We aimed to design a rigorous test of one of the most fundamental claims of astrology: that a person's astrological natal chart can be used to glean insights about the person's character or life. We conducted this test by providing, for each of 12 study subjects, a great deal of information about that subject, and then asking astrologers to identify which of 5 astrological charts was that person's real natal chart. While astrologers largely believed that they were able to do this task at an accuracy far above chance, as a group their performance was indistinguishable from guessing completely at random. Additionally, not a single astrologer got more than 5 out of the 12 questions correct, despite more than half of astrologers reporting (right after finishing the tasks) that they believed they had gotten more than 5 right. More experienced astrologers did no better than less experienced ones. Finally, astrologers had little agreement with each other about which chart was correct for each question.



Appendix


Methodology for generating charts


We used both the Placidus and Whole sign systems for the astrological charts, with charts sourced from astro.com.


To keep the incorrect charts realistic (so they wouldn't stand out from the correct answers) while keeping the decoys well differentiated from the real chart, we randomly assigned a birth day, time, and location in the following way:


  • Year selection: We used the volunteer’s actual year of birth for all charts to prevent age-based guesses about the correct chart. For instance, if a volunteer was born in 1996, then all five charts for their question would be from 1996.

  • Time of day: We created random birth times by generating a random hour between 0 and 23 and a random number of minutes between 0 and 59.

  • Day of the year: We selected a random day of the year between 1 and 365. To ensure sufficient differentiation between charts, if a randomly selected day fell within 21 days of another chart's day for the same volunteer, we re-generated it, guaranteeing a minimum separation of 21 days between the charts in a question. We also made sure that no two options in a question shared the same Sun sign or the same Moon sign.

  • City of birth: We randomly selected the birth city from a list of cities where the anonymous volunteers were born. While the randomly selected city usually did not match the volunteer's actual birth city, occasionally it did.
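The sampling rules above can be sketched in a few lines of Python. This is our illustrative reconstruction, not the study's actual code; it covers the random time and the 21-day-separation rule, and omits the Sun/Moon sign check (which requires an ephemeris) and the city selection:

```python
import random

def decoy_days(real_day: int, n_decoys: int = 4, min_gap: int = 21) -> list[int]:
    """Sample decoy days-of-year (1-365), re-drawing any candidate that falls
    within min_gap days of the real day or of an already-accepted decoy."""
    chosen = [real_day]
    while len(chosen) < n_decoys + 1:
        candidate = random.randint(1, 365)
        if all(abs(candidate - d) >= min_gap for d in chosen):
            chosen.append(candidate)
    return chosen[1:]

def random_time() -> str:
    """Random birth time: hour 0-23, minute 0-59."""
    return f"{random.randint(0, 23):02d}:{random.randint(0, 59):02d}"

days = decoy_days(real_day=150)
print(days, random_time())
```

Rejection sampling like this is a simple way to enforce the separation constraint: with 5 charts spread over 365 days, a valid candidate is found quickly.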



Extra charts


Self-reported accuracy (i.e., the response to the question "When you make predictions or gain insights from your reading of astrological charts, what percent of the time are these predictions or insights accurate?") was not a statistically significant predictor of test performance when we ran a linear regression predicting the number of correct answers from self-reported accuracy:


Intercept: 2.28
Correlation: 0.06
Degrees of freedom: 134
t-statistic: 0.68
p-value: 0.499
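For readers who want to run this kind of check on their own data, here is a minimal sketch of such a regression using scipy.stats.linregress. The data below are simulated purely for illustration (the table's 134 degrees of freedom imply roughly 136 participants); the variable names are ours, not from the study:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Hypothetical data for illustration: 136 astrologers' self-reported accuracy
# (in percent) and their scores out of 12, generated so the two are unrelated.
self_reported = rng.uniform(30, 100, size=136)
scores = rng.binomial(n=12, p=0.2, size=136)

fit = linregress(self_reported, scores)
print(f"intercept={fit.intercept:.2f} slope={fit.slope:.3f} p={fit.pvalue:.3f}")
```

With unrelated simulated inputs like these, the slope hovers near zero and the p-value is typically unremarkable - the same qualitative pattern as the table above.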



The number of correct answers was not meaningfully better than chance, regardless of participants' self-reported belief in astrology (in response to the question "How strongly do you believe in the main claims of astrology (e.g., that the positions and motions of celestial bodies can be usefully used to understand and predict human lives and events)?"). The following chart includes all 328 participants, both astrologers and "guessers" (i.e., participants not counted as astrologers):




With this study design, how well would astrologers have to have performed for us to be able to conclude their responses were not merely random?


If we consider the null hypothesis to be that astrologers were answering entirely at random, then for us to have rejected this null hypothesis, astrologers would only have had to average at least 23% of questions right - just barely above the 20% expected from random guessing. However, they did not meet this bar, so the study came out against astrology: astrologers got just 20.75% of questions right, a rate statistically indistinguishable from random guessing.


An alternative way that this study could have demonstrated evidence in favor of astrology is if one or more astrologers had performed exceptionally well - for instance, if even one astrologer had gotten at least 11 out of 12 questions correct, that would have provided meaningful evidence of astrological skill. But none of the 152 astrologers got more than 5 questions right out of 12.
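A quick calculation shows why even a single 11-out-of-12 score would have been meaningful: under the random-guessing null, such a score is so unlikely that seeing it even once among 152 astrologers would be a strong signal. Using the exact binomial tail:

```python
from math import comb

def prob_at_least(k: int, n: int = 12, p: float = 0.2) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k+ correct by guessing."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

p_one = prob_at_least(11)        # one astrologer scoring 11+ by luck
p_any = 1 - (1 - p_one) ** 152   # any of the 152 astrologers doing so
print(f"{p_one:.2e} per astrologer, {p_any:.2e} across all 152")
```

The per-astrologer probability is about 2 × 10⁻⁷, so even across all 152 astrologers the chance of one such score arising by luck is only a few in a hundred thousand.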


Take the astrology test


If you want to try taking the test for yourself, you can do so here:




