Recently, we ran a study testing whether astrological sun signs (like Scorpio and Capricorn) can predict any of 37 facts about a person’s life, such as their educational level and number of close friends. We found that whereas some personality tests could predict many of these facts reasonably well, zodiac signs couldn’t predict a single one of them.
When we released this result, a number of astrologers were angry at us. They pointed out that sun signs are just tabloid astrology – real astrologers use much more complex systems involving a person’s entire astrological chart.
And they were right! Inspired by this critique, we enlisted six astrologers to help us design a new study to test astrology itself, as practiced by astrologers, rather than the silly, tabloid version of it. And to make the challenge more interesting, we offered a $1,000 prize to the first astrologer who could perform sufficiently well on our test.
How Our Study To Test Astrology Works
There are many different types of astrology, so this study sought to test a claim fundamental to nearly all of them, which is: a person’s natal chart (reflecting the positions of celestial bodies at the time of their birth) offers insights about that person's character or life.
The test, which remains publicly available so that anyone can use it to test their own astrology skills, consists of 12 multiple choice questions. For each question, participants are shown a great deal of information about one real person’s life, reflecting that person’s answers to 43 different questions. These questions were chosen by asking astrologers what they would ask someone if they wanted to be able to accurately guess that person’s astrological chart.
Here are a few examples of pieces of information that were provided about each real person:
Alongside this information about each real person, astrologers were shown 5 astrological charts. Only one of these was the real natal chart of that person (based on their birth date, time, and location); the other four were "decoy" charts generated from random dates, times, and locations. The astrologer’s task was to determine which one of these five charts was the real one. Each chart was shown in two styles, “Placidus” and “Whole Signs”, because these are the two most commonly used in Western astrology – testing other types of astrology and other charting methods was beyond the scope of this study.
Here's an example of the decision the astrologers had to make:
If astrologers were randomly guessing (i.e., if they had no skill whatsoever), they would get 20% of questions correct on average. So if, without cheating, astrologers were able to get at least, say, 33% correct on average (4 out of 12 questions), that would provide strong evidence that astrology works. And even if most astrologers don't do better than chance, if just one astrologer can get at least 11 out of 12 right, that would provide strong evidence that that astrologer has genuine skill.
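The chance baselines above can be checked directly. A minimal sketch, assuming each of the 12 questions is an independent 1-in-5 guess, so the number of correct answers under pure guessing follows a Binomial(12, 0.2) distribution:

```python
from math import comb

n, p = 12, 0.2  # 12 questions, 1-in-5 chance per question under guessing

def binom_pmf(k: int) -> float:
    """Probability of exactly k correct answers under random guessing."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

expected_score = n * p  # mean of the binomial: 2.4 correct out of 12

# Probability a single guessing astrologer reaches each threshold:
p_at_least_4 = sum(binom_pmf(k) for k in range(4, n + 1))
p_at_least_11 = sum(binom_pmf(k) for k in range(11, n + 1))

print(f"expected score under guessing: {expected_score:.1f} / 12")
print(f"P(>= 4 correct) by luck:  {p_at_least_4:.3f}")
print(f"P(>= 11 correct) by luck: {p_at_least_11:.2e}")
```

One guessing astrologer reaches 4/12 roughly a fifth of the time, which is why the 33% bar is about the *average across all astrologers*, not any single score; the 11/12 bar, by contrast, has only about a two-in-ten-million chance of being hit by luck, which is what makes it strong evidence of individual skill.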
To help increase participation, we offered a $1,000 prize to the first astrologer (if any) who could get at least 11 right during the study period. Just before starting the challenge, 25% of those with astrological experience believed they would win this prize, and right after finishing the challenge, 15% believed they had done well enough to win it.
If it's true that a person's natal astrological chart contains lots of information about their character or life, then it stands to reason that astrologers should be able to match people to their charts at a rate that is better than random chance. Let’s see what actually happened.
Summary of Astrology Study Results
In total, we tested 152 astrologers who believed they would do better than chance at the tasks we gave them – we excluded participants who lacked any astrology experience as well as those who didn’t believe that they would perform better on the tasks than random guessing.
Someone guessing at random would, on average, correctly answer only 2.4 questions out of 12, whereas those astrologers in our study with the least experience believed they had gotten 5 right, on average (right after they completed all the tasks), and those with the most astrology expertise believed they had gotten 10 right, on average.
Despite their high degree of confidence in their performance, astrologers as a group performed no better than chance: their distribution of results closely resembled what you'd see if they had all been guessing at random. The number of charts they matched correctly, on average, was also not statistically significantly different from random guessing.
Not a single astrologer got more than 5 out of 12 answers correct, even though, after completing the tasks, more than half of astrologers believed they had gotten more than 5 answers correct.
More experience with astrology had no statistically significant association with better performance, and the astrologers with the most experience didn't do any better than the rest.
If astrologers as a group had been able to do meaningfully better than chance, this study design would have supported the conclusion that astrology works. But, as it turned out, astrologers in the study performed in a manner statistically indistinguishable from random guessing.
So, it seems that astrologers largely believed they could do the tasks required in this study, even though they actually lacked any ability to do so. But, even if they were getting many answers wrong, did they at least agree with each other about what the right answers were?
Quite surprisingly to us, there was very little agreement among astrologers about which natal chart belonged to each study subject. The astrologers who reported the greatest expertise had the highest level of agreement, but they still agreed with each other only 28% of the time, whereas if they had been selecting charts at random, they would have agreed 20% of the time.
Overall, agreement rates among astrologers ranged from about 21% to 28% depending on experience level. This suggests there is little consensus among astrologers when interpreting the same charts, even among those with high levels of experience.
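The 20% chance-level agreement figure follows from the task structure. A minimal sketch, assuming two astrologers each pick uniformly at random among the 5 charts, so they agree with probability Σᵢ (1/5)² = 1/5, confirmed here with a quick Monte Carlo check:

```python
import random

num_charts = 5

# Analytic baseline: two independent uniform picks land on the same
# chart with probability num_charts * (1/num_charts)^2 = 1/5 = 20%.
chance_agreement = num_charts * (1 / num_charts) ** 2

# Monte Carlo sanity check of the same quantity.
random.seed(0)
trials = 100_000
hits = sum(
    random.randrange(num_charts) == random.randrange(num_charts)
    for _ in range(trials)
)
simulated = hits / trials

print(f"analytic baseline:  {chance_agreement:.0%}")
print(f"simulated estimate: {simulated:.1%}")
```

Against this 20% baseline, the observed 21%–28% agreement rates are only slightly above what independent random picks would produce.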
We set out to design a rigorous test of one of the most fundamental claims of astrology: that a person's natal chart can be used to glean insights about that person's character or life. While astrologers largely believed they could beat this test with accuracy far above chance, their performance was indistinguishable from guessing completely at random. Not a single astrologer got more than 5 out of the 12 questions correct, despite more than half reporting (right after finishing the tasks) that they believed they had gotten more than 5 right, and more experienced astrologers did no better than less experienced ones. Finally, astrologers showed little agreement with one another about which chart was correct for each question. All of this provides evidence that astrology simply doesn’t work.
Final thoughts on our scientific test of astrology
This study was designed in partnership with astrologers, with the aim of conducting a fair test of astrology: one that would show support if astrology is valid, and a lack of support if it is not. If astrology works, we want to believe that it works – whereas if it doesn’t work, we want to believe that it doesn’t. And we sought to design and conduct this study in a way that reflects this genuine search for the truth. That being said, at best, any individual study can only provide strong evidence related to a claim, not definitive proof. Every study, including this one, should be interpreted in the context of other evidence. And no study of astrology, no matter how well-designed, can prove that there isn’t someone out there somewhere with astrological ability. But we believe that this study provides significant reason to doubt the claims of astrologers.
A much more detailed write-up of this study and its findings can be found here on the ClearerThinking.org website.
And if you want to try taking the test for yourself, you can do so here: