If you've been following the news around the upcoming American presidential election, you've probably read a lot about polls – and the election's still almost 100 days away, so you'll be hearing a lot more about them in the coming months. While polling can be a useful way to predict electoral outcomes, the sheer number of polls available and the varying degrees of quality and reliability among them can prove frustrating. With so many different polls reporting a broad variety of likely outcomes, how can we know which sources to trust?

It would require a great deal of work and background knowledge to thoroughly examine the methods of all of the hundreds of polling firms in the market. Fortunately, the respected statistics blog FiveThirtyEight recently performed this difficult feat for the public — you can find their extensive, sortable ratings of over 300 pollsters right here.

Interpreting political polls can be difficult because statistics itself can be quite complex and even counter-intuitive. This complexity engenders some popular misconceptions about the way polls work. For instance:

Misconception 1: To get an accurate sense of the opinions of a large country like the U.S., you must poll millions of people.

Reality: In fact, it doesn't matter whether a country has 1 million citizens or 1 billion; the number of people you need to poll to get an accurate reflection of attitudes will still only be in the thousands. For instance, on a simple agree/disagree question, you'd only need a random sample of roughly 1,000 people from the U.S. population to estimate the percentage who agree to within a margin of error of about 3 percentage points, assuming no other biases are present. (That is, you could say with 95% confidence that the results from your poll will come within 3 percentage points in either direction of the true answer.)
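If you want to see where that "roughly 1,000 people" figure comes from, here's a minimal sketch using the standard normal-approximation formula for a proportion. Note that this assumes a simple random sample with no other sources of bias, and uses the worst-case proportion of 50%:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion.

    Assumes a simple random sample and the normal approximation;
    p = 0.5 gives the widest (worst-case) interval, and z = 1.96
    corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# With about 1,000 respondents, the margin of error is roughly
# 3 points -- and notice that the population size never appears
# in the formula, which is why 1 million vs. 1 billion citizens
# makes no difference.
print(round(margin_of_error(1000) * 100, 1))  # ~3.1
```

The key takeaway is that the population size doesn't appear in the formula at all; only the sample size matters.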

Misconception 2: All polls are about equally reliable.

Reality: As 538 does such a good job of demonstrating, and as we'll discuss more in a bit, polls can vary considerably in their accuracy. There are essentially two sources of error in a poll. The first is sampling error, which can distort polls that examine only portions of the population they're intended to study. Large samples can reduce sampling error, but only to an extent. Sampling error affects polls in proportion to the inverse of the square root of the sample size, meaning that if you quadruple the size of the sample you poll, you're only cutting the error in half. The other major source of error in polling comes from selection bias, which in this context means that a non-representative subset of the population has responded to your poll. For instance, if you are trying to find out what Americans think about an issue on average, but you conduct your survey on a liberal-leaning website, you're likely to come up with a distorted estimate that reflects what liberals think more than what conservatives think. Ideally, you want your selection process to resemble a completely random sampling of the true population of interest as closely as possible. A perfect selection process is never possible, because you ultimately cannot fully control who chooses to answer your questions.
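That inverse-square-root relationship is easy to verify directly. A quick sketch, assuming sampling error proportional to 1/√n:

```python
import math

def sampling_error(n):
    """Sampling error scales as 1 / sqrt(n); the constant of
    proportionality doesn't matter for comparing sample sizes."""
    return 1 / math.sqrt(n)

# Quadrupling the sample from 1,000 to 4,000 respondents
# only cuts the sampling error in half:
ratio = sampling_error(1000) / sampling_error(4000)
print(round(ratio, 6))
```

This is why pollsters rarely bother with enormous samples: going from 1,000 to 4,000 respondents quadruples the cost of the poll but only halves the sampling error, and does nothing at all about selection bias.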

Misconception 3: Political polling companies produce unbiased surveys.

Reality: Virtually all political polling outfits exhibit some degree of bias towards one political party or the other if you analyze their entire polling history, though these biases tend to be minor and can change from one election cycle to the next. FiveThirtyEight's rankings feature a column that reflects each pollster's historical bias in this respect, though it's worth noting that only one pollster — the F-rated TCJ Research — shows a historical bias of over 3 percentage points.

One of the most interesting and informative parts of FiveThirtyEight's methodology report involves how far apart the very best pollsters are from the very worst. In short, the very best polling operations tend to beat the average margin of error by about 1 percentage point, while the very worst are 2 or 3 points further off the mark than the average. That means that the best and worst pollsters are only about 3 or 4 percentage points apart in terms of accuracy.

This gap illustrates two important points. First, all individual polls should be taken with a grain of salt – the best approach for predicting the results of an election is to look at the average results of many different polls. Additionally, while 3 or 4 percentage points doesn't sound like much, it can make a big difference over time. FiveThirtyEight frequently covers sports as well as politics, and founding writer Nate Silver draws an analogy to baseball in his methodology summary. The difference between an average poll and a good poll is like the difference between a .260-average batter and a .300-average batter: not much in a given game, but very important in the long run.
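The value of averaging many polls can be seen with a quick simulation. This is a toy sketch, not anyone's actual methodology: the 52% support level, sample sizes, and poll count are all made-up illustration values, and every simulated poll here is unbiased:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

TRUE_SUPPORT = 0.52  # hypothetical "true" level of support

def run_poll(n=1000):
    """Simulate one unbiased poll: n respondents, each agreeing
    with probability TRUE_SUPPORT."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(n)) / n

# Any single poll can miss by a couple of points just from
# sampling error, but the average of many polls tends to land
# much closer to the true value.
polls = [run_poll() for _ in range(20)]
print(f"single poll:  {polls[0]:.3f}")
print(f"poll average: {sum(polls) / len(polls):.3f}")
```

In effect, averaging 20 polls of 1,000 people behaves like one poll of 20,000 – though only for sampling error; if all the polls share the same selection bias, averaging won't fix it.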

You can find a very extensive explanation of the methodology FiveThirtyEight used to create their pollster rankings here. The easiest way to determine which polling operations are the most reliable is to take a look at the letter grades FiveThirtyEight assigned to each agency in their rankings, which range from A+ at the highest to F at the lowest. Here's a quick rundown of what each of the other sortable metrics in FiveThirtyEight's analysis means:

Live Caller With Cellphones: Pollsters that call both landlines and cell phones for live interviews with respondents (as opposed to automated interviews conducted by pre-recorded tapes) provide the most reliable results. Polls that use this technique have a dot in this column.

Internet: As the Internet has become a more popular means of communication, polls conducted online have become more reliable. However, they are still somewhat less reliable than live interviews conducted via cell phone and landline. Polling outfits that use this technique have a dot in this column.

NCPP / AAPOR / Roper: This column indicates whether the pollster in question participates in any of three polling transparency organizations – the National Council on Public Polls, the American Association for Public Opinion Research Transparency Initiative, and the Roper Center for Public Opinion Research archive. In FiveThirtyEight's words, "Polling firms that do one or more of these things generally abide by industry-standard practices for disclosure, transparency and methodology and have historically had more accurate results."

Polls Analyzed: This column displays the number of polls by each outfit FiveThirtyEight has analyzed.

Simple Average Error: Most polls don't quite nail each candidate's exact share of the vote. This stat shows the average number of percentage points by which the pollster's predictions have historically missed the actual results of elections. Lower numbers indicate greater accuracy.

Races Called Correctly: This stat shows the percentage of races in which the polling outfit in question has correctly predicted the winner.

Advanced Plus / Minus: This relatively complex figure essentially indicates how well the polling operation in question has performed, relative to other pollsters who've predicted the same races. It also takes into account the typical margin of error for races of the sort the poll is covering. Again, lower numbers – including negative numbers – indicate greater accuracy.

Predictive Plus / Minus: This number features all of the ingredients of the Advanced Plus / Minus stat, but also takes into account the number of polls the pollster in question has conducted, as well as some features of the methodology they use.

Mean-Reverted Bias: This figure indicates which of the two major American political parties the polling operation has tended to favor, in percentage points.