Nate Silver's Blog

November 25, 2019

Politics Podcast: A Very FiveThirtyEight Thanksgiving

By Galen Druke, Clare Malone, Nate Silver and Micah Cohen

More: Apple Podcasts | ESPN App | RSS
After five days of testimony, the House’s public impeachment hearings are over for now. In this installment of the FiveThirtyEight Politics podcast, the team takes stock of what we learned and the effect of the hearings on public opinion.


Also, the crew digs into the data to see if there was a conclusive winner of last week’s Democratic debate (spoiler alert: there was).


Finally, the team plays a Thanksgiving-themed game in which they try to guess what the candidates in the Democratic primary are thankful for.


You can listen to the episode by clicking the “play” button in the audio player above or by downloading it in iTunes, the ESPN App or your favorite podcast platform. If you are new to podcasts, learn how to listen.


The FiveThirtyEight Politics podcast publishes Mondays and Thursdays. Help new listeners discover the show by leaving us a rating and review on iTunes. Have a comment, question or suggestion for “good polling vs. bad polling”? Get in touch by email, on Twitter or in the comments.


November 21, 2019

Politics Podcast: How The Fifth Democratic Debate Could Affect The Race

By Galen Druke, Sarah Frostenson and Nate Silver

More: Apple Podcasts | ESPN App | RSS
After a day of headline-making testimony in the House impeachment inquiry, the contenders for the Democratic nomination squared off in their fifth debate. The candidates largely eschewed talk of impeachment and mostly avoided the kind of conflict that characterized previous debates. In this installment of the FiveThirtyEight Politics podcast, the crew discusses the current contours of the race and what impact the debate will have.


You can listen to the episode by clicking the “play” button in the audio player above or by downloading it in iTunes, the ESPN App or your favorite podcast platform. If you are new to podcasts, learn how to listen.


The FiveThirtyEight Politics podcast publishes Mondays and Thursdays. Help new listeners discover the show by leaving us a rating and review on iTunes. Have a comment, question or suggestion for “good polling vs. bad polling”? Get in touch by email, on Twitter or in the comments.


 


November 18, 2019

Politics Podcast: About Buttigieg … And That Iowa Poll

By Galen Druke, Micah Cohen, Sarah Frostenson and Nate Silver

More: Apple Podcasts | ESPN App | RSS
The latest Des Moines Register poll was released this weekend, and it shows South Bend, Indiana, Mayor Pete Buttigieg leading Sen. Elizabeth Warren, former Vice President Joe Biden and Sen. Bernie Sanders in Iowa. In this installment of the FiveThirtyEight Politics podcast, the crew digs into this poll and discusses what it means for the state of the Democratic primary in Iowa and nationally.


Also, with a big week of impeachment hearings ahead, the team checks in on the latest impeachment polling.


Finally, the crew picks up a discussion from last week’s podcast and looks into why someone’s level of education is important in determining how they will vote.


You can listen to the episode by clicking the “play” button in the audio player above or by downloading it in iTunes, the ESPN App or your favorite podcast platform. If you are new to podcasts, learn how to listen.


The FiveThirtyEight Politics podcast publishes Mondays and Thursdays. Help new listeners discover the show by leaving us a rating and review on iTunes. Have a comment, question or suggestion for “good polling vs. bad polling”? Get in touch by email, on Twitter or in the comments.


November 11, 2019

Politics Podcast: Could Trump Lose The Popular Vote And Win, Again?

By Galen Druke, Nate Silver and Clare Malone

More: Apple Podcasts | ESPN App | RSS
The New York Times, in collaboration with Siena College, recently released a round of battleground state polling that shows a much closer 2020 general election contest in swing states than you might expect if you just looked at national polling. In this installment of the FiveThirtyEight Politics podcast, we have not one but two Nates — Nate Silver and The New York Times’ Nate Cohn — to discuss who might have an advantage in the Electoral College in 2020.


The crew also discusses the state of the Democratic primary, including former New York Mayor Michael Bloomberg filing paperwork to join the Alabama primary.



FiveThirtyEight Politics Podcast: The latest in battleground polling



You can listen to the episode by clicking the “play” button in the audio player above or by downloading it in iTunes, the ESPN App or your favorite podcast platform. If you are new to podcasts, learn how to listen.


The FiveThirtyEight Politics podcast publishes Mondays and Thursdays. Help new listeners discover the show by leaving us a rating and review on iTunes. Have a comment, question or suggestion for “good polling vs. bad polling”? Get in touch by email, on Twitter or in the comments.


November 7, 2019

The Best NBA Players This Season, According To RAPTOR

Our ratings use play-by-play and player-tracking data to calculate the value of every player in the NBA, updated daily.

More NBA: [Player projections](https://projects.fivethirtyeight.com/2020-nba-player-projections/) | [Our 2019-20 NBA predictions](https://projects.fivethirtyeight.com/...)

[Interactive leaderboard: Who are the top five players by RAPTOR and wins above replacement?]

Note: Players eligible for leaderboards are those who have played in at least 70 percent of their team’s games and average at least 24 minutes per game. Figures displayed are rounded to the nearest tenth, but placement along the line is based on unrounded RAPTOR or WAR values.


November 6, 2019

National Polls And State Polls Show Pretty Much The Same Thing

A common refrain in coverage of the Democratic primary campaign is that the race looks much different in the early states than it does nationally, with a wider playing field, greater strength for upstart candidates such as Pete Buttigieg, and signs of weakness for the leader in national polls, Joe Biden.


The refrain is true if you look only at Iowa or only at New Hampshire, but it’s mostly not true overall. Taken collectively, polls in the four early states — Iowa, New Hampshire, Nevada and South Carolina — tell almost the same story as national polls: Biden leads, Elizabeth Warren is in second, Bernie Sanders is in third, and Buttigieg is still a fairly distant fourth.


There are some differences: Tom Steyer, who has poured millions of dollars into advertisements in the early states, is notably stronger in all four of them than in national polls. But Biden’s position is pretty much the same in the four early states taken together as in national polls. If anything, Warren and Sanders lose slightly more ground in early-state polls relative to national ones than he does.


Overall, among the nine Democrats who have qualified so far for this month’s debate, the correlation between national polls and early-state polls (weighted by the number of Democratic voters in each state) is .99, or nearly perfect. Perceptions to the contrary seem to derive from focusing on the whiter, more liberal electorates of Iowa and New Hampshire while ignoring South Carolina (a strong state for Biden) and Nevada (strong for Biden, and also Sanders’s strongest early state), which have more diverse and moderate electorates.


Let’s look at some data. Below, I’ve calculated a polling average in Iowa, New Hampshire, Nevada and South Carolina using polls in FiveThirtyEight’s polling database. Polls are weighted based on their sample size and their pollster rating (please don’t hesitate to check out our newly updated pollster ratings!). Because recent polling is a bit sparse in some of these states — and because the race doesn’t seem to be changing that much anyway — I’ve gone back and included all polls since the September debate, but polls since the October debate are weighted double. If a polling firm surveyed a state multiple times in this period, only its most recent poll is included. I’ve also calculated a national polling average using the same method. (All polling averages are through early Tuesday.)
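
To make that procedure concrete, here is a minimal Python sketch of this kind of weighted average. It is an illustration under stated assumptions, not FiveThirtyEight’s actual formula: the square-root sample-size weighting, the pollster weights and the poll records are all hypothetical stand-ins.

```python
import math
from datetime import date

# Hypothetical poll records: (pollster, end_date, sample_size, pollster_weight, pct).
# pollster_weight stands in for FiveThirtyEight's pollster rating.
polls = [
    ("Pollster A", date(2019, 10, 10), 600, 1.0, 22.0),
    ("Pollster B", date(2019, 11, 1), 800, 1.2, 19.0),
    ("Pollster A", date(2019, 11, 2), 500, 1.0, 24.0),  # supersedes A's older poll
]

OCTOBER_DEBATE = date(2019, 10, 15)

def polling_average(polls):
    # Keep only each firm's most recent poll in the window.
    latest = {}
    for pollster, end, n, rating, pct in polls:
        if pollster not in latest or end > latest[pollster][0]:
            latest[pollster] = (end, n, rating, pct)
    num = den = 0.0
    for end, n, rating, pct in latest.values():
        w = math.sqrt(n) * rating  # weight by sample size and pollster rating
        if end >= OCTOBER_DEBATE:
            w *= 2                 # polls since the October debate count double
        num += w * pct
        den += w
    return num / den

print(f"Weighted polling average: {polling_average(polls):.1f}%")
```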




National polls and early-state polls look similar

Weighted polling averages in early states vs. national polling averages for Democratic candidates who have qualified for the November debate

Candidate     Iowa    N.H.    Nev.    S.C.   Early State Avg.*  National
Biden        19.1%   21.1%   27.4%   35.7%        27.5%           28.6%
Warren        22.3    24.7    19.9    15.5        19.5            21.7
Sanders       13.7    16.1    18.4    11.0        14.2            16.7
Buttigieg     15.0     9.1     4.5     4.0         7.6             6.5
Harris         3.6     3.6     4.1     6.1         4.7             4.8
Yang           2.1     2.9     3.7     1.8         2.5             2.5
Klobuchar      3.4     3.2     1.5     1.5         2.2             2.1
Booker         1.9     1.7     1.3     2.9         2.1             1.9
Steyer         2.5     2.5     3.5     3.9         3.3             0.9

* Weighted by the number of Democrats in each state.

Source: Polls




The Iowa numbers do look quite a bit different than national polls, with Buttigieg much stronger and Biden much weaker. Interestingly, Warren isn’t doing appreciably better in Iowa than in national polls, perhaps because she loses support to Buttigieg (both are competing for white, college-educated voters). Sanders is a tiny bit weaker in Iowa than he is nationally. Steyer and Amy Klobuchar are slightly stronger.


In New Hampshire, some of the same patterns hold, although less profoundly. And Warren slightly overperforms her national numbers.


But Iowa and New Hampshire’s Democratic electorates are both quite white and quite liberal, which plays to the strength of some candidates more than others. It’s disadvantageous for Biden, who relies on a coalition of black Democrats and moderate, usually non-college-educated white Democrats.


Nevada, which is more working-class and more racially diverse, is a stronger state for Biden, as well as for Sanders, who like Biden does better among voters who did not attend college. And South Carolina, which has both a lot of African-American voters and a lot of moderate white voters, is a very strong state for Biden, as his polling average there (35.7 percent) more than doubles that of the second-place candidate, Warren.


Overall, if you weight the early-state polling averages by the number of Democrats in each state, they show Biden at 27.5 percent — similar to his national polling average during this period, which is 28.6 percent. Warren, at 19.5 percent in the early-state average, and Sanders, at 14.2 percent, are actually a bit weaker in early states overall than in national polls. And Buttigieg is at best slightly stronger in the early states; his good numbers in Iowa and to a lesser extent New Hampshire are counteracted by weak ones in Nevada and South Carolina.


The one candidate who really does appear stronger in the early states is Steyer, who averages at least 2.5 percent in each of them — but only registers at 0.9 percent in national polls.


Of course, Iowa and New Hampshire vote first — and could therefore affect voting in the subsequent states, including Nevada and South Carolina. But the media ought to be careful about implying that Biden is especially weak in the early states or that Buttigieg is especially strong in them. Those characterizations only hold if you look at Iowa and New Hampshire, which have electorates that are highly unrepresentative of the Democratic Party as a whole. Conversely, Biden does just fine in the more diverse early states.


Put differently, there isn’t much evidence that Biden does worse with voters who see him up close and personal, as often seems to be the implication of coverage that focuses heavily on Iowa and New Hampshire. Instead, he does worse with liberal, college-educated whites, who are plentiful in these states. Intentionally or not, the intense media focus on Iowa and New Hampshire serves to give more influence to liberal, college-educated whites at the expense of African-Americans, Hispanics, moderate Democrats and working-class Democrats, groups that are also key parts of the Democratic coalition.


November 5, 2019

The State Of The Polls, 2019

Much maligned for their performance in the 2016 general election — and somewhat unfairly so, since the overall accuracy of the polls was only slightly below average that year by historical standards — American election polls have been quite accurate since then. Their performance was very strong in the 2018 midterms, despite the challenge of having to poll dozens of diverse congressional districts around the country, many of which had not had a competitive election in years. Polls have also generally been accurate in the various special elections and off-year gubernatorial elections that have occurred since 2016, even though those are also often difficult races to poll.


[Related: FiveThirtyEight’s Pollster Ratings]


Does that mean everything is looking up in the industry? Well, no. We’ll introduce some complications in a moment. But I do want to re-emphasize that opening takeaway, since the media is just flatly wrong when it asserts that the polls can’t be trusted. In fact, American election polls are about as accurate as they’ve always been. That doesn’t mean polls will always identify the right winner, especially in close elections. (As a simple rule of thumb, we’ve found polls “call” the right winner 80 percent of the time, meaning they fail to do so the other 20 percent of the time — although upsets are more likely to occur in some circumstances than others.) But the rate of upsets hasn’t changed much over time.


Before we go any further, I want to direct you to the latest version of FiveThirtyEight’s pollster ratings, which we’ve updated for the first time since May 2018. They include all polls in the three weeks leading up to every U.S. House, U.S. Senate and gubernatorial general election since then, including special elections, plus a handful of polls from past years that were missing from previous versions of our database. You can find much more detail on the pollster ratings here, including all the polls used in the ratings calculation. Our presidential approval ratings, generic congressional ballot and impeachment trackers have also been updated to reflect these new ratings, although they make little difference to the topline numbers.


Now then, for those complications: The main one is simply that response rates to traditional telephone polls continue to decline. In large part because of caller-ID and call-blocking technologies, it’s simply harder than it used to be to get people to answer phone calls from people they don’t know. In addition to potentially making polls less accurate, that also makes them more expensive, since a pollster has to spend more time making calls for every completed response that it gets. As a result, the overall number of polls has begun to slightly decline. There were 532 polls in our pollster ratings database, which covers polls in the 21 days before elections occur, associated with elections on Nov. 6, 2018, which is down from 558 polls for Election Day 2014 and 692 polls for Election Day 2010.



FiveThirtyEight Politics Podcast: The races we’re watching on Election Day 2019



So why not turn to online polls or other new technologies? Well, the problem is that in recent elections, polls that use live interviewers to call both landlines and cellphones continue to outperform other methods, such as online and automated (IVR) polls. Moreover, online and IVR polls are generally more prone toward herding — that is, making methodological choices, or picking and choosing which results they publish, in ways that make their polls match other, more traditional polls. So not only are online and automated polls somewhat less accurate than live-caller polls, but they’d probably suffer a further decline in accuracy if they didn’t have live polls to herd toward.


Still, online polling is undoubtedly a large part of polling’s future — and some online polling firms are more accurate than others. Among the most prolific online pollsters, for example, YouGov stands out for being more accurate than others such as Zogby, SurveyMonkey, and Harris Insights & Analytics. And many former IVR pollsters are now migrating to hybrid methods that combine automated phone polling with internet panels. In the 2018 elections, this produced better results in some cases (e.g., SurveyUSA) than in others (e.g., Rasmussen Reports).





Polls have been quite accurate — and unbiased — in post-2016 elections

Each time we update our pollster ratings, we publish a few charts that depict the overall health of the industry — so let’s go ahead and run the numbers again. The first chart is the one we consider to be the most important: the average error of polls broken down by the type of election. A few quick methodological notes:



By average error, I mean the difference between the margin projected by the poll and the actual election result. For instance, if the poll shows the Democrat up by 1 percentage point and the Republican wins by 2 points, that would be a 3-point error.
So as not to give any one polling firm too much influence, the values in the chart are weighted based on the number of polls a particular pollster conducted for that particular type of election in that particular cycle (see the sketch after this list).
Polls that are banned by FiveThirtyEight because we know or suspect that they faked data are excluded from the analysis.
Note that I’ve included the handful of elections that have occurred so far in 2019 with the 2017-18 election cycle, even though we’ll classify them later as part of the 2019-20 cycle instead.
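
As a sketch of how such a weighted average error might be computed — a minimal illustration with invented poll records, using the square-root weighting described in the table notes below:

```python
from collections import defaultdict
from math import sqrt

# Hypothetical polls: (pollster, projected Dem margin, actual Dem margin), in points.
# A poll showing D+1 in a race the Republican wins by 2 is a 3-point error.
polls = [
    ("Pollster A", +1.0, -2.0),
    ("Pollster A", +4.0, -2.0),
    ("Pollster B", -1.0, -2.0),
]

def weighted_average_error(polls):
    by_firm = defaultdict(list)
    for firm, projected, actual in polls:
        by_firm[firm].append(abs(projected - actual))
    # Average each firm's errors first, then combine firms with weight
    # sqrt(number of polls), so no single prolific pollster dominates.
    num = den = 0.0
    for errors in by_firm.values():
        w = sqrt(len(errors))
        num += w * sum(errors) / len(errors)
        den += w
    return num / den

print(f"Weighted average error: {weighted_average_error(polls):.1f} points")
```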

OK, here’s the data:




Post-2016 polls have been accurate by historical standards

Weighted-average error of polls in final 21 days before the election, among polls in FiveThirtyEight’s Pollster Ratings database

                                            Presidential
Cycle       Governor   U.S. Senate   U.S. House   General   Primary   Combined
1998           8.2         7.4           6.8                             7.6
1999-2000      4.9         6.1           4.4         4.4       7.6       5.5
2001-02        5.2         4.9           5.4                             5.2
2003-04        6.0         5.6           5.4         3.2       7.1       4.8
2005-06        5.0         4.2           6.5                             5.3
2007-08        4.1         4.7           5.7         3.6       7.4       5.4
2009-10        4.9         4.8           6.9                             5.7
2011-12        4.9         4.7           5.1         3.6       8.9       5.2
2013-14        4.6         5.5           6.5                             5.4
2015-16        5.4         5.0           5.5         4.8      10.1       6.7
2017-19        5.3         4.3           5.0                             5.0
All years      5.4         5.3           6.1         4.0       8.7       5.8

Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular type of election in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.




As I said, the 2017-19 cycle was one of the most accurate on record for polling. The average error of 5.0 points in polls of U.S. House elections is the second-best in our database, trailing only 1999-2000. The 4.3-point error associated with U.S. Senate elections is also the second-best, slightly trailing 2005-06. And gubernatorial polls had an average error of 5.3 points, which is about average by historical standards.


Combining all different types of elections together, we find that polls from 2017 onward have been associated with an average error of 5.0 points, which is considerably better than the 6.7-point average for 2015-16, and the best in any election cycle since 2003-04.


But note that there’s just not much of an overall trajectory — upward or downward — in polling accuracy. Relatively strong cycles for the polls can be followed by relatively weak ones, and vice versa.


One more key reminder now that the Iowa caucuses are only three months away: Some types of elections are associated with considerably larger polling errors than others. In particular, presidential primaries feature polling that is often volatile at best, and downright inaccurate at worst. Overall, presidential primary polls in our database mispredict the final margin between the top two candidates by an average of 8.7 points. And the error was even worse, 10.1 points, in the 2016 primary cycle. Leads of 10 points, 15 points or sometimes more are not necessarily safe in the primaries.


We can also look at polling accuracy by simply counting up how often the candidate leading in the poll wins his or her race. This isn’t our preferred method, as it’s a bit simplistic — if a poll had the Republican ahead by 1 point and the Democrat won by 1 point, that’s a much more accurate result than if the Republican had won by 20, even though it would have incorrectly identified the winner. But across all polls in our database, the winner was “called” 79 percent of the time.
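
In code, the tally is as simple as it sounds. A minimal sketch with invented records, including the half-credit rule for ties noted under the table below (the square-root pollster weighting is omitted for brevity):

```python
# Hypothetical polls: (leader according to the poll, or "tie", actual winner).
polls = [
    ("D", "D"),
    ("R", "D"),     # miscalled
    ("tie", "R"),   # showed a tie for the lead and a leader won: half credit
    ("D", "D"),
]

def share_called_correctly(polls):
    credit = sum(
        0.5 if leader == "tie" else 1.0 if leader == winner else 0.0
        for leader, winner in polls
    )
    return credit / len(polls)

print(f"Share of polls calling the winner: {share_called_correctly(polls):.0%}")
```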




Polls “call” the winner right 79 percent of the time

Weighted-average share of polls that correctly identified the winner in final 21 days before the election, among polls in FiveThirtyEight’s Pollster Ratings database

                                            Presidential
Cycle       Governor   U.S. Senate   U.S. House   General   Primary   Combined
1998           86%         86%           57%                             78%
1999-2000      80          80            56          68%       95%       76
2001-02        87          87            77                              82
2003-04        76          76            69          78        94        79
2005-06        89          89            71                              83
2007-08        95          95            83          94        80        88
2009-10        85          85            75                              82
2011-12        90          90            70          81        63        77
2013-14        80          80            76                              77
2015-16        68          68            57          71        86        77
2017-19        77          77            78                              76
All years      82          82            72          79        83        79

Pollsters get half-credit if they show a tie for the lead and one of the leading candidates wins. Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular type of election in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.




In recent elections, the winning percentage has been slightly below the long-term average — it was 76 percent in 2017-19. But this reflects the recent uptick in close elections, along with the fact that resource-constrained pollsters tend to poll those close elections more heavily.


As basic as this analysis is, it’s essential to remember that polls are much more likely to misidentify the winner when they show a close race. Polls in our database that showed a lead of 3 percentage points or less identified the winner only 58 percent of the time — a bit better than random chance, but not much better. But polls showing a 3- to 6-point lead were right 72 percent of the time, and those with a 6- to 10-point lead were right 86 percent of the time. (Errors in races showing double-digit leads are quite rare in general elections, although they occur with some frequency in primaries. And errors in races where one candidate leads by 20 or more points are once-in-a-blue-moon types of events, regardless of the type of election.)
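
That bucketing is straightforward to reproduce. A sketch assuming a table of polled leads and hit/miss flags; the data here is invented:

```python
import pandas as pd

# Hypothetical records: the leader's polled margin (points) and whether
# the poll identified the eventual winner.
df = pd.DataFrame({
    "lead":    [1.5, 2.0, 4.0, 5.5, 8.0, 12.0, 25.0],
    "correct": [0,   1,   1,   0,   1,   1,    1],
})

bins = [0, 3, 6, 10, 15, 20, float("inf")]
labels = ["0-3", "3-6", "6-10", "10-15", "15-20", ">=20"]
df["bucket"] = pd.cut(df["lead"], bins=bins, labels=labels)

# Share of polls identifying the winner, by size of the polled lead.
print(df.groupby("bucket", observed=True)["correct"].mean())
```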




Polls often misidentify the winner in a close race

Share of polls that correctly identified the winner in final 21 days before the election, among polls in FiveThirtyEight’s Pollster Ratings database

Leading candidate’s margin   Share of polls correctly identifying winner
0-3 points                        58%
3-6 points                        72
6-10 points                       86
10-15 points                      94
15-20 points                      98
≥20 points                       >99

Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.




Another essential measure of polling accuracy is statistical bias — that is, whether the polls tend to miss in the same direction. We’re particularly interested in understanding whether polls systematically favor Democrats or Republicans. Take the polls in 2016, for instance. Although they weren’t that bad from an accuracy standpoint, the majority underestimated President Trump and Republicans running for Congress and governor, which led to underestimates of how well Trump would do in the Electoral College. Overall in the 2015-16 cycle, polls had a Democratic bias (meaning they overestimated Democrats and underestimated Republicans) of 3.0 percentage points. And that came after a 2013-14 cycle in which polls also had a Democratic bias (of 2.7 percentage points).
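
Bias is just the signed counterpart of the error metric sketched above: average the misses without taking absolute values, so misses in opposite directions cancel out. A tiny sketch with invented margins (positive numbers are Democratic):

```python
# Hypothetical polls: (projected Dem margin, actual Dem margin), in points.
polls = [(+3.0, -1.0), (+2.0, +1.0), (-1.0, -4.0)]

# Positive means the polls overestimated the Democrat.
bias = sum(proj - actual for proj, actual in polls) / len(polls)
print(f"Bias: {'D' if bias > 0 else 'R'} {abs(bias):.1f}")  # -> D 2.7
```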




Polling bias is not very consistent from cycle to cycle

Weighted-average statistical bias of polls in final 21 days of the election, among polls in FiveThirtyEight’s Pollster Ratings database

Cycle       Governor   U.S. Senate   U.S. House   Pres. General   Combined
1998         R 5.7       R 4.8         R 1.5                        R 4.2
1999-2000    D 0.6       R 2.9         D 0.9         R 2.6          R 1.8
2001-02      D 3.0       D 1.4         D 1.3                        D 2.2
2003-04      R 4.2       D 1.7         D 2.5         D 1.1          D 0.9
2005-06      D 0.3       R 1.3         D 0.2                        R 0.1
2007-08      D 0.5       D 0.8         D 1.0         D 1.1          D 1.0
2009-10                  R 0.7         D 1.7                        D 0.6
2011-12      R 1.3       R 3.3         R 2.6         R 2.5          R 2.6
2013-14      D 2.3       D 2.5         D 3.7                        D 2.7
2015-16      D 3.3       D 2.8         D 3.7         D 3.1          D 3.0
2017-19      R 0.9       D 0.1         R 0.3                        R 0.3
All years    D 0.3       D 0.1         D 0.7         D 0.2          D 0.3

Bias is calculated only for elections where the top two finishers were a Republican and Democrat. Therefore, it is not calculated for presidential primaries. Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular type of election in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.




In 2017-19, however, polls had essentially no partisan bias, and to the extent there was one, it was a very slight bias toward Republicans (0.3 percentage points). And that’s been the long-term pattern: Whatever bias there is in one batch of election polls doesn’t tend to persist from one cycle to the next. The Republican bias in the polls in 2011-12, for instance, which tended to underestimate then-President Obama’s re-election margins, was followed by two cycles of Democratic bias in 2013-14 and 2015-16, as previously mentioned. There is simply not much point in trying to guess the direction of poll bias ahead of time; if anything, it often seems to go against what the conventional wisdom expects. Instead, you should always be prepared for the possibility of systematic polling errors of several percentage points in either direction.





Which pollsters have been most accurate in recent elections?

Although it can be dangerous to put too much stock in the performance of a pollster in a single election cycle — it takes dozens of polls to reliably assess a pollster’s accuracy — it’s nonetheless worth briefly remarking on the recent performance of some of the more prolific ones. Below, you’ll find the average error, statistical bias and a calculation we call Advanced Plus-Minus (basically, how the pollster’s average error compares to other pollsters’ in the same election), for pollsters with at least five polls in our database for the 2017-19 cycle. Note that negative Advanced Plus-Minus scores are good; they indicate that a firm’s polls were more accurate than others in the same races.




How prolific pollsters have fared in recent elections

Advanced Plus-Minus scores and other metrics for pollsters who conducted at least five surveys for the 2017-19 cycle, in FiveThirtyEight’s Pollster Ratings database

Pollster                                                 Methodology            No. of Polls   Avg. Error   Bias    Adv. Plus-Minus
ABC News/Washington Post                                 Live                         5            1.7      R 0.9       -4.1
Cygnal                                                   IVR/Online/Live              9            2.5      D 1.9       -3.7
Mason-Dixon Polling & Research Inc.                      Live                         7            2.8      R 1.0       -3.0
Monmouth University                                      Live                         9            3.1      R 1.7       -2.9
Suffolk University                                       Live                         7            2.7      R 1.3       -2.7
Research Co.                                             Online                      20            3.8      R 1.1       -2.3
Mitchell Research & Communications                       IVR/Online                   6            2.5      R 0.9       -2.0
Siena College/New York Times Upshot                      Live                        47            3.6      R 1.3       -1.7
Emerson College                                          IVR/Online                  66            4.2      R 0.5       -1.5
Marist College                                           Live                        13            4.4      D 2.7       -1.1
Landmark Communications                                  IVR/Online/Live              5            4.1      D 3.9       -1.0
YouGov                                                   Online                      12            3.1      R 1.7       -1.0
SurveyUSA                                                IVR/Online/Live             13            4.1      R 0.7       -1.0
Gravis Marketing                                         IVR/Online/Live             25            3.8      D 0.6       -0.8
Harris Insights & Analytics                              Online                      34            3.7      R 0.2       -0.2
Vox Populi Polling                                       IVR/Online                   7            4.5      D 3.6        0.0
St. Pete Polls                                           IVR                         10            2.3      D 1.7        0.0
Fox News/Anderson Robbins Research/Shaw & Co. Research   Live                        10            4.7      D 2.7        0.0
Remington Research Group                                 IVR/Live                     5            4.1      D 3.1        0.3
Change Research                                          Online                      57            5.5      D 1.5        0.6
Quinnipiac University                                    Live                        13            4.3      D 2.7        0.7
JMC Analytics/Bold Blue Campaigns                        Live                         5            6.7      R 5.5        0.9
SSRS                                                     Live                        11            5.2      D 4.3        0.9
Optimus                                                  IVR/Online/Live/Text         5            6.8      R 6.8        0.9
Strategic Research Associates                            Live                         5            5.0      D 1.9        1.0
Susquehanna Polling & Research Inc.                      IVR/Live                     6            8.6      D 8.0        1.4
Trafalgar Group                                          IVR/Online/Live             21            4.6      R 1.9        1.6
Ipsos                                                    Online                      10            5.3      R 3.0        2.2
Rasmussen Reports/Pulse Opinion Research                 IVR/Online                   5            6.1      R 5.8        3.2
Carroll Strategies                                       IVR                          5            9.9      R 9.9        3.4
Dixie Strategies                                         IVR/Live                     5            8.4      R 5.9        3.8




Four of the top five and six of the 10 best pollsters according to this metric relied exclusively on live-caller telephone polls. In exciting news for fans of innovative polling, the list includes polls from our friends at The New York Times’s Upshot, who launched an extremely successful and accurate polling collaboration with Siena College in 2016. (It also includes ABC News, FiveThirtyEight’s corporate parent, which usually conducts its polls jointly with The Washington Post.)


Conversely, five of the six worst-performing pollsters — including firms such as Carroll Strategies, Dixie Strategies and Rasmussen Reports/Pulse Opinion Research — were IVR pollsters (sometimes in conjunction with other methods), several of which had strong Republican leans in 2017-19. Some IVR pollsters did perform reasonably well in 2015-16, a cycle in which most pollsters underestimated Republicans. In retrospect, though, that may have been a case of two wrongs making a right; IVR polls tend to be Republican-leaning, so they’ll look good in years when Republicans beat their polls, but they’ll often be among the worst polls otherwise.


Indeed, aggregating the pollsters by methodology confirms that live-caller polls continue to be the most accurate. Below are the aggregate scores for the three major categories of polls — live caller, online and IVR — by our Advanced Plus-Minus metric, average error and statistical bias.




Live-caller polls have been most accurate in recent elections

Advanced Plus-Minus scores and other metrics for pollsters who conducted at least five surveys for the 2017-19 cycle, in FiveThirtyEight’s Pollster Ratings database

Methodology                 No. of Polls   Avg. Error   Bias    Adv. Plus-Minus
Live caller w/cell              356            4.9      R 0.5       -0.3
Live caller w/cell only         210            4.4      R 0.2       -0.8
Live caller w/cell hybrid       146            5.5      R 0.9        0.4
IVR                             239            5.2      R 1.0        0.3
IVR only                         19            6.9      R 5.4        2.4
IVR hybrid                      220            5.0      R 0.4        0.1
Online or text                  358            5.0      R 0.4        0.2
Online or text only             154            5.0      D 0.4        0.5
Online or text hybrid           204            5.0      R 0.8        0.1
All polls                       628            5.0      R 0.3        0.0

Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular type of election in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.




The differences are clearest when looking at pollsters that exclusively used one method. Polls that exclusively used live callers (including calling cellphones) had an average error of 4.4 percentage points in the 2017-19 cycle, as compared to 5.0 points for polls exclusively conducted online or via text message, and 6.9 points for polls that exclusively used IVR. (Pure IVR polls, however, are now quite rare. Polls that used a hybrid of IVR and other methods did better, with an average error of 5.0 percentage points.)


Polling firms that are members of professional polling organizations that push for transparency and other best practices also continue to outperform those that aren’t. In particular, our pollster ratings give credit to firms that support the American Association for Public Opinion Research (AAPOR) Transparency Initiative, belong to the National Council on Public Polls (NCPP), or contribute data to the Roper Center archive. Pollsters that are part of one or more of these initiatives had an average error of 4.3 percentage points in the 2017-19 cycle, as compared to 5.4 percentage points for those that aren’t.





Another way to detect herding

Our pollster ratings have also long included an adjustment to account for the fact that online and automated polls tend to perform better when there are high-quality polls in the field. We’ve confirmed that this still applies. For instance, polls that are conducted online or via IVR are about 0.4 percentage points more accurate based on our Advanced Plus-Minus metric when they are preceded by “gold standard” polls in the same race. (“Gold standard” is the term we use for pollsters that are exclusively live caller with cellphones and are also AAPOR/NCPP/Roper members.) Live-caller polls do not exhibit the same pattern, however; their Advanced Plus-Minus score is unaffected by the existence of an earlier “gold standard” poll in the field. This is probably the result of herding; some of the lower-quality pollsters may be doing the equivalent of peeking at a more studious classmate’s answers on a math test. In fact, these differences are especially strong in recent elections, suggesting that herding has become more of a problem.


There is also a second, more direct method to detect herding, which we’re also now applying in our pollster ratings. Namely — as described in this story — there is a minimum distance that a poll should be from the average of previous polls based on sampling error alone. For instance, even if you knew that a candidate was ahead 48-41 in a particular race — a 7-point lead — you’d miss that margin by an average of about 5 percentage points in a 600-person poll, because sampling only 600 people rather than the entire population introduces sampling error. That is, some polls would inevitably show a 12-point lead and some would show a 2-point lead, instead of all the polls being bunched together at exactly a 6- or 7- or 8-point lead. If the polls are very tightly bunched together, that is not a good thing — you should be suspicious of herding, which can sometimes yield embarrassing outcomes where every poll gets the answer wrong.
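
A small simulation illustrates that floor. This is a sketch of the idea, not FiveThirtyEight’s exact herding test: the 48-41 race and 600-person samples come from the example above, and raw multinomial sampling gives a somewhat smaller average miss than the 5-point figure quoted, since real polls layer design effects and weighting on top of pure sampling error.

```python
import random

random.seed(0)
N, TRIALS = 600, 5_000
TRUE = {"A": 0.48, "B": 0.41, "other": 0.11}  # the "known" state of the race

def polled_margin():
    # Poll N random voters and return the A-minus-B margin, in points.
    sample = random.choices(list(TRUE), weights=list(TRUE.values()), k=N)
    return 100 * (sample.count("A") - sample.count("B")) / N

true_margin = 100 * (TRUE["A"] - TRUE["B"])
avg_miss = sum(abs(polled_margin() - true_margin) for _ in range(TRIALS)) / TRIALS
print(f"Average miss from sampling error alone: {avg_miss:.1f} points")
# Published polls bunched much more tightly around the running average
# than this floor allows are a red flag for herding.
```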


Of course, there are other complications in the real world. There’s no guarantee that the race will have been static since other pollsters surveyed it; one candidate may be losing or gaining ground. And pollsters have healthy methodological disagreements with one another, so the same race may look different depending on what assumptions they make about turnout and so forth. But these factors should tend to increase the degree to which polls differ from one another, not produce herding.


But our herding penalty only applies if pollsters show too little variation from the average of previous polls of the race based on sampling error alone. If a pollster is publishing all its data without being influenced by other pollsters — including its supposed outliers — it should be fairly easy to avoid this penalty over the long run.


Many polls are closer to the average of previous polls than they “should” be, however. Unlike the previous type of herding I described, which is concentrated among lower-quality pollsters who are essentially trying to draft off their neighbors to get better results, this tendency appears among some higher-quality pollsters as well. In some cases, we suspect, this is because, late in the race, a pollster doesn’t want to deal with the media firestorm that would inevitably ensue if it published a poll that appears to be an outlier. In other cases, frankly, we suspect that pollsters rather explicitly look at the FiveThirtyEight or RealClearPolitics polling average and attempt to match it.


In any event, our formula now detects this type of herding, and it results in a lower pollster rating when we catch it. Our pollster ratings spreadsheet now calculates each pollster’s Average Distance from Polling Average, or ADPA, which is how much the pollster’s average poll differs from the average of previous polls of that race. Among pollsters with at least 15 polls, the largest herding penalties are as follows:




Which pollsters show the clearest signs of herding?

Pollster                       Herding Penalty
Research Co.                        1.17
Muhlenberg College                  0.84
Angus Reid Global                   0.82
Grove Insight                       0.71
NBC News/Wall Street Journal        0.53

The list is limited to pollsters with at least 15 polls for which an average of previous polls can be computed.







Other methodological changes

Unless you’re really into details — or you’re a pollster! — you probably aren’t going to care about these … but there are a few other methodological changes we’ve made to our pollster ratings this year.



Previously, pollsters got a bonus if they exclusively conducted their polls via live callers with cellphones, since these have been the most accurate polls over time. But this year, if a pollster uses live-caller-with-cellphone polls in combination with other methodologies, we now give them partial credit for the live-caller bonus. Even though these hybrid polls did not have a particularly good performance in 2017-19, they’ve been reasonably strong in the long run; also, we’re bowing to the reality that many formerly live pollsters are increasingly incorporating online or other methods into their repertoire.
In determining whether a poll’s result fell into or outside the margin of error, a calculation that’s available in our spreadsheet, we now use a more sophisticated margin of error formula that accounts for the percentages of the top two candidates and not just the distance between them (see the sketch after this list). The margin of error is smaller in lopsided races, e.g., when one candidate leads 70-20.
Our Predictive Plus-Minus scores and pollster letter grades are based on a combination of a pollster’s empirical performance (how accurate it has been in the past) and its methodological characteristics. The more polls a firm has conducted, the more the formula weights its performance rather than its methodological prior. In assigning the weights, our formula now considers how recent a particular firm’s polls were. In other words, if a pollster has conducted a lot of surveys recently, its empirical accuracy will be more heavily weighted. But if most of its polling is in the distant past, its pollster rating will gradually revert toward the mean based on its methodology.
For pollsters with a relatively small sample of polling, we now show a provisional rating rather than a precise letter grade. (An “A/B” provisional rating means that the pollster has shown strong initial results, a “B/C” rating means it has average initial results, and a “C/D” rating means below-average initial results.) It now takes roughly 20 recent polls (or a larger number of older polls) for a pollster to get a precise pollster rating.
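
The post doesn’t spell out the exact margin-of-error formula, but under the standard multinomial model the variance of the polled lead depends on the two candidates’ combined share, not just the gap between them — which is why a lopsided 70-20 race has a tighter margin of error than a 50-40 one. A sketch under that assumption:

```python
from math import sqrt

def margin_of_error(p1: float, p2: float, n: int, z: float = 1.96) -> float:
    # 95 percent margin of error on the polled (p1 - p2) lead under a
    # multinomial model: Var(p1 - p2) = (p1 + p2 - (p1 - p2)^2) / n.
    return 100 * z * sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

# The same 600-person poll: the lopsided race has the smaller margin of error.
print(f"70-20 race: +/-{margin_of_error(0.70, 0.20, 600):.1f} points")  # ~6.5
print(f"50-40 race: +/-{margin_of_error(0.50, 0.40, 600):.1f} points")  # ~7.5
```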

That’s all for now! Once again, you can find an interactive version of the pollster ratings here, and a link with further detail on them here. And if you have questions about the pollster ratings, you can always reach us here. Good luck to pollsters on having a strong performance in the primaries.

