July 29th, 2021
This is Simon, a software engineer at Datawrapper. For this week’s edition of the Weekly Chart, I took a closer look at election poll graphics.
Germany, where Datawrapper is based, has an important election coming up this September. After 16 years, Angela Merkel is about to leave office and there are two likely contenders hoping to succeed her: Annalena Baerbock of the Green Party and Armin Laschet of Angela Merkel’s center-right Christian Democratic Union (CDU/CSU).
As I write this post, German media is preparing to go into full election campaign mode, and soon there won’t be a single news program or paper that does not regularly feature election polls. Reporting on election polls, however, is often misleading. I’ll use this opportunity to take a closer look at some fundamental problems with election poll reporting and show three simple ideas to fix them.
Polls are often reported as if they were actual election results — which they are, of course, not. In Germany, election polls are commonly based on samples of 1,000 to 3,000 people that are then extrapolated using statistical methods to represent all voters in the country. This process comes with a margin of error, typically around plus or minus two to three percentage points. Most pollsters report these margins, but journalists, more often than not, ignore them completely.
But margins of error can be very important. For example, if a poll puts one candidate at 50% with a margin of error of plus or minus three percentage points, the actual share is likely — typically with 95% confidence — somewhere between 47% and 53%. That can make a huge difference, particularly in tight races. So be sure to highlight the error margin whenever you can.
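To get an intuition for where these margins come from, here is a minimal sketch of the standard formula for the margin of error of a proportion at a given confidence level. The function name and the example numbers are mine for illustration; real pollsters also account for weighting and design effects, which this sketch ignores.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a poll share p from n respondents.

    z = 1.96 corresponds to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A party polling at 50% in a sample of 1,000 people:
moe = margin_of_error(0.5, 1000)
print(f"±{moe * 100:.1f} percentage points")  # roughly ±3.1 points
```

Note how the sample size enters under a square root: quadrupling the sample only halves the margin, which is why even large polls rarely get below ±2 points.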
There are multiple organizations that produce election polls, and the results often differ significantly. For example, INSA, the pollster used by the major German tabloid BILD, has been criticized for being biased towards the far-right, while polls created by FG Wahlen, used by the public broadcaster ZDF, are often said to be more left-leaning. And sometimes, a pollster may just get it wrong, without any political bias involved. Political polls are not an exact science after all.
An obvious solution to this problem is to look at data from multiple pollsters. News organizations such as FiveThirtyEight or Politico aggregate polls from multiple sources and then use their own statistical models to make sense of that data. But even if you don’t have the know-how and resources to do that, you can still show deviating polls, for example using split bars like in the chart above.
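Even without a FiveThirtyEight-style model, a simple cross-pollster summary already tells readers more than a single poll. The sketch below uses invented pollster names and numbers purely for illustration; an unweighted average is the crudest possible aggregate, not what professional aggregators actually use.

```python
from statistics import mean

# Hypothetical recent results (percent) for one party from several pollsters
polls = {
    "Pollster A": 26.0,
    "Pollster B": 28.5,
    "Pollster C": 25.0,
}

average = mean(polls.values())
spread = max(polls.values()) - min(polls.values())
print(f"Average: {average:.1f}%, spread across pollsters: {spread:.1f} points")
```

The spread between pollsters is itself worth showing: when it exceeds the stated margins of error, readers should be skeptical of any single number.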
A single election poll reflects public opinion at a certain point in time. And public opinion can change significantly during an election campaign, as you can see in the chart above. For example, after Annalena Baerbock was announced as the Green Party candidate, the party experienced a brief upswing, but then plummeted again, probably due to negative campaigning by an industry lobby group and conservative media. At the same time, Armin Laschet's CDU/CSU seems to be slowly recovering from a long downward trend.
By showing how polls change over time, you can move beyond reporting mere numbers to analyzing and explaining the processes at work as voters make their decisions. In your charts, you can use annotations to help readers understand the poll data in the context of political events.
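When charting polls over time, a common technique is to smooth the noisy individual results with a moving average so the trend line doesn't jump with every outlier. Here is a minimal sketch with made-up weekly numbers; poll trackers typically use more sophisticated smoothing (e.g. local regression), so treat this as a toy illustration of the idea.

```python
from collections import deque

def rolling_average(values, window=5):
    """Smooth a series of poll results with a simple trailing moving average."""
    buf = deque(maxlen=window)
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

# Hypothetical weekly poll results for one party (percent)
weekly = [20, 22, 28, 27, 25, 23, 22]
smoothed = [round(x, 1) for x in rolling_average(weekly, window=3)]
print(smoothed)  # [20.0, 21.0, 23.3, 25.7, 26.7, 25.0, 23.3]
```

The smoothed line makes the brief upswing and the subsequent decline easier to see, which is exactly the kind of pattern annotations can then explain.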
Of course, none of these ideas are new and there are plenty of examples of excellent reporting on election polls. If you are interested in the topic, I suggest you have a look at the election poll trackers of The Guardian and Politico. Also, wahlrecht.de and Wahlen_DE are excellent sources for German poll data. For more info about the tools and techniques I used to create the charts in this article, have a look at the following resources:
As always, do let me know if you have feedback, suggestions, or questions. I am looking forward to hearing your ideas about election polls! You can get in touch with me via firstname.lastname@example.org, Mastodon, or Twitter.