To me, they have all the believability and all the sincerity of a Rebekah Brooks solemnly promising she won’t do it again.
I for one happen to believe that Rebekah Brooks will never be involved in phone hacking again ...
I think, because of the 5% line, if drawing virtual Parliament results, you kind of need two, side by side, with some visual indication of how likely each set of results is, because the flow-on effects of NZ First's result are (I think, looking at the numbers) going to have a more significant effect on government formation than anything else. Once NZ First's effects are dealt with, the other perturbations due to error ranges are pretty minor.
Thinking about it, you need another bar, or another colour on each bar, to represent the don't-know vote.
Honestly, is there any reason why they couldn't report actual intervals instead of just simplistic percentages?
I have some sympathy here. Explaining confidence intervals in a three-minute news item is not something I’d like to try to do.
But yes, as you and Pete both note, treating the simple number as gospel is actually misleading.
I've been thinking about this a bit and I think it comes back to something I get told in my profession a lot. First we have to make it simple enough for the [stupid] public and second we have to avoid talking down to the [stupid] public.
The big problem I have with that is that, in my experience, the public is NOT stupid. Sometimes they are uninterested, which is fine; a lot of the time they don't remember or never learned the necessary background; but very rarely are they unable to understand.
However, it can take time to explain, which gets back to your three-minute problem.
But I still think the MSM could take a punt and start representing the poll data in a manner more closely linked to reality. They love flashy graphics and surely this is an occasion where a really nice graphical representation could convey the uncertainty of the outcomes. I suspect the public would respond quite positively to a graph that showed just how important their vote might really be come election day.
you kind of need two, side by side
But TV is not a static medium; you aren't limited to a graph that is fixed. You could have some seats fixed red, blue, green, or purple, and some that flick between colours slowly or rapidly depending on uncertainty, or something much cleverer with a 3D component.
I recall TV news using stacked columns to represent coalition blocs. Perhaps they could add some shading or animation to each chunk in the column to show its range? And dynamically adjust the other chunks when parties dependent on thresholds were added or removed.
you aren’t limited to a graph that is fixed
great opportunity for tv to differentiate themselves from print media, you’d think
I think, because of the 5% line, if drawing virtual Parliament results, you kind of need two, side by side, with some visual indication of how likely each set of results is, because the flow-on effects of NZ First’s result are (I think, looking at the numbers) going to have a more significant effect on government formation than anything else. Once NZ First’s effects are dealt with, the other perturbations due to error ranges are pretty minor.
More than two.
The polling data revealed to the public is only the party vote ("support"). The electorate voting isn't mentioned. In fact, in their methodology section, C-B note:
The interview introduction was changed in this poll to remove any reference to politics, and the weighting specifications were updated. This may impact comparability with the previous poll.
The data does not take into account the effects of non-voting and therefore cannot be used to predict the outcome of an election. Undecided voters, non-voters and those who refused to answer are excluded from the data on party support. The results are therefore only indicative of trends in party support, and it would be misleading to report otherwise.
So there's an unknown methodological-change effect on comparability with previous polls? We seem to have overlooked that in the rush to narratives.
And every commentator seems to suffer from amnesia when it comes to the leap from party support to seats in the House in their narratives. C-B tell you not to use their data for that. It is misleading to do so.
A few people have mentioned (in one or more of the 3 discussions on this poll-fest) that in order to translate poll results into seats many assumptions have to be made about which parties do deals not to stand a candidate in certain seats in order to get certain minor parties in. So from where I sit the 5% threshold effect is but one of your worries compared to the piling up of assumptions. You would probably need to have several different scenarios depending on minor party deals and getting to the threshold. How well does a particular scenario stand up to sensitivity analysis (i.e. changing the assumptions)? They don't even begin to report that sort of thing.
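For what it's worth, the mechanical part of that translation, before any of the threshold and deal assumptions, is the Sainte-Laguë formula New Zealand uses to turn party vote shares into seats. A minimal sketch in Python (the vote figures are invented for illustration, and the 5% threshold and electorate-seat exemptions the scenarios above turn on are deliberately left out):

```python
# Sainte-Lague highest-averages method, as used for NZ's 120 MMP seats.
# Party vote figures below are invented for illustration only.
def sainte_lague(votes, total_seats):
    """Allocate total_seats among parties by repeatedly giving the next
    seat to the party with the highest quotient votes / (2*seats + 1)."""
    seats = {party: 0 for party in votes}
    for _ in range(total_seats):
        winner = max(votes, key=lambda p: votes[p] / (2 * seats[p] + 1))
        seats[winner] += 1
    return seats

# Hypothetical party-vote shares (percent) for parties deemed over the line
votes = {"National": 47.0, "Labour": 31.0, "Greens": 11.0, "NZ First": 5.1}
result = sainte_lague(votes, 120)
```

Rerun it with NZ First removed (i.e. just under 5% and excluded) and their seats get redistributed among the remaining parties, which is exactly the two-Parliaments-side-by-side problem described above.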
Note: I'm having another read through my reference library with regards to the multiply by 1.414, and re-visiting my earlier work on how much non-response scenarios can widen confidence limits. Can't help it, even though I'm retired and not all that well...
But I still think the MSM could take a punt and start representing the poll data in a manner more closely linked to reality.
There is one thing they could, and should, be doing which is incredibly simple: include the undecided/don't know/wouldn't say vote. Making it disappear isn't "dealing with what's there". One of the most interesting things about that last poll, surely, was the rise in the undecided vote. These are the people who will, very probably, decide the election. If the undecided vote is, say, twice the difference between the 'left' and 'right' coalition blocks, that's pretty damn important, AND it makes for an interesting story.
I do not understand why it isn't done.
Balanced questions, Paddy?
Whether it helps to create a definitive yes/no meaning or not, I don't understand the point in asking questions that need such a lead-in to make sense. If regular voters aren't going to have a controlled introduction towards forming an opinion, isn't a poll which does exactly this just inviting a biased response? It doesn't seem that different from the Leading Questions scenario straight out of Yes, Prime Minister.
Whether it helps to create a definitive yes/no meaning or not, I don't understand the point in asking questions that need such a lead-in to make sense
Yeah, if you were going to make this question - and presumably the Orivida one as well - properly robust you'd make it two questions with the first being something along the lines of 'How much have you heard about David Cunliffe's financing of his campaign for leader?', and then filter out (during or afterwards) everyone who gave responses in the 'nothing' or 'a little' zone. If the news org can only have four new/unique questions per survey, though, that isn't practical.
Unfortunately this kind of graphic may be misleading and bogus for the reasons I mentioned before, although that was more in the context of the C-B methodology.
Now let’s consider what is happening with retirement of sitting members. The most dramatic effect is likely to be the retirement of Tariana Turia and Pita Sharples. Their electorate seats may be up for grabs. In the Maori seats it is all about the electorate seats and not the list seats. You can’t use “the last election results” or safely make assumptions like “things will be the same” because things have obviously changed. It takes a lot of work and very special interviewers to poll the Maori electorate seats accurately, and I suspect such Maori electorate polling doesn’t figure at all in the “party support” based seat counting. Yes there is sometimes overhang just to make things more complicated.
If somebody can reassure me by revealing the Reid methodology and assumptions for assigning two seats to the Maori party for the coming election then I’m all ears. Plus of course the seat based translation for other parties. If not I call bogus and misleading on the hand waving which turns the actual data into an infographic of seats in Parliament.
There is one thing they could, and should, be doing which is incredibly simple: include the undecided/don’t know/wouldn’t say vote. Making it disappear isn’t “dealing with what’s there”. One of the most interesting things about that last poll, surely, was the rise in the undecided vote. These are the people who will, very probably, decide the election. If the undecided vote is, say, twice the difference between the ‘left’ and ‘right’ coalition blocks, that’s pretty damn important, AND it makes for an interesting story.
I do not understand why it isn’t done.
This reminds me of a similar phenomenon which used to happen in Australia when Julia Gillard was PM. The MSM routinely reported drops in her level of support as PM to a new low. What they never mentioned was that there was one person who had a consistently lower level of support for PM than Gillard. His name was Tony Abbott. It used to amaze me that the MSM didn’t get called on that “data deletion” more often. Everybody just looked the other way.
It isn’t an accident which poll results get reported and which get unreported. It is a decision by journalists and editors. It’s that simple. That’s why we need some minimum standards for correct reporting of polls (you know: evidence based reporting) along with balance and fairness in interviews and narratives.
Yes, and we need journalists to distinguish between polling (randomly selected stratified or quota-selected, representative samples of potential voters) and surveying (self-selected, non-randomised on-line surveys). The new guidelines released by Research Association New Zealand "New Zealand Political Polling Code" are very good.
Yes, and we need journalists to distinguish between polling (randomly selected stratified or quota-selected, representative samples of potential voters) and surveying (self-selected, non-randomised on-line surveys). The new guidelines released by Research Association New Zealand “New Zealand Political Polling Code” are very good.
I’d only give the new code a bare pass. They left out the important things like Effective Sample Size, and calculating the error estimates properly. And they got the terminology backwards!
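On the effective-sample-size point: the usual formula is Kish's, n_eff = (Σw)² / Σw², which is smaller than the raw respondent count whenever the weights vary, and it is this smaller n that should feed the margin-of-error calculation. A sketch in Python with invented weights:

```python
# Kish's effective sample size: weighting a sample inflates the variance,
# so the n that belongs in the margin-of-error formula is smaller than
# the raw headcount whenever the weights are unequal.
def kish_neff(weights):
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Invented example: 1000 respondents, one under-represented group weighted 2x
weights = [1.0] * 800 + [2.0] * 200
neff = kish_neff(weights)  # 900.0 here, not 1000
```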
I was a statistician with a particular interest in survey-based research (in contrast to experimental research), and RANZ have co-opted “survey”, the term for the proper scientific discipline I was involved in, to become the new “low quality” term. And they have elevated the term “poll”, which was the “low quality” term in my time (applied specifically to political things, not to the full breadth of research which is conducted by survey).
Consider one of the key reference books in the field: Kish, Leslie 1965. Survey Sampling. It isn’t called “Polling Sampling”.
Big Brother has raised the chocolate ration again. War is Peace. Survey is Poll. Where is their sense (and understanding) of the history of the discipline?
I'm allowed to be an old curmudgeon about these things because I'm retired and no longer have any skin in the game. The younger generation can go out wearing the Emperor's New Clothes.
A 4.9% result has other critical information associated with it. There’s a +/- 1.4-point margin of error, giving a 95% confidence interval of 3.5-6.3%, which means slightly less than a fifty-fifty chance of making the 5% threshold.
Sorry, Pete. Balderdash. If a poll has a margin of error of (let's make it easy) plus or minus 2%, that means that 50 could be anywhere between 48 and 52. But it does not mean that 3 can be somewhere between 1 and 5. There is a formula, and it puts 3 somewhere between about 2.3 and 3.7. The quoted ±2 points applies only at the 50% mark; the margin shrinks as support moves away from 50%. I failed School C maths, but since doing a story on this in the 90s I have had to explain it to a dozen political editors. I will forbear to name the ones who have shrugged and said "like, whatever, dude". It should go without saying that when a political journo says a party polling 1.9% is polling below the margin of error, he or she is uttering meaningless poppycock. Feel free to run this by any statistician you know and you'll find I am right.
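The formula behind that point is the standard normal approximation: the margin of error at 95% confidence is roughly 1.96·√(p(1−p)/n), which is largest at p = 50% and shrinks toward the extremes. A quick check in Python (n = 1000 assumed for illustration):

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n (normal approximation)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

n = 1000
at_half = moe(0.50, n)   # ~3.1 points: the headline figure usually quoted
at_three = moe(0.03, n)  # ~1.1 points: far tighter for a party on 3%
```

So a party "polling below the margin of error" actually has a much smaller margin of error than the headline ±3-point figure, which is the poppycock being called out above.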
"Our poll shows National is (highly) likely to have 56 to 58 MPs."
It's only "not hard" up to the point where you're discussing National, Labour, and the Greens. After that it's really, really uncertain.
UF will probably get Dunne back into Parliament, but it's not totally certain. Likewise Act and Seymour, but even less certain since he has no history and Epsom's relationship with pegged noses and voting for the tea-drinking candidate may come to an end.
The Maori Party might vanish this election; they've been the overhang every time they've been in Parliament, but a diminishing one, and with Sharples and Turia standing down it could become a party of one, or even a party of none if Flavell gets the flick because of his association with a National government that has not, despite the Party's protestations to the contrary, been particularly good to Maori.
Then there's NZF, whose parliamentary presence on 21 September could as easily be zero as the current six.
Ranges of MPs in Parliament are good only when there's certainty that the party will be present at all. For every party except the top three that's an uncertain prospect, and this election looks to be particularly interesting for the small parties.
Peter, I may not have expressed that clearly, but with a sample size of 1000 the margin is +/- 3.1 points at 50% (at 95% confidence), so that's 46.9-53.1, with a one-in-twenty chance of being outside that range.
And it's +/- 1.34 at 4.9%, which is the example I gave, except that 1.4 is slightly inaccurate (I calculated that elsewhere and may have got something wrong), but the point I made remains.
I couldn't care less for the continual news reportage of polls... it seems only to satisfy the maths geeks, who seem 100% in agreement that the polls are (seriously) flawed and cannot a soundbite make.
Why Mr Gower insists on doing them the way he and his peers do frustrates me, as it's not entertaining, nor enlightening, nor actually newsworthy.
Sacha: That's better, yup, but you need to calculate exact binomial CIs for anything under about 1% or so (the binomial is too skewed below that for the normal approximation to kick in). This will also mean you can estimate the CI for the 0s as well.
You can do this pretty easily in R using binom.test, but first you need the number of people rather than a percentage. We can't get this exactly, as we don't know the weightings used, but a simple approximation for our purpose is to just use the rounded numbers multiplied up by the 767 respondents.
In case you're interested, here's the code + table:
You can do this pretty easily in R
ta - hence my comment about not having the right tools. :)
anything under about 1%
I did wonder at the negative 0.1% values.
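Exact (Clopper-Pearson) intervals are what prevent those negative lower bounds: they are derived from the binomial distribution itself, so they can never dip below zero. A stdlib-only Python sketch of the same calculation binom.test does in R (the count k = 4 here is illustrative, scaled from the 767 respondents mentioned above):

```python
import math

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial confidence interval for k successes out of n,
    found by bisecting the binomial CDF (no negative bounds possible)."""
    def cdf(j, p):  # P(X <= j) for X ~ Binomial(n, p)
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(j + 1))

    def solve(f, target):  # bisect for p where the decreasing f(p) hits target
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    p_lo = 0.0 if k == 0 else solve(lambda p: cdf(k - 1, p), 1 - alpha / 2)
    p_hi = 1.0 if k == n else solve(lambda p: cdf(k, p), alpha / 2)
    return p_lo, p_hi

# Illustrative: a party on roughly 0.5% support among 767 respondents
lo, hi = clopper_pearson(4, 767)  # lower bound stays above zero
```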
Looks like someone over at Te Standard wrote to the BSA... and had their complaint thrown out.
Note: I’m having another read through my reference library with regards to the multiply by 1.414
Hi Steve, I've been really enjoying your contribution. This is the reference I use for the 1.41(4) multiplier when comparing between polls.
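For anyone following along, the 1.414 is just √2: when you compare two independent polls, the variances of the two estimates add, so with equal sample sizes the margin of error on the change between polls is √2 times the single-poll margin. A small illustration in Python (equal n and the normal approximation assumed):

```python
import math

def moe(p, n, z=1.96):
    """Single-poll 95% margin of error (in points) at proportion p, sample size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

def moe_change(p1, p2, n1, n2, z=1.96):
    """Margin of error for the difference between two independent polls:
    the variances add, then take the square root."""
    return 100 * z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

single = moe(0.5, 1000)                     # ~3.1 points for one poll
change = moe_change(0.5, 0.5, 1000, 1000)   # ~4.4 points: sqrt(2) times larger
```

This is why an apparent poll-to-poll "movement" needs to be roughly 4.4 points, not 3.1, before it clears the noise.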
That's a complaint to TV3's in-house standards committee. The next step would be going to the BSA with it.