Speaker by Various Artists

Speaker: Polling 2017: life beyond landlines?

23 Responses

  • Russell Brown,

    One thing that fascinated me was the huge difference between Reid and the Newsroom poll that had Labour on 45 and National on 30 – both of which use the SSI online panel.

    They're over different (but adjacent) time periods and the Newsroom sample is small, but it does seem to indicate that Reid is doing something differently with its SSI online component.

    Auckland • Since Nov 2006 • 22850 posts

  • Russell Brown,

    Here's the interview Patrick Gower kindly did for Media Take on how they're polling this year:

    Auckland • Since Nov 2006 • 22850 posts

  • Jeremy Andrew,

    I've had two landline calls at home from pollsters in the past 10 days. Both wanted to speak to someone up to the age of 44. Since I'm 45, they said thank you, goodbye.
    The second poll identified itself as being for TV3; I can't recall who the first one was.

    Hamiltron - City of the F… • Since Nov 2006 • 900 posts

  • Ian Dalziel,

    Polling 2017: life beyond landlines?

    Have to admit at first glance I read that as
    life beyond landmines?

    ...and thought how much more dangerous
    modern politics is nowadays...

    Christchurch • Since Dec 2006 • 7953 posts

  • Bart Janssen,

    The key problem is these are samples that try to replicate the result you would get from polling every person who can vote (the actual election). The question is -
    "Is the sample accurate?"
    Accuracy is about how close the sample is to the "real" result of polling everyone.

    The problem we have is that we get one "real" result every election, and multiple samples (polls) each giving different answers. There are two options: either the samples are "real" and the population as a whole is changing its opinion by huge jumps from poll to poll, OR the samples are not accurate.

    Margins of error let you put a fudge factor around the number, but the margins themselves can be fraught. Ultimately, if your samples don't reflect the "real" result then you're doing it wrong. (See the simulation below for a sense of scale.)

    At present you have to conclude one or more of the polling companies is doing it wrong - pretty badly wrong as well.

    It makes for easy entertainment and fills minutes on the news - but personally I don't think it adds any value to our democracy and could well be doing harm.

    Auckland • Since Nov 2006 • 4461 posts
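
    A minimal Python sketch of the scale of pure sampling error Bart is describing, assuming simple random samples of n = 1,000 from a fixed electorate; the "true" party shares are invented for illustration:

        import random

        # Invented "true" electorate shares; any fixed numbers make the point.
        true_shares = {"National": 0.43, "Labour": 0.37,
                       "Green": 0.06, "NZ First": 0.08, "Other": 0.06}

        def run_poll(n=1000):
            # One simulated poll: a simple random sample from the fixed population.
            parties = list(true_shares)
            sample = random.choices(parties, weights=list(true_shares.values()), k=n)
            return {p: sample.count(p) / n for p in parties}

        nat = [run_poll()["National"] for _ in range(10)]
        print("National across 10 simulated polls:", [f"{x:.1%}" for x in nat])
        print("Spread:", f"{max(nat) - min(nat):.1%}")
        # The spread is typically a few points (well under five); gaps of
        # ~10 points between published polls point to methodology, not chance.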

  • Euan Mason, in reply to Bart Janssen,

    At present you have to conclude one or more of the polling companies is doing it wrong - pretty badly wrong as well.

    It makes for easy entertainment and fills minutes on the news - but personally I don't think it adds any value to our democracy and could well be doing harm.

    Completely agree, Bart. It will influence how people vote. For instance, Labour voters will be tempted to vote Green to get the Greens across the line this time around. If the polls are biased, then people are misinformed about the impacts of their voting choices.

    Canterbury • Since Jul 2008 • 259 posts

  • Rob Stowell,

    With only two major polls, and this great gulf between them, we can have no clear idea where the real sentiment lies. Undecideds are also fairly high, another indeterminacy to take into account.
    Yet the reporting/commentary on 'the latest poll' is rife with hyperbolic certainty. I'm still waiting for some pundit to squash the hyperventilating by pointing out the flimsiness of the rationale behind it.
    The unseen factor - which we can only judge by the actions of the politicians - is that there's at least as much private polling by the parties as public polling by the media. You can sense it - a magnetic force pushing and pulling the campaign messages and talking points - but we don't get to see it.
    Best guess: it's bloody close. (But it might not be!)

    Whakaraupo • Since Nov 2006 • 2120 posts

  • Walter Nicholls,

    For whatever reason, phone polls have seemed to provide more credible results

    The obvious reason is that people participating in online surveys are self-selecting in the first place, and even if they start a survey they can abandon it partway through, so their opinion then counts for naught. I doubt there has been sufficient research on why people choose to participate online or not. The second reason, which you allude to: on the phone it is psychologically much harder both to give up and to lie or just give a stupid answer.

    But the real problem here is the totally non-transparent scaling & fudging to get the simplistic 'poll results'. When the opinions of 18-24 year olds are being reported based on statistical correlation with the opinions of 40-60 year olds, plus the three younger people who happened to answer Mum and Dad's phone the day the surveyor called... no wonder the results are 'volatile'. (See the weighting sketch below.)

    North Shore, Auckland • Since Jul 2008 • 42 posts
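
    To make the scaling concrete, a small Python sketch of post-stratification weighting, where each respondent is weighted by population share over sample share for their age group. All the numbers are invented:

        # Invented population shares and achieved sample counts (n = 1000).
        population_share = {"18-24": 0.12, "25-44": 0.34, "45-64": 0.33, "65+": 0.21}
        sample_counts    = {"18-24": 3,    "25-44": 180,  "45-64": 450,  "65+": 367}

        n = sum(sample_counts.values())
        for group, pop in population_share.items():
            weight = pop / (sample_counts[group] / n)
            print(f"{group}: each respondent counts as {weight:.1f} people")
        # 18-24: each respondent counts as 40.0 people, so those three answers
        # carry as much weight as 120 typical respondents; one odd answer in an
        # under-filled cell moves the headline numbers, hence the 'volatility'.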

  • Geoff Lealand,

    I am just wondering how you do a random sample online. It seems like an impossible proposition, unless you have some kind of membership/meta list to select from.
    Given there is no equivalent of the telephone directory for mobiles, even CATI is increasingly fragile, and possibly even fraudulent. Polling should be an industry in decline, rather than in the ascendant.

    Screen & Media Studies, U… • Since Oct 2007 • 2562 posts

  • Sacha,

    there is census data on landline use

    But wouldn't it be changing faster than a 5-yearly census cycle?

    Ak • Since May 2008 • 19745 posts

  • Russell Brown, in reply to Geoff Lealand,

    Given there is no equivalent of the telephone directory for mobiles, even CATI is increasingly fragile, and possibly even fraudulent. Polling should be an industry in decline, rather than in the ascendant.

    Oddly enough, the polls that (correctly) confounded received wisdom at the UK general election were from YouGov's online panel. It seems it can be done well, but it's not trivial to achieve that.

    Auckland • Since Nov 2006 • 22850 posts

  • Russell Brown, in reply to Rob Stowell,

    Best guess: it’s bloody close. (But it might not be!)

    I have seen whispers suggesting that both big parties' internal polling can be summed up as "bloody close".

    Auckland • Since Nov 2006 • 22850 posts

  • Walter Nicholls,

    both big parties' internal polling can be summed up as "bloody close".

    And I can't help feeling this is an indication, if not proof, that the polls *are* influencing voting, or at least subsequent poll results. It's like putting a hot object next to a cold one: after time they reach equilibrium, i.e. the same temperature / voter share. A sign of MMP maturity, perhaps? (Along with the major parties shedding their "extremists" into minor parties, and moderating their policies to satisfy the middle hump of the voter bell curve.)

    A bit of an aside: thanks to the MMP 5% threshold, another effect is occurring for the smaller parties. Say I and my 24,000 friends are dithering between voting Green and something else. If a poll says they're at 4%, then I might consider a vote for them wasted... and they'll drop to 3%. If they're close to 5%, then we'll all say "OMG, must help the Greens" (especially if we would like their likely coalition partner(s) to govern) and they'll go up instead; if they're at 6% or more, we might conclude they don't need our help (and hopefully vote with our conscience), which probably means they'll drop toward 5% again (see the toy model below).

    This might be why there are high-profile parties sitting at a fraction of a percent right now. Or it might just be that their sole asset is a high-profile egotist, not that this would explain the party currently at 7%.

    North Shore, Auckland • Since Jul 2008 • 42 posts
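
    A toy Python model of that threshold feedback; the cutoffs and response strengths are invented, so treat this as a caricature rather than a forecast:

        def next_poll(s, threshold=0.05, sway=0.3):
            # Strategic response to the last published poll number s.
            if s < threshold - 0.005:                 # looks wasted: supporters peel off
                return s * (1 - sway)
            if s < threshold + 0.01:                  # close: sympathisers rally
                return s + sway * (threshold + 0.01 - s)
            return s - sway * (s - threshold - 0.01)  # comfortable: helpers drift away

        for start in (0.040, 0.049, 0.070):
            path = [start]
            for _ in range(5):
                path.append(next_poll(path[-1]))
            print(f"start {start:.1%}:", " -> ".join(f"{x:.1%}" for x in path))
        # Support starting below ~4.5% collapses; support near or above the
        # threshold converges towards ~6%: the equilibrium-seeking described above.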

  • Michael Cosgrove,

    What is the actual purpose of polls other than another means to bump up TV ratings? I can understand parties conducting their own polling, particularly if their aim is to gain/retain popularity at the expense of their principles.

    I always think it's a bit silly when a poll is conducted a few months after an election and the presenter is earnestly juggling seats around to try and show how the opposition could form a government 'if an election was held today'.

    Mangere Bridge • Since Feb 2015 • 3 posts

  • Gavin White,

    @ Russell - thanks for posting the interview with Patrick Gower. He's made some really useful points, and it largely confirms my assumptions. It sounds like they're using the online survey to target demographic groups that are hard to get in phone polls, which isn't quite the same as my preferred option of targeting those who don't have landlines. It's also important to note that demographic groups that are hard to get in phone surveys also tend to be hard to get in online surveys - young males are indeed less likely to participate in landline polls, but they're also harder to get in online polls.

    Although there are two polls using the SSI panel, the quotas, weights and questionnaire design will be done by the research company - that's how two surveys using the same panel can provide very different results.

    @ Jeremy Andrew - as you probably know, the reason they were looking for under-45s was almost certainly that their over-45 quota for men in your area was already full. That illustrates the point that Patrick Gower (sort of) makes - older quotas tend to fill up first. Therefore when you have a survey spread over several weeks (like the Newshub Reid poll before this one), it's likely that the older sample will have been covered at the beginning of the cycle and the younger participants towards the end. That's why a fairly short survey period is a good idea - although you should be suspicious of surveys completed very quickly, because that often means the quotas haven't been very strict. (See the quota sketch below.)

    @Bart Janssen - I essentially agree with your comment that if polls don't reflect the actual result there's a problem with the polls, although it's only fair to point out that there can be last-minute changes, and polls can't take into account things that happen between when the poll finishes and election day. Those changes do tend to be small of course, but if a poll shows NZ First (as the classic example) 1% below their final result that doesn't necessarily mean it was wrong - it could mean that a few voters decided at the last minute that they'd like Winston to stir things up.

    @Walter Nicholls - On the other hand, I've found online polls useful for research on sensitive topics - people seem to be more willing to admit to potentially embarrassing things.

    @Geoff Lealand - Yes, online polls have to be based on big panels, and there are companies like SSI, Research Now and the Online Research Unit who make their living out of building and maintaining them. They aim to make their panels as representative as possible, and invite people to participate in surveys based on the quotas they need to fill.

    Wellington • Since Jul 2014 • 16 posts
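
    A rough Python sketch of the quota-filling dynamic Gavin describes, assuming older groups complete surveys at higher rates; the quotas, contact volumes and response rates are all invented:

        import random

        quota   = {"18-29": 200, "30-44": 250, "45-64": 300, "65+": 250}
        respond = {"18-29": 0.05, "30-44": 0.10, "45-64": 0.20, "65+": 0.30}

        filled, day_filled, day = {g: 0 for g in quota}, {}, 0
        while len(day_filled) < len(quota):
            day += 1
            for g in quota:
                if g in day_filled:
                    continue  # quota already full: further contacts are turned away
                # 300 contacts per group per day; each completes with probability respond[g]
                filled[g] += sum(random.random() < respond[g] for _ in range(300))
                if filled[g] >= quota[g]:
                    day_filled[g] = day
        print(day_filled)  # e.g. {'65+': 3, '45-64': 5, '30-44': 9, '18-29': 14}
        # Older quotas fill within days; the youngest cell takes weeks, so over a
        # long field period the young sample is gathered under late-campaign news.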

  • Bart Janssen, in reply to Gavin White,

    if a poll shows NZ First (as the classic example) 1% below their final result that doesn't necessarily mean it was wrong

    Absolutely. 1% here or there is perfectly fine with this kind of sampling.

    The problem is we are seeing 10% differences and that realistically means one or more polls are very wrong.

    Auckland • Since Nov 2006 • 4461 posts

  • Geoff Lealand, in reply to Gavin White,

    But it is still not clear how random sampling is possible.

    I tend to adopt a policy of deliberate perversity with online polls, especially when I encounter badly-worded questions or opaque agendas. It is so easy to lie online and disrupt the conceits of algorithms; e.g. according to Facebook, I was born in 1910.
    But I am generally more interested in the politics of research than in research for the purpose of politics.

    Screen & Media Studies, U… • Since Oct 2007 • 2562 posts

  • Gavin White, in reply to Bart Janssen,

    The problem is we are seeing 10% differences and that realistically means one or more polls are very wrong.

    It certainly seems that way - as I said I don't buy this argument that it's just about volatility.

    I think we might have the unfortunate situation where the two public polls represent the extremes, and the private polls are between the two (remembering that I don't have inside knowledge of that). I don't think, for example, National would have taken the risk of putting out an attack ad if their polling showed them close to governing alone.

    Wellington • Since Jul 2014 • 16 posts

  • Walter Nicholls, in reply to Bart Janssen,

    The problem is we are seeing 10% differences and that realistically means one or more polls are very wrong.

    As I'm sure has been pointed out in articles linked from PA recently, margins of error are usually reported simplistically. So the results for Nat & Lab (near to 50% of answers) are much more accurate than the results for minor parties. If you take, say, a result of "3% would vote for the Purple party", what that means is that of their sample of 1000 people, 30 people said so. I'm no statistician, but I think that means the confidence in the Purple result is more like 1/√30 than 1/√1000 – or 18%. The only way to reduce that number is to ask more people.

    If they are both asking the same questions of literally the same people at the same time, though... then the variation between them is caused by them f**king with the data, and it all depends on the accuracy of their assumptions multiplied by their competence.

    North Shore, Auckland • Since Jul 2008 • 42 posts

  • Walter Nicholls, in reply to Walter Nicholls,

    I'm no statistician

    Probably obvious, and now that I've checked my facts, I'm guessing you're not pointing out relative differences in minor party votes.
    That recent Newshub poll is looking like a complete balls-up, though. At least the Newsroom poll has the excuse of sampling a completely different set of people.
    For anyone watching, there's a good table of poll results at https://en.wikipedia.org/wiki/Opinion_polling_for_the_New_Zealand_general_election,_2017

    North Shore, Auckland • Since Jul 2008 • 42 posts

  • linger, in reply to Walter Nicholls,

    30 people. I’m no statistician, but I think that means the confidence in the Purple result is more like 1/√30 than 1/√1000 – or 18%.

    Yeah, nah (as I’ve already said: depends on whether that number comes from a low support base generalisable over the entire sample, or a higher support base from some definable limited subsample). But in practice the methodology differences seem to be swamping the theoretically-modellable uncertainties.

    Tokyo • Since Apr 2007 • 1944 posts
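
    For reference, the textbook margin of error for a poll proportion depends on the proportion as well as the sample size. A minimal Python sketch, assuming a simple random sample (which published polls, with their quotas and weights, are not, so treat these as a floor):

        from math import sqrt

        def margin_of_error(p, n, z=1.96):
            # Half-width of the 95% confidence interval for a sample proportion.
            return z * sqrt(p * (1 - p) / n)

        n = 1000
        for p in (0.50, 0.40, 0.10, 0.03):
            print(f"p = {p:.0%}: +/- {margin_of_error(p, n):.1%}")
        # p = 50%: +/- 3.1%  (the single figure usually reported)
        # p =  3%: +/- 1.1%  (smaller in absolute terms, but huge relative to a
        # 3% share, which is the sense in which minor-party numbers are far
        # less reliable than the major-party ones)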

  • Jason Kemp,

    I wonder if the most useful flow-on effect of polling is to get people off the couch. Last time, for example, lots of people simply didn't vote because they didn't think it would make a difference. Now that polling increasingly says "it is very close", perhaps more people will vote?

    Auckland • Since Nov 2006 • 368 posts

  • linger, in reply to Jason Kemp,

    That's being quite optimistic. If (as is usually the case) polls show one option far below others, the more usual effect is to discourage voters for that option. Moreover, regardless of the poll result, there may be a fatigue effect from the relentless coverage. So I'm not convinced the overall effect of polls is positive.
    (OTOH the fatigue effect may be one factor behind the increased advance voting uptake -- get it over and done with and stop paying attention to the talking heads?)

    Tokyo • Since Apr 2007 • 1944 posts
