Posts by Andrew Robertson


  • Hard News: Meanwhile back at the polls, in reply to George Darroch,

    ...this degree of difference (consistently around 5% points at the last election)

    Perhaps the standard for how precise polls should be is set too high. I'm not sure which poll you're talking about, but I was fairly impressed by how well some of them did at the last election.

    A poll is designed to measure public sentiment at the time fieldwork is carried out. It can't predict the future - it can't adjust for the weather on the day or other factors that may influence the Election Day result.

    The majority of fieldwork for all the polls was carried out at least five days before the last election. Given that, and given the media reports of the PM saying NZ First voters are going to die out, some of the differences between the polls and the election result made sense to me.

    ...how do we know that the 18 year old male we are talking to on the phone is like other 18 year old males in New Zealand.

    You don't, and you don't need to - that's the whole point of probability sampling. When respondents are selected at random, the sample as a whole reflects the population within a quantifiable margin of error, even though no individual respondent is 'typical'.
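
    A rough sketch of that idea in Python, with an invented population purely for illustration - no individual draw is 'representative', but the estimates cluster tightly around the true value:

        import random

        # Hypothetical population of 1 million voters, 30% of whom support party X
        # (all numbers invented purely for illustration).
        population = [1] * 300_000 + [0] * 700_000

        random.seed(1)
        estimates = []
        for _ in range(1000):
            sample = random.sample(population, 1000)  # simple random sample, n = 1,000
            estimates.append(sum(sample) / len(sample))

        estimates.sort()
        print("True value: 30.0%")
        print(f"Average poll estimate: {100 * sum(estimates) / len(estimates):.1f}%")
        # The middle 95% of estimates - roughly the 'maximum margin of error' quoted with polls.
        print(f"95% of estimates fall between {100 * estimates[25]:.1f}% and {100 * estimates[974]:.1f}%")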


  • Hard News: Meanwhile back at the polls, in reply to George Darroch,

    I’m becoming more and more convinced that there is serious landline bias, but I don’t believe it’s the saviour of anyone.

    Hi George

    You know, samples have been flawed since the dawn of population survey research. This isn't something new brought about by growing landline non-coverage. Landline non-coverage is just one of many potential sources of error, and in my view it's not the most important one.

    The job of a good pollster isn't to collect a perfectly representative sample. That's simply impossible. Their job is to try to understand all the reasons why that's impossible, and how those reasons influence the specific thing they are trying to measure.


  • Hard News: Meanwhile back at the polls,

    I've tried to compare the methods used by different polling companies, using info I can find in the public domain. As you can see, there are a few gaps here and there.

    Public poll methods grid

    (Although I've mentioned this here before, it's probably important to state for the purpose of transparency that I work at Colmar Brunton - but I'm not posting on behalf of the company).


  • Hard News: Meanwhile back at the polls,

    ...there’s nothing in particular wrong with the company’s methodology.

    I agree this is probably correct, but it would be nice if they actually gave a few details about their methodology somewhere.


  • Hard News: Gower Speaks, in reply to steve black,

    That’s a really good point.

    I suspect (know) that it would (seriously) annoy quite a few people not to have ‘their party’ in the graph – I get calls and emails saying exactly that about parties grouped in ‘other’. They feel as though we are disadvantaging their party by not reporting its result.

    There’s also an issue with putting the undecideds, which are calculated on a different base, in the same chart as party support – the percentages won’t stack up to 100!

    You may laugh – but 'why don't the %s add to 100!?' is one of the most common concerns or queries I get from people when I report/present survey results :)

    Maybe a separate graph or table would be a solution.
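
    To illustrate the base problem with some invented numbers (a rough Python sketch, not our actual figures):

        # Hypothetical poll of 1,000 eligible voters (all numbers invented).
        respondents = 1000
        undecided = 150                    # undecideds are usually a % of all respondents
        decided = respondents - undecided  # party support is usually a % of decided voters

        party_votes = {"Party A": 400, "Party B": 350, "Other": 100}  # sums to 850

        # Party shares on the decided base - these add to 100%...
        for party, votes in party_votes.items():
            print(f"{party}: {100 * votes / decided:.1f}%")

        # ...but the undecideds are a share of ALL respondents (a different base),
        # so stacking 15% on top of the party shares would give 115%, not 100%.
        print(f"Undecided: {100 * undecided / respondents:.1f}%")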


  • Hard News: Gower Speaks, in reply to steve black,

    When somebody writes “party support is at 0.1% +/- 0.2%” this is almost always a sign that they are either using an approximation to the true distribution which shouldn’t be used near the extremes (0% and 100%), or not sufficiently statistically literate.

    Also, there is a time factor. The methodological and statistical detail you could present for a single poll is enormous. I’ve delivered 60-100 page reports to clients, just describing fieldwork and sampling success, data treatment, and error estimates (no actual interpretation of results). Putting something like this together can take a couple of weeks.

    In the short time available for exporting, verifying, weighting, analysing, and writing up poll results, judgement calls simply must be made. I’m not making excuses here. While confidence intervals for parties polling below 1% could be interesting to a small group of people, they are not relevant to most, and have little bearing on what can be reliably inferred from a poll. I simply wouldn’t devote time to this unless there was a very good reason why it’s more important than, and should take the place of, other things to be reported on.
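
    For what it's worth, here's a rough sketch of the point in the quote above (Python, invented numbers): the usual 'plus or minus' normal approximation produces nonsense near 0%, while something like a Wilson score interval stays inside the 0-100% bounds.

        import math

        def wald_ci(p, n, z=1.96):
            """Textbook normal approximation: p +/- z * sqrt(p(1-p)/n)."""
            half = z * math.sqrt(p * (1 - p) / n)
            return p - half, p + half

        def wilson_ci(p, n, z=1.96):
            """Wilson score interval - cannot fall below 0% or above 100%."""
            centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
            half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
            return centre - half, centre + half

        # One respondent out of 1,000 names a minor party (illustrative only).
        p_hat, n = 1 / 1000, 1000

        lo, hi = wald_ci(p_hat, n)
        print(f"Wald:   {100 * lo:.2f}% to {100 * hi:.2f}%")   # lower bound is negative
        lo, hi = wilson_ci(p_hat, n)
        print(f"Wilson: {100 * lo:.2f}% to {100 * hi:.2f}%")   # stays positive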


  • Hard News: Poll Day 2: Queasy, in reply to Sacha,

    Oh I thought you were going to try doing that for the seat count.


  • Hard News: Gower Speaks, in reply to steve black,

    This was an excellent comment!

    I agree a) that non-response is a far, far bigger issue than non-coverage in telephone surveys, and b) that survey reports should include the response rate.

    The problem, though, is that I've seen some very dodgy response rate formulas used in New Zealand by reputable companies. What the industry needs is an accepted formula and a standard set of call outcome codes (like the American Association for Public Opinion Research has - http://www.aapor.org/Response_Rates_An_Overview1.htm#.UzvMLvmSySo). Once the industry has that, I'd argue that publishing response rates should become mandatory. Until then, readers won't be able to compare like for like, and the report that uses the more conservative calculation will just get slated by those who don't know the difference between the formulas.
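
    For a flavour of what a standard calculation looks like, here's a rough sketch of AAPOR's 'Response Rate 1' in Python, as I read their definitions, using invented call outcomes rather than any real poll's numbers:

        # Hypothetical final call outcomes, grouped roughly along the lines of
        # AAPOR's standard disposition codes (all numbers invented).
        complete      = 900    # I  - complete interviews
        partial       = 50     # P  - partial interviews
        refusal       = 1200   # R  - refusals and break-offs
        non_contact   = 700    # NC - eligible but never reached
        other         = 150    # O  - other eligible non-response (language, health, ...)
        unknown_hh    = 400    # UH - unknown whether the number is an eligible household
        unknown_other = 100    # UO - other cases of unknown eligibility

        # RR1 = I / ((I + P) + (R + NC + O) + (UH + UO)):
        # completes divided by everything that is, or could be, eligible.
        rr1 = complete / ((complete + partial)
                          + (refusal + non_contact + other)
                          + (unknown_hh + unknown_other))
        print(f"RR1: {100 * rr1:.1f}%")

        # A more 'generous' in-house formula that quietly drops the unknowns looks
        # better on exactly the same fieldwork - which is why a common standard matters.
        generous = complete / ((complete + partial) + (refusal + non_contact + other))
        print(f"Unknowns excluded: {100 * generous:.1f}%")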

    The other thing to mention is that you can have a very non-representative survey that achieves a very high response rate, and a very representative survey that achieves a very low response rate. Response rate is not the only indicator of sample quality.

    In surveys I've designed, I've occasionally sacrificed response rate to reduce response bias. As an example, consider the following two hypothetical surveys about same-sex marriage.

    1) A random paper-based postal survey achieves a 45% response rate through the use of reminders, a second questionnaire sent to non-respondents, and a prize draw incentive.

    2) A random telephone survey, introduced as a survey about ‘current issues’, achieves a 25% response rate.

    In this instance the telephone survey (assuming good fieldwork practices) is likely to deliver a higher quality sample. In a postal survey potential respondents are able to see all the questions before they decide to take part. Although the initial mail-out would be random, people who feel strongly about the issue are more likely to respond. For this reason the postal survey sample will not be as representative as the telephone survey sample.
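
    Here's a crude simulation of those two hypothetical surveys in Python (all numbers invented), showing how the higher response rate can still produce the more biased estimate:

        import random

        random.seed(42)

        # Hypothetical population in which 60% support the measure (invented figure).
        TRUE_SUPPORT = 0.60
        population = [random.random() < TRUE_SUPPORT for _ in range(100_000)]

        def run_survey(pop, n_contacted, respond_prob):
            """Contact people at random; respond_prob(opinion) is each person's chance of replying."""
            contacted = random.sample(pop, n_contacted)
            responses = [person for person in contacted if random.random() < respond_prob(person)]
            return len(responses) / n_contacted, sum(responses) / len(responses)

        # Postal survey: the topic is visible up front, so supporters are keener to
        # reply (55%) than non-supporters (30%) - overall response rate around 45%.
        postal_rate, postal_est = run_survey(population, 4000,
                                             lambda supports: 0.55 if supports else 0.30)

        # Telephone survey introduced as 'current issues': response (around 25%)
        # doesn't depend on views about the topic.
        phone_rate, phone_est = run_survey(population, 4000, lambda supports: 0.25)

        print(f"True support: {100 * TRUE_SUPPORT:.0f}%")
        print(f"Postal:    response rate {100 * postal_rate:.0f}%, estimate {100 * postal_est:.0f}%")
        print(f"Telephone: response rate {100 * phone_rate:.0f}%, estimate {100 * phone_est:.0f}%")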

    Disclaimer: I work for Colmar Brunton, but my views don't represent those of the company, etc, etc, etc


  • Hard News: Poll Day 2: Queasy, in reply to Sacha,

    The party vote chart. I can email you the %s and 95% CIs. (This is the same Sacha I know from Twitter, right?)


  • Hard News: Poll Day 2: Queasy, in reply to Sacha,

    Heh... Well, it's written underneath the graph. :)
