Posts by Tze Ming Mok

  • Hard News: Labour's medical cannabis…,

    What's interesting is the attempt to cross-validate the expert survey DHI findings (n=12) against estimated harm costs of the drugs, most of which cannot be validated by anything else. It's sort of like: "Here's some soft and imprecise data - how does it match up to another set of soft and imprecise data?" Answer: 'reasonably'? Compared to what? There's just no yardstick, and we don't know which is the more valid measure: estimated dollars or the 'blunt instrument'. You can appreciate the effort, though (the general shape of the comparison is sketched below). Perhaps if this DHI instrument becomes widely used and generates a large amount of data - which seems to be the intent here, and I would guess is also the rationale behind it being so basic, since that would ensure easy uptake by frontline professionals - it could become much less soft as a measure; though of course, also more potentially skewed by social attitudes rather than evidence-based opinions.
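
    A minimal sketch of the kind of check presumably involved - rank-correlating the two soft measures. Every figure and name here is invented for illustration; this is not the report's data or method, just the shape of the comparison (Python, assuming scipy is available):

        # Hypothetical illustration only - made-up scores, not the DHI report's data.
        from scipy.stats import spearmanr

        drugs      = ["drug A", "drug B", "drug C", "drug D"]
        dhi_score  = [85, 72, 40, 25]     # hypothetical expert harm index scores
        est_cost_m = [410, 390, 60, 120]  # hypothetical estimated harm costs ($m)

        rho, p = spearmanr(dhi_score, est_cost_m)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
        # A high rho only says the two soft measures rank the drugs similarly;
        # it can't tell you which measure, if either, is closer to the truth.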

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Hard News: Labour's medical cannabis…,

    Okay what. A survey of twelve people. Fine, it's a pilot.

    I suppose he was looking more for proof of concept for the Index itself. Still, it's admirably transparent of the report to admit that Nutt the Wise had experts rank sixteen indicators of harm for each drug in the UK research, compared to two for each drug in this NZ survey. The very low response rate might suggest that the other NZ experts thought this was a bit of a shit survey/Index too.

    Still, not completely without value as you say, but the sample size is unavoidably amusing.

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…,

    I see Bryce Edwards's roundup completely missed my main point, which was about self-selective sampling bias posing a challenge to the validity of the results, but oh well. #nerdlife

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to Tze Ming Mok,

    Case in point: If van der Linden had just Googled 'cognitive testing surveys' after I tweeted at him, this would have been the first hit:

    Ipsos Mori's cognitive testing policy

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to Andrew Robertson,

    I'd say if academics look down their noses at market research, it's mainly because of the goals of the research, and possibly because there's not much quality control in the overall market. However, applied social researchers, whether in academia, research institutes or independent policy evaluation (I've been in all of these areas), know that the big market research companies (Ipsos, TNS, etc) are methodologically on point and are their main competitors for the same research contracts. In the UK, where there is actually some money in policy research and evaluation, the big market research companies have their own subdivisions dedicated to social and public policy research, rather than commercial product research, and they all draw from a similar pool of social research academics and methodologists, or the students of those academics and methodologists. There's a commonality at that level, when it comes to standards.

    Vox Labs though, not really on that level - they're not YouGov, and they're not Ipsos. They're a start-up run by a PhD student - no big diss there, I'm a PhD student - but we are not talking about a large organisation with a long history of high quality market or social research and a deep understanding of both traditional and innovative survey methodology. They're a small start-up with one gimmick.

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to Andrew Robertson,

    Yo Andrew, thanks for bringing this up. Yep, as I implied in the original post, YouGov is the market leader in this kind of non-probability-sample online polling, although the weighting and sampling of their panel are based on masses of data and their algorithm is a closely guarded industry secret. Maybe Vox is as good as them; maybe not. My guess (and it can only be a guess) is 'not', given that the standard is high, and that the understanding of the New Zealand population coming out of Vox, as well as their general grasp of the principles of survey research design, seems a bit lacking. (A rough sketch of what that kind of weighting involves is below.)
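
    For anyone wondering what 'weighting the panel' means in its most basic form, here's a heavily simplified post-stratification sketch - invented age bands and shares, and emphatically not YouGov's or Vox's actual algorithm, which does far more than this:

        # Simplified post-stratification: weight = population share / sample share.
        # All numbers invented for illustration.
        population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
        sample_share     = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}  # who opted in

        weights = {cell: round(population_share[cell] / sample_share[cell], 2)
                   for cell in population_share}
        print(weights)  # {'18-34': 0.67, '35-54': 1.0, '55+': 1.75}
        # The real work is in everything this skips: which variables define the
        # cells, their interactions, attitudinal variables, trimming extreme
        # weights - the part that needs "masses of data".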

    Anyway, YouGov's panel has had the honour of being no worse at predicting election outcomes than probability-sampled phone polling - i.e. in the case of the most recent British election, they all got it wrong. In the post-mortem breakdown of What Went Wrong With the Polls, Britain's leading psephologist found that the only survey that produced data accurately matching the election results was the British Social Attitudes survey, run by dun-dun-DUN, my old employer the National Centre for Social Research. That's because it was not only a probability sample, but a repeated-attempt in-person CAPI survey - i.e. an army of middle-aged ladies swarming across the country, hounding the people chosen in the random sample until they answered the door. So representative samples are not the be-all and end-all, but *how* surveys are delivered is obviously crucial - the context, the mode, the adherence to quality and targets... It's expensive and slow to do this kind of surveying - it's 'oldskool' in a time of wanting quick online fixes that are 'good enough'. But I actually think it's incredibly important - as the British election evidence and our 'Kiwimeter' acceptability problem show - to hold up some survey methods as akin to a gold standard, otherwise the baby is out the window with the bathwater.

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to linger,

    Final nerd point: though, as linger points out, a pilot survey does not need to be a representative sample, in this case the '6 archetypes' based on that pilot/precursor survey were accompanied within the 'Kiwimeter' by a percentage of the country each was meant to represent - with no disclaimer that there was no real representative sampling behind those percentages. These are actually misleading claims that are still being touted about. See Barry Soper in the Herald proudly proclaiming that his 'patriot' group is 36% of the country. Even worse, you get TVNZ reporting things like 'sport is the most important thing for New Zealand identity' based on the self-selective 'Kiwimeter' survey. None of this is accurate. The best you can call it is a 'good guess' based on demographic weighting (a toy illustration of why weighting alone can't rescue it is below). But of course, given that they fucked up the execution, you can't even really call data coming from the 'Kiwimeter' itself 'good' - more like a 'fatally compromised guess'.
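
    The toy illustration (entirely invented numbers, not Kiwimeter's actual figures): demographic weighting fixes how much each group counts, but not who within the group chose to turn up.

        # Invented numbers. Suppose in one demographic cell the true 'patriot'
        # rate is 60%, but the people who choose to do the quiz skew patriotic.
        true_rate_in_cell  = 0.60   # unknown to the survey
        rate_among_opt_ins = 0.75   # what the self-selected respondents show
        cell_pop_share     = 0.20   # cell's share of the population
        cell_sample_share  = 0.10   # cell's share of the self-selected sample

        weight = cell_pop_share / cell_sample_share   # 2.0 - the cell counts double
        weighted_estimate = rate_among_opt_ins        # still 0.75, not 0.60
        print(weight, weighted_estimate)
        # Weighting corrects representation *between* cells; inside the cell, the
        # estimate is still whatever the people who opted in happen to think.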

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to linger,

    You would think that would have been their first line of defence if the numbers looked right. However, I suspect (just a hunch, not based on my experience of online self-selective surveying in NZ) they would need to upweight Maori participants in the first place, for any online survey.

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to Peter Davis,

    The points made by Peter Davis are essentially what I covered in my first post on this matter: that to study the prevalence of racist attitudes we inevitably need to test racist statements in surveys. I am not suggesting here that a research ethics committee needs to intervene in or block research like this; rather that in this case the researchers did not think as hard as they normally should be expected to about the likely impact of the reduced schedule of Kiwimeter questions. This is likely because they relied on their 'usual' approach despite the delivery of the survey being far removed from the context of, say, the way the NZAVS is presented to respondents. These are duties researchers have to themselves and their own standards, not necessarily something that needs policing by an external REC.

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts

  • Speaker: ‘Kiwimeter’ is a methodological…,

    Okay wow, the Director of Vox replied, and confirmed that he does not accept the validity of any feedback from Maori so far about their decreased likelihood of filling in the survey on ideological grounds, because the feedback is "anecdotal" and "sampling on the dependent variable". He does not seem to understand that *any* qualitative evidence of groups selecting out of the survey on these grounds is a problem for the survey, *precisely because* it cannot be quantified. This is like a stereotype of a quant guy who does not understand the purpose of qualitative research in the context of survey design, i.e. that often a survey will be useless without it. Nor does he seem to want to acknowledge the effect the media coverage is also likely to have on people selecting out of the survey. He also tries to cover himself by saying that cognitive testing is not common in "academic research" - I guess this is why he hadn't heard of it. Oh dear. I'm getting the feeling that this guy is a freakin' amateur.

    SarfBank, Lunnin' • Since Nov 2006 • 150 posts
