Posts by Tze Ming Mok

  • Speaker: Happy Race Relations Day,

    (Her name is Wong Liu Sheng)

  • Hard News: Obscuring the News,

    Radio NZ website: still news.

  • Hard News: Obscuring the News,

    I'd actually even be happy to read the story about the cheese, because at least it's a local non-news story, rather than a non-news story from Canada or Australia or some random town in the US.

  • Hard News: Obscuring the News,

    I've been overseas for nearly ten years and there has definitely been a recent change to the Herald online. I can't put my finger on when exactly, but within the last 2-4 months perhaps? I went from being able to see what the main NZ news stories were by going to the homepage (with the homepage/web version generally being more 'serious' and less tabloidy than the paper version when I visited New Zealand in person), to what seems to be a complete reversal of this policy. I've changed my bookmarks to bypass the homepage altogether and just go to the 'National' section to find out what is actually happening, because expats don't go to the NZ Herald website for a bunch of tabloid non-news from non-New Zealand countries. I don't know who does, to be honest.

  • Hard News: Labour's medical cannabis…,

    What's interesting is the attempt to cross-validate the expert survey DHI findings (n=12) against estimated harm costs of the drugs, most of which cannot be validated by anything else. It's sort of like: "Here's some soft and imprecise data - how does it match up to another set of soft and imprecise data?" Answer: 'reasonably'? Compared to what? There's just no yardstick, and we don't know which is the more valid measure: estimated dollars or the 'blunt instrument'. You can appreciate the effort though. Perhaps if this DHI instrument becomes widely used and generates a large amount of data (which seems to be the intent here, and which I'd guess is also the rationale for keeping it so basic, as that ensures easy take-up by frontline professionals), it could become much less soft as a measure - though of course, potentially more skewed by social attitudes than by evidence-based opinion.
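
    For what it's worth, here's a rough sketch of the kind of cross-check involved - every number below is invented for illustration, and a Spearman rank correlation is just one plausible way to compare the two soft measures, not necessarily what the report actually did:

```python
# Illustrative only: hypothetical drugs and scores, not the report's data.
from scipy.stats import spearmanr

# Hypothetical mean expert DHI ratings (0-3 scale)
dhi_scores = {"alcohol": 2.4, "methamphetamine": 2.8, "cannabis": 1.1,
              "MDMA": 0.9, "synthetic cannabinoids": 2.0}

# Hypothetical estimated social harm costs ($m per year) for the same drugs
harm_costs = {"alcohol": 7800, "methamphetamine": 1100, "cannabis": 640,
              "MDMA": 90, "synthetic cannabinoids": 350}

drugs = sorted(dhi_scores)
rho, p = spearmanr([dhi_scores[d] for d in drugs],
                   [harm_costs[d] for d in drugs])
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")

# A high rho only says the two soft measures *rank* the drugs similarly;
# it says nothing about which of them is closer to the true level of harm.
```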

  • Hard News: Labour's medical cannabis…,

    Okay what. A survey of twelve people. Fine, it's a pilot.

    I suppose he was looking more for proof of concept for the Index itself. Thus, it's admirably transparent of the report to admit that Nutt the Wise had experts rank sixteen indicators of harm for each drug in the UK research, compared to two for each drug in this NZ survey. The very low response rate might suggest that the other NZ experts thought this was a bit of a shit survey/Index too.

    Still, not completely without value as you say, but the sample size is unavoidably amusing.
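
    To put the twelve in perspective, a back-of-envelope sketch (the spread of ratings is an assumption, not a figure from the survey):

```python
# Back-of-envelope only: how wobbly is a mean rating averaged over 12 experts?
import math

n = 12        # number of expert respondents in the pilot
sd = 0.8      # assumed spread of individual ratings on a 0-3 scale (made up)
se = sd / math.sqrt(n)
half_width = 2.2 * se   # ~95% interval, t-multiplier for 11 degrees of freedom
print(f"standard error ~ {se:.2f}; 95% interval ~ +/-{half_width:.2f} on a 0-3 scale")
# With n = 12 and that spread, a mean of 1.5 could plausibly sit anywhere
# from roughly 1.0 to 2.0 - which is the difference between drugs.
```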

  • Speaker: ‘Kiwimeter’ is a methodological…,

    I see Bryce Edwards's roundup completely missed my main point, which was about self-selective sampling bias posing a challenge to the validity of the results, but oh well. #nerdlife

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to Tze Ming Mok,

    Case in point: If van der Linden had just Googled 'cognitive testing surveys' after I tweeted at him, this would have been the first hit:

    Ipsos MORI's cognitive testing policy

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to Andrew Robertson,

    I'd say if academics look down their noses at market research, it's mainly because of the goals of the research, and possibly because there's not much quality control in the overall market. However, applied social researchers, whether in academia, research institutes or independent policy evaluation (I've been in all of these areas), know that the big market research companies (Ipsos, TNS, etc) are methodologically on point and are their main competitors for the same research contracts. In the UK, where there is actually some money in policy research and evaluation, the big market research companies have their own subdivisions dedicated to social and public policy research, rather than commercial product research, and they all draw from a similar pool of social research academics and methodologists, or the students of those academics and methodologists. There's a commonality at that level, when it comes to standards.

    Vox Labs though, not really on that level - they're not YouGov, and they're not Ipsos. They're a start-up run by a PhD student - no big diss there, I'm a PhD student - but we are not talking about a large organisation with a long history of high quality market or social research and a deep understanding of both traditional and innovative survey methodology. They're a small start-up with one gimmick.

  • Speaker: ‘Kiwimeter’ is a methodological…, in reply to Andrew Robertson,

    Yo Andrew, thanks for bringing this up. Yep, as I implied in the original post, YouGov is the market leader in this kind of non-probability sample online polling, although the weighting and sampling of their panel are based on masses of data and their algorithm is a closely guarded industry secret. Maybe Vox is as good as them; maybe not. My guess (and it can only be a guess) is 'not', given that the standard is high, and that the understanding of the New Zealand population coming out of Vox, as well as their general grasp of survey research design principles, seems a bit lacking.

    Anyway, YouGov's panel has had the honour of being no worse at predicting election outcomes than probability-sampled phone polling - i.e. in the case of the most recent British election, they all got it wrong. In the post-mortem breakdown of What Went Wrong With the Polls, Britain's leading psephologist found that the only survey that produced data accurately matching the election results was the British Social Attitudes survey, run by dun-dun-DUN, my old employer the National Centre for Social Research. Because it was not only a probability sample, but a repeated-attempt in-person CAPI, i.e. an army of middle-aged ladies swarming across the country, hounding people chosen in the random sample repeatedly until they answered the door. So representative samples are not the be-all and end-all, but *how* surveys are delivered is obviously crucial - the context, the mode, the adherence to quality and targets... It's expensive and slow to do this kind of surveying - it's 'oldskool' in a time of wanting quick online fixes that are 'good enough'. But I actually think it's incredibly important - as the British election evidence and our 'Kiwimeter' acceptability problem show - to hold up some survey methods as akin to a gold standard, otherwise the baby is out the window with the bathwater.
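
    To make the self-selection point concrete, a toy simulation - all the parameters are invented and it has nothing to do with YouGov's or Vox's actual weighting algorithms: if people opt in to a panel partly *because* of the attitude being measured, weighting the panel back to the right demographic mix doesn't remove the bias, while a plain probability sample of similar size does fine.

```python
# Toy simulation (all parameters invented): why demographic weighting of an
# opt-in panel does not automatically fix self-selection bias.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical population: a yes/no attitude that differs by age group
young = rng.random(N) < 0.5
attitude = np.where(young, rng.random(N) < 0.6, rng.random(N) < 0.4)
print("true population value:         ", round(attitude.mean(), 3))

# Probability sample: everyone equally likely to be selected
prob_sample = rng.choice(N, size=1_000, replace=False)
print("probability sample estimate:   ", round(attitude[prob_sample].mean(), 3))

# Opt-in panel: within each age group, people holding the attitude are twice
# as likely to join - i.e. self-selection on the very thing being measured
join_prob = np.where(attitude, 0.02, 0.01)
panel = rng.random(N) < join_prob

# Weight the panel back to the population's age mix (simple post-stratification)
w = np.where(young[panel],
             young.mean() / young[panel].mean(),
             (1 - young.mean()) / (1 - young[panel].mean()))
print("weighted opt-in panel estimate:", round(np.average(attitude[panel], weights=w), 3))
# The weighted panel still overshoots, because the weights can only correct
# for age, not for the unobserved reason people joined in the first place.
```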
