Legal Beagle by Graeme Edgeler


The inflation adjustment of benefits; or a bill for Keith

A little while back on Twitter, I got sucked into a thread on inflation. Someone argued that the general consumer price index was a poor estimate of the effect of inflation on lower income households. Several of us argued back and forth, until someone new chimed in and pointed out that Statistics New Zealand now publishes a range of measures of increase in the cost of living, for different households.

And at least in recent years, the effects of inflation are greater on those who are in the lowest-spending households and households with beneficiaries (their annual inflation was 1.4%, while the annual inflation of the highest-spending households was 0.6%).

The recent numbers, pointed out to me by Keith Ng last week, got me thinking about benefit levels. Every year, benefit levels are automatically increased by inflation, but the inflation measure used is overall inflation, not the inflation actually experienced by households with beneficiaries.

We inflation-index benefits because they're supposed to be set at a minimum level to ensure people can continue living, and if we didn't increase benefit levels with inflation, they would fall below the level where people could survive.

Whether they are set at the right level is debatable, but thanks to Statistics New Zealand's new range of inflation indexes, we know that we haven't quite got the details right. The inflation experienced by households with beneficiaries is slowly undermining the buying power of benefits.

Indexing benefits to overall inflation made sense when we didn't have research showing the actual inflation rate for beneficiary households. Now that we know what this is, we should clearly update our laws in light of this new information.

To help see if that can happen, I have drafted a bill to amend the parts of the Social Security Act that relate to the inflation adjustment of benefits. It's neutral, so that if, in future, beneficiary households experience lower than average inflation, benefits will increase by less than overall inflation; but it provides for benefit rates (and certain asset thresholds) to be adjusted according to the Household Living-costs Price Index (beneficiaries), instead of the overall consumer price index.
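As a rough illustration of the mechanics (the weekly rate and the overall-CPI movement below are placeholder figures I have made up; the 1.4% is the beneficiary-household figure mentioned above), the bill simply changes which index's annual movement gets applied to the existing rates:

```python
# Minimal sketch of annual benefit indexation. The numbers are illustrative only:
# real rates come from the Social Security Act schedules, and real index movements
# from Statistics New Zealand's published figures.

def adjust(weekly_rate: float, annual_index_change: float) -> float:
    """Apply one year's index movement to a weekly benefit rate."""
    return round(weekly_rate * (1 + annual_index_change), 2)

weekly_rate = 300.00        # placeholder weekly benefit rate
overall_cpi = 0.008         # 0.8% - placeholder for overall CPI (the status quo measure)
hlpi_beneficiaries = 0.014  # 1.4% - HLPI, beneficiary households (figure cited above)

print(adjust(weekly_rate, overall_cpi))         # 302.40 under the current rule
print(adjust(weekly_rate, hlpi_beneficiaries))  # 304.20 under the proposed rule
```

A gap of less than a percentage point looks trivial in any single year; it is the compounding, year after year, that slowly erodes the real value of the benefit.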

I don't know if the Labour Party, or the Green Party, will ever get around to promising to reverse Ruth Richardson's benefit cuts. That's a bigger debate about priorities, but perhaps there's an MP willing to pick this up. If our MPs think that benefits are set at the right level, it shouldn't be too hard to get them to agree to keep them at that level, something Statistics New Zealand now tells us is not happening.


New Zealand rockets up the anti-Corruption ratings: a non-Spinoff investigation

New Zealand is again atop the Transparency International list of countries with the lowest perceived corruption.

With an index score of 90 out of a possible 100, the perception of non-corruption has also increased (last year, New Zealand ranked fourth, with a lowly 88). It’s not that the rest of the world is getting worse; New Zealand is apparently getting better. What is New Zealand doing right? I decided to investigate.

The Transparency International Corruption Perceptions Index takes data from a range of international surveys that, in part, include questions on corruption. There are thirteen in all, including the African Development Bank Governance Ratings, the Economist Intelligence Unit Country Risk Ratings and the World Justice Project Rule of Law Index. Corruption is a factor in governance, in economic risk, and in the rule of law, so Transparency International takes the data from the corruption parts of those rankings, and brings it together in one place.

New Zealand features in seven of the thirteen reports (Transparency International will give a country a rating if it features in at least three). Transparency International takes the information it gets from each of those surveys, turns that into a score out of 100, and then averages those scores to determine each country’s score.

So, how did New Zealand’s average score increase so drastically, up from a paltry 88, to a stellar 90?

Let’s look at each of the reports.

In the World Economic Forum Executive Opinion Survey (which surveys business executives), our corruption score fell from 92 to 90. That’s not good.

According to the IMD World Competitiveness Yearbook, a survey of business executives by IMD Business School in Switzerland, New Zealand’s corruption score increased from 93 to 95. That’s the two point increase we need to find, but all it does is cancel out the fall caused by the presumably different business executives that the World Economic Forum used.

In the Global Insight Country Risk Ratings, an assessment by in-country specialists who are part of the consulting firm IHS (now IHS Markit), New Zealand’s corruption score was unchanged: 83 in both 2014 and 2015.

New Zealand was up two points in the Economist Intelligence Unit’s Country Risk Ratings (which uses the experts in the research arm of The Economist to construct rankings), going from 88 to 90.

Unfortunately, New Zealand was down in its corruption rating in World Justice Project’s Rule of Law index, dropping from 83 to 79.

And New Zealand was down by five points in its corruption score according to risk analysis company Political Risk Services’ International Country Risk Guide, falling from 98 points to 93.

How then did New Zealand’s score increase? If you’ve been counting along, that’s only six reports, and New Zealand featured in seven. In the Bertelsmann Foundation’s Sustainable Governance Indicators report, New Zealand’s corruption score increased from 81 to 99. This is the sole reason New Zealand’s index score increased.
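As a back-of-the-envelope check, averaging the seven component scores listed above reproduces the published figures (Transparency International's full methodology also involves rounding and standard errors, so treat this as approximate):

```python
# Rough check of the averaging step, using the seven component scores listed above.
# Order: WEF, IMD, Global Insight, EIU, WJP, PRS, Bertelsmann.
scores_2015 = [92, 93, 83, 88, 83, 98, 81]
scores_2016 = [90, 95, 83, 90, 79, 93, 99]

print(round(sum(scores_2015) / len(scores_2015)))  # 88
print(round(sum(scores_2016) / len(scores_2016)))  # 90
```

Hold the Bertelsmann figure at 81 in both years and the average falls slightly, rather than rises.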

How did the Bertelsmann Foundation arrive at its conclusions? Transparency International describes it in the following way:

The Sustainable Governance Indicators (SGI) examine governance and policymaking in all OECD and EU member states in order to evaluate each country's need for, and ability to carry out, reform.

The indicators are calculated using quantitative data from international organisations and then supplemented by qualitative assessments from recognised country experts. 

What changed between 2015 and 2016 that accounts for the upgrade? The people who prepare the Bertelsmann Foundation’s report have offered their own insights:

New Zealand is one of the least corrupt countries in the world. Prevention of corruption is strongly safeguarded by such independent institutions as the auditor general and the Office of the Ombudsman. In addition, New Zealand has ratified all relevant international anti-bribery conventions of the OECD and the United Nations. All available indices confirm that New Zealand scores particularly high regarding corruption prevention, including in the private sector.

This synopsis footnotes Freedom House's Freedom in the World 2015 index. And what did that report have to say about corruption in New Zealand?

C. Functioning of Government: 12 / 12

New Zealand is one of the least corrupt countries in the world. It was ranked 2 out of 175 countries and territories surveyed in Transparency International’s 2014 Corruption Perceptions Index. However, scandals involving political donations from migrant Chinese businessmen have hurt the government’s image. In May 2014, Minister Maurice Williamson resigned amid allegations of intervention in a domestic violence case involving a Chinese businessman who had made political contributions. In another case, donations were made to the National Party by a Chinese firm, one of whose board members is the husband of the Justice Minister.

So why did Transparency International raise New Zealand’s anti-corruption score in 2016? According to the authors of the reports it relied on (and the authors of the reports they relied on), it’s because New Zealand was highly ranked in Transparency International’s rankings in 2014.

Now that's funny, and it points to a pretty big problem with all of these indexes (although at least the Transparency International index is not quite as ridiculous as the index that suggested that North Carolina is no longer a democracy). But it's not actually the reason.

The Bertelsmann Foundation is interested in sustainable governance, and corruption is only one part of its assessment. Indeed, the data Transparency International uses from its assessment comes from a single question:

To what extent are public officeholders prevented from abusing their position for private interests?

The recognised country experts were told this question addresses:

… how the state and society prevent public servants and politicians from accepting bribes by applying mechanisms to guarantee the integrity of officeholders: auditing of state spending; regulation of party financing; citizen and media access to information; accountability of officeholders (asset declarations, conflict of interest rules, codes of conduct); transparent public procurement systems; effective prosecution of corruption.

Helpfully, the experts tasked with answering this question were warned:

Note: Please be aware that the Corruption Perceptions Index (CPI) of Transparency International uses the data and information given in response to question D4.4 for their assessments. To avoid circularity of assessments, please do not base your evaluation on the CPI.

They were asked to give a score from 1 to 10, with the scores explained as follows:

  • A score of 9 or 10 signifies that “Legal, political and public integrity mechanisms effectively prevent public officeholders from abusing their positions.”
  • A score of 6-8 would show that “Most integrity mechanisms function effectively and provide disincentives for public officeholders willing to abuse their positions.”

Quantitatively, this appears to be the source of the difference in New Zealand’s Transparency International index score between 2015 and 2016.

So, presumably, that question saw New Zealand get a score of 8 in 2015 (which Transparency International converted to 81), and a 10 in 2016 (converted to 99)?

Actually, no. The Bertelsmann Foundation's assessor gave New Zealand a 10 in 2016, and a 10 in 2015, and a 10 in 2014. It's just that in 2016, this 10 was converted by Transparency International to a 99 on a 100-point scale, and in 2015, that 10 was considered to be worth 81 by Transparency International.

Transparency International explains how this works:

Standardise data sources to a scale of 0-100 where a 0 equals the highest level of perceived corruption and 100 equals the lowest level of perceived corruption. This is done by subtracting the mean of the data set and dividing by the standard deviation and results in z-scores, which are then adjusted to have a mean of approximately 45 and a standard deviation of approximately 20 so that the data set fits the CPI’s 0-100 scale. The mean and standard deviation are taken from the 2012 scores, so that the rescaled scores can be compared over time against the baseline year.
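Purely mechanically, that description amounts to something like the following sketch (the raw scores, baseline mean and baseline standard deviation are invented for illustration; they are not the actual 2012 figures for any source):

```python
# Sketch of the standardisation step quoted above: z-score a source's raw figures
# against that source's 2012 baseline, then shift to a mean of ~45 and a standard
# deviation of ~20, keeping the result on the 0-100 scale. All numbers invented.

def standardise(raw_scores, baseline_mean, baseline_sd):
    rescaled = []
    for x in raw_scores:
        z = (x - baseline_mean) / baseline_sd
        rescaled.append(max(0.0, min(100.0, 45 + 20 * z)))
    return rescaled

# A hypothetical source that rates countries from 1 to 10:
print(standardise([10, 9, 7, 4], baseline_mean=6.0, baseline_sd=1.5))
# -> [98.33..., 85.0, 58.33..., 18.33...]
```

The point to notice is that the number a raw 10 ends up as depends entirely on the baseline mean and standard deviation used for that source.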

I know a number of people much better at statistics than me occasionally read and comment here, so I won't try to explain it further. In the end, I doubt understanding the exact process will help, although I suspect it gives the lie to Transparency International's framing of the results as shocking because so many countries score below 50.

So what explains New Zealand's rise?

In 2015, when New Zealand's Bertelsmann Foundation score of 10 was standardised as an 81, Denmark's score of 10 scaled to 97. Finland and Sweden, which New Zealand beat on the Bertelsmann Foundation's measure (they received 9s), got scores of 91. The Netherlands' score of 7 got scaled to 97 (oops). And Canada's 8 got them 81, like us.

Why did New Zealand rocket up Transparency International's Anti-Corruption ratings in 2016? And why did the Netherlands fall from fifth to ninth? There were data entry errors in Transparency International's 2015 analysis. Fix that mistake, and New Zealand's index score didn't go up from 88 to 90; it went down from 91 to 90. And New Zealand would have been first equal (relying on rounding) in both years.

Assuming, of course, there aren't more errors. And ignoring that these sorts of rankings are stupid anyway.


Three Strikes five years on! Now with accurate numbers!

A month ago, I retracted a piece I wrote in 2015 looking at the first five years of the three strikes sentencing regime for serious violent crime, attempting to see how the first five years after three strikes compared to the five years before three strikes.

As detailed in that retraction, the comparisons I then made were invalid: the two sets of data I was comparing were not comparable. I now have comparable data. After my retraction (and Nikki Macdonald's excellent work in the Dominion Post) was published, the Ministry of Justice got in contact, apologised for falling short of the high standard it sets for itself, and offered to provide comparable data if I still wanted it.

The comparison between the years before and after the coming into force is less stark, but there remains a reduction in strike recidivism beyond that in strike crime generally. The extent to which this fall can be attributed to three strikes remains anyone’s guess.

In the first five years after three strikes came into effect 5248 offenders received a ‘first strike’ (that is, a “stage-1 conviction” under the three strikes sentencing regime), and 68 offenders received a ‘second strike’.

In the five years prior to three strikes, 5517 people were convicted of an offence where that conviction would have been a ‘first strike’ had three strikes been in force at the time, and 103 were convicted of an offence that would have been a ‘second strike’.

In addition, no-one was convicted of a third strike in three strikes’ first five years, while four people were convicted of what would have been third strikes in the preceding five years, and two of them also racked up what would have been fourth strikes.

The bald numbers provide no evidence that the existence of formal strike warnings has a deterrent effect, and arguments about what caused the reduction remain speculative. Though the numbers are low, the lack of third and fourth strikes could well be a consequence of incapacitation, rather than deterrence: a second strike conviction means the offender is ineligible for parole, and so results in longer time spent in prison.

In its response to me, the Ministry cautions against firm conclusions:

“Please note that although this data shows that reoffending has reduced since the Act came into force, there are several factors affecting numbers of convictions and hence people convicted over the 10 year period in question. These include changes in policing practices (for example, the Policing Excellence Scheme: www.police.govt.nz/about-us/programmes-initiatives/policing-excellence), an overall reduction in crime and a reduction in the number of people prosecuted and convicted from 2009 to 2014. This means that any reduction in offending cannot be solely attributed to the Sentencing and Parole Reform Act 2010.”

We’re now at the level where alternative explanations become more likely. Our first third strike (for offending leading to a conviction after the five-year period of comparison) is instructive. Raven Campbell was convicted of an indecent assault committed on a prison guard. A conviction for a crime committed in prison was always likely to be the first third strike. Few prisoners convicted of serious sexual or violent crimes will have had the opportunity to commit a strike offence, be convicted and sentenced (probably to prison), be paroled, be convicted of a new crime committed after the first, then serve every day of a second strike sentence (for which there is no possibility of parole) before being released to commit further serious offending, all within five years.

In addition to the Policing Excellence scheme suggested by the Ministry of Justice (in part, it created a greater Police focus on prevention), any number of other explanations are possible for reductions in recidivism rates for serious violent crime within five years of first conviction: random variation in offending levels, longer sentences for serious offending, changes to parole laws, or changes in the approach of the New Zealand Parole Board to parole decisions, among others. We’ve also seen the rollout of extended supervision orders for child sex offenders, which started in 2004 (although child sex offences make up only a small proportion of serious violent crime, and tend to have low recidivism rates).

Establishing whether something like three strikes has had an effect on recidivism rates, or offending rates, is hard. The bald numbers tell us little. It is something that could be investigated further, but that would need the type of resources I do not have, and I suspect it would also require research agreements to enable access to information that would otherwise be withheld under the OIA for reasons of personal privacy. Mostly, I’m just here to point out the problems with others’ arguments. Too many arguments about criminal justice focus on rhetorical effect, or run the risk of falling away when the crime rate changes by a little (if you’re on Twitter, follow Fordham Professor John Pfaff).

I can't take this much further, but there are some other slightly noteworthy notings from the recent OIA releases.

In explaining why it is difficult to come up with comparable data, the Ministry of Justice noted:

Under the Act, warning may be given either when guilt is established (which is usually when the conviction is entered) or at the sentencing date. However, as the time of the giving the warnings is at the discretion of the judge, it is impossible to estimate the timings of when warnings would have been given pre the implementation of the Act.

This accords with how I’ve seen the Act occasionally work in practice – warnings sometimes aren’t given upon conviction, but this isn't a correct statement of the law. Section 86B of the Sentencing Act is clear:

86B Stage-1 offence: offender given first warning

(1) When a court, on any occasion, convicts an offender of 1 or more stage-1 offences, the court must at the same time—

(a) warn the offender of the consequences if the offender is convicted of any serious violent offence committed after that warning…

When a warning is given is important, because an offence will only receive a higher-level warning (with the consequent parole and sentencing effects) if it is committed after the warning was entered. It will not be common, but there will have been instances where a defendant has avoided more serious strike consequences because a judge has exercised the discretion as to timing that the Ministry of Justice says they have.

In news from a wholly unrelated OIA request I made of the New Zealand Defence Force earlier in the year, I can confirm that former Navy Commander Philip Wiig, who was convicted at a Court Martial of an indecent assault, did not receive a first warning when he was convicted, or sentenced. Although indecent assault is classed as a “serious violent offence” under the Sentencing Act, not all parts of the Sentencing Act apply to Courts Martial, and the three strikes bit is one part that doesn’t. This would apply even if the charge faced was more serious: no conviction at a Court Martial has strike consequences. Now, I oppose three strikes, but I can’t see a particularly good argument that someone with a conviction for a strike offence following a Court Martial should be in a better position if subsequently convicted of further serious offending (whether in a civilian court, or a military one).

Far from the biggest deal, but I like my laws to be consistent where possible, even when I oppose them.


Retraction: Three Strikes Five Years On

On September 30 2015, I published a post: The Greg King Memorial Blogpost: Three Strikes Five Years On.

I retract that post. I am grateful to Dominion Post journalist Nikki Macdonald, whose story published today looking at three strikes determined that my piece was unsupportable.

The principal comparison I made in that post, between the number of second-strikes there had been during the first five years after three strikes, and the number there would have been in the five years before three strikes, had three strikes been in place five years earlier, is invalid. The pre-three-strikes data and the post-three-strikes data on which the post was based are not comparable.

The conclusions I reached in my post, as tentative as they were, are not supported by the evidence. I do not know what the correct figures are, but I have substantially overstated the number of second strikes there would have been.

The post was based on information provided to me in two OIA requests I made of the Ministry of Justice. I like to think that my request was drafted sufficiently clearly that the Ministry of Justice would know that the data I was seeking needed to provide a fair comparison; however, this appears not to have been the case.

In relevant part, my request was:

I would like to be able to compare these numbers to the previous five years (essentially looking at what the numbers would have been had the three strikes regime commenced five years earlier), so to enable a comparison, please also provide the following information:

How many convictions were there between 1 April 2005 and 31 March 2010 for a "serious violent offence" that was committed after a conviction had been entered for an earlier serious violent offence that was itself committed after 1 April 2005?

In respect of all questions I am asking only about offences committed by someone who was at least 18 at the time of their offending.

If you have any questions, please do not hesitate to get in contact. I have chosen the date ranges as they cover the same period before and after the entry into force of the three strikes regime, if your records are collated in a different way (e.g. calendar year), please feel free to respond with information in that way instead, but it does seem important to cover the same amount of time before and after the change to have a fair comparison.

In response to a question from Ms Macdonald seeking to confirm the numbers released to me were comparable, the Ministry of Justice advised her that “data was extracted based on the date the charges were laid, irrespective of when the offences were committed or when a conviction was entered.” This is obviously important when drawing a comparison with three strikes, as the date of offending and the date of conviction are fundamental to whether there are strike consequences.

I am unsure why, in response to my request, I was provided information about charges laid during the relevant time period, irrespective of when the offences were committed or when a conviction was entered. Admittedly I am biased, but at least on my reading of my request, I drew that distinction.

Clearly, before publishing my post, I ought to have double-checked with the Ministry of Justice that the information supplied was the information I had intended to seek (and still think I did seek), as it appears that, once asked to do so, they were able to confirm relatively quickly that it was not.

When publishing information, I try to be careful to ensure that it is accurate. I am sorry that you have been misled by something I have written.

Update 2 January 2017: After I published this post, the Ministry of Justice got in contact, apologised for failing to meet the high standards they set themselves, and offered to provide comparable data. A new post, with a fair comparison, is up here.