OnPoint by Keith Ng


The Big Guns: Truecrypt and Tails


One of the simplest ways to encrypt stuff is with Truecrypt. It runs on Windows, Macs and Linux. It requires no installation, so you can run it off a thumbdrive. You can use it to create an encrypted container file, or to encrypt an entire drive.

Encryption containers are quite handy. When decrypted, they work just like a normal drive. But once you lock it, the whole drive disappears and becomes an encrypted file which can't be read without the password. Apart from that, it's like any other file - you can copy it wherever, put it on a USB stick, or even put it on Dropbox so that it syncs between your computers.

You can also encrypt entire drives. The whole drive will look like random data, and be unreadable without the password. Of course, anyone looking at the drive will see that it contains random data, and conclude that it must be an encrypted drive. This comes back to the problem I mentioned last time - that you can be compelled to give up your password, so what's the point of encryption?

This is where Truecrypt works its magic. You can create a hidden volume with Truecrypt - that is, a hidden encrypted drive *inside* another encrypted drive. A drive with hidden encryption (i.e. two layers, one hidden beneath the other) looks exactly the same as a drive with normal encryption (i.e. one layer). It's easy to prove that there is an encrypted layer, and you can be compelled to give up *a* password - but it is impossible to prove the existence of the second layer, so they can't compel you to give up the password to it.

This is, of course, some tricky shit. If you're going to go down this path, you really need to read the full documentation.


Encryption is maths. You can't hack maths, but you can hack computers. Rather than trying to break your encryption lock, it's much more likely that any adversary will just try to steal your key by compromising your computer. There's also a decent chance that your computer is compromised not because you're a target, but just because you clicked on the wrong thing at some point in the past.

One catch-all solution is to bypass your computer's operating system altogether. The A in TAILS stands for "amnesiac". The whole operating system boots from a USB drive and runs entirely in RAM. Nothing is saved - this means you can click on all the viruses and trojans in the world, but when you reboot, you start with a clean system again.

Tails and Truecrypt together make for a very powerful combination. Your data is encrypted by Truecrypt, and your password only ever goes between you and the temporary operating system, which ceases to exist when you turn the computer off. The only ways, in this system, to crack your data are to plant a physical bug in your computer, to install a camera over your keyboard, or to beat/coerce the password out of you.

Tails also comes packaged with Tor Browser, which uses Tor to redirect your traffic and mask where it's coming from. It also comes with its own PGP tools, which you can use for encryption/decryption on the fly. Truecrypt is turned off by default, but if you want to use it, you can just type in "truecrypt" on boot (or carry the file in the drive containing Tails).


I think that's it for now. Feel free to post links to your favourite tools in the discussion below. Keep in mind that the focus here is on practical solutions for users with minimal expertise fighting real world, resource constrained adversaries. So let's not go overboard eh.

In other news, the Lavabit story is out. And jesus. It's a depressing read. If you think the state shouldn't be omniscient, please help the cause by donating to his defence fund. If you liked my security stuff so far and/or found it useful, please do me a favour by donating to his defence fund, so I can feel a little less nauseous about where we're headed.


Legal Context

Before we get onto Truecrypt, Tor and Tails, let's look at the legal context for using those things. Many thanks to Steven Price and John Edwards for their help and advice on this section.

The main protection for journalists comes from s68 of the Evidence Act, which says that journalists and their employers cannot be compelled to give evidence, answer questions or produce documents which would reveal the identity of confidential sources, unless:

A judge decides that the public interest in naming the source outweighs the potential harm to the source and the public interest in keeping sources confidential

Note that s68 of the Evidence Act is quite limited:

  • It does not cover third parties (such as cellphone or email providers), only journalists and their employers
  • It does not cover cases which don't involve confidential sources (though s69 provides more general protection for confidential information)

When journalistic privilege might apply, Police have to give the media outlet a reasonable opportunity to claim privilege. It is therefore much easier for investigators to target known service providers (i.e. phone numbers and emails associated with the journalist) rather than try to seize information directly from a journalist.


Surveillance vs Search

Data in transit (e.g. phone calls, internet traffic) can only be legally captured with a surveillance warrant, which can only be obtained for offences which are punishable by sentences of 7 years or more, or for offences related to restricted weapons.

Data at rest (e.g. text messages, logs of phone calls, anything on your computer) can be obtained with a search warrant or production order, which can be obtained for any imprisonable offence.

By definition, it is only possible to retrospectively obtain data at rest. This means if an investigation is only likely to occur after a story becomes public, only data at rest will be available.

As an example, in the Bradley Ambrose (“Tea Pot Tapes”) case, even if the Police were interested in ongoing communications, the offence he was being investigated for did not qualify for a surveillance warrant; even if they could get a surveillance warrant, it would've been impossible for them to retroactively capture the period they were interested in. They were, however, able to obtain his texts and call records directly from his phone provider.


Norwich Pharmacal Orders

While private individuals can't get warrants, they can apply for a Norwich Pharmacal Order. These are pre-trial or interim orders against a third party, which can be used to reveal the identity of a person to allow legal action to proceed.

For example, a plaintiff might file papers against an unknown person in court, then apply for an NPO against a third party (e.g. an ISP or telco) for information that would reveal that person's identity. That third party would have a legal obligation to provide that information (or be in contempt of court), and they would not be able to claim privilege, since they would not necessarily know there was a journalist-source relationship there.


Encryption Passwords

Hard disk encryption, such as the operating system's default encryption, would protect you against hackers and thieves, but it may not help you against law enforcement.

Under s178 of the Search & Surveillance Act, anyone who:

Fails without reasonable excuse to assist a person exercising a search power under section 130(1) when requested to do so (relates to searches of computer systems or data storage devices - a person may be required to assist with access to data).

...will be committing an offence, and can face up to 3 months of imprisonment.

Nor would the right against self-incrimination help you or your source. Self-incrimination doesn't protect against search and seizure, and the Police Search Manual specifically states:

A specified person may not be required to give any information tending to incriminate themselves. However, this does not prevent you from requiring them to provide information or assistance that is reasonable and necessary to allow you to access data held in, or accessible from, a computer system or other data storage device that contains or may contain information tending to incriminate the person.

This means that if they are able to seize your computers, they'll also be able to force you to give up the password, on threat of 3 months imprisonment. Unless, somehow... more on this next time.



  • Journalists and their employers are protected by the Evidence Act.
  • The minute data goes to a third-party (such as your ISP or telco) it's not protected.
  • Sources aren't protected.
  • Search warrants are easier to obtain than surveillance warrants.
  • Private individuals (including companies) can use Norwich Pharmacal Orders to root out sources.
  • Normal encryption doesn't help, as you can be legally compelled to give up the password.

The Gift that Keeps on Making Me Barf

Last week, we saw the first indication that the NSA & friends have developed "groundbreaking cryptanalytic capabilities". This week, we found out exactly what that means. Basically, the keys that major companies use to encrypt their traffic have been stolen or weakened with flaws; backdoors have been put into products and networks; this is sometimes done with the willing cooperation of companies, sometimes with coerced cooperation, and other times, without their knowledge at all.

To draw an analogy: They haven't yet figured out how to pick the locks on your door, but they've managed to steal keys, to open windows, and to make your locksmith install dodgy locks. Of course, once they've done this, they're not the only ones who can climb through those windows and break those dodgy locks.

This is why the news is significant: Not only does their mass surveillance system reach deep into secure systems used by everyone, they've also worked with industry to seed security holes throughout the entire system. It is an utter nightmare - these systems are the basis for "e-commerce", or as we call it these days, "commerce". Not only can we not trust the systems, we can't trust the people who build the systems.

This is a huge deal.

However, despite this being framed as a breach of encryption, the actual process of encryption (the actual "lock") hasn't been broken. What this has really shown is that if you want security, there is no alternative to doing it yourself and verifying it yourself.

Part 3: Verifying Keys

So there's a public key on my page. How do you know that's *my* key? Anyone could have created that key, just like I created the John PGPKey key. For all you know, some Russian hacker could have taken over Public Address and put that key there.

As a first step, you should look up my key. My key is published, so you can go to this keyserver and look it up using my name.

The second result looks like me. Which is nice, but doesn't mean much - that could be faked too. You can check the fingerprint against the one I have on my Twitter profile and the one I have on my Public Address page.
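Mechanically, checking a fingerprint is just comparing two strings - the only wrinkle is that different sites display the same fingerprint with different spacing and capitalisation. A minimal sketch (the fingerprint values below are made up for illustration):

```python
def same_fingerprint(a: str, b: str) -> bool:
    """Compare two PGP key fingerprints, ignoring spacing and case."""
    def norm(f: str) -> str:
        return f.replace(" ", "").upper()
    return norm(a) == norm(b)

# A keyserver might show "04A2 91BC ..." while a Twitter bio has "04a291bc..."
print(same_fingerprint("04A2 91BC 77F0", "04a291bc77f0"))  # True
print(same_fingerprint("04A2 91BC 77F0", "04a291bc77f1"))  # False
```

Every hex character has to match - a fingerprint that's "mostly the same" is a different key.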

They match up! This means the person who created the key also controls my Twitter and Public Address accounts. But what if both those things were hacked? Last year, Wired writer Mat Honan got hacked - from his Amazon account, they got his credit card number; with his credit card number, they got his Apple account and his Apple email; with his email, they got EVERYTHING, and remotely wiped both his computer and his phone.

Now we move on to the next step: A little further down, we see Idiot/Savant. He has signed my key, which means that he has used his key to vouch for my key. We can check I/S's key fingerprint against the fingerprint on his Twitter bio. That can be hacked as well, of course, but it means that the hacker would have to hack both our accounts, as well as Public Address and No Right Turn.

The thing that makes signed keys special is that those signatures can't be changed. If I make up a new key, those signatures have to be renewed.

If you met I/S and verified his key, then that takes you one step closer: You know that his key is not faked, therefore you can be more confident that my key is not faked.

(I'll be organising a key-signing party at some stage, which is why I haven't talked about key signing. Also, I'm on a bus to Warkworth.)


BTW, the NZ Police can use PRISM against you now

So, "GCSB assistance" is basically "NSA assistance" - when the Police ask for GCSB help, they're actually getting NSA help.

I buried the shit out of that lead last time. The only reason it didn't die there was that Juha Saarinen picked up the significance of the GCSB-NSA link and wrote about it. From there, the news made it on to Ars Technica, which got Slashdotted/Reddited/tweeted by Greenwald, which made me realise that, perhaps, this was news after all.

And that, perhaps, I shouldn't have put it in a throwaway line. Halfway down a post about PGP keys. Made at 5pm. On the day Shearer resigned. After the GCSB bill passed.

Basically: I am the worst at newsing. Soz.

Partly, I figured that people already knew: David Fisher, on the back of the same documents, implied the same things two months ago. And partly, being the pessimistic apocalyptist that I am, I was already on "Depression", and forgot that everyone else was still on "Anger".

It also highlights how these stories (PRISM, GCSB etc.) work. Not only are they inherently complex and difficult to understand, but because there's so much of it coming out in so many pieces, it's really hard to know what "everyone knows". The fact that something is in the public domain, or even has been reported, doesn't mean that it's a part of the public discourse.

Now, we return you to your homework.

Part Two: Signing/Verifying Keys

This is part of a multi-part series on security, aimed at journalists but useful for anyone. They are intended to get you comfortable with the tools and help you understand the principles behind them. These are short, easy learning exercises - *DO NOT use them to store or transmit sensitive information yet*. They are only effective once you understand all the layers and can put them together.

Let's say I send you an email, encrypted using your public key. I know who you are, because only you can decrypt that message. But how do you know who I am?

The magic of public-private keys is that they work both ways. When we encrypt a message, we use a public key to encrypt it, then a private key to decrypt it. "Signing" a message is doing the reverse. You're creating an encrypted signature using your private key, which can then be decrypted using the public key. This way, anyone can use your public key to verify that the message was sent by someone with your private key (which is hopefully you).

This message was signed with my key, so you'll need to have my public key to verify it (if you don't know how, go back and read part 1). To verify the message, copy and paste the whole thing into gpg4usb, then click Verify. The green message should appear down the bottom.

Every signature is unique because it's generated using the private key AND the message itself. If you change the message - even a single character - then the signature will be invalid. Try it!
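The sign-then-verify round trip can be sketched with toy RSA numbers. This is just an illustration of the maths - the primes below are far too small and structured for real use, and it's nothing like what gpg4usb does internally:

```python
import hashlib

# Toy RSA keypair built from two known Mersenne primes - big enough to
# demonstrate with, hopelessly insecure in practice.
p = 2**61 - 1
q = 2**89 - 1
n = p * q
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def sign(message: bytes) -> int:
    # Hash the message, then raise the hash to the PRIVATE exponent.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Undo the signature with the PUBLIC exponent; it must equal the hash.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"meet at the usual place")
print(verify(b"meet at the usual place", sig))   # True
print(verify(b"meet at the usual place!", sig))  # False - one character changed
```

Because the signature is computed from a hash of the message, changing any character changes the hash, and the verification fails.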

You can sign your own messages by writing them normally, then selecting your private key and clicking the "Sign" button. Do this last, because making any changes to the message will break the signature. After this, you can encrypt your message like you normally would - the signature will get encrypted as well.

Now you know how to sign messages to prove that you were the one who wrote them - and not some hacker who's gotten into your email.

Next chapter: Publishing keys.


Ich bin ein Cyberpunk

Welcome to your sudden but inevitable future of ubiquitous surveillance.

To an extent, I appreciate the arguments made by supporters of the GCSB Bill - it's not really a huge encroachment of mass surveillance powers, it is, mostly, just the formalisation of mass surveillance powers that have been encroaching for a decade. We are not fucked off because of the bill itself, really, but because we've finally been forced to pay attention to the barftastic overreach of state surveillance that's been happening around us.

At least, that's true for me. Thanks to the GCSB Bill, I finally got around to reading the Kim Dotcom affidavits. It's the best example we have of how "GCSB assistance" is actually rendered. The Police asked the GCSB for help in a one-page request (page 13 of this):

Once the GCSB's lawyer had a look at it, the Police provided a list of "selectors" to the GCSB (we now know from the PRISM documents that "selectors" is the term used to describe the search terms used to make PRISM requests):

The selectors were entered into █████, in an email classified as "SECRET//COMINT//REL TO NZL, AUS, CAN, GBR, USA". In other words, the selectors were entered into a secret communications intelligence system, and the material from this secret system was releasable to all five of the Five Eyes countries:

The email from the GCSB then described "traffic volume from these selectors": i.e. this secret system was capturing live traffic.

This is consistent with everything that we know about PRISM. Key has refused to comment on this.

What does this mean? It means that GCSB assistance is NSA assistance. It means that government agencies can tap into these powers as part of bread-and-butter law enforcement. Through the Bradley Ambrose case, we've seen that the Police are willing to use the full extent of their powers for entirely bullshit cases. Combine the two, and it makes me very, very queasy.

I ended my post in May with "we need to start by getting really, really fucked off". What is step two? Fortunately, there is a 25-year-old answer to this question: Encrypt everything.

Over the next however long it's going to take me, I'm going to be doing short posts on how to secretfy your stuff. Today's post is on encrypting text using public-key encryption.

Public-key Encryption (the uber-short version)

This technique is based on a pair of matching keys - one public, one private. Anything encrypted with one can only be decrypted with the other. Why? MATHS, that's why. The public key is then made public (my key is here), and anyone can use that key to encrypt a message. Only you - with the private key that you keep secret - can decrypt that message.
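To show that the MATHS really does work this way, here's the textbook RSA example with tiny primes - a sketch of the principle only (real keys use primes hundreds of digits long, plus padding and a lot of other machinery):

```python
# Toy demonstration of the "pair of matching keys" maths.
p, q = 61, 53
n = p * q                          # 3233: the modulus, part of both keys
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # encrypt with the public key
decrypted = pow(ciphertext, d, n)  # decrypt with the private key

print(ciphertext)  # 2790 - looks nothing like 65
print(decrypted)   # 65 - the private key recovers the message
```

Knowing n and e (the public key) lets anyone produce the ciphertext, but getting back from 2790 to 65 requires d - and computing d means factoring n, which is easy for 3233 and effectively impossible for real key sizes.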

It's actually not that hard. The simplest tool for dealing with PGP keys is gpg4usb. Go download it and have a play. Purely for testing purposes, here are the public AND private keys for "John PGPKey" (right click on the link --> "Save link as.." to save the file). Open up gpg4usb and use the menu bar: Keys --> Import Key from.. --> File.

Select the .asc file you just downloaded. You can now use John PGPKey's private and public keys.

(Just to reiterate, this is for testing purposes only - you should NEVER put your real private key on the internet.)

Here is a message that's been encrypted using John PGPKey's public key. Open it up and copy and paste the garbled text into gpg4usb (including the BEGIN PGP MESSAGE and END PGP MESSAGE lines). Click on the "Decrypt" button. It'll ask you for a passphrase, which is "spicy panopticon in a dunnenad sauce" (this is a more reliable guide to making secure passphrases than your IT department).
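A passphrase like that is strong because it's a handful of words chosen at random, not because it's clever. A minimal sketch of how you'd generate one - the word list here is a stand-in, and a real one (e.g. a Diceware list) has thousands of words, which is where the strength comes from:

```python
import secrets  # cryptographically secure randomness - don't use random here

# Stand-in word list for illustration only; use a real Diceware-style list.
WORDS = ["spicy", "panopticon", "sauce", "teapot", "affidavit", "selector",
         "barnacle", "correct", "horse", "battery", "staple", "warkworth"]

def make_passphrase(n_words: int = 4) -> str:
    """Pick n_words words uniformly at random and join them."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g. "teapot barnacle spicy selector"
```

The maths of it: each word drawn from a 7776-word Diceware list adds about 12.9 bits of entropy, so four or five random words beat any "P@ssw0rd1!"-style string your IT department makes you invent.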

(And no, you should not be putting the passphrases for your real private key on the internet, either. NOTE: Apologies if this didn't work before, I posted the wrong version of the key I was faffing around with.)

Enter the passphrase and BAM - you've decrypted a message! (If you haven't, check that you've copied the whole message, and check that you typed in the password properly.)

Now, to encrypt a message, just type things into the text box, select the key you want to encrypt with, and click on the "Encrypt" button. Pretty goddamn easy.

To create your own key, open up Keys --> Manage Keys. From the Key Management window, open up Key --> Generate Key. Fill out the boxes and go. You can export the public key and put it somewhere public - but let's not actually do that yet, until we have a way of securing your private key.

In the next part, we'll talk about publishing keys, verifying keys, and signing with keys.