Thursday, November 24, 2011

Hurricane Electric IPv6 tunnel through Actiontec MI424WR

A couple of days ago I decided that I wanted to establish a permanent Hurricane Electric IPv6 tunnel to my house, and did the usual thing - started Googling for the setup I'd need to pass the tunnel traffic, IP Protocol 41, through my Actiontec MI424WR router. Most of the hits I found were people saying that the Actiontec doesn't support such a feature. As it turns out, the tunnel works just fine - it's just that the configuration isn't documented.

There are two steps needed. First, log in to the Actiontec and head to the Advanced menu. Acknowledge the warning, then choose 'Port Forwarding Rules'. That will show you a long list of preconfigured rulesets, organized by application. Unfortunately there isn't one for Protocol 41, so scroll down to the bottom of the list and choose the Add option.

The 'Edit Service' screen has name and description fields that you can fill in as you wish, except that the name can't contain spaces. Then select 'Add Server Ports', and enter 41 in the 'Protocol Number' box. Apply that change, and you're halfway home.

Now head to Firewall Settings and choose 'Port Forwarding' on the left-side menu. Under 'Create new port forwarding rule' either find the IP of the tunnel server in the menu or type it in, and choose your newly created rule as the 'Application to forward'. Add the new entry, make sure it is in the rules list, and hit Apply.

I'm using Linux, and there is one tweak needed for the tunnel config that the HE website created for me. Instead of binding the tunnel 'local' end to the public IPv4 address, I needed it attached to the RFC1918 LAN address of the tunnel server. I expect that something similar would be needed for other OSes; the change should be fairly obvious if you look for the public v4 address that the HE website displayed for your connection. Have fun with IPv6. . .
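For concreteness, here is a sketch of what that adjusted Linux configuration looks like. All the addresses are placeholders - substitute the tunnel server's IPv4 address, your client IPv6 address from the HE tunnel details page, and your machine's actual LAN address:

```shell
# 6in4 tunnel to Hurricane Electric - all addresses below are examples.
# Note the 'local' address: it is the machine's RFC1918 LAN address
# (what the Actiontec hands out), NOT the public IPv4 address that
# the HE website displays for your connection.
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 192.168.1.10 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1234::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
```

The router rewrites the source address on the way out, so the HE tunnel server still sees your public address; only the local binding changes.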

Thursday, October 13, 2011

DNS makes for strange bedfellows

I happened to stop by the ISC page today and saw an intriguing juxtaposition. The first article in their news feed was titled "Protecting Intellectual Property is Good; Mandatory DNS Filtering is Bad". It's a short note from Paul Vixie reminding us that the PROTECT-IP effort in Congress hasn't gone away, and pointing out a recent letter written to the House Committee on the Judiciary.

The very next article reminded me - too late, since I'd neglected to put it in my calendar - about a webinar on the topic of "BIND's New Security Feature: DNSRPZ". I wish I'd remembered the date, but there should be a recorded version on the website soon.

On the surface these topics may appear unrelated. RPZ, or Response Policy Zones, is an ISC-developed concept, enabling recursive resolvers to intentionally block or rewrite DNS responses for specific queries, in the name of protecting clients from malware download sites, compromised web pages, botnet command and control hosts, etc. Such things have been done for a long time; my personal resolver used to have a blackhole zone in order to block a set of particularly annoying online ad providers. I stopped when that machine began to serve other users, since I didn't think that it was right to impose my judgements about those ad providers on anyone else. It's possible they would have thanked me for saving them from yet another flashing box advertising miracle wrinkle cream, but it still seemed wrong.

RPZ implements the same thing in a much more scalable way because it uses the built-in DNS zone transfer capabilities, including incremental transfers that only propagate the changed records. A DNS 'reputation provider' can make an RPZ zone available for transfer and update hundreds of other servers with minimal bandwidth. In principle this isn't much different from what I do now with spam blocklists - my spam filter queries a blocklist to ask for a reputation score, and decides how to handle the email appropriately (or whether to even accept the message in the first place). RPZ could fill the same role and extend well beyond it. Then again, we might imagine ISPs deciding they don't want their users to see the websites of their competitors, or reaching services that place a high bandwidth load on their backbone (not that anyone has ever threatened that sort of thing).
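To make the mechanism concrete, here is a hypothetical sketch of an RPZ setup on a BIND 9.8 resolver - the zone name, provider address and blocked domains are all invented for illustration:

```
// named.conf on the recursive resolver - subscribe to a policy zone
options {
    response-policy { zone "rpz.provider.example"; };
};
zone "rpz.provider.example" {
    type slave;
    masters { 192.0.2.1; };          // the reputation provider
    file "rpz.provider.example.db";  // kept current via IXFR
};

; rpz.provider.example.db - sample policy records
badsite.example.com    CNAME .                          ; answer NXDOMAIN
*.botnet.example.org   CNAME walled-garden.example.net. ; redirect instead
```

The resolver pulls updates with ordinary (incremental) zone transfers, which is what makes the scheme scale to hundreds of subscribers.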

PROTECT-IP, on the other hand, is all about allowing the courts to block access to websites involved in copyright violations. It's particularly designed to apply to DNS names registered outside the US, since anything in-country can simply be seized by the government. The proposed law would affect financial services providers, search engines and online advertising services, but the key item for ISPs is here:
An operator of a nonauthoritative domain name system server shall take the least burdensome technically feasible and reasonable measures designed to prevent the domain name described in the order from resolving to that domain name's Internet protocol address. . .
In other words, the court would require US-based ISPs to rewrite DNS responses for specific queries, in the name of preventing clients from accessing copyright-infringing sites.

Does that sound familiar?

Mind you, I feel the cognitive dissonance - I don't really like the idea of PROTECT-IP, but I actively use spam reputation services and rely on them to determine whether I should accept email. I know people use manually configured DNS blocklists and I'm sure that they do good work with them, protecting their users from all sorts of badness. Were I responsible for a large number of potentially vulnerable hosts, I might do the very same thing.

It's worth noting that both RPZ and the more general concept in the PROTECT-IP bill share another problem - they will utterly break DNSSEC validation. That's okay as long as the goal is simply to make sure that nobody can access the blocked site; it doesn't work at all when the idea is to redirect the user to a page indicating why this action has been taken (or even stating that an action has been taken). The authors of the PROTECT-IP white paper cover that quite well:
By mandating redirection, PROTECT IP would require and legitimize the very behavior DNSSEC is designed to detect and suppress. Replacing responses with pointers to other resources, as PROTECT IP would require, is fundamentally incompatible with end-to-end DNSSEC. Quite simply, a DNSSEC-enabled browser or other application cannot accept an unsigned response; doing so would defeat the purpose of secure DNS. Consistent with DNSSEC, the nameserver charged with retrieving responses to a user’s DNSSEC queries cannot sign any alternate response in any manner that would enable it to validate a query.
So, what are we left with? Perhaps we can say that RPZ might or might not be a good idea depending on how it's used, but if so, then surely PROTECT-IP is in the same category. Both of them break DNSSEC, because both of them break the DNS. Is one a way to protect users and the other an example of censorship? Is one more likely to be abused than the other? Is one good and the other evil?

Wednesday, September 14, 2011


You've no doubt heard something about DNSSEC - a news release from a vendor announcing their support of the technology, an article about last year's signing of the root or one of the TLDs, or perhaps a memo from your CIO/webmaster/sysadmin/mother-in-law about whether, when and how you should implement it. For many people, the first question is the most important: will the extra effort, added complexity and increased potential for operational problems be worthwhile?

There's a lengthy list of arguments in favor of DNSSEC, but to add another one - ripped from the headlines, as the saying goes - we have the sad situation of SSL/TLS certificates. If you use any services in the so-called cloud, you're dependent on SSL to protect the connection, especially while using open-access wireless networks where the traffic is visible to anyone who might want to install a simple browser extension. Still worse, there might be someone who wants to see or even modify your traffic, and is willing to go to the effort of setting up a man-in-the-middle attack between you and the cloud.

SSL should protect you against such things, since a secured connection can't be decoded or intercepted unless the bad guy has access to the secret part of the SSL key. That's closely guarded information and it's unlikely anyone could get their hands on it for a major player like Microsoft or Google - so instead, the bad guys take the route of pretending to be Microsoft and convincing the SSL certificate authority to give them another certificate. Sounds difficult, right? Perhaps, but it can be done. Alternatively, find a Certificate Authority with weak security and simply break in. This time the certificate seems to actually have been used for an attack.

Now, DNSSEC is designed to protect the DNS; it doesn't replace SSL (or VPNs, or any other version of transport security). But the DNS can contain more than just names and IP addresses. For example, it could be modified to include information about SSL certificates. What if you could use DNSSEC in your own domain to securely inform your clients that the SSL certificates you've installed are the valid ones for your domain? That's the goal of an IETF working group called DNS-based Authentication of Named Entities (DANE). You can see all the details of what they're up to in these working drafts:

Use Cases and Requirements for DNS-based Authentication of Named Entities (DANE)
Using Secure DNS to Associate Certificates with Domain Names For TLS

The idea is simple: you create a certificate and configure it on your web server. If you want to, you can have it signed by your favorite SSL CA - that will continue to be useful at least until DANE is widely deployed. In addition, you add a new DNS record, called (at least in the draft stage) TLSA, which the client on the other side of the connection can use to verify the certificate that your server provides. Of course your names should be DNSSEC-signed as well, to make sure that the bad guy can't change the TLSA record and mount his attack that way.
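A TLSA record along the lines of the draft looks something like this - the name, port and hash are invented for illustration, and the field layout could still change before the drafts are finalized:

```
; TLSA record for the HTTPS service on www.example.com, port 443.
; The three numeric fields say, roughly: match this exact certificate
; association (3), against the server's public key (1), as a SHA-256
; digest (1). The hex digest below is a made-up placeholder.
_443._tcp.www.example.com. IN TLSA 3 1 1 (
        d2abde240d7cd3ee6b4b28c54df034b9
        7983a1d16e8a410e4561cb106618e971 )
```

The client looks up the record (validating it with DNSSEC), hashes the key it received in the TLS handshake, and compares.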

Ultimately, DANE's biggest impact may be on the small sites that don't want to spend money on a CA-signed certificate for every single SSL-enabled web server; the TLSA record will let users know that the certificates are legitimate, and depending on the browser implementation will likely avoid the frightening pop-up window that says "This Web Site Is Not Trusted!"

And that may also be the biggest value of DANE deployment. Right now so many websites have self-issued certificates that users can easily become jaded and fall into the habit of clicking through the warning every time. When those scary pop-ups become rare, they'll be much more likely to be taken seriously.

What do you have to do to take advantage of all this goodness? Push forward with DNSSEC deployment by signing your own DNS records, validating DNSSEC signatures on your resolvers, and demanding DNSSEC support from your vendors. We need to have DNSSEC past the tipping point in order for DANE to make sense, but once that happens it should be a no-brainer.
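A quick way to check whether your resolver is actually validating - sketched here with dig; the hostname is just an example of a signed zone, and the resolver address is a placeholder:

```shell
# Query a DNSSEC-signed name with the DNSSEC-OK bit set.
# A validating resolver sets the 'ad' (authenticated data) flag
# in the response header and returns RRSIG records with the answer.
dig +dnssec www.isc.org @192.0.2.53

# In the output, look for:
#   ;; flags: qr rd ra ad;
# If 'ad' is missing, the resolver answered without validating.
```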

PS - for the hardcore geeks there's a similar DNS record, already standardized, that lets you publish your server's SSH fingerprint. Look at the ssh(1) man page for "VERIFYING HOST KEYS", and at ssh_config(5) for VerifyHostKeyDNS. . .
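Generating those SSHFP records is a one-liner; run it on the SSH server itself (the hostname is an example):

```shell
# Print SSHFP resource records for this host's keys, in a form
# ready to paste into the (DNSSEC-signed!) zone file:
ssh-keygen -r host.example.com

# Then, on the clients, in ~/.ssh/config or /etc/ssh/ssh_config:
#   VerifyHostKeyDNS yes
```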