Why DNS Blacklists Don’t Work for IPv6 Networks

All effective spam filters use DNS blacklists or blocklists, known as DNSBLs. They provide an efficient way to publish sets of IP addresses from which the publisher recommends that mail systems not accept mail. A well-run DNSBL can be very effective; the Spamhaus lists typically catch upwards of 80% of incoming spam with a very low error rate.

DNSBLs take advantage of the existing DNS infrastructure to do fast, efficient lookups. A DNS lookup typically goes through three computers, like this:

The client, usually the mail server, asks a nearby DNS cache for the DNSBL entry for the IP address in question. If the cache already has a copy of the entry, it returns it immediately; otherwise it fetches a copy from the DNSBL’s DNS server and returns that. DNSBL lookups, like most other kinds of DNS lookups, tend to be fairly repetitive, since the same IP addresses tend to send multiple messages, so the local cache handles the bulk of the work, limiting the load on the remote server. DNS caches are an essential part of making DNSBLs work, since the remote server (or typically a group of remote servers) has to handle every request for the DNSBL that the caches don’t. Unfortunately, caches don’t work for IPv6 DNSBLs.
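
To make the mechanics concrete, here is a minimal sketch of the kind of lookup a mail server performs, using only the Python standard library. The zone name dnsbl.example.org is a placeholder rather than a real list; real DNSBLs follow the same pattern of reversing the octets and returning an answer in 127.0.0.0/8 for listed addresses.

    import socket

    def dnsbl_listed(ip, zone="dnsbl.example.org"):
        """Check an IPv4 address against a DNSBL zone (placeholder zone name).

        The octets are reversed and prepended to the zone, so checking
        192.0.2.99 against dnsbl.example.org means looking up
        99.2.0.192.dnsbl.example.org. An answer (conventionally in
        127.0.0.0/8) means "listed"; NXDOMAIN means "not listed".
        """
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            return socket.gethostbyname(query)   # listed: returns e.g. 127.0.0.2
        except socket.gaierror:
            return None                          # not listed (or the lookup failed)

    # With a placeholder zone this simply prints None.
    print(dnsbl_listed("192.0.2.99"))

Every one of these queries is an ordinary DNS lookup, which is exactly why the local cache can absorb most of the traffic when the same IPv4 addresses keep reappearing.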

The reason is the vastly larger IPv6 address space. IPv4 addresses are 32 bits long, allowing 4 billion addresses. That seems like (and is) a lot, but it’s few enough that all the addresses will be handed out by sometime next year, and any given network has only a limited supply of them. This means that a single host usually has a single IPv4 address, or at most a few hundred addresses. IPv6 addresses are much longer: 128 bits. They are so long that whereas in IPv4 an ISP usually allocates a single IP address to each customer, in IPv6 ISPs will probably allocate a /64 to each customer, that is, a range spanning 64 bits’ worth of addresses. While there are sensible technical reasons to do this, it also has the unfortunate effect that a computer can switch to a new IP address each time it sends a new message, and never reuse an address. (As a rough approximation, if you sent a billion messages a second, each with its own address, it would take about a thousand years to use all the addresses in a /64.)
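
A quick back-of-the-envelope check of that rough approximation, assuming one address is burned per message at a billion messages a second:

    # Rough check of the "/64 never runs out" figure quoted above.
    addresses_per_64 = 2 ** 64                  # 18,446,744,073,709,551,616 addresses
    rate = 1_000_000_000                        # one message, and one address, per nanosecond
    seconds = addresses_per_64 / rate
    years = seconds / (365.25 * 24 * 3600)
    print(f"{years:,.0f} years")                # on the order of 600 years: effectively inexhaustible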

Blocking addresses one at a time isn’t going to work if a bad guy can pick a new address for each message. The obvious countermeasure is to put ranges of addresses into the DNSBL. That’s already standard practice in IPv4 DNSBLs, where if a bad guy controls a range of addresses, the BL lists the whole range. But the problem with listing IPv6 ranges is that the ranges are so vast that they risk overloading DNS servers and caches. If every spam comes from a different IP address, every DNSBL lookup will require the DNS cache to query the DNSBL server since the answer won’t be in the cache. This will overload the DNSBL servers. Worse, since DNS caches tend to keep the most recent answers around in preference to older ones, the flood of DNSBL data will force all of the other DNS info out of the cache as well. On most systems, DNSBLs use the same cache as all other DNS queries, so it will also increase the load on every other DNS server, re-fetching answers that were flushed out of the cache. Even if the DNSBL servers use a single DNS wildcard record to cover a large range of DNSBL entries, that doesn’t help, because DNS caches can’t tell that a response was created from a wildcard, and so keep a separate entry for each response.
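
To see why caching breaks down, look at how the query names are formed. IPv6 DNSBL lookups conventionally use the nibble-reversed form of the address, the same convention as ip6.arpa reverse DNS, so two addresses that differ only in their low 64 bits produce completely different query names, and a cached answer for one tells the resolver nothing about the other. A small sketch, again with a placeholder zone name:

    import ipaddress

    def dnsbl_name_v6(addr, zone="dnsbl.example.org"):
        """Build the nibble-reversed DNSBL query name for an IPv6 address."""
        nibbles = ipaddress.IPv6Address(addr).exploded.replace(":", "")  # 32 hex digits
        return ".".join(reversed(nibbles)) + "." + zone

    # Two addresses from the same customer's /64:
    print(dnsbl_name_v6("2001:db8:1:2::1"))
    print(dnsbl_name_v6("2001:db8:1:2::2"))
    # The names share a common tail (the /64) but the cache stores them as two
    # unrelated entries, so every fresh sending address is a fresh cache miss.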

I see a few possible responses to this situation. One is to switch from DNSBLs to DNS whitelists. The number of legitimate hosts sending e-mail is surprisingly small, probably on the order of 100,000 in the world. Even though the number of whitelist entries is small, that still doesn’t solve the DNS cache problem, since each failed request potentially takes up a cache slot as well, to remember not to retry the request.

The second is to change the DNS to handle DNSBLs with ranges more efficiently. It turns out that DNSSEC, the cryptographic security add-on to the DNS that is finally starting to see broad use, does most of this already. In particular, if a DNS query is satisfied by a wildcard, the DNSSEC information sent along with the response identifies the wildcard and the range of queries that the wildcard can answer. As far as I know, caches don’t use this information to synthesize subsequent wildcard responses themselves, but they could do so without any changes to the DNS or DNSSEC. Also, most DNSBLs and DNSWLs are served using a package called rbldnsd, which doesn’t support DNSSEC and, due to its internal structure, would be hard to modify to do so.

Another is to modify the way that IPv6 DNSBLs work. On the ASRG list we’ve been discussing some possible changes that would improve cache behavior by telling querying clients what the granularity of the DNSBL’s entries is, so they can do one query per entry rather than one query per IP address.

The last is that, for the most part, mail systems simply won’t use IPv6 addresses, since all the mail that anyone wants will continue to be sent over IPv4. I will blog about that in a few days.

Written by John Levine, Author, Consultant & Speaker

How Complete is the RIPE Routing Registry?

The Internet Routing Registry (IRR) is a globally distributed routing information database. The IRR consists of several databases run by various organisations in which network operators can publish their routing policies and their routing announcements in a way that allows other network operators to make use of the data. In addition to making Internet topology visible, the IRR is used by network operators to look up peering agreements, determine optimal policies and, more recently, to configure their routers.

It is often claimed that the IRR is not used enough, is not complete, is not up to date and is, therefore, not a reliable source for routing information. And because the IRR is spread over multiple databases, some parts of it might be more complete and up to date than others.

The RIPE NCC also maintains a Routing Registry (RR) as part of the RIPE Database. It is considered good practice to register routing policy in a routing registry. Having routing policy registered in the RR makes it easier for network operators to configure and manage their routers. Some Internet service providers even require their customers’ routes to be registered in the RR before agreeing to route those address prefixes. We were interested to find out how many of the organisations that received an Autonomous System Number (ASN) from the RIPE NCC use the RIPE RR. We did this in three steps (a rough sketch of the third step follows the list):

  1. Count all ASNs assigned by the RIPE NCC
  2. Find out how many of those ASNs are visible in the routing system
  3. And of those, see how many are listed as origin AS in a route or route6 object in the RIPE RR
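
As a rough sketch of the third step, the RIPE Database can be queried over plain whois (port 43); this assumes its inverse-lookup flag (-i origin) and type filter (-T route,route6), and AS3333 is just an illustrative AS number.

    import socket

    def ripe_route_objects(asn, host="whois.ripe.net"):
        """Fetch route/route6 objects that name the given AS as their origin.

        Sends a plain whois query; '-i origin' asks for an inverse lookup on
        the origin: attribute and '-T route,route6' restricts the object types.
        """
        query = f"-i origin -T route,route6 {asn}\r\n"
        with socket.create_connection((host, 43)) as sock:
            sock.sendall(query.encode())
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    # Example: is AS3333 listed as origin in any route or route6 object?
    print(ripe_route_objects("AS3333"))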

The image below shows how many of those ASNs that were assigned by the RIPE NCC and that are visible in the routing system are also listed as origin AS in a route or route6 object in the RIPE Routing Registry.

For IPv4 route objects, the percentage is very high and stable for the last few years—around 95%. This means that almost all organisations that receive an ASN from the RIPE NCC and that originate a route in BGP also register a route object in the RIPE Routing Registry. Even without knowing how accurate and up-to-date these objects are, the fact that such a large fraction is registered is a great sign.

For IPv6, the numbers are a little lower—86% of RIPE NCC assigned ASNs that originate IPv6 routes in the routing system are also registered in the RIPE Routing Registry. In 2007, this number was only 60%, so it has been steadily increasing over time.

In a future study, we will investigate further the accuracy of these route objects.

For more details, please refer to Interesting Graph – How Complete is the RIPE Routing Registry.

Written by Mirjam Kuehne

IPv6 Subnet Calculators

IPv6 subnetting and IPv4 subnetting are very different, so you may need a tool to assist you.
Complete info at NetworkWorld.
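
If you just need quick answers rather than a full calculator, Python’s standard ipaddress module already handles the basic IPv6 subnet arithmetic; a minimal sketch, using a documentation prefix as the example allocation:

    import ipaddress
    import itertools

    # Carve a /48 allocation into /64 customer subnets (nibble boundaries make this easy).
    allocation = ipaddress.ip_network("2001:db8:abcd::/48")
    print(allocation.num_addresses)    # 1208925819614629174706176 addresses (2**80)

    for subnet in itertools.islice(allocation.subnets(new_prefix=64), 4):
        print(subnet)                  # 2001:db8:abcd::/64, 2001:db8:abcd:1::/64, ...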

Winners of the 2010 IPv6 Awards Announced!

The IPv6 awards have just been presented in Scheveningen during the ECP-EPN annual congress.

In just 100 days, the Internet addresses will run out. The solution to that pressing problem is called Internet Protocol version 6 (IPv6).
The Dutch TaskForce IPv6 is drawing attention to this by presenting the IPv6 awards today for the second year in a row. With these awards, the TaskForce wants to highlight the necessary switch from IPv4 to IPv6.

This year’s IPv6 award winners are:

  Internet Service Providers: XS4ALL
  Business: GeenStijl
  Government & Not-for-profit: Ministerie van Algemene Zaken
  Education & Research: Studenten Net Twente
  Publication & Training Curriculum: NGN
  Encouragement Award: Pieter-Tjerk de Boer

We congratulate the winners on their award!

IPv6 and Transitional Myths

I attended the RIPE 61 meeting this month and, not unexpectedly for a group with an interest in IP addresses, the topic of IPv4 address exhaustion, and the related topic of the network’s transition to IPv6, captured a lot of attention throughout the meeting. One session I found particularly interesting covered the transition to IPv6, where folk related their experiences and perspectives on the forthcoming transition.

That session exposed some commonly held beliefs about the transition to IPv6, so I’d like to share them here and discuss a little about why I find them somewhat fanciful.

“We have many years for this transition”

No, I don’t think we do!

The Internet is currently growing at a rate that consumes some 200 million IPv4 addresses every year, or about 5% of the entire IPv4 address pool. This reflects underlying growth in service deployment of the same order of magnitude: some hundreds of millions of new services activated per year. Throughout a dual-stack transition all existing services will continue to require IPv4 addresses, and all new services will also require access to IPv4 addresses. The pool of unallocated addresses is predicted to exhaust in March 2011, and the RIRs will exhaust their local pools starting in late 2011 and through 2012. Once those pools exhaust, all new Internet services will still need IPv4 addresses for the IPv4 side of the dual-stack environment, but at that point there will be no more freely available addresses from the registries. Service providers have some local stocks of IPv4 addresses, but even those will not last for long.

As the network continues to grow, the pressure to find the equivalent of a further 200 million or more IPv4 addresses each year will become acute, and at some point unsustainable. Even with the widespread use of NATs, and further incentives to recover all unused public address space, the inexorable growth of the network will make the supply of addresses impossible to sustain.

It’s unlikely that we can sustain 10 more years of network growth using dual stack, so the transition will need to happen faster than that. How about 5 years? Even then, at the higher end of the growth forecasts, we would still need to flush out the equivalent of 1.5 billion IPv4 addresses from the existing user base to sustain a 5-year transition, and that seems to be a stretch target. A more realistic estimate of transition time, in terms of IPv4 addresses accessible through recovery efforts, is in the 3 to 4 year timeframe, and no longer.
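
A back-of-the-envelope restatement of those figures, taking the higher end of the growth forecasts as roughly 300 million addresses a year (an assumed figure, not one from the talks):

    # Back-of-the-envelope restatement of the transition arithmetic above.
    current_rate = 200_000_000       # addresses consumed per year at today's growth rate
    high_forecast = 300_000_000      # assumed "higher end" of the growth forecasts

    print(current_rate / 2 ** 32)    # ~0.047: about 5% of the whole IPv4 space per year
    print(5 * high_forecast)         # 1,500,000,000 addresses for a 5-year dual-stack run
    print(4 * current_rate)          # 800,000,000: why 3 to 4 years looks far more attainable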

So no, we don’t have many years for this transition. If we are careful, and a little bit lucky, we’ll have about four years.

“It’s just a change of a protocol code. Users won’t see any difference in the transition.”

If only that were true!


ipv6.cnn.com

I just saw this come across Sixy: CNN has an IPv6 site. And it has real content.

This is fantastic! As expected, not all of CNN’s content loads over IPv6 (videos and parts of the CDN are still v4-only), but this is a big step forward.

Source: http://www.personal.psu.edu/dvm105/blogs/ipv6/2010/11/ipv6cnncom.html