Well, Ksilyan, it is one thing to route things more efficiently; it is another to derail entire regions because that optimization cannot and will not route around a problem. Example: the site www.furnation.com was inaccessible to me, and to anyone else whose traffic was routed through L.A., and it stayed that way for over a year because of maintenance being performed on the network somewhere between L.A. and the site's final location. This affected thousands of sites. Had I been living in Oregon or Washington, or some other place where the routers optimized the connection through some other point instead of sending everything through L.A., I could still have reached the sites in that area. Think about it: by optimizing things they cut off access to an area thousands of square miles in size for over a year, simply because the network failed to recognize that something had gone wrong.
Yes, there are limitations, and optimization should be used, but not at the expense of common sense. In many cases the DNS servers for a site are located close to the site itself. Their records are *supposed* to trickle down to other DNS servers in the chain and be cached there; however, unless the registration was done through one of a small handful of server systems, losing the primary local DNS server can, in the worst cases, cut a site off from 50% of the entire internet. In the case of Ages of Despair, it was possible to find DNS entries for it in Australia and a few other places, but the primary DNS was in Sweden. That means that when the primary source failed, the internet continued to route requests through L.A., to N.Y., to Britain, to Sweden, and then on to the non-functioning server.
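To make that concrete, here is a minimal sketch in Python (assuming the third-party dnspython package; the server addresses are placeholders, not any site's real name servers) of what falling back from a dead primary name server to a secondary copy would look like:

# Minimal sketch: try each name server in turn and return the first answer.
# Requires the third-party "dnspython" package; the addresses below are
# placeholders, not real name servers for any particular site.
import dns.exception
import dns.resolver

def resolve_with_fallback(hostname, nameservers, timeout=3.0):
    for server in nameservers:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]   # ask only this one server
        resolver.lifetime = timeout       # give up on it after a few seconds
        try:
            answer = resolver.resolve(hostname, "A")
            return [record.address for record in answer]
        except dns.exception.DNSException:
            continue                      # this server is dead, try the next one
    return []

# e.g. a primary in Sweden plus copies elsewhere (all placeholder IPs)
print(resolve_with_fallback("example.com", ["192.0.2.53", "198.51.100.53", "8.8.8.8"]))

A resolver that behaved like this presumably would have found the copies of the records in Australia instead of dead-ending on the Swedish primary.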
Worse, if L.A. went down, I couldn't reach internet sites outside of Arizona, apart from a few still accessible in L.A., because the system has lost the original capacity to reroute to alternative systems. That is irritating for a home user; for a business it could be lethal. But somewhere along the line they decided to ignore the original design spec that ARPANET had, which required the capacity to route around problems, and just hardwired some paths into the system. This is, imho, a fatal flaw in the design.
As for the idea that added traffic is bad... Most dynamic servers use IPs like 68.xxx.xxx.xxx or other 'common' ranges. There is not an infinite number of these, and since they are assigned on an as-needed basis, traffic routed to and from most people's home computers often carries identical addresses. There may be 50 people right now using a number like 68.134.245.34; all their outbound requests get tagged with this number, along with routing info saying what path each one followed on the way out. The other 49 people are sending out requests like that too. This means that hundreds of packets are going back the other direction, bouncing all around the internet and returning to you, where your ISP checks the address in each one to make sure that you were the one who made the actual request; if not, it simply drops the packet. I am not sure about this, actually; the packet may come all the way back to your own machine, where it is ignored. This is how and why packet sniffing is possible.
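As an aside, a program can ask the operating system which (possibly dynamically assigned) local address its outbound packets will be tagged with. A small sketch, standard library only; the outside address is a placeholder, and connecting a UDP socket sends no traffic, it just makes the OS pick a route and a source address:

# Sketch: discover the local (often dynamically assigned) source address
# that outbound packets from this machine get stamped with. No packets
# are actually sent; connect() on a UDP socket only selects the route.
import socket

def current_source_address():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 53))   # placeholder outside address
        return s.getsockname()[0]      # the address replies would come back to
    finally:
        s.close()

print(current_source_address())

Whatever number that call returns is the one replies come back addressed to, which is why something, the ISP or your own machine, has to decide which returning packets were really meant for you.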
In any case, optimization works like a filter, passing known requests along specific paths that may actually be invalid due to a problem farther along. It also likely drops packets that it *thinks* are being handled by another path and can safely be ignored. Meanwhile, traffic between dynamic IPs, which cannot have a fixed path since the addresses are unknown until connection time, passes randomly through all available paths, arriving in multiple copies at the destination, where it is put back together in the right order. At least 50-60% of all internet traffic works like this, since no one, from your local dial-up ISP to AOL, has a set block of addresses to use, except for business accounts. All normal accounts use dynamic IPs and thus scatter out through the internet like seed on the wind, with direct routing restricted to requests to the DNS and *known* addresses.
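The reassembly step at the end of that path is easy to picture. A toy sketch (the data is invented; real reassembly happens inside the network stack, not in application code):

# Toy illustration: packets arrive out of order and with duplicates, and the
# receiver reorders them by sequence number and drops the extra copies.
def reassemble(packets):
    seen = {}
    for seq, payload in packets:
        seen.setdefault(seq, payload)   # keep the first copy of each fragment
    return "".join(payload for _, payload in sorted(seen.items()))

arrived = [(2, "wor"), (0, "hel"), (3, "ld"), (2, "wor"), (1, "lo ")]
print(reassemble(arrived))              # -> "hello world"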
The real problem in this case, however, is in the DNS systems themselves. If your primary DNS server can't find something, it sends off a request to the next one up the line, and so on, and so on, until it either hits a time limit or a reply is returned. However, since most DNS servers haven't a clue where all of the others are, the request relies on normal traffic routing, which means it gets passed to wherever the next router thinks it should go. Even if a valid address could be found in a different region or another country, if that country is not in a straight line from the point of request (as far as the routers are concerned), the system won't ask anything outside that path, or do what it considers (incorrectly) to be backtracking to a location that is actually closer to you than the one it normally uses.
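For what it's worth, the "time limit or a reply" part is easy to see at the packet level. A rough sketch, standard library only, that sends a single A-record query over UDP and reports whether a given server answered before the timeout (the server list at the bottom is invented):

# Rough sketch: one raw DNS query over UDP, with a timeout, so a failed or
# unreachable server shows up as "no reply" instead of hanging forever.
import random
import socket
import struct

def dns_server_answers(hostname, server, timeout=2.0):
    qid = random.randint(0, 0xFFFF)
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)   # standard query, recursion desired
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)                # QTYPE=A, QCLASS=IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (server, 53))
        data, _ = sock.recvfrom(512)
        rid, flags, qd, an, ns, ar = struct.unpack("!HHHHHH", data[:12])
        return rid == qid and an > 0    # a reply came back with at least one answer
    except socket.timeout:
        return False                    # hit the time limit: no reply from this server
    finally:
        sock.close()

# walk "up the line": a (made-up) local resolver first, then a public one
for upstream in ["192.0.2.53", "8.8.8.8"]:
    if dns_server_answers("example.com", upstream):
        print("answered by", upstream)
        break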
The request doesn't need to go out at random; the network knows where the DNS servers are. It simply fails to ask any known servers that it assumes wouldn't have the information (not being in a direct line), so your request is only seen by the one that failed. That isn't efficient at all, and occasionally it is a complete pain in the backside. Worse, it hamstrings the network, so that, as in the case I mentioned earlier, every request to reach a site can dead-end even when a legitimate alternate route exists. The DNS systems along that alternate path all bounce the request as being in the wrong direction, instead of passing it back along the alternate path to the proper server. This is like having a bridge go out and 100,000 people all parked in front of it because they are too bloody stupid to find another bridge. Computer networks are *supposed* to work better than that.
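And the alternative being argued for here isn't exotic. A sketch of it (again assuming the third-party dnspython package, with made-up server addresses): ask every known server at once and keep whichever answers first, so one dead server, or one dead path, can't block the lookup:

# Sketch: query several known name servers in parallel and keep the first
# answer. Assumes the third-party "dnspython" package; IPs are placeholders.
from concurrent.futures import ThreadPoolExecutor, as_completed
import dns.exception
import dns.resolver

def ask_one(hostname, server, timeout=3.0):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    r.lifetime = timeout
    return server, [record.address for record in r.resolve(hostname, "A")]

def ask_everyone(hostname, servers):
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = [pool.submit(ask_one, hostname, s) for s in servers]
        for done in as_completed(futures):
            try:
                return done.result()    # first server to answer wins
            except dns.exception.DNSException:
                continue                # that server failed; wait for the rest
    return None

print(ask_everyone("example.net", ["192.0.2.53", "198.51.100.53", "8.8.8.8"]))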