
Gammon Forum


Unable to Resolve Host Name...?

It is now over 60 days since the last post. This thread is closed.



Posted by Zimri   (1 post)  Bio
Date Sat 15 Nov 2003 05:18 PM (UTC)
Message
Every time I try to connect to my beloved MUD, I get a message that says, "Unable to Resolve Host Name for mud.archsysinc.com, code=11004, (Valid name, no data record of requested type)." Please help!

Posted by Shadowfyr   USA  (1,791 posts)  Bio
Date Reply #1 on Sun 16 Nov 2003 03:43 AM (UTC)

Amended on Sun 16 Nov 2003 03:46 AM (UTC) by Shadowfyr

Message
This has happened to me occasionally. It means that the DNS server that normally returns the real IP address is either offline or has temporarily lost the address. Usually it means the server is down. Unfortunately, due to the boneheaded way companies have 'optimized' the internet, there is no guarantee that another DNS server has the address, and as a result the system is literally unable to figure out what the text name you are using actually points to. There are only two solutions, well maybe three, though that last one is only a theory and would require a complete tech head to pull off:

1. Wait for the DNS server to start working again.

2. Use the numeric address, which hopefully isn't dynamic. If the mud is run on a server that changes numeric addresses on occasion, then you may be in trouble.

You may have some luck using Sam Spade or other programs to look up the number, since such programs can check more than one DNS server, whereas a normal request tends to go out, hit some point that doesn't recognize the address at all, and simply stop there. There may be 20 servers that know what the address is, but because your request is hitting a DNS server that *knows* previous requests were answered by server D, it will look for that and, when it doesn't get an answer, simply inform you that it failed, even if server H also has it:
You ----- A -----B-----C-----D*
           \-----E-----F-----G------H*

Where D* and H* know the address. Server A in the above case will (in order to optimize things for speed) simply not bother to pass your request to E through H. Sam Spade lets you bypass this so-called 'improvement' and ask H, whatever it is actually named, directly (see the sketch after this list). Often you can find an address this way, even if MUSHclient and everything else insist it doesn't exist. You then have to change your connection address from the name to the xxx.xxx.xxx.xxx number to connect. For a while I did this myself, because DNS lookups for www.agesofdespair.org kept giving me the same error you got. Of course now it is at www.agesofdespair.net and a different location, so the problem went away for me.

3. Make your own local DNS server to return 'known' addresses that you are sure don't change from day to day and skip the whole mess of asking the internet to do it. This is, like I said, not something you or even I could manage. lol
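A rough illustration of point 2, in Python: the snippet below asks one specific, hand-picked DNS server for the record directly instead of going through whatever resolver your system normally uses. It assumes the third-party dnspython package is installed, and the 8.8.8.8 server is only an example public resolver, not anything mentioned in this thread.

# Query a specific DNS server directly, bypassing the system resolver.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def lookup(hostname, nameserver):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]        # ask only this server
    answer = resolver.resolve(hostname, "A")   # request the A (address) record
    return [record.address for record in answer]

# Example: ask a public resolver for the MUD's numeric address,
# which you could then paste into the client in place of the name.
print(lookup("mud.archsysinc.com", "8.8.8.8"))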

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #2 on Sun 16 Nov 2003 09:25 AM (UTC)
Message
I think it's a little unfair to speak of "boneheaded optimization". I mean, sure, let's have every single DNS request go to every single server, no matter how many hops are in between, and let's make sure that we nicely flood as much data through as possible. How would you have done it, given the constraints that the Internet has? And besides, is this really any more than a very rare occasional problem? Getting rid of this feature would cost the Internet a *lot* in performance. There is so much theory behind why the Internet works that way, that to speak of it fairly you have to be aware of at least some of the theory. (And I mean mathematical stuff, graph theory, not general hand-waving theory ideas.)


Here's an idea you can try, Zimri. If you have access to a remote server via Telnet, or any machine on a different sub-network, go to that machine (physically or via telnet) and type "ping my.server.com". That will look up the name from that computer, and you may get a different DNS path that does find the IP address. Then you can use that IP address on your own computer - the one you want to connect from - since connecting by IP does not require a DNS lookup.

This problem also comes up when a server changes IP addresses. Actually, more often than not, that's the problem. The new IP has to propagate throughout the network of DNS servers, and that takes some time. Sometimes it happens within a few minutes, sometimes it happens within a few hours or more, depending on how "well-known" the server is. Since your MUD server is probably a fairly minor server, it won't be stored in a lot of servers, so there won't be as many around to update it.

How long has this problem lasted? If it's a DNS problem, I'd be surprised if it lasts longer than 24 hours, or 48 at the most. The fact that it can't resolve the host name does suggest that it's a DNS problem.

FYI, I pinged the address you gave me, and obtained the following IP address: 198.107.27.119
Try entering that as a server address instead of the host name (use numbers instead of names.)
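A minimal sketch of that fallback, purely for illustration: try the normal name lookup first, and fall back to the numeric address from the ping above if DNS fails. The port 7000 is an assumption here, taken from the MUD's banner quoted later in this thread.

import socket

HOST_NAME = "mud.archsysinc.com"
FALLBACK_IP = "198.107.27.119"   # the address the ping above returned
PORT = 7000                      # assumed; see the banner quoted further down

try:
    address = socket.gethostbyname(HOST_NAME)   # normal DNS lookup
except socket.gaierror:
    address = FALLBACK_IP                       # DNS failed: use the numeric IP

connection = socket.create_connection((address, PORT), timeout=10)
print("Connected to", address)
connection.close()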

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shadowfyr   USA  (1,791 posts)  Bio
Date Reply #3 on Sun 16 Nov 2003 08:20 PM (UTC)
Message
Well, Ksilyan, it is one thing to route things in a more efficient way; it is another to derail entire regions because that optimization cannot and will not route around a problem. Example: the site www.furnation.com was inaccessible to me or anyone else whose Internet traffic was routed through L.A. It remained inaccessible for over a year, due to maintenance being performed on the network somewhere between L.A. and the final location. This affected thousands of sites. Had I been living in Oregon or Washington or some other place where the routers optimized the connection through some other point, instead of funnelling everything through L.A., I could have still gotten to the sites in that area. Think about it: this means that by optimizing things they cut off access to an area thousands of square miles in size for over a year, simply because the network failed to recognize that something had gone wrong.

Yes, there are limitations and optimization should be used, but not at the expense of common sense. In many cases the DNS servers for a site are located close to the site itself. These entries are *supposed* to trickle down to other DNS servers in the chain and be cataloged there; however, unless the registration was done through one of a small handful of server systems, losing the primary local DNS server can, in the worst cases, cut the site off from half of the entire internet. In the case of Ages of Despair, it was possible to find DNS entries for it in Australia and a few other places, but the primary DNS was in Sweden. This means that when that primary source failed, the internet continued to route requests through L.A., to N.Y., to Britain, to Sweden and then on to the non-functioning server.

Worse, if L.A. went down, I couldn't get to internet sites outside of Arizona and the few places still accessible in L.A., because the system has lost the original capacity to reroute to alternative paths. This is irritating for a home user; for a business it could be lethal. But someplace along the line they decided to ignore the original design spec that ARPANET had, which required the capacity to route around problems, and just hardwired some paths into the system. This is, imho, a fatal flaw in the design.

As for the idea that added traffic is bad... Most dynamic servers use IPs like 68.xxx.xxx.xxx or other 'common' ranges. There are not an infinite number of these, and since they are assigned on an as-needed basis, traffic routed to and from most people's home computers often carries identical addresses. There may be 50 people right now using a number like 68.134.245.34; all their outbound requests get tagged with this number, as well as routing info saying what path was followed on the way out. The other 49 people are sending out requests like that too. This means that hundreds of packets are going back the other direction, bouncing all around the internet and returning to you, where your ISP checks the address in each one to make sure that you were the one making the actual request. If not, it simply drops the packet. I am not sure about this actually; it may come all the way back to your own machine, where it is ignored. This is how and why packet sniffing is possible.

In any case, optimization works like a filter, passing known requests along specific paths that may actually be invalid due to a problem farther along. It also likely drops packets that it *thinks* are being handled by another path and can safely be ignored. Meanwhile, traffic between dynamic IPs, which cannot have a fixed path since the addresses are unknown until connection, passes randomly through all paths, arriving in multiple copies at the destination and being put back together in the right order there. At least 50-60% of all internet traffic works like this, since no one, from your local dial-up ISP to AOL, has a set block of addresses they can use, except for business accounts. All normal accounts use dynamic IPs and thus scatter out through the internet like seed on the wind, with direct routing restricted to requests to DNS servers and *known* addresses.

The problem, however, is in this case in the DNS systems themselves. If your primary DNS server can't find something, it sends off requests to the next one up the line, and so on, and so on, until it either hits a time limit or a reply is returned. However, since most DNS servers haven't a clue where all of the others are, the lookup relies on the normal traffic routing, which means that the DNS request gets passed to wherever the next router thinks it should go. Even if a valid address could be found in a different region or in another country, if that country is not in a straight line from the point of request (as far as the routers are concerned), the system won't request from someplace outside of that path, or do what it considers (incorrectly) to be backtracking to some location that is actually closer to you than the one it normally uses.

It doesn't need to go out at random; the network knows where the DNS servers are. It simply fails to request from any known servers that it assumes wouldn't have the information (not being in a direct line), so your request is only seen by the one that failed. This isn't efficient at all, and occasionally it is a complete pain in the backside. Worse, it hamstrings the network, so that, as in the previous case I mentioned, every request to get to a site can dead-end even if a legitimate alternate route exists to get there. However, the DNS systems in that alternate path all bounce the request as being in the wrong direction, instead of passing it back along the alternate path to the proper server. This would be like having a bridge go out and 100,000 people all parked in front of the bridge, because they are too bloody stupid to find another bridge. Computer networks are *supposed* to work better than that.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #4 on Sun 16 Nov 2003 08:45 PM (UTC)
Message
I've never heard of a whole area getting shut down due to DNS failures. In fact, I would be highly surprised if DNS servers were the only culprit, because their very nature is to propagate information. Yes, I've had cases where I couldn't access a given site because its IP changed and the DNS servers hadn't figured it out yet, but it never took longer than 24 hours for the information to re-propagate.

Besides, everything of which you speak is still a relatively isolated occurrence, when you compare it against all the times when it goes right. I guess you could compare it to airplanes. It works right so often that people don't even think of it anymore, but the very rare times it goes wrong the consequences are disastrous and 300+ people die. I'd say the Internet is similar; it works so often and so well that people take it for granted, and the day it stops working people start throwing around words like "dumb system", "design flaw", "bad optimization", etc. (I'm not quoting you on all of those; I've seen them in many other places.)

I also beg to differ that sending extra packets won't slow things down. Can you imagine if every single DNS request suddenly resulted in a doubling - nay, it would be almost an exponential increase - of packets? You send to your DNS server, then it sends to not just one but many of the DNS servers it knows, and then each of these in turn sends not just one but many... I'm not so much worried about the individual people, but about the traffic on the system in general.

What you spoke of with packets being accepted or not might be what is called masquerading, but in any case that process can happen on many different levels along the food chain of routers.

And actually, dynamic IPs don't travel in random paths like you say. As soon as they arrive at a known server - which usually happens on the first hop from you to your ISP - they fall under the same Dijkstra-style shortest-path routing as everyone else. So no, packets don't propagate in random directions; that's the whole point of the algorithm. And besides, how would a server know to accept or reject a packet it receives in multiple copies... maybe the sender truly meant to send it thrice?
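For reference, here is a bare-bones version of Dijkstra's shortest-path algorithm, the kind of rule routers apply instead of flooding packets in random directions. The tiny graph is made up purely for illustration, reusing the server names from the diagram earlier in the thread.

import heapq

def dijkstra(graph, start):
    # graph: {node: {neighbour: link cost, ...}, ...}
    distances = {start: 0}
    queue = [(0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > distances.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < distances.get(neighbour, float("inf")):
                distances[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return distances

# A made-up network: each router picks the cheapest known path, not a random one.
network = {"you": {"A": 1}, "A": {"B": 2, "E": 5}, "B": {"D": 1},
           "E": {"H": 1}, "D": {}, "H": {}}
print(dijkstra(network, "you"))
# {'you': 0, 'A': 1, 'B': 3, 'D': 4, 'E': 6, 'H': 7}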

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Samson   USA  (683 posts)  Bio
Date Reply #5 on Sun 16 Nov 2003 11:03 PM (UTC)
Message
Before you both go off and kill each other over this, you might want to take a look at the following link:

http://simplemu.onlineroleplay.com/board2k/UltraBoard.cgi?action=Headlines&BID=2

The #11004 error has been mentioned there before, and since that's the original problem that was reported, the link may well be worth a look.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #6 on Sun 16 Nov 2003 11:30 PM (UTC)
Message
So what you're saying is that this is a virus that is messing with the computer's DNS server address and the hosts file? Funky, but quite plausible.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Samson   USA  (683 posts)  Bio
Date Reply #7 on Mon 17 Nov 2003 01:59 AM (UTC)
Message
Actually it was Network Associates who said it; I was merely relaying the information. It doesn't appear to have been a very widespread Trojan, but it was out there for a while causing havoc for some people. It may not have been the cause of this person's problem, but it was just one more thing to check on. The HOSTS file is generally not hard to fix :)
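If anyone wants to check that file themselves, here is a quick sketch, assuming the usual hosts file locations on Windows and Unix. It just prints the non-comment entries, so an unexpected line pointing your MUD or a familiar site at a strange address would stand out.

import os

# Common hosts file locations (assumed defaults; adjust for your system).
candidates = [r"C:\Windows\System32\drivers\etc\hosts", "/etc/hosts"]

for path in candidates:
    if os.path.exists(path):
        print("Entries in", path)
        with open(path) as hosts_file:
            for line in hosts_file:
                line = line.strip()
                if line and not line.startswith("#"):   # skip blanks and comments
                    print(" ", line)
        break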

Posted by Shadowfyr   USA  (1,791 posts)  Bio
Date Reply #8 on Mon 17 Nov 2003 02:31 AM (UTC)
Message
I have read and heard explanations from people about the internet saying that packets may be sent once, but under the 'normal' operation of the internet as specified by the original ARPANET specs, those packets are replicated and sent on to more than one outbound path, each of those paths routes to multiples, and so on. So even if there is an algorithm now in use that routes them more efficiently, it doesn't mean that some systems aren't still using the simpler method in some places, and that more than one copy can't arrive, often out of sync.

As a way of making sure that minor disturbances and lag don't result in more packet loss than usual, they may even intentionally propagate extra copies anyway. However, the algorithm used still routes to the most efficient *apparent* path, which means that if something derails a key router it sometimes takes 24 hours at minimum to fix, since there is no way for the network to effectively bypass the downed router by backtracking in a direction that *it* believes is the wrong path. And since DNS servers only propagate their listings by a sort of osmosis, and some DNS servers may not be considered *legitimate* by the big names, many addresses get lost or flat out ignored by DNS servers in the chain. I have had the server at the .de college address that used to host the mud I play on go down for 3-4 weeks because its addresses were not stored anyplace in the standard DNS servers (it not being part of the big-name DNS registries), and yet I could use the numeric IP to get there with no problem. It may work, but it is nowhere near as stable or foolproof as you seem to think. I am sure that if the Titanic had missed the iceberg it would have had many years of service, but that wouldn't have made it *safe*.

Posted by Nick Gammon   Australia  (23,162 posts)  Bio   Forum Administrator
Date Reply #9 on Wed 19 Nov 2003 04:57 AM (UTC)
Message
I have skimmed this rather lengthy thread, but have a couple of comments.

First, if DNS is down for you, you could use mudconnector.com (assuming you can get to that of course) and find the IP address. Then use that (the numeric one).

I am assuming that if you can post to this forum you can get to *some* parts of the Internet.

For example, I connected to the MUD you mentioned and found its address to be 198.107.27.119. You could try that.

In fact its banner page says:

Server: mud.archsysinc.com : 7000 [198.107.27.119]
Version: 1.0.11 - Private Alpha
Homepage: http://www.archsysinc.com/avp

Coding by CYBER_Aeon (cyber_aeon@hotmail.com)

Shadowfyr, as for your suggestion that there can be multiple users simultaneously with the address 68.134.245.34, I don't think so. Unless you are behind a NAT router and using a private IP address, which this one is not, only one user can, at any one moment, be using a single IP address. The whole point of routers is to find a route for a packet from A to B, and having multiple possible destinations for a packet would really screw things up. I mean, what if one 68.134.245.34 was in London, and the other one in Moscow? Are the intermediate routers really going to send packets off in all directions in the hope that they are accepted somewhere? I don't think so; that would flood the network with packets.

I agree that multiple users might be assigned an IP over time (say by a dial-up ISP), however *at any one time* it will only be given to one user. Conceivably, if they disconnect with some outstanding packets coming their way, someone else might get them if the new user is given the same IP, but the SYN and ACK bits, and sequence numbers, will be wrong and the packets would be discarded.

On an internal network, certainly, multiple people around the world could share addresses like 10.0.0.1, however once their packets are on the Internet they are changed by the router to a real, unique address, and tagged (with a unique port number) so that the router can re-route the return packet to the correct PC. This is probably what you are thinking of.

See this page for an example of private IP address ranges:

http://www.netdummy.net/privateip.html
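The private ranges that page describes are also baked into Python's standard ipaddress module, so you can check an address mechanically; the two addresses below are simply the examples already used in this thread.

import ipaddress

for text in ("10.0.0.1", "198.107.27.119"):
    addr = ipaddress.ip_address(text)
    # is_private is True for RFC 1918 ranges such as 10.0.0.0/8,
    # 172.16.0.0/12 and 192.168.0.0/16
    print(text, "private" if addr.is_private else "public")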

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Shadowfyr   USA  (1,791 posts)  Bio
Date Reply #10 on Wed 19 Nov 2003 06:47 PM (UTC)
Message
Yes, Nick, within any 'local' network each user has one IP only. However, there are more websites and users in existence than can often actually fit in the available addresses. The way around this is either to buy more blocks, which may not be available, thus getting exclusive rights to them, or to use a range of addresses that is commonly used by many ISPs for dynamic allocation. This may not necessarily happen in the same country, but there is nothing to really prevent happyshrimp.jp, dijabringabeer.au 'and' aol.com from using the same block of addresses for their users. The network still has to know how to get stuff back to them. I am not sure how common it is, but my understanding is that it can and does happen sometimes with dynamic IP addresses. In most cases dynamic IPs don't run web or email servers, so they don't need exclusive access to the IP.

I can't remember where exactly I heard about it, but I think it was in an explanation of how dynamic IPs are assigned in The Hacker Quarterly, and they generally know more about how this stuff works than the people using it.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #11 on Wed 19 Nov 2003 07:13 PM (UTC)
Message
Just because a bunch of people use the same range to pick IP addresses from doesn't mean that the same IP address is picked simultaneously for two different people. What exactly do you mean by block of addresses? Is that everything after the first component of the address?

I also think there are other things going on, such as masquerading. Not every person actually needs to have their own IP address on the global scope; they can have their unique local address, and connect through their router which masquerades for them as one IP address. As a simple example, at home all my computers connect through one router that has a single IP address; therefore all of us "share" the same IP address, but the router takes care of forwarding the packets where they're meant to go. Obviously, this means that incoming packets that aren't responding to an outgoing or already established connection are discarded - but if you think about it, that's the whole point of a firewall.
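A toy model of that forwarding table, purely illustrative: the router remembers which internal machine opened each outgoing connection, rewrites the source to its single public address, and uses the remembered port to send replies back to the right PC. Unsolicited packets match nothing and are dropped. The addresses and ports are placeholders.

# Toy NAT/masquerading table: many private machines share one public address.
PUBLIC_IP = "68.134.245.34"      # the router's single public address (example)
table = {}                       # public source port -> (private ip, private port)
next_port = 40000

def outbound(private_ip, private_port):
    """Rewrite an outgoing connection to come from the router's public address."""
    global next_port
    public_port = next_port
    next_port += 1
    table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def inbound(public_port):
    """Forward a reply to the machine that opened the connection, or drop it."""
    return table.get(public_port)    # None means unsolicited: drop the packet

src = outbound("192.168.1.10", 51515)
print("reply to", src, "goes to", inbound(src[1]))
print("unsolicited packet:", inbound(12345))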

So I'm not convinced that what you mentioned is actually a problem or even exists, because routers are meant to take care of most of that already. But then again, I don't really feel like arguing about this, since to talk about it everybody needs to be fully aware of the theory behind it (and the implementation of the theory), which is not the case. (I'm not saying I know and you don't, just that it's meaningless to talk about it if everybody isn't qualified to do so.)

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shadowfyr   USA  (1,791 posts)  Bio
Date Reply #12 on Wed 19 Nov 2003 08:28 PM (UTC)
Message
> What exactly do you mean by block of addresses? Is that
> everything after the first component of the address?

Yep. In my case I am in the 66.185.0.0 to 66.255.0.0 range or something, while another one in California uses 66.0.0.0 to 66.129.0.0 or something. Someone else probably purchased permission to use the stuff in between those.

But you are right, no point in this if everyone is not on the same page.

One final comment though: the fact that it can happen came up, if I remember right, due to someone using a sniffer to track packets to their home machine and noticing that they were receiving packets that had their destination address but didn't correspond to any outgoing requests. I.e., nothing, including any spyware on their machine, sent the request. They just happened at that moment to have the same IP as someone else who had made them. Someplace a router probably saw requests coming from two different directions with the same IP and, being unable to tell which one was the correct one, automatically redirected all traffic for that IP to both. However, since routers can tell if a packet is directed at them specifically, they are designed to ignore anything not requested through them, so for us end users the result is completely transparent. In other words it isn't a "problem", but it does mean that things are more complex than you seem to think.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #13 on Wed 19 Nov 2003 09:38 PM (UTC)
Message
You know, those unsolicited packets could just as easily be someone trying to probe the computer on various ports. Our home server receives thousands of such packets per day, from people trying to find open ports and exploit vulnerabilities. People have written programs to automatically poke all over the place on all sorts of ports, and I wouldn't be half surprised if at least some of those packets were from that.

The router could also be set up to forward all packets it didn't understand to everybody on the network. For instance, somebody could connect on port xxxx, and the router could be set up to forward this to all computers behind it; at this point, the computers would figure out themselves what to do with the packet.

I don't think the system is simple, but I certainly don't think it's random or unpredictable, either. It is governed by rules, and has a lot of mathematical theory and practical thought behind it. As far as I know, the only "randomness" involved would be a problem with the electrical wires themselves, e.g. external interference with the signal. And of course, program bugs or human errors, but that's a whole other story...

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shadowfyr   USA  (1,791 posts)  Bio
Date Reply #14 on Thu 20 Nov 2003 01:34 AM (UTC)
Message
But he was using a packet sniffer and looking at some of the contents. They were legit data, but not anything requested. There is a notable and obvious difference between port scanning, attempted buffer overflows and legit traffic, and telling them apart when looking at the contents is easy.
