The java.net.BindException: Address already in use: connect problem
The likely cause: many new Socket operations are performed in a short time, and socket.close() does not release the bound port immediately. Instead the port is put into the TIME_WAIT state and is released only after a delay (240 s by default; you can see this with netstat -na). Eventually system resources run out (on Windows, what runs out is the pool of ephemeral ports, which by default spans 1024-5000).
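The churn pattern described above can be sketched on loopback in Java (a minimal illustration, not the original author's code; each closed client socket leaves its ephemeral port in TIME_WAIT, which `netstat -na` will show):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ChurnSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {   // any free port
            for (int i = 0; i < 5; i++) {
                Socket client = new Socket("127.0.0.1", server.getLocalPort());
                Socket peer = server.accept();
                client.close();   // client sends the first FIN ...
                peer.close();     // ... so the client's ephemeral port enters TIME_WAIT
            }
            System.out.println("opened and closed 5 connections");
        }
    }
}
```

Run this thousands of times per minute instead of five and the ephemeral range fills up, producing exactly the BindException above.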
There are two ways to avoid this problem. One is to raise the maximum number of connection threads on your web server; 1024 or 2048 is passable. Taking Resin as an example, change thread-pool.thread_max in resin.conf; if you run Apache in front of Resin, remember to tune Apache as well.

The other is to change the operating-system network configuration on the machine running the web server, lowering the TIME_WAIT interval to something like 30 s.
On Red Hat, inspect the relevant options:

[xxx@xxx ~]$ /sbin/sysctl -a | grep net.ipv4.tcp_tw
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_tw_recycle = 0

[xxx@xxx ~]$ vi /etc/sysctl.conf and change them to:

net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1

[xxx@xxx ~]$ sysctl -p to make the kernel parameters take effect.
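On the Java side, SO_REUSEADDR can complement the kernel settings: it lets a socket bind a local port that is still in TIME_WAIT (a hedged sketch; it does not shorten TIME_WAIT itself, and the exact semantics vary by operating system):

```java
import java.io.IOException;
import java.net.Socket;

public class ReuseAddressSketch {
    public static void main(String[] args) throws IOException {
        Socket s = new Socket();      // unconnected socket; options can be set before connect
        // Ask the OS to allow reusing a local port that is still in
        // TIME_WAIT when this socket is later bound or connected.
        s.setReuseAddress(true);
        System.out.println("SO_REUSEADDR=" + s.getReuseAddress());
        s.close();
    }
}
```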
The socket-faq section on TIME_WAIT is excerpted below:

2.7. Please explain the TIME_WAIT state.
Remember that TCP guarantees all data transmitted will be delivered, if at all possible. When you close a socket, the server goes into a TIME_WAIT state, just to be really really sure that all the data has gone through. When a socket is closed, both sides agree by sending messages to each other that they will send no more data. This, it seemed to me, was good enough, and after the handshaking is done, the socket should be closed. The problem is two-fold. First, there is no way to be sure that the last ack was communicated successfully. Second, there may be "wandering duplicates" left on the net that must be dealt with if they are delivered.

Andrew Gierth (andrew@erlenstar.demon.co.uk) helped to explain the closing sequence in the following usenet posting:
Assume that a connection is in ESTABLISHED state, and the client is about to do an orderly release. The client's sequence no. is Sc, and the server's is Ss.

    Client                                              Server
    ======                                              ======
    ESTABLISHED                                         ESTABLISHED
    (client closes)
    ESTABLISHED                                         ESTABLISHED
                 <CTL=FIN+ACK><SEQ=Sc><ACK=Ss> ------->>
    FIN_WAIT_1
                 <<-------- <CTL=ACK><SEQ=Ss><ACK=Sc+1>
    FIN_WAIT_2                                          CLOSE_WAIT
                 <<-------- <CTL=FIN+ACK><SEQ=Ss><ACK=Sc+1>
                                                        (server closes)
                                                        LAST_ACK
                 <CTL=ACK><SEQ=Sc+1><ACK=Ss+1> ------->>
    TIME_WAIT                                           CLOSED
    (2*msl elapses...)
    CLOSED

Note: the +1 on the sequence numbers is because the FIN counts as one byte of data. (The above diagram is equivalent to fig. 13 from RFC 793.)
Now consider what happens if the last of those packets is dropped in the network. The client has done with the connection; it has no more data or control info to send, and never will have. But the server does not know whether the client received all the data correctly; that's what the last ACK segment is for. Now the server may or may not care whether the client got the data, but that is not an issue for TCP; TCP is a reliable protocol, and must distinguish between an orderly connection close where all data is transferred, and a connection abort where data may or may not have been lost.
So, if that last packet is dropped, the server will retransmit it (it is, after all, an unacknowledged segment) and will expect to see a suitable ACK segment in reply. If the client went straight to CLOSED, the only possible response to that retransmit would be a RST, which would indicate to the server that data had been lost, when in fact it had not been.

(Bear in mind that the server's FIN segment may, additionally, contain data.)

DISCLAIMER: This is my interpretation of the RFCs (I have read all the TCP-related ones I could find), but I have not attempted to examine implementation source code or trace actual connections in order to verify it. I am satisfied that the logic is correct, though.
More commentary from Vic:

The second issue was addressed by Richard Stevens (rstevens@noao.edu, author of "Unix Network Programming", see ``1.5 Where can I get source code for the book [book title]?''). I have put together quotes from some of his postings and email which explain this. I have brought together paragraphs from different postings, and have made as few changes as possible.
From Richard Stevens (rstevens@noao.edu):

If the duration of the TIME_WAIT state were just to handle TCP's full-duplex close, then the time would be much smaller, and it would be some function of the current RTO (retransmission timeout), not the MSL (the packet lifetime).

A couple of points about the TIME_WAIT state.

o The end that sends the first FIN goes into the TIME_WAIT state, because that is the end that sends the final ACK. If the other end's FIN is lost, or if the final ACK is lost, having the end that sends the first FIN maintain state about the connection guarantees that it has enough information to retransmit the final ACK.
o Realize that TCP sequence numbers wrap around after 2**32 bytes have been transferred. Assume a connection between A.1500 (host A, port 1500) and B.2000. During the connection one segment is lost and retransmitted. But the segment is not really lost, it is held by some intermediate router and then re-injected into the network. (This is called a "wandering duplicate".) But in the time between the packet being lost & retransmitted, and then reappearing, the connection is closed (without any problems) and then another connection is established between the same host, same port (that is, A.1500 and B.2000; this is called another "incarnation" of the connection). But the sequence numbers chosen for the new incarnation just happen to overlap with the sequence number of the wandering duplicate that is about to reappear. (This is indeed possible, given the way sequence numbers are chosen for TCP connections.) Bingo, you are about to deliver the data from the wandering duplicate (the previous incarnation of the connection) to the new incarnation of the connection. To avoid this, you do not allow the same incarnation of the connection to be reestablished until the TIME_WAIT state terminates.

Even the TIME_WAIT state doesn't completely solve the second problem, given what is called TIME_WAIT assassination. RFC 1337 has more details.
o The reason that the duration of the TIME_WAIT state is 2*MSL is that the maximum amount of time a packet can wander around a network is assumed to be MSL seconds. The factor of 2 is for the round-trip. The recommended value for MSL is 120 seconds, but Berkeley-derived implementations normally use 30 seconds instead. This means a TIME_WAIT delay between 1 and 4 minutes. Solaris 2.x does indeed use the recommended MSL of 120 seconds.

A wandering duplicate is a packet that appeared to be lost and was retransmitted. But it wasn't really lost ... some router had problems, held on to the packet for a while (order of seconds, could be a minute if the TTL is large enough) and then re-injects the packet back into the network. But by the time it reappears, the application that sent it originally has already retransmitted the data contained in that packet.
Because of these potential problems with TIME_WAIT assassinations, one should not avoid the TIME_WAIT state by setting the SO_LINGER option to send an RST instead of the normal TCP connection termination (FIN/ACK/FIN/ACK). The TIME_WAIT state is there for a reason; it's your friend and it's there to help you :-)
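For recognition only, the SO_LINGER trick Stevens warns against looks like this in Java (a sketch, not a recommendation; linger enabled with a zero timeout makes close() abort the connection with an RST instead of the normal FIN exchange):

```java
import java.io.IOException;
import java.net.Socket;

public class AbortiveCloseSketch {
    public static void main(String[] args) throws IOException {
        Socket s = new Socket();      // options can be set before connect
        // Zero-timeout linger: close() will send an RST, skipping
        // TIME_WAIT -- exactly the pattern discouraged above.
        s.setSoLinger(true, 0);
        System.out.println("soLinger=" + s.getSoLinger());
        s.close();
    }
}
```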
I have a long discussion of just this topic in my just-released "TCP/IP Illustrated, Volume 3". The TIME_WAIT state is indeed one of the most misunderstood features of TCP.

I'm currently rewriting "Unix Network Programming" (see ``1.5 Where can I get source code for the book [book title]?'') and will include lots more on this topic, as it is often confusing and misunderstood.
An additional note from Andrew:

Closing a socket: if SO_LINGER has not been called on a socket, then close() is not supposed to discard data. This is true on SVR4.2 (and, apparently, on all non-SVR4 systems) but apparently not on SVR4; the use of either shutdown() or SO_LINGER seems to be required to guarantee delivery of all data.
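Andrew's point about shutdown() can be illustrated in Java (a minimal loopback sketch, not from the original posting): shutdownOutput() sends the FIN but lets the peer keep reading until EOF, so all data is delivered before the connection is torn down.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class OrderlyCloseSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0);   // any free port
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket peer = server.accept()) {
            client.getOutputStream().write("bye".getBytes(StandardCharsets.US_ASCII));
            client.shutdownOutput();          // sends the FIN; the read side stays open
            InputStream in = peer.getInputStream();
            byte[] buf = new byte[16];
            int n, total = 0;
            // Read until EOF: the FIN is seen only after all data has arrived.
            while ((n = in.read(buf, total, buf.length - total)) != -1) {
                total += n;
            }
            System.out.println("received=" + new String(buf, 0, total, StandardCharsets.US_ASCII));
        }
    }
}
```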
Original link: http://hi.baidu.com/w_ge/blog/item/105877c6a361df1b9c163d21.html

************************************************************************
Article Three

You receive the 'WSAENOBUFS (10055)' error when you try to set up TCP connections from ports that are greater than 5000 (from the Microsoft Knowledge Base; the English original is quoted below):
SYMPTOMS

If you try to set up TCP connections from ports that are greater than 5000, the local computer responds with the following WSAENOBUFS (10055) error message:

An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

RESOLUTION

Important This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click the following article number to view the article in the Microsoft Knowledge Base:

322756 (http://support.microsoft.com/kb/322756/) How to back up and restore the registry in Windows

The default maximum number of ephemeral TCP ports is 5000 in the products that are included in the 'Applies to' section. A new parameter has been added in these products. To increase the maximum number of ephemeral ports, follow these steps:

1. Start Registry Editor.
2. Locate the following subkey in the registry, and then click Parameters:
   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
3. On the Edit menu, click New, and then add the following registry entry:
   Value Name: MaxUserPort
   Value Type: DWORD
   Value data: 65534
   Valid Range: 5000-65534 (decimal)
   Default: 0x1388 (5000 decimal)
   Description: This parameter controls the maximum port number that is used when a program requests any available user port from the system. Typically, ephemeral (short-lived) ports are allocated between the values of 1024 and 5000 inclusive.
4. Exit Registry Editor, and then restart the computer.

Note An additional TCPTimedWaitDelay registry parameter determines how long a closed port waits until the closed port can be reused.
Source: lovo的博客 (lovo's blog), tags: java学习 | 杂谈, published 2009-08-31 16:29:34.