HttpWebRequest multi-threading performance problem: request timeout errors
Reposted from: http://hi.baidu.com/ju_feng/blog/item/b1c41dbf09ad9e0119d81fb0.html
Check with Netstat -abn | find ":443": if the two pooled network connections are stuck in TIME_WAIT or CLOSE_WAIT state, the service becomes unusable.
Solutions:
1. By default HttpWebRequest allows only two connections per host, so under multi-threaded load at most two requests run concurrently.
The size of the HttpWebRequest connection pool can be raised via
System.Net.ServicePointManager.DefaultConnectionLimit = 20;
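A minimal sketch of raising the limit before issuing concurrent requests (the limit of 20, the thread count, and the URL are assumptions used only for illustration):

// Raise the per-host connection limit before any request is created (assumed value: 20).
System.Net.ServicePointManager.DefaultConnectionLimit = 20;

for (int i = 0; i < 10; i++)
{
    System.Threading.ThreadPool.QueueUserWorkItem(delegate
    {
        // Placeholder endpoint used only for illustration.
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("https://example.com/api");
        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
        {
            reader.ReadToEnd();
        }
    });
}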
2. If a request's connection is not released because of a network problem, it keeps occupying a slot in the connection pool, which reduces the number of connections available for concurrent requests.
This can be addressed as follows:
2.1 Enable TCP keep-alive probing via ServicePointManager, so the connection is closed when the peer stops responding:
System.Net.ServicePointManager.SetTcpKeepAlive(true, _keepLiveTime, _intervalTime);
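A sketch with concrete values (SetTcpKeepAlive takes milliseconds; the 30-second probe delay and 1-second retry interval are assumptions):

// Start probing after 30 s of inactivity, then retry every 1 s (assumed values).
int keepAliveTimeMs = 30000;
int keepAliveIntervalMs = 1000;
System.Net.ServicePointManager.SetTcpKeepAlive(true, keepAliveTimeMs, keepAliveIntervalMs);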
3. Periodically check the health of the ServicePoint used by the HttpWebRequest:
ServicePoint point = webReq.ServicePoint;
string connectionsGroupName = webReq.ConnectionGroupName;
if (point != null)
{
    // Too many open connections is treated as unhealthy (CurrentConnections assumed as the metric).
    if (point.CurrentConnections > 15)
        point.CloseConnectionGroup(connectionsGroupName);
}
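A sketch of running that check on a timer (the 30-second interval and the threshold of 15 are assumptions; webReq is the request variable used in the snippets of this post):

// Re-run the ServicePoint health check periodically on a background timer.
System.Threading.Timer healthTimer = new System.Threading.Timer(delegate
{
    ServicePoint sp = webReq.ServicePoint;
    if (sp != null && sp.CurrentConnections > 15)        // assumed threshold
    {
        // Drop the whole connection group; fresh connections are created on demand.
        sp.CloseConnectionGroup(webReq.ConnectionGroupName);
    }
}, null, 0, 30000);                                      // check now, then every 30 s (assumed)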
4. When using HttpWebRequest, always make sure the stream returned by GetRequestStream and the object returned by GetResponse are closed; otherwise the connection is easily left in CLOSE_WAIT state.
using (Stream stream = webReq.GetRequestStream())
{
stream.Write(buffer, 0, buffer.Length);
}
HttpWebResponse response = null;
try
{
response = webReq.GetResponse() as HttpWebResponse;
}
finally
{
try
{
if (response != null)
response.Close();
//ServicePoint point = webReq.ServicePoint;
//string connectionsGroupName = webReq.ConnectionGroupName;
//if (point != null)
//{
// if (point.CurrentConnections > 15)
// point.CloseConnectionGroup(connectionsGroupName);
//}
}
catch // ignore failures while releasing the connection
{
}
}
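The response body should be disposed with the same care; a sketch using nested using blocks for the response, its stream, and the reader (the request variable and URL here are placeholders):

// Dispose the response, its stream, and the reader so the socket is not left half-open.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("https://example.com/api");   // placeholder URL
using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
using (Stream respStream = resp.GetResponseStream())
using (StreamReader reader = new StreamReader(respStream))
{
    string body = reader.ReadToEnd();
}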
The following question, quoted from the MSDN forum thread listed under REF below, describes the same symptom:
I am wondering whether there is a way to handle broken (or better: closed) TCP connections that reside in the connection pool (ServicePoint) used by an HttpWebRequest instance. What I have seen on Windows 2003 Server systems is that under certain circumstances, all sockets used to connect to a web service are in CLOSE_WAIT state and all subsequent HttpWebRequests fail with a timeout; no new connections to the web service are opened. Looking at the problem with WireShark, I couldn't see any HTTP traffic even though the application kept on making requests that all ended in a timeout.
I have tried to reproduce this behavior on a development system by writing a simple HTTP server that closes the socket after it has sent a proper HTTP 1.1 response. If you do this, you will see the same behavior that I have described above. Of course, HTTP 1.1 assumes that by default connections are kept alive unless the server explicitly sends a "Connection: Close" header. Thus, the client side would assume that the connection should be kept alive although the server (which does not behave according to HTTP 1.1) has already closed it. What stuns me is that the HttpWebRequest, or more specifically the ServicePoint, does not recover from such a situation (on Windows Vista/2003 Server the client-side sockets stay in CLOSE_WAIT until they're killed by the OS; on another development machine running the Windows 7 RC they're killed by the OS rather quickly and are just gone; no new ones will be created in either case).
I have tried various things on the client-side, so far I could not solve this problem (except by disabling keep alive in each HttpWebRequest by setting the "KeepAlive" property to "false"). The following list compiles what I've tried to recover from the situation described above:
- Various combinations of using() blocks to dispose all resources (the WebResponse object, the stream retrieved by GetResponseStream(), and the StreamReader used to read from the stream), as well as calling WebResponse.Close(), which should do everything required according to my understanding of the MSDN documentation.
- Calling CloseConnectionGroup() on the ServicePoint used by the HttpWebRequest to somehow "kill" the connections
- Temporarily disabling keep alives to recover
- Increasing the number of connections in the pool of the ServicePoint so that it doesn't stop the system from functioning properly if some connections die
- Enabling TCP keep alives on the ServicePoint
- Using a lower ConnectionLeaseTimeout on the ServicePoint (works correctly in general, fails to do anything when connections are closed)
Thus, I'd like to ask here whether someone faced similar problems and might know a way to overcome such issues. I'd also like to know whether I am hunting ghosts here, although I still doubt it at the moment. I also don't see how a socket could get into a CLOSE_WAIT state without the remote communication partner closing the connection (i.e. sending FIN) and I don't understand why Vista/2003 Server do not seem to respond to this by sending a FIN and receiving an ACK to transition from CLOSE_WAIT to LAST_ACK and then to CLOSED. The HTTP server I wrote does not tinker with the LingerState of the TcpClient, default behavior should apply when it comes down to gracefully closing sockets.
Maybe I am missing some very important point about this whole thing, I'd love to be enlightened :-). If you need any further info, I'm happy to supply it.
PS: I have done all of my tests in Visual Studio 2008 linking both .NET 2.0 and .NET 3.5, it doesn't make a difference from what I can tell.
Cheers!
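For reference, a sketch of the client-side settings discussed in the post: disabling keep-alive per request (the only workaround it reports as effective) and the ConnectionLeaseTimeout it also tried. The URL and the 60-second lease value are assumptions.

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://example.com/service");  // placeholder URL
request.KeepAlive = false;    // sends "Connection: Close", so half-closed sockets are never reused

// Recycle pooled connections after at most 60 s, even when they appear idle (assumed value).
request.ServicePoint.ConnectionLeaseTimeout = 60000;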
HTTP connection states:
// Server with keep-alive enabled. The first parameter is whether the client enables keep-alive,
// the second parameter is whether the client closes the network connection.
// Server: Close_Wait    Client: Fin_Wait_2
//HttpRequestTest.TestHttpConnect(false, false);
// Server: Close         Client: Time_Wait
//HttpRequestTest.TestHttpConnect(false, true);
// Server: ESTABLISHED   Client: ESTABLISHED
//HttpRequestTest.TestHttpConnect(true, true);
// Server: Close         Client: Time_Wait
//HttpRequestTest.TestHttpConnect(false, true);
// Server with keep-alive disabled. The first parameter is whether the client enables keep-alive,
// the second parameter is whether the client closes the network connection.
// Server: Close_Wait    Client: Fin_Wait_2
//HttpRequestTest.TestHttpConnect(false, false);
// Server: Close         Client: Time_Wait
//HttpRequestTest.TestHttpConnect(false, true);
// Server: Close         Client: Time_Wait
//HttpRequestTest.TestHttpConnect(true, true);
// Server: Close         Client: Time_Wait
//HttpRequestTest.TestHttpConnect(false, true);
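The TestHttpConnect helper itself is not included in the post; a hypothetical sketch consistent with the parameter descriptions above (first parameter: client keep-alive; second parameter: whether the client closes the connection) might look like this:

// Hypothetical helper: issue one request and optionally leave the response open,
// so the resulting socket states can be observed with netstat on both ends.
static void TestHttpConnect(bool clientKeepAlive, bool clientCloses)
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://localhost:8080/");  // placeholder test server
    req.KeepAlive = clientKeepAlive;

    HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
    if (clientCloses)
        resp.Close();   // releases the connection (or returns it to the pool when keep-alive is on)
    // When clientCloses is false, the response is deliberately left open to observe CLOSE_WAIT / FIN_WAIT_2.
}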
REF:
http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/5c77f187-add4-46cb-a3f8-93e78910eddc
http://haacked.com/archive/2004/05/15/http-web-request-expect-100-continue.aspx
http://www.cnblogs.com/zealic/archive/2008/05/01/1107942.html