Understanding MaxServicePointIdleTime and DefaultConnectionLimit

To understand these settings you need to understand how the HttpWebRequest class relates to the ServicePointManager and ServicePoint classes.  When you make a request with the HttpWebRequest class, the ServicePointManager provides a ServicePoint object to the HttpWebRequest object to handle the connection to a web resource (web server).  This ServicePoint object manages all the requests for a particular URI host and protocol.  For example, there will be one ServicePoint object for all the requests in my sample because they are all going to the server JSAN1 and using the HTTP protocol.  The ServicePoint object has some settings that control how it services the requests.  You can set the default values for these settings on the ServicePointManager before any ServicePoint objects are created.  Once a ServicePoint object has been created, changing these default values will not affect it; the defaults apply only to newly created ServicePoint objects.

The ServicePoint model allows optimization, control and performance enhancements for the request.  When you make an HTTP protocol request, ultimately that request and response take place over a TCP socket.  Creating and initializing this TCP socket is an expensive operation in terms of computer resources and time, so if you can re-use the connection for the next request, you save the cost of creating and initializing a new socket.  The HTTP 1.1 protocol provides for this optimization by keeping the socket connection alive for subsequent requests from the client to the server.
Reference - How ServicePointManager works: http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.aspx (see remarks)
Reference - ServicePoint: http://msdn.microsoft.com/en-us/library/system.net.servicepoint.aspx
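
To see the keep-alive behavior in action, here is a minimal sketch (the URI is only a placeholder, and it assumes the same using directives as the full listing further down) showing that HttpWebRequest asks for a persistent connection by default:

// KeepAlive is true by default, so the connection is kept open after the response
// and can be reused by the next request to the same host and scheme.
// The URI below is only a placeholder for this sketch.
HttpWebRequest req = WebRequest.Create("http://JSAN1/test.htm") as HttpWebRequest;
req.KeepAlive = true; // the default; set it to false to ask the server to close the connection instead
using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
{
    Console.WriteLine("Status: {0}", resp.StatusCode);
} // closing (disposing) the response hands the connection back to the ServicePoint for reuse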

The ServicePoint object manages a pool of connections to the server identified by the URI passed in.  The number of simultaneous connections to a server is controlled by the ServicePoint.ConnectionLimit property.  This property is set when the ServicePointManager creates the ServicePoint object for your connections to a host.  Requests to the server are queued and are served by those connections as they become free.  The default number of connections is 2.  Ref: http://msdn.microsoft.com/en-us/library/system.net.servicepoint.connectionlimit.aspx
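
A minimal sketch of that relationship (again with a placeholder URI, and assuming the usual System and System.Net using directives): set the default before any ServicePoint exists, then look at the ServicePoint the ServicePointManager hands out.

// Defaults must be set before the ServicePoint is created to have any effect on it.
ServicePointManager.DefaultConnectionLimit = 4;

ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://JSAN1/test.htm"));
Console.WriteLine("ConnectionLimit:    {0}", sp.ConnectionLimit);    // 4, copied from the default above
Console.WriteLine("MaxIdleTime (ms):   {0}", sp.MaxIdleTime);        // 100000 unless MaxServicePointIdleTime was changed
Console.WriteLine("CurrentConnections: {0}", sp.CurrentConnections); // 0 until a request is actually made

// Changing the default now affects only ServicePoints created later, not this one.
ServicePointManager.DefaultConnectionLimit = 12;
Console.WriteLine("Existing ServicePoint still has ConnectionLimit {0}", sp.ConnectionLimit); // still 4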

Obviously, if the number of requests your application makes to a resource is greater than the connection limit, requests are queued and may time out before they are taken from the queue and processed on an active connection.  An example may help you understand this.  This example creates 64 worker threads and executes HTTP requests on them.  Create a simple HTTP page on a server in your network (not on the same machine you run this code on) and run this code, passing the URI of the page on the command line.  You will see some timeout exceptions generated:

Complete code listing for the sample:

using System;
using System.Net;
using System.Threading;
using System.Text;
using System.IO;

// ClientGetAsync issues the HTTP requests from worker threads.
class ClientGetAsync
{
    // This class is used in the worker thread to pass information in
    public class threadStateInfo
    {
        public string strUrl; // the URL we will be making a request on
        public ManualResetEvent evtReset; // used to signal this worker thread is finished
        public int iReqNumber; // used to indicate which request this is
    }

    // this does the work of making the http request from a worker thread
    static void ThreadProc(Object stateInfo)
    {
        threadStateInfo tsInfo = stateInfo as threadStateInfo;
        try
        {
            HttpWebRequest aReq = WebRequest.Create(tsInfo.strUrl) as HttpWebRequest;
            aReq.Timeout = 4000;  // I want the request to only take 4 seconds, otherwise there is some problem
            Console.WriteLine("Begin Request {0}", tsInfo.iReqNumber);
            HttpWebResponse aResp = aReq.GetResponse() as HttpWebResponse;
            System.Threading.Thread.Sleep(500); // simulate a half second delay for the server to process the request
            aResp.Close(); // if you do not do this close, you will time out for sure!  The socket will not be freed.
            Console.WriteLine("End Request {0}", tsInfo.iReqNumber);
        }
        catch (WebException theEx)
        {
            Console.WriteLine("Exception for Request {0}: {1}", tsInfo.iReqNumber, theEx.Message);
        }

        // signal the main thread this request is done
        tsInfo.evtReset.Set();
    }

    public static void Main(string[] args)
    {
        if (args.Length < 1)
        {
            showusage();
            return;
        }

        // The number of worker threads to make simultaneous requests (do not go over 64)
        int numberRequests = 64;

        // Used to keep the Main thread alive until all requests are done
        ManualResetEvent[] manualEvents = new ManualResetEvent[numberRequests];

        // Get the URI from the command line.
        string httpSite = args[0];

        // 2 is the default connection limit
        ServicePointManager.DefaultConnectionLimit = 2;

        // Get the current settings for the thread pool
        int minWorker, minIOC;
        ThreadPool.GetMinThreads(out minWorker, out minIOC);
        // without setting the thread pool up, there was enough of a delay to cause timeouts!
        ThreadPool.SetMinThreads(numberRequests, minIOC);

        for (int i = 0; i < numberRequests; i++)
        {
            // Create a class to pass info to the thread proc
            threadStateInfo theInfo = new threadStateInfo();
            manualEvents[i] = new ManualResetEvent(false); // thread done event
            theInfo.evtReset = manualEvents[i];
            theInfo.iReqNumber = i; // just to track what request this is
            theInfo.strUrl = httpSite; // the URL to open
            ThreadPool.QueueUserWorkItem(ThreadProc, theInfo); // let the thread pool do the work
        }

        // Wait until every ManualResetEvent is set so the application
        // does not exit until all of the worker threads are finished.
        // WaitAll can wait on a maximum of 64 handles here.
        WaitHandle.WaitAll(manualEvents);

        Console.WriteLine("done!");
    }

    public static void showusage() {
        Console.WriteLine("Attempts to GET a URL");
        Console.WriteLine("\r\nUsage:");
        Console.WriteLine("   ClientGetAsync URL");
        Console.WriteLine("   Example:");
        Console.WriteLine("      ClientGetAsync http://www.contoso.com/");
    }
}

Please read the comments in the code.  There are some interesting points here.  The first is that if you do not set the thread pool minimum (ThreadPool.SetMinThreads) to the anticipated number of threads you will use, the delay of creating these threads on the fly is enough to fail all of the requests with a timeout exception.  Next, note the Close() call on the response.  No matter what else we do to the code, if you do not call Close(), those sockets will remain open and a new socket will be created for each request (you will see that in a network trace).  This will also prevent any of the connections in the ServicePoint object from being freed, so no additional requests can be served.  Another thing that is sort of cool about this code is that it simulates the ASP.NET environment as far as the thread pool goes.  If your sample code works OK in this test app, chances are you will be able to successfully run similar code in ASP.NET with the same thread pool and connection settings.  Reference - ASP.NET tuning: http://msdn.microsoft.com/en-us/library/ms998583.aspx
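
One way to guarantee that Close() happens even if an exception is thrown while you are working with the response is to wrap it in using blocks (the response and its stream are disposable, and disposing them closes the connection back into the ServicePoint).  A minimal sketch of the same request pattern, using the tsInfo fields from the listing above:

HttpWebRequest aReq = WebRequest.Create(tsInfo.strUrl) as HttpWebRequest;
aReq.Timeout = 4000;
using (HttpWebResponse aResp = aReq.GetResponse() as HttpWebResponse)
using (StreamReader reader = new StreamReader(aResp.GetResponseStream()))
{
    string content = reader.ReadToEnd(); // drain the body so the connection can be reused
} // Dispose calls Close for us, even if ReadToEnd throws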

Next, play with this line of code:

// 2 is Default Connection Limit

ServicePointManager.DefaultConnectionLimit = 2;

  

Set it to 12 and see how the number of successful requests increases.  Now set it to 4.  You will see a different number of requests succeed and fail depending on the sleep value simulated in the request (System.Threading.Thread.Sleep(500);), the response time of your target server to provide the content, network traffic and the performance of your machine.  Similar factors will affect your real world application.  Also, I set the timeout for the web request to 4 seconds because I expect the result to come back very quickly.  This may or may not be realistic in your situation.  Only testing and careful design will allow you to determine what best works for your application.       

The question often comes up: “why can’t I set the DefaultConnectionLimit to a HUGE number?”  You can set this number in the hundreds if you want.  This will take more computer resources, however.  More threads need to be created, more sockets are kept open, and the target server will be taxed more.  It is all a balance of resources and expected performance.  These factors need to be considered during design and testing.

The value MaxServicePointIdleTime comes into play when you consider performance as well.  This value indicates how long a socket will be kept open after a request and response, before the client gracefully closes the socket down.  Again, the design and performance of your system will determine what a proper value would be for your situation.  If you close unused sockets down too fast, then a socket will never get reused by another request (remember I said the connection creation time is very expensive).  If the value is too long, then sockets (read: resources, threads, CPU time and memory) may be held unnecessarily on the source and target computers.  Finding the correct balance is important.  In addition, if the MaxServicePointIdleTime is not less than the server's Keep-Alive timeout, there is a chance that your code will start using a connection at the same time the server is sending a message to close the socket, resulting in a WebException.
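
Both values can be set as process-wide defaults (before the ServicePoint is created) or adjusted on an individual ServicePoint afterwards.  A minimal sketch, with illustrative numbers and a placeholder URI:

// Process-wide defaults; they apply to ServicePoints created after this point.
ServicePointManager.DefaultConnectionLimit  = 6;
ServicePointManager.MaxServicePointIdleTime = 2500; // milliseconds

// Or tune one host after its ServicePoint already exists.
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://JSAN1/test.htm"));
sp.ConnectionLimit = 10;   // just for this host and scheme
sp.MaxIdleTime     = 5000; // keep idle connections to this host for up to 5 seconds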

Example App Design

Let’s work through an example together.  Let’s say this application is designed to handle 40 users every 4 seconds.  Let’s also assume it will take .5 seconds to process a web request in your code.  If this were a single-threaded application, 40 users processed in 4 seconds means 4/40 or .1 seconds per user would have to be the performance of your application.  Given that the request to the backend server takes .5 seconds (normally), there is no way that your single-threaded application can handle this load and meet the spec.  The very best you could hope for is 20 seconds!  So you decide to create a multi-threaded solution based on my example and apply the principles in this article.  Change the number of requests to 40 in the sample code to simulate all the requests coming in.  A single thread can complete at best 4 seconds / .5 seconds processing time = 8 requests in the window, so in theory, with 6 simultaneous connections over the 4 second period at 8 requests per connection, we would get around 6 * 8 or 48 requests done in 4 seconds.  So set the DefaultConnectionLimit to 6 and let’s test.
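
The arithmetic above can be written out as a quick sizing calculation.  This is the ideal case only; it ignores connection spin-up, thread scheduling and network jitter, which is exactly what the test below will expose:

// Ideal-case sizing: 40 requests in a 4 second window at .5 seconds per request.
double windowSeconds     = 4.0;
double secondsPerRequest = 0.5;
int    usersPerWindow    = 40;

double requestsPerConnection = windowSeconds / secondsPerRequest;                          // 8
int    connectionsNeeded     = (int)Math.Ceiling(usersPerWindow / requestsPerConnection);  // 5
Console.WriteLine("Each connection can serve {0} requests per window", requestsPerConnection);
Console.WriteLine("Theoretical minimum connections: {0}", connectionsNeeded);
// The sample uses 6, which gives 6 * 8 = 48 >= 40 and leaves a little headroom.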

Well, my results are: about 10 of the 40 requests timed out.  Why is that, and what in my calculation is wrong?  First, the sleep value I have in my code to simulate the time it takes to get a request back from the server is added to the actual time it took for the request to come back, so the socket was not freed in exactly .5 seconds; it was a little longer.  Also, it takes some time for those 40 worker threads to start processing.  They are background worker threads and run at a lower priority than the main thread, so the main thread (adding the work) gets a higher priority and is running in a tight loop, and the worker threads will be delayed a little before they start working.  Finally, there is the overhead of the connections initially spinning up!  To see if the time spent getting the connections up is part of the delay, I added the following code after the WaitHandle.WaitAll(manualEvents); call in the sample, to check whether, once the 6 connections are created and maintained by the ServicePoint object, the performance would be as I expected:

        for (int i = 0; i < numberRequests; i++)
        {
            // Create a class to pass info to the thread proc
            threadStateInfo theInfo = new threadStateInfo();
            manualEvents[i].Reset(); // put the event back into the unsignalled state
            theInfo.evtReset = manualEvents[i];
            theInfo.iReqNumber = i + 40; // just to track what request this is
            theInfo.strUrl = httpSite; // the URL to open
            ThreadPool.QueueUserWorkItem(ThreadProc, theInfo); // let the thread pool do the work
        }

        WaitHandle.WaitAll(manualEvents);

Sure enough!  On my system, once the 6 connections are created in the first loop, each of the second set of requests succeeds.

Next, let’s play with the MaxServicePointIdleTime to see how this affects the performance.  Let’s say in our case we expect that the 40 users in 4 seconds rate occurs in bursts every 6 seconds.  In other words, 40 users come in, there is 4 seconds of processing time, a 2 second delay, and another 40 users come in.  Simulate this in code with this sleep statement just before the new loop you added in the previous step:

System.Threading.Thread.Sleep(2000);

 

On my machine that seemed to perform well.  But let’s see how the MaxServicePointIdleTime affects this.  Set the idle time to .5 seconds at the top of the code:

// The default connection limit is 2; raise it to 6

ServicePointManager.DefaultConnectionLimit = 6;

ServicePointManager.MaxServicePointIdleTime = 500;

 

  

 

I was surprised to see that this still succeeded.  I started Netmon to see what was happening with the network sockets, and indeed, with the shorter idle timeout, more sockets were being used.  I picked a couple of the conversations, filtered by port, and saw that the connection was being reset by the client after .5 seconds.  Apparently in MY case, the overhead of the ServicePoint object initializing internal structures was greater than the cost of setting up the socket.  Realize that in my case the target server was doing nothing else and was on the same switch as the source computer, so the TCP overhead for creating the connection was indeed negligible.

All this said, pick a MaxServicePointIdleTime that is less than the Keep-Alive timeout of the server and small enough that you do not keep ports open when it is not necessary.  In my scenario I expected a burst of 40 users after 2 seconds of inactivity, so a value of 2-6 seconds seems like a good choice.  I set it to 2500 milliseconds.
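
If you are not sure what the server's Keep-Alive timeout is, some servers (Apache, for example) advertise it in a Keep-Alive response header, while IIS typically does not, so treat this as a best-effort check.  A minimal sketch with a placeholder URI:

HttpWebRequest req = WebRequest.Create("http://JSAN1/test.htm") as HttpWebRequest; // placeholder URI
using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
{
    // Apache-style servers may send "Keep-Alive: timeout=5, max=100"; the header may be absent.
    string keepAliveHeader = resp.Headers["Keep-Alive"];
    Console.WriteLine("Keep-Alive header: {0}", keepAliveHeader ?? "(not sent by this server)");
}
// Set the idle time for this host safely below the server's timeout; remember the global
// MaxServicePointIdleTime default only affects ServicePoints created after it is changed.
req.ServicePoint.MaxIdleTime = 2500;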

Next I wanted to find a DefaultConnectionLimit setting that would allow the requests to succeed all of the time.  Simple experimentation showed that 10 seemed to always succeed.  Confident that I had a good set of settings, I now had to plan for the fact that sometimes the network, local resources or the target server may create a timeout condition.  One way to address this would be to increase the HttpWebRequest.Timeout value.  Another would be to increase the number of connections available.  Finally, you could simply retry the request when it fails.

To simulate the system under a heavy load and test retrying the request, I increased the sleep value in my worker ThreadProc to one second.  I then added a retry immediately after the catch:

    static void ThreadProc(Object stateInfo)
    {
        threadStateInfo tsInfo = stateInfo as threadStateInfo;
        bool success = false;
        try
        {
            HttpWebRequest aReq = WebRequest.Create(tsInfo.strUrl) as HttpWebRequest;
            aReq.Timeout = 4000;  // I want the request to only take 4 seconds, otherwise there is some problem
            Console.WriteLine("Begin Request {0}", tsInfo.iReqNumber);
            HttpWebResponse aResp = aReq.GetResponse() as HttpWebResponse;
            System.Threading.Thread.Sleep(1000); // simulate a one second delay for the server to process the request
            aResp.Close(); // if you do not do this close, you will time out for sure!  The socket will not be freed.
            Console.WriteLine("End Request {0}", tsInfo.iReqNumber);
            success = true;
        }
        catch (WebException theEx)
        {
            Console.WriteLine("Exception for Request {0}: {1}", tsInfo.iReqNumber, theEx.Message);
        }

        try
        {
            if (success == false)
            {
                Console.WriteLine("Retry Request {0}:", tsInfo.iReqNumber);
                HttpWebRequest aReq = WebRequest.Create(tsInfo.strUrl) as HttpWebRequest;
                aReq.Timeout = 4000;  // I want the request to only take 4 seconds, otherwise there is some problem
                Console.WriteLine("Begin Retry Request {0}", tsInfo.iReqNumber);
                HttpWebResponse aResp = aReq.GetResponse() as HttpWebResponse;
                System.Threading.Thread.Sleep(1000); // simulate a one second delay for the server to process the request
                aResp.Close(); // if you do not do this close, you will time out for sure!  The socket will not be freed.
                Console.WriteLine("End Retry Request {0}", tsInfo.iReqNumber);
                success = true;
            }
        }
        catch (WebException theEx)
        {
            Console.WriteLine("2nd Exception for Request {0}: {1}", tsInfo.iReqNumber, theEx.Message);
        }

        // signal the main thread this request is done
        tsInfo.evtReset.Set();
    }


This allowed all of the requests to succeed!  Retrying after the exception has the benefit that we do not use additional resources unnecessarily, and in the event there is a temporary problem with the system, the requests will still go through.


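Rather than duplicating the request code for the retry, the same behavior can be written as a small loop so the request logic lives in one place.  This is only a sketch of the idea (a hypothetical TryRequest helper inside the same class), not a change made above; it keeps the same timeout, simulated delay and logging:

// Sketch: the same request-then-retry behavior expressed as a loop.
static bool TryRequest(threadStateInfo tsInfo, int maxAttempts)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            HttpWebRequest aReq = WebRequest.Create(tsInfo.strUrl) as HttpWebRequest;
            aReq.Timeout = 4000;
            Console.WriteLine("Begin Request {0} (attempt {1})", tsInfo.iReqNumber, attempt);
            using (HttpWebResponse aResp = aReq.GetResponse() as HttpWebResponse)
            {
                System.Threading.Thread.Sleep(1000); // simulated server processing time
            } // disposing the response closes it and frees the connection
            Console.WriteLine("End Request {0} (attempt {1})", tsInfo.iReqNumber, attempt);
            return true;
        }
        catch (WebException theEx)
        {
            Console.WriteLine("Exception for Request {0} (attempt {1}): {2}",
                              tsInfo.iReqNumber, attempt, theEx.Message);
        }
    }
    return false;
}

With a helper like this, ThreadProc reduces to calling TryRequest(tsInfo, 2) and then setting tsInfo.evtReset.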


Summary

This is only a simple example, but it should give you some hands-on experience with the two settings: MaxServicePointIdleTime and DefaultConnectionLimit.  If you work through this document and the sample code on your own, you will have a good idea of how these values can be changed to help you create an application that is efficient and that will work for you.  NOTHING can replace careful design, initial specification and analysis.  You will need to load test and see how your system behaves in the normal case and when unexpected events affect the normal processing.  You should be able to extend this knowledge to ASP.NET and Windows applications that use the HttpWebRequest class and successfully design or troubleshoot them.  Network Monitor is also a great tool to see what is actually happening over the network.  I only showed once how I used the tool to see the connection activity over the network, but I use this tool daily to troubleshoot HTTP issues.

Let me know if you find this a useful resource!
