Monday, May 10, 2010

PainPoint = The ability to quickly pinpoint the root cause of poor Web Page Load Times.

Dear fellow IT Problem Solvers,

The following post was inspired by a recent and very interesting commercial by Google for their Chrome browser. Here is a link to the video on YouTube: http://www.youtube.com/watch?v=nCgQDjiotG0

I captured the websites as they loaded into all 3 browsers (Chrome, Firefox, and Internet Explorer) and then compared them. What I noticed was that all 3 browsers, for the most part, pull the data similarly. Each opens 6 TCP connections per IP (if there are 6 or more objects) and pulls the data in parallel. More importantly, each browser is a slave to how the web pages are written. If there are objects to pull, then the browser is forced to pull them. If there are servers referenced in the index page, then the browser must talk to those servers.
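
For anyone who wants to poke at this themselves, here is a minimal Python 3 sketch (the URL is just a placeholder, and the tag/attribute heuristics are my own simplification) that pulls an index page and counts the objects and distinct hosts it references; in other words, the servers the browser has no choice but to talk to:

# Minimal sketch: count the objects and distinct hosts an index page references.
# Python 3 standard library only; the URL below is an arbitrary placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class RefCollector(HTMLParser):
    """Collect src/href attributes from img, script, iframe, and link tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.refs = []

    def handle_starttag(self, tag, attrs):
        if tag in ("img", "script", "iframe", "link"):
            for name, value in attrs:
                if name in ("src", "href") and value:
                    self.refs.append(urljoin(self.base_url, value))

def summarize(url):
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    collector = RefCollector(url)
    collector.feed(html)
    hosts = {urlparse(ref).hostname for ref in collector.refs if urlparse(ref).hostname}
    print(f"{url}: {len(collector.refs)} referenced objects across {len(hosts)} hosts")
    for host in sorted(hosts):
        print("  " + host)

if __name__ == "__main__":
    summarize("http://www.example.com/")

Running something like this against a content-heavy portal page will typically show dozens of objects spread across 10 or more hosts, which is exactly the pattern described below.
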
All in all, I found that the vast majority of the time it takes to load these web pages comes down to the following:
1) Each web page has to hit over 10 different IPs to pull all of its advertising and content. This is very inefficient if the goal is to load the page quickly. Every extra server costs a DNS lookup (a time-consuming process), plus all the overhead of setting up multiple TCP connections, slow-starting the transfer of content on those connections, and then tearing those connections down (although tear-down is transparent to the user). The first sketch after this list gives a feel for that per-host setup cost.

2) Each web page had over 50 request/response pairs to its “biggest” server (the server that served up the most content). A request/response pair causes the flow of data to change direction on the network. This “turning” of the data flow is called an “application turn” (some people define the request/response pair itself as an application turn, but the net effect is the same). Each application turn traverses the link from one side to the other, incurring the link latency along the way. For instance, a web page with 100 application turns would require, at a minimum, 1 second to traverse a 10 ms link (latency measured one way). This is somewhat mitigated by the fact that the browser issues requests in parallel. However, in my experience, the higher the number of application turns, the slower the page. Using the same example (assuming serial requests), a 100-application-turn web page would take 10 seconds over a 100 ms link. Since it is unpredictable how much latency a user will have between them and a web server, it is always better to reduce the number of application turns to a minimum. The second sketch below shows the math.
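
Here is a rough sketch of the per-host setup cost from item 1. It times the DNS lookup and the TCP handshake for each host; the host names below are placeholders, so substitute whatever servers your page actually references:

# Rough timing of the per-host setup cost: DNS resolution plus the TCP handshake.
# The host list is illustrative only; it does not come from the captures above.
import socket
import time

HOSTS = ["www.example.com", "ads.example.net", "cdn.example.org"]

for host in HOSTS:
    try:
        t0 = time.perf_counter()
        infos = socket.getaddrinfo(host, 80, type=socket.SOCK_STREAM)
        t1 = time.perf_counter()
        family, socktype, proto, _, sockaddr = infos[0]
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(5)
        sock.connect(sockaddr)
        t2 = time.perf_counter()
        sock.close()
        print(f"{host:24s} DNS {1000 * (t1 - t0):6.1f} ms   TCP connect {1000 * (t2 - t1):6.1f} ms")
    except OSError as exc:
        print(f"{host:24s} failed: {exc}")

Multiply those numbers by 10 or more hosts and the setup overhead alone becomes a noticeable slice of the page load.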
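
And the back-of-the-envelope math from item 2. The turn counts and latencies mirror the examples above; the parallel-connection factor is my own simplification to show that parallelism softens, but does not remove, the cost:

# Lower bound on the time spent just crossing the link, given a number of
# application turns and a one-way link latency (the model described in item 2).
def min_turn_time(app_turns, one_way_latency_ms, parallel_connections=1):
    serial_turns = app_turns / parallel_connections
    return serial_turns * one_way_latency_ms / 1000.0  # seconds

for latency_ms in (10, 100):
    for parallel in (1, 6):
        t = min_turn_time(100, latency_ms, parallel)
        print(f"100 turns, {latency_ms:3d} ms one-way latency, "
              f"{parallel} parallel connection(s) -> {t:5.2f} s minimum")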

Unfortunately, speeding up the client-side browser can only go so far. Many times developers design web pages for how they look and ignore how they transfer over the wire. Inefficient web design (hitting too many servers and retrieving too many small objects, which drives up the number of application turns) can make a web page feel slow to the user. Google is doing a good job with rendering pages, but a better solution would be to analyze the efficiency of web pages (or any other network-touching application) before releasing them, along the lines of the sketch that follows.
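
One possible form such a pre-release check could take: a small sketch that reads a HAR capture (exportable from most browser developer tools; the file name and the 2 KB "small object" threshold are placeholders of mine) and reports requests per host plus the number of small responses:

# Minimal sketch: summarize a page's network footprint from a HAR capture.
# The file name and the "small object" threshold are arbitrary placeholders.
import json
from collections import Counter
from urllib.parse import urlparse

SMALL_OBJECT_BYTES = 2048

with open("page_capture.har", encoding="utf-8") as f:
    entries = json.load(f)["log"]["entries"]

per_host = Counter(urlparse(e["request"]["url"]).hostname for e in entries)
sizes = [max(e["response"].get("bodySize", 0), 0) for e in entries]
small = sum(1 for s in sizes if s < SMALL_OBJECT_BYTES)

print(f"{len(entries)} requests across {len(per_host)} hosts "
      f"({small} responses under {SMALL_OBJECT_BYTES} bytes)")
for host, count in per_host.most_common():
    print(f"  {count:4d}  {host}")

A high host count or a pile of tiny responses is a good hint that the page will feel sluggish on a high-latency link, regardless of which browser renders it.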

I even watched their "behind the scenes" video and learned that synchronizing the timing of their experiments was difficult; eventually the teams had to reconfigure the system settings to load the pages entirely from local cache, essentially to eliminate the variability of network and server congestion. If they had consulted me, I could have told them that up front and saved them dozens of hours of experiments and many potatoes!

While it is interesting to observe Google touting how fast its browser can render pages, in real life the browser engine is typically the least of the worries when it comes to a responsive web page. After watching the Google video, I decided to analyze how each of the mentioned websites was loaded into Chrome, Firefox, and Internet Explorer.
Please note this article would not have been possible without packet analysis guru Bill Eastman. He was able to complete his analysis in less than one hour!

Cheers,
-Andy Fields


Feel free to follow me @PainPoint or add me as a connection on LinkedIn

3 comments:

  1. > What I noticed was that all 3 browsers, for
    > the most part, pull the data similarly. Each
    > opens 6 TCP connections per IP (if there are
    > 6 or more objects) and pulls the data in
    > parallel.

    The browsers you tested all allow six connections at a time. Older browsers, including IE6 and IE7, have a limit of two. IE6 and IE7 are used by just under 20% of users. You can see the limits of different browsers at http://www.browserscope.org/ .


    > This “turning” of the data flow is called
    > an “application turn” (some people define
    > the request/response pair as an application
    > turn, but the net effect is the same).

    The term I have heard most is RTT, for round trip time.

    > Google is doing a good job with rendering
    > pages, but a better solution would be to
    > analyze the efficiency of web pages (or any
    > other network-touching application) before
    > releasing them.

    Shameless plug: Use Page Speed to see what you can do to make your web pages load faster:

    http://code.google.com/speed/page-speed/

    Sam Kerner

    ReplyDelete
  2. Sam,

    Yes, RTT is round trip time; however, "application turn" is a metric that describes the number of times a message pattern changes direction between one endpoint and another. Lots of app turns (especially if they are small in byte size) can make for a very "chatty" application that will often perform terribly across a WAN.

    Nice plug!
    -Andy

    ReplyDelete
  3. FYI, Firefox increased the number of parallel connections to 16 (yikes).

    If you haven't looked at it yet, Google is pushing work on SPDY, which is essentially an HTTP encapsulation layer that attempts to fix a lot of the wrongs with HTTP (multiplexing over a single TCP connection instead of brute-forcing it with many).

    ReplyDelete