HTTP Caching Features and Issues
Tradeoffs in Cache Location
HTTP caching can be implemented at a variety of places in the request/response chain. The location of the cache must be chosen based on a fundamental trade-off inherent in all caching: proximity versus universality. Simply put, the closer the cache is to the requestor of the information, the greater the savings when data is pulled from the cache rather than fetched from the source. Conversely, the farther the cache is from the requestor (and thus the closer to the source), the greater the number of devices that can benefit from it. Let's see how this trade-off manifests itself in the three classes of devices where caches may be found.
The cache with which most Internet users are familiar is the one found on the local client. It is usually built into the Web browser software, and is for this reason called a Web browser cache. This cache stores recent documents and files accessed by a particular user, so that they can be made quickly available if that user requests them again.
Since the cache is in the user's own machine, a request for an item that the cache contains is filled instantly, resulting in no network transaction and instant gratification for the user. However, that user is the only one who can benefit from the cache, which is for this reason sometimes called a private cache.
Devices such as proxy servers that reside between Web clients and servers are also often equipped with a cache. If a user wants a document not in his or her local client cache, the intermediary may be able to provide it, as shown in Figure 319. This is not as efficient as retrieving from the local cache, but far better than going back to the Web server. However, the intermediary has the advantage that all devices using it can benefit from its cache, which may be termed public or shared. This can be useful, because members of an organization often access similar documents.
For example, in an organization developing a hardware product to be used on Apple computers, many different people might be accessing documents on Apple's Web site. With a shared cache, a request from user A would often result in items being cached that could be used by user B.
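The shared-cache behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not code from any real proxy: the names (SharedCache, fetch_from_origin) and the example URL are assumptions, and a real proxy would also check freshness before reusing an entry.

```python
# A minimal sketch of a shared (public) cache, as a caching proxy might keep.
# Illustrative only: names and the URL below are invented for this example.

class SharedCache:
    def __init__(self):
        self._store = {}        # URL -> cached response body
        self.hits = 0
        self.misses = 0

    def get(self, url, fetch):
        """Return the body for url, fetching from the origin only on a miss."""
        if url in self._store:
            self.hits += 1
            return self._store[url]
        self.misses += 1
        body = fetch(url)
        self._store[url] = body
        return body

origin_fetches = []

def fetch_from_origin(url):
    origin_fetches.append(url)  # stand-in for a full network round trip
    return "<html>content of %s</html>" % url

cache = SharedCache()
url = "http://developer.example.com/somedoc.html"   # hypothetical URL
page_for_a = cache.get(url, fetch_from_origin)  # user A: miss, origin fetched
page_for_b = cache.get(url, fetch_from_origin)  # user B: hit, no origin fetch
```

After both requests, only one trip to the origin server has occurred: user B is served entirely from the cache populated by user A's request.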
Web servers themselves may also implement a cache. While it may seem a bit strange to have a server maintain a cache of its own documents, this can be of benefit in some circumstances. A resource might require a significant amount of server resources to create; for example, consider a Web page that is generated using a complex database query. If this page is retrieved frequently by many clients, there can be a large benefit to creating it periodically and caching it rather than generating it on the fly for each request.
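The periodic-regeneration idea can be sketched as a small time-to-live (TTL) cache. This is a hedged illustration, not a description of any particular server: the function names (serve, generate_report_page), the path, and the 300-second TTL are all assumptions made for the example.

```python
import time

# Sketch of server-side caching of an expensively generated page: the page
# is rebuilt at most once per TTL interval rather than once per request.
# All names and the TTL value are illustrative assumptions.

CACHE_TTL = 300.0                  # seconds a cached copy stays fresh

_cache = {}                        # path -> (generated_at, body)
generation_count = 0

def generate_report_page(path):
    """Stand-in for an expensive, database-driven page build."""
    global generation_count
    generation_count += 1
    return "<html>report for %s</html>" % path

def serve(path, now=None):
    """Serve from cache while fresh; regenerate only when the copy is stale."""
    now = time.time() if now is None else now
    entry = _cache.get(path)
    if entry is not None and now - entry[0] < CACHE_TTL:
        return entry[1]            # fresh cached copy: skip regeneration
    body = generate_report_page(path)
    _cache[path] = (now, body)
    return body

serve("/report", now=0.0)          # first request: page is generated
serve("/report", now=100.0)        # within the TTL: served from cache
serve("/report", now=400.0)        # past the TTL: page is regenerated
```

Three requests cost only two generations; under heavy load the savings grow with the request rate, since generation cost becomes bounded by the TTL rather than by traffic.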
Since the Web server is farthest from the users, this results in the least savings for a cache hit, as the client request and server response must still travel the full path over the network between client and server. However, this distance from the client also means that all users of the server can benefit from its cache.
The control of caching in clients and servers is accomplished in the same manner as most other types of control in HTTP: through the use of special headers. The most important of these is the Cache-Control general header, which has a number of directives that allow the operation of caches to be managed. Other important caching-related headers include Expires and Vary. For more detailed information on HTTP caching, please consult RFC 2616, Section 13.
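As a rough illustration of how a cache might interpret the Cache-Control header, the sketch below parses a directive list and applies a freshness check based on max-age, per RFC 2616 Sections 13 and 14.9. It is deliberately minimal: real caches must handle many more directives (private, s-maxage, must-revalidate, and so on), and the function names here are invented for the example.

```python
# Minimal, illustrative handling of two Cache-Control cases:
# "no-store"/"no-cache" prevent reuse without revalidation, and
# "max-age" bounds how long (in seconds) a stored response stays fresh.

def parse_cache_control(header_value):
    """Parse e.g. 'max-age=60, public' into {'max-age': '60', 'public': None}."""
    directives = {}
    for part in header_value.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.lower()] = value or None
    return directives

def is_fresh(cache_control, age_seconds):
    """Decide whether a stored response may be reused without revalidation."""
    d = parse_cache_control(cache_control)
    if "no-store" in d or "no-cache" in d:
        return False
    if d.get("max-age") is not None:
        return age_seconds < int(d["max-age"])
    return False                   # no explicit freshness info: revalidate

fresh = is_fresh("public, max-age=3600", 120)    # 2 minutes old: reusable
stale = is_fresh("public, max-age=3600", 7200)   # older than max-age: stale
```

A real implementation would also fold in the Expires header (as a fallback when max-age is absent) and honor Vary when matching stored responses to new requests.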
The TCP/IP Guide (http://www.TCPIPGuide.com)
Version 3.0 - Version Date: September 20, 2005
© Copyright 2001-2005 Charles M. Kozierok. All Rights Reserved.