After operating a globally accessed web service for the last couple of years, traffic costs, web performance and user experience have become familiar topics for me.
All these topics come down to web optimization and caching.
The first and probably easiest task is activating gzip compression. There are some known issues, e.g. with some versions of Internet Explorer. In my setup I have had good results with this configuration for Apache:
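For reference, a sketch of the mod_deflate setup I mean, taken from the standard Apache 2.x recipe (adjust the MIME types to your content):

```apache
# Compress the usual text formats (mod_deflate)
AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/javascript

# Netscape 4.x has problems with gzipped non-HTML content
BrowserMatch ^Mozilla/4 gzip-only-text/html
# Netscape 4.06-4.08 have even more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip
# MSIE masquerades as Netscape, but handles gzip fine
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
```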
If you have a server farm you should disable ETags:
The ETag is computed from per-server filesystem metadata (inode, mtime, size), so it is unique to each server. Same file on 2 nodes: different ETag, and the client revalidates for nothing.
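In Apache, disabling it is two lines (`Header` needs mod_headers, in case your backend still sets one):

```apache
FileETag None
Header unset ETag
```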
After these 2 steps you have to do your homework:
Make your image filenames unique: we hashed the content and renamed each image to include the hash. Create a table in your database to resolve the original filename to the hash; that way you can keep human-readable filenames in your templates. Now you are ready to configure an Expires header of 1-2 months. If an image gets modified, you get a new hash, thus a new filename, and the client fetches the new version.
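The renaming step can be sketched roughly like this (the helper name and the in-memory dict are made up for illustration; we kept the lookup table in the database):

```python
import hashlib

def hashed_name(original_name: str, content: bytes) -> str:
    """Build a unique filename by embedding a hash of the file content."""
    digest = hashlib.md5(content).hexdigest()[:12]
    stem, dot, ext = original_name.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{original_name}.{digest}"

# The lookup table your templates consult (ours lived in the database).
resolve = {}
resolve["logo.png"] = hashed_name("logo.png", b"...image bytes...")
```

Change a byte of the image and the name changes, so a far-future Expires header is safe.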
If you serve lots of images and have lots of cookies on your domain, you might consider using a separate, cookie-free domain for your images. I once operated a dedicated web cluster for something like a REST API. You know what? I had more incoming than outgoing traffic: an 8:1 split!
Even if your website differs for every user, there is some hope for you: evaluate which parts are identical for every user and try to load the per-user data via AJAX. Now you can cache the big shared parts.
Talking about your cache: you should run a reverse proxy. I've used Squid for several years now. Back in the day Apache Traffic Server wasn't ready and the other competitors didn't work out either. I know: it's old, it's ugly, and for some tasks you have to be inventive. In the end: it just works. If you need redirects, send those requests to an Apache.
Your dev team and editorial staff want frequent changes, right? Even if you place an Expires header of just 10 minutes on your dynamic pages, your application server farm will be thankful. Even with 1 minute: how many requests do you get in that time frame? Keep in mind: every request hitting your back end means expensive database queries.
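With mod_expires that is a one-line decision, assuming your dynamic pages go out as text/html:

```apache
ExpiresActive On
ExpiresByType text/html "access plus 10 minutes"
```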
A reasonable Expires header is only half the job. You also need a Last-Modified header. I can hear you moan: I have dynamic content from multiple sources. I have no clue what changed when. Our dev team and I had the same problem. Then one day you wake up at 7am and you see it: damn, it's so easy!
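At least in our case the insight boiled down to: you don't need to know *what* changed, only *when*; the page is as new as its newest ingredient. A sketch, with the source timestamps invented for illustration:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def last_modified(source_timestamps):
    """A composed page is as fresh as its most recently changed source."""
    return max(source_timestamps)

# Hypothetical timestamps: the article, the comments, the sidebar feed.
sources = [
    datetime(2009, 5, 1, 12, 0, tzinfo=timezone.utc),
    datetime(2009, 5, 3, 8, 30, tzinfo=timezone.utc),
    datetime(2009, 4, 28, 23, 59, tzinfo=timezone.utc),
]
header = format_datetime(last_modified(sources), usegmt=True)
```

Each data source only has to track its own timestamp; the front end just takes the maximum.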
For debugging caching I use Firefox with TamperData, Firebug and YSlow. wget, Fiddler2 and Charles Proxy are also very helpful.
Web browsers only open a limited number of concurrent connections per hostname, so it's quite common to serve the same content from 2 hostnames. Good practice is to use modulo-2 arithmetic on a hash of the filename to decide between host1 and host2. This way each file is always requested from the same hostname. You know: caching!
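A sketch of that hostname pick (the hostnames are made up); hashing the filename keeps the choice stable, so a given file never flip-flops between hosts and busts the browser cache:

```python
import zlib

HOSTS = ["static1.example.com", "static2.example.com"]  # hypothetical hosts

def host_for(filename: str) -> str:
    """Deterministically map a filename to one of the static hosts (modulo 2)."""
    return HOSTS[zlib.crc32(filename.encode()) % len(HOSTS)]
```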
Do you have web requests purely for statistics? I bet you return a 1x1 image, around 40 bytes in size. Return HTTP 204 No Content instead: 0 bytes!
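A minimal WSGI sketch of such a stats endpoint (handler name invented; the logging is left as a comment):

```python
def tracking_app(environ, start_response):
    """Record the hit, then answer with an empty 204 instead of a 1x1 GIF."""
    # ...log environ["PATH_INFO"], environ.get("HTTP_REFERER"), etc. here...
    start_response("204 No Content", [])
    return [b""]
```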
Now you think: that's it! You are right, on the web request end. But you can still tune the path between your application server and your database server. MySQL has something called the query cache. A great feature.
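It is enabled in my.cnf roughly like this (the size is a guess; tune it for your workload, and note the cache is invalidated on every write to the table):

```ini
[mysqld]
query_cache_type = 1      # cache all cacheable SELECT results
query_cache_size = 64M    # memory reserved for cached result sets
```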
Talking about your web server farm: you are still using NFS to distribute content to your web servers, right? 2 years ago I switched to HTTP. No stale NFS mounts, and caching, Expires and Last-Modified lower my internal network traffic. I wouldn't consider switching back to NFS for that part.