Jonathan George's Blog

Static Resource Versioning in IIS7

Posted in Technical Stuff by Jonathan on January 13, 2010

A couple of months ago, I blogged about how to improve your YSlow score under IIS7. One of the recommendations from the Yahoo! best practices guide against which YSlow rates your site is that you should set far-future expiry dates for resources such as scripts, images and stylesheets.

This is good advice, and straightforward to implement under IIS7, but it can leave you with a bit of a problem – once you’ve released your site with all of your scripts and stylesheets set to expire next millennium, you can no longer edit them as you please. If you change them and release them under the same names, returning visitors to your site will not see the changes – they will use the cached versions that you told them were good for another 1000 years.

Instead, if you need to change one of your files, you need to give it a new name so that clients who downloaded and cached the original version are sure to get the update when it’s deployed. This means you need to update any references to point to the new file, which has the potential to be a bit of a management nightmare.

Obviously, if you’ve only got one or two of each type of file and you’ve referenced them from a master page or similar shared component then you can handle this manually. However, if you’ve got more than that, or if you’re following the recommendation to combine and pack files, then life can get more fiddly.

On a previous project, I’ve used the Aptimize Website Accelerator, which handles all this stuff for you in an extremely slick way. However, for projects that don’t use this, I wanted to come up with a way of making the process easier.

In order to do this I decided to take advantage of ASP.NET’s internal URL rewriting feature, exposed via the HttpContext.RewritePath method. What I ended up with – which you can download from Codeplex if you’d like to give it a try – has two parts: the logic to add and remove version numbers from file names, and an HttpModule to apply this logic to incoming URLs. The idea is that the code writing out links to the JS/CSS files can pass those filenames through the provider to add a version number, and the HttpModule will intercept incoming requests for those files and rewrite them to point to the correct filename.

The resource versioning provider

Whilst I’m not a massive fan of the ASP.NET provider model, I’ve recently been feeling the pain of using a variety of different open source projects with overlapping dependencies on different versions of the same tools. I therefore decided to use the provider model to allow me to swap out different ways of generating the version number.

The first important class here is ResourceVersioningProvider. This defines the two methods that providers need to implement.

    public abstract class ResourceVersioningProvider : ProviderBase
    {
        public abstract string AddVersionNumberToFileName(string fileName);
        public abstract string RemoveVersionNumberFromFileName(string fileName);
    }

It should be pretty obvious what they do from their names.

I’ve then got a base class, ResourceVersioningProviderBase, which contains the code that’s independent of how the version number is generated (basically the filename parsing/modifying logic, as well as some caching). This class is abstract and defines a single abstract method, GetVersionNumber(). This should return the version number that’s going to be used to make the filename unique.
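As a rough sketch of that filename parsing/modifying logic – the class and method names here are illustrative, and the exact naming scheme may differ from the real code in the Codeplex download, which inserts the version between the base name and the extension:

```csharp
using System;
using System.IO;

public static class VersionedFileNames
{
    // "site.css" + "1.0.3656.123" -> "site-1.0.3656.123.css"
    public static string AddVersion(string fileName, string version)
    {
        string extension = Path.GetExtension(fileName);
        string baseName = fileName.Substring(0, fileName.Length - extension.Length);
        return baseName + "-" + version + extension;
    }

    // "site-1.0.3656.123.css" + "1.0.3656.123" -> "site.css";
    // filenames without the version suffix come back unchanged.
    public static string RemoveVersion(string fileName, string version)
    {
        string extension = Path.GetExtension(fileName);
        string baseName = fileName.Substring(0, fileName.Length - extension.Length);
        string suffix = "-" + version;
        return baseName.EndsWith(suffix, StringComparison.OrdinalIgnoreCase)
            ? baseName.Substring(0, baseName.Length - suffix.Length) + extension
            : fileName;
    }
}
```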

The final relevant class is AssemblyBasedResourceVersioningProvider, which inherits ResourceVersioningProviderBase and implements GetVersionNumber by returning the full version number of an assembly specified in the configuration. The configuration itself looks like this:

    <resourceVersioning default="assemblyVersion" enabled="true">
      <providers>
        <add name="assemblyVersion"
             extensionsToVersion=".js,.css,.less"
             assemblyName="WhoCanHelpMe.Web"
             type="Cadenza.Web.ResourceVersioning.AssemblyBasedResourceVersioningProvider,
                   Cadenza.Web.ResourceVersioning" />
      </providers>
    </resourceVersioning>

As you can see, there’s also an attribute that controls which file extensions the provider will add version numbers to. This means that if you call “AddVersionNumberToFileName” and pass in a filename with a different extension, no version number will be added.
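Putting the two configuration attributes together, the behaviour of AssemblyBasedResourceVersioningProvider can be sketched like this – the real class derives from ResourceVersioningProviderBase and picks these values up from the provider configuration, so the constructor and names below are illustrative only:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

public class AssemblyVersionSketch
{
    private readonly string assemblyName;
    private readonly string[] extensionsToVersion;

    public AssemblyVersionSketch(string assemblyName, string extensionsToVersion)
    {
        this.assemblyName = assemblyName;
        this.extensionsToVersion = extensionsToVersion.Split(',');
    }

    // The full four-part version of the configured assembly, e.g. "1.0.3656.123".
    public string GetVersionNumber()
    {
        return Assembly.Load(this.assemblyName).GetName().Version.ToString();
    }

    // The extensionsToVersion filter: filenames with any other extension
    // pass through without a version number being added.
    public bool ShouldVersion(string fileName)
    {
        return this.extensionsToVersion.Contains(
            Path.GetExtension(fileName), StringComparer.OrdinalIgnoreCase);
    }
}
```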

The resource versioning HttpModule

The HttpModule that forms the other half of the solution is pretty straightforward. Every incoming URL is cracked open using the Uri.Segments property, and the final segment (assumed to be the requested file name) is passed to the provider to remove the version number, if present. If the provider returns a filename that differs from the one in the request URL, it is used to build a new URL that can be passed to HttpContext.RewritePath. Processed URLs are cached (regardless of whether or not they require a rewrite) to improve performance.
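A minimal sketch of that flow – the static provider accessor is a hypothetical stand-in, and the caching is omitted here; the real module is in the Codeplex download:

```csharp
using System;
using System.Web;

public class ResourceVersioningModule : IHttpModule
{
    // The rewrite decision, split out so it can be exercised without a web
    // server: returns the rewritten path, or null when no rewrite is needed.
    public static string GetRewrittenPath(string absolutePath, Func<string, string> removeVersion)
    {
        int lastSlash = absolutePath.LastIndexOf('/');
        string fileName = absolutePath.Substring(lastSlash + 1);
        string unversioned = removeVersion(fileName);
        return unversioned == fileName
            ? null
            : absolutePath.Substring(0, lastSlash + 1) + unversioned;
    }

    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            HttpContext context = ((HttpApplication)sender).Context;
            string rewritten = GetRewrittenPath(
                context.Request.Url.AbsolutePath,
                // Hypothetical accessor for the configured provider.
                ResourceVersioning.Provider.RemoveVersionNumberFromFileName);
            if (rewritten != null)
            {
                context.RewritePath(rewritten);
            }
        };
    }

    public void Dispose() { }
}
```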

In Practice

The module is in use on the Who Can Help Me? app I introduced in my previous post. If you have a look at the source for the homepage, you’ll see that all the CSS, JavaScript and DotLess files have the current WhoCanHelpMe.Web assembly version number as part of their names. You can get at the Who Can Help Me? source via its Codeplex site to see how it’s implemented.

An Alternative Approach

Despite having written this post a month or so ago, I was only spurred into posting it by reading about this alternative approach from Karl Seguin. He’s got an interesting series going on at the moment covering ASP.NET performance – it’s well worth a read. I can see that it would be fairly straightforward to replace my AssemblyBasedResourceVersioningProvider with a FileHashBasedResourceVersioningProvider, although given that the no. 1 item on my to-do list at the moment is learning NHibernate, it might be a while before I get round to it!

As I mentioned above, the compiled assemblies and the source code are available from my Codeplex site. Please have a play and let me know what you think.



How to improve your YSlow score under IIS7

Posted in Technical Stuff by Jonathan on September 9, 2009

Note: This was originally posted on my old blog at the EMC Consulting Blogs site.

On my last two projects, I’ve been involved in various bits of performance tweaking and testing. One of the common tools for checking that your web pages are optimised for delivery to the user is YSlow, which rates your pages against the Yahoo! best practices guide.

Since I’ve now done this twice, I thought it would be useful to document what I know about the subject. The examples I’ve given are from my current project, which is built using ASP.NET MVC, the Spark View Engine and hosted on IIS7.

Make Fewer HTTP Requests

There are two sides to this. Firstly, and most obviously, the lighter your pages, the faster the user can download them. This is something that has to be addressed at an early stage in the design process – you need to ensure you balance the need for a compelling, rich user experience against the requirement for fast download and page render times. Having well-defined non-functional requirements up front, and ensuring that page designs are technically reviewed with those SLAs in mind, is an important part of this. As I have learned from painful experience, having a conversation about slow page download times after you’ve implemented the fantastic designs that the client loved is not pleasant.

The second part of this is that all but the sparsest of pages can still be optimised by reducing the number of individual files downloaded. You can do this by combining your CSS, Javascript and image files into single files. CSS and JS combining and minification is pretty straightforward these days, especially with things like the Yahoo! UI Library: YUI Compressor for .NET. On my current project, we use the MSBuild task from that library to combine, minify and obfuscate JS and CSS as part of our release builds. To control which files are referenced in our views, we add the list of files to our base ViewModel class (from which all the other page ViewModels inherit) using conditional compilation statements.

    private static void ConfigureJavaScriptIncludes(PageViewModel viewModel)
    {
    #if DEBUG
        viewModel.JavaScriptFiles.Add("Libraries/jquery-1.3.2.min.js");
        viewModel.JavaScriptFiles.Add("Libraries/jquery.query.js");
        viewModel.JavaScriptFiles.Add("Libraries/jquery.validate.js");
        viewModel.JavaScriptFiles.Add("Libraries/xVal.jquery.validate.js");
        viewModel.JavaScriptFiles.Add("resources.js");
        viewModel.JavaScriptFiles.Add("infoFlyout.js");
        viewModel.JavaScriptFiles.Add("bespoke.js");
    #else
        viewModel.JavaScriptFiles.Add("js-min.js");
    #endif
    }

    private static void ConfigureStyleSheetIncludes(PageViewModel viewModel)
    {
    #if DEBUG
        viewModel.CssFile = "styles.css";
    #else
        viewModel.CssFile = "css-min.css";
    #endif
    }

Images are trickier, and we’ve made the decision not to bother using CSS sprites, since most of the ways to do it are manual. If this is important to you, see the end of this post for a tool that can do it on the fly.

Use a Content Delivery Network

This isn’t something you can really address through configuration. In case you’re not aware, the idea is to farm off the hosting of static files to a network of geographically distributed servers, ensuring that each user receives that content from the server closest to them. There are a number of CDN providers around – Akamai, Amazon CloudFront and Highwinds, to name but a few.

The biggest problem I’ve encountered with the use of CDNs is for sites which have secure areas. If you want to avoid annoying browser popups telling you that your secure page contains insecure content, you need to ensure that your CDN content is available at both HTTP and HTTPS URLs, and that you have a good way of switching between them as needed. This is a real pain for things like background images in CSS, and I haven’t found a good solution for it yet.

On the plus side, I was pleased to see that Spark provides support for CDNs in the form of its <resources> section, which allows you to define a path to match and a full URL to substitute – so, for example, you could tell it to map all files referenced in the path “~/content/css/” to “”. Pretty cool – if Louis could extend that to address the secure/insecure path issue, it would be perfect.

Add an Expires or Cache-Control Header

The best practice recommendation is to set far future expiry headers on all static content (JS/CSS/images/etc). This means that once the files are pushed to production you can never change them, because users may have already downloaded and cached an old version. Instead, you change the content by creating new files and referencing those instead (read more here.) This is fine for most of our resources, but sometimes you will have imagery which has to follow a set naming convention – this is the case for my current project, where the product imagery is named in a specific way. In this scenario, you just have to make sure that the different types of images are located in separate paths so you can configure them independently, and then choose an appropriate expiration policy based on how often you think the files will change.

To set the headers, all you need is a web.config in the relevant folder. You can either create this manually or use IIS Manager, which will create the file for you. To do it using IIS Manager, select the folder you need to set the headers for, choose the “HTTP Response Headers” option and then click the “Set Common Headers” option in the right-hand menu. I personally favour including the web.config in my project and in source control, as it then gets deployed with the rest of the project (a great improvement over IIS6).

Here’s the one we use for the Content folder.

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <staticContent>
          <clientCache cacheControlMode="UseExpires"
                       httpExpires="Sat, 31 Dec 2050 00:00:00 GMT" />
        </staticContent>
      </system.webServer>
    </configuration>

This has the effect of setting the HTTP Expires header to the date specified.

And here’s what we use for the product images folder.

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <staticContent>
          <clientCache cacheControlMode="UseMaxAge"
                       cacheControlMaxAge="7.00:00:00" />
        </staticContent>
      </system.webServer>
    </configuration>

This writes max-age=604800 (7 days’ worth of seconds: 7 × 24 × 60 × 60) into the Cache-Control header in the response. The browser will use this in conjunction with the Date header to determine whether the cached file is still valid.

Our HTML pages all have max-age=0 in the Cache-Control header, and have the Expires header set to the same as the Date and Last-Modified headers, as we don’t want these cached – it’s an ecommerce website and we have user-specific information (e.g. basket summary, recently viewed items) on each page.

Compress components with Gzip

IIS7 does this for you out of the box, but it’s not always 100% clear how to configure it, so here are the details.

Static compression is for files that are identical every time they are requested, e.g. Javascript and CSS. Dynamic compression is for files that differ per request, like the HTML pages generated by our app. The content types that actually fall under each heading are controlled in the IIS7 configuration store, which resides in the file C:\Windows\System32\inetsrv\config\applicationHost.config. In that file, assuming you have both static and dynamic compression features enabled, you’ll be able to find the following section:

    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </dynamicTypes>
      <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/javascript" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </staticTypes>
    </httpCompression>

As you can see, configuration is done by response MIME type, making it pretty straightforward to set up…

    Item                          MIME type
    ----                          ---------
    Views (i.e. the HTML pages)   text/html
    CSS                           text/css
    JS                            application/x-javascript
    Images                        image/gif, image/jpeg, image/png

One gotcha there is that IIS7 is configured out of the box to return a content type of application/x-javascript for .JS files, which results in them coming under the configuration for dynamic compression. This is not great – dynamic compression is intended for frequently changing content, so dynamically compressed files are not cached for future requests. Our JS files don’t change outside of deployments, so we really need them to be in the static compression category.

There’s a fair bit of discussion online as to what the correct MIME type for Javascript should be, with the three choices being:

  • text/javascript
  • application/x-javascript
  • application/javascript

So you have two choices – you can change your config, either in web.config or applicationHost.config to map the .js extension to text/javascript or application/javascript, which are both already registered for static compression, or you can move application/x-javascript into the static configuration section in your web.config or applicationHost.config. I went with the latter option and modified applicationHost.config. I look forward to a comment from anyone who can explain whether that was the correct thing to do or not.
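For reference, here’s roughly what that latter change looks like – the same <httpCompression> section shown earlier, with the application/x-javascript entry moved from <dynamicTypes> into <staticTypes>. This is a sketch based on my own applicationHost.config; check the element against your own file before editing it:

```xml
<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
  <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
  <dynamicTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </dynamicTypes>
  <staticTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="application/javascript" enabled="true" />
    <!-- Moved here from dynamicTypes so .JS files are statically compressed. -->
    <add mimeType="application/x-javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </staticTypes>
</httpCompression>
```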

Finally, in the <system.webServer> element of your app’s main web.config, add this:

    <urlCompression doStaticCompression="true"
                    doDynamicCompression="true" />

Once you’ve got this configured, you’ll probably open up Firebug or Fiddler, hit your app, and wonder why your static files don’t have the Content-Encoding: gzip header. The reason is that IIS7 is being clever. If you return to applicationHost.config, one thing you won’t see is this:

    <serverRuntime alternateHostName=""
                   appConcurrentRequestLimit="5000"
                   enabled="true"
                   enableNagling="false"
                   frequentHitThreshold="2"
                   frequentHitTimePeriod="00:00:10"
                   maxRequestEntityAllowed="4294967295"
                   uploadReadAheadSize="49152" />

You’ll just have an empty <serverRuntime /> tag. The values I’ve shown above are the defaults, and the interesting ones are “frequentHitThreshold” and “frequentHitTimePeriod”. Essentially, a file is only a candidate for static compression if it’s a frequently hit file, and those two attributes control the definition of “frequently hit”. So the first time round, you won’t see the Content-Encoding: gzip header, but if you’re quick with the refresh button, it should appear. If you spend some time Googling, you’ll find some discussion around setting frequentHitThreshold to 1 – some say it’s a bad idea, some say do it. Personally, I decided to trust the people who built IIS7, since in all likelihood they have larger brains than me, and move on.

There are a couple of other interesting configuration values in this area – the MSDN page covers them in detail.

Minify Javascript and CSS

The MSBuild task I mentioned under “Make fewer HTTP requests” minifies the CSS and JS as well as combining the files.

Configure Entity tags (etags)

ETags are a bit of a pain, with a lot of confusion and discussion around them. If anyone has the full picture, I’d love to hear it, but my current understanding is that when running IIS, the generated ETags vary by server, and will also change if the app restarts. Since this effectively renders them useless, we’re removing them. Howard has already blogged this technique for removing unwanted response headers in IIS7. Some people have had luck with adding a new, blank response header called ETag in their web.config file, but that didn’t work for me.
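The general shape of the module-based technique can be sketched like this – this is the approach rather than Howard’s exact code, and it assumes the IIS7 integrated pipeline (Response.Headers isn’t writable under classic mode):

```csharp
using System;
using System.Web;

// Strips the ETag header from every response just before the
// headers are sent to the client.
public class RemoveETagModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.PreSendRequestHeaders += (sender, e) =>
        {
            ((HttpApplication)sender).Response.Headers.Remove("ETag");
        };
    }

    public void Dispose() { }
}
```

The module then needs registering in the <modules> section of your web.config, like any other HttpModule.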

Split Components Across Domains

The HTTP 1.1 protocol states that browsers should not make more than two simultaneous connections to the same domain. Whilst the most recent browser versions (IE8, Chrome and Firefox 3+) ignore this, older browsers will normally still be tied to that figure. By creating separate subdomains for your images, JS and CSS, you can get these browsers to download more of your content in parallel. It’s worth noting that using a CDN also helps here. Doing this will also help with the “Use Cookie-Free Domains for Components” rule, since any domain cookies you use won’t be sent for requests to the subdomains/CDN. However, you will probably fall foul of the issue I mentioned earlier around mixing secure and insecure content on a single page.

A great tool for visualising how your browser is downloading content is Microsoft Visual Roundtrip Analyzer. Fire it up and request a page, and you will be bombarded with all sorts of information (too much to cover here) about how well the site performs for that request.

Optimise Images

Sounds obvious, doesn’t it? There are some good tools out there to help with this, including one that is now part of YSlow. If you have an image-heavy website, you can make a massive difference to overall page weight by doing this.

Make favicon.ico Small and Cacheable

If you don’t have a favicon.ico, you’re missing a trick – not least because the browser is going to ask for it anyway, so you’re better off returning one than returning a 404 response. The recommendation is that it’s kept under 1KB and given a fairly long expiration period.

WAX – Best practice by default

On my last project, I used the Aptimize Website Accelerator to handle the majority of the things I’ve mentioned above (my current project has a tight budget, so it’s not part of the solution for v1). It’s a fantastic tool and I’d strongly recommend that anyone wishing to speed their site up gives it a try. I was interested to learn recently that Microsoft have started running it on one of their own sites and are seeing a 40-60% reduction in page load times. It handles combination and minification of CSS, Javascript and images, as well as a number of other cool tricks such as inlining images into CSS files where the requesting browser supports it, setting expiration dates appropriately, and so on.

Update: Microsoft’s SharePoint team have blogged about their use of WAX, with the detail being provided by Ed Robinson, the CEO of Aptimize.

