Jonathan George's Blog

Optimising an ASP.NET MVC web site, part 3: Application Caching

Posted in Technical Stuff by Jonathan on October 12, 2009

Note: This was originally posted on my old blog at the EMC Consulting Blogs site.

This is part 3 in a series of posts on optimisation work we carried out on my current project, www.fancydressoutfitters.co.uk – an ASP.NET MVC web site built using S#arp Architecture, NHibernate, the Spark view engine and Solr. Please see  part 1 and part 2 for the background.

I doubt many people will disagree with me when I say that one sure-fire way of making a piece of code run faster and consume less of your precious processing power is to do less work in that piece of code. Optimising your code is part of the answer, but however quickly your pages run when it’s just you on the site, you will struggle to serve large numbers of users unless you have a good caching strategy in place.

There are a number of places that caching can be applied in an application built using S#arp Architecture. Here are a few:

  • You can use the NHibernate second-level cache to store the results of database queries.
  • You can apply caching wherever it’s appropriate in your model layer.
  • You can cache the results of controller actions.
  • You can apply output caching to the rendered views (to be covered in part 4).

However, the whole thing is something of a balancing act. You have to understand how often things are going to change and weigh up the benefits of longer caching durations against the need to reflect updates in a timely fashion. You need to understand how your platform architecture affects your caching strategy – for example, what happens if you have multiple web servers, each maintaining its own local cache? You also need to make sure that once you’ve decided on an approach, people stick to it. I’ve never been a big fan of using the standard HTTP Cache directly – it requires you to specify the caching interval every time you add an object to the cache. Even if you implement some constants to hold the intervals, you still run the risk of people choosing the wrong one.
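To illustrate the point, this is roughly what using the HTTP cache directly looks like – the expiry has to be chosen at every call site (the key, duration and wrapper class below are made up for illustration):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class DirectHttpCacheExample
    {
        public static void CacheHomepageModel(object viewModel)
        {
            // Whoever writes this line picks the duration - multiply that by every
            // call site in the application and inconsistency creeps in quickly.
            HttpRuntime.Cache.Insert(
                "homepage-viewmodel",               // made-up key
                viewModel,
                null,                               // no cache dependency
                DateTime.UtcNow.AddMinutes(10),     // absolute expiry chosen right here
                Cache.NoSlidingExpiration);
        }
    }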

How we approach caching

Both my current S#arp Architecture-based project and my previous WebForms one make extensive use of Dependency Injection to decouple our assemblies and increase testability. It follows that we needed to create an abstraction for the cache anyway, so we took the opportunity to kill two birds with one stone and introduce a more convention-based approach to the cache. James has recently blogged about the benefits of removing decisions from our day-to-day coding activities, and this is another example. In our abstraction, we treat the cache as a set of named areas and assign cache duration to those areas instead of to the individual objects. A few examples of the named areas we use are:

  • Content – medium lifespan, intended for content pulled from the CMS;
  • Resources – very long lifespan, containing values from our SQL-based resource provider (used instead of the standard RESX file-based provider); and
  • Site Furniture – long lifespan, containing less frequently changing data used in the site header and footer.

(Note: If this approach interests you at all, let me know – the specifics of the interface and implementation are too much detail to include here, but I can cover them in a separate post if there is interest.)
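That said, to give a rough flavour of the idea, here is a minimal sketch of the kind of abstraction I mean – the interface shape and the names are illustrative rather than lifted from our codebase:

    // Callers pick a named area and a key; each area's lifespan is configured in one place.
    public interface ICacheProvider
    {
        void Add(string areaName, string key, object value);
        bool TryGet<T>(string areaName, string key, out T value);
    }

    // The named areas themselves are just well-known strings held as constants.
    public static class CacheAreas
    {
        public const string Content = "Content";                 // medium lifespan
        public const string Resources = "Resources";             // very long lifespan
        public const string SiteFurniture = "SiteFurniture";     // long lifespan
    }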

A nice bonus from using this abstraction on the current project was that it allowed us to use the current CTP of Velocity, Microsoft’s new distributed cache. The abstraction – together with our use of the Windsor IoC container – provided us with a handy safety net: if at any point we found an issue with using Velocity, switching back to the standard HTTP cache would be a simple configuration change. As I’ll cover in a future post, our soak testing showed that Velocity is fast and stable, but even post go-live, it would still be a simple matter to make the switch if necessary. One of the tenets of lean development is that you should delay decision making until the last responsible moment – for us, the last responsible moment for deciding on a caching solution could be deferred until go-live, by which point we’d been using Velocity for around 5 months.
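As a sketch of what that safety net amounts to in practice (the class names here are hypothetical, and ICacheProvider is the illustrative interface from the previous section), the choice of cache implementation boils down to a single Windsor registration:

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    public static class CacheInstaller
    {
        public static void RegisterCache(IWindsorContainer container)
        {
            // Switching back to an HTTP cache-backed implementation means changing this
            // one registration (or the equivalent line of XML config) - nothing else moves.
            container.Register(
                Component.For<ICacheProvider>()
                         .ImplementedBy<VelocityCacheProvider>());   // hypothetical Velocity-backed class
        }
    }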

Applying caching in the controllers

Our first area of focus was caching the results of controller actions. One of the great things about ASP.NET MVC is that things are so much simpler than in the web forms world (check this out for a pretty picture of the request handling pipeline). It’s far easier to understand where you do and don’t need to apply caching in your application code, and we realised that the most effective thing to do was to put the majority of ours in the controllers.

Howard posted a diagram of our architecture on his recent blog post about AutoMapper. From this you might be able to infer that our controllers tend to follow a pretty simple pattern:

  1. Input view model is populated from request data by a model binder.
  2. Input view model is mapped to domain objects.
  3. Application services layer is used to do work with those domain objects.
  4. Resulting domain objects are translated back into a view model that can be passed to the view.
  5. Appropriate ActionResult is returned.

This is the most complex scenario – not all steps are required for every action.

In an ideal world we’d be caching the ActionResults returned by the controllers, but this isn’t something we can do because they aren’t serializable and therefore can’t be cached in Velocity. We therefore have to settle for caching the ViewModel for each action, which gives us this pattern:

    public ActionResult Index()
    {
        return View(Views.Index, MasterPages.Default, this.IndexInner());
    }

    [Cached]
    private MyViewModel IndexInner()
    {
        // ... build and return the view model; the [Cached] aspect stores the result ...
    }

The [Cached] attribute is actually a PostSharp aspect that caches the result of the method call, and I have both good intentions and half-finished blog posts on this subject.
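Until those posts see the light of day, here is a heavily simplified sketch of the kind of aspect involved. The base class and argument types shown are from a more recent version of PostSharp than the one we were using, the key generation is deliberately naive, and the CacheProvider class is a hypothetical stand-in for the cache abstraction described earlier:

    using System;
    using System.Collections.Generic;
    using PostSharp.Aspects;

    [Serializable]
    public class CachedAttribute : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            var key = BuildKey(args);
            object cached;
            if (CacheProvider.TryGet(key, out cached))
            {
                // Cache hit: skip the method body entirely and return the cached value.
                args.ReturnValue = cached;
                args.FlowBehavior = FlowBehavior.Return;
            }
            else
            {
                // Remember the key so OnSuccess can store the freshly built result.
                args.MethodExecutionTag = key;
            }
        }

        public override void OnSuccess(MethodExecutionArgs args)
        {
            if (args.MethodExecutionTag == null)
            {
                return;   // nothing to do - the result came from the cache
            }

            CacheProvider.Add((string)args.MethodExecutionTag, args.ReturnValue);
        }

        private static string BuildKey(MethodExecutionArgs args)
        {
            // A real implementation would also fold the argument values (args.Arguments)
            // into the key; the method name alone is only good enough for parameterless methods.
            return args.Method.DeclaringType.FullName + "." + args.Method.Name;
        }
    }

    // Hypothetical static gateway to the cache - an in-memory, non-thread-safe stand-in here.
    public static class CacheProvider
    {
        private static readonly Dictionary<string, object> Items = new Dictionary<string, object>();

        public static bool TryGet(string key, out object value) { return Items.TryGetValue(key, out value); }

        public static void Add(string key, object value) { Items[key] = value; }
    }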

If the action method takes parameters, these are passed straight through to the ActionInner method – we do the minimum amount of work possible in the action method itself.
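For an action that does take parameters, the pattern looks like this (the action, parameter and view model names are invented):

    public ActionResult Detail(int productId)
    {
        // The action does nothing but delegate; the parameter goes straight through.
        return View(Views.Detail, MasterPages.Default, this.DetailInner(productId));
    }

    [Cached]
    private ProductViewModel DetailInner(int productId)
    {
        // ... build and return the view model; the aspect's cache key needs to include
        // productId so that different products don't collide ...
    }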

Dealing with changing content in the CMS

In an ideal world, we would set most of the caches to have an extremely long life, then implement a mechanism to allow individual caches to be flushed as and when required. Anyone who has ever used Microsoft Commerce Server will be familiar with the SiteCacheRefresh handler, which allows caches to be cleared in response to a web request. However, we encountered an issue here: the current release of Velocity (CTP3) does not support enumeration or flushing of named caches – these operations are limited to cache regions. The downside with cache regions is that they are tied to a single server, which pretty much removes the benefits of using a distributed cache.

As I’ve previously mentioned, our system uses N2 for Content Management and the majority of our pages are represented in the CMS. We therefore cached the ViewModel for each page using a key based on the unique ID of the page and its last updated date. This works well because it means that pages can be cached indefinitely: as soon as someone makes a change to a page, a new version will be created and cached under a new key. Obviously this means that the old version will be lingering in the background – the downside to this approach is that you may well fill up the cache more quickly and require Velocity to evict more frequently.

This approach isn’t perfect – it doesn’t cover the product pages, and doesn’t take care of changes in the site hierarchy that would mean we need to regenerate the navigation.  As a result, we’ve implemented an additional hack: all cache keys used for content pages include the last updated time of a single page within the site. Touching that page will cause every cache key used at the controller level to change. I don’t particularly like it, but it does work.
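Putting the two together, the key generation looks something like this sketch – the helper class is hypothetical, and the N2 property names are from memory:

    using N2;

    public static class ContentCacheKeys
    {
        public static string ForPage(ContentItem page, ContentItem globalTouchPage)
        {
            // ID plus last-updated date means an edited page naturally gets a new key, so
            // stale entries are never read again and simply age out of the cache. The single
            // "touch" page's timestamp is folded in so that touching that one page changes
            // every content key at once - the hack described above.
            return string.Format(
                "content:{0}:{1:yyyyMMddHHmmss}:{2:yyyyMMddHHmmss}",
                page.ID,
                page.Updated,
                globalTouchPage.Updated);
        }
    }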

The longer term plan would be to look at this again as and when Velocity moves on. With any luck, the next CTP (now overdue) will make some changes in this area. The ideal plan would be to:

  • Hook into N2’s persistence mechanism – from reading the documentation and looking at the code, it should be possible to receive a notification when things change.  This could then be used to remove stale objects from the cache, or flush an entire named area as required.
  • Implement a restricted controller that allows clearing of individual caches (in the style of the CS SiteCacheRefresh handler mentioned above.) This would be useful if we needed to clear a cache in response to an external action – such as an ETL process pulling data from a back office system.
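To make that second point a bit more concrete, a restricted controller along these lines is what I have in mind – the names are hypothetical, and it assumes the cache abstraction grows a FlushArea operation (which is exactly where Velocity CTP3 currently lets us down):

    using System.Web.Mvc;

    // Would need locking down (e.g. [Authorize] plus an IP restriction) before going anywhere near live.
    public class CacheAdminController : Controller
    {
        private readonly ICacheProvider cacheProvider;

        public CacheAdminController(ICacheProvider cacheProvider)
        {
            this.cacheProvider = cacheProvider;
        }

        // e.g. GET /cacheadmin/flush?areaName=Content, called by an admin or an ETL job.
        public ActionResult Flush(string areaName)
        {
            this.cacheProvider.FlushArea(areaName);
            return Content("Flushed cache area: " + areaName);
        }
    }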

Caching elsewhere in the application

Once we’d implemented the caching in the controllers and configured the NHibernate second-level cache in both our custom code and N2, there were very few places in our model layer that we needed to apply caching. Once again we hit the point of diminishing returns – we could spend time profiling the application and identifying any remaining bottlenecks, or we could move on and look at what we expected to be the final big win: output caching, which I’ll be talking about in the next post of this series.

A final word about caching

On a lot of projects I’ve worked on, caching is something that tends to be implemented at the end. There are advantages to this – it can help to avoid premature optimisation, and you will (hopefully) have a good understanding of the moving parts in your codebase. However, there is one major downside that anyone who’s done this will be familiar with: once you check in the changes, every single bug that is raised for the application will be blamed on your caching. Accept it, live with it, and be prepared to be known as a) grumpy, and b) easy to wind up. Just take satisfaction in the knowledge that when you get it right, you will be setting yourself up for success when you finally put your site live.

As always, I’d appreciate feedback on these posts – either by leaving a comment, dropping me an email or via Twitter.

@jon_george1

Optimising an ASP.NET MVC web site, part 2: Database and NHibernate

Posted in Technical Stuff by Jonathan on October 3, 2009

Note: This was originally posted on my old blog at the EMC Consulting Blogs site.

This is part 2 in a series of posts on optimisation work we carried out on my current project – an ASP.NET MVC web site built using S#arp Architecture, NHibernate, the Spark view engine and Solr. Please see “Optimising an ASP.NET MVC web site part 1 – Introduction” for the background.

Whilst there are many different things you can do to a web application to kill performance, having a badly implemented database – or a well implemented database that you access in a bad way – has got to be right up there with the best. We therefore made the database our first port of call, and started off with a pretty simple approach – fire up  SQL Server Profiler, start a new trace and see what the pain points are in each database.

Using SQL Server Profiler to find data access mistakes

As I mentioned in the introduction, the goal of our optimisation wasn’t to do everything, just to get the main things we thought were causing problems before we began our load testing. If you use the TSQL_Duration template in SQL Profiler for your trace, you can hit your site and quickly see what the most expensive queries are.

Sample TSQL_Duration Trace

The screenshot above shows the results of repeatedly executing three different stored procedures in a random order. As you can see, the results are organised in ascending order of cost, with the interesting column being “Duration”, which shows the execution time in milliseconds.

When Howard first ran this against some of our landing and search result pages, he quickly noticed a large nugget of smelliness at the bottom of the list, in the form of repeated calls to a stored procedure that was taking a whopping 400ms to execute. Whilst 400ms is not a long time – I’m sure it’s taken longer than that for you to read this sentence so far – when you call it 20 times you’ve suddenly spent 8 seconds just on database calls before any other page activity.

Digging into this identified three separate but related issues.

1. One of our NHibernate mappings included a view based on a recursive query

Of course, this was something we’d told NHibernate to do. Our entities are organised hierarchically, and an entity can appear in multiple positions in the hierarchy. When we pass entity data to Solr for indexing, we work out the full list of paths for an entity and pass that through (see item 3 for more details.) This was done by creating a CTE to generate the list of hierarchy paths for each entity.

For the uninitiated, a Common Table Expression (or CTE) is a T-SQL construct that (amongst other things) enables you to execute recursive queries. They are very handy when dealing with hierarchical datasets, but aren’t the most efficient queries you can execute. Some time after the initial creation of the view, we’d realised that it would be useful to have the data it contained as part of our entity so we added a mapping into our NHibernate configuration. This meant that accessing that data would cause NHibernate to go off and execute a SELECT statement which included a join from the main entity table to the view. This query took in the region of 400ms.

We have two main types of page on the site: landing pages and search/browse pages. The landing pages were causing this query to be executed 13 times and the browse pages were causing it to be executed 20 times, so it’s no wonder that performance had dropped. Whilst the view was never intended for use in this way, the requirement to have the data as part of our entity was still valid.

The simple solution to the problem was essentially to materialise the view. SQL Server can do this by turning the view into an indexed view, which you achieve by adding a unique clustered index to it. However, this approach isn’t applicable when the SELECT statement for the view uses a CTE, so we went with a more basic approach – since our product catalogue is actually managed in the back office and populated via ETL, we replaced the view with a table (complete with all the appropriate indexes) and tweaked the ETL to populate this table at the same time as all the rest.

For the pages in question, load time dropped from around 8 seconds to under 2, at which point we all got quite excited. However, this wasn’t solely to do with the materialisation of the view, as the investigation also showed that…

2. Everything was being lazy loaded

By default, NHibernate uses lazy loading across the board. Depending on your object model and how it is used, this can lead to massive inefficiencies. The classic example is the “SELECT N+1” anti-pattern, in which you retrieve an entity from a repository and then iterate over a collection on that entity. If you’ve left NHibernate lazy loading the values, this results in a SELECT statement being issued for every iteration of the loop. Have a look at this page on the NHibernate Profiler site for more information.
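As a concrete (if contrived) sketch of both the problem and a query-level fix – the Order/OrderLine/Product entities, their mappings and the open session are invented for illustration:

    using System;
    using NHibernate;
    using NHibernate.Criterion;

    // Minimal invented entities, just enough for the example to compile (mappings omitted).
    public class Product { public virtual string Name { get; set; } }
    public class OrderLine { public virtual Product Product { get; set; } }
    public class Order { public virtual System.Collections.Generic.IList<OrderLine> Lines { get; set; } }

    public static class SelectNPlusOneExample
    {
        public static void Demonstrate(ISession session, int orderId)
        {
            // Lazy loading: one SELECT for the order, one for its Lines collection, and then -
            // because each line's Product association is lazy - another SELECT for every line
            // touched in the loop. That's the N+1 in action.
            var order = session.Get<Order>(orderId);
            foreach (var line in order.Lines)
            {
                Console.WriteLine(line.Product.Name);
            }

            // Eagerly fetching the associations collapses the work into far fewer round trips.
            var orderEager = session.CreateCriteria<Order>()
                .Add(Restrictions.IdEq(orderId))
                .SetFetchMode("Lines", FetchMode.Eager)
                .SetFetchMode("Lines.Product", FetchMode.Eager)
                .UniqueResult<Order>();
        }
    }

The same change can be made once, in the mapping configuration, rather than query by query – which is the route we took.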

Howard spent some time using the NHibernate profiler to better understand what our app was doing, and made a number of tweaks to our mapping configuration to apply eager loading where it made sense. This provided another significant improvement, dramatically reducing the number of database calls made by the application.

3. An architectural rule imposed for performance reasons had been broken

We’d made an architectural decision to drive the search and browse functions on our website exclusively from Solr, because it gives us more options for scalability. Since all the data in question comes from a single database, pulling it directly out of that database would mean that as website load increases, so does the load on the database. The problem with that is that it’s difficult to scale a single database out – you can throw CPUs and RAM at your database server, but you’re always going to hit a limit, at which point you face some architectural challenges. This kind of basic architecture is shown in the following diagram:

Simple architecture

The main entity database is static, which means it would be possible to have a number of replicas of it (potentially even one per webhead), but that would require architectural change and would bring with it a new set of issues around data consistency. By pushing the load onto Solr, which has a far more straightforward scale-out story, we can grow far further without requiring a redesign. Solr basically stores a flat set of data in a form optimised for searching, and provides access via a web service, which makes it straightforward to run multiple instances of Solr behind a load balancer. Solr makes this kind of setup even easier, as it supports a master-slave configuration, as shown in the following diagram (I apologise now for the proliferation of connecting lines – I’m more of a reluctant architect than a PowerPoint architect):

With solr tier

In this example, the web tier will still talk directly to the data tier for some tasks – it’s unavoidable. However, we introduce a search tier, which consists of a set of load-balanced search servers, each of which contains an identical copy of the search index. In order to build that index, we push data from the database into the Solr master server, which indexes it and pushes the result out to each slave. If you can see past the nasty mess of lines, it should be obvious that as load grows, adding more webheads and/or Solr slaves is a relatively trivial operation.

However, you can have the best intentions in the world when you design your solution, but if you then go on to break the rules, the benefits you designed for aren’t going to materialise. In our case, the code had ended up in a state where a database query was being made for each result retrieved from a Solr search. Not only that, but the query in question was the horribly expensive one I mentioned in point 1.

This will no doubt cause some brow-wrinkling activity if you’re paying attention, as I mentioned that the original intended purpose of the view being used for the expensive query was to push data into Solr – so why, if the data was already in there, would we be accessing it again from the database once we’d pulled it out of Solr?

I can think of a number of good explanations, the best of which might be “My name’s Jon, and I’m a terrible programmer”. The day I get it right 100% of the time is quite probably the day I get to decide which of my Ferraris I drive to work, as well as the day that my projects no longer require testers, and I doubt that will ever happen. Maybe I just missed the Ballmer Peak on that day, but whatever happened, I’m just happy when the mistakes I make are as easy to identify and fix as this one was.

Using the SQL Server Database Engine Tuning Advisor

In tandem with this, we also made use of the SQL Server Database Engine Tuning Advisor. This is the next stage of evolution for the Index Tuning Wizard that (I believe) first appeared in SQL Server 2000. The MSDN page for it is here; the short (aka .NET developer) version is that you put a trace file into one end, and out of the other comes a variety of changes you can make to your database to make it go faster.

In order to generate the input, you use SQL Server Profiler with the “Tuning” template. Once this is running, and saving the trace to a file, you need to generate a “typical” load against your application. There are various ways to do this, ranging from fully manual to fully automated. We were fortunate on our project that we had a complete set of Selenium tests for the site and a way of running them as part of the build. I’m hoping that Tom, the consultant responsible for this, will start blogging about it at some point as it’s really interesting stuff. It meant that to generate our trace, all we had to do was start the profiler, kick off the test suite and go for a beer. The next morning, we pushed the resultant file through the tuning advisor and received our recommendations out of the other end. There was little we disagreed with, and most of the recommendations were subsequently applied to the database.

The rest?

You may have noticed that the main focus in this post has been around the way the application accessed the data, rather than the way the database was built. Over the duration of the project, and in addition to the fantastic data architect we had in the team, we’ve had guest appearances by JRJ, Jamie and Simon, so it shouldn’t come as a massive surprise that the database works well. EMC Consulting is rightly proud of the database guys we have on the team, and whilst I’m sure that there are many further tweaks that could be made to our databases to add further go-faster stripes, they aren’t needed at the moment. Optimisation is one of many things that doesn’t benefit from having the word “premature” put in front of it – it’s basically another form of YAGNI. So, until testing proved otherwise, we were happy to stop at this point and move on to something else.

In the next exciting episode of “Optimising an ASP.NET MVC web site”, we look at a pattern for applying application layer caching. Don’t touch that dial!

Please let me know what you think of my blog posts, either by commenting, dropping me an email, or via Twitter.

@jon_george1