Jonathan George's Blog

Optimising an ASP.NET MVC web site, part 5: Putting your money where your mouth is

Posted in Technical Stuff by Jonathan on November 3, 2009

Note: This was originally posted on my old blog at the EMC Consulting Blogs site.

This is the final part of a series of posts on optimisation work we carried out on my last project, www.fancydressoutfitters.co.uk – an ASP.NET MVC web site built using S#arp Architecture, NHibernate, the Spark view engine and Solr. There’s not much point starting here – please have a look at parts 1, 2, 3 and 4, as well as my post on improving YSlow scores for IIS7 sites, for the full picture.

In the posts in this series, I’ve reflected the separation of concerns inherent in ASP.NET MVC applications by talking about how we optimised each layer of the application independently. Good separation of concerns is by no means unique to applications built using the MVC pattern, but what stood out for me as I became familiar with the project was that, for the first time, it seemed like I hardly had to think to achieve it, because it’s so baked into the framework. I know I share this feeling with Howard and James (respectively architect and developer on the project), who’ve both talked about it in their own blogs.

The MVC pattern also makes it much easier to apply optimisations in the code. For example, it’s much easier to identify the points where caching will be effective, as the Model-View-ViewModel pattern makes it straightforward to apply a simple and highly effective caching pattern within the controllers. I know that this kind of thing isn’t limited to performance work – for example, our team security guru certainly felt that it was easier to carry out his threat modelling for this site than it would have been in a WebForms equivalent.
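
To make that concrete, here’s a minimal sketch of the sort of controller-level caching I mean, assuming a hypothetical category landing page: the service, view model and cache policy below are illustrative stand-ins, not the site’s actual code.

    using System;
    using System.Web;
    using System.Web.Caching;
    using System.Web.Mvc;

    // Hypothetical model-layer service and view model - stand-ins for
    // whatever builds the data behind a page.
    public interface ICategoryService
    {
        CategoryViewModel BuildLandingViewModel(int categoryId);
    }

    public class CategoryViewModel
    {
        public string Name { get; set; }
    }

    public class CategoryController : Controller
    {
        private readonly ICategoryService categoryService;

        public CategoryController(ICategoryService categoryService)
        {
            this.categoryService = categoryService;
        }

        public ActionResult Landing(int id)
        {
            string cacheKey = "CategoryLanding:" + id;

            // Because the view model is a self-contained snapshot of the
            // page's data, it can be cached wholesale and the entire model
            // layer skipped on a hit.
            var viewModel = HttpRuntime.Cache[cacheKey] as CategoryViewModel;
            if (viewModel == null)
            {
                viewModel = categoryService.BuildLandingViewModel(id);
                HttpRuntime.Cache.Insert(
                    cacheKey,
                    viewModel,
                    null,                           // no cache dependency
                    DateTime.UtcNow.AddMinutes(10), // illustrative expiry
                    Cache.NoSlidingExpiration);
            }

            return View(viewModel);
        }
    }

Because each action builds exactly one view model, the cache boundary is obvious – something that’s much harder to pin down in a WebForms page full of controls, each fetching its own data.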

On the flip side, this process also brought home to me some of the dangers of using NHibernate. It’s an absolutely awesome product, and has totally converted me to the use of an ORM (be it NHib or Entity Framework). However, the relatively high learning curve, and the fact that most of the setup was done before I joined the project, made it easy for me to ignore what it was doing under the covers and code away against my domain objects in a state of blissful ignorance. Obviously this is not a brilliant idea, and properly getting to grips with NHibernate is now jostling for first place on my to-do list (up against PostSharp 2 and ASP.NET MVC 2.0, amongst other things).
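
The classic example of that blissful ignorance is the select N+1 problem. Here’s a hedged sketch – the Category and Product entities and their mapping are hypothetical, not the site’s real domain model – of how innocently iterating a lazy collection multiplies queries, and how an eager fetch collapses them into one:

    using System;
    using System.Collections.Generic;
    using NHibernate;
    using NHibernate.Transform;

    // Hypothetical entities with a lazily-loaded collection.
    public class Product
    {
        public virtual int Id { get; set; }
    }

    public class Category
    {
        public virtual int Id { get; set; }
        public virtual IList<Product> Products { get; set; }
    }

    public static class SelectNPlusOneExample
    {
        public static void Run(ISession session)
        {
            // Looks harmless, but each access to the lazy Products
            // collection issues its own SELECT: one query for the
            // categories plus N more - the select N+1 problem.
            IList<Category> categories = session
                .CreateCriteria<Category>()
                .List<Category>();

            foreach (Category category in categories)
            {
                Console.WriteLine(category.Products.Count);
            }

            // Asking for the collection up front turns the whole thing
            // into a single joined query.
            IList<Category> eager = session
                .CreateCriteria<Category>()
                .SetFetchMode("Products", FetchMode.Eager)
                .SetResultTransformer(Transformers.DistinctRootEntity)
                .List<Category>();
        }
    }

Turning on show_sql, or watching the site through a profiler, makes these extra queries very hard to ignore – which is rather the point.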

My big challenge for future projects is ensuring that the optimisations I’ve talked about are baked in from the start instead of being bolted on at the end. The problem with this is that it’s not always clear where to stop. The goal of optimising the site is to get it to the point where it performs as we need it to, not to get it to the point where we can’t optimise any more. The process of optimisation is one of diminishing returns, so it’s essential to cover issues you know need to be covered and to then use testing tools to uncover any further areas to work on.

That said, in an ideal world I’d like to be able to build performance tests early and use them to benchmark pages on a regular basis. Assuming you work in short iterations, this can be done on an iteration by iteration basis, with results feeding into the plan for the next iteration. My next series of posts will be on performance and load testing, and as well as covering what we did for this project I will be looking at ways of building these processes into the core engineering practices of a project.

Was it all worth it?

I’ll be talking separately about the performance and load testing we carried out on the site prior to go-live, but to put these posts into context I thought it might be interesting to include some final numbers. For our soak testing, we built a load profile based on six user journeys through the site:

  • Homepage: 20% of total concurrent user load
  • Browse (Home -> Category Landing -> Category Listing -> Product): 30%
  • Search (Home -> Search Results): 30%
  • News (Home -> News list -> News story): 10%
  • Static Pages (Home -> Static page): 5%
  • Checkout (As for browse journey, then -> Add to basket -> View Basket -> Checkout): 5%

With a random think time of 8–12 seconds between each step of each journey, we demonstrated that each of the web servers in the farm could sustainably support 1,000 concurrent users and generate 90 pages per second. Given the hardware in question, this far exceeds the results of any project I’ve worked on recently.
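
For illustration, here’s a minimal sketch of how a harness might drive that profile. The journey names, weights and think times come from the figures above, but the code itself is hypothetical – we used dedicated load testing tooling rather than anything hand-rolled like this.

    using System;
    using System.Threading;

    public static class LoadProfileSketch
    {
        // Journey mix from the profile above; the weights are each
        // journey's percentage of total concurrent user load (sum: 100).
        static readonly string[] Journeys =
            { "Homepage", "Browse", "Search", "News", "Static pages", "Checkout" };
        static readonly int[] Weights = { 20, 30, 30, 10, 5, 5 };

        // Note: Random isn't thread-safe; a real harness would hold
        // per-virtual-user state rather than sharing one instance.
        static readonly Random Rng = new Random();

        // Pick a journey in proportion to its share of the load.
        public static string NextJourney()
        {
            int roll = Rng.Next(100); // 0..99
            for (int i = 0; i < Journeys.Length; i++)
            {
                if (roll < Weights[i])
                {
                    return Journeys[i];
                }
                roll -= Weights[i];
            }
            return Journeys[Journeys.Length - 1]; // unreachable while weights sum to 100
        }

        // Random 8-12 second pause between the steps of a journey.
        public static void ThinkTime()
        {
            Thread.Sleep(TimeSpan.FromSeconds(8 + (4 * Rng.NextDouble())));
        }
    }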

In the end, we put www.fancydressoutfitters.co.uk live in the run-up to Halloween, the busiest time of the year for the fancy dress industry. We did this with no late nights and enough confidence to go to the pub for a celebratory pint within the hour. It was also interesting that the majority of technical colleagues who responded to our go-live announcement commented on how fast the site runs (which, given the machinations of our corporate network’s internet routing, is even more remarkable). And best of all, we’ve had no major shocks since the site went live.

A final note

If you’ve read this series of posts, I hope you’ve got something out of it. I’d certainly be interested in any feedback that you might have – as always, please feel free to leave a comment or contact me on Twitter. In addition, the EMC Consulting blog site has been nominated in the Computer Weekly IT Blog Awards 2009, under the “Corporate/Large Enterprise” category – please consider voting for us.

I’d also like to extend a final thanks to Howard for proof reading the first draft of these posts and giving me valuable feedback, as well as for actually doing a lot of the work I’ve talked about here.

@jon_george1

Optimising an ASP.NET MVC web site part 1 – Introduction

Posted in Technical Stuff by Jonathan on October 3, 2009

Note: This was originally posted on my old blog at the EMC Consulting Blogs site.

One of the things I’ve been involved in over the past couple of months is performance tuning work for my current project (now live at www.fancydressoutfitters.co.uk). One of my EMC Consulting colleagues, Marcin Kaluza, has recently started posting on this subject and I’ve been encouraged by Howard to post some “war stories” of the kind of things I’ve encountered whilst doing this on projects, starting with the most recent.

So first, some background. It’s a public-facing website project, based on Billy McCafferty’s excellent S#arp Architecture – which means it’s ASP.NET MVC with NHibernate talking to SQL Server 2008 databases. We’re using the Spark view engine instead of the out-of-the-box one, and the site uses N2 CMS to provide content management capabilities (James posted a while back on the reasons for choosing N2). Finally, we use Solr to provide our search functionality, integrated using SolrNet. I joined the team a few months into the project, by which point they had laid a firm foundation and were about to be abandoned for six weeks by their technical lead, who had inconsiderately booked his wedding and an extended honeymoon right in the middle of the project.

The project was set up strictly in accordance with agile principles. A small team was given a fixed date for go-live and the directive to spend the client’s money as if it were their own. One of the first things that happened was the adoption of a number of principles from the excellent 37signals e-book “Getting Real”. A product backlog was assembled and then – in accordance with the “build less” maxim – divided into “core” and “non-core” user stories. The core stories were what was essential for go-live – things the client couldn’t live without, such as basic search and content management. The non-core stories were things that might enhance the site but weren’t essential – for example, advanced search features such as faceted navigation.

The absolute focus the team maintained on the core functionality and the target delivery date has made this one of the best and most successful agile projects I’ve worked on – we reached our go-live date on budget and were able to substantially over-deliver on functionality. Whilst the site is relatively basic compared to some I’ve worked on, it stands out amongst its peers and provides a great platform for new functionality to be built on.

However, now that I’ve extolled the virtues of the approach we took, I should talk about the performance optimisation and testing work we did. Since I have some experience in this area from previous projects, I took on the task of testing the site to make sure it could handle an acceptable level of load without falling over in an embarrassing heap. Before I started on that, though, we did some optimisation work on the site.

The aim was to hit the major pain points, since we knew performance had degraded over the previous few sprints. Once this was done, we could run some load testing and perform further tuning and optimisation work as required. I originally intended to write a single post covering the optimisation process, then follow it up with one about the load testing process. However, that resulted in a rather lengthy post, so I’m splitting it up into several parts that I will post over the next week or two.

In addition, I’ve already covered the work we did to correctly configure IIS in my post How to improve your YSlow score under IIS7.

I hope you find these posts interesting – please let me know what you think by leaving a comment.

@jon_george1