Jonathan George's Blog

Introducing Who Can Help Me? – A S#arp Architecture Demo Site

Posted in Technical Stuff by Jonathan on December 21, 2009

A couple of months ago, a few colleagues and I had the pleasure of pushing the metaphorical big red button and seeing www.fancydressoutfitters.co.uk go live. We all agreed it was one of the best projects we’d worked on in a long time, for reasons that fall into two categories: the team and the technology.

The core dev team was mostly made up of people who’d worked with one another before. For example, I first worked with James back in 2002, well before either of us joined Conchango (as it then was). James had spent a lot of time working with Howard on Tesco Digital, and although I’d never worked with Howard directly, I’d had a lot of contact with him on my previous project. We’d all worked with Ling (our data architect), Justin (developer, yet to create his own online presence for reasons unknown) and Naz (tester, ditto) on other projects over the previous three years.

On most projects, the team spends a fair amount of time up front becoming effective, and a big part of that is getting to know one another – strengths, weaknesses, the way we all communicate and so on. For us, there were only a couple of people that none of us had worked with before, which made it simple for the team as a whole to gel really quickly. The process was simplified even further by a great project manager who knew how to make things happen without needing to control the dev team, and a business analyst who took a huge amount of grief from us all about his gratuitous use of clip art in his PowerPoint presentations whilst winning the respect of the client for the speed with which he understood how they worked internally.

Also, because the team was well resourced up front, Howard and James were able to spend the time laying the appropriate foundations. James has talked about this previously, so I won’t cover it again, but suffice it to say that the time invested up front paid massive dividends later in the project.

The foundations they laid included the core software stack we would use and the tools and techniques we would adopt. They decided that the core of the site would be based on ASP.NET MVC and S#arp Architecture, a best-practice framework that combines ASP.NET MVC with NHibernate. They also picked some other bits and pieces to reduce friction – things like Spark, AutoMapper and xVal – and James decided that the project would use Behaviour Driven Development as a testing approach.

When all of these things came together, one thing that stood out was how clean the resulting solution became. Core design concepts such as separation of concerns, encapsulation and dependency injection were baked in, making the code clean and easy to test. The layers of the solution were well defined and understood, meaning it was always obvious where a particular piece of code should live, and it was easy to discuss the codebase amongst ourselves because we were all talking the same language. The adoption of BDD finally made a test-driven approach click for those of us, like me, who had always found it a challenge.
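To give a flavour of what that looks like in practice, here's a minimal, hypothetical sketch of a controller in this style. The ProductsController, IProductRepository and IMappingService names are invented for the example rather than taken from the real solution.

```csharp
using System.Web.Mvc;

// Illustrative domain type, view model and abstractions - the names are
// hypothetical, not taken from the real solution.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductViewModel
{
    public string Name { get; set; }
}

public interface IProductRepository
{
    Product Get(int id);
}

public interface IMappingService
{
    TDestination Map<TSource, TDestination>(TSource source);
}

// Constructor injection keeps the controller thin and testable: the container
// (Castle Windsor in our case) supplies the real implementations at runtime,
// while unit tests can pass in fakes or mocks.
public class ProductsController : Controller
{
    private readonly IProductRepository repository;
    private readonly IMappingService mapper;

    public ProductsController(IProductRepository repository, IMappingService mapper)
    {
        this.repository = repository;
        this.mapper = mapper;
    }

    public ActionResult Detail(int id)
    {
        Product product = this.repository.Get(id);
        ProductViewModel viewModel = this.mapper.Map<Product, ProductViewModel>(product);
        return View(viewModel);
    }
}
```

The same shape repeats through the layers: controllers depend on repositories and mapping services, which in turn sit on top of NHibernate and AutoMapper, so each piece can be tested in isolation.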

As the project neared its end, James, Howard and I wrote some blog posts talking about various aspects of the solution, but it became hard for all of us to pull out specific bits of code to talk about, because removing them from the wider context of the solution caused them to lose their meaning. Howard suggested that we could address this – as well as give something back to the community that gave us such great software – by putting together an app based on the same architecture and publishing the source code. We could then use it to talk around specific features or areas of the architecture, and at the same time demonstrate to the community the approach we took for the FDO site.

A couple of years back, Howard spent a few hours one evening creating an app called “Who can help me?”. Originally intended to capture information about the skills and experience available within Conchango, it took him about three hours to put together using ASP.NET WebForms and LINQ to SQL. It was the ideal application for our purpose: it has a practical use (we still use it inside EMCC) and it’s simple enough that the focus can be on the solution design rather than the business logic.

So, here it is: Who Can Help Me, built using S#arp Architecture, NHibernate, Spark View Engine, xVal, AutoMapper, Castle Windsor, MSpec, RhinoMocks and PostSharp. We’ve pulled in some new bits, such as MEF and DotNetOpenAuth, hooked it up to Twitter using TweetSharp to demonstrate the pattern for external service integration, and published the whole lot on Codeplex.
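The external service integration pattern itself is easy to sketch even without the TweetSharp specifics: the application only ever talks to a narrow interface of its own, and the single implementation that knows about the third-party client is registered with the container, so it can be stubbed in tests or swapped out later. The names below are illustrative, not from the actual codebase.

```csharp
using System.Collections.Generic;

// A narrow, app-specific abstraction over the external service. The rest of
// the solution depends on this interface, never on the third-party client.
public interface ITweetFeedService
{
    IList<string> GetLatestTweets(string userName, int count);
}

// The concrete implementation is the only place that knows about the external
// API (TweetSharp in the real app); here it is stubbed out for illustration.
public class StubTweetFeedService : ITweetFeedService
{
    public IList<string> GetLatestTweets(string userName, int count)
    {
        // In the real implementation this call would go out to Twitter.
        return new List<string> { "Example tweet from " + userName };
    }
}
```

The Twitter-facing implementation then becomes the only thing that has to change if the external API does.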

The result is definitely more complex than the problem requires, but that’s OK – as I mentioned above, we specifically chose a simple application to demonstrate an enterprise-level architectural approach in the hope that the focus could be on the latter. So please have a play with the live site, download the code, give it the once-over and let us know what you think. We’ll be blogging about the bits we find interesting (links below), as well as tidying up the bits we’re not so keen on, and we’d love to hear any feedback you have.

We’ll also be using the hashtag #WCHM on Twitter when we talk about this, so keep an eye out.

@jon_george1

Optimising an ASP.NET MVC web site, part 5: Putting your money where your mouth is

Posted in Technical Stuff by Jonathan on November 3, 2009

Note: This was originally posted on my old blog at the EMC Consulting Blogs site.

This is the final part of a series of posts on optimisation work we carried out on my last project, www.fancydressoutfitters.co.uk – an ASP.NET MVC web site built using S#arp Architecture, NHibernate, the Spark view engine and Solr. There’s not much point starting here – please have a look at parts 1, 2, 3 and 4, as well as my post on improving YSlow scores for IIS7 sites, for the full picture.

In the posts in this series, I’ve reflected the separation of concerns inherent in ASP.NET MVC applications by talking about how we optimised each layer of the application independently. Good separation of concerns is by no means unique to applications built using the MVC pattern, but what stood out for me as I became familiar with the project was that, for the first time, it seemed like I hardly had to think to achieve it, because it’s so baked into the framework. I know I share this feeling with Howard and James (respectively the architect and a developer on the project), who’ve both talked about it in their own blogs.

The MVC pattern also makes it much easier to apply optimisations in the code. For example, it’s much easier to identify the points where caching will be effective, as the Model-View-ViewModel pattern makes it straightforward to apply a simple and highly effective caching pattern within the controllers. This kind of benefit isn’t limited to performance work either – for example, our team security guru certainly felt that it was easier to carry out his threat modelling for this site than it would have been for a WebForms equivalent.
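As a rough illustration of the kind of controller-level caching I mean (the builder interface, cache key and timings here are made up for the example, not lifted from the FDO code):

```csharp
using System;
using System.Web.Caching;
using System.Web.Mvc;

public class CategoryViewModel
{
    public int CategoryId { get; set; }
}

public interface ICategoryViewModelBuilder
{
    CategoryViewModel Build(int categoryId);
}

// Because the controller's only job is to assemble a view model, the fully
// built view model is a natural unit to cache: on a hit we skip the model
// layer entirely.
public class CategoryController : Controller
{
    private readonly ICategoryViewModelBuilder builder;

    public CategoryController(ICategoryViewModelBuilder builder)
    {
        this.builder = builder;
    }

    public ActionResult Landing(int categoryId)
    {
        string cacheKey = "CategoryLanding-" + categoryId;
        var viewModel = this.HttpContext.Cache[cacheKey] as CategoryViewModel;

        if (viewModel == null)
        {
            // Only pay the cost of hitting the model layer on a cache miss.
            viewModel = this.builder.Build(categoryId);
            this.HttpContext.Cache.Insert(
                cacheKey,
                viewModel,
                null,
                DateTime.UtcNow.AddMinutes(10),
                Cache.NoSlidingExpiration);
        }

        return View(viewModel);
    }
}
```

In a real solution you would obviously want to think harder about cache durations, invalidation and what happens for logged-in or basket-holding users, which is exactly why it helps to be able to identify these points explicitly.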

On the flip side, this process also brought home to me some of the dangers of using NHibernate. It’s an absolutely awesome product, and has totally converted me to the use of an ORM (be it NHibernate or Entity Framework). However, the relatively steep learning curve, and the fact that most of the setup was done before I joined the project, made it easy for me to ignore what it was doing under the covers and code away against my domain objects in a state of blissful ignorance. Obviously that’s not a brilliant idea, and properly getting to grips with NHibernate is now jostling for first place on my to-do list (up against PostSharp 2 and ASP.NET MVC 2.0, amongst other things).
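To make the “under the covers” point concrete, the classic trap is the select N+1 problem, where innocent-looking code against lazily loaded collections triggers one extra query per parent object. The sketch below, using made-up entities, shows the naive form alongside an eager-fetch alternative:

```csharp
using System.Collections.Generic;
using NHibernate;
using NHibernate.Transform;

// Hypothetical entities: an Order with a lazily loaded collection of lines.
public class OrderLine
{
    public virtual int Id { get; set; }
}

public class Order
{
    public virtual int Id { get; set; }
    public virtual IList<OrderLine> Lines { get; set; }
}

public class OrderQueries
{
    private readonly ISession session;

    public OrderQueries(ISession session)
    {
        this.session = session;
    }

    // Naive version: iterating the results and touching order.Lines fires one
    // extra SELECT per order (the classic select N+1), because the collection
    // is lazily loaded.
    public IList<Order> GetOrdersLazily()
    {
        return session.CreateCriteria(typeof(Order)).List<Order>();
    }

    // When you know you need the lines, fetch them eagerly in a single joined
    // query; the result transformer stops the join producing duplicate Orders.
    public IList<Order> GetOrdersWithLines()
    {
        return session.CreateCriteria(typeof(Order))
            .SetFetchMode("Lines", FetchMode.Join)
            .SetResultTransformer(new DistinctRootEntityResultTransformer())
            .List<Order>();
    }
}
```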

My big challenge for future projects is ensuring that the optimisations I’ve talked about are baked in from the start instead of being bolted on at the end. The problem with this is that it’s not always clear where to stop. The goal of optimising the site is to get it to the point where it performs as we need it to, not to the point where we can’t optimise any more. Optimisation is a process of diminishing returns, so it’s essential to cover the issues you know need to be covered and then use testing tools to uncover any further areas to work on.

That said, in an ideal world I’d like to be able to build performance tests early and use them to benchmark pages on a regular basis. Assuming you work in short iterations, this can be done on an iteration by iteration basis, with results feeding into the plan for the next iteration. My next series of posts will be on performance and load testing, and as well as covering what we did for this project I will be looking at ways of building these processes into the core engineering practices of a project.

Was it all worth it?

I’ll be talking separately about the performance and load testing we carried out on the site prior to go live, but in order to put these posts into some context I thought it might be interesting to include some final numbers. For our soak testing, we built a load profile based on 6 user journeys through the site:

  • Homepage: 20% of total concurrent user load
  • Browse (Home -> Category Landing -> Category Listing -> Product): 30%
  • Search (Home -> Search Results): 30%
  • News (Home -> News list -> News story): 10%
  • Static Pages (Home -> Static page): 5%
  • Checkout (As for browse journey, then -> Add to basket -> View Basket -> Checkout): 5%

With a random think time of 8–12 seconds between each step of each journey, we demonstrated that each of the web servers in the farm could sustainably support 1,000 concurrent users and generate 90 pages per second. Given the hardware in question, this far exceeds any project I’ve worked on recently.
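As a rough sanity check, and for anyone wanting to approximate a similar profile with their own tooling, the journey mix and think time boil down to something like the sketch below. It's a generic illustration rather than our actual load test rig, and the final comment shows why roughly 90 pages per second is consistent with 1,000 users and an 8 to 12 second think time.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Generic sketch of the journey mix described above, not our actual scripts.
public static class LoadProfile
{
    // Journey name -> share of concurrent users.
    private static readonly Dictionary<string, double> JourneyWeights = new Dictionary<string, double>
    {
        { "Homepage", 0.20 },
        { "Browse", 0.30 },
        { "Search", 0.30 },
        { "News", 0.10 },
        { "StaticPages", 0.05 },
        { "Checkout", 0.05 },
    };

    private static readonly Random Random = new Random();

    // Pick a journey in proportion to its weight.
    public static string NextJourney()
    {
        double roll = Random.NextDouble();
        double cumulative = 0;
        foreach (var pair in JourneyWeights)
        {
            cumulative += pair.Value;
            if (roll <= cumulative)
            {
                return pair.Key;
            }
        }
        return JourneyWeights.Keys.Last();
    }

    // Random think time of 8-12 seconds between steps. With 1,000 concurrent
    // users each requesting roughly one page every ~10 seconds on average,
    // the theoretical ceiling is about 100 pages/second, so a sustained
    // 90 pages/second per server is in the right ballpark.
    public static TimeSpan NextThinkTime()
    {
        return TimeSpan.FromSeconds(8 + (Random.NextDouble() * 4));
    }
}
```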

In the end, we put www.fancydressoutfitters.co.uk live in the run-up to Halloween, the busiest time of the year for the fancy dress industry. We did this with no late nights and enough confidence to go to the pub for a celebratory pint within the hour. It was also interesting that the majority of technical colleagues who responded to our go-live announcement commented on how fast the site runs (which, given the machinations of our corporate network’s internet routing, is even more remarkable). And best of all, we’ve had no major shocks since the site went live.

A final note

If you’ve read this series of posts, I hope you’ve got something out of it. I’d certainly be interested in any feedback that you might have – as always, please feel free to leave a comment or contact me on Twitter. In addition, the EMC Consulting blog site has been nominated in the Computer Weekly IT Blog Awards 2009, under the “Corporate/Large Enterprise” category – please consider voting for us.

I’d also like to extend a final thanks to Howard for proof reading the first draft of these posts and giving me valuable feedback, as well as for actually doing a lot of the work I’ve talked about here.

@jon_george1