Who Can Help Me? started life towards the end of 2009, after myself, Howard, James, and a host of other amazing ex-colleagues worked together on www.fancydressoutfitters.co.uk. We’d built the client-facing site using S#arp Arch and a bunch of other open source tools, and wanted to put together a vastly simplified sample app demonstrating how we did it. And so, we built Who Can Help Me? and put it out there for the community to play with.
Despite not originally being intended as a S#arp Arch demo, it was adopted as one. Which gave rise to the question: what do we do with it for S#arp Arch 2.0? When we built it, we wanted to demonstrate the architecture and toolset we used for an enterprise-level e-commerce site. We didn’t go out of our way to demonstrate any specific patterns, and we made no effort to show off many of the nice features offered by the different tools we used. After some back and forth within the team, we decided that things have moved on in the last two years, and whilst we are all still proud of the e-commerce site it was based on, WCHM is not representative of the way we choose to build websites today.
In addition, those who follow Ayende’s blog may have noticed his recent code review on WCHM. He raised some valid issues with the code, and fortunately there are several upcoming SA 2.0 features that will help me address them. So here’s the hit list for how WCHM 1.9.6 will go not only to 2.0, but also to being a true Sharp Architecture sample app.
Firstly, relevant points from Ayende’s review:
The mapping sucks
The pattern we adopted when writing our controllers is as follows:
- Receive request.
- Make request to tasks layer (aka application services); get domain entities returned.
- Use mapper to map entities to view models.
- Return view result, passing view model.
This looks extremely simple on paper, and for the most part it is. However, it has a few issues:
- It can get messy quickly, especially if your view requires several disparate pieces of data – in this case, the controller makes multiple calls to the tasks layer to retrieve the required data, which is then passed into a mapper. Often this mapper will use other mappers to map collections or child view models.
- It is, as Ayende pointed out, a breeding ground for SELECT N+1 issues.
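To make the pattern concrete, here’s a sketch of a controller action following it. All of the type and method names here are hypothetical, not lifted from the WCHM codebase:

```csharp
// Hypothetical controller following the tasks -> mapper -> view model pattern.
public class ProfileController : Controller
{
    private readonly IProfileTasks profileTasks;
    private readonly IProfileViewModelMapper profileMapper;

    public ProfileController(IProfileTasks profileTasks, IProfileViewModelMapper profileMapper)
    {
        this.profileTasks = profileTasks;
        this.profileMapper = profileMapper;
    }

    public ActionResult Index(int id)
    {
        // One call per piece of data the view needs; a busier view means
        // several of these calls, each feeding its own mapper.
        Profile profile = this.profileTasks.GetProfile(id);

        // The mapper turns the domain entity into a view model; walking lazy
        // collections in here is where the SELECT N+1 problems creep in.
        ProfileViewModel viewModel = this.profileMapper.MapFrom(profile);

        return this.View(viewModel);
    }
}
```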
S#arp Arch 2.0 has a solution to this problem in the form of the Enhanced Query Object approach. So, to populate a view model we no longer make calls into the tasks layer. Instead we use a query object which retrieves all the necessary data. Because the EQO talks directly to the NHibernate session, we can bypass the layers of abstraction that Ayende derides in his posts, replacing them with LINQ/Criteria/HQL/named queries/whatever else we think is the right fit for the problem.
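As a sketch of what one of these might look like (the names are hypothetical, and the exact base class S#arp Arch 2.0 provides may differ), a query object lives alongside the controller and hands back a fully populated view model in one go:

```csharp
// Hypothetical enhanced query object: talks straight to the NHibernate
// session and returns the view model, not domain entities.
public class ProfilePageQuery : NHibernateQuery, IProfilePageQuery
{
    public ProfilePageViewModel GetViewModel(int profileId)
    {
        // Eagerly fetch the child collection in the same round trip,
        // sidestepping the SELECT N+1 problem the old pattern invited.
        Profile profile = Session.QueryOver<Profile>()
            .Where(p => p.Id == profileId)
            .Fetch(p => p.Skills).Eager
            .SingleOrDefault();

        return new ProfilePageViewModel
        {
            Name = profile.Name,
            Skills = profile.Skills.Select(s => s.Name).ToList()
        };
    }
}
```

The controller then takes a dependency on `IProfilePageQuery` instead of the tasks layer and the mapper, and its action shrinks to a single call.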
Too many specifications
There are quite a few specifications in the domain layer. They fit nicely into two categories:
- Specifications that just shouldn’t be there at all. This covers ProfileByIdSpecification and CategoryByIdSpecification. Simply put, these shouldn’t exist. Instead, the FindOne method on the relevant repository should be used, which simply passes through to the Session.Get&lt;T&gt; method.
- Specifications that aren’t really specifications. Unfortunately, this covers the remainder of the specifications. The problem is, a specification is really intended for codifying a business rule – so an OverdueInvoiceSpecification would capture what it means for an invoice to be overdue. When you look at it in that way, it’s pretty obvious that TagByNameSpecification (for example) is not codifying any business rule and as such has no place in the domain layer.
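For contrast, here’s a sketch of what a specification that genuinely codifies a business rule might look like. The names and the base class are hypothetical; the point is that the rule "an invoice is overdue" lives in the domain layer because it is a business concept, not a data-access convenience:

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical: a specification that captures a real business rule,
// rather than wrapping a simple lookup that Session.Get<T> already does.
public class OverdueInvoiceSpecification : QuerySpecification<Invoice>
{
    private readonly DateTime today;

    public OverdueInvoiceSpecification(DateTime today)
    {
        this.today = today;
    }

    public override Expression<Func<Invoice, bool>> MatchingCriteria
    {
        // "Overdue" means unpaid and past its due date -- the rule is
        // stated once, here, instead of being repeated in query code.
        get { return invoice => !invoice.IsPaid && invoice.DueDate < this.today; }
    }
}
```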
Unfortunately this means I’ll be deleting all the existing specifications. Hopefully I’ll be able to come up with a good example or two to replace them with.
Now, onto a few of my own points:
Anaemic Domain Model
Anaemic Domain Model (Wikipedia, Fowler) normally – and in my opinion, somewhat unfairly – gets labelled as an anti-pattern. It actually made a lot of sense in the original application, because in fact there was no domain logic to speak of. However, since Sharp Arch has all the DDD terms baked in, WCHM needs to do more than pay lip service to these, so I’ll be doing my best to improve this.
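As a hypothetical illustration of the direction I mean (none of this is current WCHM code), the fix is to move behaviour and its invariants onto the entity, rather than leaving the entity as a bag of getters and setters worked on by a service:

```csharp
using System.Collections.Generic;

// Hypothetical richer entity: the invariant lives with the data it
// protects, instead of in a tasks-layer method somewhere else.
public class Profile : Entity
{
    private readonly IList<Skill> skills = new List<Skill>();

    public virtual IEnumerable<Skill> Skills
    {
        get { return this.skills; }
    }

    public virtual void AddSkill(Skill skill)
    {
        // Guard the "no duplicate skills" rule here, so no caller
        // can put the profile into an invalid state.
        if (!this.skills.Contains(skill))
        {
            this.skills.Add(skill);
        }
    }
}
```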
No obviously identified aggregate roots
Following on from the previous point – the code in WCHM doesn’t follow the DDD practice of accessing entities via aggregate roots only, so this will be another thing to improve.
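In practice that means repositories exist only for the aggregate roots, and everything inside an aggregate is reached through its root. A hypothetical two-liner to show the shape of it:

```csharp
// Hypothetical: no SkillRepository -- a Skill is reached through the
// Profile aggregate root that owns it.
Profile profile = profileRepository.Get(profileId);
Skill skill = profile.Skills.Single(s => s.Name == "NHibernate");
```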
Replace existing Tasks with Commands
WCHM currently uses a traditional tasks layer, with the different methods grouped up into a few different application services – CategoryTasks, ProfileTasks, etc. As described above, the query methods will be pulled out of these tasks and moved into queries that will be homed alongside the controllers. The remaining methods will be separated into commands, which will remain in the Tasks layer.
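So a method like `ProfileTasks.UpdateProfile` would become a dedicated command and handler, along these lines (all names hypothetical, sketched against the sort of `IRepository<T>` interface S#arp Arch provides):

```csharp
// Hypothetical command/handler pair replacing a method on ProfileTasks.
public class UpdateProfileCommand
{
    public int ProfileId { get; set; }
    public string DisplayName { get; set; }
}

public class UpdateProfileCommandHandler : ICommandHandler<UpdateProfileCommand>
{
    private readonly IRepository<Profile> profileRepository;

    public UpdateProfileCommandHandler(IRepository<Profile> profileRepository)
    {
        this.profileRepository = profileRepository;
    }

    public void Handle(UpdateProfileCommand command)
    {
        // Load the aggregate, apply the change, persist -- each command
        // does one thing, instead of one fat tasks class doing everything.
        Profile profile = this.profileRepository.Get(command.ProfileId);
        profile.DisplayName = command.DisplayName;
        this.profileRepository.SaveOrUpdate(profile);
    }
}
```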
Merge Web.Controllers and Web
As detailed in Geoff’s Introductory post in this series, we’re merging controllers back into the main web project, and renaming to Presentation.
In reading this, you might think that I’m saying WCHM is a steaming pile of something unpleasant. Nothing could be further from the truth – www.fancydressoutfitters.co.uk is a project that I’m proud to have been part of, and WCHM is an extension of that. The site has been running with no significant unplanned downtime for nearly two years now, from a project that went from nothing to live in just ten two-week sprints.
However, I hate it when code I’ve written is allowed to stagnate – like most of us, I look back on code I’ve written and think about all the ways it could be better. I hope that there will never be a day when I look back on one- or two-year old code and realise I’ve learned nothing since it was written. In the meantime, I’d appreciate any feedback people have on WCHM and this post.
A couple of months ago, myself and some colleagues had the pleasure of pushing the metaphorical big red button and seeing www.fancydressoutfitters.co.uk go live. We all agreed it was one of the best projects we’d worked on in a long time, for a number of reasons. Those reasons can be split into two categories: the team and the technology.
The core dev team was mostly made up of people who’d worked with one another before. For example, I first worked with James back in 2002, well before either of us joined Conchango (as it then was). James had spent a lot of time working with Howard on Tesco Digital, and although I’d never worked with Howard directly I’d had a lot of contact with him on my previous project. We’d all previously worked with Ling (our data architect), Justin (developer, yet to create his own online presence for reasons unknown) and Naz (tester, ditto) on other projects over the previous three years.
On most projects, the team spends a fair amount of time up front becoming effective, and a big part of that is getting to know one another – strengths, weaknesses, the way we all communicate and so on. For us, there were only a couple of people that none of us had worked with on previous projects, and that made it simple for the team as a whole to gel really quickly. The process was simplified even further by a great project manager who knew how to make things happen without needing to control the dev team, and a business analyst who managed to take a huge amount of grief from us all about his gratuitous use of clip art in his PowerPoint presentations whilst at the same time winning the respect of the client for the speed with which he understood how they worked internally.
Also, because the team was well resourced up front, Howard and James were able to spend the time laying the appropriate foundations. James has talked about this previously, so I won’t cover it again, but suffice it to say that the time invested up front paid massive dividends later in the project.
The foundations they laid included the core software stack we would use and the tools and techniques we would adopt. They decided that the core of the site would be based on ASP.NET MVC and S#arp Architecture, which is a best-practice framework joining MVC with NHibernate. They also picked some other bits and pieces to reduce friction – things like Spark, AutoMapper and xVal – and James decided that the project would use Behaviour Driven Development as a testing approach.
When all of these things came together, one thing that stood out was how clean the resulting solution became. Core design concepts such as separation of concerns, encapsulation and dependency injection were baked in, making the code clean and easy to test. The layers of the solution were well defined and understood, meaning it was always obvious where a particular piece of code should live, and it was easy to discuss the codebase between us because we were all talking the same language. The adoption of BDD finally made a test-driven approach make sense for those like me who had always found it a challenge.
As the project neared its end, myself, James and Howard wrote some blog posts talking about various aspects of the solution, but it became hard for all of us to pull out specific bits of code to talk about because removing them from the wider context of the solution caused them to lose their meaning. Howard suggested that we could address this – as well as giving something back to the community that gave us such great software – by putting together an app based on the same architecture and publishing the source code. We could then use it to talk around specific features or areas of the architecture, and at the same time demonstrate to the community the approach we took for the FDO site.
A couple of years back, Howard spent a few hours one evening creating an app called “Who can help me?” Originally intended to capture information about the skills and experience available within Conchango, it took him about 3 hours to put together using ASP.NET WebForms and Linq to SQL. It was the ideal application for our purpose; it has a practical application (we still use it inside EMCC) and it’s simple so the focus can be on the solution design rather than the business logic.
So, here it is: Who Can Help Me, built using S#arp Architecture, NHibernate, Spark View Engine, xVal, AutoMapper, Castle Windsor, MSpec, RhinoMocks and PostSharp. We’ve pulled in some new bits, such as MEF and DotNetOpenAuth, hooked it up to Twitter using TweetSharp to demonstrate the pattern for external service integration, and published the whole lot on Codeplex.
The result is definitely more complex than the problem requires, but that’s ok – as I mentioned above, we specifically chose a simple application to demonstrate an enterprise level architectural approach with the hope that the focus could be on the latter. So please have a play with the live site, download the code, give it the once over and let us know what you think. We’ll be blogging about the bits we find interesting (links below), as well as tidying up the bits we’re not so keen on and we’d love to hear any feedback you have.