Simple Debug Panel for AngularJS

I am aware that there are many, many other ways to get debug information from an AngularJS application but that hasn’t stopped me implementing another one. I do find a customised output of JSON objects genuinely useful, particularly when working with deep object graphs. The implementation is also useful as a worked example of a couple of AngularJS principles.

The Demo Application

The demo application is a bookshelf application that searches the Google Books API and enables the user to add books to their own library. This is my ‘go to’ demo application that I’ve implemented a couple of times in different frameworks. This one uses the MEAN stack and the source code can be found here.

The Problem

When I search the Google API I get back a very large, complex JSON object. I want to know what is in there, as I’m saving parts of it in my own database and might be interested in other parts. I want it displayed but easy to toggle on and off. Ultimately I’m going to read something from the query string to do this, as that makes it easy to toggle with minimal fuss.

When it’s finished it’s going to look like this…

debug panel
Google Books API results displayed in the debug panel

I’m going to use Bootstrap for styling throughout as I’m not hugely concerned about how it looks – it just has to be reasonably neat. So it’s Bootstrap classes throughout, I’m afraid.

The Solution

The solution is going to demonstrate the following

  1. The use of directives in AngularJS
  2. Using global values and config
  3. Setting globally accessible values from the query string.

Step 1: Implement a debug panel

The initial implementation is directly on the search page. When I do a search I want to see the following debug output

Debug panel displayed on book search page

Before I do a search I don’t want to see anything.

The implementation is
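Based on the description that follows, the markup would be something along these lines (the Bootstrap class is my assumption):

```html
<pre class="well" ng-show="searchResults">{{searchResults | json}}</pre>
```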

The scope has a property searchResults holding a giant JSON object straight from the Google API, and I want to show it. I use the json filter to prettify the result and it’s wrapped in pre tags so it displays like code. The ng-show attribute hides it when searchResults is null or otherwise falsy, which is exactly what I want. That’s pretty good and works for me.

Step 2: Make it a directive so it can be reused

Now I want this on every page and I want it to display anything I choose, not just the searchResults. I want it to be generally useful. To achieve this I’m going to use directives, which allow me to include fragments of HTML across different parts of my site. I now want it on the book details page as well.

The first job is to extract the debug panel into a separate HTML file and remove the dependency on searchResults. The debug panel HTML is extracted and saved into its own file at

And it looks like
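Assuming the fragment is just the prettified JSON block, the entire file could be:

```html
<pre class="well" ng-show="display">{{display | json}}</pre>
```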

This is now the contents of the entire file. I’ve removed the binding to searchResults and instead there is a property called ‘display’. This is whatever JSON object I want to show in the debug panel.

Next job is to tell my angular application about my new directive. So I create a new JavaScript file and put this in
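A sketch of what goes in that file; the directive name debugPanel and the template path are my assumptions. The factory is written as a plain function here so it can be exercised standalone:

```javascript
// Directive definition object for <debug-panel>; registered in the real app with
// app.directive('debugPanel', debugPanelDirective);
function debugPanelDirective() {
  return {
    restrict: 'E',                          // used as an element
    templateUrl: 'views/debugPanel.html',   // hypothetical path to the extracted fragment
    scope: {
      display: '='                          // isolate scope, bound to the display attribute
    }
  };
}
```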

This does a few things. It tells Angular where to find the HTML fragment via the templateUrl property. More interestingly, it also uses scope isolation to feed in the JSON object I want to display.

The scope property is the link between the page and the directive. I can bind an object from the parent page scope into an attribute called display and it becomes available inside the directive. That breaks the hard-coded dependency on the parent scope and makes it all more flexible. The ‘=’ tells Angular to set up a two-way binding between the isolated scope property and the attribute of the same name.

That’s probably not hugely clear but finishing the example should help us. The last step is to reference the directive on the book details page. I need to remember to include the script
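Something along these lines, with a hypothetical path:

```html
<script src="scripts/directives/debugPanel.js"></script>
```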

Then the directive is referenced as markup which looks quite neat
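With the directive registered as debugPanel, it appears in the markup in its element form, bound to the book on scope:

```html
<debug-panel display="book"></debug-panel>
```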

So the display attribute is bound to the book property which is then fed through to the debug panel and displayed. I can put anything into it not just searchResults as before.

debug panel on book details
Debug panel displayed on book details page

I can now go back to the search page and implement the directive in the same way.

And that will display my search results in prettified JSON, which I quite like. It’s even better now and I am mentally patting myself on the back. However, I’m not finished. I don’t always want to see the debug panel. Sometimes I just want to see the page as it will be when it goes live, with no debug information at all. I want to be able to toggle it on and off with ease.

Step 3: Using a global setting to toggle the debug panel

I have ambitions for my BookShelf demo site to be truly huge; a site which will make eBay look small. To that end I don’t want to have to turn my debug panel on and off manually everywhere it is used. I want one point of control.

I’m going to set up a DebugOn flag when the application initialises. I could use a global constant for this.

However (spoiler alert) I’m going to want to amend it later on, so a global value is more the way to go. I’m setting this when the app initialises so it will be
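Both options, sketched with a stand-in module object so the fragment runs on its own (in the app, `app` comes from angular.module):

```javascript
// Stand-in for the angular module so the sketch runs standalone.
var settings = {};
var app = {
  constant: function (key, val) { settings[key] = val; return this; },
  value:    function (key, val) { settings[key] = val; return this; }
};

// A constant is fixed once the app starts:
// app.constant('DebugOn', 1);

// A value can still be amended later, which is what we want:
app.value('DebugOn', 1);
```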

I can use the built-in dependency injection to access the global value in any service, filter etc., so I’m going to use that to grab it in the directive script…
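A sketch of the amended directive. DebugOn arrives as an injected parameter and the link function blanks out the element when it is 0 (template path and names are assumptions as before):

```javascript
// Registered in the real app as:
// app.directive('debugPanel', ['DebugOn', debugPanelDirective]);
function debugPanelDirective(DebugOn) {
  return {
    restrict: 'E',
    templateUrl: 'views/debugPanel.html', // hypothetical path
    scope: { display: '=' },
    link: function (scope, elem) {
      if (DebugOn === 0) {
        // Blank every matched element so nothing shows when debugging is off.
        for (var i = 0; i < elem.length; i++) {
          elem[i].innerText = '';
        }
      }
    }
  };
}
```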

The directive has become a bit more complex but it’s not too terrifying. I’m taking the DebugOn value as a DI-injected parameter – very standard Angular stuff. I want to hide the debug panel when it is set to 0. In essence I want to change the DOM based on the DebugOn variable, and the link property enables us to do that.

The elem parameter is a jqLite collection wrapping the directive’s rendered element. For the debug panel that will be a single element, but I will be good and iterate through the collection anyway – we just obliterate the innerText and thus hide the panel from the user.

Now we have done this, turning the debug panel off becomes a simple matter of amending the value of the DebugOn variable in one place and restarting the application.

Just what I wanted.

Step 4: Using the query string to toggle the debug panel

But it’s not enough. It never is. I’m not satisfied with amending the application – I want to just pop something on the query string, have my application pop into debug mode and have my debug panels spring into life like desert flowers in the rain. I want it and I can have it.

Adding a flag to the query string should turn on the debugger, but as I’m navigating around my single page application the query string is going to disappear. I could maintain it but I don’t want to – it seems like hard work. So I’m just going to fall back to some good old fashioned JavaScript. The full URL is available in window.location.href.

So I’m going to use a couple of functions to grab the value and put it in the global variable
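The helper is essentially the getParameterByName function from the Stack Overflow answer linked in the notes below:

```javascript
// Pull a named parameter out of a URL's query string (null if absent).
function getParameterByName(name, url) {
  if (!url) url = window.location.href;
  name = name.replace(/[\[\]]/g, '\\$&');
  var regex = new RegExp('[?&]' + name + '(=([^&#]*)|&|#|$)');
  var results = regex.exec(url);
  if (!results) return null;
  if (!results[2]) return '';
  return decodeURIComponent(results[2].replace(/\+/g, ' '));
}
```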

And the setting of the debug variable becomes
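Sketched with stand-ins so it runs on its own – in the app, `app` is the angular module and getParameterByName is the real helper. The parameter name ‘debug’ is my assumption:

```javascript
// Stand-ins: in the real app, `app` is the angular module and
// getParameterByName is the query string helper from the links below.
var settings = {};
var app = { value: function (key, val) { settings[key] = val; } };
function getParameterByName(name) { return '1'; } // pretend the URL ends ?debug=1

// Anything other than debug=1 leaves the panels switched off.
app.value('DebugOn', getParameterByName('debug') === '1' ? 1 : 0);
```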

And done. I don’t even need to change my app to get the debug panels on. A quick fiddle with my query string and there they are. Lovely.

Further Development

As much as I like the query string idea I wouldn’t want to see it in production, so as an immediate job I would want to disable it in prod environments. It’s not hard but it needs to be done. I’ve got other ideas around multiple panels, debug levels and improved display but I’ll leave those for another day.

That’s it – end of post other than notes. Happy New Year everyone for 2017.

Full Script

There is a bunch of code here, so to aid the discerning coder – this is all the code needed for the debug panels





Note about Immediately Invoked Function Expressions

I haven’t wrapped all my examples in an IIFE. This is just to save space and reduce nesting in the examples. Just mentally put them back in if they are absent i.e.
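For example, with a stand-in module object so the fragment runs on its own:

```javascript
var registered = [];
var app = { directive: function (name) { registered.push(name); return this; } }; // module stand-in

(function () {
  'use strict';
  // registrations stay inside the IIFE rather than polluting the global scope
  app.directive('debugPanel');
})();
```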

Useful Links

AngularJS directives

Scope isolation

AngularJS Constants and variables

Quick explanation about IIFE

The getParameterByName function is from the incredibly highly upvoted answer on Stack Overflow

Google Books API as referenced throughout demo code

Simple String Concatenation Filter for AngularJS

I’m currently tinkering around with AngularJS. I’ve always liked JavaScript and since there are currently one or two (thousand) JavaScript frameworks around I thought I would have a go at Angular. It wasn’t immediately obvious how to cleanly concatenate strings for display so I wrote a filter and it works quite nicely. As ever I’m not making any claims for this to be technically brilliant but it does work, I wrote it with my own coding fingers and I like it.

The Demo

The demo system for this is a simple search for books using the Google Books API. The results display with an infinite scroll. That’s totally irrelevant to this but again I like it. The demo project code is on GitHub at

The Problem

Books have multiple authors. The Google API returns them in an array i.e.
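For a book with multiple authors, the relevant fragment of the response looks something like this (the title and names here are illustrative):

```json
{
  "volumeInfo": {
    "title": "Good Omens",
    "authors": ["Terry Pratchett", "Neil Gaiman"]
  }
}
```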

We display them on the view with a simple binding
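Assuming the book object is on scope, the naive binding would be something like (the property path is my assumption):

```html
<span>{{book.volumeInfo.authors}}</span>
```

which renders the raw array – e.g. ["Terry Pratchett","Neil Gaiman"] – hence the rubbish look.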

But unfortunately the display looks rubbish


The Solution

We could display them correctly by putting a bit of looping logic in the view but I see enough abuse of views in my day job (and I’m guilty of some of it myself) so I would rather not continue view abuse in my spare time. So let’s whip up a filter. It will be cleaner and reusable.

To implement create a file called filters.js and pop in this
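A sketch of the filter. The name joinBy is my invention (the post doesn’t record the original name), and the join logic is kept as a plain function so it can be exercised standalone:

```javascript
// The join logic: an array and a delimiter in, join and return.
function joinBy(input, delimiter) {
  return (input || []).join(delimiter || ', ');
}

// Registered against the application module in filters.js:
// app.filter('joinBy', function () { return joinBy; });
```

In the view it is then called with e.g. {{book.volumeInfo.authors | joinBy:', '}} (the property path is an assumption), and the new file is referenced from the page with &lt;script src="filters.js"&gt;&lt;/script&gt;.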

It’s embarrassingly straightforward. An array and a delimiter in, join and return. We added it directly into our Angular application, which is imaginatively called ‘app’. It’s then available to be called in the view with a PowerShell-style pipeline i.e.

Don’t forget to reference the new file in the view (html page)

And that’s it, everything looks a lot better.


It’s available to use generally in the application whenever we need to concatenate strings. Lovely and easier to implement than I remembered.

Useful Links

Documentation for AngularJS filters can be found at

and an end to end tutorial including filtering is at



5 Ways To Write More Robust SpecFlow Tests

I’ve lost track of the number of times I’ve walked into the office in the morning to be faced with a wall of failed SpecFlow tests. I care about my SpecFlow tests, I look after my SpecFlow tests, I feed my SpecFlow tests and I might even love them. Even so, it still doesn’t stop them failing on me for the most minor of reasons. Here are some tricks, tips and techniques that I have used to write more robust SpecFlow tests.

What are SpecFlow tests?

SpecFlow tests are a way to write automated tests in a natural language format. This lends itself to tests focused around user stories and behaviour driven development working practices.

A test would be something like
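For example – this is the scenario that reappears in the logging example later in the post:

```gherkin
Scenario: Retrieve a list of books
    Given I am using the virtual bookshelf
    And I have searched for a book about 'dinosaurs'
    Then I retrieve a page of 10 results
```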

The test should make sense to all members of a project team irrespective of their technical know-how and background. Each individual step maps to a method via a regular expression
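For instance, the search step binds like this – the method name matches the test output shown later in the post, but the body is my sketch:

```csharp
[Given(@"I have searched for a book about '(.*)'")]
public void GivenIHaveSearchedForABookAbout(string searchTerm)
{
    // drive the search with the captured term
    _searchResults = _bookShelf.Search(searchTerm); // assumed plumbing
}
```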

When built, each test is transformed into an NUnit test which can then be run by the NUnit test runner.

It’s useful to bear in mind that these tests really are NUnit tests and can be treated as such. SpecFlow can be configured to generate tests for different frameworks such as MSTest, but NUnit is the default.

I’m assuming that the SpecFlow tests are running on some kind of continuous build cycle. If they aren’t then that really is the top tip – get them running on a schedule in Team Foundation Server, TeamCity or the like. Once you have your tests running on some kind of schedule then it’s time to toughen them up.

1. Run the most important tests more frequently

If you have plenty of tests, a useful thing to do is split them into smaller groups. I have a full set of SpecFlow tests that takes well over an hour to run and is often failing due to minor disturbances. I’ve also got a smaller group of tests, taking 15 minutes to run, that I’ve identified as being particularly critical to the running of the system. I can accept the full set being red (although I’m not hugely happy) but I really need to see the smaller group of critical tests passing all the time.


Suppose I have three scenarios and I have identified the first two as being of particular importance. The last one I’m less concerned about. So I tag the first two as @important and I can use that tag to limit the run to just those.
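With illustrative scenario names (steps omitted), the tagged feature would look like:

```gherkin
@important
Scenario: Retrieve a list of books

@important
Scenario: Add a book to my library

Scenario: Page through search results
```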

The @important tag is transformed into a category attribute on the underlying NUnit test i.e.
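Roughly, in the generated file (a sketch – the real file contains more plumbing):

```csharp
[NUnit.Framework.Test]
[NUnit.Framework.Category("important")]
public virtual void RetrieveAListOfBooks()
{
    // generated calls to the Given/When/Then steps
}
```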

Since these are just unit tests under the covers, I can use the NUnit command line to run only the SpecFlow tests that I have deemed the most important i.e.
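Using the NUnit 3 console runner’s test selection language (the assembly name is illustrative):

```shell
nunit3-console.exe BookShelf.Specs.dll --where "cat == important"
```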

will just run the first two and ignore the last one.


It’s far easier to keep your most important 20% of tests passing than it is to keep all of them up and running. That 20% is the baseline of your application’s health. If those tests aren’t passing, then it’s red alert.

Running continuously

This technique can be rolled into your continuous builds where it is most useful. To take the example of Team Foundation Server, the category can be specified in the build definition under TestCaseFilter.


TeamCity offers the same functionality.

This technique could be extended to split SpecFlow tests into functional areas so it becomes obvious that the tests are failing in a particular part of the application where other aspects of the system may be very robust and healthy. It also could be inverted to exclude the most fragile tests and keep the majority of the tests that you are most confident about in the continuous build runs.

2. Stress testing

It’s always difficult to deal with tests that fail occasionally. If you have 200 tests and each one fails 1% of the time then you will only get a full pass of your integration tests 13% of the time. If you’ve got a good set of robust tests but 20 of these are flaky and fail 10% of the time then at least one of these bad apples will fail 88% of the time. On an overnight build you will see a fully passing green build once a fortnight. This hardly inspires a massive amount of confidence in your software quality. Of course when you run these bad boy tests individually they will probably pass and claim everything is OK. Frustrating.

However it is possible to run these tests over and over and catch them in the act of failing. The key is to remember that they are really unit tests and can be treated as such.


Considering this test
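Take the bookshelf scenario:

```gherkin
Scenario: Retrieve a list of books
    Given I am using the virtual bookshelf
    And I have searched for a book about 'dinosaurs'
    Then I retrieve a page of 10 results
```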

When built it transforms into this unit test in a class named after the feature
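Simplified, the generated class looks something like this (the real generated file contains a lot more plumbing; the class name is my assumption):

```csharp
[NUnit.Framework.TestFixture]
public partial class RetrieveBooksFeature   // class name follows the feature name
{
    public virtual void FeatureSetup() { /* starts the SpecFlow test runner */ }
    public virtual void FeatureTearDown() { /* shuts it down */ }

    [NUnit.Framework.Test]
    public virtual void RetrieveAListOfBooks()
    {
        // runs the Given/When/Then steps in order
    }
}
```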

We can then create a wrapper unit test to rerun this spec flow scenario as many times as we like
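A sketch of the wrapper, reusing the generated feature class directly (the method names are based on what SpecFlow typically generates):

```csharp
[Test]
public void RetrieveAListOfBooks_StressTest()
{
    var feature = new RetrieveBooksFeature();
    feature.FeatureSetup();             // set up the feature beforehand
    for (var i = 0; i < 50; i++)
    {
        feature.RetrieveAListOfBooks(); // rerun the scenario repeatedly
    }
    feature.FeatureTearDown();          // and clean up afterwards
}
```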

The only things to remember are to set up the feature beforehand and clean up afterwards. With some decent logging in the SpecFlow tests you’ll be able to identify why a test fails 1 time in 20 and fix it. I’ve found this technique really useful in getting rid of intermittent failures.

3. Improve logging in the tests

Which brings us neatly on to writing good test logging. It’s a minor point but like a lot of minor issues it can have a really significant benefit. It is generally useful to instrument your SpecFlow tests so you can see what is happening when they fail (again) overnight.


The ‘Then’ statement is implemented by the following step.
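In its minimal form, something like this (the field names are my assumptions):

```csharp
[Then(@"I retrieve a page of (.*) results")]
public void ThenIRetrieveAPageOfResults(int expectedCount)
{
    _searchResults.Items.Count.Should().Be(expectedCount); // assumed plumbing
}
```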

This won’t tell us a huge amount when it fails but it can be quickly improved by
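A sketch of an improved version, assuming Json.NET for serialisation and Fluent Assertions for the failure messages (the field names are mine):

```csharp
[Then(@"I retrieve a page of (.*) results")]
public void ThenIRetrieveAPageOfResults(int expectedCount)
{
    // write the full result object to the test runner's output pane
    Console.WriteLine("Search results " + JsonConvert.SerializeObject(_searchResults));

    _searchResults.Should().NotBeNull("the search returned nothing at all");
    _searchResults.Items.Count.Should().Be(expectedCount,
        "a full page of results was expected");
}
```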

The Console command writes out to the output pane of whichever test runner I am using. I like to serialise the object to JSON so I get even more detail in the output. Maybe the object has a helpful ToString method I can use, but it probably doesn’t, so JSON serialisation is useful here. The assertions have also been amended to give more detailed output when they fail. My output is much improved.

Test Name:    RetrieveAListOfBooks

Test Outcome:           Passed

Result StandardOutput:

Given I am using the virtual bookshelf

-> done: BookShelfSteps.GivenIAmUsingTheVirtualBookshelf() (0.3s)

Given I have searched for a book about ‘dinosaurs’

-> done: BookShelfSteps.GivenIHaveSearchedForABookAbout(“dinosaurs”) (1.5s)

Then I retrieve a page of 10 results

Search results {"Results":[{"BookID":null,"BookIDSource":"tAr9XAv136kC","Title":"Dinosaurs!","ISBN_10":"0307982696","ISBN_13":"9780307982698","Description":"Dinosaurs! follows the evolution of these spectacular creatures from their earliest beginnings as little fellows who had to evade attacks from giant croc relatives to today's living dinosaurs.","Rating":0.0,"PageCount":24

… truncated …


-> done: BookShelfSteps.ThenIRetrieveAPageOfResults(10) (0.3s)


4. Use fuzzy string matching

I find that failed string matches often trip the tests up. Often the tests are checking validation strings or results from a search, and minor variations can cause a failure. I’ve found it useful to employ fuzzy string matching to mitigate this.


Considering the following test
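Suppose the scenario checks the first title that comes back (an illustrative scenario, reusing the title from the logging example):

```gherkin
Scenario: Search finds the best known dinosaur book
    Given I have searched for a book about 'dinosaurs'
    Then the title of the first result is 'Dinosaurs!'
```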

It could be that the returned results don’t quite match. Perhaps the result isn’t pluralised or has a minor typo. I don’t want to declare the search function broken in those circumstances. The standard way to implement the ‘Then’ statement would be
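An exact match, sketched with Fluent Assertions (the plumbing is assumed):

```csharp
[Then(@"the title of the first result is '(.*)'")]
public void ThenTheTitleOfTheFirstResultIs(string expectedTitle)
{
    _searchResults.Items.First().Title.Should().Be(expectedTitle); // assumed plumbing
}
```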

However we could make this looser by installing a fuzzy matching library such as DuoVia.FuzzyStrings

The step then becomes
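A sketch using the library’s FuzzyMatch string extension method, which returns a similarity score between 0 and 1 (plumbing assumed as before):

```csharp
[Then(@"the title of the first result is '(.*)'")]
public void ThenTheTitleOfTheFirstResultIs(string expectedTitle)
{
    var actualTitle = _searchResults.Items.First().Title; // assumed plumbing
    actualTitle.FuzzyMatch(expectedTitle).Should().BeGreaterThan(0.7);
}
```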

which will be more resilient to minor issues. Obviously you can play around with the tolerance (0.7 currently) to match your circumstances, but minor differences won’t detonate your tests any longer.

5. Write fewer SpecFlow tests

I’ve saved my least helpful tip till last. If SpecFlow tests aren’t particularly robust then it would help to write fewer of them and more unit tests, which will be more solid. I had exactly that message delivered to me by a consultant and it’s a hard message to hear. It’s made all the harder to hear since it’s probably right.

This isn’t going to help an established project with a mountain of failing SpecFlow tests. But on a new project it might be worth trying to favour the more robust unit tests and keep the integration tests for more genuinely end to end testing. Unhelpful. Sorry.

Useful links

SpecFlow is a port of Cucumber. Project page at

How to use the default unit test provider in SpecFlow

Details of NUnit test selection language that I used to filter the SpecFlow tests by category

I like to use Fluent Assertions for my automated testing. It’s just a bit nicer syntax.

Fuzzy string matching and suggested tolerances can be found at

As ever, the source code for the examples is at my GitHub site


Sorting Unknown Images With PowerShell

The Problem

You’ve got a large number of binary files. Some are images but you’ve no idea which ones. Some will be gifs, some jpegs, some bmps and other strange formats. They’ve been dropped on you without file extensions. Perhaps they’ve been extracted from a database blob field. Perhaps they have been partially retrieved from some backup tapes after a system crash. Perhaps they have been unearthed in an Anglo-Saxon burial mound just outside Norfolk. However they arrived, it is now your task to sort them by file type.

The Solution

The broad principle here is that image types are identifiable from their first few bytes. So in hex

  • Jpg starts with “FFD8”
  • gif starts with “474946”
  • bmp starts with “424D”
  • png starts with “89504E470D0A1A0A”

We could use any programming language we choose to sort the images on this basis. I’m going to use PowerShell because

  1. This seems to me like a dev ops type of activity. PowerShell scripts are easy to hook into and run in continuous build and the like
  2. I don’t need to write any kind of UI
  3. I’m practising my PowerShell and trying to get better (the real reason)

The Script

This is the entire script with explanatory comments
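Reconstructed from the description that follows – the directory layout, variable names and the handling of unrecognised files are my assumptions:

```powershell
# Known image signatures, expressed as the hex of their first few bytes
$signatures = @{
    "FFD8"             = "jpg"
    "474946"           = "gif"
    "424D"             = "bmp"
    "89504E470D0A1A0A" = "png"
}

$source = "C:\UnknownFiles"   # hypothetical source directory

Get-ChildItem $source -File | ForEach-Object {
    # Read the first 8 bytes and build a hex string from them
    $bytes = Get-Content $_.FullName -Encoding Byte -TotalCount 8
    $FileHeader = ""
    foreach ($byte in $bytes) {
        $FileHeader += $byte.ToString("X2")
    }

    # Match the header against the known signatures
    $type = "unknown"
    foreach ($sig in $signatures.Keys) {
        if ($FileHeader.StartsWith($sig)) { $type = $signatures[$sig] }
    }

    # Sort the file into a directory named after its type
    $target = Join-Path $source $type
    if (-not (Test-Path $target)) { New-Item $target -ItemType Directory | Out-Null }
    Move-Item $_.FullName (Join-Path $target ($_.Name + "." + $type))
}
```

Note that -Encoding Byte is Windows PowerShell syntax; PowerShell Core uses -AsByteStream instead.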

The core part of the script is

Get-Content gets the content of the file, in this case the first 8 bytes. Each byte is then iterated through and changed into a hexadecimal string (ToString(“X2”)). This is appended to the $FileHeader variable which we use to compare against the known image headers. This allows us to identify which type of image this is. The rest of the script is moving the files around and sorting them into different directories.

If the script is saved into a file e.g. ImageSorter.ps1 it can then be run with the dot sourcing command.
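i.e. from the directory containing the script:

```powershell
. .\ImageSorter.ps1
```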

So that’s it, image sorting in a nutshell. Hopefully the above script will be useful for someone, somewhere at some point. Happy unknown image sorting everyone.

Useful Links

File signatures
A useful, easy to read list of file signatures which could be used in identifying unknown files. Most are headers, but some have an offset, which is given. The above script could easily be extended to account for alternative types using this list. For the (very) interested there is also a detailed breakdown of bmp, gif, jpeg and png files, including header information – a wealth of information on file structure and formats.

Other file types that I didn’t implement such as tiffs  are also detailed.

Alternative implementations

This Stack Overflow answer is a C# implementation of an image sorter should anyone require it. I did pinch the file header information from there (easier than working it out myself) but the rest is my very own work, crafted by my very own coding fingers – promise.

10 tips for working with remote development teams


I’ve been working closely with two remote teams for a couple of years. It works well but it’s not without its challenges. No-one would decide to work with remote teams to make their developers’ lives easier. It might make their lives more interesting, more varied or more frustrating but it won’t make them easier. Here are some general observations about working with remote teams and little tips that might make things easier for everyone involved – based round nothing more rigorous or scientific than my own experience.

1. What you measure, they will do

This is true for developers generally but it is doubly true for remote teams. What you measure, they will do. If you make a big thing about speed then your remote team will deliver quickly. It will be full of bugs but it will be quick. If you have daily calls with the remote team and tell them what an evil thing it is to have bugs reopened then they will make damn sure that they don’t cause bugs to be reopened. They will scour the bug list for easy bugs, take a long time over medium ones and make every effort to get the horrible, gnarly bugs back to the home team. Measure quality and you will get quality but your development might go at the speed of a particularly sluggish glacier.

So do measure but make sure you are measuring the things you really care about and understand the trade-offs that this might entail.

2. Mind your language

It is startlingly easy for the relationships between teams to become strained. You’re convinced the quality isn’t good, the communication is bad and the number of bugs is just ugly. Tensions rise and everyone in the home team is talking about how the ‘Romanians’ aren’t doing their job; how the ‘Indians’ haven’t got a clue; how the ‘Serbians’ clock-watch and go home at the earliest opportunity. The home team is so much better than the rest of them.

Listen to what you are saying. Are you talking about ‘the Serbs’, ‘the Indians’ and ‘the Romanians’? Are you speaking about the remote teams as one homogeneous mass of development unpleasantness? It’s so easy to slip into but it might be a symptom of a deteriorating relationship. It might even be a cause of it. It’s far better to address people by their names rather than their nationality.

3. Time zones matter

It seems obvious but time zones do matter. Obviously if there is an 8 hour difference then working practices will have to be adapted to cope with that. But in my experience even an hour’s time difference has an effect. I have found myself Skyping the remote team at 16.45 with complex demands, forgetting that for them it is 17.45 and home time. Better to show sensitivity and let them go home unless it is genuinely important and needs to be done.

4. Everyone does more than their fair share of work

It’s a psychological fact* that everyone thinks they do more than their fair share of work. Take any group of people and ask them what percentage they contribute to the collective whole, add up the answers and marvel when the combined contribution is well over 200%. Everyone thinks they do more than everyone else.

This is magnified with remote teams. The home team becomes convinced the remote workers are not doing enough. The remote team suffers from increasing and hysterical demands and becomes increasingly disillusioned as their phenomenal efforts are just not appreciated. Everyone loses.

Just remember that you will be utterly convinced that you are doing more than your fair share. In reality you are probably just doing your fair share and everyone is also doing their fair share too.

*(I didn’t even make up this psychological fact – it’s detailed in the book ‘Thinking, Fast and Slow’.)

5. Communication, communication, communication

Make sure that you have excellent lines of communication and the means to reliably do it. In practice this could mean

  • Everyone has a Skype account and is always logged in when they are in the office
  • Everyone has a working headset and knows how to use it.
  • There are spare headsets in the office. Many spare sets.
  • There is an established way to share screens (Skype, TeamViewer etc…). Everyone has accounts and knows how to use them. The IT department supports it and installs the software for all new developers/testers etc…
  • There is an established way for developers to control the machines of the home team – especially important for tester machines when trying to reproduce bugs
  • Everyone who has any cause at all to contact remote workers has all of the above. The remote team has all this as well.

It’s so obvious and so obviously not done by everyone. We certainly could do better at this.

6. Have one point of contact but everyone can talk to everyone

It’s so important to have one person who is your point of contact for everything that your remote team does and it is so important to know when to ignore this and go direct to the source. Sometimes a bug can only be resolved by a remote developer talking directly to the home team tester and working through it. Sometimes a full understanding can only happen with a direct developer to analyst call. Sometimes developers just need to talk to developers.

This does vary for us though. One of the remote teams I work with has very strict lines of communication. The other team also has strict lines of communication but that one has benefited from some direct communication.

7. Have great development infrastructure

It makes things much easier to have great tooling and great development infrastructure. Initially we had 3 source control systems, 2 project tracking systems and a bug tracking system that didn’t integrate with anything else. Our deployment process was based around 3 witches, a giant cauldron and a book of spells. No good for anyone and triply difficult with remote teams. We changed this to 1 source control system, 1 project tracking system and 1 click deployment.

Clearly any development team is going to benefit from this kind of infrastructure goodness but working with remote teams adds complexity so you want everything else to be as frictionless as possible.

8. Write better code

Of course we all write perfect code all the time – it’s just other people that write bad code. However even the best of us sometimes write complex code that is hard to understand. It’s not too much of a problem if you are sat among your fellow developers and can explain to them why the byzantine morass of code you’ve just written is in fact a work of great elegance and is the best way to solve this particular problem. It’s more of a problem if the team is remote.

They can and will be baffled by what you have just written. Much time will be spent on conference calls explaining what you have done. The remote team will desperately try to get you to do the work so they don’t have to look at the code at all. It’s ultimately far easier to write easy code in the first place.

9. Pay attention to the fixed costs of outsourcing

I think that any remote team carries a fixed cost of outsourcing, so 3 remote teams of 4 developers are going to be more time-consuming to manage than 2 teams of 6. It seems obvious, but we still ended up with isolated individual developers working remotely, which worked less well. So size does matter, and it’s worth paying attention to the size and structure of the remote teams – we found individual remote workers to be particularly problematic.

10. Go visit

Really this is tip number one and it makes the other tips obvious. It’s great to go and visit the remote team if you have the opportunity at all. Not everyone is going to be able to do this, but if the opportunity is there then take it. The benefits I found were

  • It’s suddenly really obvious what the communication problems are
  • Email addresses and Skype ids become real people
  • You realise that time zones do matter
  • It becomes apparent how hard these people do work and how much they care about what they do
  • You might spot other opportunities for outsourcing or what activities would be far better done by the home team
  • The remote team will have better ways of doing things that you can adopt and take home

And of course you get to visit another country which is always great.

And if you disagree with all this

These are just observations from my own experience of working with two remote teams. I’m sure every situation is different but I really do think there are common problems. If you are in a furious rage and disagree with all this then it’s all good. Just have a lovely cup of tea and calm down by reading outsourcing tips courtesy of Scott Adams and Dilbert.

70-486 Developing ASP.NET MVC Web Applications – Study Notes Part 2

I’ve previously posted about preparing for 70-486 and some of the general materials that are available. Now I’m going to go through the syllabus a section at a time and highlight additional resources that I found useful. As I said before, a lot of the syllabus is well covered in these two books.

Professional ASP.NET MVC 5
By Jon Galloway, Brad Wilson, K. Scott Allen, David Matson

Professional ASP.NET MVC 5
Professional ASP.NET MVC 5


Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications
by William Penberthy

Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications
Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications

The links below are a supplement to reading these. Sometimes the books’ coverage is all you need – I’ve indicated where this is the case.


The syllabus is at

I’ll go through each section, comment and provide links.

Syllabus part 1: Design the application architecture

Plan the application layers

Plan data access; plan for separation of concerns; appropriate use of models, views and controllers; choose between client-side and server side processing; design for scalability

Fairly nebulous content that is covered well by Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications.

Using an Asynchronous Controller in ASP.NET MVC

Task Cancellation

Wait Handles

Unit of work and repository pattern

More on repository pattern

Design a distributed application

Design a hybrid application (on-premises versus off-premises, including Azure), plan for session management in a distributed environment, plan web farms

Vague content that has variable coverage in the books. The content is focused around Azure, web farms and web service (SOA) based architectures. Professional ASP.NET MVC 5 has an excellent section on Web API but you will need to look elsewhere for alternative web service technologies.

WCF attributes

Consuming WCF


Hybrid Applications

Azure AppFabric

Design and implement the Azure role life cycle

Identify and implement Start, Run, and Stop events; identify startup tasks (IIS configuration [app pool], registry configuration, third-party tools)

Well covered by Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications but it was unfamiliar to me so I needed extra reading.

General introduction to cloud services

Startup Tasks


Configure state management

Choose a state management mechanism (in-process and out of process state management), plan for scalability, use cookies or local storage to maintain state, apply configuration settings in web.config file, implement sessionless state (for example, QueryString)

A lot of this content hasn’t changed much since web forms, so it shouldn’t be much of a problem. However, there are additional considerations to bear in mind when dealing with Azure.

State management overview

Session State

State server vs SQL Server

Application State

Profile vs Session state

View Bag vs View Data

Windows Azure state management

Design a caching strategy

Implement page output caching (performance oriented), implement data caching, implement HTTP caching, implement Azure caching

Again, a lot of this is content that hasn’t changed that much since the old web form days. I needed extra reading about caching with Azure sites however.

Good overview of non-Azure caching

Data caching

Page Output Caching

Output Cache Attribute

Output Cache Attribute Location

Azure Caching

Design and implement a WebSocket strategy

Read and write string and binary data asynchronously (long-running data transfers), choose a connection loss strategy, decide a strategy for when to use WebSockets, implement SignalR

Unlike the last two sections, this is very much new stuff. It would be easy to spend a long time on this but it’s only a small part of the exam. An overview and understanding of when to use these techniques is probably about the right level. The exam ref book gives a good overview.

Web socket API

Web socket client

Web socket server

Signal R

Signal R example

Design HTTP modules and handlers

Implement synchronous and asynchronous modules and handlers, choose between modules and handlers in IIS

Not too difficult. This content hasn’t changed much in recent versions of MVC which makes things a lot easier. Know the difference between modules and handlers and in what situations each should be used.


Order of Http Module calls

Syllabus part 2: Design the User Experience

Apply the user interface design for a web application

Create and apply styles by using CSS, structure and lay out the user interface by using HTML, implement dynamic page content based on a design

I don’t spend an awful lot of my time creating beautiful UIs for web front ends so this content was less familiar to me. A solid understanding is required – a bit more than just an overview.


CSS Selectors

HTML5 tutorial

HTML5 Canvas element

HTML5 Canvas element fallback

HTML5 Video element

HTML5 Video element fallback

Design and implement UI behaviour

Implement client validation, use JavaScript and the DOM to control application behavior, extend objects by using prototypal inheritance, use AJAX to make partial page updates, implement the UI by using JQuery

Like the previous section but this time your JavaScript and JQuery skills are under scrutiny. There is a lot of content out there – I’ve just put links to the content where I personally had gaps.

JavaScript Prototypal Inheritance


Ajax Helper


Compose the UI layout of an application

Implement partials for reuse in different areas of the application, design and implement pages by using Razor templates (Razor view engine), design layouts to provide visual structure, implement master/application pages

Standard MVC stuff focussed around Views and Razor engine. Professional ASP.NET MVC 5 is very good and covers this off well so probably no need to look any further. I’ve provided a few links just in case.

MVC Views



Razor view engine

Enhance application behaviour and style based on browser feature detection

Detect browser features and capabilities; create a web application that runs across multiple browsers and mobile devices; enhance application behavior and style by using vendor-specific extensions, for example, CSS

The exam ref book was good enough for me for this one. There isn’t a huge amount of content as compared to some of the other sections.



Plan an adaptive UI layout

Plan for running applications in browsers on multiple devices (screen resolution, CSS, HTML), plan for mobile web applications

More mobile adaptation content. A bit meatier than the previous section but nothing to worry about. Professional ASP.NET MVC 5 has some good content on this.

CSS Media Queries

JQuery Mobile

MVC Mobile Features

Designing for mobiles

Syllabus part 3: Develop the user experience

Plan for search engine optimization and accessibility

Use analytical tools to parse HTML, view and evaluate conceptual structure by using plug-ins for browsers, write semantic markup (HTML5 and ARIA) for accessibility (for example, screen readers)

I remember a Dilbert cartoon where he refers to SEO consultants as pantless weasels. That’s unlikely to come up on the exam. I don’t think there is much to this – the exam ref book is perfectly adequate. There is a bit more meat in the accessibility content but again the exam ref book is fine. No additional links this time.

Plan and implement globalisation and localisation

Plan a localization strategy; create and apply resources to UI, including JavaScript resources; set cultures; create satellite resource assemblies

Globalisation hasn’t changed a huge amount over the years so anyone with general MVC experience should be OK. The exam ref book is good here particularly going through globalisation with JavaScript which I was personally not that familiar with.

Good overview

Resx Files

Design and implement MVC controllers and actions

Apply authorization attributes, global filters, and authentication filters; specify an override filter; implement action behaviors; implement action results; implement model binding

A lot of content here and one to definitely be familiar with. The book Professional ASP.NET MVC 5 is excellent here so no extra reading is required.

Design and implement routes

Define a route to handle a URL pattern, apply route constraints, ignore URL patterns, add custom route parameters, define areas

Again Professional ASP.NET MVC 5 is excellent with a comprehensive chapter dedicated to this. However this has changed with attribute routing so make sure you are covering the most up-to-date material.

Routing overview

Attribute routing

Control application behaviour by using MVC extensibility points

Implement MVC filters and controller factories; control application behavior by using action results, view engines, model binders, and route handlers.

A complex area that is again well covered in Professional ASP.NET MVC 5 however you may find additional material useful in this area.

Routing Extension

Custom Action Result Http Headers

Filter Extensions Action Filters

Custom Authorisation

Custom Exception Filter

Reduce network bandwidth

Bundle and minify scripts (CSS and JavaScript), compress and decompress data (using gzip/deflate; storage), plan a content delivery network (CDN) strategy (for example, Azure CDN)

Not much content in this one. The exam ref book gives a perfectly adequate coverage.


Syllabus part 4: Troubleshoot and debug web applications

Prevent and troubleshoot runtime issues

Troubleshoot performance, security, and errors; implement tracing, logging (including using attributes for logging), and debugging (including IntelliTrace); enforce conditions by using code contracts; enable and configure health monitoring (including Performance Monitor)

I found this surprisingly hard going. Health monitoring is a drag to learn particularly as I don’t believe people actually use it. IntelliTrace feels a slog as well. Code contracts are interesting though and do come up on the exam. One to know.


Using IntelliTrace to debug live issues

Code Contracts

Health Monitoring

Interestingly, Health Monitoring was broken in MVC 2.0

Does anyone actually use Health Monitoring?

Design an exception handling strategy

Handle exceptions across multiple layers, display custom error pages using global.asax or creating your own HTTPHandler or set web.config attributes, handle first chance exceptions

Definitely one to be familiar with but standard stuff with few surprises. Professional ASP.NET MVC 5 has good coverage once again.



Test a web application

Create and run unit tests (for example, use the Assert class), create mocks; create and run web tests, including using Browser Link; debug a web application in multiple browsers and mobile emulators

A frustrating section. The exam is focussed around Microsoft testing technologies (Shims, MSTest etc..) but I personally don’t use these and I doubt they are in wide use. That said, knowledge of NUnit or similar is useful here but specific knowledge about MS technologies is sadly required.

Browser Link



Debug an Azure application

Collect diagnostic information by using Azure Diagnostics API and appropriately implement on demand versus scheduled; choose log types (for example, event logs, performance counters, and crash dumps); debug an Azure application by using IntelliTrace, Remote Desktop Protocol (RDP), and remote debugging; interact directly with remote Azure websites using Server Explorer.

One of those subjects that it’s really difficult to get practical experience of unless you happen to be using it on a day-to-day basis. Realistically it’s not a good use of time to set up an entire Azure solution just so you can practice debugging it. Do your best with the reading materials available. The exam ref book has some coverage and here are a few more links.


Enabling debugging in Azure

Performance counters

Syllabus part 5: Design and Implement security

Configure authentication

Authenticate users; enforce authentication settings; choose between Windows, Forms, and custom authentication; manage user session by using cookies; configure membership providers; create custom membership providers; configure ASP.NET Identity

Authorisation and authentication have changed quite a bit over the years in ASP.Net so this is quite a big subject. Try to ensure you are current. Lots of links here to help out.

IIS Authentication

Difference between digest and basic authentication

Windows authentication

.Net Authorisation History

SQL Membership Provider


ASP.Net Identity

Advantages and disadvantages of .net identity

Encrypting Credentials in web.config

Configure and apply authorisation

Create roles, authorize roles by using configuration, authorize roles programmatically, create custom role providers, implement WCF service authorization

I do realise that there is a difference between authorisation and authentication (really I do) but there is overlap in the materials so many of the links in the previous section cover this material as well. Watch out for the WCF material here though.

Authorise filters

AllowAnonymous attribute

WCF authorisation

Design and implement claims-based authentication across federated identity stores

Implement federated authentication by using Azure Access Control Service; create a custom security token by using Windows Identity Foundation; handle token formats (for example, oAuth, OpenID, Microsoft Account, Google, Twitter, and Facebook) for SAML and SWT tokens

I personally found this the hardest topic by far. Very technical, almost academic content. It’s hard to find resources that give a ‘jump start’ to this topic. These links are the most useful of what I found.



Windows Identity Foundation

SAML Tokens

SWT Tokens

JWT Tokens


Creating a Security Token Service

Claims with WIF

Security Token Handlers

Azure Access Control Service

Azure Access Control Service Road Map

Manage data integrity

Apply encryption to application data, apply encryption to the configuration sections of an application, sign application data to prevent tampering

Good coverage in Professional ASP.NET MVC 5. Here are a couple of extra links to fill out that content.

SHA1 is stronger than MD5

MD5 is not considered secure

Implement a secure site with ASP.NET

Secure communication by applying SSL certificates; salt and hash passwords for storage; use HTML encoding to prevent cross-site scripting attacks (ANTI-XSS Library); implement deferred validation and handle unvalidated requests, for example, form, querystring, and URL; prevent SQL injection attacks by parameterizing queries; prevent cross-site request forgeries (XSRF)

High fives and celebratory backslaps all round. You’re nearly at the end. And happily this is some of the best and most interesting content. Professional ASP.NET MVC 5 has the best content that I have ever read in this area so there really is no need to go elsewhere. No extra links this time. None needed.

Good Luck

So best of luck everyone. Microsoft exams aren’t perfect but when I’m looking at CVs for potential hires it always gives me a warm glow when someone has a couple of current MS exams under their belt. Hope it goes well for you.

70-486 Developing ASP.NET MVC Web Applications – Study Notes

I recently studied and passed the Microsoft exam Developing ASP.NET MVC Web Applications. Hooray. I thought it was a fair exam covering mostly helpful content – not something I can say for all the exams (70-551, I’m looking at you). I wrote quite extensive notes on the exam so I thought I would tidy them up and post them for general use. I’ve posted general tips here and a more detailed breakdown of the syllabus in the next post.

Overall impression

If you use ASP.Net MVC in your day to day job that’s really going to help but it’s not enough. Generally be familiar with

  1. MVC ASP.Net (obviously)
  2. HTML5 and CSS3 – you don’t need to be an expert but a good grounding is helpful
  3. Azure platform as a service – there is a goodly amount of content on this
  4. Security – this has evolved in MVC 5 so a current understanding is needed
  5. Good understanding of HTTP and how the web works generally

So, just because you use MVC doesn’t mean you know enough. Things like security are something no-one deals with on a day-to-day basis. Typically, someone set this up years ago in your organisation and no-one has gone near it since. Not good enough – you need to know about it.

Programme of study

I always make heavy weather over studying and do too much but this was my general pattern

  1. Watch an overview video – just to get into the mood. Take a bath, light some candles and whet the development appetite.
  2. Read the syllabus
  3. Buy a couple of MVC books. Read them but cross reference against the syllabus. Unless you are desperately interested, focus on exam content.
  4. Read syllabus again. Get onto Internet and fill in the gaps. Make copious notes
  5. Read syllabus again. Buy some practice exam questions (but see warning below)
  6. Take exam – pass hopefully
  7. (Optional) write a blog post about it all

Use the syllabus

Sounds obvious, but the key with these exams is to go through the syllabus and ensure you have covered it off. It’s easy to lull yourself into a false sense of security by reading some MVC books and watching some videos and feel that all is well and you’ve covered it. Read the syllabus again and make sure that you have.


I read two books for this – both of which were very good.

Professional ASP.NET MVC 5
By Jon Galloway, Brad Wilson, K. Scott Allen, David Matson


A good book and recommended. It is excellent for general MVC, security and extending MVC. Security is particularly good. Nothing on Azure though, so reading this book isn’t going to be enough. I would recommend reading it whether you are taking the exam or not. I enjoyed it.

Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications
by William Penberthy


Good again. It makes a good job of covering the syllabus and focuses the mind on the exam. I read this through and read some parts two or three times. Not a particularly enjoyable read though, and I wouldn’t recommend it as general reading. It is high level and you will need to research the unfamiliar parts yourself to bolster understanding.


I personally don’t learn well from videos but I did watch some for this. I know other people find videos the best way to learn.

Microsoft Virtual Academy

Free, good-quality videos presented by chirpy Americans. What’s not to like? I watched Developing ASP.NET MVC 4 Web Applications Jump Start, which was excellent (though not that much help for the exam, ironically). The updated one is Introduction to ASP.NET Core (formerly ASP.NET 5). Don’t expect great exam coverage, but unless you are a ‘manic exam crammer who only wants to pass and nothing else’ then I would definitely watch it.


There is an extensive learning path for 70-486 on Pluralsight. I didn’t watch any of it – 60 hours of video is too much for me and, as I’ve said, I don’t learn well from videos. But for those that do, this is a good option. I have watched many Pluralsight videos over the years and they are excellent, so I’m going to make an uninformed recommendation of this content on that shaky basis.

General internet resources

These are general resources or general web stuff that is good to know but isn’t explicitly on the syllabus so is easy to miss. For a detailed breakdown of the syllabus see the next post.

ASP.Net site

Good general resource. Good start and fills you in on lesser known subjects.

Web lifecycle

Two good general resources on web application life cycle are

Http request response cycle
Life cycle of an MVC application 

The exam expects you to understand this. It assumes you do, so if you don’t – well, learn it. Also know about HttpRequest and HttpResponse headers. This stuff comes up time and time again.

What’s new in MVC 5

One of the hard things about MS exams is that a lot of the resources aren’t quite current enough and will be focused on the wrong techniques. This excellent code project post highlights the new stuff in MVC 5. Just so you know, the new stuff is around …

  1. Attribute based routing
  2. Filter overrides
  3. ASP.Net Identity
  4. Scaffolding

The security features particularly are a big change and aren’t that well documented.

These are the best posts I found around this area. There are more comprehensive posts but they tend to be baffling.

Exams questions

Who doesn’t like to use exam questions for revision? It’s like a giant security blanket of exam goodness wrapped around you.


There are several legitimate sellers of exam material. For this one I used Transcender, as I have happy memories of the quality of their questions from a few years ago. I was shocked. The product is of extremely low quality. Questions were missing great chunks of information and didn’t make sense, the topics covered were all wrong (questions about web forms on an MVC exam), correct answers were clearly wrong, etc. I spent my company’s money on this (much appreciated) and even then I wanted my money back. If I had spent my own money I would have felt even more abused and exploited. The exam questions were written by someone who hadn’t taken the exam and didn’t know anything about the subject. I think they plundered the old web forms exam (70-315) for material. Poor show guys.


The Microsoft recommended seller of questions. I didn’t use this for 70-486; however, I recently used it for another Microsoft exam. It was significantly better than Transcender but far from perfect. You can buy them in a package that includes resits, as detailed next.

Second Shot

The best thing is probably to use the exam itself as its own practice. For not very much more money you can get four resits, so a few goes at your local test centre will stand you in good stead. It is time limited, but every year Microsoft have some kind of free retake offer that lasts six months or more and can include exam questions.

I bought 4 resits but I can’t go into an exam unprepared so psychologically this didn’t work for me. I meant to just ‘give it a go’ but instead I spent hour after hour preparing for the exam anyway even though I had free resits in the bag. But mental quirks aside, this is a good option for exam prep.

Next Steps

So now you’ve done the general reading it’s time to bite your lip and dive deep into the syllabus. The next post holds your hand while you are swimming about in the murky depths of the syllabus.

Passing Parameters with AutoMapper

OK, I admit it – sometimes I find AutoMapper hard to work with. It’s fantastic when everything is, well… auto, but some of the mappings I work with on a day-to-day basis are anything but automatic. There is not much auto about the automappings. But enough of my troubles – one of the lesser-known features of AutoMapper that I have found useful is the ability to pass parameters around. This allows you to implement mappings that depend on some other part of the object graph; a part that is not normally accessible in the context you are working with.


For a demo I have a simple application that retrieves customers who have visited my lovely shop. I retrieve customer database entities and I want to map them to some DTOs. Just for the example we will display the objects in a console application – I’m not concerned with the UI right now so this is good enough. I guess I missed the UI design parts of my university course. I must have slept in that day.

Database Entities

The customer objects that we retrieve from the database are

This is just a straight representation of our database. It’s a customer of a given type with an address. Too easy.

DTO message

As is very standard we want to map the database entity to a set of DTOs. These are

The Problem

We want the residential property on the address DTO to be false if the customer type is business but true otherwise. So let’s try that.

Constructing the mappings

Mapping an object with no parameters

So our first mapping is very straightforward

We are letting the automapper magic do its thing. HouseName maps to HouseName, Street maps to Street and so forth. It all looks very good.

However, if I retrieve a customer that I know is residential, the residential property isn’t set. The output of my mapping is


The residential property is false. This makes sense – I haven’t mapped

to anything. In fact my mappings aren’t really valid at all. If I check the mappings in code with

It will throw an exception with the message

Unmapped members were found. Review the types and members below.

Add a custom mapping expression, ignore, add a custom resolver, or modify the source/destination type

For no matching constructor, add a no-arg ctor, add optional arguments, or map all of the constructor parameters


Address -> AddressDto (Destination member list)

Automapper.Entities.Address -> Automapper.Messages.AddressDto (Destination member list)

Unmapped properties:


This tells me exactly what my issue is. The valid mapping is

And my full code for this is

Which has the same output as before but this time is actually valid.


How can I set the residential property? The problem is that I need information from the customer object when the address object is being mapped. However when I am mapping the address object I’m in the context of the address and I don’t have the customer information. I can’t do the mapping.

Mapping an object with parameters

What I want to do is pass information from the customer into the address mappings. I’m going to use parameters for that. The call to Map has a ResolutionContext. This has a dictionary object that I can use to pass information through, i.e.

So when I call the mapper I can pass what type the customer is, using the overload with ResolutionContext

So now the context knows what type of customer this is. I need to grab the context in the address mappings. Happily there are several hooks into the resolution context I can use within the mappings. For this I’m going to use a custom value resolver

The interesting part of the code is

I am using a resolver to access the information in the resolution context and set the residential property using this. It works and the residential property is set correctly
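The shape of this technique can be sketched in plain JavaScript for illustration (AutoMapper itself is C#, and every name below is mine, not from the post): the mapping function receives an items dictionary – the analogue of the ResolutionContext dictionary – and the resolver logic reads from it.

```javascript
// Conceptual sketch only - illustrative names, not AutoMapper's API.
// The mapper carries an 'items' dictionary, the analogue of the
// ResolutionContext dictionary, which the resolver logic reads.
function mapAddress(address, items) {
  return {
    houseName: address.houseName,
    street: address.street,
    // Resolver logic: residential is derived from information that
    // lives outside the address itself, passed in via the context.
    residential: items.customerType !== 'Business'
  };
}

var address = { houseName: 'Rose Cottage', street: 'High Street' };
console.log(mapAddress(address, { customerType: 'Business' }).residential);    // false
console.log(mapAddress(address, { customerType: 'Residential' }).residential); // true
```

The point is the same as in the C# version: the residential flag depends on data that lives outside the address being mapped, so it has to travel in via the context.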


However, I am relying on passing the information into the mappings myself. If I forget to pass in CustomerType then the mapping collapses with a null exception. Also, I want these mappings to use the full AutoMapper goodness and be able to map lists of customer entities into lists of customer DTOs. As it stands, it’s not going to do it.

Mapping lists of objects with parameters

To map lists of objects I want the mappings to set the context themselves. That’s going to be good as it hides the detail away from consumers of the mappings and avoids mistakes. I’m going to use the BeforeMap function, which also has an overload for ResolutionContext.

As the name implies it runs before the mapping takes place so it’s a good place to set up the context. So the call to the mapper becomes

End consumers no longer need to worry about the context; it just works. We can use the amended code to map lists of objects i.e.


The residential property is set correctly for each object in the list. Job done!

Demo Project

Full source code for this post is on github at

Useful Links


Worked example of automapper custom value resolvers

To output a string representation of the object to the console I used a nifty bit of code on stack overflow to create a generic ToString method for objects. Nice!


Handlebars.js and Working with Complex Objects in MVC

In a previous post I used an EditorFor template to display child objects correctly on a view. Now we are going to use some simple JavaScript templating to add child objects. We have a page displaying book details and want to add and save additional authors.

Demo app

The demo app is a simple bookshelf application. It has a view that uses EditorFor templates to display a book which we want to save. The book can have multiple authors. It is a complex object.

The book is rendered onto the Book.cshtml view and the user sees a book that can then be edited and saved as required.


The authors are rendered in this format

When this form is posted back, the default model binder in MVC uses the name attribute to correctly bind the authors as a child object of the book object. So we get a correctly populated book object passed through to the controller action.

The book parameter is populated with the correct authors. Nice. In general, to work with child objects the correct format for the name attribute in the HTML form element is ..

We’ll need to bear that in mind when we are creating a control to add new authors.
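As a quick illustration (the helper below is hypothetical, not part of the post’s source), the indexed name format the default model binder expects can be generated like this:

```javascript
// Build the name attribute that MVC's default model binder expects
// for an item in a child collection, e.g. "Authors[2].Name".
// Hypothetical helper for illustration only.
function childFieldName(collection, index, property) {
  return collection + '[' + index + '].' + property;
}

console.log(childFieldName('Authors', 0, 'Name'));     // Authors[0].Name
console.log(childFieldName('Authors', 2, 'AuthorID')); // Authors[2].AuthorID
```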

The Problem

The authors display correctly and work well, but what if we want to add additional authors to the book before saving? Perhaps the book is missing authors. Perhaps I’m a rampant egotist and want to add myself as an author to every book in my virtual bookshelf. Let’s use Handlebars.js and some jQuery goodness to do this.

The Solution


Handlebars.js is one of many JavaScript templating engines that bind data into fragments of HTML to produce data-driven templates. Handlebars takes a template such as

And binds it against some JSON

To produce HTML

That’s going to be good for us. We’ve got some pretty chunky fragments of HTML and we want to work with it in the cleanest way we can.
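To make the idea concrete, here is a hand-rolled sketch of the substitution step (this is not the Handlebars library – it ignores escaping, helpers and block expressions – just an illustration of binding data into `{{placeholder}}` holes):

```javascript
// Minimal stand-in for template compilation: replaces {{key}}
// markers with values from a data object. Illustrative only - the
// real Handlebars also handles escaping, helpers, blocks, etc.
function compile(template) {
  return function (data) {
    return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
      return data[key] !== undefined ? data[key] : match;
    });
  };
}

var template = compile('<li>{{name}} ({{index}})</li>');
console.log(template({ name: 'Terry Pratchett', index: 2 }));
// <li>Terry Pratchett (2)</li>
```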

Integrating Handlebars.js into MVC

Install the package using nuget

In it goes to the application. Now create a script bundle in BundleConfig.cs

and reference that bundle in the view

Creating the Handlebars template

Let’s dive straight in and create the template. We want an HTML fragment that will generate us a new author. Handlebars templates need to be in script tags with the type

our full template is

This gives us a hidden field for the AuthorID and a text field for the author name. The curly brackets tell us where the holes in the template are so

Are the two pieces of data that the template needs. Handlebars will replace these with the data we supply it.

Constructing the Author add control

Now we need an HTML control to add the author. We want something like this


So a textbox and a button with a squirt of CSS

Let’s assume that jQuery is already referenced and available. Next job is to wire up the click event to the button

Which is standard stuff.

Obtaining the data for the template

Next we need to construct the JSON data that Handlebars will use to generate the HTML. We need two pieces of information

  1. The author’s name as input by the user
  2. The next index of the author elements

Grabbing the author name is straightforward

However, to get the next index we need to find the current highest index. First we need to navigate the DOM to find the last author. The authors are all wrapped in the ‘shelf-authors’ class to facilitate this

then we want the name attribute

Then we want the last index. If there are 3 authors then lastAuthorFormName will contain

So some regex to extract the current index, i.e. the number in square brackets

The entire regex matches “[2]” and the first and only group (\d*) matches the digits themselves. The output of this is an array. The first element of the array is the full match. All subsequent elements are the matches for the capturing groups (defined in regex by parentheses). So

would equal “[2]” and

would equal “2”. So to get the next index we need
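Putting that together (the exact pattern and variable names are my reconstruction of what the post describes, not its verbatim source):

```javascript
// The name attribute of the last author field, e.g. "Authors[2].Name".
var lastAuthorFormName = 'Authors[2].Name';

// Match the index in square brackets; the capturing group (\d*)
// isolates the digits themselves.
var match = lastAuthorFormName.match(/\[(\d*)\]/);

console.log(match[0]); // [2]  (full match)
console.log(match[1]); // 2    (first group)

// The next free index is the current highest index plus one.
var nextIndex = parseInt(match[1], 10) + 1;
console.log(nextIndex); // 3
```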

Now we have all the information we need to construct the data for the handlebar template

Gluing the template together

Gluing the data into the template is pretty much boilerplate code

Grab the template with a CSS selector then compile and combine. The HTML is now correct and can be inserted into the DOM

The Complete Solution

So combining all the steps together we get

The HTML element is now correct and when this is posted back the new author will be present in the action method i.e.

The book parameter will have a fourth author which will be saved.


So I can now add myself as an author to every book in my library, which goes some way towards feeding my monstrous and uncontrolled ego – which is what it’s all about really.

Useful Links

Project page for handlebars

Nuget page for handlebars

Demo application source


ASP.Net MVC and Binding Complex Objects Magic

I’ve a habit of declaring things magical when I don’t fully understand how they work and haven’t the time to look into them any further. One of the things I’ve often declared magical is model binding with MVC. It just works. Magical.

Recently though I found model binding with child objects a bit on the difficult side of things. The magic was failing to sparkle correctly and needed a little bit of extra developer sprinkles to work. Perhaps it even needs developer understanding.

Demo Application

I’m going to demonstrate the problem and solution using a demo library application. The application queries the Google Books API and allows users to edit and add those books to their own virtual bookshelf. The bookshelf app can be found here


The relevant part of the application is adding of books to the library. The work flow is …

  1. User searches for a book
  2. Application queries GoogleBooks and displays a list of matches
  3. User selects a book
  4. Application displays the selected book.
  5. The user amends the book record as required
  6. The application saves the book

So once the user has typed in a search term they are presented with a list of books which they can select.

book list

Once selected, a view showing the detail of the book is presented to the user. The user is invited to save the book into his or her bookshelf.

Fig 2: Book.cshtml

The problem is that the authors are not saved correctly. It’s pretty obvious why: the authors aren’t displayed correctly, and the HTML output looks fairly nonsensical.

We are going to have to dig into the application a bit further to fully understand this.

The Model

The model of the application is
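The exact classes aren’t reproduced here, but a minimal sketch of the sort of model involved might look like this (the property names are illustrative):

```csharp
using System.Collections.Generic;

// Illustrative sketch: a book holding a collection of authors.
public class Author
{
    public string Name { get; set; }
}

public class Book
{
    public string GoogleId { get; set; }
    public string Title { get; set; }
    public List<Author> Authors { get; set; }
}
```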

So a book can have multiple authors – it is a complex object. It is this complex object, and how it is rendered on the Book.cshtml view, that causes us the problem. When the form is posted back, the book object lacks its authors. The binding of the complex model has failed.

Book Create Action

When the user selects a book from the list, this action is called. It displays the book ready for saving. We pass in an id (from the list selection), then the service goes away and grabs the full details of the book.

The book is then displayed on the view Save.cshtml. The user presses submit which posts the book back to the server.
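A sketch of what such an action might look like (the service field and method names are my own assumptions, not from the original code):

```csharp
// Sketch of the Create action: fetch the full book record by id
// and hand it to the Save view for editing. _bookService is illustrative.
public ActionResult Create(string id)
{
    Book book = _bookService.GetBookById(id);
    return View("Save", book);
}
```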

Book Save Action

This action is called when the user saves a book to his or her bookshelf. The posted back book is saved.

This action takes the book object and it is here that we want form data to bind and produce the complex object. We want the model binding to produce the book object as a parameter. We want the magic. The magic fails us.
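A hedged sketch of the Save action – the point is that the default model binder is expected to construct the Book parameter, authors and all, from the posted form data (the persistence call is illustrative):

```csharp
// Sketch of the Save action: the model binder should build the Book,
// including its Authors list, from the posted form fields.
[HttpPost]
public ActionResult Save(Book book)
{
    _bookService.SaveBook(book); // illustrative persistence call
    return RedirectToAction("Index");
}
```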

The Problem

It doesn’t just magically work, much to my disappointment. What is displayed is not helpful.

Fig3. Book.cshtml with incorrect author display

Posting this page doesn’t bind the child objects, and the authors aren’t there – the HTML for them is just not there. So how can I easily, and with as much magic as possible, change the view so that the authors are present and the form elements are named correctly? What does the default model binder actually want, and how can I provide it?

The Solution

EditorFor Templates

The view needs to know how to display the author object. I could loop through it in the view, but a better solution is an EditorFor template.

In Views/Shared/EditorTemplates create a file called Author.cshtml. By convention MVC will find this and use it to display the Author property. We are telling it what HTML to write out when the author property is edited, i.e. when the view calls @Html.EditorFor on the Authors property.

The Author.cshtml is pretty simple
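It could be as little as this (assuming the Author model has a Name property):

```
@model Author

@Html.TextBoxFor(model => model.Name)
```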

The Corrected View

Now this is in place the view looks a lot better and it works.


Fig 4. Book.cshtml with corrected author functionality

When save is pressed the full object is passed to the Save action, and the object, including its authors, can be persisted.

What is interesting is the rendered output. Viewing the page source (stripping out the validation data elements, styling etc..) we see this
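The exact markup isn’t preserved in this post, but for a book with two authors it would be along these lines:

```html
<input type="text" id="Authors_0__Name" name="Authors[0].Name" value="First Author" />
<input type="text" id="Authors_1__Name" name="Authors[1].Name" value="Second Author" />
```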

So the convention that the default model binding requires is revealed. I wouldn’t have just guessed it but the EditorFor Template does it for us.

Although both the id and name look like reasonable candidates for model binding, it is the name attribute that is used. The format for child objects is PropertyName[index].ChildPropertyName – e.g. Authors[0].Name –

so more complex objects are perfectly possible
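For instance, a hypothetical child-of-a-child (the Address and City properties are my own illustration, not part of the demo model) would render a name like:

```html
<input type="text" name="Authors[0].Address.City" value="" />
```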

and so on until you have gone down a complex rabbit hole of seven nested objects from which you might never return.

Adding New Authors (child objects)

One of the things we might want to do is enable a user to add new authors. Perhaps the retrieved book doesn’t have all the authors and the user is a stickler for detail. Knowing the required format of the child object HTML is going to help us with that. In its very simplest form it could be a bit of jQuery based on this idea
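A hypothetical sketch of the idea (the element ids and the helper function are my own, not from the original post): a plain function builds an input named to match the Authors[i].Name convention, and jQuery appends it to the form.

```javascript
// Build the HTML for a new author input at a given index. The name
// attribute must follow the "Authors[i].Name" convention that the
// default MVC model binder expects for collection binding.
function authorInputHtml(index) {
  return '<input type="text" name="Authors[' + index + '].Name" value="" />';
}

// With jQuery on the page, a click handler could append the next row
// (the #add-author button and #authors container are hypothetical):
// $('#add-author').click(function () {
//   var next = $('input[name^="Authors["]').length; // next free index
//   $('#authors').append(authorInputHtml(next));
// });
```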

But we would need to be a bit cleverer: use templating, work out the next index and so on. Still, this gives us the starting point to be able to provide that functionality. An exercise for another day perhaps.

I’ve now implemented a solution using Handlebars.js to add authors into the complex object. See this post for details.

DisplayFor Templates

As an aside, we could (and probably should) also provide a template for displaying a read-only version of the model, i.e. for when the view calls @Html.DisplayFor on the Authors property.

It’s not needed to get this to work, but it seems like a generally good and wholesome thing to provide as well. The display template can be very simple…
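For instance, a Views/Shared/DisplayTemplates/Author.cshtml could be as little as:

```
@model Author

@Html.DisplayFor(model => model.Name)
```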

Useful Links

More detail about EditorFor and DisplayFor templates

Some more insight into how complex data binding works

How to manipulate the DOM through jQuery, which we would need to dig into a bit to implement adding an author

Google Books API reference in the blog post