5 ways to extract the computer name from a network file path with PowerShell

 (and a sprinkle of Regex)

The Aim

They say that a good definition of madness is doing the same thing over and over and expecting different results. Surely then another definition of madness is to do a thing perfectly well once and then dream up 4 different ways to do it in slightly fewer lines of code. With this in mind, here are 5 ways to extract the computer name from a UNC filepath using PowerShell – a task I found surprisingly difficult.

  1. Given a UNC filepath, i.e. \\MYHOST\more\more1, I want a PowerShell script that returns MYHOST
  2. If I pass in a local directory then I want an empty string
  3. I want it with as little code as possible. Really I want to see it on one line
  4. Practice some PowerShell and learn something – as always

Function 1 – splitting the string
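A minimal sketch of the split-based approach (the function name Get-ComputerNameFromPath and its parameter are my own):

    function Get-ComputerNameFromPath {
        param([string] $FilePath)

        # split on \ (written '\\' because the split takes a regex and the backslash is escaped)
        $FilePath -split '\\' |
            Where-Object { $_ } |        # filter out the empty strings
            Select-Object -First 1       # the first non-empty element is the host name
    }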

So working it through one at a time.

The split uses \ as the delimiter to break the string into an array (note it is \\ because it is escaped). For \\MYHOST\more\more1 the elements it splits the string into are a couple of empty strings (from the leading \\) followed by MYHOST, more and more1.

The output is then piped on to filter out any empty strings (our first result will be empty as \\ is split into 2 parts).

Finally, we return the first (non-empty) element in the array, which is our hostname.

Evaluation

Not good. If I pass in a local path (say C:\Temp\Stuff) then I get C: as the output – the first non-empty string. Misleading – this isn’t a machine name, therefore I shouldn’t return it. Back to the drawing board.

Function 2 – Regular expression

Really, this feels like a task for regular expressions. So here is the first regex-based pass at it.

The regex

The regex we are going to use is

\\\\(.*?)\\

It’s better to look at it without the escape characters, so:

\\(.*?)\

Breaking it down:

\\ is a straight character match of two backslashes.

.*? is any number of characters BUT the ? makes it non-greedy, so it will match the least amount of characters it can to still make the match. Without that it would be greedy and match everything it could up to the last \ rather than just matching to the first.

\ is another character match.

So it matches \\ then anything then \. The trick is that the anything (.*?) is in parentheses so it will be available to us as a group – the parentheses do that.

The PowerShell function
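A sketch of how the -match version might look, using the regex above (the names are mine):

    function Get-ComputerNameFromPath {
        param([string] $FilePath)

        # -match fills the automatic $Matches variable; Out-Null swallows the
        # True/False result so it isn't returned from the function as well
        $FilePath -match '\\\\(.*?)\\' | Out-Null

        if ($Matches.Count -ge 2) {
            return $Matches[1]    # group 1 holds the machine name
        }
    }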

So, a step at a time.

The -match operator matches the file path against the regex and copies the result into a magic global variable called $Matches. This contains the result of the match and all the groups.

So we can see the overall match in $Matches[0] and the group in $Matches[1].

To return the group we check that there are at least 2 elements in $Matches, then return the hostname, which is in the 2nd position ($Matches[1]) in the matches collection.

What are we really returning?

PowerShell is odd about returning values out of functions. It will return every value that hasn’t been used somewhere. The return keyword just signals the end of the function, so an explicit return and simply leaving the value on the pipeline both work equally well.

There is additional weirdness though. The -match expression itself returns True and, since we haven’t used that value, it would be returned too. Out-Null ‘uses it’ and stops it being returned, so getting us just the single return value we want.
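A tiny made-up illustration of that behaviour:

    function Test-Return {
        'first'            # not assigned or piped anywhere, so it goes to the output...
        return 'second'    # ...along with the value handed to return
    }

    Test-Return            # outputs both: first, second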

It’s so odd (to me) that I might write a separate blog post about it one day. Anyway digression over.

Evaluation

It works. Host name for UNC and Null for local paths. I hate it though (an extreme reaction to a PowerShell script admittedly).

  1. Magical variable called $Matches – what’s that about?
  2. Having to use Out-Null to monkey around with the return value
  3. Too many lines – I can do this in one line surely.

Function 3 – regex and split

Trying to get away from the magic $Matches variable, I’ll combine the first two attempts – a sketch of the combined version is below.
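Roughly (names mine again):

    function Get-ComputerNameFromPath {
        param([string] $FilePath)

        # use the True/False that -match returns directly, so no Out-Null needed
        if ($FilePath -match '^\\\\') {
            return ($FilePath -split '\\' | Where-Object { $_ } | Select-Object -First 1)
        }
    }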

This one is fairly transparent.

The regex check sees whether the input is in a UNC type format. If it is, we split it. The return doesn’t need to be there but for me it points out the intention. We don’t need Out-Null because we are using the return value of -match, so it won’t be put on the pipeline and returned out.

Evaluation

It’s OK. It returns empty for a local path, which is good. It actually can be understood. In real life I would be happy with this – I’ve seen far worse PowerShell. But in my heart of hearts I know I can do better.

Function 4 – Select-String

I’m abandoning -match now and using Select-String. Select-String will also pattern match a string against a regex but it returns the results as a MatchInfo object which we can then consume by piping it to other cmdlets. It gets us to the one-liner that I want.
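A sketch of the sort of pipeline being described, wrapped in the same hypothetical function name as before:

    function Get-ComputerNameFromPath {
        param([string] $FilePath)

        $FilePath | Select-String '\\\\(.*?)\\' |
            ForEach-Object { $_.Matches } |      # the Match objects for each MatchInfo
            ForEach-Object { $_.Groups } |       # group 0 = whole match, group 1 = machine name
            Select-Object -Skip 1 |              # skip the whole match...
            ForEach-Object { $_.Value }          # ...and return the machine name
    }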

Examining this a piece at a time.

Select-String matches the regex to the string and returns all matches in a collection of MatchInfo objects, so we have the match and then the group collection.

Piping through ForEach-Object takes us through all the matches, and then through each group for each match. Our machine name is put in a regex group (remember (.*?)). The first group is the full match and the second group is the machine name, so skipping the first and picking up the next one gets us the name. It works.

Evaluation

Good. It’s one line with the output all flowing along the pipeline which I like. It works – I’m nearly done. But I’ve a tiny bit of disquiet – am I really doing it in the simplest way I can?

Note on aliases

To shorten this we can use the % alias instead of Foreach (which is itself an alias for ForEach-Object). So the main body of the function could become
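Something along these lines:

    $FilePath | Select-String '\\\\(.*?)\\' |
        % { $_.Matches } | % { $_.Groups } |
        Select-Object -Skip 1 | % { $_.Value }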

Shorter still. Nice.

Function 5 – lookahead and lookbehind

Reflecting on this – a lot of the complexity is the use of groups in this regex. Do I need them? Well no – I can use the zero-length regex assertions lookahead and lookbehind.

The regex

Once again it’s got escape characters in it, i.e.

(?<=\\\\).*?(?=\\)

It’s easier to understand if we just remove them while we are dissecting it:

(?<=\\).*?(?=\)

So there are three parts.

(?<=\\) looks behind the match to check for \\. It isn’t part of the match though.

.*? matches the least amount of anything it can (remember the non-greedy stuff).

(?=\) looks ahead of the match to check for \. Again it isn’t part of the match. So the match is only the machine name i.e. the least amount of anything.

The function
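A sketch of the final shape (hypothetical function name as before):

    function Get-ComputerNameFromPath {
        param([string] $FilePath)

        $FilePath | Select-String '(?<=\\\\).*?(?=\\)' |
            Select-Object -ExpandProperty Matches |    # a single match, no extra groups
            Select-Object -ExpandProperty Value        # the machine name itself
    }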

So, in parts:

Select-String returns a MatchInfo with just one match (and no extra groups).

We select the Matches property of the MatchInfo, then select the Value property of the match object. This is the machine name. Done and done!

The madness ends

It’s madness to write it all out – but then again there is a lot in even a simple task. To go all the way through, we covered off:

  • The PowerShell return keyword and Out-Null
  • How regex groups work
  • How regex lookahead and lookbehind work
  • The PowerShell pipeline
  • PowerShell Select-String vs -match

So perhaps not quite as mad as all that.

Useful Links

https://en.wikipedia.org/wiki/Path_(computing)#Uniform_Naming_Convention
UNC means Uniform Naming Convention i.e. paths in the form of \\MYHOST\more\more1

https://mcpmag.com/articles/2015/09/30/regex-groups-with-powershell.aspx
Good description of the magic $Matches object

http://www.regular-expressions.info/lookaround.html
Lookahead and Lookbehind in regex. This site is so old but still really useful – I’ve been looking at it for years now

https://msdn.microsoft.com/en-us/powershell/reference/5.0/microsoft.powershell.utility/select-string
MSDN documentation for select-string. Useful

https://blog.mariusschulz.com/2014/06/03/why-using-in-regular-expressions-is-almost-never-what-you-actually-want
Good post on greedy vs non-greedy regex operators

https://code.visualstudio.com/
All PowerShell was written with Visual Studio Code – my IDE of choice for PowerShell. Nice debugging.

https://github.com/timbrownls20/Demo/tree/master/PowerShell/UNC%20FilePath
As ever the code is on my GitHub site.

Choosing between Visual Studio Team Services and On-Premises TFS

A few days ago I bit the bullet and upgraded our on premises Team Foundation Server from TFS 2015 to TFS 2015 Update 4. As I sat there, after hours, watching the update spinner with the IT department on speed dial, it made me reflect on whether putting in on premises TFS was the best idea. If we had gone for Visual Studio Online (now Visual Studio Team Services) I wouldn’t have to do upgrades – ever.

Flashback 2 years: we had a mishmash of ALM tools that didn’t talk to each other, a git repository that worked at a snail’s pace and a deployment process that involved manually editing csproj files but never checking them in. We had to change. We discussed online vs on premises and after some pen chewing, fingernail biting and some proper pondering we went for on premises. Did we do the right thing? Here is why we took the decision to go on premises and on reflection here’s if I think we did the right thing.

1. Code stored offsite

With Visual Studio Team Services (VSTS) your code is clearly stored offsite on someone else’s machine – your crown jewels are stored under someone else’s bed. This shouldn’t be a problem. Microsoft bunkers are secure, more secure than anything I could come up with. Surprisingly this was an issue and various stakeholders were very much happier with the code being stored by us. I’ve had very experienced IT staff visit us since then and they were also happy to see we had complete control over our code. So on-premises won out on that score even if it was perception over reality.

Was I right?

Yes and no. Our source code is probably better off in Microsoft’s super secure bunkers but if the decision makers want the code with us then that’s fine. I didn’t push for it to be stored offsite and I wouldn’t push for it now.

2. High levels of Customisation

This wasn’t a Greenfield project and we had pieces that needed to be integrated into TFS.  With on premises we could break it open and code against it. At the time with Team Services you got what you were given. Our two pain points were

  1. MSpec tests. We could only get these under continuous integration with a lot of on-premises custom tinkering
  2. Labelling build versions. I wrote a piece of code against the TFS object model to get the build versions automatically populating and integrated this into the continuous build. Again custom on-premises tinkering but well worth it.

Was I right?

Yes and no – we ultimately retired MSpec but for a year about a third of our test coverage was using it, so it needed to be included. Labelling build versions is still really important to us and works well. But VSTS has come on a lot and would now do most of this if not all.

3. Infrastructure

With on-premises you are responsible for your own servers – you are master of your own bits of tin. If you want more infrastructure then you will need to find it. I didn’t think this would be a problem.

Was I right?

No. Infrastructure became a big issue. Disk space ran out and I had to keep scavenging around for more. Our builds proliferated and our build server became very underpowered. It took a long time to get another – we just didn’t have the rack space. Don’t underestimate the infrastructure – if the TFS implementation is successful then people will want more and more of it.

4. Quality of Internet access

2 years ago our internet was powered by a very tired hamster on a wheel. It wasn’t good – our existing online tools were causing a headache. No-one had much appetite for further reliance on our poor internet connectivity. On-premises seemed the better option.

Was I right?

Yes but not for long. We got a better Internet connection after 6 months so we could have ultimately used Team Services. But I think an initial 6 months of a poor user experience would have led to a rejection of the implementation. Not everyone in the world has great Internet so Team Services is always going to battle against that.

5. Upgrading

With Team Services your upgrade is done for you. You are always on the latest version. With on premises you do your own upgrades when you have the required levels of emotional resilience to embark on it. I didn’t think this would be a problem.

Was I right?

No. Upgrading is a pain. I’ve done it twice and it worked well both times but there is a lot of thought that needs to go into rollback plans and minimising outages. Now I’ve gone from TFS 2013 to 2015 Update 4, the next step is 2017 and that’s going to take an upgrade to SQL Server as well. It will be much more difficult next time.

6. Amending Bug  and Work Item definitions

At the time Team Services didn’t offer customisation of the process templates, i.e. you couldn’t put extra fields, dropdowns and process flows into your project tracking and defect management. This was a deal breaker for us – we had absolute requirements around this. Over two years we have done a lot of customisation of the templates and we really benefit. Once again customisation is king.

Was I right?

Yes and no. Team Services has added some customisation but it is still a work in progress. We couldn’t and still wouldn’t wait for a feature that may or may not happen. We needed that level of customisation – it wasn’t just a nice to have. That said, this feature is definitely on the roadmap for VSTS so any concerns I have will continue to evaporate.

7. Availability

Team Services promises 99.9% availability. On-premises is going to struggle to compete with this, particularly when factoring in downtime for upgrades. I didn’t think this would be an issue for on-premises.

Was I right?

Yes – it wasn’t an issue. Over 2 years we didn’t get anywhere near 99.9% for availability but it didn’t cause problems. Outages were planned and developers could keep developing, testers keep testing and project managers could keep doing whatever it is that project managers do. It was fine.

8. Other Issues

There are a few other issues that didn’t come up at the time but I would probably consider if I was to take the decision today

Costing

The costing model for on-premises and VSTS is comparable currently. You can get monthly users of on-premises TFS with an online subscription. From the costing page:

Each paid user in your Team Services account also gets a TFS client access license (CAL). This lets you buy monthly access to TFS for your team.

This really helps out when you are ramping up testing and need a whole bunch of new users. I’m just a little wary of whether this will always be the case – monthly users are baked into VSTS but I could imagine them being quietly dropped from on-premises. Paranoid maybe (maybe not).

Support for extensions

I was interested in a decent wiki for TFS but the extension is just for VSTS. The extension ecosystem seems better supported for VSTS. I suspect it always will be.

The future

I don’t need to climb inside the mind of the Microsoft Board to understand that there is a big focus on cloud offerings. VSTS fits nicely within this. TFS on-premises feels a little bit like a relic now. Perhaps not, but I would be wary about ongoing support for the on-premises version for small shops. I wouldn’t want to be chopping and changing my ALM solution every couple of years so I would have an eye on what the future may hold.

So … on-premise or online?

From my own experience I would say that for

Greenfield project with flexible ALM requirements
Pick VSTS – it’s easier to get going

Established project with requirements for high levels of customisation
Consider on premises but do review VSTS – it might do everything you need. It probably does by now.

For large shops with big IT departments and deep pockets
Go for on premises if it suits and you feel better with your code in house. Infrastructure and upgrading will be no problem. You’ll eat it up

For small shops with a small/non-existent IT department
Be realistic about your IT capacity and willingness to upgrade. Even if stakeholders are screaming for on-premises be certain about your own capacities.

And if I was to implement a Microsoft ALM today?
I would go for Visual Studio Team Services, though on-premises was the right decision at the time.

As ever, these are just my opinions. Feel free to disagree – and if you post a comment telling me why I’m terribly wrong, that would be great.

Useful Links

Visual Studio Online home page
https://www.visualstudio.com/vso/

Customising process templates
https://www.visualstudio.com/en-us/docs/work/reference/process-templates/customize-process

Team Foundation Server object model – which I used to do a fair bit of customisation to our on-premises installation
https://www.nuget.org/packages/Microsoft.TeamFoundationServer.ExtendedClient/14.89.0


Lightning Talk at Leeds Sharp

I’m doing a lightning talk at the Leeds Sharp user group tomorrow. It’s going to be 10 to 15 minutes about how to write more robust SpecFlow tests – subtitle ‘how I stopped worrying about my SpecFlows and started living life to the full’. It’s only a quarter of an hour so the damage I can do is probably quite limited. I’m up last – my daughter tells me the best is always saved till last. I’m more skeptical.

Anyway, Leeds Sharp is always good and I’m sure I’ll learn a lot from the other lightning talkers. My PowerPoint for the talk is here, archived for future generations. They’ll thank me for it.

A web.config implementation for AngularJS

It’s said that when learning a new language or framework it is best to approach it fresh, with no preconceived ideas from other languages. So, completely ignoring that advice, here is my implementation of an ASP.NET web.config file in AngularJS.

I’m used to squirrelling away my config in one external file and I want to do that again in AngularJS. Really it’s the appSettings part of the web.config file that I’m implementing but the point is that all my config for the application is in one file that I can use throughout my application. I like it in .NET and I want to do it again here.

The implementation

App.Config.js

The first step is to implement a config object in an external file like so…

We have just 2 config items at the moment: the root of the API and whether it is in debug mode or not. In the normal run of things I wouldn’t want to pollute the global scope with JSON objects – I would wrap them in an IIFE. However, this is a special case as it is genuinely global, so I’m explicitly placing it in the global scope, i.e. using the window object.

As it is an external file I need to remember to reference it in the main application page.

App.Module.js

The next task is to reference the config in the main application file – app.module.js in this case. This is how I’m going to make it available throughout the entire application. Strictly speaking I don’t need to do this. As it is on the global scope I could leave it like that and directly reference it via the window object. I’d rather not do that. It’s going to make unit testing problematic in the future and anyway I don’t like it. Instead, I’m going to use the app.constant property in the main app.module file thus

My naughty accessing of a global object is here and nowhere else. The config object is placed into app.constant which is then available throughout the application via dependency injection. I could have used app.value and it would work in much the same way. However I’m never going to change these values during the lifetime of the application so the constant property is more appropriate.

Consuming the config

The config is then available via dependency injection in controllers, services, directives e.g.

In a service

The consumption in a service is the same. The use case here is a service can use the config to set the API url which could, should and will change in different environments.

In a directive

I’ve previously blogged about a debug panel that can be toggled on and off. In that post I used a query string parameter to turn the display of the debug information on and off. In truth I don’t really like that. This is the debug panel directive implemented with the web.config style config. As before it is injected in and it does a bit of DOM manipulation to remove the panel in the link method if debug is on. I prefer this way rather than my crazy query string implementation from before.

Deployment

One of the reasons that I like external config files is that I can have different settings in different versions of the files. I can then use some sort of task runner or deployment script to swap them out as it moves through different environments. So I could have the different app.configs in my project i.e.

  • app.config (standard development one)
  • app.devtest.config
  • app.qa.config
  • app.preprod.config
  • app.prod.config

I can then delete the app.config file and rename the environment specific file so it is picked up in different environments. So when deploying to QA I would ….

  1. delete app.config
  2. rename app.qa.config to app.config

And as if by deployment magic I get different settings in QA. I currently use a bunch of PowerShell scripts to do this so it’s all automated but it can be done with whatever script or tool is most familiar or fashionable or nearest to hand.
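For illustration, the QA swap can be as small as the snippet below – the folder path is a placeholder, not the layout of my actual deployments:

    # minimal sketch of the QA config swap
    $webRoot = 'C:\Deploy\MyApp'

    Remove-Item (Join-Path $webRoot 'app.config') -ErrorAction Ignore
    Rename-Item (Join-Path $webRoot 'app.qa.config') -NewName 'app.config'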

Alternative implementations

As always, I’m not claiming that this is the best implementation for consuming application config. Global scope and config is going to be something that can be done in any number of ways. In fact I can think of several just while sat here typing in my pyjamas.

Global objects

I could just access the config directly wherever I like by using the global window object like so..

I personally don’t like having the window object scattered through my code but it could be done. It’s not going to work well with any kind of unit testing though, so unless you’re a no-test kind of developer it’s probably best avoided.

Global scope

We could use $rootScope instead of app.constant to stash the external config file there. So accessing it would be a matter of grabbing the config from $rootScope thus

I still like the idea of only the config being available in the method. I just get what I need, not the entire rootScope – that said this would work perfectly well.

Log Provider

This isn’t a complete replacement but it’s worth noting that debug config could be implemented through the $logProvider component if we wanted. To toggle debug logging on and off we would call $logProvider.debugEnabled(true) or $logProvider.debugEnabled(false) in a config block.

It’s not a complete replacement but in the interests of full disclosure there are other (better??) ways to get debug information in and out of angular that have their own config.

Useful Links

https://ilikekillnerds.com/2014/11/constants-values-global-variables-in-angularjs-the-right-way/
Article on app.constant vs app.value

http://stackoverflow.com/questions/6349232/whats-the-difference-between-a-global-var-and-a-window-variable-in-javascript
Useful Stack Overflow question comparing the window object and global scope.

http://odetocode.com/Articles/345.aspx
Ode to Code on appSettings and web.config. That’s the part of the web.config I’m aiming to emulate in this post.

https://thinkster.io/egghead/testing-overview
Testing with angular. Here’s what you might miss if you want to implement this with a global object.

http://stackoverflow.com/questions/18880737/how-do-i-use-rootscope-in-angular-to-store-variables
Using rootscope to store variables

https://thinkster.io/egghead/index-event-log
logProvider examples – about halfway down.

Dealing with invalid js characters in VS Code

Even though mental stability is a core personal principle that is rarely violated, here is something that was driving me mad recently.

The Problem

I was using VS Code to write a demo AngularJS application and I kept getting this error in my JavaScript files:

[js] invalid character

With funny red squiggles thus

There was no rogue comma or strange character visible and no amount of deleting would resolve it. I’m more used to Visual Studio where odd things happen fairly frequently and this kind of thing can be ignored while the environment catches up with itself. However this floored VS Code and my lovely demo app just stopped working. It was happening all the time. Bah!!

The Investigation

Although it isn’t visible in VS Code, the rogue character is visible in the Chrome developer tools.

Press F12 -> Select Sources -> Navigate to the broken js file

The dodgy character is visible as a red dot and the character can be identified by hovering over it

In this case the character is

\u7f

\u denotes Unicode and the 7f is hexadecimal, so converting to decimal the character is 127. Looking this up on an ASCII table we find out it is a [DEL] character.

The Solution

Once the culprit is identified it’s an easier matter to use this regex

\u007f

to find the rogue character and replace it with an empty string. Don’t forget to do a regex search in VS Code – select the star icon in the search box as shown below. This fixes the application and thus repairs my fragile mental state.

Why delete characters keep occurring in my JavaScript code is anyone’s guess. It could be a VS Code quirk but I’ve googled around it and haven’t found anything. It could be me flailing around with my new laptop and somehow inserting odd characters into files. I guess some things we aren’t meant to know.

Useful links

http://www.regular-expressions.info/unicode.html
How to write a regex to find Unicode characters. The important point to remember is that it must contain 4 hexadecimal digits. So \u7f won’t find my DEL character – the regex needs to be padded out to \u007f.

https://code.visualstudio.com/
Download page for VS Code. Recommended.

 

Better Numeric Range Input with ASP.NET MVC, HTML5 and JQuery

The Sin

I recently wrote this horrible code to generate a drop down box to select any number from 1 to 12.

Ouch! It was only a test harness but after an hour I still got sick of looking at it. It’s a sin that needs to be atoned for.

The Penance

For my penance I’ve said 5 Hail Marys and identified three better ways to generate numeric range dropdowns.

Using Enumerable.Range

If I want to stick with MVC and C# then there is a static method on Enumerable that will generate sequences of numbers: Enumerable.Range(1, 12) will output 1 through to 12 as a sequence of ints. Leveraging this, my grotesque piece of Razor becomes a much shorter snippet which renders out as a normal dropdown as before.

Enumerable.Range

Much better.

Using HTML5

Of course we don’t even need to get into C#. If we’ve got the luxury of a modern browser we can just use HTML5 to give us a slider control (an input with type="range"), which renders as below.

HTML5

Better again.

Using JQuery

If you don’t have the luxury of a modern browser then you can fall back to JQuery UI which has a larger range of supported browsers. The code isn’t that much more but of course you need to reference the JQuery and JQuery UI libraries. It’s another slider type control and it looks like the screenshot below.

JQuery UI

So much nicer.

The Atonement

To atone for my sin I swapped out the horrible code for the Enumerable.Range implementation. I think that’s my preference really. I don’t really want to include a whole bunch of scripts and CSS to get a decent control (JQuery) and I don’t want to limit myself to the latest and greatest browsers (HTML5). Besides, I think Enumerable.Range is little known but pretty smart and let’s face it – who doesn’t want to be little known but pretty smart.

Useful links

https://www.dotnetperls.com/enumerable-range
Nice article on Enumerable.Range

http://www.htmlgoodies.com/html5/tutorials/whats-new-in-html5-forms-handling-numeric-inputs-using-the-number-and-range-input-types.html
More on the HTML slider control

https://jqueryui.com/slider/
Official documentation for the JQueryUI slider control

https://github.com/timbrownls20/Demo/blob/master/ASP.NET%20Core/Demo/Demo/Views/Demo/NumericDropdown.cshtml
As always there are code samples of all implementations on my GitHub site.

 

Why Doesn’t My Visual Studio Solution Build? A Troubleshooting Guide

broken building

I’m pretty good at getting Visual Studio projects building correctly – I’m a bit of a Visual Studio whisperer. In one job I could get the ‘flagship’ application up and running on a new machine in half the time of anyone else – that still meant half a day’s work. Strangely that’s not on my CV.

So here are the steps I go through to get a rogue project building in Visual Studio. These steps range from the basic to the bizarre. They are in the order I would do them and by following them I can pretty much get any Visual Studio project back on track. So put on your tin hat and let’s get that errant Visual Studio solution building.

Sanity Check

Before embarking on a full blown Visual Studio troubleshooting endeavour it’s best just to do a few simple checks.

Have you got latest code?

Everybody has wasted hours trying to get an old version of the code working. I still do it. The developer that sits opposite me still does it. My boss still does it. Don’t do it. Get the latest version of the code from your repository.

Does it build for other people in your team?

Just see if other people are having the same problem. If you have continuous build, check that it is still working and running through cleanly. The code might not work for anyone. More illuminating – it might work for some and not others.

Basics

Every few weeks you will probably be faced with a non-building Visual Studio project. Here are some basic steps to help. These will probably be enough.

Clean and rebuild

clean solution

Go to

Solution Explorer -> Right Click -> Select Clean -> Wait -> Select rebuild

Often the bin directories have gone all wonky and are full of junk. Who knows what has happened to them. Clean and rebuild will refresh them and often works when a normal build doesn’t. Standard stuff – but then we are just beginning.

Build each project individually

Often you are faced with mountains of errors which is misleading. It could be one of your low level library projects that is failing to build and causing all other projects to fail. Rebuild each project individually starting with the low level ones that are dependencies for others. Sometimes that’s enough to get the entire solution building. At a minimum, you will better be able to see where the actual issue is and not be swamped by a heap of irrelevant error messages.

Close and reopen Visual Studio

It’s time to restart your Visual Studio. Don’t leave this till the end – it is often the problem. Turn it off and on again – it might help.

As a note – in my experience restarting your computer rarely helps. By all means try it but don’t be surprised when the solution still stubbornly refuses to build.

Manually delete all bin folders

This is really worth a try if you are working with an application with language variants. The language-specific resource files (e.g. Messages.fr-CH.resx) compile down into satellite resource assemblies that live in a culture-specific subfolder of your bin folder (e.g. bin\fr-CH\).

Weirdly, Visual Studio Clean can leave these satellite assemblies behind. Your application will still build but it can cause changes in language variants not to shine through.

This might seem like an edge case but this exact thing kept a colleague of mine baffled for 6 hours. It was a very emotional moment when we finally sorted it out. So this is very much worth a try.
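If you want to script the deletion, a quick one-liner from the solution root does the job (close Visual Studio first – this is my own snippet, so adjust to taste):

    # removes every bin folder under the current directory
    Get-ChildItem -Recurse -Directory -Filter bin | Remove-Item -Recurse -Force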

Start Visual Studio in admin mode

run as administrator

I’m master of my own machine (i.e. I’m a local admin) and I’ve got my Visual Studio set to always open as an administrator. You might not. Right click and run as administrator.

If that isn’t possible, get someone who is an administrator to open Visual Studio on your behalf. Then complain bitterly about not being a local admin on your own machine – you are a developer; you really should be.

Have you got bad references?

bad references

Just check all projects and make sure that the references are all there. Look out for the little yellow triangle. If you have bad references then jump to section below dealing with that.

Check IIS

This isn’t relevant for all projects but if you are building a web project using IIS then it’s worth doing checks around that. Obviously it’s a completely legitimate setup to use IIS Express. In that case this isn’t relevant.

IIS pointing at the wrong folder

This is often a problem when changing branches. Visual Studio normally makes a good job of switching for you but not always. If you are hooked up to the wrong folder then your project will build but you won’t be able to debug it. Also, changes that you are convinced you have made won’t shine through. This has driven me crazy before.

IIS explore folder

To check

  1. Open IIS (run -> inetmgr)
  2. Navigate to website
  3. Press explore and confirm you are where you think you should be. You might not be in Kansas anymore.

Permissions on app pool

This won’t manifest itself as a build failure but the project won’t be running as expected particularly when accessing external resources. It might be worth checking what the app pool is running as.


application pool settings

To check

  1. Open IIS (run -> inetmgr)
  2. Application Pool -> select the one for the website
  3. Advanced settings

You can now check the user.

The application pool runs under ApplicationPoolIdentity by default. What I’ve sometimes seen is that it’s been changed to a normal user whose password has expired. This is typically fallout from previous Visual Studio failures echoing through the ages.

Bad references

bad references

If you are noticing a yellow triangle on your project references then Visual Studio can’t find a dependency. Sometimes there is no yellow triangle and all looks fine but it still can’t find the dependencies.

With the advent of nuget this is less of a problem now but it does still happen. There are instances where incorrect use of nuget makes it worse and more baffling.

Check reference properties

Go to the references -> right click -> properties

references properties

There are two things to look for

  1. Is it pointing to where you think it should be? It might not be
  2. Copy Local. Bit of witchcraft but I always set this to copy local. It will copy the dll into the bin folder so at least I can satisfy myself that it has found it and is copying it through OK.

Have you checked in the packages folder – don’t!

Even if you are using nuget your bad reference problem might not be at an end. Checking in the packages folder can cause the references not to be found. This is especially baffling since looking at the reference properties reveals no problems. The path to the required dlls is definitely valid and the dlls are definitely there. But it cannot be found – frustration and finger biting.

To resolve, delete the packages folder, then remove the folder from source control. Rebuild and all will be well.

Resetting Visual Studio

Resetting Visual Studio can (probably will) cause your environment to lose all your custom defaults – so all your shortcuts, settings and perhaps plugins will go. Therefore I’ve left this step towards the end as it is not without consequences. That said, it is often the resolution so it’s not a bad idea to do it earlier in the ‘what on earth is going wrong with Visual Studio’ resolution process.

Delete all temporary and cache files

This was a fix that often worked in slightly older versions of Visual Studio. I’m using VS 2015 currently and this often isn’t a problem anymore. Still, it is worth clearing out the temporary and cache folders and rebuilding.
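The usual suspects I would clear for VS 2015 are along these lines – treat the exact paths as my assumptions rather than a definitive list, and close Visual Studio first:

    $cacheFolders = @(
        "$env:LOCALAPPDATA\Microsoft\VisualStudio\14.0\ComponentModelCache",
        "$env:LOCALAPPDATA\Temp",
        "$env:WINDIR\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files"
    )

    $cacheFolders | ForEach-Object {
        Remove-Item "$_\*" -Recurse -Force -ErrorAction Ignore
    }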

Delete project setting files

Your user settings for a project are stored in files with the extension *.csproj.user.

It’s worth deleting all of those and rebuilding to reset the user-specific settings. Also, if you have these checked into source control then remove them. They are specific to you and shouldn’t be in a shared repository.
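A quick way to sweep them all up from the solution root (again, my own one-liner):

    Get-ChildItem -Recurse -Filter *.csproj.user | Remove-Item -Force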

Reset Visual Studio on the command line

When Visual Studio appears utterly broken and you are reaching for the uninstall button, this can often help. It takes Visual Studio back to its initial settings, so you will need to reapply any custom settings that you have.

Close VS, then in a command prompt go to the folder that contains the Visual Studio executable (devenv.exe). For VS 2015 that is C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE.

Then run the reset switch.
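Something like the following from a PowerShell prompt – /ResetUserData is the usual candidate for a full reset (with /ResetSettings, mentioned below, as the gentler option):

    cd 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE'
    .\devenv.exe /ResetUserData    # wipes cached user data; settings must be reapplied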

Reopen Visual Studio.

A similar approach can be used with devenv.exe /ResetSettings as detailed here.

Reset Visual Studio through file explorer

If the command line doesn’t work then try resetting Visual Studio via the file system. This often works when the command line doesn’t. Try this when Visual Studio is undergoing a profound collapse, particularly when it keeps popping up an alert box telling you that errors are being written to a log file under your AppData VisualStudio folder.

To resolve for Visual Studio 2013:

  1. Close VS
  2. Go to C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\12.0
  3. Rename the folder to C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\12.0.backup
  4. Reopen VS. The folder C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\12.0 will be regenerated.

The process is the same for other versions of Visual Studio except the version number at the end will be different i.e C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\14.0 for VS 2015.

Disable plugins

Leave this one till last because it’s a pain. Go to Tools -> Extensions and Updates, then disable each plugin you can, one by one.

disable plugin box

It’s a pain because even if it is a plugin that is causing the problem, you have a choice of uninstalling and living without it or contacting the vendor. Clearly if you didn’t buy it then the vendor isn’t going to be interested in helping you. I’ve found PostSharp and Resharper the likely culprits here. The Resharper vendor was very helpful. PostSharp weren’t (because we hadn’t bought it!!).

Bizarre Ones

Be careful what you check in

Checking files into source control that you really shouldn’t can cause difficult to diagnose problems. This often happens to me for continuous builds where the user building has fewer privileges than I’m used to. It does happen locally too. The following files shouldn’t be checked in

  • Bin folders
  • Obj folders
  • Autogenerated xml (i.e. documentation generated during build)
  • Packages folder
  • .csproj.user files

If you have checked them in then delete from your disk, remove from source control and rebuild.

Is your file path too long?

Windows limits the file path to 260 characters. It could be that you have exceeded this and Visual Studio has started to complain. The awkward thing is that it complains in a very oblique way. The error that you see will be something on the lines of…

“obj\Debug\LongFileName.cs” has an invalid name. The item metadata “%(FullPath)” cannot be applied to the path “obj\Debug\LongFileName.cs”. (File: obj\Debug\LongFileName.cs, Project: ContainingProject.proj, Source: C:\Program Files (x86)\MSBuild\14.0\bin\Microsoft.Common.CurrentVersion.targets)

So not obvious. Double-click the error and it will take you to the Microsoft.Common.CurrentVersion.targets file in the depths of the framework folder. Less than illuminating.

Once you have diagnosed this then the resolution is easy – move your project to a shorter file path. I’ve found this a problem when switching branches which can have longer file paths than the Main branch.

If all else fails

If all else fails, uninstall Visual Studio and reinstall. But honestly, I have reached this frustrating point, uninstalled, reinstalled, waited hours and the problem persisted. This might not be the cure that you were looking for. Go into a corner and have a good long think before you resort to this one.

So congrats if you have read this far (or commiserations as your Visual Studio install is clearly in a bad way) but hopefully this guide will enable you to have a healthy and happy Visual Studio for years to come.

Useful Links

http://stackoverflow.com/q/1880321/83178
Good Stack Overflow explanation of why the 260 character filepath limit exists in Windows.

https://msdn.microsoft.com/en-us/library/ayds71se.aspx
Official advice from Microsoft about bad references.

http://stackoverflow.com/questions/1247457/difference-between-rebuild-and-clean-build-in-visual-studio
There is a difference between Clean + Build and ReBuild as detailed here.

10 Beautiful Software Development Haikus

mount fuji

In my day job I send a lot of very routine emails announcing new deployments or pleading with people to fix the continuous build. I’ve started to amuse myself by sending them as haikus (3 line poems with 5, 7 then 5 syllables on each line). I don’t think anyone has noticed. They just think I’m being a bit more terse than usual.

In this spirit I’m going to share with you 10 more beautiful haikus about software development. I’m sure you’ll agree that they capture the exquisite beauty of the software craft, the bittersweet sadness of an application crash and the inevitable fragility of a software patch to live. They are also short, don’t have to rhyme and are easy to write.

Traditional haiku collections are split into seasons. I’ll also split them but into categories that better juxtapose the joys and frustrations of the software development cycle.

Development Operations

Build server is slow
Disk whirs, lights flash but no builds
Request SSD

Test build being deployed
We wait for the build green line
Sadness. It’s amber

TFS build red
Queue the integration build
TFS still red

Build has been deployed
Is available for test
Please handle with care

Coding

SOLID principles
SRP and OCP
Still don’t know Liskov

Flakey code base
Instigate peer code reviews
No one has the time

Application crash
Detailed error report but
Where is the stack trace?

The Workplace

Corporate fruit bowl
Apples, bananas and plums
No-one eats kiwi

Our daily stand up
Managers attend. Involved,
But not committed

Professional Development

Personal dev blog
I watch the web traffic. Why?
I don’t really know

The Worst Thing About Microsoft Exams

Exam Upset

I’ve long been a bit of a fan of Microsoft exams – shiny new exams that give the veneer of professionalism to a CV. Lovely. I did a fair number at the start of my career and a few thereafter. I’ve dined out on the knowledge gained for years. However there is a real risk of being steered towards obsolete or just strange technology choices. From the last few exams I took, here are just some examples of the odd/skewed/‘never going to use’ technologies that the MS exam devotee might end up being steered towards.

Health Monitoring

web event hierarchy

With ASP.Net there is an extensive health monitoring architecture for your applications that you are never going to use. It’s got everything you need and much, much more – way too much more. I’ve never seen anyone use it and even the lead developer who implemented it admits it’s over-engineered.

Just 2 seconds looking at the above web event hierarchy makes my retinas ache. Everyone uses log4net, ELMAH or just rolls their own. Unfortunately it’s on every ASP.Net exam I’ve ever done (70-315, 70-551, 70-486) and you have to learn about it (sigh).

Lab Management

Lab management is a test environment provisioning architecture in Team Foundation Server. It forms a not insignificant portion of the TFS admin exam (70-496). It looks great and the books and videos are convincing. I was convinced it was a great idea and wanted to implement it and tried to convince others.

It turns out though this is not best practice for deployments at all. You should use the same method of promotion that you use into live. This is becoming more current with the advent of technologies such as Docker. Lab Management though is very particular and the deployment into lab management environments bears no relationship to deployment into production. None! It’s a pain to learn and it’s still on the exam (sigh).

App Fabric

A minor gripe now. In the MVC exam (70-486) the Azure caching section steers you towards learning about App Fabric as a service bus for hybrid applications. Fine, but it turns out this technology is end of life and will be retired in 2017. But it’s still on the exam (sigh).

Windows Azure


azure vs aws

There is an entire exam track around Azure cloud computing and it’s a goodly portion of the MVC exam 70-486. Being a Microsoft exam devotee I imagined that Azure was THE cloud computing solution. I was surprised when I saw the above graph and realised that Amazon Web Services has way more implementations out there.

Azure is fine and interesting and good and healthy and I’m sure your mother approves of the use of it at the dinner table. But this is a textbook example of the MS exam narrowing your perspective and encouraging people (me) to ignore other potentially more viable alternatives. But it’s on the exams and you have to learn about it (sigh).

MSTest, Fakes and Shims

Unlike some of my other gripes, I do unit testing on a regular basis so it’s a drag to have to learn, for an exam, an alternative unit test framework that I’m unlikely to use. Maybe millions of developers use MSTest but I’ve never met them. I wish the exams were a bit more neutral with the choice of unit test technology but they predictably steer you towards Microsoft technologies. But it’s on the MVC exam so put on your big boy pants and learn about it (sigh).

But it’s all good really

Nothing is perfect. I’m not, the software I write isn’t and Microsoft exams aren’t. I still think they are great for developers at the start of a career – it gave me a boost and I still like to see them on developer CVs. It’s a good signal of career dedication. I’d much rather have a Microsoft exam than not. It just helps to be aware of their limitations.

Troubleshooting blank pages and render failures in AngularJS.

An odd thing happened this weekend when I was tinkering with some code while pointedly ignoring my daughter and parenting responsibilities generally.

The Task

Put in a new directive to implement a reusable panel to display error information. Not groundbreaking stuff by any means. The HTML is simple enough.

Very bootstrappy as usual. Reference the directive on the view thus

Happy days – a lovely reusable error panel.

The Problem

Sadly it isn’t to be. The previously functional page becomes entirely blank after the directive.

Badly formed directive

Unhelpfully there are no errors in the console, and viewing the HTML source does nothing for us since it is a single page application.

The Solution

The problem is the use of self-closing tags when referencing the directive. Referencing the directive with an explicit opening and closing tag, rather than a self-closing one, resolves the issue.

And the page springs back to life

Well formed directive

Relief all round. Fortunately I found the solution pretty quickly (though I still continued to ignore the daughter). It’s the sort of error that can waste hours if it hits on a Friday afternoon when you are feeling generally depleted.

Actually the use of self-closing tags isn’t that helpful in HTML5. I’ve got into the habit of using them but, looking into it more deeply, it’s actually a bad habit. They are valid but optional in HTML5 for void elements and invalid for non-void elements. Since they are either optional or invalid, it’s best to avoid them and write out the closing tag – particularly as they cause obscure issues like this.

Useful Links

http://stackoverflow.com/questions/3558119/are-self-closing-tags-valid-in-html5
Good discussion on Stack Overflow about HTML5 and self-closing tags.

https://docs.angularjs.org/guide/directive
The official documentation for AngularJS directives if more information is required.