Accessing form data with React.js

Accessing form data seems a standard thing to be doing, so I disappointed myself and my entire family when I couldn't do it straight away when using React. After a little thought and a bit of practice I implemented 3 ways of doing it, thus redeeming myself in the eyes of my relatives and co-workers. For additional redemption I've written up the 3 ways, with an extra one tagged on the bottom.

The Demo Application

The demo application is a Todo list implementation. We can add and delete todo tasks from the list. We can even change their colour (not hugely relevant here, but I'm bizarrely quite pleased with it).

The react component hierarchy is a bit more complex than my last demo app but nothing to get too alarmed about.


  • Application
    • Header
    • TodoList
      • TodoItem
    • TodoInsert

The application state is held at the application level and all changes are handled there. I don’t want the application state spread out willy nilly throughout the components.

The Task

When I add a task I want my application to access the name I type into the text box. The state is handled at the application level so the form data (the todo task name) needs to propagate up to there. Easy – well let's see.

Solution 1: Using ref to access the text box

The principle here is the use of the ref attribute to expose the textbox DOM node to React. Once this is done I can grab the value and pass it on.

Full code

TodoInsert component

The majority of the task insert magic is done by the TodoInsert component shown below.

class TodoInsert extends React.Component{

  //.. constructor omitted

  handleClick(){
    this.props.addTask(this.textInput.value);
  }

  render(){
    return <fieldset className="form-group">
             <h3>Add Task V3</h3>
             <div>
               <div>
                 <input ref={input => this.textInput = input} type="text"
                        defaultValue="New Task" />
               </div>
               <div>
                 <button onClick={this.handleClick}>Add Task</button>
               </div>
             </div>
           </fieldset>;
  }
}

Application Layer

The application layer receives the textbox value and passes it into the application state.

class Application extends React.Component{

  addTask(input){
    var newTask = {
      id: this.state.todoitems.length + 1,
      task: input
    };

    this.setState({
      todoitems: this.state.todoitems.concat(newTask)
    });
  }

  removeTask(taskId){
    //.. detail omitted
  }

  //.. constructor omitted

  render(){
    return <div>
      <FullRow>
        <h2>{this.props.label}</h2>
      </FullRow>
      <TodoList todoitems={this.state.todoitems}
                removeTask={this.removeTask} />
      <TodoInsert addTask={this.addTask}/>
    </div>;
  }
};

Code explanation 

The key thing is the use of refs within the TodoInsert component

<input ref={input =>  this.textInput = input} type="text" defaultValue="New Task" />

This makes the input box DOM node available within the component, i.e.

this.textInput

which we can access on the click handler

  handleClick(){
    this.props.addTask(this.textInput.value);
  }

We grab the value and pass it on to the addTask method we have passed in from the application layer, i.e.

class Application extends React.Component{

  addTask(input){
    //.. more logic
  }

  render(){
    return <div>
      <TodoList />
      <TodoInsert addTask={this.addTask}/>
    </div>;
  }
}

The value passes to the addTask method on the application layer, which uses it to set and update the state. The textbox value is now available at the application level. Job done ….
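Stripped of the React specifics, the pattern is just a callback passed from parent to child. Here is a plain-JavaScript sketch of the same flow (no React; the object names simply mirror the components above):

```javascript
// Parent: owns the todo list state and the addTask callback.
const application = {
  state: { todoitems: [] },
  addTask(input) {
    this.state.todoitems.push({ id: this.state.todoitems.length + 1, task: input });
  }
};

// Child: receives addTask as a "prop" and forwards the textbox value on click.
const todoInsert = {
  props: { addTask: application.addTask.bind(application) },
  textInput: { value: "New Task" }, // stand-in for the DOM node captured via ref
  handleClick() {
    this.props.addTask(this.textInput.value);
  }
};

todoInsert.handleClick();
console.log(application.state.todoitems); // [ { id: 1, task: 'New Task' } ]
```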

Evaluation

Well, kind of job done. Many people object strongly to the use of ref because it ties the application tightly to the DOM. With those objections in mind, here is an alternative implementation without refs.

Solution 2: Using onChange to track the state of the text box

This time we are going to fire an onChange event whenever the text changes in the text box. The general flow is

  1. Text is typed into the text box. As the text is typed, the onChange event fires.
  2. The onChange event updates the state of the component. The component has its own state in addition to the state of the main application component.
  3. When the form is submitted the form triggers an onSubmit method.
  4. The onSubmit method picks the value out of the state and passes it on to the Application layer.

Full Code

Todo Insert component

Note we are now using an entire form in this component and it now has its own state.

class TodoInsert extends React.Component{

  //.. constructor omitted

  handleChange(e){
    this.setState({
      todoText: e.target.value
    });
  }

  handleSubmit(e){
    e.preventDefault();
    this.props.addTask(this.state.todoText);
  }

  render(){
    return <form onSubmit={this.handleSubmit}>
      <h3>Add Task</h3>
      <div>
        <div>
          <input onChange={this.handleChange} type="text"
                 value={this.state.todoText} />
        </div>
        <div>
          <input type="submit" value="Add Task" />
        </div>
      </div>
    </form>;
  }
}

Application layer

The application layer is unchanged from the first example but I'll reproduce it for completeness.

class Application extends React.Component{

  addTask(input){
    var newTask = {
      id: this.state.todoitems.length + 1,
      task: input
    };

    this.setState({
      todoitems: this.state.todoitems.concat(newTask)
    });
  }

  removeTask(taskId){
    //.. detail omitted
  }

  //.. constructor omitted

  render(){
    return <div>
      <FullRow>
        <h2>{this.props.label}</h2>
      </FullRow>
      <TodoList todoitems={this.state.todoitems}
                removeTask={this.removeTask} />
      <TodoInsert addTask={this.addTask}/>
    </div>;
  }
};

Code explanation 

Let’s work through the application flow again and link it to the relevant pieces of code.

  1. When text is typed into the text box an onChange event fires.

     <input onChange={this.handleChange} type="text" />

  2. The onChange event updates the state of the component.

     handleChange(e){
       this.setState({
         todoText: e.target.value
       });
     }

  3. When the form is submitted the form triggers an onSubmit method.

     <form onSubmit={this.handleSubmit}>
       {/* form stuff */}
     </form>

  4. The onSubmit method picks the value out of the state and passes it on to the Application layer.

     handleSubmit(e){
       e.preventDefault();
       this.props.addTask(this.state.todoText);
     }

Remember the props.addTask method is passed in from the Application layer – so this is the link back up the stack into the main Application section.

Evaluation

This works perfectly well with no ref usage. It does cause the TodoInsert render method to fire frequently, but this only updates the text node of the textbox so it doesn't cause any notable performance issues. We'll reuse code from the previous two examples in the final walkthrough.

Solution 3: Accessing the state of the text box from the parent control

The final method changes tack: we access the component state from the parent application. It can be done and it will be done.

Full Code

Todo Insert component

This time we have dropped the form tag and the button's click handler does the work.

class TodoInsert extends React.Component{

  //.. constructor omitted

  handleChange(e){
    this.setState({
      todoText: e.target.value
    });
  }

  handleClick(){
    this.props.addTask();
  }

  render(){
    return <div>
      <h3>Add Task</h3>
      <div>
        <input onChange={this.handleChange} type="text"
               value={this.state.todoText} />
      </div>
      <div>
        <button onClick={this.handleClick}>Add Task</button>
      </div>
    </div>;
  }
}

Application Layer

This time the application layer has changed slightly as well. The application layer is now responsible for accessing the state in the child component.

class Application extends React.Component{

  addTask(){
    var newTask = {
      id: this.state.todoitems.length + 1,
      task: this.todoInsert.state.todoText
    };

    this.setState({
      todoitems: this.state.todoitems.concat(newTask)
    });
  }

  removeTask(taskId){
    //.. code omitted
  }

  //.. constructor omitted

  render(){
    return <div>
      <FullRow>
        <h2>{this.props.label}</h2>
      </FullRow>
      <TodoList todoitems={this.state.todoitems}
                removeTask={this.removeTask} />
      <TodoInsert addTask={this.addTask}
                  ref={input => this.todoInsert = input} />
    </div>;
  }
};

Code explanation

Working through the flow of the application ….

Text is typed into the input box and the onChange event is fired

 <input onChange={this.handleChange} type="text" value={this.state.todoText} />

The change event handler tracks the value within the component's own state, as in the previous example.

 handleChange(e){
   this.setState({
     todoText: e.target.value
   });
 }

When the add button is pressed then a click handler is fired

 <button onClick={this.handleClick}>Add Task</button>

This then triggers a method on the application component. We don’t pass up the state this time – we are just notifying the application layer that it is time to save.

 handleClick(){
    this.props.addTask();
  }

Back at the application layer we now put a ref on the TodoInsert component itself

 <TodoInsert addTask={this.addTask} 
          ref={input =>  this.todoInsert = input} />

This allows us to reference the state of the TodoInsert component when we are adding the todo task

 addTask(){
   var newTask = {
     id: this.state.todoitems.length + 1,
     task: this.todoInsert.state.todoText
   };

   this.setState({
     todoitems: this.state.todoitems.concat(newTask)
   });
 }

The important part being this

this.todoInsert.state.todoText

i.e. we are accessing the state of the TodoInsert component itself – this is how we pass around the form values.

Evaluation

Although this is more complex than the previous two examples, I like this one. It enables us to manage the form values within the component and pick them out from higher up in the application hierarchy. It feels nicely encapsulated to me and I can see it extending nicely.

Additional thoughts

All three methods of passing form state around work so take your pick. I’ve used them all in various places. If I had to pick my favourite it would be the one that I’m not writing about at the moment – flux architecture. Putting in flux architecture would enable me to access the values anywhere via a store – a topic for another day perhaps. A foreshadowing teaser if you will.

Notes on sample code

I've simplified the code throughout and put in comments where omissions occur. The entire source code is available on my GitHub site and I'd encourage interested parties to look there for working code samples. Specific amendments and shortcuts are ….

  1. ES6 classes are used throughout. A little bit of syntactic sugar to simplify.
  2. There is a FullRow component that I use to simplify the markup – it is just the markup for a full row in the UI, so read it as that.
  3. I've removed all CSS classes from the markup. For our purposes they are just noise and serve to distract.
  4. I have omitted a lot of the setup in the constructor. Again it's boilerplate and a distraction, but for the interested here is an example of what you are missing
constructor(props) {
  super(props);
  this.state = {
    todoitems: []
  };

  this.addTask = this.addTask.bind(this);
  this.removeTask = this.removeTask.bind(this);
}

Useful links

https://scotch.io/tutorials/better-javascript-with-es6-pt-ii-a-deep-dive-into-classes
ES6 classes. Used throughout the code samples.

https://stackoverflow.com/questions/35303490/uncaught-typeerror-cannot-read-property-props-of-null
I missed out the constructors in the code samples. One of the omissions is the boilerplate that binds the this keyword to the instance for the class methods i.e.

this.addTask = this.addTask.bind(this);

The above link is a good post on what that's all about, plus some ES7 syntax that renders it unnecessary.
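For illustration, here is roughly what that ES7-style fix looks like with a plain class (no React; Counter is just an illustrative name): declaring the handler as an arrow-function class property binds this lexically, so the constructor boilerplate disappears.

```javascript
class Counter {
  count = 0;

  // Arrow-function class property: `this` is the instance, with no
  // `this.increment = this.increment.bind(this)` needed in a constructor.
  increment = () => {
    this.count += 1;
  };
}

const counter = new Counter();
const detached = counter.increment; // an ordinary method would lose `this` here
detached();
console.log(counter.count); // 1
```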

https://facebook.github.io/react/docs/refs-and-the-dom.html
Official guidance on the use and abuse of refs from Facebook.

https://facebook.github.io/flux/docs/overview.html
Flux architecture articles again from Facebook. Another method to pass around form data and perhaps my preferred one.

https://github.com/timbrownls20/Demo/tree/master/React/TodoList
As ever, all code is on my GitHub site.

Simple Debug Panel for React.js

A few months ago I wrote about implementing a debug panel in AngularJS. I’m just getting into React.js so I thought I would do the same by way of comparison. Maybe I’ve become a less complex person in the intervening months but I found the React implementation a lot simpler. It just kind of fell out.

The Problem

I want a panel that will display the current state of my react application that can be easily turned on and off.

The Demo Application

My demo application is a timer application. We can start, stop and reset the counter. It’s not the application that is going to kickstart my unicorn tech startup but it will serve for this purpose.

The react components of the simple timer are, well, simple.

  • Application
    • Header
    • Button (Start/Stop)
    • Button (Reset)
    • Label (Output)
    • Label (Debug Panel)

So the debug panel is just an instance of my Label component.

The Implementation

Label Component

This is a bootstrappy label that takes in a couple of properties. In its simplest form the render method is

 render(){ 
   return <div className="row">
            <div className="col-lg-12">
              <div className="alert alert-info">{this.props.label}</div>
            </div>
          </div>;
 }

So it returns some JSX that can take a prop value that sets the label text. We build it from a parent component like this…

<Label label="My First Label" />

Adding a bit more complexity we can toggle the visibility which is one of our requirements for a debug panel. So adding this in and taking the opportunity to show the entire label component

class Label extends React.Component{

  constructor(props) {
    super(props);
  }

  render(){

    if(this.props.visible != "false"){
 
      return <div className="row">
               <div className="col-lg-12">
                 <div className="alert alert-info">{this.props.label}</div>
               </div>
            </div>;
    }
    else{
      return null;
    }
  }
};

So now it has a property ‘visible’ that can show and hide the label which we call thus

<Label label="My First Label" visible="true" />

The Debug Label

The debug label is just a variant on how this label is constructed. I’m holding my state at the application level – I don’t want it scattered around my lovely simple timer willy nilly. The component is the same and it is constructed from the application component like this…

<Label label={JSON.stringify(this.state)} visible="true" />

with the state serialised to a string and passed in. The label then renders the raw JSON of the application state.

So I can easily see the state of the application. It can be turned off easily with the visible prop

<Label label="My First Label" visible="false" />

Toggling it on and off does require a rebuild (with gulp in this case), but I don't think that is a significant hardship.
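One way to keep that rebuild-time toggle in a single place is a constant (a sketch, assuming a hypothetical DEBUG flag; the string comparison mirrors the Label's visible check above):

```javascript
// Hypothetical build-time flag: flip to "false" and rebuild to hide the panel.
const DEBUG = "true";

// The Label's render guard, extracted as a plain function to show the
// string comparison: anything but the string "false" shows the label.
function isVisible(visibleProp) {
  return visibleProp != "false";
}

console.log(isVisible(DEBUG));   // true
console.log(isVisible("false")); // false
```

The debug label could then be rendered as `<Label label={JSON.stringify(this.state)} visible={DEBUG} />`, so the toggle lives on one line.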

Further Thoughts

I am aware that I've basically implemented a very simple version of the Chrome developer tools but I still like it. I like to see the state up front and it's a simple little thing that shouldn't cause anyone any problems or upset.

I could enhance it a bit – changing the colour of the debug panel or doing the stringify in a dedicated debug panel which consumes the standard label. But once it was done I was happy and didn’t want to tinker. It was also notably simpler than the Angular implementation but I feel react itself is a simpler proposition and living in simpler times is no bad thing.

Useful Links

As ever the source code can be found at my github site
https://github.com/timbrownls20/Demo/tree/master/React/SimpleTimer

The styles are generously provided by bootstrap. Documentation can be found here
https://getbootstrap.com/docs/4.0/getting-started/introduction/

React homepage from the Facebook mothership
https://facebook.github.io/react/

 

HTTP 301: Moved Permanently (to Australia)

After 18 months in the planning I am formally issuing an HTTP 301 for the Brown family – we are emigrating to Australia. Right now our life has been dismantled and put in a 20 foot shipping container – I'm surrounded by removal people, boxes and general chaos as I write. We fly out in two days and land in Brisbane on 10th August, where we will reassemble the Brown life in sunnier climes.

I’m very excited / tired / sad / hopeful plus a number of other emotions that I don’t know the names of. But whatever happens it will be an adventure. I do know that you regret the things you don’t do; you don’t regret getting out there and giving things a go – a fair go.

So the 301 request has been sent. As Wikipedia (almost) says

The HTTP response status code 301 Moved Permanently is used for permanent software developer and family redirection

Wish us luck!!

6 ways to extract the computer name from a network file path with PowerShell

 (and a sprinkle of Regex)

The Aim

They say that a good definition of madness is doing the same thing over and over and expecting different results. Surely then another definition of madness is to do a thing perfectly well once, then dream up 4 different ways to do it in slightly fewer lines of code. With this in mind, here are 5 ways to extract the computer name from a UNC file path using PowerShell – a task I found surprisingly difficult.

  1. Given a UNC filepath i.e.
    \\CODEBUCKETSERVER1\wwwroot\WebSite\ImageDir

    I want a PowerShell script that returns

    CODEBUCKETSERVER1
  2. If I pass in a local directory then I want an empty string
  3. I want it with as little code as possible. Really I want to see it on one line
  4. Practice some PowerShell and learn something – as always

Function 1 – splitting the string

function GetHostName_V1{
  param ([string] $FilePath)
  return $FilePath -split "\\" | Where {  $_ -ne ""  } | Select -first 1
}

So working it through one at a time

$FilePath -split "\\"

splits the string into an array using \ as the delimiter (note it is \\ in the script because it is escaped). For the sample path the elements are "", "", "CODEBUCKETSERVER1", "wwwroot", "WebSite" and "ImageDir".

the output is piped to

 Where {  $_ -ne ""  }

Filters out any empty strings (the leading \\ means the first elements of the split are empty)

Select -first 1

Returns the first (non-empty) element in the array, which is our host name.
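The split-filter-take-first logic is not PowerShell-specific, so here is the same pipeline sketched in JavaScript (illustrative only; the PowerShell above is the real implementation):

```javascript
// Split on backslash, drop the empty leading elements, take the first remainder.
function getHostNameV1(filePath) {
  return filePath.split("\\").filter(part => part !== "")[0];
}

console.log(getHostNameV1("\\\\CODEBUCKETSERVER1\\wwwroot\\WebSite\\ImageDir"));
// CODEBUCKETSERVER1
console.log(getHostNameV1("C:\\inetpub\\wwwroot\\WebSite\\ImageDir"));
// C:
```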

Evaluation

Not good. If I pass in a local path i.e.

C:\inetpub\wwwroot\WebSite\ImageDir

Then I get

C:

as the output (the first non-empty string). Misleading – this isn't a machine name so I shouldn't return it. Back to the drawing board.

Function 2 – Regular expression

Really, this feels like a task for regular expressions. So the first regex pass is

function GetHostName_V2{
  param ([string] $FilePath)

  $FilePath -match "\\\\(.*?)\\" | Out-Null

  if($Matches.Count -ge 2)
  {
    return $Matches[1]
  }
}

The regex

The regex we are going to use is

\\\\(.*?)\\

It's better to look at it without the escape characters so ..

\\(.*?)\

Breaking it down

\\

Is a straight character match of two backslashes

(.*?)

Is any number of characters, BUT the ? makes it non-greedy, so it will match the least amount of characters it can and still make the match. Without it the match would be greedy and consume everything it could up to the last \ rather than stopping at the first.

\

Another character match

So it matches \\ then anything then \. The trick is that the anything (.*?) is in parentheses, so it will be available to us as a group – the parentheses do that.
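The greedy vs non-greedy behaviour is the same in any regex engine; a quick JavaScript check on the sample path (illustrative only):

```javascript
const uncPath = "\\\\CODEBUCKETSERVER1\\wwwroot\\WebSite\\ImageDir";

// Non-greedy: stops at the first backslash after the host name.
console.log(uncPath.match(/\\\\(.*?)\\/)[1]); // CODEBUCKETSERVER1

// Greedy: swallows everything it can and still match a final backslash.
console.log(uncPath.match(/\\\\(.*)\\/)[1]);  // CODEBUCKETSERVER1\wwwroot\WebSite
```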

The PowerShell function

So step at a time

$FilePath -match "\\\\(.*?)\\"

Matches the file path against the regex. The -match operator then copies the results into a magic global variable called $Matches, which contains the overall match and all the groups.

So we can see the overall match

\\CODEBUCKETSERVER1\

And the group

CODEBUCKETSERVER1

To return the group we check that there are at least 2 elements in $Matches

$Matches.Count -ge 2

Then return the hostname which is in the 2nd position in the matches collection

return $Matches[1]

What are we really returning?

PowerShell is odd about returning values from functions: it returns every value that hasn't been consumed, and the return keyword just signals the end of the function. So…

if($Matches.Count -ge 2)
{
  $Matches[1]
  return
}

Would work as would

if($Matches.Count -ge 2)
{
  $Matches[1]
}

There is additional weirdness though

$FilePath -match "\\\\(.*?)\\"

Returns true – we haven't consumed it, so that would be returned too. Out-Null 'uses it' and stops it being returned, getting us just the single return value we want.

$FilePath -match "\\\\(.*?)\\" | Out-Null

It’s so odd (to me) that I might write a separate blog post about it one day. Anyway digression over.

Evaluation

It works. Host name for UNC and Null for local paths. I hate it though (an extreme reaction to a PowerShell script admittedly).

  1. Magical variable called $Matches – what’s that about?
  2. Having to use Out-Null to monkey around with the return value
  3. Too many lines – I can do this in one line surely.

Function 3 – regex and split

Trying to get away from the magic $Matches variable I'll combine the first two attempts to get

function GetHostName_V3{
  param ([string] $FilePath)
  if ($FilePath -match "\\\\(.*?)\\" -eq $TRUE)
  {
    return $FilePath -split "\\" | Where {  $_ -ne ""  } | Select -first 1
  }
}

This one is fairly transparent so

$FilePath -match "\\\\(.*?)\\" -eq $TRUE

Checks to see if the input is in a UNC type format. If it is then

$FilePath -split "\\" | Where {  $_ -ne ""  } | Select -first 1

We split it. The return doesn't need to be there but for me it points out the intention. We don't need Out-Null because we are using the return value of the -match operator, so it won't be put on the pipeline and returned out.

Evaluation

It's OK. It returns empty for a local path, which is good. It can actually be understood. In real life I would be happy with this – I've seen far worse PowerShell. But in my heart of hearts I know I can do better.

Function 4 – Select-String

I'm abandoning -match now and using Select-String. Select-String will also pattern match a string against a regex, but it returns the results as MatchInfo objects which we can then consume by piping them to other operators. It gets us to the one-liner that I want so…

function GetHostName_V4{
  param ([string] $FilePath)

  $FilePath | select-string -pattern "\\\\(.*?)\\" -AllMatches |
    ForEach {$_.Matches} | ForEach {$_.Groups} |
    Select-Object -skip 1 -first 1
}

Examining this a piece at a time

$FilePath | select-string -pattern "\\\\(.*?)\\" -AllMatches

Matches the regex to the string and returns all matches as a collection of MatchInfo objects.

So we have the match and then the group collection

| ForEach {$_.Matches}

Takes us through all the matches

| ForEach {$_.Groups}

Takes us through each group for each match. Our machine name is put in a regex group (remember (.*?)). The first group is the full match and the second group is the machine name so

| Select-Object -skip 1 -first 1

Skips the first and picks up the next one. It works.

Evaluation

Good. It’s one line with the output all flowing along the pipeline which I like. It works – I’m nearly done. But I’ve a tiny bit of disquiet – am I really doing it in the simplest way I can?

Note on aliases

To shorten this we can use the % alias instead of Foreach (which is itself an alias for ForEach-Object). So the main body of the function could become

$FilePath | select-string -pattern "\\\\(.*?)\\" -AllMatches | % {$_.Matches} | % {$_.Groups} | Select -skip 1 -first 1

Shorter still. Nice.

Function 5 – lookahead and lookbehind

Reflecting on this – a lot of the complexity is the use of groups in this regex. Do I need them? Well no – I can use the zero-length regex assertions lookahead and lookbehind.

function GetHostName_V5{
  param ([string] $FilePath)
  $FilePath | select-string -pattern "(?<=\\\\)(.*?)(?=\\)" |
    Select -ExpandProperty Matches | Select -ExpandProperty Value
}

The regex

Once again it’s got escape characters in i.e.

 (?<=\\\\)(.*?)(?=\\)

It’s easier to understand if we just remove them while we are dissecting it

(?<=\\)(.*?)(?=\)

So there are three parts

(?<=\\)

Looks behind the match to check for \\. It isn’t part of the match though

.*?

Matches the least amount of anything it can (remember the non-greedy stuff).

(?=\)

Looks ahead of the match to check for \. Again it isn’t part of the match. So the match is only the machine name i.e. the least amount of anything.
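Again the lookaround behaviour is engine-agnostic; a quick JavaScript check (illustrative only) shows that with lookbehind and lookahead the whole match is the machine name, with no group bookkeeping:

```javascript
const samplePath = "\\\\CODEBUCKETSERVER1\\wwwroot\\WebSite\\ImageDir";

// The backslashes are asserted, not captured, so match[0] is the host itself.
const match = samplePath.match(/(?<=\\\\)(.*?)(?=\\)/);
console.log(match[0]); // CODEBUCKETSERVER1
```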

The function

function GetHostName_V5{
  param ([string] $FilePath)
  $FilePath | select-string -pattern "(?<=\\\\)(.*?)(?=\\)" |
    Select -ExpandProperty Matches | Select -ExpandProperty Value
}

So in parts

$FilePath | select-string -pattern "(?<=\\\\)(.*?)(?=\\)"

Returns a match info with just one match (and no extra groups)

Select -ExpandProperty Matches

Select the match property of the match info

Select -ExpandProperty Value

Selects the value property of the match object. This is the machine name. Done and Done!

The madness ends

It's madness to write it all out – but then again there is a lot in even a simple task. Going all the way through we covered off

  • The PowerShell return keyword and Out-Null
  • How regex groups work
  • How regex lookahead and lookbehind work
  • The PowerShell pipeline operator
  • PowerShell select-string vs -match

So perhaps not quite as mad as all that.

Post Script. Function 6 – FQDN

As requested in comments here is function 5 amended to deal with fully qualified domain names

function GetHostName_V6{
  param ([string] $FilePath)
  $FilePath | select-string -pattern "(?<=\\\\)(.*?)(?=(\\|[.]))" |
    Select -ExpandProperty Matches |
    Select -ExpandProperty Value
}

The regex has been changed slightly to

(?<=\\\\)(.*?)(?=(\\|[.]))

So the lookahead (?=(\\|[.])) will stop the match when it finds either \ or .

So testing with

\\server.domain.tld\share

gives the expected server rather than server.domain.tld – the computer name as promised. It still works with UNC paths that are not fully qualified.
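The amended lookahead can be checked the same way in JavaScript (illustrative only):

```javascript
const fqdnRegex = /(?<=\\\\)(.*?)(?=(\\|[.]))/;

// Stops at either a backslash or the first dot, so only the host name survives.
console.log("\\\\server.domain.tld\\share".match(fqdnRegex)[0]);   // server
console.log("\\\\CODEBUCKETSERVER1\\wwwroot".match(fqdnRegex)[0]); // CODEBUCKETSERVER1
```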

Useful Links

https://en.wikipedia.org/wiki/Path_(computing)#Uniform_Naming_Convention
UNC means Uniform Naming Convention i.e. paths in the form of \\MYHOST\more\more1

https://mcpmag.com/articles/2015/09/30/regex-groups-with-powershell.aspx
Good description of the magic $Matches object

http://www.regular-expressions.info/lookaround.html
Lookahead and Lookbehind in regex. This site is so old but still really useful – I’ve been looking at it for years now

https://msdn.microsoft.com/en-us/powershell/reference/5.0/microsoft.powershell.utility/select-string
MSDN documentation for select-string. Useful

https://blog.mariusschulz.com/2014/06/03/why-using-in-regular-expressions-is-almost-never-what-you-actually-want
Good post on greedy vs non-greedy regex operators

https://code.visualstudio.com/
All PowerShell was written with Visual Studio Code. My IDE of choice for PowerShell. Nice debugging.

https://github.com/timbrownls20/Demo/tree/master/PowerShell/UNC%20FilePath
As ever the code is on my GitHub site

Choosing between Visual Studio Team Services and On-Premises TFS

A few days ago I bit the bullet and upgraded our on-premises Team Foundation Server from TFS 2015 to TFS 2015 Update 4. As I sat there, after hours, watching the update spinner with the IT department on speed dial, it made me reflect on whether putting in on-premises TFS was the best idea. If we had gone for Visual Studio Online (now Visual Studio Team Services) I wouldn't have to do upgrades – ever.

Flashback 2 years: we had a mishmash of ALM tools that didn't talk to each other, a git repository that worked at a snail's pace and a deployment process that involved manually editing csproj files but never checking them in. We had to change. We discussed online vs on-premises and, after some pen chewing, fingernail biting and some proper pondering, we went for on-premises. Did we do the right thing? Here is why we took the decision and, on reflection, whether I think we got it right.

1. Code stored offsite

With Visual Studio Team Services (VSTS) your code is clearly stored offsite on someone else's machine – your crown jewels are stored under someone else's bed. This shouldn't be a problem. Microsoft's bunkers are secure, more secure than anything I could come up with. Surprisingly this was an issue and various stakeholders were much happier with the code being stored by us. I've had very experienced IT staff visit us since then and they were also happy to see we had complete control over our code. So on-premises won out on that score even if it was perception over reality.

Was I right?

Yes and no. Our source code is probably better off in Microsoft’s super secure bunkers but if the decisions makers want the code with us then that’s fine. I didn’t push for it to be stored offsite and I wouldn’t push for it now.

2. High levels of Customisation

This wasn’t a Greenfield project and we had pieces that needed to be integrated into TFS.  With on premises we could break it open and code against it. At the time with Team Services you got what you were given. Our two pain points were

  1. MSpec tests. We could only get these under continuous integration with a lot of on-premises custom tinkering.
  2. Labelling build versions. I put in a piece to code against the TFS object model to get the build versions automatically populating and integrated this into the continuous build. Again custom on-premises tinkering but well worth it.

Was I right?

Yes and No – we ultimately retired MSpec but for a year about a third of our test coverage was using this so needed to be included. Labelling build versions is still really important to us and works well. But VSTS has come on a lot and would now do most of this if not all.

3. Infrastructure

With on-premises you are responsible for your own servers – you are master of your own bits of tin. If you want more infrastructure then you will need to find it yourself. I didn't think this would be a problem.

Was I right?

No. Infrastructure became a big issue. Disk space ran out and I had to keep scavenging around for more. Our builds proliferated and our build server became very underpowered. It took a long time to get another – we just didn't have the rack space. Don't underestimate the infrastructure – if the TFS implementation is successful then people will want more and more of it.

4. Quality of Internet access

2 years ago our internet was powered by a very tired hamster on a wheel. It wasn’t good – our existing online tools were causing a headache. No-one had much appetite for further reliance on our poor internet connectivity. On-premises seemed the better option.

Was I right?

Yes but not for long. We got a better Internet connection after 6 months so we could have ultimately used Team Services. But I think an initial 6 months of a poor user experience would have led to a rejection of the implementation. Not everyone in the world has great Internet so Team Services is always going to battle against that.

5. Upgrading

With Team Services your upgrade is done for you. You are always on the latest version. With on premises you do your own upgrades when you have the required levels of emotional resilience to embark on it. I didn’t think this would be a problem.

Was I right?

No. Upgrading is a pain. I’ve done it twice and it worked well both times but there is a lot of thought that needs to go into rollback plans and minimising outages. Now I’ve gone from TFS 2013 to 2015 R4 the next is 2017 and that’s going to take an upgrade to SQL Server as well. It will be much more difficult next time.

6. Amending Bug and Work Item definitions

At the time Team Services didn’t offer customisation of the process templates i.e. you couldn’t put extra fields, dropdowns and process flows into your project tracking and defect management. This was a deal breaker for us – we had absolute requirements around this. Over two years we have done a lot of customisation of the templates and we really benefit. Once again customisation is king.

Was I right?

Yes and No. Team Services has added some customisation but it is still work in progress. We couldn’t and still wouldn’t wait for a feature that may or may not happen. We needed that level of customisation – it wasn’t just a nice to have. That said this feature is definitely on the road map for VSTS so any concerns I have will continue to evaporate.

7. Availability

Team Services promises 99.9% availability. On-premises is going to struggle to compete with this, particularly when factoring in downtime for upgrades. I didn’t think this would be an issue for on-premises.

Was I right?

Yes – it wasn’t an issue. Over 2 years we didn’t get anywhere near 99.9% for availability but it didn’t cause problems. Outages were planned and developers could keep developing, testers keep testing and project managers could keep doing whatever it is that project managers do. It was fine.

8. Other Issues

There are a few other issues that didn’t come up at the time but I would probably consider if I was to take the decision today

Costing

The costing model for on-premises and VSTS is currently comparable. You can get monthly users of on-premises TFS with an online subscription. From the costing page:

Each paid user in your Team Services account also gets a TFS client access license (CAL). This lets you buy monthly access to TFS for your team.

This really helps out when you are ramping up testing and need a whole bunch of new users. I’m just a little wary of whether this will always be the case – monthly users are baked into VSTS but I could imagine them being quietly dropped from on-premises. Paranoid maybe (maybe not).

Support for extensions

I was interested in a decent wiki for TFS but the extension is just for VSTS. The extension ecosystem seems better supported for VSTS. I suspect it always will be.

The future

I don’t need to climb inside the mind of the Microsoft Board to understand that there is a big focus on cloud offerings. VSTS fits nicely within this. TFS on-premises feels a little bit like a relic now. Perhaps not but I would be wary about ongoing support for the on premise version for small shops. I wouldn’t want to be chopping and changing my ALM solution every couple of years so I would have an eye on what the future may hold.

So … on-premise or online?

From my own experience I would say that for

Greenfield project with flexible ALM requirements
Pick VSTS – it’s easier to get going

Established project with requirements for high levels of customisation
Consider on premises but do review VSTS – it might do everything you need. It probably does by now.

For large shops with big IT departments and deep pockets
Go for on premises if it suits and you feel better with your code in house. Infrastructure and upgrading will be no problem. You’ll eat it up

For small shops with a small/non-existent IT department
Be realistic about your IT capacity and willingness to upgrade. Even if stakeholders are screaming for on-premises be certain about your own capacities.

And if I was to implement a Microsoft ALM today?
I would go for Visual Studio Team Services though on-premise was the right decision at the time.

As ever these are just my opinions. Feel free to disagree and if you post a comment telling me why I’m terribly wrong – that would be great.

Useful Links

Visual Studio Online home page
https://www.visualstudio.com/vso/

Customising process templates
https://www.visualstudio.com/en-us/docs/work/reference/process-templates/customize-process

Team Foundation Server object model – which I used to do a fair bit of customisation to our on-premises installation
https://www.nuget.org/packages/Microsoft.TeamFoundationServer.ExtendedClient/14.89.0


Lightning Talk at Leeds Sharp

I’m doing a lightning talk at Leeds Sharp user group tomorrow. It’s going to be 10 to 15 minutes about how to write more robust SpecFlow tests – subtitle ‘how I stopped worrying about my Specflows and started living life to the full’. It’s only a quarter of an hour so the damage I can do is probably quite limited. I’m up last – my daughter tells me the best is always saved till last. I’m more skeptical.

Anyway Leeds Sharp is always good and I’m sure I’ll learn a lot from the other lightning talkers. My PowerPoint for the talk is here, archived for future generations. They’ll thank me for it.

A web.config implementation for AngularJS

It’s said that when learning a new language or framework it is best to approach it fresh, with no preconceived ideas from other languages. So completely ignoring that advice here is my implementation of an ASP.NET web.config file in AngularJS.

I’m used to squirrelling away my config in one external file and I want to do that again in AngularJS. Really it’s the appSettings part of the web.config file that I’m implementing but the point is that all my config for the application is in one file that I can use throughout my application. I like it in .NET and I want to do it again here.

The implementation

App.Config.js

The first step is to implement a config object in an external file like so…

window.appConfig = {
  apiRoot: "http://localhost:3000/api/",
  debug: 0
};

We have just 2 config items at the moment: the root of the API and whether it is in debug mode or not. In the normal run of things I wouldn’t want to pollute the global scope with object literals – I would wrap them in an IIFE. However this is a special case as the config is genuinely global, so I’m explicitly placing it on the global scope i.e. using the window object.
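As a minimal sketch of that distinction (using globalThis as a stand-in for window so it also runs outside a browser): the config is the one deliberate global, while everything else stays hidden inside an IIFE.

```javascript
// Sketch: one deliberate global for the config, an IIFE for everything else.
// globalThis stands in for window so this also runs outside a browser.
var root = globalThis;

// The single intentional global.
root.appConfig = {
  apiRoot: "http://localhost:3000/api/",
  debug: 0
};

(function () {
  // Anything declared in here does not leak out.
  var internal = "hidden";
  console.log(root.appConfig.apiRoot); // http://localhost:3000/api/
})();

console.log(typeof root.internal); // undefined
```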

As it is an external file I need to remember to reference it in the main application page.

 <script src="app/config/app.config.js"></script>

App.Module.js

The next task is to reference the config in the main application file – app.module.js in this case. This is how I’m going to make it available throughout the entire application. Strictly speaking I don’t need to do this: as it is on the global scope I could leave it alone and reference it directly via the window object. I’d rather not – it’s going to make unit testing problematic in the future and anyway I don’t like it. Instead I’m going to use app.constant in the main app.module file thus

 (function () {
    // the module needs creating before use (ngRoute assumed as $routeProvider is configured)
    var app = angular.module("app", ["ngRoute"]);

    app.config(function ($routeProvider, $locationProvider) {
        //.. set up the application – routing etc..
    });

    app.constant("appConfig", window.appConfig);
 })();

My naughty accessing of a global object is here and nowhere else. The config object is placed into app.constant which is then available throughout the application via dependency injection. I could have used app.value and it would work in much the same way. However I’m never going to change these values during the lifetime of the application so the constant property is more appropriate.

Consuming the config

The config is then available via dependency injection in controllers, services, directives e.g.

    angular.module('app')
    .controller('MyController', ['appConfig', function (appConfig) {
            //.. appConfig is now available here
    }]);

In a service

The consumption in a service is the same. The use case here is a service can use the config to set the API url which could, should and will change in different environments.

angular.module('app')
.service('MyService', ['$http', 'appConfig', function ($http, appConfig) {
    var serviceRoot = appConfig.apiRoot;

    this.get = function(id) {
        var url = serviceRoot + "myAPIMethod/" + id;
        return $http.get(url);
    };
    //.. more methods
}]);

In a directive

I’ve previously blogged about a debug panel that can be toggled on and off. In that post I used a query string parameter to control the display of the debug information. In truth I don’t really like that. This is the debug panel directive implemented with the web.config style config. As before it is injected in, and it does a bit of DOM manipulation in the link method to remove the panel if debug is off. I prefer this to my crazy query string implementation from before.

angular
  .module('app')
  .directive('debugDisplay', ['appConfig', function(appConfig){
      return {
          scope:
          {
              display : "=display"
          },
          templateUrl: "/views/directives/debugDisplay.html",
          link: function(scope, elem, attrs) {

              if(appConfig.debug == 0){
                  for(var i = 0; i < elem.length; i++)
                    elem[i].innerText = '';
              }
          }
      };
  }]);

Deployment

One of the reasons that I like external config files is that I can have different settings in different versions of the files. I can then use some sort of task runner or deployment script to swap them out as the application moves through different environments. So I could have different app.configs in my project i.e.

  • app.config (standard development one)
  • app.devtest.config
  • app.qa.config
  • app.preprod.config
  • app.prod.config

I can then delete the app.config file and rename the environment-specific file so it is picked up in different environments. So when deploying to QA I would…

  1. delete app.config
  2. rename app.qa.config to app.config

And as if by deployment magic I get different settings in QA. I currently use a bunch of PowerShell scripts to do this so it’s all automated but it can be done with whatever script or tool is most familiar or fashionable or nearest to hand.

Alternative implementations

As always, I’m not claiming that this is the best implementation for consuming application config. Global scope and config is going to be something that can be done in any number of ways. In fact I can think of several just while sat here typing in my pyjamas.

Global objects

I could just access the config directly wherever I like by using the global window object like so..

    angular.module('app')
    .service('MyService', ['$http', function ($http) {
        var serviceRoot = window.appConfig.apiRoot;

        this.get = function(id) {
            var url = serviceRoot + "myAPIMethod/" + id;
            return $http.get(url);

        };
        //.. more CRUD methods
    }]);

I personally don’t like having the window object scattered through my code but it could be done. It’s not going to work well with any kind of unit testing though so unless you’re a no test kind of developer it’s probably best avoided.
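To make the testing point concrete, here is a plain-JavaScript sketch (no Angular, and the names are hypothetical): a function that receives its config can simply be handed a stub in a test, whereas one that reads window.appConfig needs the global set up first.

```javascript
// A service that takes its config as a parameter is trivially testable...
function makeService(appConfig) {
  return {
    urlFor: function (id) { return appConfig.apiRoot + "myAPIMethod/" + id; }
  };
}

// ...because a unit test can just hand it a fake config.
var svc = makeService({ apiRoot: "http://fake/api/" });
console.log(svc.urlFor(7)); // http://fake/api/myAPIMethod/7
```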

Global scope

We could use $rootScope instead of app.constant and stash the external config there. Accessing it would then be a matter of grabbing the config from $rootScope thus

angular.module('app')
.service('MyService', ['$rootScope', '$http', function ($rootScope, $http) 
{
   var serviceRoot = $rootScope.appConfig.apiRoot;
   //.. CRUD methods
}]);

I still like the idea of only the config being available in the method. I just get what I need, not the entire rootScope – that said this would work perfectly well.

Log Provider

This isn’t a complete replacement but it’s worth noting that debug config could be implemented through the $logProvider component if we wanted. To toggle on and off we would do this

  $logProvider.debugEnabled(false);

In the interests of full disclosure there are other (better??) ways to get debug information in and out of Angular that have their own config.

Useful Links

https://ilikekillnerds.com/2014/11/constants-values-global-variables-in-angularjs-the-right-way/
Article on app.constant vs app.value

http://stackoverflow.com/questions/6349232/whats-the-difference-between-a-global-var-and-a-window-variable-in-javascript
Useful stack overflow question comparing the windows object and global scope.

http://odetocode.com/Articles/345.aspx
Ode to Code on appSettings and web.config. That’s the part of the web.config I’m aiming to emulate in this post.

https://thinkster.io/egghead/testing-overview
Testing with angular. Here’s what you might miss if you want to implement this with a global object.

http://stackoverflow.com/questions/18880737/how-do-i-use-rootscope-in-angular-to-store-variables
Using rootscope to store variables

https://thinkster.io/egghead/index-event-log
logProvider examples – about halfway down.

Dealing with invalid js characters in VS Code

Even though mental stability is a core personal principle that is rarely violated, here is something that was driving me mad recently.

The Problem

I was using VS Code to write a demo AngularJS application and I kept getting this error in my JavaScript files

[js] invalid character

With funny red squiggles thus

There was no rogue comma or strange character visible and no amount of deleting would resolve it. I’m more used to Visual Studio where odd things happen fairly frequently and this kind of thing can be ignored while the environment catches up with itself. However this floored VS Code and my lovely demo app just stopped working. It was happening all the time. Bah!!

The Investigation

Although it isn’t visible in VS Code, the rogue character is visible in the Chrome developer tools.

Press F12 -> Select Sources -> Navigate to the broken js file

The dodgy character is visible as a red dot and the character can be identified by hovering over it

In this case the character is

\u7f

\u denotes Unicode and the 7f is hexadecimal, so converting to decimal the character is 127. Looking this up on an ASCII table we find out it is the [DEL] character.
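The conversion is easy to double-check in the browser console (or Node):

```javascript
// 0x7f in hexadecimal is 127 in decimal – the DEL control character.
var code = parseInt("7f", 16);
console.log(code); // 127
console.log(String.fromCharCode(code) === "\u007f"); // true
```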

The Solution

Once the culprit is identified it’s an easy matter to use this regex

\u007f

to find the rogue character and replace it with an empty string. Don’t forget to do a regex search in VS Code – select the star icon in the search box as shown below. This fixes the application and thus repairs my fragile mental state.
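The same fix can be done in code – stripping any DEL (U+007F) characters from a string with the padded regex:

```javascript
// Strip rogue DEL (U+007F) characters from a string with a global regex.
var dirty = "var total = 1;\u007f";
var clean = dirty.replace(/\u007f/g, "");
console.log(dirty.length); // 15
console.log(clean.length); // 14
```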

Why delete characters keep occurring in my JavaScript code is anyone’s guess. It could be a VS Code quirk but I’ve googled around it and haven’t found anything. It could be me flailing around with my new laptop and somehow inserting odd characters into files. I guess some things we aren’t meant to know.

Useful links

http://www.regular-expressions.info/unicode.html
How to write regex to find Unicode. The important point to remember is that it must contain 4 hexadecimal digits. So \u7f doesn’t find my DEL character – the regex needs to be padded out to \u007f

https://code.visualstudio.com/
Download page for VS Code. Recommended.

Better Numeric Range Input with ASP.NET MVC, HTML5 and JQuery

The Sin

I recently wrote this horrible code to generate a drop down box to select any number from 1 to 12.

Html.DropDownList("ddlClockHours", new List<SelectListItem>
 {
 new SelectListItem {Text = 1.ToString(), Value = 1.ToString()},
 new SelectListItem {Text = 2.ToString(), Value = 2.ToString()},
 new SelectListItem {Text = 3.ToString(), Value = 3.ToString()},
 new SelectListItem {Text = 4.ToString(), Value = 4.ToString()},
 new SelectListItem {Text = 5.ToString(), Value = 5.ToString()},
 @* And so on till 12 – I’m too embarrassed to continue *@
 })

Ouch! It was only a test harness but after an hour I was already sick of looking at it. It’s a sin that needs to be atoned for.

The Penance

For my penance I’ve said 5 Hail Marys and identified three better ways to generate numeric range dropdowns.

Using Enumerable.Range

If I want to stick with MVC and C# then there is a static method on Enumerable that will generate sequences of numbers i.e.

 Enumerable.Range(1, 12)

will output 1 through to 12 as a list of ints. Leveraging this, my grotesque piece of Razor becomes

@Html.DropDownList("ddlClockHours", Enumerable.Range(1, 12)
.Select(x => new SelectListItem { Text = x.ToString(), 
Value = x.ToString() }));

Which renders out as a normal dropdown as before

Enumerable.Range

Much better.
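As an aside, if the dropdown were being built client-side instead, plain JavaScript has an equivalent trick (this is just a comparison, not part of the Razor solution):

```javascript
// JavaScript equivalent of Enumerable.Range(1, 12).
var hours = Array.from({ length: 12 }, function (_, i) { return i + 1; });
console.log(hours.join(",")); // 1,2,3,4,5,6,7,8,9,10,11,12
```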

Using HTML5

Of course we don’t even need to get into C#. If we’ve got the luxury of a modern browser we can just use HTML5 to give us a slider control.

<input id="clockHours" type="range" min="1" max="12"/>

Which renders as

HTML5

Better again.

Using JQuery

If you don’t have the luxury of a modern browser then you can fall back on JQuery UI, which supports a larger range of browsers. The code isn’t much more but of course you need to reference the JQuery and JQuery UI libraries. It’s another slider-type control and the implementation is

<script>
 $(function() {
   $("#clockHoursJQuery").slider({
     min: 1,
     max: 12
   });
  });
 </script>

<h2>JQuery Numeric Range</h2>
<div id="clockHoursJQuery"></div>

Which looks like

JQuery UI

So much nicer.

The Atonement

To atone for my sin I swapped out the horrible code for the Enumerable.Range implementation. I think that’s my preference really. I don’t really want to include a whole bunch of scripts and css to get a decent control (JQuery) and I don’t want to limit myself to the latest and greatest browsers (HTML5). Besides, I think Enumerable.Range is little known but pretty smart and let’s face it – who doesn’t want to be little known but pretty smart.

Useful links

https://www.dotnetperls.com/enumerable-range
Nice article on Enumerable.Range

http://www.htmlgoodies.com/html5/tutorials/whats-new-in-html5-forms-handling-numeric-inputs-using-the-number-and-range-input-types.html
More on the HTML slider control

https://jqueryui.com/slider/
Official documentation for the JQueryUI slider control

https://github.com/timbrownls20/Demo/blob/master/ASP.NET%20Core/Demo/Demo/Views/Demo/NumericDropdown.cshtml
As always there are code sample of all implementations on my git hub site.

Why Doesn’t My Visual Studio Solution Build? A Troubleshooting Guide

broken building

I’m pretty good at getting Visual Studio projects building correctly – I’m a bit of a Visual Studio whisperer. In one job I could get the ‘flag ship’ application up and running on a new machine in half the time of anyone else – that still meant half a day’s work. Strangely that’s not on my CV.

So here are the steps I go through to get a rogue project building in Visual Studio. These steps range from the basic to the bizarre. They are in the order I would do them and by following them I can pretty much get any Visual Studio project back on track. So put on your tin hat and let’s get that errant Visual Studio solution building.

Sanity Check

Before embarking on a full blown Visual Studio troubleshooting endeavour it’s best just to do a few simple checks.

Have you got latest code?

Everybody has wasted hours trying to get an old version of the code working. I still do it. The developer that sits opposite me still does it. My boss still does it. Don’t do it. Get the latest version of the code from your repository.

Does it build for other people in your team?

Just see if other people are having the same problem. If you have continuous build – check that it is still working and running through cleanly. The code might not work for anyone. More illuminating – it might work for some and not others.

Basics

Every few weeks you will probably be faced with a non-building Visual Studio project. Here are some basic steps to help. These will probably be enough.

Clean and rebuild

clean solution

Go to

Solution Explorer -> Right Click -> Select Clean -> Wait -> Select rebuild

Often the bin directories have gone all wonky and are full of junk. Who knows what has happened to them. Clean and rebuild will refresh them and often works when a normal build doesn’t. Standard stuff – but then we are just beginning.

Build each project individually

Often you are faced with mountains of errors, which can be misleading. It could be one of your low-level library projects that is failing to build and causing all the other projects to fail. Rebuild each project individually, starting with the low-level ones that are dependencies for others. Sometimes that’s enough to get the entire solution building. At a minimum, you will better be able to see where the actual issue is and not be swamped by a heap of irrelevant error messages.

Close and reopen Visual Studio

It’s time to restart your Visual Studio. Don’t leave this till the end – it is often the problem. Turn it on and off again – it might help.

As a note – in my experience restarting your computer rarely helps. By all means try it but don’t be surprised when the solution still stubbornly refuses to build.

Manually delete all bin folders

This is really worth a try if you are working with an application with language variants. The language-specific resource files (e.g. Messages.fr-CH.resx) compile down into satellite resource files in your bin folder that are contained in their own subfolders e.g.

…\bin\fr-CH\MyApplication.resources.dll

Weirdly Visual Studio Clean can leave these satellite assemblies behind. Your application will still build but it can cause changes in language variants not to shine through.

This might seem like an edge case but this exact thing kept a colleague of mine baffled for 6 hours. It was a very emotional moment when we finally sorted it out. So this is very much worth a try.

Start Visual Studio in admin mode

run as administrator

I’m master of my own machine (i.e. I’m a local admin) and I’ve got my Visual Studio set to always open as an administrator. You might not. Right click and run as administrator.

If it isn’t possible get someone who is an administrator to open Visual Studio on your behalf. Then complain bitterly about not being a local admin of your own machine – you are a developer; you really should be.

Have you got bad references?

bad references

Just check all projects and make sure that the references are all there. Look out for the little yellow triangle. If you have bad references then jump to section below dealing with that.

Check IIS

This isn’t relevant for all projects but if you are building a web project using IIS then it’s worth doing checks around that. Obviously it’s a completely legitimate setup to use IIS Express. In that case this isn’t relevant.

IIS pointing at the wrong folder

This is often a problem when changing branches. Visual Studio normally makes a good job of switching for you but not always. If you are hooked up to the wrong folder then your project will build but you won’t be able to debug it. Also changes that you are convinced you have made won’t shine through. This has driven me crazy before.

IIS explore folder

To check

  1. Open IIS (run -> inetmgr)
  2. Navigate to website
  3. Press explore and confirm you are where you think you should be. You might not be in Kansas anymore.

Permissions on app pool

This won’t manifest itself as a build failure but the project won’t be running as expected particularly when accessing external resources. It might be worth checking what the app pool is running as.

application pool settings

To check

  1. Open IIS (run -> inetmgr)
  2. Application Pool -> select the one for the website
  3. Advanced settings

You can now check the user.

The application pool runs under ApplicationPoolIdentity by default. What I’ve sometimes seen is that it’s been changed to a normal user whose password has expired. This is typically fallout from previous Visual Studio failures echoing through the ages.

Bad references

bad references

If you are noticing a yellow triangle on your project references then Visual Studio can’t find a dependency. Sometimes there is no yellow triangle and all looks fine but it still can’t find the dependencies.

With the advent of nuget this is less of a problem now but it does still happen. There are instances where incorrect use of nuget makes it worse and more baffling.

Check reference properties

Go to the references -> right click -> properties

references properties

There are two things to look for

  1. Is it pointing to where you think it should be? It might not be
  2. Copy Local. Bit of witchcraft but I always set this to copy local. It will copy the dll into the bin folder so at least I can satisfy myself that it has been found and is being copied through OK.

Have you checked in the packages folder – don’t!

Even if you are using nuget your bad reference problem might not be at an end. Checking in the packages folder can cause the references not to be found. This is especially baffling since looking at the reference properties reveals no problems. The path to the required dlls is definitely valid and the dlls are definitely there. But it cannot be found – frustration and finger biting.

To resolve delete the packages folder then remove the folder from source control. Rebuild and all will be well.

Resetting Visual Studio

Resetting Visual Studio can (probably will) cause your environment to lose all your custom defaults – so all your shortcuts, settings and perhaps plugins will go. Therefore I’ve left this step towards the end as it is not without consequences. That said, it is often the resolution so it’s not a bad idea to do it earlier in the ‘what on earth is going wrong with Visual Studio’ resolution process.

Delete all temporary and cache files

This was a fix that often worked in slightly older versions of Visual Studio. I’m using VS 2015 currently and this often isn’t a problem. Still it is worth clearing out these folders and rebuilding

C:\Users\{user name}\AppData\Local\Microsoft\WebsiteCache

C:\Users\{user name}\AppData\Local\Temp

Delete project setting files

Your user settings for a project are stored in files with the extension *.csproj.user e.g.

BookShelf.MVC.csproj.user

It’s worth deleting all of those and rebuilding to reset user-specific settings. Also if you have these checked into source control then remove them. They are specific to you and shouldn’t be in a shared repository.

Reset Visual Studio on the command line

When Visual Studio appears utterly broken and you are reaching for the uninstall button, this can often help. It takes Visual Studio back to its initial settings; you will need to reapply any custom settings that you have.

Close VS then in a command prompt go to the folder that has the Visual Studio.exe (devenv.exe) i.e.

cd C:\Program Files (x86)\Microsoft Visual Studio {Version code}\Common7\IDE

For VS 2015 it is

cd C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE

Then

devenv /setup

Reopen visual studio

A similar approach can be used with devenv.exe /ResetSettings as detailed here.

Reset Visual Studio through file explorer

If the command line doesn’t work then try resetting Visual Studio via the file system. This often works when the command line doesn’t. Try this when Visual Studio is undergoing a profound collapse particularly when it keeps popping up an alert box detailing errors being written here…

C:\Users\{user name}\AppData\Roaming\Microsoft\VisualStudio\{VS version}\ActivityLog.xml

i.e. for Visual Studio 2013

C:\Users\codebuckets\AppData\Roaming\Microsoft\VisualStudio\12.0\ActivityLog.xml

To resolve for Visual Studio 2013

  1. Close VS
  2. Go to C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\12.0
  3. Rename the folder to C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\12.0.backup
  4. Reopen VS. The folder C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\12.0 will be regenerated.

The process is the same for other versions of Visual Studio except the version number at the end will be different i.e C:\Users\tbrown\AppData\Local\Microsoft\VisualStudio\14.0 for VS 2015.

Disable plugins

Leave this one till last because it’s a pain. Go to Tools -> Extensions and Updates, then disable each plugin one by one.

disable plugin box

It’s a pain because even if it is a plugin that causes it, you have a choice of uninstalling and living without it or contacting the vendor. Clearly if you didn’t buy it then the vendor isn’t going to be interested in helping you. I’ve found PostSharp and Resharper the likely culprits here. The Resharper vendor was very helpful. PostSharp weren’t (because we hadn’t bought it!!).

Bizarre Ones

Be careful what you check in

Checking files into source control that you really shouldn’t can cause difficult to diagnose problems. This often happens to me for continuous builds where the user building has fewer privileges than I’m used to. It does happen locally too. The following files shouldn’t be checked in

  • Bin folders
  • Obj folders
  • Autogenerated xml (i.e. documentation generated during build)
  • Packages folder
  • .csproj.user files

If you have checked them in then delete from your disk, remove from source control and rebuild.

Is your file path too long?

Windows limits the file path to 260 characters. It could be that you have exceeded this and Visual Studio has started to complain. The awkward thing is that it complains in a very oblique way. The error that you see will be something along the lines of…

“obj\Debug\LongFileName.cs” has an invalid name. The item metadata “%(FullPath)” cannot be applied to the path “obj\Debug\LongFileName.cs “. obj\Debug\\LongFileName.cs           ContainingProject.proj            C:\Program Files (x86)\MSBuild\14.0\bin\Microsoft.Common.CurrentVersion.targets

So not obvious. Double click the error and it will take you to the Microsoft.Common.CurrentVersion.targets file in the depths of the framework folder. Less than illuminating.

Once you have diagnosed this then the resolution is easy – move your project to a shorter file path. I’ve found this a problem when switching branches which can have longer file paths than the Main branch.
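If you want to check rather than guess, a throwaway script can flag offending paths. This is just a sketch with made-up paths, not a real diagnosis tool:

```javascript
// Flag any path that busts the classic 260-character Windows MAX_PATH limit.
// The candidate paths here are illustrative.
var MAX_PATH = 260;
var candidates = [
  "C:\\Dev\\Main\\Project\\obj\\Debug\\File.cs",
  "C:\\Dev\\Branches\\" + "VeryLongBranchName\\".repeat(15) + "obj\\Debug\\File.cs"
];
candidates.forEach(function (p) {
  console.log((p.length > MAX_PATH ? "TOO LONG" : "ok") + " (" + p.length + " chars)");
});
```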

If all else fails

If all else fails uninstall Visual Studio and reinstall. But honestly, I have reached this frustrating point, uninstalled, reinstalled, waited hours and the problem persisted. This might not be the cure that you were looking for. Go into a corner and have a good long think before you resort to this one.

So congrats if you have read this far (or commiserations as your Visual Studio install is clearly in a bad way) but hopefully this guide will enable you to have a healthy and happy Visual Studio for years to come.

Useful Links

http://stackoverflow.com/q/1880321/83178
Good stack overflow explanation on why the 260 character filepath limit exists in windows.

https://msdn.microsoft.com/en-us/library/ayds71se.aspx
Official advice from Microsoft about bad references.

http://stackoverflow.com/questions/1247457/difference-between-rebuild-and-clean-build-in-visual-studio
There is a difference between Clean + Build and ReBuild as detailed here.