Why I’m uninstalling Docker Desktop

Unkillable monster

It might be the parting of the ways for me and Docker Desktop for now. There have been good times. There have been bad times. But its relentless demands have become too much for one software developer to bear.

I’ve been noticing that both Visual Studio and SQL Server Management Studio have been particularly sluggish of late. They’ve never been the lightest of software companions but we’ve learnt to rub along together. I know their foibles and they know mine. So what’s gone wrong?

My hard disk is a reasonable 500GB but has been filling up rapidly of late. Could a steady diet of docker images and containers be to blame? But this shouldn’t faze the trusty SSMS warhorse. It knows how to get what it needs (typically most of my RAM). But what is this? It’s a new kid on the block. The Vmmem process is new and seems to be gobbling up CPU. I’m frequently running at 100% utilisation now. No wonder we are all looking a tad jaded.

An unwelcome visitor

I’m not deterred. Surely stopping the docker service and making sure it doesn’t auto restart should be enough. But no – vmmem continues to eat up my CPU. I try to kill it with Task Manager but it proves to be an unkillable monster. Could PowerShell assist, as it has assisted me so often in the past? Not this time. Vmmem continues unbowed and unbroken.

But what is it? What is this ferocious beast? It appears to be a process used by virtual machines that bizarrely still runs when docker is turned off. It can’t be stopped. It can’t be killed. It continues to drain the life out of my computer. This intolerable situation has to come to an end.

And end it does; with a heavy heart, Docker Desktop is uninstalled.

End of the affair

Postscript

It’s a Windows thing. I’ve heard complaints that the Windows implementation of Docker Desktop is particularly heavy, so maybe things would be OK on a Linux box. I think my relationship with Docker Desktop is on a break and hasn’t irretrievably broken down. I’m talking to my local IT shop about an upgrade. When I’m on the latest Intel i9 chip with a bunch more hard disk space and probably more RAM too, then Docker Desktop and I can talk. We can go into couples counselling and see if there is a way to repair our fractured relationship.

80/20 rule for blogs

There is a pithy saying I’m fond of

The first 80% of a product’s features will be developed in 20% of the time
The last 20% of a product’s features will be developed in 80% of the time

I think there’s truth in it. I’ve just noticed my blog is following the same rule

80% of my traffic comes from 20% of the posts.
20% of the traffic hits the remaining long tail of 80% of the posts

The bugger of it is that I have no idea if a blog post will be in the top 20% of posts when I’m writing it. Conversely, I’ve a shrewd idea when I post if it’s going to be in the bottom 80% of hyper-unviewed posts. This post will definitely be in the stinky 80% pile. It’s all fairly immaterial though – most of my posts are just for my own personal amusement anyway. And I’m easily amused.

Publishing SQL Server database in Docker

To me docker containers have an ethereal, almost unreal quality to them. Do they really exist? Where are they? What do they look like? To convince myself of their reality I want to use SQL Server Management Studio on the host to connect to a SQL instance in a running container. Along the way we shall

– bring up the container
– access container command shell
– find out where it is on the network
– connect to it from the host machine
– compare static and dynamically assigned container IPs

Environment

I’m on Windows 10 and I’m going to be working with Windows containers.

docker-compose.yml

For this I don’t need a Dockerfile, as I’m just going to run a library image directly, so I’m going straight to a docker compose file, docker-compose.yml

version: '3.2'

services:
  db:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "49401:1433"
    environment:
      - sa_password=Secret12345
      - ACCEPT_EULA=Y      
    container_name: sqlserver_db1

networks:
  default:
    external:
      name: nat


So, to break it down

image: microsoft/mssql-server-windows-developer

Create the container from the image microsoft/mssql-server-windows-developer. If it hasn’t been downloaded already, it will be when we bring up the container.

ports:
- "49401:1433"

We are running on port 1433 internally to the container, and port 49401 will be bound on the host. I’m not running it on 1433:1433 because I’ve already got SQL Server on my host, so that port is taken. Attempts to bind the host to a port that is already taken generate odd errors.

environment:
- sa_password=Secret12345
- ACCEPT_EULA=Y

We set the system admin password and accept the licence agreement.

container_name: sqlserver_db1

and the container is given a name rather than one assigned by docker. It just makes the rest of the examples easier. It’s not strictly needed.

We haven’t specified a subnet or IP address, so the networks section tells docker to use its default nat network.

Bringing the container up

To bring it up run

docker-compose up

which runs and brings up the docker container

If we jump onto another cmd window and run

docker container ls

We get the list of running containers thus
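(The container ID and timings below are illustrative.)

CONTAINER ID   IMAGE                                      COMMAND                CREATED         STATUS         PORTS                     NAMES
d4c2a1b8e3f7   microsoft/mssql-server-windows-developer   "powershell -Comma…"   2 minutes ago   Up 2 minutes   0.0.0.0:49401->1433/tcp   sqlserver_db1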

so we can see our sql box in its container, in the platonic realm of container space that it is currently inhabiting. It’s called sqlserver_db1, as specified in our docker-compose.yml file.

Getting an interactive shell on our container

The container still has an air of unreality to it. To start to resolve it into the real world, I want to be able to run commands on it. To do this we need to bring up an interactive shell. I can bring up a cmd window

docker exec -ti sqlserver_db1 cmd

or a powershell window for more options

docker exec -ti sqlserver_db1 powershell

I’m going with PowerShell. Once there I can start to run whatever commands I see fit

echo 'Hello world'
hostname

and so on.

ping google.com

is a good one to run to check if the container has network connectivity to the outside world. Mine didn’t and it took a while to work out why.

Where is it on the network

It’s not immediately apparent where the container is on the network. We’ve let docker dynamically assign the IP. To find it we go to our interactive shell and run

ipconfig

which will tell us the IPv4 address
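Trimmed down, the output looks something like this (the adapter name, mask and gateway here are illustrative; the address is the one mine was assigned):

Ethernet adapter vEthernet (Container NIC):

   IPv4 Address. . . . . . . . . . . : 192.168.154.121
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
   Default Gateway . . . . . . . . . : 192.168.144.1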

or outside of an interactive shell (from the host) we could use

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" sqlserver_db1

which will also tell us the IPv4 address.

So right now our container is on 192.168.154.121 and we can ping it there. Going back to docker-compose.yml we also specify ports.

ports:
- "49401:1433"

so 1433 is the one that the docker container uses and we can telnet to it from the host to prove it’s open

telnet 192.168.154.121 1433

which it is. Also we can telnet to the host on the other port

telnet 127.0.0.1 49401

which is the same place but via the port that is bound to the host.

Connecting to the container with SSMS

I like to see things to fully accept their existence. To convince my inner self, let’s connect to the container with SQL Server Management Studio.

We can connect to the container directly on its IPv4 address (192.168.154.121)

Or via the host by specifying the loopback IP and the port, as we have bound the host to a port other than the default for SQL Server (1433).

Note that to connect via SSMS to a specific port, the IP and port are separated by a comma i.e.

127.0.0.1,49401

Now we can see our container database from the host. We are convinced. It exists.
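If you’d rather have a command-line check as well, sqlcmd does the same job (assuming sqlcmd is installed on the host):

sqlcmd -S 127.0.0.1,49401 -U sa -P Secret12345 -Q "SELECT @@SERVERNAME"

A result back from that and the containerised instance really is answering on the bound port.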

Connecting with a static IP

We can also assign the database container a static IP address by specifying a subnet and an IP address for our container in the docker-compose.yml file thus

version: '3.2'

services:
  db:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "49401:1433"
    environment:
      - sa_password=Secret12345
      - ACCEPT_EULA=Y      
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
    container_name: sqlserver_db1
    
networks:
  vpcbr:
    driver: nat
    ipam:
     config:
       - subnet: 10.5.0.0/16      

The container will now always be on 10.5.0.5 and we can connect with SSMS on 10.5.0.5 without worrying about any of the intervening steps. I guess that’s the easy way.

Demo Code

In case anyone needs them, the docker files for the static and dynamic IP implementations are on my GitHub site here.

Useful Links

https://docs.docker.com/compose/compose-file/
Full specification of docker compose

The process cannot access the file error with Docker Compose

I came across this error the other day when I was working on windows with Docker

 Cannot start service db: failed to create endpoint dotnet-album-viewer_db_1 on network nat: failed during hnsCallRawResponse: hnsCall failed in Win32: The process cannot access the file because it is being used by another process. (0x20)

To say it’s misleading would be charitable – I was thoroughly confused by it. I found the answer, but it was a bit scattered over the internet, so I thought I would bring it together.

The error

Consider this docker-compose.yml file

version: '3.2'

services:
  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "3306:4000"

  app:
    image: dockersamples/dotnet-album-viewer
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "80:80"
    environment:
      - "Data:Provider=MySQL"
      - "Data:ConnectionString=Server=db;Port=4000;Database=AlbumViewer;User=root;SslMode=None"      
    depends_on:
      - db

networks:
  default:
    external:
      name: nat

We are running a database container (db) and an application container (app). When we run

docker-compose up -d

to bring up the containers we get the error

Cannot start service db: failed to create endpoint dotnet-album-viewer_db_1 on network nat: failed during hnsCallRawResponse: hnsCall failed in Win32: The process cannot access the file because it is being used by another process. (0x20)

It makes it seem like the container is locked in some way, perhaps two instances running concurrently. That’s not the issue.

The cause

The problem is that we are running the containers on ports that have been taken by other processes. So, in my case it is both the database container and the application server that are at fault

Database container

  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "3306:4000"

The database container listens on port 4000 on the network internal to the containers. The first port, 3306, is what is exposed to the host machine. It’s port 3306 that is already taken by another process. To find which application is running on that port, use netstat in PowerShell i.e.

netstat -aon | findstr 3306

which gives
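(Output below is illustrative; the PID will vary.)

  TCP    0.0.0.0:3306           0.0.0.0:0              LISTENING       4724
  TCP    [::]:3306              [::]:0                 LISTENING       4724

Feeding that PID into tasklist names the process that owns the port:

tasklist /fi "pid eq 4724"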

MySQL is on port 3306, hence I can’t run a container on that port and I get the weird ‘process cannot access the file’ error. That does make sense, as the image dockersamples/tidb:nanoserver-sac2016 is a MySQL-compatible database – so it defaults to the same port.

Application container

The application server is also targeting an in-use port

  app:
    image: dockersamples/dotnet-album-viewer
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "80:80"

The application container will use port 80 internally and also port 80 externally. I don’t need to go to the trouble of running netstat. I know I’ve got a webserver (IIS) on that machine with a default website running on port 80. I should have spotted that straight away and been a bit less baffled.

The resolution

I can either kill the processes that are taking that port or change the external port that the docker containers are going to use. I don’t really want to kill MySQL or IIS so I need 2 new ports.

I tend to use ports above 49200 as they are dynamic ports and are less likely to be in use (though I tend to use them up myself, so I still get clashes). Specifically

Well-known ports range from 0 through 1023.
Registered ports are 1024 to 49151.
Dynamic ports (also called private ports) are 49152 to 65535.

Once you’ve identified a port you can check if it’s in use by using netstat again

netstat -aon | findstr 49301

And see if you get a result. So, the final resolution is

version: '3.2'

services:
  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "49301:4000"

  app:
    image: dockersamples/dotnet-album-viewer
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "49250:80"
    environment:
      - "Data:Provider=MySQL"
      - "Data:ConnectionString=Server=db;Port=4000;Database=AlbumViewer;User=root;SslMode=None"      
    depends_on:
      - db

networks:
  default:
    external:
      name: nat

So db and app external ports are now in the dynamic range and when I run

docker-compose up -d

the error is gone, and my containers and the application come up. Lovely.

Useful links

The examples in this post come from here https://github.com/docker/labs/blob/master/windows/windows-containers/WindowsContainers.md

Useful post on how to check if your ports are in use. Further detail and explanation of what I’ve posted here.

List of well known ports https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers. Searching for port 3306 on Wikipedia would have shown me that it’s in use by MySQL. I guess a lot of people would have just known this. I didn’t.

Stack overflow answer as a reminder of which port in the docker-compose file is the external port and which is the one used internally by docker. Spoiler alert: the first one is the externally exposed port and the second one is used by the applications running inside the docker containers. It’s the first port that was causing us the problems.

Dynamically Loading Assemblies for Dependency Injection in .Net Core

We want to get all our assemblies registered for Dependency Injection in a .Net Core 3 application by scanning the bin folder and registering all the classes which implement a specific interface. Bit of a mouthful, so a simple (and classic) example is

public class Foo: IFoo
{
}

public interface IFoo : ILifecycle
{
}

public class Bar: IBar
{
}

public interface IBar: ILifecycle
{
}

We could register each object by its own interface i.e.

services.AddTransient<IFoo, Foo>();
services.AddTransient<IBar, Bar>();

That’s fine for 2 but for hundreds it will be a pain. We want to register the classes based on an interface that they all implement – in this case ILifecycle. Also, we want to register all the assemblies in a given folder, typically the bin folder. This is what didn’t work in .Net Core for me and hence the post – no answer on the internet (that I could find) so I’m filling the internet gap.

Implementation

We will use Scrutor as our DI framework. It’s an extension of the native .Net DI framework with extensions for scanning and registering assemblies en masse.

Incorrect Implementation – Assembly.LoadFile

Using an MVC application and within Startup

public void ConfigureServices(IServiceCollection services)
{
    //..lots of other code

    var assemblies = new List<Assembly>();
    foreach (string assemblyPath in Directory.GetFiles(System.AppDomain.CurrentDomain.BaseDirectory, "*.dll", SearchOption.AllDirectories))
    {
        var assembly = Assembly.LoadFile(assemblyPath);
        assemblies.Add(assembly);
    }

    //.. register
    services.Scan(scan => scan
        .FromAssemblies(assemblies)
        .AddClasses(classes => classes.AssignableTo<ILifecycle>(), publicOnly: false)
        .AsImplementedInterfaces()
        .WithTransientLifetime());

    //.. test
    var provider = services.BuildServiceProvider();
    var foo = provider.GetService<IFoo>(); //.. returns null
    var bar = provider.GetService<IBar>(); //.. returns null
}

Frustratingly this doesn’t work and it’s not obvious why. The test returns null Foo and Bar objects even though we have registered them in the correct way i.e. by scanning for ILifecycle and registering the objects against their matching interfaces. If we debug and look at the loaded assemblies everything looks fine. The assemblies have loaded and I can use reflection to poke into them further and satisfy myself that they are indeed the correct assemblies. Except they aren’t.

Correct Implementation – AssemblyLoadContext

public void ConfigureServices(IServiceCollection services)
{
    //..lots of other code

    var assemblies = new List<Assembly>();
    foreach (string assemblyPath in Directory.GetFiles(System.AppDomain.CurrentDomain.BaseDirectory, "*.dll", SearchOption.AllDirectories))
    {
        var assembly = System.Runtime.Loader.AssemblyLoadContext.Default.LoadFromAssemblyPath(assemblyPath);
        assemblies.Add(assembly);
    }

    //.. register
    services.Scan(scan => scan
        .FromAssemblies(assemblies)
        .AddClasses(classes => classes.AssignableTo<ILifecycle>(), publicOnly: false)
        .AsImplementedInterfaces()
        .WithTransientLifetime());

    //.. test
    var provider = services.BuildServiceProvider();
    var foo = provider.GetService<IFoo>(); //.. returns a Foo
    var bar = provider.GetService<IBar>(); //.. returns a Bar
}

Very similar except we have swapped

Assembly.LoadFile(assemblyPath);

for

System.Runtime.Loader.AssemblyLoadContext.Default.LoadFromAssemblyPath(assemblyPath);

Now it works and the DI framework can provide us with the correct objects based on their matching interface. All is good.

Explanation

I don’t really have one, so if anyone can post a comment explaining this then that would be very interesting. I do know that .Net Core handles its dependencies differently to .Net Framework i.e. it’s all NuGet packages, so it makes sense that we need another way to correctly load the assemblies with their dependencies. What’s puzzling is that Assembly.LoadFile is still there and appears to work – except it doesn’t for DI.
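My best guess at the mechanism, sketched below, is type identity. Assembly.LoadFile loads the DLL into its own isolated load context, giving you a second copy of an assembly the application has already loaded, and types from the two copies never unify. The path and type names here are hypothetical:

//.. hypothetical sketch – assumes Foo and IFoo live together in MyLib.dll,
//.. which the application already references and so has loaded normally
var path = Path.Combine(System.AppDomain.CurrentDomain.BaseDirectory, "MyLib.dll");
var viaLoadFile = Assembly.LoadFile(path);
var viaDefault = typeof(IFoo).Assembly;

//.. two distinct Assembly objects for the same file on disk
Console.WriteLine(ReferenceEquals(viaLoadFile, viaDefault)); //.. false

//.. so the IFoo implemented by classes in the LoadFile copy is a different
//.. Type from the typeof(IFoo) that GetService<IFoo>() asks the container
//.. for, and the registration never matches the request
var loadedFoo = viaLoadFile.GetType("MyLib.Foo");
Console.WriteLine(typeof(IFoo).IsAssignableFrom(loadedFoo)); //.. false

AssemblyLoadContext.Default.LoadFromAssemblyPath hands back the copy that is already loaded in the default context, which, as far as I can tell, is why the types line up and the registrations resolve.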

Other DI Frameworks

I wasn’t exhaustive but I tried this with StructureMap and Autofac and got the same issue. So, it’s not limited to Scrutor. You’d face these problems with other frameworks as well.

Useful Links

Stack Overflow question on how to load assemblies located in a folder in a .net core console app. It’s where I found the proper way to load assemblies.

GitHub for Scrutor. Works well (once you get past the assembly loading issue).

Converting .Net Standard Libraries to .Net Core

I was looking for a quick answer to this on the internet and couldn’t find it. So here it is

Open your .Net Standard csproj file in notepad or similar text editor. You’ll see this structure

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\..\..\Framework\AnotherLibrary.csproj" />
  </ItemGroup>
</Project>

Change the target framework to the desired version of core

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\..\..\Framework\AnotherLibrary.csproj" />
  </ItemGroup>
</Project>

Save it
Reload the project in visual studio or other IDE
Done

(I know there will be all kinds of other complexities to this but I’ve just got a bunch of standard libraries that need to reference a core project and can’t. So, they all need a rapid upgrade to .Net Core – like so).

70-480 Programming in HTML5 with JavaScript and CSS3 – Study Notes

I recently took MS exam 70-480 (and passed and was surprisingly delighted). So before the good feelings dissolve away I thought I would publish a few notes and useful links. I’m not going to be comprehensive but I’m just going to highlight areas that came up for me that I needed deeper reading on.

Generally

The exam does feel old hat, especially the JavaScript section. It is frustrating to have to remember Microsoft-specific syntax (AddEventHandler!!!). But I’ve seen worse, and there is stuff that is still interesting and relevant in there. The good thing is that there are techniques and syntax that crop up over and over and can really help you winnow down the answers on the exam.

Basic stuff to know

The book

I’m old school and still prefer to start with the written word on bits of paper. The standard book is this …

People are really negative about exam ref books generally and this one gets the obligatory poor reviews (and some good ones in fairness). But do you know what – it’s fine. It’s not that readable, it’s got a lot of gaps and some of it is out of date now (and maybe was never correct in the first place). But it highlights all the syllabus areas for you, and then it is up to you to go on and read more. I think people get upset because the book alone isn’t enough to pass the exam. So long as you appreciate that, then it’s fine.

Syllabus

https://www.microsoft.com/en-us/learning/exam-70-480.aspx

The key thing is to go through it all with a fine-tooth comb (or hair care implement of choice). So, biting our lips and going through each section….

Section 1: Implement and Manipulate Document Structures and Objects

JavaScript

The important bits here are understanding the JavaScript this keyword (notoriously confusing) and a tight grasp of JavaScript scope.

HTML5

  • Semantic elements. If I’m honest, some of the questions have a ‘guess what I’m thinking’ quality to them. Get to grips with the Microsoft party line on when these should be used. Just suck it up.
  • Geolocation API
  • App Cache
  • Canvas appears a lot but there is a lot of stuff out there for instance here and here among lots of other places.
  • SVG. It’s good to know when to use SVG and when to use the canvas elements. For instance SVG scales inherently.

Section 2: Implement Program Flow

JavaScript Arrays

A good grasp of JavaScript control structures is vital, so: array iteration (more detail here), and also associative arrays and why they iterate differently.

Web sockets

Very high level only. How to make calls and pass events backwards and forwards. I read this and, as interesting as it was, it is way too much. Spend less time than I did on this.

Ajax

Conversely, Ajax comes up a lot, and in detail.

Web Workers

Web workers also come up.

Section 3: Access and Secure data

Reg Ex

Regular expressions crop up here, and how to call them from JavaScript. Which methods belong to the regex object and which belong to the string class is useful to know here.
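As an aide-memoire, a minimal sketch of which methods live where:

var re = /\d+/;

//.. methods on the RegExp object
re.test("abc123");           //.. true
re.exec("abc123");           //.. ["123"]

//.. regex-accepting methods on the String object
"abc123".match(re);          //.. ["123"]
"abc123".search(re);         //.. 3
"abc123".replace(re, "#");   //.. "abc#"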

jQuery Forms

The jQuery forms API. Pay attention to the differences between the serialize() and serializeArray() methods and be secure in your knowledge of how to submit forms through jQuery.

JavaScript Encoding

JavaScript encoding and decoding of URIs and query strings. Comes up bizarrely often, so one to really know. Know which method gives which output and which one to use to ensure that the query string is encoded.
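For example, the distinction that matters most (a minimal sketch):

encodeURI("https://example.com/a b?x=1&y=2");
//.. "https://example.com/a%20b?x=1&y=2" – leaves ?, & and = alone

encodeURIComponent("1 + 2 = 3");
//.. "1%20%2B%202%20%3D%203" – encodes everything, so it is the one to use
//.. for individual query string values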

HTML5 input types

HTML5 input types. Really useful to memorise them all and know all the possible attributes i.e.

  • step
  • min
  • max
  • title
  • pattern etc…

In particular, know the attributes that can be used to limit the data input format e.g. type=”number”.
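For example, a made-up fragment just to see the attributes side by side:

<input type="number" min="0" max="10" step="2" title="An even number from 0 to 10">
<input type="text" pattern="[A-Za-z]{3}" title="Exactly three letters">
<input type="range" min="1" max="100" step="1">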

Section 4: Use CSS3 in Applications

I wanted to do this exam primarily to improve my CSS, which is my weakest web dev side, so I spent a bit of time on this. I’m sure you will need less.

Basics

The cascade

Layout

Layout comes up a lot.

Text

Fancy stuff

Generally with the animation/transitions side of things, I found it most useful to look at worked examples of some standard (though clever to me) forms of animation. Card flips and loading spinners are typical stuff.

Never saw

I never saw anything on the drag and drop API. It had a decent-sized section in the Microsoft exam ref book but I didn’t see it on the exam or the revision questions. Don’t blame me if you get 7 questions about it though.

Good luck

And of course best of luck for the exam. As painful as it can be, it does feel good to pass a Microsoft exam. Take it from someone who has passed onto the other side.

Extending TypeScript to serialise Map objects to JSON

The collection objects offered by ES6 such as Map and Set are indeed a marvel. One notable downside to them though is that they don’t serialise to JSON. For example

JSON.stringify(MyMapObject);

Results in

{}

Always – irrespective of what is in the map collection. Not great and a pain when we want to stash the map object in a cookie or local storage to be used later. It just doesn’t do it out of the box. Here I’m going to extend TypeScript to do this so I will never need to think about it again.

Serialising the Map object

There are a number of different ways to get the Map object to serialise and deserialise in and out of JSON. This one relies on the contents of the Map object being serialisable. If they are then we can do the following

Serialise Map into JSON

function toJson(map) {
    return JSON.stringify(Array.from(map.entries()));
}

Deserialise JSON into Map

function fromJson(jsonStr) {
    return new Map(JSON.parse(jsonStr));
}

This works in JavaScript (ES6) and TypeScript – which after all is a superset of JavaScript.

Serialise the Map Object

It feels a bit clunky to have these methods hanging around in some utility library. What I want to do is extend the TypeScript Map class to have these. To do this go to a TypeScript definition file (*.d.ts) and enter the following

interface Map<K, V> {
    toJson(): string;
}

This will extend our Map object to have a toJson method. In a suitable library file, place the implementation and associate it with the prototype object of the Map object

Map.prototype.toJson = function toJson() {
    return JSON.stringify(Array.from(this.entries()));
}

It’s not hugely intuitive why we need to extend the prototype object at this point. From the Mozilla documentation

All Map instances inherit from Map.prototype.

So if we want our new method on an instance of the Map object we need to extend the prototype object – because Map instances all inherit from this.

This is then called thus

let jsonString = myMap.toJson();

Which can then be stashed away in local storage or the like.

Deserialise a JSON string into a Map object

We are going to do something similar to implement the deserialisation part. It’s not quite the same though. If we are serialising a Map object it makes sense to have the toJson method associated with an instance of a Map object. If we are deserialising into a new Map object we want the method associated with the class – like a static method in C#. The following illustrates the difference

let myMap: Map<string, string> = new Map<string, string>();
myMap.set("key1", "value1");
myMap.set("key2", "value2");
myMap.set("key3", "value3");

//.. serialise method associated with the instance of the class
let jsonString = myMap.toJson();

//.. deserialise method associated with the class
let restoredMap: Map<string, string> = Map.fromJson(jsonString);

Again go to a TypeScript definition file (*.d.ts) and enter the following

interface MapConstructor {
    fromJson(jsonStr: string);
}

Note – this time I am extending the MapConstructor interface rather than the Map<K,V> interface. In TypeScript the global Map is typed as MapConstructor – the type of the constructor function that creates Map instances. So by extending MapConstructor I am associating the method with the thing that created the instance of the map object. I am associating the new method with the class and not the instance.

Now associate our implementation directly with the Map object which itself implements MapConstructor

Map.fromJson = function fromJson(jsonStr) {
    return new Map(JSON.parse(jsonStr));
}

Which we call thus

let restoredMap: Map<string, string> = Map.fromJson(jsonString);

That’s it – the Map object now has toJson and fromJson methods which handle our JSON conversion. The Set object suffers from the same limitations and can be extended in the same way.
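For completeness, a sketch of the Set version following the same pattern (untested here, but it should fall out identically). In the definition file:

interface Set<T> {
    toJson(): string;
}

interface SetConstructor {
    fromJson(jsonStr: string);
}

And the implementations:

Set.prototype.toJson = function toJson() {
    return JSON.stringify(Array.from(this.values()));
}

Set.fromJson = function fromJson(jsonStr) {
    return new Set(JSON.parse(jsonStr));
}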

Useful Links

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
Map and Set objects are part of the ES6 standard. TypeScript offers a strongly typed version of these collections

https://codepunk.io/extending-strings-and-other-data-types-in-typescript/
More detail about extending built in types in TypeScript

https://stackoverflow.com/
The initial code to serialise and deserialise the Map object came from a stack overflow question i.e.

function toJson(map) {
    return JSON.stringify(Array.from(map.entries()));
}

function fromJson(jsonStr) {
    return new Map(JSON.parse(jsonStr));
}

but I’ve forgotten which one – if I find it I will pop the reference here in the interest of full disclosure and not ripping off someone else’s work and claiming it for my own.

Not as irreplaceable as I thought

Like all software developers, I like to think of myself as being irreplaceable. Clearly no-one on the entire planet combines my depth of technical knowledge, my amazing familiarity with the business domain and my legendary humility. I am truly a unique gift to any and all employers.

So I was particularly disappointed this week when I was replaced by this Playmobil figure at the family dinner table. I was out that evening so my daughter placed this unkempt-looking plastic figure in my chair. She felt it was an adequate or even superior replacement. It’s uncertain whether my wife noticed the difference. Certainly the level of conversation was improved that evening.

So, not as irreplaceable as I thought. I once replaced a colleague with a blow-up sheep and sent him the photo. I thought it was very funny at the time – I realise now that it wasn’t as funny as I thought. I really hope my employer doesn’t realise how easily I can be replaced by a cheap child’s toy.

Selecting from a column with possible NULLs in T-SQL

I write blog posts for different reasons. Sometimes it’s because things aren’t well documented elsewhere, sometimes it’s new and interesting and sometimes it’s purely for my own amusement. Occasionally I write things as a kind of penance – things I should know about but bizarrely forgot or misunderstood. This is one of those. It’s how to select from a column with possible NULLs and not omit expected rows from the result set. This has been documented at length elsewhere, but that’s not the point of penance.

Test Cases – equals operator

Working through some examples when comparing NULLs with the equals operator.

NULL is equal to ‘test’

SELECT CASE WHEN NULL = 'test' THEN 'true' ELSE 'false' END

False: as expected

NULL is not equal to ‘test’

SELECT CASE WHEN NULL <> 'test' THEN 'true' ELSE 'false' END

False: less expected

NULL is not equal to NULL

SELECT CASE WHEN NULL <> NULL THEN 'true' ELSE 'false' END

False: more expected

NULL is equal to NULL

SELECT CASE WHEN NULL = NULL THEN 'true' ELSE 'false' END

False: odd again

NULL isn’t equal to anything including itself when using the equals operator. I know this really and we see the same kind of thing in JavaScript.

Test cases – IS NULL

Of course it is well known that equals doesn’t do the job with NULLs. The IS NULL operator is correct

Null is null

SELECT CASE WHEN NULL IS NULL THEN 'true' ELSE 'false' END

True.

Null is not null

SELECT CASE WHEN NULL IS NOT NULL THEN 'true' ELSE 'false' END

False.

All as expected.

Test case – selecting from a column with possible NULLs

So this is what tripped me up – when you have a column that is nullable and you try to filter on it as a string.

DECLARE @TestTable TABLE (Id INT NOT NULL, Comment VARCHAR(100))
INSERT INTO @TestTable (Id, Comment) VALUES (1, 'Test')
INSERT INTO @TestTable (Id, Comment) VALUES (2, 'Other Value')
INSERT INTO @TestTable (Id, Comment) VALUES (3,  NULL)

SELECT * FROM @TestTable WHERE Comment <> 'Test'

I naively thought that there are 2 rows that aren’t ‘Test’ so we will get those. But since NULL is neither equal to ‘Test’ nor not equal to ‘Test’, we actually get one less row than expected
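Just the one row comes back (result layout abbreviated):

Id          Comment
----------- ------------
2           Other Value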

The correct select is

DECLARE @TestTable TABLE (Id INT NOT NULL, Comment VARCHAR(100))
INSERT INTO @TestTable (Id, Comment) VALUES (1, 'Test')
INSERT INTO @TestTable (Id, Comment) VALUES (2, 'Other Value')
INSERT INTO @TestTable (Id, Comment) VALUES (3,  NULL)

SELECT * FROM @TestTable WHERE COALESCE(Comment, '') <> 'Test'

Which converts the NULL to an empty string, so it then works.
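Now both remaining rows come back:

Id          Comment
----------- ------------
2           Other Value
3           NULL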

SELECT * FROM @TestTable WHERE ISNULL(Comment, '') <> 'Test'

Also works and is the same. Penitential blog post over.

Useful Links

https://stackoverflow.com/questions/18828641/sql-difference-between-coalesce-and-isnull
Comparison of ISNULL and COALESCE

http://adripofjavascript.com/blog/drips/equals-equals-null-in-javascript.html
It’s not just T-SQL. Special handling around NULLs in JavaScript