Blocked by CORS policy? Unblocking in .Net Core 3

A while ago I wrote a post about hosting Angular under an existing IIS website. Quite a few people seem to have found it useful, which is good. My motivation was to avoid CORS policy errors i.e. blocked JavaScript requests to a different domain. I bypassed them completely by hosting the Angular client and the API under the same domain – bit of a cheat really. At the time I wrote

This is to avoid Cross Origin Scripting Issues in my own environment. […] other ways to do this

Never worked
Were too invasive to implement on the API side
Did work then frustratingly stopped working

I don’t know why I was struggling so much. It turns out to be pretty straightforward to have a CORS policy that lets anything through. I suspect it’s got a lot easier in .Net Core 3. Perhaps I just missed it before.

Cross-origin resource sharing (CORS)

Just to define terms – CORS is a way to enable one website to access resources on another domain. Often requests are blocked if they are from a different host (same-origin policy). It’s typically seen when JavaScript clients (Angular, React etc..) make a request to an API on a different host using XMLHttpRequest. When this happens, we see something like

blocked by CORS policy

In this case we need a suitable CORS Policy.

Enabling CORS for all hosts in .Net Core

Here I’m going to create a very relaxed CORS policy; it’s going to let anything through. In Startup.cs file ConfigureServices add

public void ConfigureServices(IServiceCollection services)
{
    //.. configure other services
    
    services.AddCors(options =>
    {
        options.AddPolicy("AllOrigins",
            builder =>
            {
                builder.AllowAnyHeader().AllowAnyOrigin().AllowAnyMethod();
            });
    });
}

and wire it up in the Configure method

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    //.. more code

    // needs to come after UseRouting and before UseAuthorization/UseEndpoints
    app.UseCors("AllOrigins");

}

and that’s it. The error goes away and any JavaScript client can make a request against this API. This is good for my development projects, but if this were to go into production you’d want a more finely tuned CORS policy. Same-origin policy is implemented in browsers for a reason, not just to frustrate my demo projects.
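If I did want to tighten it up, the same AddCors call takes a locked-down policy instead. A sketch – the origin and verbs here are placeholders for whatever your real client needs:

```csharp
services.AddCors(options =>
{
    options.AddPolicy("KnownOrigins", builder =>
    {
        // only a named client origin, and only the verbs the API serves
        builder.WithOrigins("https://my-client.example.com")
               .WithMethods("GET", "POST")
               .AllowAnyHeader();
    });
});
```

Then wire up "KnownOrigins" in Configure instead of the catch-all policy.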

Demo code

As ever, the full code is on my github site here.

Useful Links

https://en.wikipedia.org/wiki/Cross-origin_resource_sharing#Headers
Wikipedia has a good page on Cross-origin resource sharing

https://code.google.com/archive/p/browsersec/wikis/Part2.wiki#Same-origin_policy.
I found the same-origin policy browser security notes from Google interesting. Same-origin policy is a bit of an umbrella term for some related security concerns. The one causing the problem here is the same-origin policy for XMLHttpRequest, but there are same-origin policies for DOM access, cookies and Java (as well as Silverlight and Flash – remember those guys anyone?)

Why I’m uninstalling Docker Desktop

Unkillable monster

It might be the parting of the ways for me and Docker Desktop for now. There have been good times. There have been bad times. But its relentless demands have become too much for one software developer to bear.

I’ve been noticing that both Visual Studio and SQL Server Management Studio have been particularly sluggish of late. They’ve never been the lightest of software companions but we’ve learnt to rub along together. I know their foibles and they know mine. So what’s gone wrong?

My hard disk is a reasonable 500GB but has been filling up rapidly of late. Could a steady diet of docker images and containers be to blame? But this shouldn’t faze the trusty SSMS warhorse. It knows how to get what it needs (typically most of my RAM). But what is this? It’s a new kid on the block. The Vmmem process is new and seems to be gobbling up CPU. I’m frequently running at 100% utilisation now. No wonder we are all looking a tad jaded.

An unwelcome visitor

I’m not deterred. Surely stopping the docker service and making sure it doesn’t auto restart should be enough. But no – vmmem continues to eat up my CPU. I try to kill it with task manager but it proves to be an unkillable monster. Could powershell assist, as it has assisted me so often in the past? Not this time. Vmmem continues unbowed and unbroken.

But what is it? What is this ferocious beast? It appears to be a process used by virtual machines that bizarrely still runs when docker is turned off. It can’t be stopped. It can’t be killed. It continues to drain the life out of my computer. This intolerable situation has to come to an end.

And end it does; with a heavy heart, Docker Desktop is uninstalled.

End of the affair

Postscript

It’s a windows thing. I’ve heard complaints that the windows implementation of docker desktop is particularly heavy, so maybe things would be OK on a Linux box. I think my relationship with Docker Desktop is on a break and hasn’t irretrievably broken down. I’m talking to my local IT shop about an upgrade. When I’m on the latest Intel i9 chip with a bunch more hard disk space and probably more RAM too, then docker desktop and I can talk. We can go into couples counselling and see if there is a way to repair our fractured relationship.

80/20 rule for blogs

There is a pithy saying I’m fond of

The first 80% of a product’s features will be developed in 20% of the time
The last 20% of a product’s features will be developed in 80% of the time

I think there is truth in it. I’ve just noticed my blog is following the same rule

80% of my traffic comes from 20% of the posts.
20% of the traffic hits the remaining long tail of 80% of the posts

The bugger of it is that I have no idea if a blog post will be in the top 20% of posts when I’m writing it. Conversely, I’ve a shrewd idea when I post if it is going to be in the bottom 80% of hyper-unviewed posts. This post will definitely be in the stinky 80% pile. It’s all fairly immaterial though – most of my posts are just for my own personal amusement anyway. And I’m easily amused.

Accessing Docker container SQL Server with SSMS from host

To me docker containers have an ethereal, almost unreal quality to them. Do they really exist? Where are they? What do they look like? To convince myself of their reality I want to use SQL Server Management Studio on the host to connect to a SQL instance in a running container. Along the way we shall

– bring up the container
– access container command shell
– find out where it is on the network
– connect to it from the host machine
– compare static and dynamically assigned container IPs

Environment

I’m on windows 10 and I’m going to be working with windows containers.

docker-compose.yml

For this I don’t need a Dockerfile as I’m just going to run a library image directly, so I’m going straight to a docker compose file, docker-compose.yml
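It looks something like this – a sketch matching the breakdown that follows; the sa password is a placeholder:

```yaml
version: "3.7"

services:
  db1:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "49401:1433"        # host:container
    environment:
      SA_PASSWORD: "YourStrong!Passw0rd"   # placeholder
      ACCEPT_EULA: "Y"
    container_name: sqlserver_db1

networks:
  default:
    external:
      name: nat             # the default windows container network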

so to break it down

Create the container from the image microsoft/mssql-server-windows-developer. If it hasn’t been downloaded then it will be when we bring up the container

we are running on port 1433 internally to the container and port 49401 will be bound to the host. I’m not running it on 1433:1433 because I’ve already got SQL Server on my host so that port is taken. Attempts to bind the host to a port that is already taken generate odd errors.

we set the system admin password and accept the licence agreement

and the container is given a name rather than one assigned by docker. It just makes the rest of the examples easier. It’s not needed.

We haven’t specified a subnet or IP address so the network section tells docker to use its default network connection

Bringing the container up

To bring it up run
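From the folder holding docker-compose.yml:

```
docker-compose up
```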

which runs and brings up the docker container

If we jump onto another cmd window and run
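The standard listing command:

```
docker ps
```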

We get the list of running containers thus

so we can see our SQL box in its container, in the platonic realm of container space that it is currently inhabiting. It’s called sqlserver_db1 as specified in our docker-compose.yml file

Getting an interactive shell on our container

The container still has an air of unreality to it. To start to resolve it into the real world I want to be able to run commands on it. To do this we need to bring up an interactive shell. I can bring up a cmd window

or a powershell window for more options
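Both come from docker exec with the container name from the compose file:

```
docker exec -it sqlserver_db1 cmd
docker exec -it sqlserver_db1 powershell
```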

I’m doing the powershell. Once there I can start to run whatever commands I see fit

and so on.

Pinging an external address is a good one to run to check if the container has network connectivity to the outside world. Mine didn’t and it took a while to work out why.
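For example, against a well-known external address:

```
ping 8.8.8.8
```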

Where is it on the network

It’s not immediately apparent where the container is on the network. We’ve let docker dynamically assign the IP. To find it we go to our interactive shell and run
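The usual windows command works inside the container:

```
ipconfig
```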

which will tell us the IP4 address

or outside of an interactive shell (from the host) we could use
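docker inspect can pull the address out with a format string – a sketch using our container name:

```
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" sqlserver_db1
```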

which will tell us the IP4 address also.

So right now our container is on 192.168.154.121 and we can ping it there. Going back to docker-compose.yml we also specify ports.

so 1433 is the one that the docker container uses and we can telnet to it from the host to prove it’s open
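For example, using the address above:

```
telnet 192.168.154.121 1433
```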

which it is. Also we can telnet to the host on the other port
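This time via the loopback address and the bound host port:

```
telnet 127.0.0.1 49401
```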

which is the same place but via the port that is bound to the host.

Connecting to the container with SSMS

I like to see things to fully accept their existence. To convince my inner self, let’s connect to the container with SQL Server Management Studio

We can connect to the container on its IP4 address

Or on the host IP by specifying the loopback IP and the port as we have bound the host to a port other than the default for SQL Server (1433).

Note that to connect via SSMS to a specific port the IP is separated by a comma i.e.
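so the Server name box takes something like:

```
127.0.0.1,49401
```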

Now we can see our container database from the host. We are convinced. It exists.

Connecting with a static IP

We can also assign the database container a static IP address by specifying a subnet and an IP address for our container in the docker-compose.yml file thus
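A sketch – the subnet is assumed to be 10.5.0.0/16 to match the 10.5.0.5 address, and the network name is my choice:

```yaml
services:
  db1:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "49401:1433"
    environment:
      SA_PASSWORD: "YourStrong!Passw0rd"   # placeholder
      ACCEPT_EULA: "Y"
    container_name: sqlserver_db1
    networks:
      vlan:
        ipv4_address: 10.5.0.5

networks:
  vlan:
    driver: nat
    ipam:
      config:
        - subnet: 10.5.0.0/16
```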

The container will now always be on 10.5.0.5 and we can connect with SSMS on 10.5.0.5 without worrying about any of the intervening steps. I guess that’s the easy way.

Demo Code

In case anyone needs it, the docker files for static and dynamic IP implementations are on my github site here.

Useful Links

https://docs.docker.com/compose/compose-file/
Full specification of docker compose

The process cannot access the file error with Docker Compose

I came across this error the other day when I was working on windows with Docker

 Cannot start service db: failed to create endpoint dotnet-album-viewer_db_1 on network nat: failed during hnsCallRawResponse: hnsCall failed in Win32: The process cannot access the file because it is being used by another process. (0x20)

To say it’s misleading would be charitable – I was thoroughly confused by this message. I found the answer, but it was a bit scattered over the internet so I thought I would bring it together.

The error

Consider this docker-compose.yml file
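A sketch of the relevant parts – the app image name is my assumption and other settings are trimmed; the port mappings are the bit that matters:

```yaml
version: "3.7"

services:
  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "3306:4000"    # host:container

  app:
    image: dockersamples/dotnet-album-viewer   # assumed image name
    ports:
      - "80:80"
    depends_on:
      - db
```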

We are running a database container (db) and an application container (app). When we run docker-compose up to bring up the containers we get the error

Cannot start service db: failed to create endpoint dotnet-album-viewer_db_1 on network nat: failed during hnsCallRawResponse: hnsCall failed in Win32: The process cannot access the file because it is being used by another process. (0x20)

It makes it seem like the container is locked in some way, perhaps two instances running concurrently. That’s not the issue.

The cause

The problem is that we are running the containers on ports that have been taken by other processes. So, in my case it is both the database container and the application server that are at fault

Database container

The database container listens on port 4000 internally to the container network. The first port, 3306, is what is exposed to the host machine, and it’s port 3306 that is already taken by another process. To find which application is running on that port use netstat and PowerShell i.e.
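A sketch – the PID placeholder comes from the last column of the netstat output:

```powershell
netstat -ano | findstr "3306"
# the last column is the owning PID - feed it to Get-Process for a name
Get-Process -Id <pid-from-netstat>
```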

which gives

MySQL is on port 3306, hence I can’t bind a container to that port and I get the weird ‘process cannot access file’ error. That does make sense, as the image dockersamples/tidb:nanoserver-sac2016 is a MySQL-compatible database – so it’s on the same port.

Application container

The application server is also targeting an in-use port

The application container will use port 80 internally and also port 80 externally. I don’t need to go to the trouble of running netstat. I know I’ve got a webserver (IIS) on that machine which has a default website running on port 80. I should have spotted that straight away and been a bit less baffled.

The resolution

I can either kill the processes that are taking that port or change the external port that the docker containers are going to use. I don’t really want to kill MySQL or IIS so I need 2 new ports.

I tend to use ports above 49200 as they are dynamic ports and are less likely to be in use (though I tend to use them up myself, so I still get clashes). Specifically

Well-known ports range from 0 through 1023.
Registered ports are 1024 to 49151.
Dynamic ports (also called private ports) are 49152 to 65535.

Once you’ve identified a port you can check if it’s in use by using netstat again
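e.g. for a candidate port (49301 here is an arbitrary pick of mine):

```powershell
netstat -ano | findstr "49301"
# no rows back means nothing is listening on that port
```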

And see if you get a result. So, the final resolution is
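moving both external ports into the dynamic range – the exact numbers here are my arbitrary picks, anything free above 49152 works:

```yaml
services:
  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "49301:4000"

  app:
    image: dockersamples/dotnet-album-viewer   # assumed image name
    ports:
      - "49250:80"
    depends_on:
      - db
```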

So db and app external ports are now in the dynamic range, and when I run docker-compose up again the error is gone, and my containers and the application come up. Lovely.

Useful links

The examples in this post come from here https://github.com/docker/labs/blob/master/windows/windows-containers/WindowsContainers.md

Useful post on how to check if your ports are in use. Further detail and explanation of what I’ve posted here.

List of well known ports https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers. Searching for port 3306 on Wikipedia would have shown me that it’s in use by MySQL. I guess a lot of people would have just known this. I didn’t.

Stack overflow answer as a reminder of which port in the docker-compose file is the external port and which one is used internally by docker. Spoiler alert: the first one is the externally exposed port and the second one is used by the applications running inside the docker containers. It’s the first port that is causing us the problems.

Dynamically Loading Assemblies for Dependency Injection in .Net Core

We want to get all our assemblies registered for Dependency Injection in a .Net Core 3 application by scanning the bin folder and registering all the classes which implement a specific interface. Bit of a mouthful so a simple (and classic) example is
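Sketched out – the names are my choices, picked to match the Foo and Bar objects mentioned later in the post:

```csharp
// A shared marker interface plus two services that implement it.
public interface ILifecycle { }

public interface IFoo : ILifecycle { }
public class Foo : IFoo { }

public interface IBar : ILifecycle { }
public class Bar : IBar { }
```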

We could register each object by its own interface i.e.
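i.e. one line per service (lifetime assumed):

```csharp
services.AddScoped<IFoo, Foo>();
services.AddScoped<IBar, Bar>();
```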

That’s fine for 2 but for hundreds it will be a pain. We want to register the classes based on an interface that they all implement – in this case ILifecycle. Also, we want to register all the assemblies in a given folder, typically the bin folder. This is what didn’t work in .Net Core for me and hence the post – no answer on the internet (that I could find) so I’m filling the internet gap.

Implementation

We will use Scrutor as our DI framework. It’s an extension of the native .Net DI framework with extensions for scanning and registering assemblies en masse.

Incorrect Implementation – Assembly.LoadFile

Using an MVC application and within Startup
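the scan looked something like this – a sketch, with my assumed folder and lifetime:

```csharp
// using System.IO; using System.Linq; using System.Reflection;

// Load everything in the bin folder, then have Scrutor register each
// ILifecycle implementation against its matching interface.
var assemblies = Directory
    .GetFiles(AppContext.BaseDirectory, "*.dll")
    .Select(Assembly.LoadFile)   // the load call that turns out to be the problem
    .ToList();

services.Scan(scan => scan
    .FromAssemblies(assemblies)
    .AddClasses(classes => classes.AssignableTo<ILifecycle>())
    .AsMatchingInterface()
    .WithScopedLifetime());
```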

Frustratingly this doesn’t work and it’s not obvious why. The test returns null Foo and Bar objects even though we have registered them in the correct way i.e. by scanning for ILifecycle and registering the objects as their matching interface. If we debug and look at the loaded assemblies everything looks fine. The assemblies have loaded and I can use reflection to poke into them further and satisfy myself that they are indeed the correct assemblies. Except they aren’t.

Correct Implementation – AssemblyLoadContext

Very similar, except we have swapped Assembly.LoadFile for AssemblyLoadContext.Default.LoadFromAssemblyPath.
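With AssemblyLoadContext the loading lines become (a sketch – the Scrutor scan itself is unchanged):

```csharp
// using System.Runtime.Loader;

var assemblies = Directory
    .GetFiles(AppContext.BaseDirectory, "*.dll")
    .Select(AssemblyLoadContext.Default.LoadFromAssemblyPath)
    .ToList();
```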

Now it works and the DI framework can provide us with the correct objects based on their matching interface. All is good.

Explanation

I don’t really have one, so if anyone can post a comment explaining this then that would be very interesting. I do know that .Net Core handles its dependencies differently to .Net Framework i.e. it’s all NuGet packages, so it makes sense that we need another way to correctly load the assemblies with their dependencies. My suspicion is that Assembly.LoadFile puts each assembly into its own isolated load context, so the types that get registered are not the same types the application resolves, whereas AssemblyLoadContext.Default keeps everything in the one context. What’s puzzling is that Assembly.LoadFile is still there and appears to work – except it doesn’t for DI.

Other DI Frameworks

I wasn’t exhaustive but I tried with StructureMap and AutoFac and got the same issue. So it’s not limited to Scrutor. You’d face these problems with other frameworks as well.

Useful Links

Stack Overflow question on How to load assemblies located in a folder in .net core console app. It’s where I found the proper way to load assemblies

GitHub for Scrutor. Works well (when you get over the assembly loading issue).

Converting .Net Standard Libraries to .Net Core

I was looking for a quick answer to this on the internet and couldn’t find it. So here it is

Open your .Net Standard csproj file in notepad or similar text editor. You’ll see this structure
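For a typical library it looks roughly like this (the target moniker will vary):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

</Project>
```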

Change the target framework to the desired version of core
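e.g. for .Net Core 3.1 (pick the version you need):

```xml
<TargetFramework>netcoreapp3.1</TargetFramework>
```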

Save it
Reload the project in visual studio or other IDE
Done

(I know there will be all kinds of other complexities to this but I’ve just got a bunch of standard libraries that need to reference a core project and can’t. So, they all need a rapid upgrade to .Net Core – like so).

70-480 Programming in HTML5 with JavaScript and CSS3 – Study Notes

I recently took MS exam 70-480 (and passed and was surprisingly delighted). So before the good feelings dissolve away I thought I would publish a few notes and useful links. I’m not going to be comprehensive but I’m just going to highlight areas that came up for me that I needed deeper reading on.

Generally

The exam does feel old hat, especially the JavaScript section. It is frustrating to have to remember Microsoft-specific syntax (AddEventHandler!!!). But I’ve seen worse and there is stuff that is still interesting and relevant in there. The good thing is that there are techniques and syntax that crop up over and over and can really help you winnow down the answers in the exam.

Basic stuff to know

The book

I’m old school and still prefer to start with the written word on bits of paper. The standard book is this …

People are really negative about exam ref books generally and this one gets the obligatory poor reviews (and some good ones in fairness). But do you know what – it’s fine. It’s not that readable, it’s got a lot of gaps and some of it is out of date now (and maybe was never correct in the first place). But it highlights all the syllabus areas for you; then it is up to you to go on and read more. I think people get upset because the book alone isn’t enough to pass the exam. So long as you appreciate that, then it’s fine.

Syllabus

https://www.microsoft.com/en-us/learning/exam-70-480.aspx

The key thing is to go through it all with a fine-tooth comb (or hair care implement of choice). So biting our lips and going through each section ….

Section 1: Implement and Manipulate Document Structures and Objects

JavaScript

The important bits here are understanding the JavaScript this keyword (notoriously confusing) and a tight grasp of JavaScript scope. Other things are

HTML5

  • Semantic elements. If I’m honest, some of the questions have a ‘guess what I’m thinking’ quality to them. Get to grips with the Microsoft party line on when these should be used. Just suck it up.
  • Geolocation API
  • App Cache
  • Canvas appears a lot but there is a lot of stuff out there for instance here and here among lots of other places.
  • SVG. It’s good to know when to use SVG and when to use the canvas elements. For instance SVG scales inherently.

Section 2: Implement Program Flow

JavaScript Arrays

A good grasp of JavaScript control structures is vital, so: array iteration (more detail here), associative arrays, and understanding why they iterate differently.

Web sockets

Very high level only. How to call and pass events backwards and forwards. I read this and, as interesting as it was, it is way too much. Spend less time than I did on this.

Ajax

Conversely Ajax comes up a lot in detail

Web Workers

Web workers also come up. Specific points I found were

Section 3: Access and Secure data

Reg Ex

Regular expressions crop up here, and how to call them from JavaScript. Which methods belong to the regex pattern and which belong to the string class are useful to know here.

JQuery Forms

The jQuery forms API. Pay attention to differences between the serialize and serializeArray methods and be secure in your knowledge of how to submit forms through jQuery.

JavaScript Encoding

JavaScript encoding and decoding of URIs and query strings. Comes up bizarrely often so one to really know. Know which method gives which output and which one to use to ensure that the query string is encoded.
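A worked comparison: encodeURI leaves URI delimiters such as : / ? & = alone, while encodeURIComponent escapes them too – which is what you want for individual query string values.

```typescript
console.log(encodeURI("https://example.com/a b"));  // https://example.com/a%20b
console.log(encodeURIComponent("a=b&c"));           // a%3Db%26c
```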

Html5 input types

Html 5 input types. Really useful to memorise them all and know all the possible attributes i.e.

  • step
  • min
  • max
  • title
  • pattern etc…

in particular, know the attributes that can be used to limit the data input format e.g. type="number".
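A sketch of two constrained inputs using those attributes (the values are arbitrary; note pattern applies to text-like types rather than number):

```html
<input type="number" min="0" max="100" step="5" title="Multiples of 5, 0 to 100" />
<input type="text" pattern="[A-Za-z]{3}" title="Three letters" />
```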

Section 4: Use CSS3 in Applications

I wanted to do this exam primarily to improve my CSS, which is my weakest web dev side, so I spent a bit of time on this. I’m sure you will need less.

Basics

The cascade

Layout

Layout comes up a lot so

Text

Fancy stuff

Generally with the animation /transitions side of things I found it most useful to look at worked examples of some standard (though clever to me) forms of animations. Card flips and loading spinners are typical stuff.

Never saw

I never saw anything on the drag and drop API. It had a decent-sized section in the Microsoft exam ref book but I didn’t see it on the exam or the revision questions. Don’t blame me if you get 7 questions about it though.

Good luck

And of course best of luck for the exam. As painful as it can be, it does feel good to pass a Microsoft exam. Take it from someone who has passed onto the other side.

Extending TypeScript to serialise Map objects to JSON

The collection objects offered by ES6 such as Map and Set are indeed a marvel. One notable downside to them though is that they don’t serialise to JSON. For example
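e.g. a populated map going straight through JSON.stringify:

```typescript
const roles = new Map<string, string>([["alice", "admin"], ["bob", "user"]]);
console.log(JSON.stringify(roles)); // {}
```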

Results in an empty object, {}, always – irrespective of what is in the map collection. Not great, and a pain when we want to stash the map object in a cookie or local storage to be used later. It just doesn’t do it out of the box. Here I’m going to extend TypeScript to do this so I will never need to think about it again.

Serialising the Map object

There are a number of different ways to get the Map object to serialise and deserialise in and out of JSON. This one relies on the contents of the Map object being serialisable. If they are then we can do the following

Serialise Map into JSON
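The trick is to turn the entries into an array of pairs first – arrays serialise happily:

```typescript
const settings = new Map<string, number>([["width", 100], ["height", 50]]);
const json = JSON.stringify(Array.from(settings.entries()));
console.log(json); // [["width",100],["height",50]]
```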

Deserialise JSON into Map
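Going the other way, the Map constructor accepts that same array of pairs:

```typescript
const restored = new Map<string, number>(JSON.parse('[["width",100],["height",50]]'));
console.log(restored.get("width")); // 100
```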

This works in JavaScript (ES6) and TypeScript – which after all is a superset of JavaScript.

Serialise the Map Object

It feels a bit clunky to have these methods hanging around in some utility library. What I want to do is extend the TypeScript Map class to have these. To do this go to a TypeScript definition file (*.d.ts) and enter the following
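A sketch of the declaration (the file name is my choice) – declaration merging adds the method to the built-in interface:

```typescript
// map-extensions.d.ts
interface Map<K, V> {
    toJson(): string;
}
```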

This will extend our map object to have a toJson method. In a suitable library file place the implementation and associate it with the prototype object of the Map object
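A sketch of the implementation; the as any cast is only there to keep this snippet standalone – with the *.d.ts declaration in place it isn’t needed:

```typescript
(Map.prototype as any).toJson = function (this: Map<unknown, unknown>): string {
    // serialise via an array of [key, value] pairs
    return JSON.stringify(Array.from(this.entries()));
};

const sample = new Map<string, number>([["a", 1], ["b", 2]]);
console.log((sample as any).toJson()); // [["a",1],["b",2]]
```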

It’s not hugely intuitive why we need to extend the prototype object at this point. From the Mozilla documentation

All Map instances inherit from Map.prototype.

So if we want our new method on an instance of the Map object we need to extend the prototype object – because Map instances all inherit from this.

This is then called thus
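With the extension in place the call site is just:

```typescript
const lookup = new Map<string, string>([["theme", "dark"]]);
const json = lookup.toJson(); // '[["theme","dark"]]'
```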

Which can then be stashed away in local storage or the like.

Deserialise a JSON string into a Map object

We are going to do similar to implement the deserialisation part. It’s not quite the same though. If we are serialising a Map object it makes sense to have the toJson method associated with an instance of a Map object. If we are deserialising into a new Map object we want the method associated with the class – like a static method in C#. The following illustrates the difference
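Roughly, as hypothetical calls:

```typescript
const json = settings.toJson();   // instance method - called on a Map object
const map = Map.fromJson(json);   // class ("static") method - called on Map itself
```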

Again go to a TypeScript definition file (*.d.ts) and enter the following
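A sketch of the declaration:

```typescript
// in the same *.d.ts
interface MapConstructor {
    fromJson(json: string): Map<any, any>;
}
```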

Note – this time I am extending the MapConstructor object rather than the Map<K,V> object. In TypeScript the MapConstructor object returns a reference to the Map function that created the object. So by extending the MapConstructor object I am associating the method with the object that created the instance of the map object. I am associating the new method with the class and not the instance.

Now associate our implementation directly with the Map object which itself implements MapConstructor
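Again, the as any cast only exists to keep this snippet standalone – the MapConstructor declaration removes the need for it:

```typescript
(Map as any).fromJson = function (json: string): Map<any, any> {
    // the Map constructor accepts an array of [key, value] pairs
    return new Map(JSON.parse(json));
};

const parsed = (Map as any).fromJson('[["a",1],["b",2]]');
console.log(parsed.get("b")); // 2
```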

Which we call thus
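e.g. with the declaration in place:

```typescript
const restored = Map.fromJson('[["theme","dark"]]');
console.log(restored.get("theme")); // dark
```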

That’s it – the Map object now has toJson and fromJson methods which handle our JSON conversion. The Set object suffers from the same limitations and can be extended in the same way.

Useful Links

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
Map and Set objects are part of the ES6 standard. TypeScript offers a strongly typed version of these collections

https://codepunk.io/extending-strings-and-other-data-types-in-typescript/
More detail about extending built in types in TypeScript

https://stackoverflow.com/
The initial code to serialise and deserialise the Map object came from a Stack Overflow question, but I’ve forgotten which one – if I find it I will pop the reference here in the interest of full disclosure and not ripping off someone else’s work and claiming it for my own.

Not as irreplaceable as I thought

Like all software developers, I like to think of myself as being irreplaceable. Clearly no-one on the entire planet combines my depth of technical knowledge, my amazing familiarity with the business domain and my legendary humility. I am truly a unique gift to any and all employers.

So I was particularly disappointed this week when I was replaced by this Playmobil figure at the family dinner table. I was out that evening so the daughter placed this unkempt-looking plastic figure in my chair. She felt it was an adequate, or even superior, replacement. It’s uncertain whether my wife noticed the difference. Certainly the level of conversation was improved that evening.

So not as irreplaceable as I thought. I once replaced a colleague with a blow up sheep and sent him the photo. I thought it was very funny at the time – I realise now that it wasn’t as funny as I thought. I really hope my employer doesn’t realise how easily I can be replaced by a cheap child’s toy.