Applying Entity Framework Migrations to a Docker Container

I’m going to run through how to deploy an API and a database into two separate Docker containers and then apply Entity Framework migrations. This will create and populate the database with the correct schema and reference data. My idea was that EF migrations would be a straightforward way to initialise a database. It wasn’t that easy. I’m going to go through the failed attempts as I think they are instructive. I know that most people just want the answer – so if that’s you, jump to the end and it’s there.

Environment

I’m using a .Net Core 3.1 API with Entity Framework Core and the database is MySQL. I’ll also touch on how you would do it with Entity Framework 6. The Docker containers are Windows, though as it’s .Net Core and MySQL you could use Linux as well if needed.

The demo project is called Learning Analytics and it’s a simple student management application. It’s just what I’m tinkering around with at the moment.

Deploying into Docker without migrations

The DockerFile is

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1903 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1903 AS build
WORKDIR /src
COPY ["LearningAnalytics.API/LearningAnalytics.API.csproj", "LearningAnalytics.API/"]
RUN dotnet restore "LearningAnalytics.API/LearningAnalytics.API.csproj"
COPY . .
WORKDIR "/src/LearningAnalytics.API"
RUN dotnet build "LearningAnalytics.API.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "LearningAnalytics.API.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "LearningAnalytics.API.dll"]

and there is a docker-compose.yml file to bring up the API container above and the database ….

services:
  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "49301:4000"

  app:
    image: learninganalyticsapi:dev
    build:
      context: .
      dockerfile: LearningAnalytics.API\Dockerfile
    ports:
      - "49501:80"
    environment:
      - "ConnectionStrings:LearningAnalyticsAPIContext=Server=db;Port=4000;Database=LearningAnalytics;User=root;SslMode=None;ConnectionReset=false;connect timeout=3600"     
    depends_on:
      - db

networks:
  default:
    external:
      name: nat

If I go to the directory containing the docker-compose.yml file and run

docker-compose up -d

I’ll get the database and the API up. I can browse to the API at a test endpoint (the API is bound to port 49501 in the docker-compose file)

http://localhost:49501/test

but if I try to access the API and get a list of students at

http://localhost:49501/api/student

then the application will crash because the database is blank. I haven’t done anything to populate it. I’m going to use migrations to do that.
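
For anyone who hasn’t seen one, an EF Core migration is just a generated C# class with Up and Down methods describing the schema changes and any seed data. A trimmed, hypothetical example of the sort of thing the demo relies on (table and column names are illustrative, not the actual demo schema):

using Microsoft.EntityFrameworkCore.Migrations;

// Illustrative only - the real migrations are generated by
// 'dotnet ef migrations add' and live in the project's Migrations folder
public partial class CreateStudentTable : Migration
{
	protected override void Up(MigrationBuilder migrationBuilder)
	{
		// create the schema
		migrationBuilder.CreateTable(
			name: "Student",
			columns: table => new
			{
				Id = table.Column<int>(nullable: false),
				Name = table.Column<string>(maxLength: 100, nullable: false)
			},
			constraints: table => table.PrimaryKey("PK_Student", x => x.Id));

		// seed the reference data
		migrationBuilder.InsertData(
			table: "Student",
			columns: new[] { "Id", "Name" },
			values: new object[] { 1, "Ada Lovelace" });
	}

	protected override void Down(MigrationBuilder migrationBuilder)
	{
		migrationBuilder.DropTable(name: "Student");
	}
}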

Deploying into Docker with migrations – what doesn’t work

I thought it would be easy but it proved not to be.

Attempt 1 – via docker-compose

My initial thought was to run the migrations as part of the docker-compose file using the command directive. So in the docker-compose file

  app:
    image: learninganalyticsapi:dev
    build:
      context: .
      dockerfile: LearningAnalytics.API\Dockerfile
    ports:
      - "49501:80"
    environment:
      - "ConnectionStrings:LearningAnalyticsAPIContext=Server=db;Port=4000;Database=LearningAnalytics;User=root;SslMode=None;ConnectionReset=false;connect timeout=3600"     
    depends_on:
      - db
    command: ["dotnet", "ef", "database", "update"]

The app server depends on the database (depends_on) so docker compose will bring them up in dependency order. However, even though the app container comes up after the db container, it isn’t necessarily ‘ready’. The official documentation says

However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) – only until it’s running.

So when I try to run Entity Framework migrations against the db container from the app container, it fails. The db container isn’t ready and isn’t guaranteed ever to be.
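
One way around that would be to make whatever runs the migrations wait until the database is actually accepting connections. A minimal sketch of the idea, assuming the MySqlConnector package (the attempt count and delay are arbitrary):

using System;
using System.Threading;
using MySqlConnector;

static class DbReadiness
{
	// poll the database until it accepts a connection or we give up
	public static void WaitForDatabase(string connectionString, int maxAttempts = 30)
	{
		for (var attempt = 1; attempt <= maxAttempts; attempt++)
		{
			try
			{
				using (var connection = new MySqlConnection(connectionString))
				{
					connection.Open();
				}
				Console.WriteLine($"Database ready after {attempt} attempt(s)");
				return;
			}
			catch (MySqlException)
			{
				Console.WriteLine($"Attempt {attempt}: database not ready, retrying ...");
				Thread.Sleep(TimeSpan.FromSeconds(2));
			}
		}
		throw new TimeoutException("Database never became ready");
	}
}

Even with a wait like that though, there are other problems with running migrations from inside the app container, as the next attempts show.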

Attempt 2 – via interactive shell

I therefore thought I could do the same but run it afterwards via an interactive shell (details of how to get an interactive shell are here). The idea was that I could wrap all this up in a PowerShell script looking like this

docker-compose up -d
docker exec learninganalytics_app_1 dotnet ef database update

but this doesn’t work because

  1. the container doesn’t have the SDK installed as part of the base image, so the dotnet ef command isn’t available. This is resolvable
  2. EF Core migrations need the source code to run. We only have the built application in the container, as it should be. This sucks and isn’t resolvable

Attempt 3 – via the Startup class

I’m coming round to the idea that there is going to have to be some kind of code change in the application. I can apply migrations easily via C#. So in the startup class I could do

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
	using (var serviceScope = app.ApplicationServices.GetService<IServiceScopeFactory>().CreateScope())
	{
		// resolve the EF context from a scope and apply any pending migrations
		var context = serviceScope.ServiceProvider.GetRequiredService<DatabaseContext>();
		context.Database.Migrate();
	}

	//.. other code
}

This does work but isn’t great. My application is going to check for and apply migrations every time it starts – not very performant – and it couples startup to the database being available. I don’t like it.
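
If I did have to keep it in Startup, I’d at least gate it behind configuration so migrations only run when explicitly asked for. A sketch, assuming Startup has the usual injected IConfiguration (the ApplyMigrations setting is my own invention, not something the framework provides):

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
	// only migrate when e.g. the environment variable ApplyMigrations=true is set
	if (Configuration.GetValue<bool>("ApplyMigrations"))
	{
		using (var serviceScope = app.ApplicationServices.GetService<IServiceScopeFactory>().CreateScope())
		{
			var context = serviceScope.ServiceProvider.GetRequiredService<DatabaseContext>();
			context.Database.Migrate();
		}
	}

	//.. other code
}

It still ties deployment to application startup though, so I kept looking.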

Deploying into Docker with migrations – what does work

The resolution is a combination of the failed attempts. The principle is

  1. Provide a separate utility that can run migrations
  2. Deploy this into the Docker application container, in its own folder
  3. Run it after docker-compose has brought the containers up
  4. Wrap it all up in a PowerShell script

EF Migration Utility

This is a simple console app that references the API. The app is

class Program
{
	static void Main(string[] args)
	{
		Console.WriteLine("Applying migrations");

		// build a minimal web host purely to get a configured service provider;
		// it is never started
		var webHost = new WebHostBuilder()
			.UseContentRoot(Directory.GetCurrentDirectory())
			.UseStartup<ConsoleStartup>()
			.Build();

		// resolve the EF context and apply any pending migrations
		using (var context = (DatabaseContext) webHost.Services.GetService(typeof(DatabaseContext)))
		{
			context.Database.Migrate();
		}
		Console.WriteLine("Done");
	}
}

and the Startup class is a stripped-down version of the API startup

public class ConsoleStartup
{
	public ConsoleStartup()
	{
		var builder = new ConfigurationBuilder()
			.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
			.AddEnvironmentVariables();
		Configuration = builder.Build();
	}

	public IConfiguration Configuration { get; }

	public void ConfigureServices(IServiceCollection services)
	{
		services.AddDbContext<DatabaseContext>(options =>
		{
			options.UseMySql(Configuration.GetConnectionString("LearningAnalyticsAPIContext"));
		});
	}

	public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
	{
	}
}

I just need the Startup to read appsettings.json and register the database context, which this does. The console app references the API so it can use the API’s config files, meaning I don’t have to duplicate the config in the console app.

DockerFile amendments

The DockerFile needs to be amended to deploy the migrations application into a separate folder on the app container file system. It becomes

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1903 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1903 AS build
WORKDIR /src
COPY ["LearningAnalytics.API/LearningAnalytics.API.csproj", "LearningAnalytics.API/"]
RUN dotnet restore "LearningAnalytics.API/LearningAnalytics.API.csproj"
COPY . .
WORKDIR "/src/LearningAnalytics.API"
RUN dotnet build "LearningAnalytics.API.csproj" -c Release -o /app/build

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1903 AS migration
WORKDIR /src
COPY . .
RUN dotnet restore "LearningAnalytics.Migration/LearningAnalytics.Migration.csproj"
WORKDIR "/src/LearningAnalytics.Migration"
RUN dotnet build "LearningAnalytics.Migration.csproj" -c Release -o /app/migration

FROM build AS publish
RUN dotnet publish "LearningAnalytics.API.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /migration
COPY --from=migration /app/migration .

WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "LearningAnalytics.API.dll"]

the relevant part is

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1903 AS migration
WORKDIR /src
COPY . .
RUN dotnet restore "LearningAnalytics.Migration/LearningAnalytics.Migration.csproj"
WORKDIR "/src/LearningAnalytics.Migration"
RUN dotnet build "LearningAnalytics.Migration.csproj" -c Release -o /app/migration

which builds out the migration application and …

FROM base AS final
WORKDIR /migration
COPY --from=migration /app/migration .

which copies it into a folder called migration on the published container

Glue it together with PowerShell

Once the containers are brought up with docker-compose it’s straightforward to use an interactive shell to navigate to the LearningAnalytics.Migration.exe application and run it. That will initialise the database. A better solution is to wrap it all up in a simple PowerShell script e.g.

docker-compose up -d
docker exec learninganalytics_app_1 c:\migration\LearningAnalytics.Migration.exe

and run that. The container comes up and the database is populated with the correct schema and reference data via EF migrations. The API now works correctly.

Entity Framework 6

The above is all for Entity Framework Core. Entity Framework 6 introduced the Migrate.exe tool. This can apply EF migrations without the source code, which was the major stumbling block for EF Core. Armed with this, you could copy it up to the container and perform the migrations via something like

docker exec learninganalytics_app_1 Migrate.exe
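
EF6 can also apply migrations from plain C#, so the console app approach above translates directly. A sketch, assuming the DbMigrationsConfiguration subclass (conventionally called Configuration) that Enable-Migrations scaffolds into the project:

using System;
using System.Data.Entity.Migrations;

class Program
{
	static void Main(string[] args)
	{
		Console.WriteLine("Applying EF6 migrations");

		// Configuration is the scaffolded DbMigrationsConfiguration subclass
		var migrator = new DbMigrator(new Configuration());
		migrator.Update(); // applies any pending migrations

		Console.WriteLine("Done");
	}
}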

Do Migrations suck though?

This person thinks so. Certainly the inability to run them on compiled code is a huge drag. Whenever I write a production application I prefer to just write the SQL out for the schema and apply it with some PowerShell. It’s not that hard. I like to use migrations for personal projects, but there must be a reason that I’m not using them when I get paid to write code. Do I secretly think that they suck just a little?

Demo code

As ever, demo code is on my GitHub site

https://github.com/timbrownls20/Learning-Analytics/tree/master/LearningAnalytics/LearningAnalytics.Migration
is the migration app

https://github.com/timbrownls20/Learning-Analytics/blob/master/LearningAnalytics/LearningAnalytics.API/DockerfileMigrations
the DockerFile

https://github.com/timbrownls20/Learning-Analytics/tree/master/LearningAnalytics
for the docker-compose.yml file and the simple PowerShell that glues it together

Useful links


This Stack Overflow question was the starting point for a lot of this, and this answer in particular has a good discussion and some other options on how to achieve this – none of them are massively satisfactory. I felt something like what I’ve done was about the best.

https://docs.docker.com/compose/startup-order/
discusses why you can’t rely on the depends_on directive to make the database available to the application when you are bringing up the containers. It has more possibilities to circumvent this, such as wait-for-it. I’m certainly going to look at these but they do seem scoped to Linux rather than Windows so I’d have to change around the docker files for that. Also they wouldn’t help with Entity Framework 6 or earlier.

Browsing the File System in Windows and Linux Docker Containers

I’ve written a few posts about Docker now so I thought I would just step back and write a set of instructions on how to browse the file system via an interactive shell on a running container. Although it’s basic, I’d like to be able to reference these kinds of instructions in other posts so I can avoid repeating myself. Also, people need simple guides to basic processes anyway – just watch me with a power drill and you’ll see someone in dire need of a basic guide.

Environment

I’m running Docker on a Windows machine but I’ll be bringing up both Windows and Linux containers.

The test project is a simple .Net Core project for managing student tests which I’ve ambitiously called Learning Analytics.

Windows Container

Using this simple DockerFile

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1903 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1903 AS build
WORKDIR /src
COPY ["LearningAnalytics.API/LearningAnalytics.API.csproj", "LearningAnalytics.API/"]
RUN dotnet restore "LearningAnalytics.API/LearningAnalytics.API.csproj"
COPY . .
WORKDIR "/src/LearningAnalytics.API"
RUN dotnet build "LearningAnalytics.API.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "LearningAnalytics.API.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "LearningAnalytics.API.dll"]

Build it into an image

docker build . -f "LearningAnalytics.API\DockerFile" -t learninganalyticsapi:dev

It will be named learninganalyticsapi and tagged dev.

Now run the image as a container called learninganalyticsapi_app_1 in detached mode.

docker run -d -p 80:80 --name learninganalyticsapi_app_1 learninganalyticsapi:dev

It’s going to bind port 80 of the container to port 80 of the host (the image’s ENTRYPOINT already runs the API, so no command is needed). Assuming there is nothing already bound to port 80, I can navigate to a test page here

http://localhost/test

And I will get a test message which confirms the container is up and running.
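
(For the record, the test endpoint is nothing clever – a minimal controller along these lines; the exact code in the demo may differ.)

using Microsoft.AspNetCore.Mvc;

// minimal diagnostic endpoint - returns a canned message so we can
// confirm the container is up and serving requests
[Route("test")]
[ApiController]
public class TestController : ControllerBase
{
	[HttpGet]
	public ActionResult<string> Get() => "LearningAnalytics API is up";
}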

Now run the cmd shell in interactive mode

docker exec -it learninganalyticsapi_app_1 cmd

Now we are on the running container itself so running these commands

cd ..
dir

will navigate up to the root of the container and I can see what the top level directories are like so ….

Obviously now I’ve got an interactive shell I can do anything that shell supports. Browsing files is just an easy example.

Once I’m done then type exit to end the interactive session and I’m back to the host.

Linux Container

So same again for a Linux container. It’s going to be pretty similar

Using this simple Docker file

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["LearningAnalytics.API/LearningAnalytics.API.csproj", "LearningAnalytics.API/"]
RUN dotnet restore "LearningAnalytics.API/LearningAnalytics.API.csproj"
COPY . .
WORKDIR "/src/LearningAnalytics.API"
RUN dotnet build "LearningAnalytics.API.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "LearningAnalytics.API.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "LearningAnalytics.API.dll"]

Build and run the container

docker build . -f "LearningAnalytics.API\DockerFile" -t learninganalyticsapi:dev

docker run -d -p 49501:80 --name learninganalyticsapi_app_2 learninganalyticsapi:dev

The only difference here is that I’ve bound it to a different port on the host. I’m working against port 49501, just because I’ve already bound port 80 in the first example so it’s now in use. If I use port 80 again I get these kinds of errors. So the test page for the Linux box is at

http://localhost:49501/test

Also the name of the container is learninganalyticsapi_app_2 to differentiate it from the Windows one which is already there from the first example.

Now bring up the shell, which is bash for Linux

docker exec -it learninganalyticsapi_app_2 bash

Now go to the root and list files. Slightly different commands than before

cd ..
ls

and we get this

which are the folders at the root of the Linux container.

As before type exit to end the interactive shell and return to the host.

Demo Code

As ever, the source code is on my GitHub site

https://github.com/timbrownls20/Learning-Analytics/tree/master/LearningAnalytics

It’s just an API with a MySQL database. I’m just bringing up the docker container for this demo. The Windows Docker file is

https://github.com/timbrownls20/Learning-Analytics/blob/master/LearningAnalytics/LearningAnalytics.API/DockerfileWindows

and the Linux one is

https://github.com/timbrownls20/Learning-Analytics/blob/master/LearningAnalytics/LearningAnalytics.API/DockerfileLinux

You could do something similar to the above but replace the build and run steps with a docker-compose.yml file. An example is here

https://github.com/timbrownls20/Learning-Analytics/blob/master/LearningAnalytics/docker-compose.yml

which brings up the API container and one for the database. The principle is the same though.

NuGet restore failing in Docker Container

I was tempted to write about this before, but I didn’t as there is already a very good, highly rated Stack Overflow answer with the solution. However, I’m just reinstalling Docker Desktop and getting things working again and I wish I had written this stuff down as I’ve forgotten it. One of the many reasons to write blog posts is to fix stuff in my memory and as my own personal development notes. So in that spirit…

The Problem

We have a very simple .Net Core MVC solution.

It has the following NuGet packages

Install-Package NugetSample.NugetDemo.Demo -Version 1.0.0
Install-Package bootstrap -Version 4.5.0

With this DockerFile to containerise it

FROM mcr.microsoft.com/dotnet/core/aspnet AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk AS build
WORKDIR /src
COPY ["Template.Web.csproj", "Template.Web/"]
RUN dotnet restore "Template.Web/Template.Web.csproj"
COPY . .
WORKDIR "/src/Template.Web"
RUN dotnet build "Template.Web.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "Template.Web.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Template.Web.dll"]

We go to the directory with the DockerFile and try to build it into an image with

docker build .

It fails on the dotnet restore step like so …

i.e. with this error

C:\Program Files\dotnet\sdk\3.1.302\NuGet.targets(128,5): error : Unable to load the service index for source https://api.nuget.org/v3/index.json. [C:\src\Template.Web\Template.Web.csproj]
C:\Program Files\dotnet\sdk\3.1.302\NuGet.targets(128,5): error :   No such host is known. [C:\src\Template.Web\Template.Web.csproj]
The command 'cmd /S /C dotnet restore "Template.Web/Template.Web.csproj"' returned a non-zero code: 1

NuGet is failing us

The Cause

The container doesn’t have connectivity to the internet so it can’t bring down the packages. We can see this clearly by building this very simple docker file

FROM mcr.microsoft.com/dotnet/core/sdk
RUN ping google.com

The ping fails. The host (my development machine) does have internet access – I would have noticed if that had gone down and I would be hysterically ringing Telstra (again). So it’s something specific to the container.
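
If you’d rather prove it from code than from ping, a couple of lines of C# run inside the container will show the DNS lookup itself failing (a throwaway sketch):

using System;
using System.Net;
using System.Net.Sockets;

class DnsCheck
{
	static void Main()
	{
		try
		{
			// the same lookup NuGet needs to make for the package feed
			var entry = Dns.GetHostEntry("api.nuget.org");
			Console.WriteLine($"Resolved to {entry.AddressList[0]}");
		}
		catch (SocketException ex)
		{
			// 'No such host is known' - the same error as the build output
			Console.WriteLine($"DNS lookup failed: {ex.Message}");
		}
	}
}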

The Resolution

The DNS server is wrong in the container. To fix it, hardcode the DNS servers into Docker, i.e. put this JSON

"dns": ["10.1.2.3", "8.8.8.8"]

into the Docker daemon settings. In Docker Desktop that’s the daemon configuration under Settings

And restart the docker service. The container now has internet access, NuGet restore will work and we can now containerise our very simple web application.

Demo Code

As ever, the demo code is on my GitHub site

The very simple application
https://github.com/timbrownls20/Demo/tree/master/ASP.NET%20Core/Template

and its docker file
https://github.com/timbrownls20/Demo/blob/master/ASP.NET%20Core/Template/Template.Web/Dockerfile

Docker file for the internet test
https://github.com/timbrownls20/Demo/blob/master/Docker/InternetTest/DockerFile

Useful Links

This Stack Overflow answer has the resolution to this with a very good explanation. Also it has other (probably better) ways to fix this and resolutions to other Docker network issues that you may face.

Why I’m uninstalling Docker Desktop

Unkillable monster

It might be the parting of the ways for me and Docker Desktop for now. There have been good times. There have been bad times. But its relentless demands have become too much for one software developer to bear.

I’ve been noticing that both Visual Studio and SQL Server Management Studio have been particularly sluggish of late. They’ve never been the lightest of software companions but we’ve learnt to rub along together. I know their foibles and they know mine. So what’s gone wrong?

My hard disk is a reasonable 500GB but has been filling up rapidly of late. Could a steady diet of docker images and containers be to blame? But this shouldn’t faze the trusty SSMS warhorse. It knows how to get what it needs (typically most of my RAM). But what is this? It’s a new kid on the block. The Vmmem process is new and seems to be gobbling up CPU. I’m frequently running at 100% utilisation now. No wonder we are all looking a tad jaded.

An unwelcome visitor

I’m not deterred. Surely stopping the docker service and making sure it doesn’t auto restart should be enough. But no – vmmem continues to eat up my CPU. I try to kill it with task manager but it proves to be an unkillable monster. Could PowerShell assist, as it has assisted me so often in the past? Not this time. Vmmem continues unbowed and unbroken.

But what is it? What is this ferocious beast? It appears to be a process used by virtual machines that bizarrely still runs when docker is turned off. It can’t be stopped. It can’t be killed. It continues to drain the life out of my computer. This intolerable situation has to come to an end.

And end it does; with a heavy heart Docker Desktop is uninstalled.

End of the affair

Postscript

It’s a Windows thing. I’ve heard complaints that the Windows implementation of Docker Desktop is particularly heavy, so maybe things would be OK on a Linux box. I think my relationship with Docker Desktop is on a break and hasn’t irretrievably broken down. I’m talking to my local IT shop about an upgrade. When I’m on the latest Intel i9 chip with a bunch more hard disk space and probably more RAM too, then Docker Desktop and I can talk. We can go into couples counselling and see if there is a way to repair our fractured relationship.

Publishing SQL Server database in Docker

To me docker containers have an ethereal, almost unreal quality to them. Do they really exist? Where are they? What do they look like? To convince myself of their reality I want to use SQL Server Management Studio on the host to connect to a SQL instance in a running container. Along the way we shall

– bring up the container
– access container command shell
– find out where it is on the network
– connect to it from the host machine
– compare static and dynamically assigned container IPs

Environment

I’m on Windows 10 and I’m going to be working with Windows containers.

docker-compose.yml

For this I don’t need a DockerFile as I’m just going to run a library image directly, so I’m going straight to a docker compose file, docker-compose.yml

version: '3.2'

services:
  db:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "49401:1433"
    environment:
      - sa_password=Secret12345
      - ACCEPT_EULA=Y      
    container_name: sqlserver_db1

networks:
  default:
    external:
      name: nat


so to break it down

image: microsoft/mssql-server-windows-developer

Create the container from the image microsoft/mssql-server-windows-developer. If it hasn’t been downloaded then it will be when we bring up the container

ports:
- "49401:1433"

we are running on port 1433 internally to the container and port 49401 will be bound on the host. I’m not running it on 1433:1433 because I’ve already got SQL Server on my host so that port is taken. Attempts to bind the host to a port that is already taken generate odd errors.

environment:
- sa_password=Secret12345
- ACCEPT_EULA=Y

we set the system admin password and accept the licence agreement

container_name: sqlserver_db1

and the container is given a name rather than one assigned by docker. It just makes the rest of the examples easier. It’s not needed.

We haven’t specified a subnet or IP address so the network section tells docker to use its default network connection

Bringing the container up

To bring it up run

docker-compose up

which runs and brings up the docker container

If we jump onto another cmd window and run

docker container ls

We get the list of running containers thus

so we can see our SQL box in its container, in the platonic realm of container space that it is currently inhabiting. It’s called sqlserver_db1 as specified in our docker-compose.yml file

Getting an interactive shell on our container

The container still has an air of unreality to it. To start to resolve it into the real world I want to be able to run commands on it. To do this we need to bring up an interactive shell. I can bring up a cmd window

docker exec -ti sqlserver_db1 cmd

or a powershell window for more options

docker exec -ti sqlserver_db1 powershell

I’m using PowerShell. Once there I can start to run whatever commands I see fit

echo 'Hello world'
hostname

and so on.

ping google.com

is a good one to run to check if the container has network connectivity to the outside world. Mine didn’t and it took a while to work out why.

Where is it on the network

It’s not immediately apparent where the container is on the network. We’ve let docker dynamically assign the IP. To find it we go to our interactive shell and run

ipconfig

which will tell us the IP4 address

or outside of an interactive shell (from the host) we could use

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" sqlserver_db1

which will tell us the IP4 address also.

So right now our container is on 192.168.154.121 and we can ping it there. Going back to docker-compose.yml we also specify ports.

ports:
- "49401:1433"

so 1433 is the one that the docker container uses and we can telnet to it from the host to prove it’s open

telnet 192.168.154.121 1433

which it is. Also we can telnet to the host on the other port

telnet 127.0.0.1 49401

which is the same place but via the port that is bound to the host.

Connecting to the container with SSMS

I like to see things to fully accept their existence. To convince my inner self, let’s connect to the container with SQL Server Management Studio

We can connect to the container on its IP4 address

Or on the host IP by specifying the loopback IP and the port as we have bound the host to a port other than the default for SQL Server (1433).

Note that to connect via SSMS to a specific port the IP is separated by a comma i.e.

127.0.0.1,49401

Now we can see our container database from the host. We are convinced. It exists.
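
The same comma syntax works from code too, so as a final sanity check we can connect from a little console app on the host (a sketch, assuming the System.Data.SqlClient package):

using System;
using System.Data.SqlClient;

class ContainerDbCheck
{
	static void Main()
	{
		// note the comma between IP and port - the SQL Server convention,
		// matching the SSMS server name 127.0.0.1,49401
		var connectionString =
			"Server=127.0.0.1,49401;Database=master;User Id=sa;Password=Secret12345;";

		using (var connection = new SqlConnection(connectionString))
		{
			connection.Open();
			Console.WriteLine($"Connected to SQL Server version {connection.ServerVersion}");
		}
	}
}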

Connecting with a static IP

We can also assign the database container a static IP address by specifying a subnet and an IP address for our container in the docker-compose.yml file thus

version: '3.2'

services:
  db:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "49401:1433"
    environment:
      - sa_password=Secret12345
      - ACCEPT_EULA=Y      
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
    container_name: sqlserver_db1
    
networks:
  vpcbr:
    driver: nat
    ipam:
     config:
       - subnet: 10.5.0.0/16      

The container will now always be on 10.5.0.5 and we can connect with SSMS on 10.5.0.5 without worrying about any of the intervening steps. I guess that’s the easy way.

Demo Code

In case anyone needs it, the docker files for static and dynamic IP implementations are on my GitHub site here.

Useful Links

https://docs.docker.com/compose/compose-file/
Full specification of docker compose

The process cannot access the file error with Docker Compose

I came across this error the other day when I was working on Windows with Docker

Cannot start service db: failed to create endpoint dotnet-album-viewer_db_1 on network nat: failed during hnsCallRawResponse: hnsCall failed in Win32: The process cannot access the file because it is being used by another process. (0x20)

To say it’s misleading would be charitable – I was thoroughly confused by it. I found the answer, but it was a bit scattered over the internet so I thought I would bring it together.

The error

Consider this docker-compose.yml file

version: '3.2'

services:
  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "3306:4000"

  app:
    image: dockersamples/dotnet-album-viewer
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "80:80"
    environment:
      - "Data:Provider=MySQL"
      - "Data:ConnectionString=Server=db;Port=4000;Database=AlbumViewer;User=root;SslMode=None"      
    depends_on:
      - db

networks:
  default:
    external:
      name: nat

We are running a database container (db) and an application container (app). When we run

docker-compose up -d

to bring up the containers we get the error

Cannot start service db: failed to create endpoint dotnet-album-viewer_db_1 on network nat: failed during hnsCallRawResponse: hnsCall failed in Win32: The process cannot access the file because it is being used by another process. (0x20)

It makes it seem like the container is locked in some way, perhaps two instances running concurrently. That’s not the issue.

The cause

The problem is that we are running the containers on ports that have been taken by other processes. So, in my case it is both the database container and the application server that are at fault

Database container

  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "3306:4000"

The database container listens on port 4000 on the network internal to the containers. The first port, 3306, is what is exposed to the host machine. It’s port 3306 that is already taken by another process. To find which application is running on that port, use netstat in PowerShell i.e.

netstat -aon | findstr 3306

which gives

MySQL is on port 3306, hence I can’t bind a container to that port and I get the weird ‘process cannot access the file’ error. That does make sense as the image dockersamples/tidb:nanoserver-sac2016 is a MySQL-compatible database – so it defaults to the same port.

Application container

The application server is also targeting an in-use port

  app:
    image: dockersamples/dotnet-album-viewer
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "80:80"

The application container will use port 80 internally and also port 80 externally. I don’t need to go to the trouble of running netstat. I know I’ve got a webserver (IIS) on that machine which has a default website running on port 80. I should have spotted that straight away and been a bit less baffled.

The resolution

I can either kill the processes that are taking those ports or change the external ports that the docker containers are going to use. I don’t really want to kill MySQL or IIS so I need two new ports.

I tend to use ports above 49200 as they are dynamic ports and are less likely to be in use (though I tend to use them up myself, so I still get clashes). Specifically

Well-known ports range from 0 through 1023.
Registered ports are 1024 to 49151.
Dynamic ports (also called private ports) are 49152 to 65535.
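
If hunting for a free port by hand gets tedious it can be automated; binding a TcpListener to port 0 asks the OS to hand over any free port (a quick sketch):

using System;
using System.Net;
using System.Net.Sockets;

class FreePort
{
	static void Main()
	{
		// port 0 tells the OS to pick any free port for us
		var listener = new TcpListener(IPAddress.Loopback, 0);
		listener.Start();
		var port = ((IPEndPoint)listener.LocalEndpoint).Port;
		listener.Stop();

		Console.WriteLine($"Free port: {port}");
	}
}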

Once you’ve identified a port you can check if it’s in use by using netstat again

netstat -aon | findstr 49301

And see if you get a result. So, the final resolution is

version: '3.2'

services:
  db:
    image: dockersamples/tidb:nanoserver-sac2016
    ports:
      - "49301:4000"

  app:
    image: dockersamples/dotnet-album-viewer
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "49250:80"
    environment:
      - "Data:Provider=MySQL"
      - "Data:ConnectionString=Server=db;Port=4000;Database=AlbumViewer;User=root;SslMode=None"      
    depends_on:
      - db

networks:
  default:
    external:
      name: nat

So db and app external ports are now in the dynamic range and when I run

docker-compose up -d

the error is gone, and my containers and the application come up. Lovely.

Useful links

The examples in this post come from here https://github.com/docker/labs/blob/master/windows/windows-containers/WindowsContainers.md

Useful post on how to check if your ports are in use. Further detail and explanation of what I’ve posted here.

List of well known ports https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers. Searching for port 3306 on Wikipedia would have shown me that it’s in use by MySQL. I guess a lot of people would have just known this. I didn’t.

Stack Overflow answer as a reminder of which port in the docker-compose file is the external port and which one is used internally by Docker. Spoiler alert: the first one is the externally exposed port and the second one is used by the applications running inside the docker containers. It’s the first port that is causing us the problems.