Whale inception

Tuesday, January 8, 2019

Docker in Docker (DinD) and Docker outside of Docker (DooD) are techniques for running a Docker container inside another container. And you could potentially run another, and another, and another… until you run out of resources.

Inception

Over the last couple of months I have found several handy uses for this. I plan to write about those soon. In this post I will focus on the technique itself.

Simple experiment

There are many images out there, most of them built for a specific purpose. Let’s start with a simple one.

> docker run -it --rm \
	-v /var/run/docker.sock:/var/run/docker.sock \
	docker
# We are now inside the inner docker
> docker run -it --rm alpine
# We are now inside the alpine that seems to be inside the inner
> echo Hello inner alpine!!

Note the volume mount; it’s very important, and I will talk about it later.

Now, without exiting any of those containers, open another terminal:

> docker ps
CONTAINER ID	IMAGE 	COMMAND               
30f92b110fd6    alpine	"/bin/sh"             
504490e3e1b7    docker	"docker-entrypoint.s…"

And…there are two containers on the outer docker.

If you connect to the inner and execute the same command:

> docker exec -it 504490e3e1b7 ash
# We are now inside the inner
> docker ps
CONTAINER ID	IMAGE 	COMMAND               
30f92b110fd6    alpine	"/bin/sh"             
504490e3e1b7    docker	"docker-entrypoint.s…"

You will get the same output.

Oneiric free fall

To start playing the inception game you could get into the inner and, from there, get inside the inner… and again, and again.

> docker exec -it 504490e3e1b7 ash
# We are now inside the inner
> docker exec -it 504490e3e1b7 ash
# We are now inside the inner from inside the inner
> docker exec -it 504490e3e1b7 ash
# We are now inside the inner inside the inner inside the inner

In the end it’s like opening a command line from a command line… and repeating until you are very tired.

The other “game” you could play is actually running a new inner from the inner:

> docker run -it --rm \
	-v /var/run/docker.sock:/var/run/docker.sock \
	docker
# We are now inside a new inner
> docker run -it --rm \
	-v /var/run/docker.sock:/var/run/docker.sock \
	docker
# We are now inside a deeper inner
> docker run -it --rm \
	-v /var/run/docker.sock:/var/run/docker.sock \
	docker
# We are now inside an even deeper inner

At this point we have 3, apparently nested, inners.

> docker ps
CONTAINER ID	IMAGE 	COMMAND                
ec0a384361fb	docker	"docker-entrypoint.s…"
28b580875eea	docker	"docker-entrypoint.s…"
e8e61c5ffb6b	docker	"docker-entrypoint.s…"

Nested: Well…actually

A docker command line interface (CLI) running on the host sends commands to the Docker Engine API; the engine then spins up containers, kills them, and so on. An inner container will look like it’s running Docker. It will have a CLI and all, but its commands will be sent to the outer Docker Engine, the one on the host… The only one.
There is no spoon
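One way to convince yourself there is a single engine is to compare the engine ID reported on the host and inside the inner. A minimal sketch, reusing the inner’s container ID from the example above (the `docker` guard is just so the snippet is a no-op on machines without Docker):

```shell
# Both commands should print the same engine ID, because the inner CLI
# talks to the host's engine through the mounted socket.
INNER=504490e3e1b7   # container ID of the inner, from the example above
if command -v docker >/dev/null 2>&1; then
  docker info --format '{{.ID}}'                            # on the host
  docker exec "$INNER" docker info --format '{{.ID}}'       # inside the inner
fi
```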

That communication is achieved with the volume mount I mentioned earlier: -v /var/run/docker.sock:/var/run/docker.sock. It maps the Unix socket from the host into the inner. At this point, whether the image you are using actually contains a Docker engine or just a CLI won’t matter: it will always “speak” to the host’s engine.
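In fact, since everything goes through that socket, a docker binary isn’t even required to drive the host’s engine. A sketch, assuming a curl built with Unix-socket support is available inside the container:

```shell
SOCK=/var/run/docker.sock
# Anything able to write to this socket can talk to the Engine API;
# the docker CLI is just one client. Here curl asks for the version.
if [ -S "$SOCK" ] && command -v curl >/dev/null 2>&1; then
  curl --silent --unix-socket "$SOCK" http://localhost/version
fi
```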

Pitfalls

First and most important: resist the temptation of running an actual Docker engine inside a container. That approach is very tricky, hacky, and flaky. You can use DinD or DooD images safely as long as you use the socket volume mount.

Volumes must match the host, not the container.

> docker run -it --rm \
	-v /var/run/docker.sock:/var/run/docker.sock \
	-v /home/user/data:/data \
	docker
# We are now inside the inner
> docker run -it --rm \
	-v /data:/data \
	alpine
# And if it does not fail, the result at least won't be what was expected.

In the previous excerpt, /data in the alpine does not map to /data in the inner, and therefore does not map to /home/user/data on the host. It maps to /data on the host. So if there was no /data on the host, an empty one would be created, as usual when mounting volumes. If there was a /data on the host, it would be mounted.
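A sketch of the fix, assuming the same /home/user/data directory as above: repeat the host path when mounting from the inner, because the engine that resolves the bind mount lives on the host (the `docker` guard just makes the snippet a no-op where Docker is absent):

```shell
# From inside the inner container, use the *host* path as the source.
HOST_DATA=/home/user/data   # path as it exists on the host, not the inner
if command -v docker >/dev/null 2>&1; then
  # /data inside alpine now really points at /home/user/data on the host
  docker run --rm -v "$HOST_DATA":/data alpine ls /data
fi
```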

A similar problem happens with networking.

> docker run -it --rm \
	-v /var/run/docker.sock:/var/run/docker.sock \
	-p 8080:8080 \
	docker
# We are now inside the inner
> docker run -it --rm \
	-p 8080:80 \
	nginx
docker: Error response from daemon: driver failed programming 
external connectivity on endpoint (acd374b02be): Bind for 
0.0.0.0:8080 failed: port is already allocated.

We were trying to map port 80 from the nginx to port 8080 on the host, not on the inner, and that port had already been allocated to map port 8080 from the inner.
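A sketch of the fix: publish nginx on a host port the inner run has not already claimed. The port 8081 here is an assumption; any free host port works.

```shell
FREE_PORT=8081   # 8080 is taken by the inner's own -p 8080:8080
if command -v docker >/dev/null 2>&1; then
  # start detached on the free port, then stop it again to clean up
  CID=$(docker run --rm -d -p "$FREE_PORT":80 nginx)
  docker stop "$CID"
fi
```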

Outro

DinD or DooD can be tricky and hard to grasp at first, but it is very powerful. I think that, if you are diving into Docker, you should give it a try. Once you have, you will understand much better what is really going on. Then, when it’s all crystal clear, I am sure you will get some juice out of it.

With this technique you can run your CI/CD pipelines without needing to install anything on your agents. I will write another post about that soon.

You could also orchestrate automated tasks, again with no need to customize your orchestrator or its distributed agents.