
Docker: setting up a reusable build environment

Working with Docker, I found a Dockerfile structure that can be reused across different images, so that I have a uniform build environment. Besides this, I chose to do most of the work in a shell script running in the container; this has some advantages: you can leverage the full power of the shell and it keeps the total number of layers low. Remember that Docker can handle at most 127 layers when using the devicemapper storage driver, while AUFS can handle something close to 42 layers.

Dockerfile structure

My Dockerfiles are divided into four main sections:

1) copy files;
2) run setup;
3) expose ports;
4) set the starting command.

Copy files

First copy all the necessary files into the container. If there are lots of files, it's better to bundle them in a tgz archive so you only need a few COPY commands. Remember that each command in a Dockerfile creates a new layer, so it's better to have as few as possible.

COPY ./website.tgz /home/
ADD ./start.sh /start.sh

I use /home as a handy location to save temporary files to.

Run setup

The actual work is done by the setup.sh shell script, which runs inside the container and uses the files copied earlier.

ADD ./setup.sh /setup.sh
RUN /bin/bash /setup.sh
RUN rm -f /setup.sh

Well, removing setup.sh isn't really necessary: you can leave it where it is.
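If you do want to get rid of it without spending an extra layer on the rm, the two RUN commands above can also be combined into a single one; a minimal variation would be:

RUN /bin/bash /setup.sh && rm -f /setup.sh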

The setup.sh script is a shell script, so it can do a lot of things: unpack the website.tgz archive into its final directory, change the permissions of the start.sh script to make it executable, install additional packages using yum or apt-get, and so on.

yum -y -q install tar



[ -d /srv ] || mkdir -m 755 /srv
cd /srv
tar xzf /home/website.tgz
rm /home/website.tgz

chmod 755 /start.sh
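Putting those fragments together, a minimal setup.sh sketch could look like the following (it assumes a CentOS/RHEL-style base image, hence yum, and uses the same paths as the examples above):

#!/bin/bash
# Stop at the first error so a broken setup doesn't produce a broken image.
set -e

# Install the packages needed by the setup itself.
yum -y -q install tar

# Unpack the website into its final directory.
[ -d /srv ] || mkdir -m 755 /srv
cd /srv
tar xzf /home/website.tgz
rm /home/website.tgz

# Make the starting script executable.
chmod 755 /start.sh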

Expose ports

Usually the service you're putting into a container uses some ports to communicate with the outside world, so every Dockerfile has a section exposing those ports.

EXPOSE 8100
EXPOSE 8101
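Keep in mind that EXPOSE only documents the ports; to actually reach them from the host you still have to publish them at run time, for example (image and container names here are just placeholders):

docker run -d --name website -p 8100:8100 -p 8101:8101 my-website-image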

Set the starting command

The starting command of my containers is always the start.sh script, so this part of the Dockerfile is also always the same.

CMD [ "/bin/bash", "/start.sh" ]

Conclusions

As you can see there are only two parts of the Dockerfile you need to change:

1) the files you have to copy;
2) the ports to be exposed.

Everything else remains the same. What changes the most from image to image are the setup.sh and start.sh scripts.
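For reference, here is a minimal sketch of the whole Dockerfile assembled from the snippets above (the base image is just an example, pick whatever fits your setup):

FROM centos:7

# 1) copy files
COPY ./website.tgz /home/
ADD ./start.sh /start.sh

# 2) run setup
ADD ./setup.sh /setup.sh
RUN /bin/bash /setup.sh
RUN rm -f /setup.sh

# 3) expose ports
EXPOSE 8100
EXPOSE 8101

# 4) set the starting command
CMD [ "/bin/bash", "/start.sh" ]

Building it is then the usual docker build -t my-website-image . and you're done.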

Actually, even my start.sh script doesn't change much, because I'm using Monit to manage processes, so the script looks like:

monit -l - -c /etc/monit.conf
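A minimal start.sh sketch along these lines (with one assumption on my part: the -I flag, which keeps Monit in the foreground so the container's main process doesn't exit) could be:

#!/bin/bash
# Run Monit in the foreground; Monit then supervises the actual services.
exec monit -I -c /etc/monit.conf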

But this is another story: I wrote an article on the use of Monit in Docker.
