Creating a Shell Script That Works Across Multiple Linux Distributions
Do you write Linux shell scripts? Do your scripts work on more than one distribution? If so, what does your development environment look like, and what tools do you use to develop, debug, and maintain your scripts quickly and relatively hassle-free?
The reason I ask is that I’ve been tearing through The Insider’s Guide to Technical Writing recently. As a result, I’ve gained a new lease on life as a technical writer.
This isn’t to say that I didn’t have a strong professional work ethic or solid experience before reading the book. It’s that, since I began reading it, I’ve felt much more confident in how I undertake the role than I ever did before.
Specifically, I only document and approve PRs about topics that I’ve personally tested. If I’m not sure they work, they don’t get my approval.
However, like unit testing or security best practices in software, sometimes you can feel under pressure from deadlines to get things done quicker than you should, effectively rushing things through without being 100% sure that they work.
So it was recently, when I was going through the manual installation section of the ownCloud administration documentation. In response to a new issue, I started a quick run-through of the steps outlined, only to find a number of omissions and errors.
These omissions and errors meant there was room for doubt and mistakes, especially for newer ownCloud users.
What’s more, while code samples are very helpful (you don’t have to figure out what to type), they still leave a fair amount of work for the user, who has to copy and paste each example into their terminal manually.
As we’re attempting to help them save time and effort, why not provide a script that they can use as part of a build process, one that can be scheduled via cron?
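For example, a crontab entry along the lines of the one below would re-run such a script once a week. The script name and path here are purely illustrative:

# Run the dependency installer every Sunday at 03:00
# (the path and filename below are hypothetical)
0 3 * * 0 /usr/local/bin/install_owncloud_dependencies.sh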
So, despite (or perhaps because of) the factual errors and omissions, I became excited at the prospect of revising the documentation.
Now I could have raised an issue with the core development team to create the script, detailing what it should do when it was ready.
However, I’m not only a technical writer; I’m also a software developer. And what do developers love to do more than almost anything else? Design and write code!
So it was that I started designing a shell script that would automate the process of installing all of ownCloud’s dependencies.
The first thing to do was to assess what it needed to do and the environment in which it would run.
As I already knew what it had to do, using the documentation as my guide, I moved on to assessing the environment requirements.
Currently, ownCloud officially supports several Linux distributions, including Debian, Ubuntu, openSUSE, CentOS, and RHEL.
So when the script was finished, it had to achieve the same outcome, regardless of which distribution it was run on.
A bash shell script seemed to make the most sense. I could have written it in Ruby, Python, or PHP, but I’ve always associated shell scripts with SysAdmin and DevOps work. What’s more, I had an itch to scratch!
Here’s an admission: I’ve been a bash script hacker since 1999. But I’ve never actually developed my proficiency past a certain point.
So I saw this as an opportunity to grow my skills and learn more about bash, while indulging one of my oldest technical passions: Linux. On top of that, I’ve long been curious about how the different distributions organise themselves.
And so the decision was made. Then came the next question:
How would I make it portable, yet not spend more time than was necessary provisioning the different Linux distributions?
VirtualBox, Vagrant, and a provisioner such as Ansible seemed out of the question. If I went down that path, I’d likely spend more time writing the provisioning scripts than writing the actual setup script, which was my main focus!
So I chose to go with Docker instead, as it demands only a limited amount of time and effort to get an environment up and going.
Given that, I created a custom Docker setup, based largely on an existing project, which you can find on GitHub. In it, you can see that it uses Docker Compose to create a two-container setup.
There’s a web service that provides Apache 2 and PHP 7, and there’s a MySQL container that, surprise, provides MySQL.
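If you’d like to try a setup like this yourself, the day-to-day workflow is only a handful of commands. The sketch below assumes that the web service is literally named web in docker-compose.yml, which may differ from the actual project:

docker-compose build          # build the web image from the chosen Dockerfile
docker-compose up -d          # start the web and MySQL containers in the background
docker-compose exec web bash  # open a shell in the web container to try the script
docker-compose down           # tear everything down again when you're done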
The web service uses any one of three Dockerfiles, each based on a different base image: one for Ubuntu, one for openSUSE Leap 42.3, and one for CentOS 7.
Each of them installs a set of packages, sets up a user and group, and sets permissions on a required directory so that that user can access it.
While they don’t do a lot, they’re still essential. By using them, I was able to write a shell script that achieved the same outcome across each distribution.
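In shell terms, the user, group, and permission handling in each Dockerfile boils down to something like the following. The user, group, and directory names here are assumptions for the sake of illustration:

# Run as root inside the image; create a dedicated group and user
groupadd --system owncloud
useradd --system --gid owncloud --create-home owncloud

# Make sure the application directory exists and is owned by that user
mkdir -p /var/www/owncloud
chown -R owncloud:owncloud /var/www/owncloud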
The script itself doesn’t do a lot, but building it taught me quite a bit. Interestingly, the key step was determining which distribution was being used, so that the script knew which package manager to use to install the required packages, and what those packages were called.
This was done with two functions: which_distro and install_required_packages.
function which_distro()
{
    # Debian, SUSE, and Ubuntu consistently identify themselves in /etc/issue
    case "$( grep -Eoi 'Debian|SUSE|Ubuntu' /etc/issue )" in
        "Debian")
            echo "Debian"
            ;;
        "SUSE")
            echo "SUSE"
            ;;
        "Ubuntu")
            echo "Ubuntu"
            ;;
    esac

    # Need to do a bit more work to detect RedHat-based distributions
    redhat_release_file=/etc/redhat-release
    if [ -e "$redhat_release_file" ]; then
        case "$( grep -Eoi 'CentOS' "$redhat_release_file" )" in
            "CentOS")
                echo "CentOS"
                ;;
        esac
    fi
}
The function first greps /etc/issue for one of Debian, SUSE, or Ubuntu. If it contains one of the three strings, then we know it’s that distribution. I did this because I’ve found that these distributions consistently identify themselves there.
Determining whether the distribution was RedHat or CentOS was a bit harder, as these two don’t always store their identifying information in /etc/issue. They can store it there, but they can also store it in /proc/version as well as in /etc/redhat-release. Of these, /etc/redhat-release seems to be the most consistent. Given that, if /etc/redhat-release is available, then we know that one of the two distributions is being used. From there, the script greps for which of them it is, similar to the previous approach.
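As an aside, if you only need to support reasonably modern releases, most distributions also ship /etc/os-release, which can be sourced directly. This isn’t what the script currently does; it’s just a sketch of an alternative worth considering:

# /etc/os-release defines variables such as ID (e.g. ubuntu, debian, opensuse, centos)
if [ -r /etc/os-release ]; then
    . /etc/os-release
    echo "$ID"
fi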
function install_required_packages()
{
    case "$( which_distro )" in
        "SUSE")
            echo "Installing required packages on SUSE"
            install_required_suse_packages
            ;;
        "Ubuntu"|"Debian")
            echo "Installing required packages on Ubuntu/Debian"
            install_required_ubuntu_debian_packages
            ;;
        "CentOS")
            echo "Installing required packages on CentOS"
            install_required_centos_packages
            ;;
    esac
}
The second function just uses which_distro to determine which distribution is being used, and then calls the matching distribution-specific installer.
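For context, the two functions might be wired together at the bottom of the script along these lines. This is only a sketch, not the script’s actual main section:

# Abort early if the distribution couldn't be detected
if [ -z "$( which_distro )" ]; then
    echo "Sorry, this distribution isn't supported." >&2
    exit 1
fi

install_required_packages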
Now for the more interesting part, the distribution-specific installers.
I started off with Ubuntu/Debian, as I’ve got the most experience with them, having used them since about 2003. What’s heartening is that they required the least amount of effort. Then again, I know a number of their idioms and quirks, so perhaps I’m biased.
function install_required_ubuntu_debian_packages()
{
    sudo apt-get -y -q update && \
        sudo apt-get install -y -q wget make npm \
            nodejs nodejs-legacy unzip git
}
I then refactored the script to work with openSUSE. To be honest, while I live in Nuremberg, the hometown of SUSE (I believe), I’ve barely used it.
What’s more, it took the most effort to code. While the changes largely amount to using Zypper instead of Apt, it took some experimenting both to get the base environment working and to find the right combination of packages and dependencies.
function install_required_suse_packages()
{
    sudo zypper --quiet --non-interactive install \
        wget make nodejs6 nodejs-common \
        unzip git npm6 phantomjs
}
Finally, I added an installer for CentOS. While it looks quite large in terms of lines of code, it took less effort than openSUSE. To be fair, it took time to figure out how to get PhantomJS up and going, but not all that much.
function install_required_centos_packages()
{
    sudo yum update -q -y
    sudo yum --enablerepo=cr -q -y install wget make nodejs unzip git npm bzip2 file

    # Install PhantomJS - see https://www.bonusbits.com/wiki/HowTo:Install_PhantomJS_on_CentOS
    # It's not in the official repos, so needs to be installed independently.
    sudo yum install -q -y fontconfig freetype freetype-devel fontconfig-devel libstdc++
    sudo mkdir -p /opt/phantomjs
    wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
    sudo tar -jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 --strip-components 1 --directory /opt/phantomjs/
    sudo ln -s /opt/phantomjs/bin/phantomjs /usr/bin/phantomjs
}
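Once that’s done, a quick sanity check confirms that PhantomJS is on the PATH:

phantomjs --version   # should print something like 2.1.1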
At this stage, I’ve not completed refactoring the script to work with RHEL. I expect to have that done later in the week.
It’s been an interesting journey building a shell script that works across multiple Linux distributions, and I learned a few things along the way.
While it’s frustrating that the distributions don’t all provide the same packages, it makes sense. They’re created by different people, to serve different audiences and needs.
It makes no sense for them to be identical. And it’s something to keep in mind when you’re writing shell scripts; doing so may save you a lot of confusion and frustration.
And that’s been a whirlwind run-through of how Docker helped me create a shell script that works across multiple Linux distributions, as well as a step-through of the relevant sections of the script.
If you’re a bash expert (or more of an expert than I am), I’d love to know how you would improve or change the script, and whether there are better, quicker, or easier ways to do it.
Please leave your feedback in the comments or the discussion of the script on GitHub.