Using Docker can be really annoying if you are trying to use it for purposes it probably was not designed for - at least in my opinion. But it is a really great solution if you don't want to maintain a VM or another virtual environment.
In my case, I have a Python script which I need to run periodically. I don't need and I don't want to maintain a VM for that. I just want to run this script from time to time. Of course, it is not just a script - it has dependencies (the Azure SDK for Python), so it's more like a bundle than a script - which is normal for Python and many other languages.
Docker is a perfect solution for me in this case. I can bundle the SDK and other dependencies and use the result as a base image for my script's runtime environment - all without storing any data on the image itself.
I have clear prerequisites:
- I need Azure SDK for Python.
- I have my script written in Python 3.6.
- I need to pass some parameters to my script to prevent hardcoding.
- I wish to run this script on almost any machine - Linux, Mac, and Windows.
- I wish to run this script periodically.
Azure SDK for Python
You can find it on PyPI (https://pypi.org/project/azure/) and you can install it using pip install azure. Nothing complicated here - it's open source, it's developed on GitHub, and it's available on PyPI.
An additional prerequisite for the Azure SDK for Python is the keyrings.alt package - due to this issue.
So, I have:
pip install azure keyrings.alt
Python 3.6
I'm working in a Python 3.6 environment locally on my computer, where I'm developing, so I wish to have the same environment in the script's runtime. It's probably compatible with 3.5, 3.4 and 3.3 but... I'm working on 3.6.
Let's see my script (sample) - it lists all resource groups in my subscription:
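The original sample is not reproduced here, but a minimal sketch along these lines shows the idea - I'm assuming the classic azure meta-package modules of that era, and the environment variable names match the docker run command later in this post:

```python
import os

# Credentials come from environment variables (see the docker run command
# later in this post) instead of being hardcoded in the script.
TENANT_ID = os.getenv("TENANT_ID")
CLIENT = os.getenv("CLIENT")
KEY = os.getenv("KEY")
SUBSCRIPTION = os.getenv("SUBSCRIPTION")


def list_resource_groups():
    # Imported inside the function so the module can be loaded
    # even on a machine without the Azure SDK installed.
    from azure.common.credentials import ServicePrincipalCredentials
    from azure.mgmt.resource import ResourceManagementClient

    credentials = ServicePrincipalCredentials(
        client_id=CLIENT, secret=KEY, tenant=TENANT_ID
    )
    client = ResourceManagementClient(credentials, SUBSCRIPTION)

    # Print the name of every resource group in the subscription
    for group in client.resource_groups.list():
        print(group.name)


if __name__ == "__main__" and SUBSCRIPTION:
    list_resource_groups()
```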
Parameters
As you can see, I'm not hardcoding things like the tenant, application ID, application key or subscription ID in my script. I'm using os.getenv() to extract them from environment variables. This means that I need to include some "sensitive" data in my environment.
Interoperability
I don't want to focus on the question of whether my script will run on Windows Server or Linux... or Mac, which I use personally. Python is Python but... environments differ between operating systems. And this is where Docker comes into the game: it does not matter where you run your Docker image - it will be exactly the same from the code/script perspective.
We have two options in the Docker world - the first is to use pre-built images, prepared by the community or team-mates; the second is to use custom images we build on our own.
If you are looking for ready-to-use images, check on Docker Hub or Docker Store.
But if you are looking for a more flexible solution, or you just want to have a lot of fun, try building your own Docker image using a Dockerfile. As stated above, we need Python 3.6, the azure package and the keyrings.alt package. Let's create a Dockerfile for that:
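A minimal Dockerfile for this could look as follows (a sketch - the exact file from the original post may differ slightly):

```dockerfile
# Official Python image from Docker Hub, pinned to version 3.6
FROM python:3.6

# Install the Azure SDK for Python and keyrings.alt on top of it
RUN pip install azure keyrings.alt
```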
As you can see, it's really simple. We are getting the Python image with the tag pointing to version 3.6 from Docker Hub - a community repository - and it is the official Python image for Docker. You can check it here.
The second step is to install the packages we need on top of the Python image. To do that, we are using pip, of course. Building the image means that, starting from the Python 3.6 image, we install the additional packages we want, and then generate a new image based on the base one plus the changes we made. After that, we have a static environment image with Python 3.6 and the Azure SDK for Python.
Complete Docker image
Having our script and a Docker image based on the official Python 3.6 image, we can prepare a complete Docker image with a ready-to-use solution. We need to merge the script with the environment image. We will do it using a Dockerfile, adding the script to it:
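Extending the Dockerfile above, the script can be copied into the image with a single COPY instruction (a sketch - placing run.py at the image root is my assumption):

```dockerfile
FROM python:3.6

RUN pip install azure keyrings.alt

# Add the script itself to the image
COPY run.py /run.py
```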
Runtime
Assuming that we have created the image above, we have a complete solution: we have the Python 3.6 interpreter, the Azure SDK for Python and the keyrings.alt package. But when this image runs, it will do... nothing. The script is inside, but the startup command is not declared. We need to declare what the image will do on startup:
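One way to do that is the CMD instruction, which gives the complete Dockerfile (again a sketch, under the same assumptions as above):

```dockerfile
FROM python:3.6

RUN pip install azure keyrings.alt

COPY run.py /run.py

# Run the script with the Python interpreter on container startup
CMD ["python", "/run.py"]
```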
And this is a complete solution. On startup, the container will run the run.py script using the Python 3.6 interpreter, where the Azure SDK for Python is installed along with the keyrings.alt package.
Build
At this point, we need to build our Docker image, and to do so we need to have the run.py script and the Dockerfile in the same directory. Using a shell where Docker is installed - no matter on what OS - go to this directory and run the command:
docker build -t imagename .
We have built a Docker image based on the definition from the Dockerfile, tagging it as "imagename"; this tag will be used as the image name when running it.
Run
Now we know three things:
- We have an "imagename" Docker image.
- We want to run the run.py script which is in that image.
- We need to "inject" environment variables with sensitive data to the runtime.
The run.py script will run automatically because we built the image that way. The only thing we need is to pass the environment variables to the container on startup. To do that, we will use -e arguments to the docker run command. Let's do it:
docker run -e "TENANT_ID=<tenant_id>" -e "CLIENT=<application_id>" -e "KEY=<key>" -e "SUBSCRIPTION=<subscription_id>" imagename:latest
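As for the "periodically" prerequisite, one simple option (an assumption on my part - any scheduler will do) is a crontab entry on the Docker host, e.g. every hour:

```
0 * * * * docker run --rm -e "TENANT_ID=<tenant_id>" -e "CLIENT=<application_id>" -e "KEY=<key>" -e "SUBSCRIPTION=<subscription_id>" imagename:latest
```

The --rm flag removes the container after each run, so repeated executions don't pile up stopped containers.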
Conclusion
The script should run, and you should start treating Docker as your daily tool. Not because it's fancy and cool, but because it's easy to use, simple, and it works almost everywhere.
If you don't want to wait for the image to build with the SDK, I have created a ready-to-use image. You can find it here and use it as a base: