I have been reading a few white papers and articles about Continuous Delivery (CD) and Database Lifecycle Management (DLM) recently and have seen mention of Docker – in fact, the first time I came across it was while reading the Rainbird website (a Norwich-based AI company).
My first impression was that Docker, like VMware or VirtualBox, provides a virtual environment that can be used for developing or testing in isolation. It turns out these offerings are not exactly alike, and there are some key differences.
Essentially, Docker packages everything an application requires into a single image that can be used on Linux-based systems as well as Windows systems. This means the code or application runs in a consistent manner every time, irrespective of where it is running.
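As a rough sketch of what "packaging everything" looks like in practice – the base image, file names and dependency list here are hypothetical, not from a real project – a minimal Dockerfile might be:

```dockerfile
# Hypothetical example: package a small Python app together with its dependencies.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies first, so Docker can cache this layer
# and skip it on rebuilds where only the code has changed.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY app.py .

# The command the container runs on start-up.
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and started with `docker run myapp`, the same image should behave identically on any machine with Docker installed – which is the consistency point above.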
The main difference is that Docker containers share the host operating system kernel, whereas each virtual machine (VM) has an operating system of its own. The obvious benefit here is that Docker has a lower overhead in terms of disk space – it doesn’t need the 8 GB or more that a Windows operating system requires. Docker containers are also quicker to start up.
This is where the technologies are similar: containers and VMs both provide isolation, protecting the host from whatever is run within them.
The example on the Docker website illustrates the difference between virtual machines and containers in a nice graphic; broadly, the stacks compare as follows:

Virtual machine:
- App
- Bins/Libs
- Guest O/S
- Hypervisor
- Host O/S
- Infrastructure

Container:
- App
- Bins/Libs
- Docker Engine
- Host O/S
- Infrastructure
Immediately it is obvious that the container stack is lighter – this should mean lower set-up cost, shorter start-up times and more efficient use of resources.
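One quick way to see the shared-kernel point for yourself (assuming Docker is installed and the daemon is running – these commands won't work otherwise):

```shell
# Kernel version on the host.
uname -r

# Kernel version inside a container: it reports the same kernel as the host,
# because containers share the host kernel rather than booting their own.
# (--rm removes the container once the command exits.)
docker run --rm alpine uname -r
```

Run the equivalent inside a VM and you would see the guest's own kernel instead – that extra operating system is exactly the overhead the container stack avoids.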
I plan to find out more about Docker and will add further background notes, as well as steps on setting it up and configuring it. If you have used Docker or virtual machines then please feel free to leave a comment on what you like/dislike and any tips on configuration and use.