Bring your Whole Army with Docker to Attack DR
“Life is constantly providing us with new funds, new resources, even when we are reduced to immobility. In life’s ledger there is no such thing as frozen assets.”
I’d like to take a fresh approach: using Docker for DR rather than worrying about it. Most people in IT seem worried about backing up Docker instead of leveraging it. But let’s not get too far ahead of ourselves; first, let’s talk about the technology behind what a container is.
Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. This could be from a developer’s laptop to a test environment, from a staging environment into production, or from a physical machine in a data center to a virtual machine in a private or public cloud. That sounds a lot like a virtual machine, and in some sense it is, but the two are different.
Problems arise when the supporting software environment is not identical, says Solomon Hykes, the creator of Docker, “You’re going to test using Python 2.7, and then it’s going to run on Python 3 in production and something weird will happen. Or you’ll rely on the behavior of a certain version of an SSL library and another one will be installed. You’ll run your tests on Debian and production is on Red Hat and all sorts of weird things happen.”
And it’s not just different software that can make a difference, he added, “The network topology might be different, or the security policies and storage might be different but the software has to run on it.”
Put simply, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
By contrast, a server running three containerized applications with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. Shared parts of the operating system are read-only, while each container has its own mount (i.e., a way to access the container) for writing. That means containers are much more lightweight and use far fewer resources than virtual machines. I would like to point out that Solaris has had this technology for over a decade; it just was never expounded upon beyond creating Solaris OS containers with a simple line of code.
To summarize: containers cannot generally provide the same level of isolation as hardware virtualization, but unlike virtual machines they weigh in at a few megabytes rather than a few gigabytes, and because of that they also have smaller attack surfaces, which makes them all the more desirable. A mental image I like to use is seeing an OS in 3D with the application running on it: I picture reaching into the OS, grabbing the application, and pulling it out, and with it comes everything it needs to function. Kind of like pulling those leeches off a shark and letting it swim free of bloat.
USE CASE for this example: a remote site has started doing heavy development to enhance an application that was never initially considered important enough to be part of the backup, because at the time it was just beta testing. One option is to Dockerize the actual application and send the containers to the remote locations. For us, we will deploy a Docker image with a backup Media Server client to that subnet, which makes sense because the multiple development boxes on that subnet will cache the backups locally and then send them back to the master backup server.
As of now Docker only runs on Linux, but Microsoft is planning support and containers of its own by the end of the summer. I will run it on CentOS, my favorite OS.
1. To get started, install Docker using yum (the Yellowdog Updater, Modified); it’s #TBT.
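A minimal sketch of the install on CentOS 7, assuming the stock “docker” package from the base/extras repositories (package and repo names vary by release):

    sudo yum install -y docker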
2. Start the service, make sure Docker always starts at boot, and pull a new CentOS-based container image.
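Something like the following, assuming a systemd-based CentOS 7 host; the image name “centos” pulls the latest official CentOS base from the public registry:

    sudo systemctl start docker       # start the daemon now
    sudo systemctl enable docker      # start it automatically at boot
    sudo docker pull centos           # fetch the latest CentOS base image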
3. docker run + the image name you would like to run + the command to run within the container. If the image doesn’t exist on your local machine, Docker will attempt to fetch it from the public image registry. Let’s go into the container and make sure we can run from the bash command line; you will see that root has left your host and is now inside the container, as the prompt shows.
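For example, using the centos image pulled above:

    sudo docker run -t -i centos /bin/bash
    # the prompt changes to something like [root@3f4a2b1c9d0e /]#,
    # showing that root is now inside the container rather than on the host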
4. For a moment, let’s jump back out of our container to the HOST using a couple of hotkeys.
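The detach key sequence, pressed at the container’s bash prompt:

    # Press Ctrl-p followed by Ctrl-q to detach and return to the host shell
    # while leaving the container running in the background.
    # (Typing "exit" instead would stop the container.)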
5. docker ps will give you container IDs and will also list ports and the changes made thus far. The overlay filesystem works similarly to git: our image now builds off of the CentOS base and adds another layer with software on top. These layers are cached separately so that you won’t have to pull down the CentOS base more than once.
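For example:

    sudo docker ps        # running containers: IDs, image, command, ports
    sudo docker ps -a     # include stopped containers as well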
6. Since there’s only one container listed, the latest CentOS we just pulled down, we will attach to it without needing to issue a run command. (It can be useful to have a container in a ready state that isn’t always on, per se; think of a nightly cron job where a container runs a health check on your database, produces a report, and goes back to sleep.)
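A sketch, using a hypothetical container ID taken from the docker ps output above:

    sudo docker start 3f4a2b1c9d0e     # wake the stopped container
    sudo docker attach 3f4a2b1c9d0e    # attach to its running shell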
7. Now act inside the container just as you would on a host and pull in Git using yum; there are thousands of ready-made containers out there to draw on as well.
8. Check your Git version and set your credentials to use the repo.
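A rough example of steps 7 and 8 run inside the container; the name and e-mail are placeholders for your own Git credentials:

    yum install -y git
    git --version
    git config --global user.name  "Your Name"
    git config --global user.email "you@example.com"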
9. Now list your images and test some functionality of the latest image we updated: if you run /bin/echo “hello world” as your command, the container will start, print hello world, and then stop.
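For example:

    sudo docker images                                 # list local images
    sudo docker run centos /bin/echo "hello world"     # starts, prints, then exits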
10. Now that it’s confirmed, we can jump into a container with Docker-style root privileges to make the changes we want within the container. The -t and -i flags allocate a pseudo-TTY and keep stdin open even if not attached. This will allow you to use the container like a traditional VM as long as the bash prompt is running. Install your software using yum (see the sketch after the next step).
11. Building the image to use for DR: use yum just as you always would to get software and updates. Just remember this is bare-bones, meaning you have to install everything yourself: samba-client, cifs-utils, and so on.
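A sketch covering steps 10 and 11 together; the package list is only an example of the kind of client utilities a bare base image is missing:

    sudo docker run -t -i centos /bin/bash
    # then, inside the container:
    yum install -y samba-client cifs-utils nfs-utils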
12. Now that you can start mounting NFS and SMB shares, go into vi /etc/fstab to set up permanent mount points. (Take note of the local mount point I created 🙂)
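A hypothetical fstab entry; the server name, export path, and local mount point are placeholders, and depending on how the container was started, mounting inside it may require extra privileges:

    vi /etc/fstab
    # example line added to /etc/fstab:
    #   fileserver:/export/software   /mnt/software   nfs   defaults   0 0
    mkdir -p /mnt/software && mount /mnt/software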
13. Now go into your software share directory on your network to start building your container to host the applications you choose. (Personally I like to use yum for local RPM installs as well, because it will check for dependencies and install them; remember, this is a bare OS!)
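For example, assuming the share is mounted at /mnt/software and contains a local RPM (the file name is a placeholder):

    cd /mnt/software
    yum localinstall -y your-application.rpm   # pulls any dependencies from the repos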
14. Now that we have our first robust container, we want to commit our changes to that image. With Docker, the process of saving the state is called committing. Commit basically saves the difference between the old image and the new state.
(Take note of the new ID given)
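A sketch of the commit, using a placeholder container ID and repository name:

    sudo docker ps -a                               # find the container ID
    sudo docker commit 3f4a2b1c9d0e myuser/dr-app   # prints the new image ID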
15. Now we can run a command from the new name we just committed the image to.
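For example, with the placeholder name used above:

    sudo docker run -t -i myuser/dr-app /bin/bash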
16. Earlier we downloaded the CentOS image remotely from the Docker public registry because it didn’t exist on our local machine. We can also push local images to the public registry (or a private registry).
Join the community: it’s important to note that you can commit using any username and image name locally, but to push an image to the public registry, the username must be a valid Docker.IO user account. https://hub.Docker.com/account/signup/
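A sketch of the push to the public registry, assuming “myuser” is a valid Docker Hub account:

    sudo docker login                 # prompts for your Docker Hub credentials
    sudo docker push myuser/dr-app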
17. Now you can start creating directories and repositories on the site and marking them private or public. You can standardize and deploy a DR VM to any of your locations anywhere in the world with the press of a button using a cloud deployment system.
By pushing (uploading) images that you build, you can easily retrieve them for use on other hosts as well as share them with other users. To push to a private repository the syntax is very similar: first, we prefix our image with the host running our private registry instead of our username. List images by running docker images and insert the correct ID into the tag command:
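A sketch, assuming a private registry reachable at registry.example.com:5000 (both the host and the image ID are placeholders):

    sudo docker images                                               # note the image ID
    sudo docker tag 7d9495d03763 registry.example.com:5000/dr-app    # prefix with the registry host
    sudo docker push registry.example.com:5000/dr-app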
18. Now we are going to commit our image to a name so we can send it into the cloud; run the commit command against the container shown below. Then run docker images and we should see the name we gave it as the latest image.
19. Now it’s time to push the image up to our registry to deploy out to the other locations of our choice. We run the docker push command against the image we just committed. You will be prompted for the username and credentials you set up when creating your login a short time ago, and then you’ll get confirmation that your image is in the repository!
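Putting steps 18 and 19 together, a minimal recap with the same placeholder names:

    sudo docker commit 3f4a2b1c9d0e myuser/dr-app:latest   # commit the final state
    sudo docker images                                      # the new name appears as the latest image
    sudo docker push myuser/dr-app                          # upload it to the registry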
So what are the implications of all this, you might be wondering? How about a VM with 10 Docker images running inside of it that can be deployed to any subnet or location within seconds? One Docker image could have a NetBackup front end, another a load-balance analyzer, another a replica website front end, another could be a database, and another a database caching service, all inside a single VM, AWS, or Azure image. The possibilities of what’s running in your DR container are almost limitless, and it can be sent anywhere you choose.
Here are a couple of other Docker uses, and use cases for that matter, to start you on your Docker journey. It’s relatively new tech, so as of now the potential is largely untapped.