From Around The Web: Red Hat Collaboration with Docker

Red Hat recently announced a new collaboration with the developers of the popular Docker software, which packages applications together with their dependencies into “containers,” rather than requiring a full guest OS for each application as in the traditional (monolithic) virtualization approach.  This development is quite interesting: Docker development and bug fixing should accelerate, as should adoption of this method of sandboxing applications.  The benefits should also trickle down to Fedora and other distros with a similar (RPM-based) structure.  The move will focus, at first, on removing the need for AuFS in favor of a provisioning method based on the device-mapper technology already present in Fedora, RHEL, and others, which will ensure compatibility with upstream kernel versions.
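As a quick illustration, you can check which storage backend a local Docker installation is using (this assumes a running Docker daemon; field names and output vary across Docker versions, so treat this as a sketch):

```shell
# Ask the Docker daemon which storage driver it is using.
# On a Fedora/RHEL build from this collaboration, the driver
# reported would be the device-mapper backend rather than AuFS.
docker info | grep -i 'driver'

# Hypothetical output on a device-mapper based host:
#   Storage Driver: devicemapper
```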

What does Docker do?  A little backstory:

Docker’s core functionality is running applications inside self-contained environments.  This is not to be confused with an application-virtualization package, such as App-V, which still requires a host OS even if only one application is being used.  Each Docker “container” shares resources through the Docker engine, which leverages the Linux kernel’s LXC technology to deliver the “pre-packaged” applications.  This lets Docker run on almost anything: bare metal, hosted VMs, guest OSes, and any native OS for that matter.  Docker containers are thus independent of hardware, language, framework, packaging system, and hosting provider, which avoids having to repackage each application multiple times for different environments.  That should make Docker a favorable, easy-to-deploy alternative to traditional VM-per-application setups.
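A minimal sketch of that workflow, assuming a working Docker install and the public `ubuntu` base image (commands per the early Docker CLI; exact flags may differ in your version):

```shell
# Pull a base image once from the public registry.
docker pull ubuntu

# Run a throwaway container: the process gets its own isolated
# filesystem and namespaces, but shares the host's kernel,
# so there is no guest OS to boot.
docker run ubuntu echo "hello from a container"

# Each invocation starts a fresh container from the same image.
docker run ubuntu cat /etc/lsb-release
```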

A good analogy is shipping commodities and goods such as crates, cars, oil barrels, electronics and so on.  No matter what the good, it gets packed into a standard shipping container (like those on a large cargo vessel), and that common container can be loaded, transported, unpacked, and used anywhere it is placed.  You no longer have to reconfigure applications for different environments, so the guesswork is removed.  When a container is updated, the Image Registry simply redeploys the new container with minimal overhead afterwards, because Docker stores only the ‘diff’ changes of each application container; this even allows two hosts to run different versions.  Changes are pushed to the Image Registry, and it is up to each host whether to update.
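The ‘diff’ layering described above can be seen directly with the CLI.  Here is a sketch, assuming a Docker daemon is running; the image name `myuser/myapp` and the `<container-id>` placeholder are illustrative, not real values:

```shell
# Start from a base image and make a change inside a container.
docker run -i -t ubuntu /bin/bash
# ... inside the container, e.g. install a package, then exit ...

# Show only the filesystem changes relative to the base image.
docker diff <container-id>

# Save those changes as a new image layer, then push it; hosts
# that already hold the base image fetch only the new layer.
docker commit <container-id> myuser/myapp
docker push myuser/myapp
```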

Alternatives within Docker exist for different scenarios.  If you are worried about control in the enterprise, Docker can work with management tools for policy, configuration management, and automation to prevent inadvertent upgrades.  Complementary tools such as Chef and Puppet can help administer machine configurations, though that work has to be redone for each new application or version.  Sites outside the enterprise, however, do not get all these benefits, due to the lack of access to the Image Registry.  To see more use cases currently in the real world, see the source link below.

This move will cement an exciting time for Red Hat’s PaaS solutions, such as OpenShift, integrating this cartridge model with OpenShift’s management model.  Red Hat employees have already begun contributing to Docker, as the project’s active GitHub changelogs show.  Keep in mind, though, that the Docker project is still under heavy development.  Thank you, Red Hat!

Learn more at

About professorkaos64

Posted on 2013-09-20, in From Around The Web.
