Monday, November 23, 2015

Deploying services to a heterogeneous network of machines with Disnix

Last week I was in Berlin to attend the first official Nix conference: NixCon 2015. Besides just being there, I also gave a talk about deploying (micro)services with Disnix.

In my talk, I elaborated on various aspects, such as microservices in general and their implications (such as increased operational complexity), the concepts of Disnix, and a number of examples, including a real-life usage scenario.

I gave two live demos in the talk. The first demo is IMHO quite interesting, because it shows the full potential of Disnix when you have to deal with many heterogeneous traits of service-oriented systems and their environments -- we deploy services to a network of machines running multiple operating systems, having multiple CPU architectures, and reachable through multiple connection protocols (e.g. SSH and SOAP/HTTP).

Furthermore, I consider it a nice example that should be relatively straightforward for others to repeat. The example is small (only two services that communicate through a TCP socket) and it imposes no specific requirements on the target systems, such as infrastructure components (e.g. a DBMS or application server) that must be preinstalled first.

In this blog post, I will describe what I did to set up the machines and I will explain how to repeat the example deployment scenarios shown in the presentation.

Configuring the target machines

Despite being a simple example, the thing that makes repeating the demo hard is that Disnix expects the target machines to already be present, running the Nix package manager and the Disnix service that is responsible for executing deployment steps remotely.

For the demo, I manually instantiated three VirtualBox VMs and installed their configurations by hand, which took me quite a bit of effort.

Instantiating the VMs

For instantiation of the VirtualBox VMs, most of the standard settings were sufficient -- I simply provided the operating system type and CPU architecture to VirtualBox and used the recommended disk and RAM settings that VirtualBox provided me.

The only modification I have made to the VM configurations is adding an additional network interface. The first network interface is used to connect to the host machine and the internet (with the host machine being the gateway). The second interface is used to allow the host machine to connect to any VM belonging to the same private subnet.

To configure the second network interface, I right-click on the corresponding VM, pick the 'Network' option and open the 'Adapter 2' tab. In this tab, I enable the adapter and attach it to a host-only network, so that the host machine can reach the VM directly.
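
As a side note, the same adapter configuration can also be applied non-interactively with the VBoxManage command-line tool. The VM name and host-only network name below are examples; adjust them to your own setup:

$ VBoxManage modifyvm "Kubuntu" --nic2 hostonly --hostonlyadapter2 vboxnet0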

Installing the operating systems

For the Kubuntu and Windows 7 machines, I just followed their standard installation procedures. For the NixOS machine, I used the following NixOS configuration file:

{ pkgs, ... }:

{
  boot.loader.grub.device = "/dev/sda";

  fileSystems = {
    "/" = { label = "root"; };
  };

  networking.firewall.enable = false;

  services.openssh.enable = true;
  services.tomcat.enable = true;
  services.disnix.enable = true;
  services.disnix.useWebServiceInterface = true;

  environment.systemPackages = [ ];
}

The above configuration file captures a machine configuration providing OpenSSH, Apache Tomcat (for hosting the web service interface) and the Disnix service with the web service interface enabled.

Configuring SSH

The Kubuntu and Windows 7 machines require an OpenSSH server to be running to allow deployment operations to be executed from a remote location.

I ran the following command-line instruction to enable the OpenSSH server on Kubuntu:

$ sudo apt-get install openssh-server

I ran the following command on Cygwin to configure the OpenSSH server:

$ ssh-host-config

One of the things the above script does is set up a Windows service that runs the SSH daemon. It can be started by opening 'Control Panel -> System and Security -> Administrative Tools -> Services', right-clicking on 'CYGWIN sshd' and selecting 'Start'.

Setting up user accounts

We need to set up specialized user accounts to allow the coordinator machine to connect to the target machines. By default, the coordinator machine connects as the same user that carries out the deployment process. I have configured all three VMs to have a user account named 'sander'.

To prevent the SSH client from asking for a password for each request, we must set up a pair of public-private SSH keys. This can be done by running:

$ ssh-keygen

After generating the keys, we must upload the public key (~/.ssh/) to all the target machines in the network and authorize it there. Basically, we need to append it to their authorized_keys files and set the correct file permissions:

$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ cat >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
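
sshd is picky about the permissions on these files. The snippet below replays the same setup in a scratch directory (the key string is a placeholder, not a real public key), so the expected modes can be verified without touching a real account:

```shell
# Replay the authorized_keys setup in a scratch directory; the key
# string below is a placeholder, not a real public key.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"
echo "ssh-rsa AAAAB3...example sander@coordinator" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"

# sshd ignores keys when these files are writable by other users:
stat -c '%a' "$home/.ssh" "$home/.ssh/authorized_keys"
```

On systems that provide it, the ssh-copy-id utility automates uploading and appending the key in one step.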

Installing Nix, Dysnomia and Disnix

The next step is installing the required deployment tools on the host machines. For the NixOS machine all required tools have been installed as part of the system configuration, so no additional installation steps are required. For the other machines we must manually install Nix, Dysnomia and Disnix.

On the Kubuntu machine, I first did a single-user installation of the Nix package manager under my own user account:

$ curl | sh

After installing Nix, I deployed Dysnomia from Nixpkgs. The following command-line instruction configures Dysnomia to use the direct activation mechanism for processes:

$ nix-env -i $(nix-build -E 'with import <nixpkgs> {}; dysnomia.override { jobTemplate = "direct"; }')

Installing Disnix can be done as follows:

$ nix-env -f '<nixpkgs>' -iA disnix

We must run a few additional steps to get the Disnix service running. The following command copies the Disnix DBus configuration file, allowing the service to run on the system bus and granting permissions to the appropriate class of users:

$ sudo cp /nix/var/nix/profiles/default/etc/dbus-1/system.d/disnix.conf \
    /etc/dbus-1/system.d

Then I manually edit /etc/dbus-1/system.d/disnix.conf and change the line:

<policy user="root">

into:

<policy user="sander">

to allow the Disnix service to run under my own personal user account (that has a single user Nix installation).

We also need an init.d script that starts the service on startup. The Disnix distribution includes a Debian-compatible init.d script that can be installed as follows:

$ sudo cp /nix/var/nix/profiles/default/share/doc/disnix/disnix-service.initd /etc/init.d/disnix-service
$ sudo ln -s ../init.d/disnix-service /etc/rc2.d/S06disnix-service
$ sudo ln -s ../init.d/disnix-service /etc/rc3.d/S06disnix-service
$ sudo ln -s ../init.d/disnix-service /etc/rc4.d/S06disnix-service
$ sudo ln -s ../init.d/disnix-service /etc/rc5.d/S06disnix-service
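
The four symlinks follow the usual SysV runlevel pattern, so they can also be created in a loop. The sketch below uses a scratch directory instead of /etc, so it can be tried safely:

```shell
# Create the per-runlevel start symlinks in a scratch root that
# mirrors the /etc layout used above.
root=$(mktemp -d)
mkdir -p "$root/etc/init.d"
touch "$root/etc/init.d/disnix-service"

for level in 2 3 4 5; do
  mkdir -p "$root/etc/rc$level.d"
  ln -s ../init.d/disnix-service "$root/etc/rc$level.d/S06disnix-service"
done

ls -l "$root"/etc/rc?.d
```

On Debian-based systems, sudo update-rc.d disnix-service defaults achieves a similar result in one command.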

The script has been configured to run the service under my user account. The username configured in the script should correspond to the user under which the Nix package manager has been installed.

After executing the previous steps, the DBus daemon needs to be restarted so that it can use the Disnix configuration. Since DBus is a critical system service, it is probably more convenient to just reboot the entire machine. After rebooting, the Disnix service should be activated on startup.

Installing the same packages on the Windows/Cygwin machine is much more tricky -- there is no installer provided for the Nix package manager on Cygwin, so we need to compile it from source. I first installed the Cygwin packages that are required to build all the needed packages from source.


Besides the above Cygwin packages, we also need to install a number of Perl packages from CPAN. I opened a Cygwin terminal in administrator mode (right click, 'Run as administrator') and ran the following commands:

$ perl -MCPAN -e shell
install DBD::SQLite
install WWW::Curl

Then I installed the Nix package manager by obtaining the source tarball and running:

$ tar xfv nix-1.10.tar.xz
$ cd nix-1.10
$ ./configure
$ make
$ make install

I installed Dysnomia by obtaining the source tarball and running:

$ tar xfv dysnomia-0.5pre1234.tar.gz
$ cd dysnomia-0.5pre1234
$ ./configure --with-job-template=direct
$ make
$ make install

And Disnix by running:

$ tar xfv disnix-0.5pre1234.tar.gz
$ cd disnix-0.5pre1234
$ ./configure
$ make
$ make install

As with the Kubuntu machine, we must provide a service configuration file for DBus allowing the Disnix service to run on the system bus:
$ cp /nix/var/nix/profiles/default/etc/dbus-1/system.d/disnix.conf \
    /etc/dbus-1/system.d

Also, I have to manually edit /etc/dbus-1/system.d/disnix.conf and change the line:

<policy user="root">

into:

<policy user="sander">

to allow operations to be executed under my own less privileged user account.

To run the Disnix service, we must define two Windows services. The following command-line instruction creates a Windows service for DBus:

$ cygrunsrv -I dbus -p /usr/bin/dbus-daemon.exe \
    -a '--system --nofork'

The following command-line instruction creates a Disnix service running under my own user account:

$ cygrunsrv -I disnix -p /usr/local/bin/disnix-service.exe \
  -e 'PATH=/bin:/usr/bin:/usr/local/bin' \
  -y dbus -u sander

In order to make the Windows service work, the user account requires the right to log on as a service. To check whether this right has been granted, we can run:

$ editrights -u sander -l

which should list SeServiceLogonRight. If this is not the case, this permission can be granted by running:

$ editrights -u sander -a SeServiceLogonRight

Finally, we must start the Disnix service. This can be done by opening the services configuration screen (Control Panel -> System and Security -> Administrative Tools -> Services), right clicking on: 'disnix' and selecting: 'Start'.

Deploying the example scenarios

After deploying the virtual machines and their configurations, we can start doing some deployment experiments with the Disnix TCP proxy example. The Disnix deployment models can be found in the deployment/DistributedDeployment subfolder:

$ cd deployment/DistributedDeployment

Before we can do any deployment, we must write an infrastructure model (infrastructure.nix) reflecting the configuration properties of the machines we have just deployed:

{
  test1 = { # x86 Linux machine (Kubuntu) reachable with SSH
    hostname = "";
    system = "i686-linux";
    targetProperty = "hostname";
    clientInterface = "disnix-ssh-client";
  };

  test2 = { # x86-64 Linux machine (NixOS) reachable with SOAP/HTTP
    hostname = "";
    system = "x86_64-linux";
    targetEPR = ""; # URL of the DisnixWebService endpoint
    targetProperty = "targetEPR";
    clientInterface = "disnix-soap-client";
  };

  test3 = { # x86-64 Windows machine (Windows 7) reachable with SSH
    hostname = "";
    system = "x86_64-cygwin";
    targetProperty = "hostname";
    clientInterface = "disnix-ssh-client";
  };
}

and write the distribution model to reflect the initial deployment scenario shown in the presentation:


{infrastructure}:

{
  hello_world_server = [ infrastructure.test2 ];
  hello_world_client = [ infrastructure.test1 ];
}

Now we can deploy the system by running:

$ disnix-env -s services-without-proxy.nix \
  -i infrastructure.nix -d distribution.nix
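
If the deployment succeeds, the disnix-query tool can be used to inspect which services have been distributed to which machine:

$ disnix-query infrastructure.nix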

If we open a terminal on the Kubuntu machine, we should be able to run the client:

$ /nix/var/nix/profiles/disnix/default/bin/hello-world-client

When we type: 'hello' the client should respond by saying: 'Hello world!'. The client can be exited by typing: 'quit'.

We can also deploy a second client instance by changing the distribution model:


{infrastructure}:

{
  hello_world_server = [ infrastructure.test2 ];
  hello_world_client = [ infrastructure.test1 infrastructure.test3 ];
}

and running the same command-line instruction again:

$ disnix-env -s services-without-proxy.nix \
  -i infrastructure.nix -d distribution.nix

After the redeployment has completed, we should also be able to start a client on the Windows machine, which connects to the same server instance on the second test machine (the NixOS machine).

Another thing we could do is moving the server to the Windows machine:


{infrastructure}:

{
  hello_world_server = [ infrastructure.test3 ];
  hello_world_client = [ infrastructure.test1 infrastructure.test3 ];
}

However, running the following command:

$ disnix-env -s services-without-proxy.nix \
  -i infrastructure.nix -d distribution.nix

probably leads to a build error, because the coordinator machine (which runs Linux) is unable to build packages for Cygwin. Fortunately, this problem can be solved by enabling builds on the target machines:

$ disnix-env -s services-without-proxy.nix \
  -i infrastructure.nix -d distribution.nix \
  --build-on-targets
After deploying the new configuration, you will observe that the clients have been disconnected. You can restart any of the clients to observe that they have been reconfigured to connect to the new server instance that has been deployed to the Windows machine.

Conclusion

In this blog post, I have described how to set up and repeat the heterogeneous network deployment scenario shown in my presentation. Despite being a simple example, repeating it is difficult because the machines must be deployed first, a process that Disnix does not automate. (As a sidenote: with the DisnixOS extension we can automate the deployment of machines as well, but this does not work with a network of non-NixOS machines, such as Windows installations.)

Additionally, the fact that there is no installer (or official support) for the Nix deployment tools on platforms other than Linux and Mac OS X makes it even more difficult. (Fortunately, compiling from source on Cygwin should work, and there are also some ongoing efforts to revive FreeBSD support.)

To alleviate some of these issues, I have improved the Disnix documentation a bit to explain how to work with single-user Nix installations on non-NixOS platforms, and I have included the Debian init.d script in the Disnix distribution as an example. These changes have been integrated into the current development version of Disnix.

I am also considering writing a simple infrastructure model generator for static deployment purposes (a more advanced prototype already exists in the Dynamic Disnix toolset) and include it with the basic Disnix toolset to avoid some repetition while deploying target machines manually.
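
To sketch what such a generator might look like: the shell function below emits a minimal infrastructure model for SSH-reachable targets from name:hostname:system triples. The function name, the input format and the example addresses are all hypothetical, not part of any existing Disnix tool:

```shell
# Hypothetical generator: emits a minimal infrastructure model for
# SSH-reachable targets, given "name:hostname:system" triples.
generate_infrastructure() {
  echo "{"
  for spec in "$@"; do
    name=${spec%%:*}
    rest=${spec#*:}
    host=${rest%%:*}
    system=${rest#*:}
    cat <<EOF
  $name = {
    hostname = "$host";
    system = "$system";
    targetProperty = "hostname";
    clientInterface = "disnix-ssh-client";
  };
EOF
  done
  echo "}"
}

# Example invocation (the addresses are placeholders):
generate_infrastructure \
  test1:192.168.56.101:i686-linux \
  test3:192.168.56.103:x86_64-cygwin
```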


I have published the slides of my talk on SlideShare.

Furthermore, the recordings of the NixCon 2015 talks are also online.