Migrating to Clan and a Dendritic Architecture (Summer 2025)

Aug 30, 2025
TLDR
  • Migrated a five-machine NixOS homelab (workstations, a server, and a router) to Clan and a dendritic flake-parts architecture
  • Split the configuration into public and private parts
  • Built abstractions for maintainable modules (e.g. the router module)
  • Developed a Clan vars wrapper that adds discovery and ACL permissions
  • Repo

Summer of 2025

Summers are when I dive into one or two new projects. I look forward to it.

In past years, summer projects included:

This year I picked two projects to balance my time with family: build a 40 m² deck and tidy my homelab configuration. The former drew far more praise—unsurprisingly.

Finished deck

Pain

Over the years my homelab has grown in scope and number of machines. This spring I replaced my old router with a new N150 running NixOS, which I detailed on this blog. My server now runs my save-for-later PKM, Minne, and its demo deployment.

I initially kept my configurations public, but as I added services (email, etc.) they leaked too many personal details. Another goal of the refactor was therefore to split the private bits into a private repo while keeping the main configuration shareable.

Orchestration/Deployment options

I’ve used remote builds on my workstation for low-resource devices; it’s been solid. While revising the setup I evaluated Colmena, deploy-rs, and Clan (a longer list exists here). Clan stood out: active development, clear ambition, great people—and its secrets module (“vars”), an overlay on sops-nix, made creating and sharing secrets between machines straightforward.

Dendritic configurations

A quick detour into architecture. Over time my configuration became more chaotic, and it became harder to localize changes. I lacked a consistent pattern.

Mightyiam’s repo is where I first came across the Dendritic Pattern.

The Dendritic Pattern:

A Nix flake-parts usage pattern in which every Nix file is a flake-parts module

— mightyiam

You can still make a mess, but it helps—and it’s fun.

My current setup is what I think of as a hybrid dendritic approach. All features live under modules/ and are written as flake-parts modules, each exposing nixosModules.<name> or homeModules.<name>. Machines then compose these features in their configuration.nix by importing the relevant modules. I might move to a fully dendritic design, but for now I’m pleased.
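As an illustration, a single feature file in this style might look like the following. This is a hedged sketch rather than a file from my repo; the zsh feature is a placeholder:

{
  # modules/zsh.nix (hypothetical): one file, one flake-parts module,
  # exposing the same feature to both NixOS and Home Manager.
  config.flake.nixosModules.zsh = {
    programs.zsh.enable = true;
  };
  config.flake.homeModules.zsh = {
    programs.zsh.enable = true;
  };
}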

Goals

At this point the plan is beginning to take shape. We’ve got a framework, a pattern to implement it in, and target machines:

  • Io: Router, reverse proxy, etc.
  • Makemake: Server, hosting internal and external services, media, backups.
  • Charon: Workstation.

The goals for this project are:

  • Refactor existing configuration for increased maintainability.
  • Use Clan where appropriate.
  • Enable sharing of main repo, separating sensitive bits out of the main config.
  • Increase security across my machines.

Migration strategy

The plan was to create a fresh configuration around Clan and the new architecture rather than refactoring in place, for three reasons:

  1. It’s recommended in the Clan migration guide.
  2. I wanted a public repository going forward, without rewriting history.
  3. It offered a cleaner path.

I started by setting up a new VM following the getting started guide. The flake template helped. Once I had a working deployment, with host SSH keys and user/root passwords as secrets, I migrated the old configuration into the new pattern, added options to form meaningful abstractions, and placed them under config.my.*. It was fun taking the time to create abstractions over different areas. The set of modules I found particularly meaningful was the one covering router functionality. Networking isn’t my specialty, and maintaining that part of the configuration was a bit fragile. An abstraction layer on top makes maintenance easier: changes like adding static IP addresses, forwarding ports, and setting up reverse proxies are now simple.

{
  my.router = {
    enable = true;
    hostname = "io";
    lan = { subnet = "10.0.0"; interfaces = [ "enp2s0" "enp3s0" "enp4s0" ]; };

    machines = [
      { name = "makemake"; ip = "10"; mac = "00:d0:b4:02:bb:3c";
        portForwards = [ { port = 25; protocol = "tcp"; } ]; }
    ];

    wireguard.peers = [ { name = "phone"; ip = 2; publicKey = "<redacted>"; persistentKeepalive = 25; } ];

    nginx = {
      enable = true;
      acmeEmail = "[email protected]";
      wildcardCerts = [
        { name = "lanstark"; baseDomain = "lan.stark.pub"; dnsProvider = "cloudflare";
          environmentFile = config.my.secrets.getPath "api-key-cloudflare-dns" "api-token"; }
      ];
      virtualHosts = [
        { domain = "minne.stark.pub"; target = "makemake"; port = 3000; cloudflareOnly = true; }
      ];
    };
  };
}

Clan

Inventory

One file lists machines and roles (router, server, workstation). Each entry sets hostname, users, networking roles, and which features to import. Below is an excerpt from my configuration: machine declarations, sshd for all machines, the admin module, and the user p on every machine. Note that deploy.buildHost is set to root@localhost; since I deploy from my workstation, it handles evaluation and building, which saves time.

{
  inventory = {
    machines.io = {
      deploy.targetHost = "[email protected]";
      deploy.buildHost = "root@localhost";
      tags = ["server"];
    };
    machines.charon = {
      deploy.targetHost = "root@localhost";
      tags = ["client"];
    };
    instances = {
      sshd-basic = {
        module = {
          name = "sshd";
          input = "clan-core";
        };
        roles.server.tags.all = {};
        roles.client.tags.all = {};
      };
      user-p = {
        module = {
          name = "users";
          input = "clan-core";
        };
        roles.default.tags.all = {};
        roles.default.settings = {
          user = "p";
          prompt = true;
        };
      };
      admin = {
        roles.default.tags.all = {};
        roles.default.settings = {
          allowedKeys = {
            "p" = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6uq8nXD+QBMhXqRNywwCa/dl2VVvG/2nvkw9HEPFzn";
          };
        };
      };
    };
  };
}

Modules

Services are features under modules/system and modules/home exposed as nixosModules.* or homeModules.* and enabled per machine. Examples: reverse proxy, minne, media stack, backups. They all share the same pattern, which is somewhat more verbose, but offers some flake-parts niceties.

{
  config.flake.nixosModules.interception-tools = {pkgs, ...}: {
    services.interception-tools = {
      enable = true;
    };
  };
}

This enables several things. One is that we can use import-tree to pull every Nix file in a directory into the flake, then compose configurations by importing modules under their declared names. It also lets all modules share config.*, so options defined in one module are easily reused in another.

{
  imports = with modules.nixosModules;
    [
      interception-tools
      system-stylix
      shared
      options
      router
      home-assistant
      k3s
    ]
    ++ (with vars-helper.nixosModules; [default]);
}
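For reference, import-tree is wired up at the flake level roughly like this. A sketch that follows the import-tree README; my real flake.nix carries more inputs:

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-parts.url = "github:hercules-ci/flake-parts";
    import-tree.url = "github:vic/import-tree";
  };

  outputs = inputs @ {flake-parts, import-tree, ...}:
    # Every .nix file under ./modules is picked up as a flake-parts module.
    flake-parts.lib.mkFlake {inherit inputs;} (import-tree ./modules);
}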

Secrets

Secrets use Clan “vars” on top of sops-nix. You might’ve noticed the vars-helper.nixosModules.default in the above code snippet. It’s a wrapper around Clan vars that I made to improve ergonomics and add some extra features; you can find it here. For example, it adds tag-based discovery of secret files, helper functions for setting up secrets in userspace, ACL-controlled read access to secrets, and wrapping of commands in hardened systemd environments for improved security.
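For context, a plain Clan vars generator, which the wrapper builds on, looks roughly like this. A sketch of the generator API as I understand it; the wireguard-key name and file are illustrative:

{pkgs, ...}: {
  # A generator declares how a secret is produced; `clan vars generate`
  # runs the script and stores the output sops-encrypted for the machine.
  clan.core.vars.generators.wireguard-key = {
    files.private-key = {};
    runtimeInputs = [pkgs.wireguard-tools];
    script = ''
      wg genkey > $out/private-key
    '';
  };
}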

Private configuration

With this structure in place, keeping the sensitive bits in a private repo and importing them into my public configuration was quite straightforward. I set up the repository in a similar manner, with flake-parts, import-tree, and sops (since I want to keep the secrets pertaining to services localized to where they are created).

{
  config.flake.nixosModules.hello-service = {
    config,
    pkgs,
    ...
  }: {
    # Set up sops and have it point to the secrets.yaml in the private repo, and the key managed by clan vars
    sops = {
      defaultSopsFile = ./../../secrets.yaml;
      age.sshKeyPaths = ["/run/secrets/vars/openssh/ssh.id_ed25519"];
      secrets.hello = {};
    };
    systemd.services.hello-world-tmp = {
      description = "Create hello world file with secret";
      serviceConfig.Type = "oneshot";
      script = ''
        ${pkgs.coreutils}/bin/echo "Hello from private repo!" > /tmp/hello-world
        ${pkgs.coreutils}/bin/echo "Secret: $(${pkgs.coreutils}/bin/cat ${config.sops.secrets.hello.path})" >> /tmp/hello-world
        ${pkgs.coreutils}/bin/chmod 644 /tmp/hello-world
      '';
      wantedBy = ["multi-user.target"];
    };
  };
}

This is then consumed as a flake input by my public config:

{
  private-infra = {
    url = "git+ssh://[email protected]/perstarkse/private-infra.git";
    inputs.nixpkgs.follows = "nixpkgs";
  };  
}

And then the modules are imported in my machine configurations:

{
  imports = [ /* other modules */ ] ++ (with private-infra.nixosModules; [hello-service]);
}

End

At the time of writing, summer is on its last legs here in Sweden, and all machines and services are migrated.

It’s time for reflection. It’s been a fun summer project, some aspects more than others. Adopting a (mostly) dendritic Nix pattern and using options to create meaningful abstractions, reduce code duplication, and improve maintainability sure was fun.

Implementing Clan, well, I knew it was going to be a bit of a challenge given how early the project is. Just in the past month the documentation has really improved, and I think getting started is significantly easier now. I really do enjoy Clan vars and the sshd service, and I know there are many more services to try: distributed S3 storage, databases, etc.
