Goals

The Tails system administrators set up and maintain the infrastructure that supports the development and operations of Tails. We aim to make the life of Tails contributors easier and to improve the quality of Tails releases.

Principles

Infrastructure as code

We want to treat system administration like a (free) software development project:

  • We want to enable people to participate without needing an account on the Tails servers.
  • We want to review the changes that are applied to our systems.
  • We want to be able to easily reproduce our systems via automatic deployment.
  • We want to share knowledge with other people.

This is why we try to publish as much as possible of our systems' configuration, and to manage our whole infrastructure with configuration management tools, that is, without needing to log into hosts and change things by hand.
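As an illustration of this approach, a host's configuration can be declared entirely in a Puppet manifest kept under version control, so that changes are reviewable before being applied and the host can be redeployed automatically. A minimal, hypothetical sketch (the node name is made up; the class name is one of those used elsewhere on this page):

```puppet
# Hypothetical node manifest: the host's entire configuration lives in
# version-controlled code instead of manual changes on the server.
node 'rsync.example.org' {
  # Class from puppet-tails (see the Services section below)
  include tails::rsync
}
```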

Free Software

We use Free Software, as defined by the Debian Free Software Guidelines.
The firmware that some of our systems need is the only exception to this rule.

Relationships with upstream

The principles used by the broader Tails project also apply to system administration.

Duties

In general

As stated above, we "set up and maintain the infrastructure". This implies, for example:

  • dealing with hardware purchase, upgrades and failures;
  • upgrading our systems to a new version of Debian.

During sysadmin shifts

  • create Git repositories when requested
  • update access control lists to resources we manage, as requested by the corresponding teams
  • keep systems up-to-date, reboot them as needed
  • keep backups up-to-date
  • keep Puppet modules up-to-date with respect to upstream changes
  • keep Jenkins plugins up-to-date, by upgrading any plugin that satisfies at least one of these conditions:
    • only brings security fixes
    • fixes bugs we're affected by
    • brings new features we are interested in, without breaking the ones we rely on
    • is needed to upgrade another plugin that we want to upgrade
    • is required by a system upgrade (e.g. of the Jenkins packages)
  • report bugs identified in Jenkins plugins after they have been upgraded (both on the upstream bug tracker and on our own)
  • act as the de facto interface between Tails and the servers hosting our services (boum.org, immerda.ch) for non-trivial requests

Tools

The main tools used to manage the Tails infrastructure are:

  • Debian GNU/Linux; in the vast majority of cases, we run the current stable release
  • Puppet, a configuration management system
  • Git to host and deploy configuration, including our Puppet modules

Communication

A few people have write access to the puppetmasters, and can log into the hosts.
They read the tails-sysadmins@boum.org encrypted mailing list.

We use Redmine tickets for public discussion and task management.

Services

APT repositories

Custom APT repository

  • purpose: host Tails-specific Debian packages
  • documentation
  • access: anyone can read, Tails core developers can write
  • tools: reprepro
  • configuration:
    • tails::reprepro::custom class in puppet-tails
    • signing keys are managed with the tails_secrets_apt Puppet module
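For readers unfamiliar with reprepro, a repository is described by a conf/distributions file along these lines. This is a hypothetical sketch, not the actual Tails configuration; the codename, architectures, and signing key ID are placeholders:

```
Origin: Tails
Label: Tails custom packages
Codename: stable
Architectures: source amd64
Components: main
SignWith: 0123456789ABCDEF
```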

Time-based snapshots of APT repositories

  • purpose: host full snapshots of the upstream APT repositories we need, which provides the freezable APT repositories feature needed by the Tails development and QA processes
  • documentation
  • access: anyone can read, release managers have write access
  • tools: reprepro
  • configuration:
    • tails::reprepro::snapshots::time_based class in puppet-tails
    • signing keys are managed with the tails_secrets_apt Puppet module
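A consumer of such snapshots freezes its APT sources on one snapshot serial instead of following the moving upstream archive. A hypothetical sources.list line, where the hostname, serial, and suite are all placeholders rather than the real URL scheme:

```
deb http://time-based.snapshots.example.org/debian/2024060101/ bookworm main
```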

Tagged snapshots of APT repositories

  • purpose: host partial snapshots of the upstream APT repositories we need, for historical purposes and compliance with some licenses
  • documentation
  • access: anyone can read, release managers can create and publish new snapshots
  • tools: reprepro
  • configuration:
    • tails::reprepro::snapshots::tagged class in puppet-tails
    • signing keys are managed with the tails_secrets_apt Puppet module

Bitcoind

  • purpose: handle the Tails Bitcoin wallet
  • access: Tails core developers only
  • tools: bitcoind
  • configuration: bitcoind class in puppet-bitcoind

BitTorrent

  • purpose: seed the new ISO image when preparing a release
  • documentation
  • access: anyone can read, Tails core developers can write
  • tools: transmission-daemon
  • configuration: done by hand (#6926)

Gitolite

  • purpose: host Git repositories used by the puppetmaster and other services; mostly useless for humans
  • access: Tails core developers only
  • tools: gitolite
  • configuration: tails::gitolite class in puppet-tails
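Access control in gitolite is itself managed as code, in a gitolite.conf file that is versioned in Git. A hypothetical excerpt, with placeholder user, group, and repository access rules:

```
@sysadmins      = alice bob

repo puppet-tails
    RW+ = @sysadmins
    R   = puppetmaster
```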

git-annex

  • purpose: host the full history of Tails released images and Tor Browser tarballs
  • access: Tails core developers only
  • tools: git-annex
  • configuration:
    • tails::git_annex and tails::gitolite classes in puppet-tails
    • tails::git_annex::mirror defined resource in puppet-tails

Icinga2

  • purpose: Monitor Tails online services and systems.
  • access: Tails core developers have read-only access to the Icingaweb2 interface; sysadmins have read-write access and receive notifications by email.
  • setup: We have one Icinga2 instance, installed on a dedicated system, that acts as the master of all our Icinga2 zones. A VM on the other bare-metal host acts as the Icinga2 satellite of this master. Icinga2 agents are installed on every other VM and on the host itself; they report back to the satellite, which relays their data to the master. The Icinga2 configuration is distributed with Puppet. This way we achieve a degree of isolation: neither the master nor the satellite has the right to configure agents or to run arbitrary commands on them.
  • tools: Icinga2, icingaweb2
  • configuration:
    • master:
      • tails::monitoring::master class in puppet-tails.
      • some configuration in the ecours.tails.boum.org node manifest.
      • See Vpn section.
    • web server:
    • satellite:
    • agents:
    • private keys are managed with the tails_secrets_monitoring Puppet module
  • documentation:
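The master/satellite/agent hierarchy described above is expressed in Icinga2 as a tree of zones. A hypothetical zones.conf excerpt on the master; the satellite host name is made up, while ecours.tails.boum.org is the master node mentioned above:

```
object Endpoint "ecours.tails.boum.org" { }

object Zone "master" {
  endpoints = [ "ecours.tails.boum.org" ]
}

object Endpoint "satellite.example.org" { }

object Zone "satellite" {
  parent = "master"
  endpoints = [ "satellite.example.org" ]
}
```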

Jenkins

  • purpose: continuous integration, e.g. build Tails ISO images from source and run test suites
  • access: only Tails core developers can see the Jenkins web interface (#6270); anyone can download the built products
  • tools: Jenkins, jenkins-job-builder
  • configuration:
    • master:
    • slaves:
      • tails::builder, tails::jenkins::slave, tails::jenkins::slave::iso_builder and tails::tester classes in puppet-tails
      • some configuration in the manifest (#7106)
      • signing keys are managed with the tails_secrets_jenkins Puppet module
    • web server:
      • some configuration in the manifest (#7107)
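With jenkins-job-builder, jobs are declared in YAML files kept under version control and pushed to Jenkins, rather than configured through the web interface. A hypothetical job definition, where the job name and build script are placeholders:

```yaml
- job:
    name: build_Tails_ISO_example
    builders:
      - shell: './build-tails-iso.sh'
```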

rsync

  • purpose: provide content to the public rsync server, from which all HTTP mirrors in turn pull
  • access: read-only for those who need it, read-write for Tails core developers
  • tools: rsync
  • configuration:
    • tails::rsync in puppet-tails
    • users and credentials are managed with the tails_secrets_rsync Puppet module
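An rsync module exposed to mirrors is declared in rsyncd.conf. A hypothetical module, with placeholder name and path:

```
[tails-archive]
    path = /srv/rsync/tails
    comment = Tails release files
    read only = yes
```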

Tor bridge

  • purpose: provide a Tor bridge that Tails contributors can easily use for testing
  • access: anyone who gets it from BridgeDB
  • tools: tor, obfs4proxy
  • configuration:
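A minimal torrc for such a bridge might look as follows. This is a generic obfs4 bridge sketch, not the actual configuration; the ports are placeholders:

```
BridgeRelay 1
PublishServerDescriptor bridge
ORPort 9001
ExtORPort auto
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:10000
```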

VPN

  • purpose: route the connections between our different remote systems through a VPN. Mainly used by the monitoring service.
  • access: private network.
  • tools: tinc
  • configuration:
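Each node in a tinc mesh carries a small tinc.conf that names the node itself and at least one peer to connect to. A hypothetical sketch, with placeholder node names:

```
Name = node_a
ConnectTo = node_b
Mode = router
```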

Web server

  • purpose: serve web content for any other service that needs it
  • access: depending on the service
  • tools: nginx
  • configuration:
    • nginx class in puppet-nginx
    • hard-coded manifest snippets and files on the puppetmaster (#6938)
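Each service that needs a web frontend gets its own nginx server block. A hypothetical sketch, with placeholder server name and paths:

```nginx
server {
    listen 443 ssl;
    server_name example.tails.boum.org;
    ssl_certificate     /etc/ssl/example.pem;
    ssl_certificate_key /etc/ssl/example.key;
    root /var/www/example;
}
```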

Weblate

WhisperBack relay

  • purpose: forward bug reports sent with WhisperBack to tails-bugs@boum.org
  • access: public; WhisperBack (and hence, any bug reporter) uses it
  • tools: Postfix
  • configuration:
    • tails::whisperback::relay in puppet-tails
    • private keys are managed with the tails_secrets_whisperback Puppet module
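Forwarding of this kind is typically done with a Postfix virtual alias table; a hypothetical entry, where the left-hand address is a placeholder and the destination is the address named above:

```
whisperback@example.tails.boum.org    tails-bugs@boum.org
```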