This report covers the activity of Tails in March 2016.

Everything in this report is public.

A. Replace Claws Mail with Icedove

  • A.1.1 Secure the Icedove autoconfig wizard: #6154

The modifications to Thunderbird that we reported last month are still waiting for review.

We also worked on adjusting some settings in Torbirdy to match our changes to the automatic email account creation wizard in Thunderbird, so that users benefit from both. We made good progress on adapting Torbirdy's code (#11204) and sent the modifications upstream.

We set up a development branch that integrates all these modifications into Tails, uses only secure protocols and the configured proxy, and still lets the user benefit from the automatic email account creation wizard. However, we are hesitant to release it before the Icedove and Torbirdy parts are accepted upstream: otherwise we would risk shipping behavior that later changes on the user's end if upstream prefers a different implementation (e.g. a different GUI). For example, Thunderbird upstream has not yet agreed to our proposal of adding a check box that lets the user select or deselect "Only use secure protocols".

Ideally, we would like to ship these modifications in the Tails 2.4 release (June 7), but as said above, we would like more feedback from Thunderbird upstream first.

B. Improve our quality assurance process

B.1. Automatically build ISO images for all the branches of our source code that are under active development

In March, 839 ISO images were automatically built by our Jenkins instance.

B.2. Continuously run our entire test suite on all those ISO images once they are built

In March, 834 ISO images were automatically tested by our Jenkins instance.

  • B.2.4. Implement and deploy the best solution from this research: #5288

    Given the workload on other deliverables, we postponed the triaging of the false positives generated by our test infrastructure in February; we will triage them together with the ones from March (#11083, #11084).

    We rejected a bug report about the Cucumber plugin of Jenkins, as the problem has not reappeared recently (#10725). The same is likely to happen to #10601.

B.3. Extend the coverage of our test suite

B.3.11. Fix newly identified issues to make our test suite more robust and faster

  • Robustness improvements

    As discussed last month, this month was spent trying out new "fundamental approaches" to deal with our robustness issues. Specifically, we looked into "glitches when interacting with graphical user interfaces" and found a suitable solution in Dogtail, an automated GUI testing framework based on assistive technologies (a11y) (#10721).

    We developed a simple interface in our test suite to generate and run Dogtail scripts inside the system under test, so that elements of the graphical user interface can be identified programmatically and then interacted with much as before (e.g. clicked with the mouse, or typed into). Previously, we could only identify such elements using static images that had to exactly match what appears on the screen, which often was not the case given the non-deterministic rendering of modern desktop applications. The new approach already shows promise of being more reliable.

    Furthermore, being able to identify elements with code instead of images will significantly decrease the maintenance burden: previously we had to update images whenever something changed in the GUI, e.g. font size or anti-aliasing changes, whereas now we only need to update the code when the program itself changes. Code is also more readable this way: a programmatic description of a GUI element is much clearer than a reference to an opaque image.
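    The idea behind this interface can be sketched as follows. This is a hypothetical illustration, not the actual test suite code: the function name and step strings are made up, though the generated script uses Dogtail's real accessibility-based element lookup.

```python
# Illustrative sketch: generate the text of a Dogtail (Python) script
# that locates GUI elements by accessible name/role instead of matching
# screenshots. The test suite would copy the generated script into the
# system under test and execute it there.

def dogtail_script(app_name, steps):
    """Return a Dogtail script as a string: connect to the application
    via the accessibility tree, then run the given interaction steps."""
    lines = [
        "from dogtail import tree",
        # Look up the running application in the a11y tree by name.
        "app = tree.root.application(%r)" % app_name,
    ]
    lines.extend(steps)
    return "\n".join(lines)

script = dogtail_script("gedit", [
    # Find a button by its accessible name and role, then click it.
    "app.child(name='Open', roleName='push button').click()",
])
print(script)
```

    Because elements are found by accessible name and role, the script keeps working across font, theme, or rendering changes that would invalidate a screenshot.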

    Next month's focus will be to deal with "transient network issues" (#9521).

  • Performance improvements on Jenkins

    Last month, we started optimizing the platform that runs our test suite to make it run faster. As planned, we came back to it in March, and measurements showed that real workloads indeed benefit from these changes (#11175, #11113).

B.4. Freezable APT repository

We now have a working proof-of-concept for all essential pieces of infrastructure and code, except for some bits of the "tagged snapshots" component, which is not needed to achieve our goals for this year.

The progress made in March matches the updated development schedule that we shared last month and the goals we set back then still seem realistic.

Now, into the details:

  • B.4.1. Specify when we want to import foreign packages into which APT suites (#9488) and B.4.4. Design freezable APT repository (#9487)

    We discussed our design and validated it. Proof-of-concept implementations also confirm the soundness of large parts of this design. So we now consider these two deliverables as done.

  • B.4.2. Implement a mechanism to save the list of packages used at ISO build time (#10748, #10749)

    We made great progress on this front. We added support for storing the architecture of packages used at build time, in preparation for (unrelated) upcoming changes that will make the Tails build process fetch packages for multiple architectures. More intensive testing highlighted a few problems, so our code and design evolved a bit as a result.

    We still have some polishing to do but are happy with how this component works now. We still aim to have this work merged in April, ideally in time for Tails 2.3 (April 19).

    This will allow us to store, virtually forever, all the packages needed to build a given Tails release.

  • B.4.3. Centralize and merge the list of needed packages

    With our current design, the original definition of this deliverable doesn't make sense anymore. We replaced it with processes and tools that our release managers can use to:

    • Allow storing APT snapshots longer than the default when needed, for example to keep the snapshots used during a code freeze that lasts longer than usual from being deleted by our garbage collector.

    • Freeze and unfreeze the APT snapshots used by a branch when needed.

    The corresponding scripts (build-from-snapshots, tails-bump-apt-snapshot-valid-until) were drafted and passed early testing.
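    As a rough illustration of the expiry mechanism these tools manipulate: APT Release files carry a Valid-Until field, and a garbage collector can discard a snapshot once that date has passed, so "freezing" amounts to bumping the date. This is a hypothetical sketch; the function name and details are ours, not Tails's actual implementation.

```python
# Illustrative sketch: decide whether an APT snapshot may be garbage-
# collected, based on the Valid-Until field of its Release file.
from datetime import datetime, timezone

def can_garbage_collect(release_text, now=None):
    """Return True if the snapshot's Valid-Until date has passed."""
    now = now or datetime.now(timezone.utc)
    for line in release_text.splitlines():
        if line.startswith("Valid-Until:"):
            value = line.split(":", 1)[1].strip()
            # Release files use RFC 2822-style dates, e.g.
            # "Thu, 28 Apr 2016 12:00:00 UTC".
            expiry = datetime.strptime(value, "%a, %d %b %Y %H:%M:%S %Z")
            return expiry.replace(tzinfo=timezone.utc) < now
    # No Valid-Until field: never expire automatically.
    return False
```

    Extending a snapshot's life then simply means rewriting Valid-Until (and re-signing the Release file).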

  • B.4.5. Implement processes and tools for importing and freezing those packages (#6299, #6296)

    Time-based snapshots are now generated four times a day and published over HTTP. We designed how to do garbage collection on these snapshots and have a draft implementation. This infrastructure is described with and managed by a Puppet manifest.

    We successfully built a Tails ISO image using time-based APT snapshots: feature/build-from-snapshots! This was a great success that validated a large chunk of our previous work in a production context.

    This led us to realize that, if we did nothing about it, apt-cacher-ng, the caching proxy used to speed up building Tails ISO images, would not only become mostly useless, but would also quickly consume too much disk space. It took us some time, but we now have a proof-of-concept that solves this problem. The next step is to integrate it into the various configurations used to build a Tails ISO image.
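    For illustration, building from a time-based snapshot essentially means pointing APT at an archived state of the archive identified by a serial number. A hypothetical sources.list line (the hostname and serial below are made up for this example) could look like:

```
deb http://time-based.snapshots.example.org/debian/2016032802/ jessie main
```

    Since the snapshot's contents never change, rebuilding from the same serial fetches exactly the same packages.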

  • B.4.6. Adjust the rest of our ecosystem to the freezable APT repository (#6303)

    We investigated more closely what exact changes remain to be done, adjusted our backup system, and ensured that our plans for a fail-over system already take into account the needs of our upcoming freezable APT repository.

    We still have to purchase a hard drive for an additional set of backups, but this is a minor issue, and we consider this deliverable basically done at this point.

C. Scale our infrastructure

C.1. Change in depth the infrastructure of our pool of mirrors

Not much progress was made on this front in March, but as announced last month, this will be one of our primary focuses in April. Still, a bit of work was done:

  • C.1.1. Specify a way of describing the pool of mirrors (#8637, #10284)

    The design proposed in February was reviewed and validated. So this deliverable is now done.

  • C.1.2. Write & audit the code that makes the redirection decision from our website (#8639, #8640, #11109)

    We found someone who has the skills and the desire to audit our JavaScript code.

  • C.1.4. Communicate with each mirror operator to adapt their configuration (#8635, #11079)

    We prepared a call for mirrors by gathering a list of potential big hosts and drafting a text.

  • C.1.7. Adjust update-description files for incremental upgrades

    With our current design, the original definition of this deliverable doesn't make sense anymore. We will instead adjust the code of Tails Upgrader to the new mirror pool (#11123).

What we aim to do in April:

  • Submit a prototype of the changes needed in our ISO Verification Extension for code review and security audit (#11109, part of C.1.2).
  • Ask all mirror operators to switch to a new virtual host name (#11055, part of C.1.4).
  • Prototype the changes needed in Tails Upgrader (#11123, part of C.1.7).

And if time allows:

  • Send our call for new mirrors (#11079, part of C.1.4).
  • Prepare our new fallback pool based on DNS (#10295, pre-requisite of C.1.5).

We aim to deploy the new mirror pool in May but, if we are delayed, the deployment might have to wait until July.

C.2. Be able to detect within hours failures and malfunction on our services

  • C.2.2. Set up the monitoring software and the underlying infrastructure

    We configured half of our monitored systems as Icinga2 agents reporting to a satellite (as described last month). To do this, we wrote a second batch of Puppet manifests, which will make configuring the second half faster and will also help debug our automated deployment procedures.

    The satellite collects the reports and pushes them to our monitoring system. This way, the infrastructure hosted on our side is isolated from the external monitoring host, which cannot execute code on the monitored systems and only receives results pushed by the satellite.

    We also prefer to distribute the configuration with Puppet rather than with the mechanisms built into Icinga2, which isolates the two hosts a bit further.

    We set up an Icingaweb2 interface on https://icingaweb2.tails.boum.org/.
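    As an illustration, the agent-to-satellite relationship described above is expressed in Icinga2 through Endpoint and Zone objects. A rough, hypothetical fragment (all hostnames are placeholders, not our real topology) could look like:

```
// zones.conf on the satellite (placeholder hostnames)
object Endpoint "agent1.example.org" {
}

object Zone "agent1.example.org" {
  endpoints = [ "agent1.example.org" ]
  // The agent zone reports to the satellite zone, which in turn
  // pushes results up to the monitoring host.
  parent = "satellite"
}
```

    Declaring the hierarchy this way lets each agent talk only to its parent zone, which supports the isolation goals described above.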

  • C.2.4. Configure and debug the monitoring of the most critical services and C.2.6. Configure and debug the monitoring of other high-priority services

    We deployed basic system checks and set up our monitoring host to check public services remotely. These checks have already proven helpful, for example by warning us that some of our partitions were running out of disk space.

    We deployed most of the checks defined as CRITICAL and HIGH PRIORITY. (#8650, #8653)

C.4. Maintain our already existing services

We kept answering requests from the community and taking care of security updates, as covered by "C.4.5. Administer our services up to milestone V".

We upgraded all our remaining Debian 7 (Wheezy) systems to Debian 8 (Jessie) (#11178, #11186). This fixed a bug that prevented us from upgrading the Linux kernel running on all our ISO tester virtual machines (#9157).

E. Release management

  • Tails 2.2 was released on 2016-03-08.

  • Tails 2.2.1 was released as an emergency release on 2016-03-18.