
      Felipe Borges: GNOME is participating in Google Summer of Code 2025!

      news.movim.eu / PlanetGnome • 27 February

    The Google Summer of Code 2025 mentoring organizations have just been announced and we are happy that GNOME’s participation has been accepted!

    If you are interested in an internship with GNOME, check gsoc.gnome.org for our project ideas and information on getting started.

      feborg.es/gnome-is-participating-in-google-summer-of-code-2025/


      Jussi Pakkanen: The price of statelessness is eternal waiting

      news.movim.eu / PlanetGnome • 27 February • 4 minutes

    Most CI systems I have seen have been stateless. That is, they start by getting a fresh Docker container (or building one from scratch), doing a Git checkout, building the thing and then throwing everything away. This is simple and mathematically pure, but really slow. This approach is further driven by the fact that in cloud computing CPU time and network transfers are cheap but storage is expensive, probably because the cloud vendor needs to take care of things like backups, and because with persistent state a task can no longer be dispatched to any machine on the planet but only to the one that already has the required state, and so on.

    How much could you reduce resource usage (or, if you prefer, improve CI build speed) by giving up on statelessness? Let's find out by running some tests. To get a reasonably large code base I used LLVM. I did not actually use any cloud or Docker in the tests, but I simulated them on a local media PC. I used 16 cores to compile and 4 to link (any more would saturate the disk). Tests were not run.

    Baseline

    Creating a Docker container with all the build deps takes a few minutes. Alternatively you can prebuild it, but then you need to download a 1 GB image.

    Doing a full Git checkout would be wasteful. There are basically three different ways of doing a partial checkout: shallow clone, blobless and treeless. They take the following amount of time and space:

    • shallow: 1m, 259 MB
    • blobless: 2m 20s, 961 MB
    • treeless: 1m 46s, 473 MB
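
    For reference, the three partial-checkout modes map directly to standard Git clone options (--depth=1, --filter=blob:none and --filter=tree:0). Here is a minimal timing harness, assuming the llvm-project GitHub mirror as the repository; the URL and the script itself are illustrative, not part of the original measurements:

    import subprocess
    import time

    # Illustrative repository; the post benchmarked LLVM.
    REPO = "https://github.com/llvm/llvm-project.git"

    # The three partial-checkout modes described above, as git clone options.
    MODES = {
        "shallow": ["--depth=1"],
        "blobless": ["--filter=blob:none"],
        "treeless": ["--filter=tree:0"],
    }

    for name, extra in MODES.items():
        start = time.monotonic()
        subprocess.run(["git", "clone", *extra, REPO, f"llvm-{name}"], check=True)
        print(f"{name}: {time.monotonic() - start:.0f} s")
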
    Doing a full build from scratch takes 42 minutes.

    With CCache

    Using CCache in Docker is mostly a question of bind mounting a persistent host directory as the container's cache directory. A from-scratch build with an up-to-date CCache takes 9m 30s.

    With stashed Git repo

    Just like the CCache dir, the Git checkout can also be persisted outside the container. Doing a git pull on an existing full checkout takes only a few seconds. You can even mount the repo dir read only to ensure that no state leaks from one build invocation to another.

    With Danger Zone

    One main thing a CI build ensures is that the code keeps on building when compiled from scratch. It is quite possible to have a bug in your build setup that manifests itself so that the build succeeds if a build directory has already been set up, but fails if you try to set it up from scratch. This was especially common back in ye olden times when people used to both write Makefiles by hand and to think that doing so was a good idea.

    Nowadays build systems are much more reliable and this is not such a common issue (though it can definitely still occur). So what if you were willing to give up full from-scratch checks on merge requests? You could, for example, still have a daily build that validates that use case. For some organizations this would not be acceptable, but for others it might be a reasonable tradeoff. After all, why should a CI build take noticeably longer than an incremental build on the developer's own machine? If anything it should be faster, since servers are a lot beefier than developer laptops. So let's try it.

    The implementation for this is the same as for CCache: you just persist the build directory as well. To run the build you do a Git update, mount the repo, build dir and optionally the CCache dir into the container, and go.
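
    A minimal sketch of that invocation (the paths, the llvm-builder image name and the ninja build command are illustrative assumptions, not taken from the post):

    import subprocess

    # Illustrative host-side locations for the persistent state.
    REPO_DIR = "/srv/ci/llvm-project"   # full checkout, refreshed with git pull
    BUILD_DIR = "/srv/ci/llvm-build"    # persisted (e.g. Ninja) build directory
    CCACHE_DIR = "/srv/ci/ccache"       # persisted compiler cache

    # Update the checkout on the host before handing it to the container.
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)

    # Run the incremental build inside a hypothetical builder image.
    subprocess.run([
        "docker", "run", "--rm",
        "-v", f"{REPO_DIR}:/src:ro",         # read-only: no state leaks back out
        "-v", f"{BUILD_DIR}:/build",
        "-v", f"{CCACHE_DIR}:/root/.ccache",
        "llvm-builder:latest",
        "ninja", "-C", "/build",
    ], check=True)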

    I tested this by doing a git pull on the repo and then doing a rebuild. There were a couple of new commits, so this should be representative of real-world workloads. An incremental build took 8m 30s, whereas a from-scratch rebuild using a fully up-to-date cache took 10m 30s.

    Conclusions

    The wall clock time used by the three main approaches was:

    • Fully stateless
      • Image building: 2m
      • Git checkout: 1m
      • Build: 42m
      • Total: 45m
    • Cached from-scratch
      • Image building: 0m (assuming it is not "apt-get update"d for every build)
      • Git checkout: 0m
      • Build: 10m 30s
      • Total: 10m 30s
    • Fully cached
      • Image building: 0m
      • Git checkout: 0m
      • Build: 8m 30s
      • Total: 8m 30s
    Similarly the amount of data transferred was:

    • Fully stateless
      • Image: 1G
      • Checkout: 300 MB
    • Cached from-scratch:
      • Image: 0
      • Checkout: O(changes since last pull), typically a few kB
    • Fully cached
      • Image: 0
      • Checkout: O(changes since last pull)
    The differences are quite clear. Just by using CCache the build time drops by almost 80%. Persisting the build dir reduces the time by a further 15%. It turns out that having machines dedicated to a specific task can be a lot more efficient than rebuilding the universe from atoms every time. Fancy that.

    The final 2 minute improvement might not seem like that much, but on the other hand do you really want your developers to spend 2 minutes twiddling their thumbs for every merge request they create or update? I sure don't. Waiting for CI to finish is one of the most annoying things in software development.

      nibblestew.blogspot.com/2025/02/the-price-of-statelessness-is-eternal.html


      Sebastian Pölsterl: scikit-survival 0.24.0 released

      news.movim.eu / PlanetGnome • 26 February • 4 minutes

    It’s my pleasure to announce the release of scikit-survival 0.24.0.

    A highlight of this release is the addition of cumulative_incidence_competing_risks(), which implements a non-parametric estimator of the cumulative incidence function in the presence of competing risks. In addition, the release adds support for scikit-learn 1.6, including support for missing values in ExtraSurvivalTrees.

    Analysis of Competing Risks

    In classical survival analysis, the focus is on the time until a specific event occurs. If no event is observed during the study period, the time of the event is considered censored. A common assumption is that censoring is non-informative, meaning that censored subjects have a similar prognosis to those who were not censored.

    Competing risks arise when each subject can experience an event due to one of $K$ ($K \geq 2$) mutually exclusive causes, termed competing risks. Thus, the occurrence of one event prevents the occurrence of other events. For example, after a bone marrow transplant, a patient might relapse or die from transplant-related causes (transplant-related mortality). In this case, death from transplant-related mortality precludes relapse.

    The bone marrow transplant data from Scrucca et al., Bone Marrow Transplantation (2007) includes data from 35 patients grouped into two cancer types: Acute Lymphoblastic Leukemia (ALL; coded as 0), and Acute Myeloid Leukemia (AML; coded as 1).

    from sksurv.datasets import load_bmt

    bmt_features, bmt_outcome = load_bmt()
    diseases = bmt_features["dis"].cat.rename_categories(
        {"0": "ALL", "1": "AML"}
    )
    diseases.value_counts().to_frame()
    
    dis   count
    AML      18
    ALL      17

    During the follow-up period, some patients might experience a relapse of the original leukemia or die while in remission (transplant related death). The outcome is defined similarly to standard time-to-event data, except that the event indicator specifies the type of event, where 0 always indicates censoring.

    import pandas as pd

    status_labels = {
        0: "Censored",
        1: "Transplant related mortality",
        2: "Relapse",
    }
    risks = pd.DataFrame.from_records(bmt_outcome).assign(
        label=lambda x: x["status"].replace(status_labels)
    )
    risks["label"].value_counts().to_frame()
    
    label                         count
    Relapse                          15
    Censored                         11
    Transplant related mortality      9

    The table above shows the number of observations for each status.

    Non-parametric Estimator of the Cumulative Incidence Function

    If the goal is to estimate the probability of relapse, transplant-related death is a competing risk event. This means that the occurrence of relapse prevents the occurrence of transplant-related death, and vice versa. We aim to estimate curves that illustrate how the likelihood of these events changes over time.

    Let’s begin by estimating the probability of relapse using the complement of the Kaplan-Meier estimator. With this approach, we treat deaths as censored observations. One minus the Kaplan-Meier estimator provides an estimate of the probability of relapse before time $t$.
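
    In symbols (notation mine, not from the post), the Kaplan-Meier estimator is $\hat{S}(t) = \prod_{i:\, t_i \le t} \left(1 - d_i / n_i\right)$, where $d_i$ is the number of events at time $t_i$ and $n_i$ the number of subjects still at risk just before $t_i$; the curve plotted below is its complement, $1 - \hat{S}(t)$.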

    import matplotlib.pyplot as plt
    from sksurv.nonparametric import kaplan_meier_estimator

    times, km_estimate = kaplan_meier_estimator(
        bmt_outcome["status"] == 1, bmt_outcome["ftime"]
    )
    plt.step(times, 1 - km_estimate, where="post")
    plt.xlabel("time $t$")
    plt.ylabel("Probability of relapsing before time $t$")
    plt.ylim(0, 1)
    plt.grid()
    
    (Figure: bmt-kaplan-meier.svg — estimated probability of relapse over time, treating deaths as censored)

    However, this approach has a significant drawback: considering death as a censoring event violates the assumption that censoring is non-informative. This is because patients who died from transplant-related mortality have a different prognosis than patients who did not experience any event. Therefore, the estimated probability of relapse is often biased.

    The cause-specific cumulative incidence function (CIF) addresses this problem by estimating the cause-specific hazard of each event separately. The cumulative incidence function estimates the probability that the event of interest occurs before time $t$, and that it occurs before any of the competing causes of an event. In the bone marrow transplant dataset, the cumulative incidence function of relapse indicates the probability of relapse before time $t$, given that the patient has not died from other causes before time $t$.
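
    Written out (notation mine, not from the post; this is the standard non-parametric competing-risks estimator), the estimate for cause $k$ is $\hat{F}_k(t) = \sum_{i:\, t_i \le t} \hat{S}(t_{i-1}) \, d_{k,i} / n_i$, where $\hat{S}$ is the all-cause Kaplan-Meier estimator, $d_{k,i}$ the number of events of type $k$ at time $t_i$, and $n_i$ the number at risk just before $t_i$. Summing $\hat{F}_k(t)$ over all causes gives the total risk curve plotted below.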

    from sksurv.nonparametric import cumulative_incidence_competing_risks

    times, cif_estimates = cumulative_incidence_competing_risks(
        bmt_outcome["status"], bmt_outcome["ftime"]
    )
    plt.step(times, cif_estimates[0], where="post", label="Total risk")
    for i, cif in enumerate(cif_estimates[1:], start=1):
        plt.step(times, cif, where="post", label=status_labels[i])
    plt.legend()
    plt.xlabel("time $t$")
    plt.ylabel("Probability of event before time $t$")
    plt.ylim(0, 1)
    plt.grid()
    
    (Figure: bmt-cumulative-incidence.svg — cumulative incidence curves for relapse, transplant-related mortality, and the total risk)

    The plot shows the estimated probability of experiencing an event at time $t$ for both the individual risks and for the total risk.

    Next, we want to estimate the cumulative incidence curves for the two cancer types — acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) — to examine how the probability of relapse depends on the original disease diagnosis.

    _, axs = plt.subplots(2, 2, figsize=(7, 6), sharex=True, sharey=True)
    for j, disease in enumerate(diseases.unique()):
        mask = diseases == disease
        event = bmt_outcome["status"][mask]
        time = bmt_outcome["ftime"][mask]
        times, cif_estimates, conf_int = cumulative_incidence_competing_risks(
            event,
            time,
            conf_type="log-log",
        )
        for i, (cif, ci, ax) in enumerate(
            zip(cif_estimates[1:], conf_int[1:], axs[:, j]), start=1
        ):
            ax.step(times, cif, where="post")
            ax.fill_between(times, ci[0], ci[1], alpha=0.25, step="post")
            ax.set_title(f"{disease}: {status_labels[i]}", size="small")
            ax.grid()
    for ax in axs[-1, :]:
        ax.set_xlabel("time $t$")
    for ax in axs[:, 0]:
        ax.set_ylim(0, 1)
        ax.set_ylabel("Probability of event before time $t$")
    
    (Figure: bmt-cumulative-incidence-by-diagnosis.svg — cumulative incidence curves by diagnosis, with 95% pointwise confidence intervals)

    The left column shows the estimated cumulative incidence curves (solid lines) for patients diagnosed with ALL, while the right column shows the curves for patients diagnosed with AML, along with their 95% pointwise confidence intervals. The plot indicates that the estimated probability of relapse at $t=40$ days is more than three times higher for patients diagnosed with ALL compared to AML.

    If you want to run the examples above yourself, you can execute them interactively in your browser using Binder.

      k-d-w.org/blog/2025/02/scikit-survival-0.24.0-released/


      Michael Meeks: 2025-01-17 Friday

      news.movim.eu / PlanetGnome • 17 January

    • Up early, sync with Dave, Anuj, lunch with Julia, worked away at contractuals. Onto mail catch-up, and slides.

      meeksfamily.uk/~michael/blog/2025-01-17.html


      Michael Meeks: 2025-01-16 Thursday

      news.movim.eu / PlanetGnome • 16 January

    • Up too early; train - with Christian, sky-train, some data analysis on the plane, heathrow-express.
    • Home, read minutes of calls I missed: seems I should miss more calls; text review, dinner with the family. Worked after dinner, missed bible-study group, bed early.

      meeksfamily.uk/~michael/blog/2025-01-16.html


      Luis Villa: non-profit social networks: benchmarking responsibilities and costs

      news.movim.eu / PlanetGnome • 15 January • 4 minutes

    I’m trying to blog quicker this year. I’m also sick with the flu. Forgive any mistakes caused by speed, brevity, or fever.

    Monday brought two big announcements in the non-traditional (open? open-ish?) social network space, with Mastodon moving towards non-profit governance (asking for $5M in donations this year), and Free Our Feeds launching to do things around ATProto/Bluesky (asking for $30+M in donations).

    It’s a little too early to fully understand what either group will do, and this post is not an endorsement of specifics of either group—people, strategies, etc.

    Instead, I just want to say: they should be asking for millions.

    There’s a lot of commentary like this one floating around:

    I don’t mean this post as a critique of Jan or others. (I deliberately haven’t linked to the source, please don’t pile on Jan!) Their implicit question is very well-intentioned. People are used to very scrappy open source projects, so millions of dollars just feels wrong. But yes, millions is what this will take.

    What could they do?

    I saw a lot of comments this morning that boiled down to “well, people run Mastodon servers for free, what does anyone need millions for”? Putting aside that this ignores that any decently-sized Mastodon server has actual server costs (and great servers like botsin.space shut down regularly in part because of those), and treats the time and emotional trauma of moderation as free… what else could these orgs be doing?

    Just off the top of my head:

    • Moderation, moderation, moderation, including:
      • moderation tools, which by all accounts are brutally badly needed in Masto and would need to be rebuilt from scratch by FoF. (Donate to IFTAS !)
      • multi-lingual and multi-cultural, so you avoid the Meta trap of having 80% of users outside the US/EU but 80% of moderation in the US/EU.
    • Jurisdictionally-distributed servers and staff
      • so that when US VP Musk comes after you, there’s still infrastructure and staff elsewhere
      • and lawyers for this scenario
    • Good governance
      • which, yes, again, lawyers, but also management, coordination, etc.
      • (the ongoing WordPress meltdown should be a great reminder that good governance is both important and not free)
    • Privacy compliance
      • Mention “GDPR compliance” and “Mastodon” in the same paragraph and lots of lawyers go pale; doing this well would be a fun project for a creative lawyer and motivated engineers, but a very time-consuming one.
      • Bluesky has similar challenges, which get even harder as soon as meaningfully mirrored.

    And all that’s just to have the same level of service as currently.

    If you actually want to improve the software in any way, well, congratulations: that’s hard for any open source software, and it’s really hard when you are doing open source software with millions of users. You need product managers, UX designers, etc. And those aren’t free. You can get some people at a slight discount if you’re selling them on a vision (especially a pro-democracy, anti-harassment one), but in the long run you either need to pay near-market or you get hammered badly by turnover, lack of relevant experience, etc.

    What could that cost, $10?

    So with all that in mind, some benchmarks to help frame the discussion. Again, this is not to say that an ATProto- or ActivityPub-based service aimed at achieving Twitter or Instagram-levels of users should necessarily cost exactly this much, but it’s helpful to have some numbers for comparison.

    • Wikipedia: ( source )
      • legal: $10.8M in 2023-2024 (and Wikipedia plays legal on easy mode in many respects relative to a social network—no DMs, deliberately factual content, sterling global brand)
      • hosting: $3.4M in 2023-2024 (that’s just hardware/bandwidth, doesn’t include operations personnel)
    • Python Package Index
      • $20M/year in bandwidth from Fastly in 2021 ( source ) (packages are big, but so is social media video, which is table stakes for a wide-reaching modern social network)
    • Twitter
      • operating expenses, not including staff , of around $2B/year in 2022 ( source )
    • Signal
    • Content moderation
      • Hard to get useful information on this on a per company basis without a lot more work than I want to do right now, but the overall market is in the billions ( source ).
      • Worth noting that lots of the people leaving Meta properties right now are doing so in part because tens of thousands of content moderators, paid unconscionably low wages , are not enough .

    You can handwave all you want about how you don’t like a given non-profit CEO’s salary, or you think you could reduce hosting costs by self-hosting, or what have you. Or you can push the high costs onto “volunteers”.

    But the bottom line is that if you want there to be a large-scale social network, even “do it as cheap as humanly possible” is millions of costs borne by someone .

    What this isn’t

    This doesn’t mean “give the proposed new organizations a blank check”. As with any non-profit, there’s danger of over-paying execs, boards being too cozy with execs and not moving them on fast enough, etc. ( Ask me about founder syndrome sometime !) Good governance is important.

    This also doesn’t mean I endorse Bluesky’s VC funding; I understand why they feel they need money, but taking that money before the techno-social safeguards they say they want are in place is begging for problems. (And in fact it’s exactly because of that money that I think Free Our Feeds is intriguing—it potentially provides a non-VC source of money to build those safeguards.)

    But we have to start with a realistic appraisal of the problem space. That is going to mean some high salaries to bring in talented people to devote themselves to tackling hard, long-term, often thankless problems, and lots of data storage and bandwidth.

    And that means, yes, millions of dollars.

      lu.is/2025/01/social-network-costs/


      Hans de Goede: IPU6 camera support status update

      news.movim.eu / PlanetGnome • 14 January • 1 minute

    The initial IPU6 camera support that landed in Fedora 41 only works on a limited set of laptops. The reason for this is that with MIPI cameras every different sensor and glue chip, like IO-expanders, needs to be supported separately.

    I have been working on making the camera work on more laptop models. After receiving and sending many emails and blog post comments about this I have started filing Fedora bugzilla issues on a per sensor and/or laptop-model basis to be able to properly keep track of all the work.

    Currently the following issues are either being actively worked on or are being tracked to be fixed in the future.

    Issues which have fixes pending (review) upstream:


    Open issues with various states of progress:

    See all the individual bugs for more details. I plan to post semi-regular status updates on this on my blog.

    The above list of issues can also be found on my Fedora 42 change proposal tracking this, and I intend to keep an updated complete list of all x86 MIPI camera issues (including closed ones) there.




      Jussi Pakkanen: Measuring code size and performance

      news.movim.eu / PlanetGnome • 14 January • 2 minutes

    Are exceptions faster and/or bloatier than using error codes? Well...

    The traditional wisdom is that exceptions are faster when not taken, slower when taken, and lead to more bloated code. On the other hand there are cases where using exceptions makes code a lot smaller, even in embedded development, where code size is often the limiting factor.

    Artificial benchmarks aside, measuring the effect on real world code is fairly difficult. Basically you'd need to implement the exact same, nontrivial piece of code twice. One implementation would use exceptions, the other would use error codes but they should be otherwise identical. No-one is going to do that for fun or even idle curiosity.

    CapyPDF has been written exclusively using C++ 23's new expected object for error handling. As every Go programmer knows, typing error checks over and over again is super annoying. Very early on I wrote macros to autopropagate errors. That raises an interesting question, namely could you commit horrible macro crimes to make the error handling use either error objects or exceptions?

    It turns out that yes, you can. After a thorough scrubbing of the ensuing shame from your body and soul you can start doing measurements. To get started I built and ran CapyPDF's benchmark application with the following option combinations:

    • Optimization -O1, -O2, -O3, -Os
    • LTO enabled and disabled
    • Exceptions enabled and disabled
    • RTTI enabled and disabled
    • NDEBUG enabled and disabled
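
    A driver for that option matrix might look roughly like this (a sketch using Meson's built-in options; the build directory naming, library file name and benchmark binary are hypothetical placeholders, not from the post):

    import itertools
    import subprocess
    import time
    from pathlib import Path

    opt_levels = ["1", "2", "3", "s"]
    toggles = list(itertools.product([True, False], repeat=4))  # lto, exceptions, rtti, ndebug

    for opt, (lto, exc, rtti, ndebug) in itertools.product(opt_levels, toggles):
        builddir = f"build-O{opt}-lto{int(lto)}-exc{int(exc)}-rtti{int(rtti)}-ndbg{int(ndebug)}"
        subprocess.run([
            "meson", "setup", builddir,
            f"-Doptimization={opt}",
            f"-Db_lto={str(lto).lower()}",
            f"-Dcpp_eh={'default' if exc else 'none'}",
            f"-Dcpp_rtti={str(rtti).lower()}",
            f"-Db_ndebug={str(ndebug).lower()}",
        ], check=True)
        subprocess.run(["meson", "compile", "-C", builddir], check=True)
        lib = Path(builddir) / "libcapypdf.so"                   # hypothetical library name
        subprocess.run(["strip", str(lib)], check=True)
        start = time.monotonic()
        subprocess.run([f"./{builddir}/benchmark"], check=True)  # hypothetical benchmark binary
        print(builddir, lib.stat().st_size, f"{time.monotonic() - start:.2f} s")
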
    The measurements are the stripped size of the resulting shared library and the runtime of the test executable. The code and full measurement data can be found in this repo. The code size breakdown looks like this:

    Performance goes like this:

    Some interesting things to note:

    • The fastest runtime is 0.92 seconds with O3-lto-rtti-noexc-ndbg
    • The slowest is 1.2 s with Os-nolto-rtti-noexc-ndbg
    • If we ignore Os, the slowest is 1.07 s with O1-nolto-rtti-noexc-ndbg
    • The largest code is 724 kB with O3-nolto-nortti-exc-nondbg
    • The smallest is 335 kB with Os-lto-nortti-noexc-ndbg
    • Ignoring Os, the smallest is 470 kB with O1-lto-nortti-noexc-ndbg
    Things noticed via eyeballing

    • Os leads to noticeably smaller binaries at the cost of performance
    • O3 makes binaries a lot bigger in exchange for a fairly modest performance gain
    • NDEBUG makes programs both smaller and faster, as one would expect
    • LTO typically improves both speed and code size
    • The fastest times for O1, O2 and O3 are within a few percentage points of each other with 0.95, 0.94 and 0.92 seconds, respectively

    Caveats

    At the time of writing the upstream code uses error objects even when exceptions are enabled. To replicate these results you need to edit the source code.

    The benchmark does not actually raise any errors. This test only measures the golden path.

    The tests were run with GCC 14.2 on x86_64 Ubuntu 24.10.

      nibblestew.blogspot.com/2025/01/measuring-code-size-and-performance.html


      Georges Basile Stavracas Neto: Flatpak 1.16 is out!

      news.movim.eu / PlanetGnome • 14 January • 5 minutes

    Last week I published the Flatpak 1.16.0 release. This marks the beginning of the 1.16 stable series.

    This release comes after more than two years since Flatpak 1.14, so it’s pretty packed with new features, bug fixes, and improvements. Let’s have a look at some of the highlights!

    USB & Input Devices

    Two new features are present in Flatpak 1.16 that improve the handling of devices:

    • The new input device permission
    • Support for USB listing

    The first, while technically still a sandbox hole that should be treated with caution, allows some apps to replace --device=all with --device=input, which has a far smaller surface. This is interesting in particular for apps and games that use joysticks and controllers, as these are usually exported by the kernel under /dev/input.

    The second is likely the biggest new feature in the Flatpak release! It allows Flatpak apps to list which USB devices they intend to use. This is stored as static metadata in the app, which is then used by XDG Desktop Portal to notify the app about plugs and unplugs, and eventually request the user for permission.

    Using the USB portal, Flatpak apps are able to list the USB devices that they have permission to list (and only them). Actually accessing these USB devices triggers a permission request where the user can allow or deny the app from having access to the device.

    Finally, it is possible to forcefully override these USB permissions locally with the --usb and --nousb command-line arguments.

    This should make the USB access story fairly complete. App stores like Flathub are able to review the USB permissions ahead of time, before the app is published, and see if they make sense. The portal usage prevents apps from accessing devices behind the user’s back. And users are able to control these permissions locally even further.

    Better Wayland integration

    Flatpak 1.16 brings a handful of new features and improvements that should deepen its integration with Wayland.

    Flatpak now creates a private Wayland socket with the security-context-v1 extension if available. This allows the Wayland compositor to properly identify connections from sandboxed apps as belonging to the sandbox.

    Specifically, with this protocol, Flatpak is able to securely tell the Wayland compositor that (1) the app is a Flatpak-sandboxed app, (2) an immutable app id, and (3) the instance id of the app. None of these bits of information can be modified by apps themselves.

    With this information, compositors can implement unique policies and have tight control over security.

    Accessibility

    Flatpak already exposes enough of the accessibility stack for most apps to be able to report their accessible contents. However, not all apps are equal, and some require rather challenging setups with the accessibility stack.

    One big example here is the WebKit web engine. It basically pushes Flatpak and portals to their limit, since each tab is a separate process. Until now, apps that use WebKit – such as GNOME Web and Newsflash – were not able to have the contents of the web pages properly exposed to the accessibility stack. That means things like screen readers wouldn’t work there, which is pretty disappointing.

    Fortunately a lot of work was put into this, and now Flatpak has all the pieces of the puzzle to make such apps accessible. These improvements also allow apps to detect when screen readers are active, and optimize for that.

    WebKit is already adapted to use these new features when they’re available. I’ll be writing about this in more detail in a future series of blog posts.

    Progress Reporting

    When installing Flatpak apps through the command-line utility, it already shows a nice fancy progress bar with block characters. It looks nice and gets the job done.

    However, terminals may have support for an OSC escape sequence to report progress. Christian Hergert wrote about it here. Christian also went ahead and introduced support for emitting the progress escape sequence in Flatpak. Here’s an example:

    (Screenshot: the terminal app Ptyxis showing a progress bar)

    Unfortunately, right before the release, it was reported that this new feature was spamming some terminal emulators with notifications. These terminals (kitty and foot) have since been patched, but older LTS distributions probably won’t upgrade. That forced us to make it opt-in for now, through the FLATPAK_TTY_PROGRESS environment variable.

    Ptyxis (the terminal app above) automatically sets this environment variable so it should work out of the box. Users can set this variable on their session to enable the feature. For the next stable release (Flatpak 1.18), assuming terminals cooperate on supporting this feature, the plan is to enable it by default and use the variable for opting out.

    Honorable Mentions

    I simply cannot overstate how many bugs were fixed in Flatpak in all these releases.

    We had 13 unstable releases (the 1.15.X series) until we finally released 1.16 as a stable release. A variety of small memory leaks and build warnings were fixed.

    The gssproxy socket is now shared with apps, which acts like a portal for Kerberos authentication. This lets apps use Kerberos authentication without needing a sandbox hole.

    Flatpak now tries to pick languages from the AccountsService service, making it easier to configure extra languages.

    Obsolete driver versions and other autopruned refs are now automatically removed, which should help keep things tight and clean, and reduce the installed size.

    If the timezone is set through the TZDIR environment variable, Flatpak takes timezone information from there. This should fix apps with the wrong timezone in NixOS systems.

    More environment variables are now documented in the man pages.

    This is the first stable release of Flatpak that can only be built with Meson. Autotools served us honorably for the past decades, but it was time to move to something more modern. Meson has been a great option for a long time now. Flatpak 1.16 limits itself to require a fairly old version of Meson, which should make it easy to distribute on old LTS distributions.

    Finally, the 1.10 and 1.12 series have now reached their end of life, and users and distributions are encouraged to upgrade to 1.16 as soon as possible. During this development cycle, four CVEs were found and fixed; all of these fixes were backported to the 1.14 series, but not all were backported to versions older than that. So if you’re using Flatpak 1.10 or 1.12, be aware that you’re at your own risk.

    Future

    The next milestone for the platform is a stable XDG Desktop Portal release. This will ship with the aforementioned USB portal, as well as other niceties for apps. Once that’s done, and after a period of receiving bug reports and fixing them, we can start thinking about the next goals for these projects.

    These are important parts of the platform, and are always in need of contributors. If you’re interested in helping out with development, issue management, coordination, developer outreach, and/or translations, please reach out to us in the following Matrix rooms:

    Acknowledgements

    Thanks to all contributors, volunteers, issue reporters, and translators that helped make this release a reality. In particular, I’d like to thank Simon McVittie for all the continuous maintenance, housekeeping, reviews, and coordination done on Flatpak and adjacent projects.

      feaneron.com/2025/01/14/flatpak-1-16-is-out/