

      Pablo Correa Gomez: Analysis of GNOME Foundation’s public economy: concerns and thoughts

      news.movim.eu / PlanetGnome • 20 May, 2024 • 7 minutes

    Apart from software development, I also have an interest in governance and finances. Therefore, last July, I was quite happy to attend my first Annual General Meeting (AGM), which took place at GUADEC in Riga. I was a bit surprised by the format, as I was expecting something closer to an assembly than to a presentation with a Q&A at the end. It was still interesting to witness, but I was even more shocked by the huge negative cash flow (the difference between revenue and expenditures). With the numbers presented, the foundation had lost approximately 650 000 USD in the 2021 exercise, and 300 000 USD in the 2022 exercise. And nobody seemed worried about it. I would have expected such a difference to be the consequence of a great investment aimed at improving the foundation’s situation long-term. However, nothing like that was part of the AGM. This left me thinking, and a bit worried about what was going on with the financials and organization of the foundation. After asking a member of the Board in private, and getting no satisfactory response, I started doing some investigation.

    Public information research

    The GNOME Foundation (legally GNOME Foundation Inc) has 501(c)(3) status, which means it is tax exempt. As part of that status, the tax payments, economic status and whereabouts of the GNOME Foundation Inc are public. So I had a look at the tax filings of the last years. These contain detailed information about income and expenses, net assets (e.g. money in bank accounts), compensation of the Board, Executive Director, and key employees, the amount of money spent on fulfilling the goals of the foundation, and lots of other things. Despite their broad scope, the tax filings are not very hard to read, and it’s easy to learn how much money the foundation made or spent. Looking at the details, I found several worrying things: the revenue and expenses in the Annual Report presented at the AGM did not match those in the tax reports, most expenses were aggregated in sections that required no explanation, and some explanations for expenses that were required were missing. So I moved on to open a confidential issue with the Board Team in GitLab expressing my concerns.

    The answer mostly covered an explanation of the big deficits in the previous years (which would have been great to have in the Annual Report), but was otherwise generally disappointing. Most of my concerns (all of which are detailed below) were answered with nicely-written variations of “that’s a small problem, we are aware of it and working on it”, or “this is not common practice and you can find unrelated information in X place”. Six months have passed, a new tax statement and annual report are available, but the problems persist. So I am sharing my concerns publicly, with several goals:

    • Make these concerns available to the general GNOME community. Even though everything I am presenting comes from public sources, it is burdensome to research, and requires some level of experience with bureaucracy.
    • Show my interest in the topic, as I plan to stand for the Board of Directors in the next elections. My goal is to join the Finance Committee and help improve the transparency and efficiency of accounting.
    • Make the Board aware of my concerns (and hopefully show that others share them too), so things can be improved regardless of whether or not I am elected to the Board.

    Analysis of data and concerns

    The first analysis I did some months ago was not very detailed, and quite manual. This time, I gathered information in more detail, and compiled it into a spreadsheet that I have made publicly available. All the numbers are taken from GNOME’s Annual Reports and from the tax declarations available on ProPublica. I am very happy to have those values reviewed, as there could always be mistakes. I am still fairly certain that small errors won’t change my concerns, since those are based on patterns and not on one-time problems. So, to my concerns:

    • Non-matching values between reports and taxes: in the last 3 years, for revenue and expenses, only the revenue presented for Fiscal Year 2021/2022 matches what was actually declared. For the rest, the differences vary, but go up to close to 9%. I was told that some difference is expected (as these numbers are prepared a bit earlier than the taxes), that the Board had worked on it, and that the last year (the only one with at least revenue matching) is certainly better. But there is still something like 18 000 USD of mismatch in expenses. For me, this is a clear sign that something is going wrong with the accounting of the foundation, even if it improved in the last year.
    • Non-matching values between reports from different years: each Annual Report contains not only the results for that year, but also those from the previous one. However, the numbers only match half of the time. This is still the case for the latest report in 2023, where suddenly 10 000 USD disappeared from 2022’s expenses, growing the difference from what was declared that year to 27 000 USD. This again shows accounting issues, as previous years’ numbers should certainly not diverge even further from the tax declarations than the initial numbers did.
    • Impossibility of matching tax declarations and Annual Reports: the way the annual reports are presented makes it impossible to get a more detailed picture of how expenses and revenue are split. For example, more than 99% of the revenue in 2023 is grouped under a single tax category, while the previous year at least 3 were used. However, the split in the Annual Reports remains roughly the same. So either the accounting is wrong in one of those years, or the split of expenses for the Annual Report was crafted from different data sources. Another example is how “Staff” makes up the greatest expense until it ceases to exist in the latest report. However, staff-related expenses in the taxes do not account for the “Staff” expense in the reports. Chances are that part of that is due to subcontracting, and thus counted under “Fees for services, Other” in the taxes. Unfortunately, that category has its own issues.
    • Missing information in the tax declaration: most remarkably, in the tax filings for fiscal years 2020/2021 and 2021/2022, the category “Fees for services, Other” represents more than 10% of the expenses, in which case the form clearly states it should be explained in a later part of the filing. However, it is not. I was told 6 months ago that this might have to do with some problem with ProPublica not getting the data, and that they would try to fix it. But I was not provided with the information, and 6 months later the public tax filings still have not been amended.
    • Lack of transparency on expenses:
      • First, in the last 2 tax filings, more than 50% of expenses fall under “Other salaries and wages” and “Fees for services, Other”. These fields do not provide enough transparency (maybe they would if the previous point were addressed), which means most of the expenses effectively go unexplained.
      • Second, in the Annual Reports. For the previous 2 years, the biggest expense was by far “Staff”. There exists a website with the staff and their roles, but there is no clear explanation of which money goes to whom or why. This can be a great problem if some part of the community does not feel supported in its affairs by the foundation. Compare, for example, with Mastodon’s Annual Report, where everybody on the payroll or freelancing is accounted for, with how much they earn written down. This is made worse since the current year’s Annual Report has completely removed that category in favor of others. The tax filings (once available) will, however, provide more context if proper explanations regarding “Fees for services, Other” are finally available.
    • Different categories and reporting formats: the reporting format changed completely in 2021/2022 compared to previous years, and changed completely again this year. This is a severe issue for transparency, since continuously changing formats make it hard to compare between years (which, as noted above, is useful!). One can of course understand that things need to be updated in order to improve, but such drastic changes do not help with transparency.

    There are certainly other small things that caught my attention. However, I hope these examples are enough to get my point across, and there is no need to make this blog post even longer!

    Conclusions

    My main conclusion from the analysis is that the foundation’s accounting and decision-making regarding expenses have been sub-par in the last years. It is also a big issue that there is a huge lack of transparency regarding the economic status and decision-making of the foundation. I learned more about the economic status of the foundation by reading tax filings than by reading Annual Reports. Unfortunately, opening an issue with the Board six months ago to share these concerns has not made things better. It could well be that things are much better than they look from the outside, but the lack of transparency makes it impossible to tell. I hope that I can join the Finance Committee, and help address these issues in the short term!


      blogs.gnome.org/pabloyoyoista/2024/05/19/analysis-of-gnome-foundation-public-economic-concerns-and-thoughts/


      Sam Thursfield: Status update, 19/05/2024 – GNOME OS and more

      news.movim.eu / PlanetGnome • 19 May, 2024 • 4 minutes

    Seems this is another of those months where I did enough stuff to merit two posts (see yesterday’s post on async Rust). Sometimes you just can’t get out of doing work, no matter how you try. So here is part 2.

    A few weeks ago I went to the USA for a week to meet a client team who I’ve been working with since late 2022. This was actually the first time I had left Europe since 2016*. It’s wild how a euro is now pretty much equal in value to a US dollar, yet everything costs about double compared to Europe. It was fun though, and good practice for another long trip to the Denver GUADEC in July.

    * The UK is still part of Europe, it hasn’t physically moved, has it?

    GNOME OS stuff

    The GNOME OS project has at least 3 active maintainers and a busy Matrix room, which makes it fairly healthy as GNOME modules go. There’s no ongoing funding for maintenance though, and everyone who contributes is doing so mostly as a volunteer — at least, as far as I’m aware. So there are plenty of plans and ideas for how it could develop, but many of them are incomplete and nobody has the free time to push them to completion.

    We recently announced some exciting collaboration between Codethink, GNOME and the Sovereign Tech Fund. This stint of full-time work will help complete several in-progress tasks. Particularly interesting to me is finishing the migration to systemd-sysupdate (issue 832), and creating a convenient developer workflow and supporting tooling (issue 819) so we can finally kill jhbuild. Plus, of course, making the openQA tests great again.

    Getting to the point where we could start work took a lot of effort, most of which isn’t visible to the outside world. Discussions go back at least to November 2023. Several people worked over months on scoping, estimates and contracts before any of the engineering work started: Sonny Piers working to represent GNOME, and on the Codethink side, Jude Onyenegecha and Weyman Lo, along with Abderrahim Kitouni and Javier Jardón (who are really playing for both teams ;-).

    I’m not working directly on the project, but I’m helping out where I can on the communications side. We have at least 3 IRC + Matrix channels where communication happens every day, each with a different subset of people, and documentation is scattered all over the place. Some of the Codethink team are seasoned GNOME contributors, others are not, and the collaborative nature of the GNOME OS project – there is no “BDFL” figure who takes all the decisions – means it’s hard to get clear answers around how things should be implemented.

    You can read more about the current work on the Codethink blog: GNOME OS and systemd-sysupdate. The team will hopefully be posting regular progress updates to This Week In GNOME, and Martín Abente Lahaye (who very recently joined the team on the Codethink side \o/) is opening public discussions around the next generation developer experience for GNOME modules – keep your eyes on GNOME Discourse for that.

    Tiny SPARQL, Twinql, Sqlite-SPARQL, etc.

    We’re excited to welcome Demigod and Rachel to the GNOME community, working on a SPARQL web IDE as part of Google Summer of Code 2024.

    Since this is hopefully going to shine a new light on the SPARQL database project, it seems like a good opportunity to start referring to it by a better name than “Tracker SPARQL”, even if we aren’t going to actually rename the whole API and release 4.0 any time soon.

    There are a few name ideas already, the front runners being Tiny SPARQL and Twinql; I still can’t quite decide which I prefer. The former is unique but rather utilitarian, while the latter is a nicer name but is already used by a few other (mostly abandoned) projects. Which do you prefer? Let me know in the comments.


    Minilogues and Minifreaks

    I picked up a couple of hardware synthesizers, the Minilogue XD and the Minifreak. I was happy for years with my OP-1 synth, but after 6 years of use it has so many faults as to be unplayable, and replacing it would cost more than a second-hand car. Plus, it’s a little too tiny for on-stage use.

    The Minilogue XD is one of the only mainstream synths to have an open SDK for custom oscillators and effects; full respect to Korg for their forward thinking here … although their Linux tooling is a closed-source binary with a critical bug that they won’t fix, so there is still some way to go before they get 10/10 for openness.

    The Minifreak, by contrast, has a terrible Windows-only firmware update system, which works so poorly that I already had to return the synth to Arturia once after a firmware update caused it to brick itself. There’s a stark lesson here about having open protocols, which hopefully Arturia can pick up on. This synth has absolutely incredible sound design capabilities though, so I decided to keep it and just avoid ever updating the firmware.

    Here’s a shot of the Minifreak next to another mini freak:


      Allan Day: GNOME maintainers: here’s how to keep your issue tracker in good shape

      news.movim.eu / PlanetGnome • 17 May, 2024 • 2 minutes

    One of the goals of the new GNOME project handbook is to provide effective guidelines for contributors. Most of the guidelines are based on recommendations that GNOME already had, which were then improved and updated. These improvements were based on input from others in the project, as well as by drawing on recommendations from elsewhere.

    The best example of this effort was around issue management. Before the handbook, GNOME’s issue management guidelines were seriously out of date, and were incomplete in a number of areas. Now we have shiny new issue management guidelines which are full of good advice and wisdom!

    The state of our issue trackers matters. An issue tracker with thousands of open issues is intimidating to a new contributor. Likewise, lots of issues without a clear status or resolution make it difficult for potential contributors to know what to do. My hope is that, with effective issue management guidelines, GNOME can improve the overall state of its issue trackers.

    So what magic sauce does the handbook recommend to turn an out of control and burdensome issue tracker into a source of calm and delight, I hear you ask? The formula is fairly simple:

    • Review all incoming issues, and regularly conduct reviews of old issues, in order to weed out reports which are ambiguous, obsolete, duplicates, and so on
    • Close issues which haven’t seen activity in over a year
    • Apply the “needs design” and “needs info” labels as needed
    • Close issues that have been labelled “needs info” for 6 weeks
    • Issues labelled “needs design” get closed after 1 year of inactivity, like any other
    • Recruit contributors to help with issue management

    To some readers this is probably controversial advice, and likely conflicts with their existing practice. However, there’s nothing new about these issue management procedures. The current incarnation has been in place since 2009, and some aspects of them are even older. Also, personally speaking, I’m of the view that effective issue management requires taking a strong line (being strong doesn’t mean being impolite, I should add – quite the opposite). From a project perspective, it is more important to keep the issue tracker focused than it is to maintain a database of every single tiny flaw in its software.

    The guidelines definitely need some more work. There will undoubtedly be some cases where an issue needs to be kept open despite it being untouched for a year, for example, and we should figure out how to reflect that in the guidelines. I also feel that the existing guidelines could be simplified, to make them easier to read and consume.

    I’d be really interested to hear what changes people think are necessary. It is important for the guidelines to be something that maintainers feel that they can realistically implement. The guidelines are not set in stone.

    That said, it would also be awesome if more maintainers were to put the current issue management guidelines into practice in their modules. I do think that they represent a good way to get control of an issue tracker, and this could be a really powerful way for us to make GNOME more approachable to new contributors.


      blogs.gnome.org/aday/2024/05/17/gnome-maintainers-heres-how-to-keep-your-issue-tracker-in-good-shape/


      Andy Wingo: on hoot, on boot

      news.movim.eu / PlanetGnome • 16 May, 2024 • 5 minutes

    I realized recently that I haven’t been writing much about the Hoot Scheme-to-WebAssembly compiler . Upon reflection, I have been too conscious of its limitations to give it verbal tribute, preferring to spend each marginal hour fixing bugs and filling in features rather than publicising progress.

    In the last month or so, though, Hoot has gotten to a point that pleases me. Not to the point where I would say “accept no substitutes” by any means, but good already for some things, and worth writing about.

    So let’s start today by talking about bootie. Boot, I mean! The boot, the boot, the boot of Hoot.

    hoot boot: temporal tunnel

    The first axis of boot is time. In the beginning, there was nary a toot, and now, through boot, there is Hoot.

    The first boot of Hoot was on paper. Christine Lemmer-Webber had asked me, ages ago, what I thought Guile should do about the web. After thinking a bit, I concluded that it would be best to avoid compromises when building an in-browser Guile: if you have to pollute Guile to match what JavaScript offers, you might as well program in JavaScript. JS is cute of course, but Guile is a bit different in some interesting ways, the most important of which is control: delimited continuations, multiple values, tail calls, dynamic binding, threads, and all that. If Guile’s web bootie doesn’t pack all the funk in its trunk, probably it’s just junk.

    So I wrote up a plan, something to which I attributed the name tailification. In retrospect, this is simply a specific flavor of a continuation-passing-style (CPS) transmutation, late in the compiler pipeline. I’ll elocute more in a future dispatch. I did end up writing the tailification pass back then; I could have continued to target JS, but it was sufficiently annoying and I didn’t prosecute. It sat around unused for a few years, until Christine’s irresistible charisma managed to conjure some resources for Hoot.

    In the meantime, the GC extension for WebAssembly shipped (woot woot!), and to boot Hoot, I filled in the missing piece: a backend for Guile’s compiler that tailified and then translated primitive operations to snippets of WebAssembly.

    It was, well, hirsute, but cute and it did compute, so we continued to boot. From this root we grew a small run-time library, written in raw WebAssembly, used for slow-paths for the various primitive operations that are part of Guile’s compiler back-end. We filled out Guile primcalls, in minute commits, growing the WebAssembly runtime library and toolchain as we went.

    Eventually we started constituting facilities defined in terms of those primitives, via a Scheme prelude that was prepended to all programs, within a nested lexical environment. It was never our intention though to drown the user’s programs in a sea of predefined bindings, as if the ultimate program were but a vestigial inhabitant of the lexical lake—don’t dilute the newt!, we would often say [ed: we did not] — so eventually when the prelude became unmanageable, we finally figured out how to do whole-program compilation of a set of modules .

    Then followed a long month in which I would uproot the loot from the boot: take each binding from the prelude and reattribute it to an appropriate module. User code could import all the modules that suit, as long as they were known to Hoot, but no others; it was not until we added the ability for users to programmatically constitute an environment from their modules that Hoot became a language implementation of any repute.

    Which brings us to the work of the last month, about which I cannot be mute. When you have existing Guile code that you want to distribute via the web, Hoot required you to transmute its module definitions into the more precise R6RS syntax. Precise, meaning that R6RS modules are static, in a way that Guile modules, at least in absolute terms, are not: Guile programs can use first-class accessors on the module system to pull out bindings. This is yet another example of what I impute as the original sin of 1990s language development, that modules are just mutable hash maps. You see it in Python, for example: because you don’t know for sure to what values global names are bound, it is easy for any discussion of what a particular piece of code means to end in dispute.

    The question is, though, are the semantics of name binding in a language fixed and absolute? Once your language is booted, are its aspects definitively attributed? I think some perfection, in the sense of becoming more perfect or more like the thing you should be, is something to salute. Anyway, in Guile it would be coherent with Scheme’s lexical binding heritage to restitute some certainty as to the meanings of names, at least in a default compilation mode. Lexical binding is, after all, the foundation of the Macro Writer’s Statute of Rights. Of course if you are making a build for development purposes, not to distribute, then you might prefer a build that marks all bindings as dynamic. Otherwise I think it’s reasonable to require the user to explicitly indicate which definitions are denotations, and which constitute locations.

    Hoot therefore now includes an implementation of the static semantics of Guile’s define-module: it can load Guile modules directly, and as a tribute, it also has an implementation of the ambient (guile) module that constitutes the lexical soup of modules that aren’t #:pure. (I agree, it would be better if all modules were explicit about the language they are written in—their imported bindings and so on—but there is an existing corpus to accommodate; the point is moot.)

    The astute reader (whom I salute!) will note that we have a full boot: Hoot is a Guile. Not an implementation to substitute the original, but more of an alternate route to the same destination. So, probably we should scoot the two implementations together, to knock their boots, so to speak, merging the offshoot Hoot into Guile itself.

    But do I circumlocute: I can only plead a case of acute Hoot. Tomorrow, we elocute on a second axis of boot. Until then, happy compute!


      wingolog.org/archives/2024/05/16/on-hoot-on-boot


      Sam Thursfield: Status update, 16/05/2024 – Learning Async Rust

      news.movim.eu / PlanetGnome • 16 May, 2024 • 6 minutes

    This is another month where too many different things happened to stick them all in one post together. So here’s a ramble on Rust, and there’s more to come in a follow up post.

    I first started learning Rust in late 2020. It took 3 attempts before I could start to make functional command-line apps, and the current outcome of this is the ssam_openqa tool, which I work on partly to develop my Rust skills. This month I worked on some intrusive changes to finally start using async Rust in the program.

    How it started

    Out of all the available modern languages I might have picked to learn, I picked Rust partly for the size and health of its community: every community has its issues, but Rust has no “BDFL” figure and no one corporation that employs all the core developers, both signs of a project that can last a long time. Look at GNOME, which is turning 27 this year.

    Apart from the community, learning Rust improved the way I code in all languages, by forcing more risks and edge cases to the surface and making me deal with them explicitly in the design. The ecosystem of crates has most of what you could want (although there is quite a lot of experimentation and therefore “churn”, compared to older languages). It’s kind of addictive to know that when you’ve resolved all your compile time errors, you’ll have a program that reliably does what you want.

    There are still some blockers to me adopting Rust everywhere I work (besides legacy codebases). The “cycle time” of the edit+compile+test workflow has a big effect on my happiness as a developer. The fastest incremental build of my simple CLI tool is 9 seconds, which is workable, and when there are compile errors (i.e. most of the time) it’s usually even faster. However, a release build might take 2 minutes. This is for 3000 lines of code with 18 dependencies. I am wary of embarking on a larger project in Rust where the cycle time could be problematically slow.

    Binary size is another thing, although I’ve learned several tricks to keep ssam_openqa at “only” 1.8MB: use a minimal arg parser library instead of clap; use minreq for HTTP; follow the min-sized-rust guidelines. It’s easy to pull in one convenient dependency that brings in a tree of 100 more things, unless you are careful. (This is a problem for C programmers too, but dependency handling in C is traditionally so horrible that we are already conditioned to avoid too many external helper libraries.)
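    For reference, this is the kind of release-profile tuning those guidelines suggest in Cargo.toml (a sketch; the exact trade-offs depend on the project):

    [profile.release]
    opt-level = "z"     # optimize for size rather than speed
    lto = true          # whole-program link-time optimization
    codegen-units = 1   # better optimization at the cost of compile time
    panic = "abort"     # drop the unwinding machinery
    strip = true        # strip symbols from the binary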

    The third thing I’ve been unsure about until now is async Rust. I never immediately liked the model used by Rust and Python of having a complex event loop hidden in the background, and a magic async keyword that completely changes how a function is executed and requires all other functions to be async too, such that you effectively have two *different* languages: the async variant and the sync variant. When writing library code you might need to provide two completely different APIs to do the same thing, one async and one sync.

    That said, I don’t have a better idea for how to do async.

    Complicating matters in Rust are the error messages, which can be mystifying if you hit an edge case (see below for where this bit me). So until now I learned to just use thread::spawn for background tasks, with a std::sync::mpsc channel to pass messages back to the main thread, and use blocking IO everywhere. I see other projects doing the same.
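    As a minimal sketch (the names here are illustrative, not taken from ssam_openqa), that thread-plus-channel pattern looks something like this:

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // Channel for passing messages back to the main thread.
        let (tx, rx) = mpsc::channel();

        thread::spawn(move || {
            // Do blocking work here (IO, subprocesses, etc.),
            // then report the result back.
            tx.send("work finished").unwrap();
        });

        // The main thread blocks until a message arrives.
        println!("{}", rx.recv().unwrap());
    }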

    How it’s going

    My blissful ignorance came to an end due to changes in a dependency. I was using the websocket crate in ssam_openqa, which embeds its own async runtime so that callers can use a blocking interface in a thread. I guess this is seen as a failed experiment, as the library is now “sluggishly” maintained and the developers recommend tungstenite instead.

    Tungstenite seems unusable from sync code for anything more than toy examples; you need an async wrapper such as async-tungstenite. So, I thought, I would need to port my *whole codebase* to use an async runtime and an async main loop.

    I tried, and spent a few days lost in a forest of compile errors, but it’s never the correct approach to try and port code “in one shot” and without a plan. To make matters worse, websocket-rs embeds an *old* version of Rust’s futures library. Nobody told me, but there is “futures 0.1” and “futures 0.3”. Only the latter works with the await keyword; if you await a future from futures 0.1, you’ll get an error about it not implementing the expected trait. The docs don’t give any clues about this; eventually I discovered the Compat01As03 wrapper, which lets you convert types from futures 0.1 to futures 0.3. Hopefully you never have to deal with this, as you’ll only see futures 0.1 on libraries with outdated dependencies, but now you know.
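    For reference, here is a hedged sketch of that conversion. It assumes the futures 0.3 crate with its “compat” feature enabled, and the old 0.1 crate imported under the name futures01 in Cargo.toml; the details will vary by project:

    use futures::compat::Future01CompatExt;

    // Awaiting a futures 0.1 future from async code.
    async fn await_old_future(fut01: impl futures01::Future<Item = u32, Error = ()>) {
        // .compat() wraps the 0.1 future in Compat01As03, which
        // implements std::future::Future and can therefore be awaited.
        let result = fut01.compat().await;
        println!("{:?}", result);
    }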

    Even better, I then realized I could keep the threads and blocking IO around, and just start an async runtime in the websocket processing thread. So I did that in its own MR, gaining an integration test and squashing a few bugs in the process.

    The key piece is here:

    use tokio::runtime;
    use std::thread;

    ...

        thread::spawn(move || {
            // Build a single-threaded async runtime owned by this thread.
            let runtime = runtime::Builder::new_current_thread()
                .enable_io()
                .build()
                .unwrap();

            runtime.block_on(async move {
                // Websocket event loop goes here
            });
        });

    This code uses tokio’s new_current_thread() builder to create an async main loop out of the current thread, which can then use block_on() to run an async block and wait for it to exit. It’s a nice way to bring async “piece by piece” into a codebase that otherwise uses blocking IO, without having to rewrite everything up front.

    I have some more work in progress to use async for the two main loops in ssam_openqa: these currently have manual polling loops that periodically check various message queues for events and then call thread::sleep(250), which works fine in practice for processing low-frequency control and status events, but it’s not the slickest nor most efficient way to write a main loop. The classy way to do it is using the tokio::select! macro.
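    As an illustrative sketch (the channel names and event types are hypothetical, not the actual ssam_openqa ones), a select!-based main loop could look like this:

    use tokio::sync::mpsc;
    use tokio::time::{interval, Duration};

    async fn main_loop(
        mut control_rx: mpsc::Receiver<String>,
        mut status_rx: mpsc::Receiver<String>,
    ) {
        let mut tick = interval(Duration::from_millis(250));
        loop {
            // Wake up as soon as any source is ready, instead of polling.
            tokio::select! {
                Some(event) = control_rx.recv() => { println!("control: {event}"); }
                Some(event) = status_rx.recv() => { println!("status: {event}"); }
                _ = tick.tick() => { /* periodic housekeeping */ }
            }
        }
    }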

    When should you use async Rust?

    I was hoping for a simple answer to this question, so I asked my colleagues at Codethink where we have a number of Rust experts.

    The problem is, cooperative task scheduling is a very complicated topic. If I convert my main loop to async, but I use the std library blocking IO primitives to read from stdin rather than tokio’s async IO, can Rust detect that and tell me I did something wrong? Well no, it can’t – you’ll just find that event processing stops while you’re waiting for input. Which may or may not even matter.

    There’s no way to automatically detect a “syscall which might wait for user input” vs. a “syscall which might take a lot of CPU time to do something”, vs. “user-space code which might not defer to the main loop for 10 minutes”; and each of these has the same effect of causing your event loop to freeze.
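    One standard mitigation (a general tokio facility, not something specific to ssam_openqa) is to push calls you know will block onto tokio’s blocking thread pool with spawn_blocking, so they can’t stall the event loop:

    use tokio::task;

    async fn read_line_from_stdin() -> std::io::Result<String> {
        // std::io::stdin() blocks, so run it on tokio's dedicated
        // blocking thread pool rather than on an async worker thread.
        task::spawn_blocking(|| {
            let mut line = String::new();
            std::io::stdin().read_line(&mut line)?;
            Ok(line)
        })
        .await
        .expect("blocking task panicked")
    }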

    The best advice I got was to use tokio console to monitor the event loop and see if any tasks are running longer than they should. This looks like a really helpful debugging tool and I’m definitely going to try it out.

    So I emerge from the month a bit wiser about async Rust, no longer afraid to use it in practice, and best of all, wise enough to know that it’s not an “all or nothing” switch – it’s perfectly valid to mix sync and async in different places, depending on what performance characteristics you’re looking for.


      Christian Hergert: Custom Artifacts Directories

      news.movim.eu / PlanetGnome • 4 April, 2024

    Since the inception of Builder, it has used $XDG_CACHE_DIR for many things like custom installation paths, flatpak-builder state directories, out-of-tree build directories, code indexes, and more.

    I finally got around to allowing custom cache roots, and now Builder will default to ~/Projects/.gnome-builder/. If you’ve set a custom projects directory, then .gnome-builder within that directory will be used.

    You may also change this value globally or per-project in Preferences and/or Project Settings.

    A screenshot of the application preferences, showing the General panel of the Build Tooling section. The first option is the build artifacts directory, which may be set by the user.


      blogs.gnome.org/chergert/2024/04/03/custom-artifacts-directories/


      Richard Hughes: fwupd and xz metadata

      news.movim.eu / PlanetGnome • 3 April, 2024 • 1 minute

    A few people (and multi-billion dollar companies!) have asked for my response to the xz backdoor . The fwupd metadata that millions of people download every day is a 9.5MB XML file — which thankfully is very compressible. This used to be compressed as gzip by the LVFS, making it a 1.6MB download for end-users, but in 2021 we switched to xz compression instead.

    What actually happens behind the scenes is that the libxmlb library loads the optionally compressed metadata into a mmap-able binary blob, and then it gets used by fwupd to look for new updates for specific hardware. In libxmlb 0.3.3 we added support for xz as a compression format. Then fwupd 1.8.7 was released with xz support, preferring the xz format to the “legacy” gz format — as the metadata became a 1.1MB download, saving significant amounts of data from the CDN.

    Then this week we learned that xz wasn’t the kind of thing we want to depend on. Out of an abundance of caution (and to be clear — my understanding is there is no fwupd or LVFS security problem of any kind) I’ve switched the LVFS to also generate zstd metadata, made libxmlb no longer hard depend on lzma, and switched fwupd to prefer the zstd metadata over the xz metadata if the installed version of libjcat supports it. The zstd metadata is also ~3% smaller than xz (and faster to decompress), but the real benefit is that I now trust it a lot more than xz.

    I’ll be doing new libxmlb and fwupd releases with the needed changes next week.


      blogs.gnome.org/hughsie/2024/04/03/fwupd-and-xz-metadata/


      Jussi Pakkanen: Aesthetics matter

      news.movim.eu / PlanetGnome • 2 April, 2024 • 4 minutes

    When I started working on Meson I had several goals: portability, performance, usability and so on. I particularly liked the last of these, but to my surprise this interest was not shared by people at large, especially those who used Autotools. Eventually the discussion always degenerated into them saying some variant of this:

    It does not matter that Autotools is implemented as a mixture of five different scripting languages mishmashed together. It works, so why replace it with something that is, at best, a tiny bit better?

    One person went so far as to ask me (in public, in front of a crowd) why making builds faster is even a thing worth spending effort on. Said person followed this by saying that he began every working day by starting a build and going to brew some coffee. When he came back to his computer, everything was ready for him to start programming.

    It annoyed me to no end that I did not have a good reply to these people at the time. Unfortunately a thing happened last week that changed this.

    The XZ malicious code injection incident.

    It would be easy to jump on a bandwagon and blame Autotools for the whole issue and demand that it be banned as an unfixable security vulnerability [1] and all that. But let's not do that. Instead let's look at the issue from a slightly wider perspective.

    Take any project you are working on currently. It can either be a work project or an open source one. Now think about all the various components it has. Go through them one by one in your mind. Pause at each one. Ponder them. Does any one of them immediately conjure up the following reaction in your mind:

    I'm not touching that shit!

    If the answer is yes, then congratulations: you have found the most likely attack vector against the project. Why? Because that is the part that is guaranteed to have the absolute worst code reviews, for the simple reason that nobody wants to touch it with a ten foot pole [2]. It is the very definition of someone else's problem. In the case of Autotools the problem is even worse, because there are no tools to find bugs automatically. Static analysis? No [3]! Linters? No! Even something simple like compiler warnings? Lol no! The reason they don't exist is exactly the same as above: the whole problem space is so off-putting that even the people who could do something about it prefer to work on something more meaningful instead. Badness begets more badness and apathy. The fact that it does not halt and catch fire most of the time is seen as sufficient quality.

    This is even more of a problem for open source projects. Commercial projects pay people a full living salary to deal with necessary non-glamorous work like this. Volunteer-based open source projects can not. A major fraction of the motivation for contributing to an open source project is to work on something that is somehow "cool", "fun" or "interesting". Debugging issues caused by incorrect M4 substitutions somewhere in the guts of a ten-layer-deep sed/awk/grep/Make/xargs/subshell pipeline is not that.

    The reports I have read do not state whether XZ's malicious payload was submitted as a PR or not, but let's do a thought experiment. Assume that you are the overworked maintainer of an open source project who gets a PR that changes a bunch of M4 files with the description "fixes issue X in Y". What would you do? If you are honest with yourself, you'd probably do the same thing I'd do: merge it while thinking "I'm just glad someone else fixed this so I don't have to touch that shit [4]".

    Thus we find that aesthetics and beauty in fact play a vital role in security, because those systems make people want to work on them. They attract eyeballs. There is never a risk of getting stuck maintaining some awful piece of garbage because you touched it last so it's your responsibility now [5]. Beauty truly is the mother of security, or, as the ancient Romans used to say:

    Pulchritudo mater securitatis! [6]

    [1] Which you still might choose to do.

    [2] For a more literal example, several "undefeatable" fortresses have been taken over by attackers entering via sewage pipes.

    [3] And not only because all the languages in question are dynamic.

    [4] Yes, I have done this with Meson. Several times. Every maintainer of even a moderately popular open source project has done the same. Trying to deny it is not productive.

    [5] This is especially common in corporations with the added limitation that you are not allowed to make any major changes to avoid breaking things. If you ever find yourself in this situation, find employment elsewhere sooner rather than later. Otherwise your career has reached a dead end you can't escape.

    [6] At least according to Google translate, which is good enough for our modern post-truth world.


      Michael Meeks: 2024-04-01 Monday

      news.movim.eu / PlanetGnome • 1 April, 2024

    • Up lateish; drove to St Albans with the family to meet up with Sue & family.
    • Removed 4% of CPU from a week-long profile of our demo servers: avoiding lots of extension querying with a one line change: fun.

      meeksfamily.uk/~michael/blog/2024-04-01.html