

      Christian Hergert: Libdex Futures from asyncio

      news.movim.eu / PlanetGnome • 28 November

    One of my original hopes for Libdex was to help us structure complex asynchronous code in the GNOME platform. If my work creating Foundry and Sysprof is any indicator, it has improved my productivity and quality by leaps and bounds in that regard.

    Always in the back of my mind I hoped we could make those futures integrate with language runtimes.

    This morning I finally got around to learning enough of Python’s asyncio module to write the extremely minimal glue code. Here is a commit that implements the integration as a PyGObject introspection override. It will load automatically when you from gi.repository import Dex.
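    For the curious, the heart of glue code like this is the standard callback-to-asyncio bridging pattern. Here is a toolkit-agnostic sketch in plain Python; CallbackFuture is a hypothetical stand-in for a callback-style future such as DexFuture, not the actual Dex or PyGObject API:

```python
import asyncio

# Hypothetical stand-in for a callback-style future (e.g. something
# like DexFuture): it completes later and invokes registered callbacks.
class CallbackFuture:
    def __init__(self):
        self._callbacks = []

    def on_resolve(self, cb):
        self._callbacks.append(cb)

    def resolve(self, value):
        for cb in self._callbacks:
            cb(value)

def to_asyncio(cb_future):
    """Wrap a callback-style future so it can be awaited from asyncio."""
    loop = asyncio.get_running_loop()
    aio_future = loop.create_future()
    # Marshal completion back onto the asyncio loop thread-safely.
    cb_future.on_resolve(
        lambda value: loop.call_soon_threadsafe(aio_future.set_result, value)
    )
    return aio_future

async def main():
    f = CallbackFuture()
    # Simulate the future resolving a bit later from the main loop.
    asyncio.get_running_loop().call_later(0.01, f.resolve, 42)
    result = await to_asyncio(f)
    print(result)

asyncio.run(main())
```

    The real override presumably also has to propagate rejection as an exception and handle cancellation, but the await-able wrapper above is the essential shape of the integration.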

    This only integrates with the asyncio side of things, so your application is still responsible for integrating asyncio and GMainContext; otherwise nothing will be pumping the DexScheduler. But that is application/toolkit specific, so I must leave that to you.

    I would absolutely love it if someone could work on the same level of integration for GJS as I have even less experience with that platform.


      Allan Day: GNOME Foundation Update, 2025-11-28

      news.movim.eu / PlanetGnome • 28 November • 4 minutes

    Welcome to another GNOME Foundation update; an overview of everything that’s been happening at the Foundation. There was no update last week, due to me being away from my computer last Friday, so this post covers a two week period rather than the usual single week.

    Many thanks to everyone who responded to my request for feedback in my previous post! It was great to hear your views on these posts, and it was extremely motivating to get positive feedback on the blog series.

    Budget report

    In case you didn’t see it, last week Rob posted a detailed breakdown of the Foundation’s current operating budget . This is the second year in a row that we have provided a budget report for the community, and I’m thrilled that we’ve been able to keep up the momentum around financial transparency. I’d encourage you to give it a read if you haven’t already.

    Community travel

    One positive aspect of the current budget is that we have a healthy community travel budget, and I really want to encourage members to make use of the fund. The travel budget is there to be spent, and we absolutely want to see community members applying for travel. If you have been interested in organising a hackfest, or attending a GNOME conference, and finances have been a barrier, please do make use of the funding that is available. Information about how to apply can be found in the handbook .

    Also on travel: we are currently looking to recruit additional volunteers to help administer the travel budget, as part of the Travel Committee. So, if you are interested in helping with GNOME and would like to get involved, please do get in touch using the comments below, or by messaging the Travel Committee.

    Outreachy

    The Foundation has a proud history of funding outreach efforts, and has regularly supported interns through Outreachy . The December to March round is almost upon us, and the Internship Committee has coordinated the selection of an intern who we will be sponsoring. We were pleased to release the funding for this internship this week. More details about the internship itself will follow.

    Banking and finance systems

    As mentioned in recent updates, we have been working through a round of improvements to our banking setup, which will give us enhanced fraud protection, as well as automatic finance management features. This week we had a training session with our bank, the fraud protection features were turned on, and I signed the last of the paperwork. As a result, this round of work is now complete.

    I have also been going through the process of signing up for the new financial system that Dawn our new finance advisor will be setting up for us.

    Bookkeeping meetings

    Our regular monthly bookkeeping meeting happened last week, and we had another follow-up call more recently. We are still working through the 2024-25 financial year-end accounts, which primarily involves resolving a list of small questions to make sure that the accounts are 100% complete and accurate. Our bookkeeper has also been very patiently answering questions from Deepa, our treasurer, and myself as we continue to familiarise ourselves with the finance and accounting setup (thank you!).

    Board meeting

    The Board had a regular meeting this week. The topics under discussion included:

    • Setting goals for the upcoming fundraising campaign, in particular what the fundraising target should be, and what programs we want to fund with the proceeds.
    • Improving our minutes to meet the needs of different audiences (directors, auditors, the IRS, members, and so on). We also worked on a plan to clear the backlog of unapproved minutes.
    • Planning for a Board hackfest prior to next FOSDEM.

    We came away with a healthy list of action items, and I’m looking forward to making progress in each of these areas.

    GNOME.Asia

    Our upcoming conference in Tokyo continues to be a focus, and Kristi is busy putting the final arrangements together. The event is just 15 days away! A reminder: if you want to participate, please do head over to the site and register.

    Flathub

    There has been some good progress around Flathub over the past two weeks. Bart has done quite a bit of work to improve the performance of the Flathub website, which I’m sure users will appreciate. We also received some key pieces of legal work, which are required as part of the roadmap to establish Flathub as its own financial/legal entity. With those legal documents in place we have turned our attention to planning Flathub’s financial systems; discussions about this are ongoing.

    Digital Wellbeing

    There was another review call this week to check on progress as the current phase of the program reaches its final stages. The main focus right now is making sure that the new screen time limits feature is in good shape before we use up the remaining funding.

    Progress is looking good in general: the main changes for GNOME Shell and Settings have all been merged. There are two more pieces of work to land before we can say that we are in a feature complete state. After that we will circle back to UX review and papercut fixing. If you want more information about these features, I would recommend Ignacy’s recent post on the topic .

    Philip has also published a fantastic post on the web filtering functionality that has been implemented as part of this program.

    That’s it for this week! Thanks for reading, and see you next week.


      Philip Withnall: Parental controls web filtering backend

      news.movim.eu / PlanetGnome • 27 November • 5 minutes

    In my previous post I gave an overview of the backend for the screen time limits feature of parental controls in GNOME. In this post, I’ll try and do the same for the web filtering feature.

    We haven’t said much about web filtering so far, because the user interface for it isn’t finished yet. The backend is, though, and it will get plumbed up eventually. We don’t have a GNOME release targeted for it yet.

    When is web filterings? What is web filtering?

    (Apologies to Radio 4 Friday Night Comedy .)

    Firstly, what is the aim of web filtering? As with screen time limits, we’ve written a design document which (hopefully) covers everything. But the summary is that it should allow parents to filter out age-inappropriate content on the web when it’s accessed by child accounts, while not breaking the web (for example, by breaking TLS for websites) and not requiring us (as a project) to become curators of filter lists. It needs to work for all apps on the system (lots of apps other than web browsers can show web content), and needs to be able to filter things differently for different users (two different children of different ages might use the same computer, as well as the parents themselves).

    After looking at various different possible ways of implementing it, the best solution seemed to be to write an NSS module to respond to name resolution (i.e. DNS) requests and potentially block them according to a per-user filter list.

    A brief introduction to NSS

    NSS (Name Service Switch) is a standardised name lookup API in libc. It’s used for hostname resolution, but also for user accounts and various other things. Names are resolved by various modules which are dlopen()ed into your process by libc and queried in the order given in /etc/nsswitch.conf. So for hostname resolution, a typical configuration in nsswitch.conf would cause libc to query the module which looks at /etc/hosts first, then the module which checks your machine’s hostname, then the mDNS module, then systemd-resolved.

    So, we can insert our NSS module into /etc/nsswitch.conf , have it run somewhere before systemd-resolved (which in this example does the actual DNS resolution), and have it return a sinkhole address for blocked domains. Because /etc/nsswitch.conf is read by libc within your process, this means that the configuration needs to be modified for containers (flatpak) as well as on the host system.
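    As an illustrative sketch only, the resulting hosts line in /etc/nsswitch.conf could look something like this; the module name (here malcontent) and the exact ordering are assumptions on my part, not the shipped configuration:

```
# Illustrative hosts line: the (hypothetical) filter module runs before
# resolve, so blocked domains are sinkholed before systemd-resolved
# ever performs a real DNS lookup.
hosts: files myhostname malcontent mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] dns
```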

    Because the filter module is loaded into the name lookup layer, content filtering (as opposed to domain name filtering) is not possible with this approach. That’s fine — content filtering is hard, I’m not sure it gives better results overall than domain name filtering, and it would mean we couldn’t rely on existing domain name filter lists, which are well maintained and regularly updated. We’re not planning on adding content filtering.

    It also means that DNS-over-HTTPS/-TLS can be supported, as long as the app doesn’t implement it natively (i.e. by talking HTTPS over a socket itself). Some browsers do that, so the module needs to set a canary to tell them to disable it. DNS-over-HTTPS/-TLS can still be used if it’s implemented by one of the NSS modules, like systemd-resolved.

    Nothing here stops apps from deliberately bypassing the filtering if they want, perhaps by talking DNS over UDP directly, or by calling secret internal glibc functions to override nsswitch.conf . In the future, we’d have to implement per-app network sandboxing to prevent bypasses. But for the moment, trusting the apps to cooperate with parental controls is fine.

    Filter update daemon

    So we have a way of blocking things; but how does it know what to block? There are a lot of filter lists out there on the internet, targeted at existing web filtering software. Basically, a filter list is a list of domain names to block. Some filter lists allow wildcards and regexps, others just allow plain strings. For simplicity, we’ve gone with plain strings.

    We allow the parent to choose zero or more filter lists to build a web filtering policy for a child. Typically, these filter lists will correspond to categories of content, so the parent could choose a filter list for advertising, and another for violent content, for example. The web filtering policy is basically the set of these filter lists, plus some options like “do you want to enforce safe search”. This policy is, like all other parental controls policies, stored against the child user in accounts-service .

    Combine these filter lists, and you have the filter list to give to NSS in the child’s session, right? Not quite — because the internet unfortunately keeps changing, filter lists need to be updated regularly. So actually what we need is a system daemon which can regularly check the filter lists for updates, combine them, and make them available as a compiled file to the child’s NSS module — for each user on the system.
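    As a rough sketch of that compilation step, assuming plain-string lists with comment lines (the real daemon does considerably more, including validation, caching and error reporting):

```python
def compile_filter(lists):
    """Merge several plain-string filter lists into one sorted,
    de-duplicated list of domains to block."""
    blocked = set()
    for text in lists:
        for line in text.splitlines():
            line = line.strip().lower()
            # Skip blanks and comment lines, which many lists contain.
            if not line or line.startswith("#"):
                continue
            blocked.add(line)
    return sorted(blocked)

# Two hypothetical per-category lists chosen by the parent.
ads = "# ads\nads.example.com\ntracker.example.net\n"
violence = "violent.example.org\nads.example.com\n"
print(compile_filter([ads, violence]))
```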

    This daemon is malcontent-webd . It has a D-Bus interface to allow the parent to trigger compiling the filter for a child when changing the parental controls policy for that child in the UI, and to get detailed feedback on any errors. Since the filter lists come from third parties on the internet, there are various ways they could have an error.

    It also has a timer unit trigger, malcontent-webd-update , which is what triggers it to periodically check the filter lists for all users for updates.
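    For readers unfamiliar with timer units, a periodic trigger like that is typically expressed as a standard systemd timer along these lines; the filename and intervals here are illustrative guesses, not the shipped units:

```
# malcontent-webd-update.timer (illustrative, not the real unit)
[Unit]
Description=Periodically refresh web filter lists for all users

[Timer]
OnBootSec=15min
OnUnitActiveSec=1day
Persistent=true

[Install]
WantedBy=timers.target
```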

    High-level diagram of the web filtering system, showing the major daemons and processes, files, and IPC calls. If it’s not clear, the awful squiggled line in the bottom left is meant to be a cloud. Maybe this representation is apt.

    And that’s it! Hopefully it’ll be available in a GNOME release once we’ve implemented the user interface for it and done some more end-to-end testing, but the screen time limits work is taking priority over it.


      Sam Thursfield: Bollocks to Github

      news.movim.eu / PlanetGnome • 27 November • 3 minutes

    I am spending the evening deleting my Github.com account.

    There are plenty of reasons you might want to delete your Github account. I’d love to say that this is a coherently orchestrated boycott on my part, in sympathy with the No Azure for Apartheid movement. Microsoft, owner of Github, is a big pile of cash happy to do business with an apartheid state. That’s a great reason to delete your Github.com account.

    I will be honest with you though, the thing that pushed me over the edge was a spam email they sent entitled “GitHub Copilot: What’s in your free plan 🤖 ”. I was in a petty mood this morning.

    Offering free LLM access is a money loser. The long play is this: Microsoft would like to create a generation of computer users hooked on GitHub Copilot. And, I have to hand it to them, they have an excellent track record in monopolising how we interact with our PCs.

    Deleting my Github.com account isn’t going to solve any of that. But it feels good to be leaving, anyway. The one billionth Github repository was created recently and it has a single line README containing the word “shit” . I think that summarizes the situation more poetically than I could.

    I had 145 repositories in the ssssam/ namespace to delete. The oldest was Iris; forked in 2011.

    Quite a story to that one. A fork of a project by legendary GNOME hacker Christian Hergert. In early 2011, I’d finished university, had no desire to work in the software industry, and was hacking on a GTK based music player app from time to time. A rite of passage that every developer has to go through, I suppose. At some point I decided to overcomplicate some aspect of the app and ended up integrating libiris, a library to manage concurrent tasks. Then I started working professionally and abandoned the thing.

    It’s fun to look at that with 13 years’ perspective. I have since learned, largely thanks to Rust, that I cannot possibly ever write correct concurrent thread-based C code. (All my libiris changes had weird race conditions.) I met Christian various times. Christian created libdex, which does the same thing, but much better. I revived the music player app as a playlist generation toolkit. We all lived happily ever after.

    Except for the Github fork, which is gone.

    What else?

    This guy was an early attempt at creating a sort of GObject mapping for SPARQL data as exposed by Localsearch (then Tracker). Also created around 2011. Years later, I did a much better implementation in TrackerResource which we still use today.

    The Sam of 2011 would be surprised to hear that we organized GUADEC in Manchester only 6 years later. Back in those days we for some reason maintained our own registration system written in Node.js. I spent the first few weeks of 2017 hacking in support for accommodation bookings.

    I discovered another 10 year old gem called “Software Integration Ontology.” Nowadays we’d call that an SBOM. Did that term exist 10 years ago? I have spent too much time working on software integration.

    Various other artifacts of research into software integration and complexity. A vestigial “Software Dependency Visualizer” project. (Which took on a life of its own; many years later the idea is alive in KDE Codevis.) A fork of Aboriginal Linux, which we unwittingly brought to an end back in 2016. Bits of Baserock, which never went very far but also led to the beginning of BuildStream.

    A fork of xdg-app, which is the original name of Flatpak. A library binding GLib to the MS RPC API on Windows, from the second professional project I ever did. These things are now dust.

    I had over 100 repositories on github.com. I sponsored one person, who I can’t sponsor any more as they only accept Github money. (I sponsor plenty of people on other platforms.)

    Anyway, lots of nice people use Github, you can keep using Github. I will probably have to create the occasional burner account to push PRs where projects haven’t migrated away. Some of my projects are now in Gitlab.com and in GNOME’s Gitlab. Others are gone!

    It’s a weird thing but in 2025 I’m actually happy knowing that there’s a little bit less code in the world.


      Michael Meeks: a new & beautiful Collabora Office

      news.movim.eu / PlanetGnome • 26 November • 3 minutes

    Just a short personal note to say how super excited I am to get our very first release of a new Collabora Office out that brings Collabora Online's lovely UX - created by the whole team - to the desktop. You can read all about it in the press release. Please note - this is a first release - we expect all manner of unforeseen problems, but still - it edits documents nicely.

    The heroes behind the scenes

    There has been a huge amount of work behind the scenes, and people to say thank-you to. Let me try to name some of them:

    • First off - thanks to Jan 'Kendy' Holesovsky and Tor Lillqvist, who came out of retirement to create yet another foundational heavy-lift for FLOSS Office. I can't say how grateful we are for your hard work here on Mac and Windows respectively.
    • Then after the allotropia merger we had Thorsten Behrens to lead the project, and Sarper Akdemir to drive the Linux front-end.
    • Towards the end of the project we were thrilled to expand things to include a dream-team of FLOSS engineers to fix bugs and add features ...
    • Thanks to Rashesh Padia for the lovely first-start WebGL slideshow presentation, added to Richard Brock's content skills.
    • Thanks to Vivek Javiya for building a new file creation UI with Pedro Silva's design skills here and elsewhere.
    • Lots of bug fix and polishing work from Parth Raiyani and Jeremy Whiting (who also did the multi-tabbed interface on Mac), and thanks to Stephan Bergmann for digging out and clobbering the most hard-core races and horror bugs that we had hidden, and to Caolán McNamara, who made multiple documents work and fixed crashes and multi-screen bits.
    • With Hubert Figuière making the flatpak beautiful, and of course the indomitable Andras Timar doing so much amazing work getting all of the CI, release-engineering, app-store, translation pieces and bug-fixing done and completed in time.
    • Thanks too to our marketing team: from Chris Thornett getting the press briefing into a good state and multiplexing quotes left and right, to Richard Brock creating beautiful blog output, to Asja Čandić socializing it all, with Naomi Obbard leading the charge.
    • Thanks to all of our supporters who say nice things about us, and of course to so many translators who contribute to making Collabora Online great - hopefully now the strings are all public it should be easy to expand coverage.

    This is an outstanding result from so many - thank you!

    What is next technically?

    There are lots of things we plan to do next, but there is so much that can be done. First - merging the work into our main product branches - and at the same time sharing much more of the code across platforms. We have some features in the pipeline already - starting to take more advantage of platform APIs for much improved slideshow / multi-screen presentation pieces that need merging and releasing, and ultimately better printing APIs, and better copy/paste. Then we need to make sure that all of the new features are present on all platforms - multi-tabbed UI, the new file creation UI, and of course much more polish and bug fixing - as well as better automated testing.

    It's exciting to have a big new release - but in general we work really quite hard to avoid big-bang deadlines in favour of a steadier development cadence. We will be trying to get the feature conveyor belt working alongside the Collabora Online development process - reasonably quickly - but now with another three platforms.

    Then over the next months - there are various fairly obvious directions to take the code in - one amusing feature was the ability to apparently collaborate with yourself when loading the same document on the same machine, that can be extended.

    Conclusion

    It has been really exciting to get feedback from many partners, customers and community members about the need for this. Again - this is a very first release - we plan to do lots of iteration and improvement around it. But, thank you again to the whole team and community for making Collabora Online something that people really want us to bring to the desktop. If you'd like to get involved (or just see pretty artwork of hard working animals) - why not head to community website , or our forum or code .

    Rock on =)


      Sam Thursfield: Status update — 23rd November 2025

      news.movim.eu / PlanetGnome • 25 November • 8 minutes

    Bo día.

    I am writing this from a high speed train heading towards Madrid, en route to Manchester. I have a mild hangover, and a two hundred page printout of “The STPA Handbook”… so I will have no problem sleeping through the journey. I think the only thing keeping me awake is the stunning view.

    Sadly I haven’t got time to go all the way by train; in Madrid I will transfer to easyJet. It is indeed easy compared to trying to get from Madrid into France by train. Apparently this is mainly the fault of France’s SNCF.

    On the Spain side, fair play. The ministro de fomento (I think this translates as “Guy in charge of trains”?) just announced major works in Barcelona, including a new station in La Sagrera with space for more trains than they have now, more direct access from Madrid, and a speed boost via some new type of railway sleeper, which would raise the top speed from 300 km/h to 350 km/h. And some changes in Madrid, which would reduce the transfer time when arriving from the west and heading out further east. You can argue with many things about the trains in Spain… perhaps it would be useful if the regional trains here ran more than once per day… but you can’t argue with the commitment to fast inter-city travel.

    If only we had similar investment to fix the cross border links between Spain and France, which are something of a joke. Engineers around the world will know this story. The problem is social: two different organizations, who speak different languages, have to agree on something. There is already a perfectly usable modern train line across the border. How many trains per day? Uh… two. Hope you planned your trip in advance because they’re fully booked next week.

    Anyway, this isn’t meant to be a post on the status of the railways of western Europe.

    Digital Resilience Forum

    Last month I hopped on another Madrid-bound train to attend the Digital Resilience Forum . It’s a one day conference organized by Bitergia who you might know as world leaders in open source community analysis.

    I have mixed feelings about “community metrics” projects. As Nick Wellnhofer said regarding libxml, when you participate as a volunteer in a project that is being monitored, it’s easy to feel like you’re being somehow manipulated by the corporations who sponsor these things. How come you guys will spend time and money analyzing my project’s development processes and Git history, but you won’t spend time actually fixing bugs and making improvements upstream? As the ffmpeg developers said: how come you will pay top-calibre security researchers to read our code and find very specific exploits, but then wait for volunteers to fix them?

    The Bitergia team are great people who genuinely care about open source, and I really enjoyed the conference. The main themes were: digital sovereignty, geopolitics, the rise of open source, and that XKCD where all our digital infrastructure depends on a single unpaid volunteer in Nebraska (https://xkcd.com/2347/). (Coincidentally, one of the Bitergia guys actually does live in Nebraska.)

    It was a day in a world where I am not used to participating: less engineering, more politics and campaigning. Yes, the Sovereign Tech Agency were around. We played a cool role-play game simulating various hypothetical software crises that might happen in the year 2027 (spoiler: in most cases a vendor-neutral, state-funded organization focused on open source was able to save the day :-). It is amazing what they’ve done so far with a relatively small investment, but it is a small organization, and they maintain that citizens of every country should be campaigning and organizing to set up an equivalent. Let’s not tie the health of open source infrastructure too closely to German politics.

    Also present, various campaign groups with “Open” at the start of their name: OpenForum Europe, OpenUK, OpenIreland, OpenRail. When I think about the future of Free Software platforms, such as our beloved GNOME, my mind always goes to funding contributors. There’s very little money here, while Apple and Microsoft have nearly all of the money, and I feel like GNOME still succeeds largely thanks to the evenings and weekends of a small core of dedicated hackers, including some whose day job involves working on some other part of GNOME. It’s a bit depressing sometimes to see things this way, because the global economy gets more unequal every day, and how do you convince people who are already squeezed for cash to pay for something that’s freely available online? How do you get students facing a super competitive job market to hack on GTK instead of studying for university exams?

    There’s another side which I talk about less, and that’s education. There are more desktop Linux users than ever — apparently 5% of all desktop users or something — but there’s still very little agreement or understanding what “open source” is. Most computer users couldn’t tell you what an “operating system” does, and don’t know why “source code” can be an interesting thing to share and modify.

    I don’t like to espouse any dogmatic rule that the right way to solve any problem is to release software under the GPLv3. I think the problems society has today with technology come from over-complexity and under-study. (See also my rant from last month.) To tackle that, it is important to have software infrastructure like drivers and compilers available under free software licenses. The Free Software movement has spent the last 40 years doing a pretty amazing job of that, and I think it’s surprising how widely software engineers accept that as normal and fight to maintain it. Things could easily be worse. But this effort is one part of a larger problem: helping those people who think of themselves as “non-technical” to understand the fundamentals of computing and not see it as a magic box. Most people alive today have learned to read and write one or more languages, to do mathematics, to operate a car, to build spreadsheets, and to operate a smartphone. Most people I know under 45 have learned to prompt a large language model in the last few years.

    With a basic grounding in how a computer operates, you can understand what an operating system does. And then you can see that whoever controls your OS has complete control over your digital life. And you will start to think twice about leaving that control to Apple, Google and Microsoft — big piles of cash where the concept of “ethics” barely has a name.

    Reading was once a special skill reserved largely for monks. And it was difficult: we only started putting spaces between the words later on. Now everyone knows what a capital letter is. We need to teach how computers work and stop making them so complicated, and then the idea of open development will come into focus for everyone.

    (And yes, I realize this sounds a bit like the permacomputing manifesto.)

    Codethink work

    This is a long rant, isn’t it? My train only just left Zamora and I haven’t fallen asleep yet, so there’s more to come.

    I had a nice few months hacking on Endless OS 7, which has progressed from an experiment to a working system, bootable on bare metal, albeit with various open issues that would block a stable release for now. The overview docs in the repo tell you how to play with it.

    This is now fully in the hands of the team at Endless, and my next move is going to be into some internal research that has been ongoing for a number of years. Not much of it is secret; in fact quite a lot is being developed in the open, and it relates in part to regulatory compliance and safety-critical software systems.

    Codethink dedicates more to open source than most companies its size. We never have trouble getting sponsorship for events like GUADEC. But I do wish I could spend more time maintaining open infrastructure that I use every day, like, you know, GNOME.

    This project isn’t going to solve that tomorrow, but it does occupy an interesting space in the intersection between industry and open source. The education gap I talked about above is very much present in some industries where we work. Back in February a guy at a German car firm told me, “Nobody here wants open source. What they want is somebody to blame when the thing goes wrong.”

    Open source software comes with a big disclaimer that says, roughly, that if it breaks you get to keep both pieces. You get to blame yourself.

    And that’s a good thing! The people who understand a final, integrated system are the only people who can really define “correct behaviour”. If you’ve worked in the same industries I have you might recognise a common anti-pattern: teams who spend all their time arguing about ownership of a particular bug, and team A are convinced it’s a misbehaviour of component B and team B will try to prove the exact opposite. Meanwhile nobody actually spends the 15 minutes it would take to actually fix the bug. Another anti-pattern: team A would love to fix the bug in component B, but team B won’t let them even look at the source code. This happens muuuuuuuch more than you might think.

    So we’re not trying to teach the world how computers work on this project, but we are trying to increase adoption and understanding, at least in the software industry. There are some interesting ideas here, looking at software systems from new angles. This is where STPA comes in, by the way — it’s a way of breaking a system down not into components but into one or more control loops. It’s going to take a while to make sense of everything in this new space… but you can expect some more 1500-word blog posts on the topic.


      Christian Hergert: Status Week 47

      news.movim.eu / PlanetGnome • 24 November • 5 minutes

    Ptyxis

    • Issue filed about highlight colors. Seems like a fine addition but again, I can’t write features for everyone so it will need an MR.

    • Walk a user through changing shortcuts to get the behavior they want w/ ctrl+shift+page up/down. Find a bug along the way.

    • Copy GNOME Terminal behavior for placement of open/copy links from the context menu.

    • Add a ptyxis_tab_grab_focus() helper and use that everywhere instead of gtk_widget_grab_focus() on the tab (which then defers to the VteTerminal ). This may improve some state tracking of keyboard shift state, which while odd, is a fine enough workaround.

    • Issue about adding waypipe to the list of “network” command detection. Have to nack for now just because it is so useful in situations without SSH.

    • Issue about growing size of terminal window (well shrinking back) after changing foreground. Triaged enough to know this is still a GDK Wayland issue with too-aggressively caching toplevel sizes and then re-applying them even after first applying a different size. Punted to GTK for now since I don’t have the cycles for it.

    Foundry

    • Make the progress icon look more like AdwSpinner so we can transition between the two paintables for progress based on whether we get any sort of real fractional value from the operation.

    • Paintable support for FoundryTreeExpander which is getting used all over Foundry/Builder at this point.

    • Now that we have working progress from LSPs we need to display it somewhere. I copied the style of Nautilus progress operations into a new FoundryOperationBay which sits beneath the sidebar content. It’s not a perfect replica but it is close enough for now to move on to other things.

      Sadly, we don’t have a way with the Language Server Protocol to tie together what operation caused a progress object to be created so cancellation of progress is rather meaningless there.

    • Bring over the whole “snapshot rewriting” we do in Ptyxis into the FoundryTerminal subclass of VTE to make it easy to use in the various applications getting built on Foundry.

    • Fix FoundryJsonrpcDriver to better handle incoming method calls. Specifically those lacking a params field. Also update things to fix reading from LF/Nil style streams where the underlying helper failed to skip bytes after read_upto() .

    • It can’t be used for much but there is a foundry mcp server now that will expose available tools. As part of this make a new Resources type that allows providing context in an automated fashion. They can be listed, read, track changes to list, subscribe to changes in a resource, etc.

      There are helpers for JSON style resources.

      For example, this makes the build diagnostics available at a diagnostics:// URI.

    • Make PluginFlatpakSdk explicitly skip using flatpak build before the flatpak build-init command in the pipeline has run.

    • Allow locating PluginDevhelpBook by URI so that clicking on the header link in documentation can properly update path navigators.

    • Make FoundryPtyDiagnostics auto-register themselves with the FoundryDiagnosticManager so that things like foundry_diagnostic_manager_list_all() will include extracted warnings that came from the build pipeline automatically rather than just files with diagnostic providers attached.

    • Support for “linked pipelines” which allows you to insert a stage into your build pipeline which then executes another build pipeline from another project.

      For example, this can allow you to run a build pipeline for GTK or GtkSourceView in the early phase of your projects build. The goal here is to simplify the effort many of us go through working on multiple projects at the same time.

      Currently there are limitations here in that the pipeline you want to link needs to be set up correctly to land in the same place as your application. For jhbuild or any sort of --prefix= install prefix this is pretty simple and works.

      For the flatpak case we need more work so that we can install into a separate DESTDIR which is the staging directory of the app we’re building. I prefer this over setting up incremental builds just for this project manually as we have no configuration information to key off.

    • Add new FoundryVcsDiffHunk and FoundryVcsDiffLine objects and the API necessary to access them via FoundryVcsDelta . Implement the git plugin version of these too.

    • Add a new FoundryGitCommitBuilder high-level helper class to make writing commit tooling easier. Provides access to the delta, hunk, and lines as well as commit details. Still needs more work though to finish off the process of staging/unstaging individual lines and to make the state easy for the UI side.

      Add a print-project-diff test program to test that this stuff works reasonably well.

    Builder

    • Lots of little things getting brought over for the new editor implementation. Go to line, position tracking, etc.

    • Searchable diagnostics page based on build output. Also fix up how we track PTY diagnostics using the DiagnosticManager now instead of only what we could see from the PTY diagnostics.

    Manuals

    • Merge support for back/forward mouse buttons from @pjungkamp

    • Merge support for updating pathbar elements when navigating via the WebKit back/forward list, also from @pjungkamp.

    Systemd

    • Libdex inspired fiber support for systemd which follows a core design principle which is that fibers are just a future like any other.

      https://github.com/systemd/systemd/pull/39771

    CentOS

    • Merge vte291/libadwaita updates for 10.2

    GTK

    • Lots of digging through GdkWayland/GtkWindow to see what we can do to improve the situation around resizing windows. There is some bit of state getting saved that is breaking our intent to have a window recalculate its preferred size.

    GNOME Settings Daemon

    • Took a look over a few things we can do to reduce periodic IO requests in the housekeeping plugin.

      Submitted a few merge requests around this.

    GLib

    • Submitted a merge request adding some more file-systems to those that are considered “system file system types”. We were missing a few that resulted in extraneous checking in gsd-housekeeping.

      While researching this, using a rather large /proc/mounts I have on hand, and modifying my GLib to let me parse it rather than the system mounts, I verified that we can spend double-digit milliseconds parsing this file. Pretty bad if you have a system where the mounts can change regularly and thus all users re-parse them.

      Looking at a Sysprof recording I made, we spend a huge amount of time in things like strcmp() , g_str_hash() , and in hashtable operations like g_hash_table_resize() .

      This is ripe for a better data-structure instead of strcmp() .

      I first tried to use gperf to generate perfect hashes for the things we care about and that was a 10% savings. But sometimes low-tech is even better.

      In the end I went with bsearch() which is within spitting distance of the faster solutions I came up with but a much more minimal change to the existing code-base at the simple cost of keeping lists sorted.

      There is likely still more that can be done on this with diminishing returns. Honestly, I was surprised a single change would be even this much.
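      The bsearch() approach described above can be sketched in plain C. This is illustrative only: the list contents, function names, and layout here are assumptions for the sketch, not the actual GLib internals or the real set of system filesystem types.

      ```c
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical sorted list of filesystem types treated as "system"
       * filesystems.  The list MUST stay lexicographically sorted: that is
       * the "simple cost of keeping lists sorted" the post mentions. */
      static const char *const system_fs_types[] = {
        "cgroup", "devpts", "overlay", "proc", "sysfs", "tmpfs",
      };

      /* Comparator: key is a plain string, each array member is a pointer
       * to a string, so we dereference one level on the member side. */
      static int
      compare_fs_type (const void *key, const void *member)
      {
        return strcmp ((const char *) key, *(const char *const *) member);
      }

      /* O(log n) membership test, replacing a linear strcmp() loop or
       * per-lookup hash table overhead. */
      static int
      is_system_fs_type (const char *fs_type)
      {
        return bsearch (fs_type,
                        system_fs_types,
                        sizeof system_fs_types / sizeof system_fs_types[0],
                        sizeof system_fs_types[0],
                        compare_fs_type) != NULL;
      }
      ```

      The appeal of this shape over a gperf-generated perfect hash is exactly what the post describes: it touches almost none of the existing code, and the only invariant to maintain is that the table stays sorted.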

    Red Hat

    This week also had me spending a significant amount of time on Red Hat related things.

    • chevron_right

      Jussi Pakkanen: 3D models in PDF documents

      news.movim.eu / PlanetGnome • 24 November

    PDF can do a lot of things. One them is embedding 3D models in the file and displaying them. The user can orient them freely in 3D space and even choose how they should be rendered (wireframe, solid, etc). The main use case for this is engineering applications .

    Supporting 3D annotations is, as expected, unexpectedly difficult because:

    1. No open source PDF viewer seems to support 3D models.
    2. Even though the format specification is available , no open source software seems to support generating files in this format (by which I mean Blender does not do it by default).
    But, again, given sufficient effort and submitting data to not-at-all-sketchy-looking 3D model conversion web sites, you can get 3D annotations to work. Almost.

    As you can probably tell, the picture above is not a screenshot. I had to take it with a cell phone camera, because while Acrobat Reader can open the file and display the result, it hard crashes before you can open Windows' screenshot tool.

    • chevron_right

      Jakub Steiner: 12 months instead of 12 minutes

      news.movim.eu / PlanetGnome • 21 November • 2 minutes

    Hey Kids! Other than raving about GNOME.org being a static HTML , there’s one more aspect I’d like to get back to in this writing exercise called a blog post.

    Share card gets updated every release too

    I’ve recently come across an appalling genAI website for a project I hold dearly, so I thought I’d give a glimpse of how we used to do things in the olden days. It is probably not going to be done this way anymore in the enshittified timeline we ended up in. The two options available these days are — a quickly generated slop website or no website at all, because privately owned social media is where it’s at.

    The wanna-be-catchy title of this post comes from the fact that the website underwent numerous iterations (iteration is the core principle of good design) spanning over a year before we introduced the redesign.

    So how did we end up with a 3D model of a laptop for the hero image on the GNOME website, rather than something generated in a couple of seconds and a small town worth of drinking water or a simple SVG illustration?

    The hero image is static now, but it used to be a scroll-based animation in the early days. It could have become a simple vector style illustration, but I really enjoy the light interaction of the screen and the laptop, especially between the light and dark variants. Toggling dark mode has been my favorite fidget spinner.

    Creating light/dark variants is a bit tedious to do manually every release, but automating it is still a bit too hard to pull off (the taking screenshots of a nightly OS bit). There’s also the fun of picking a theme for the screenshot rather than doing the same thing over and over. Doing the screenshooting manually meant automating the rest, as a 6 month cycle is enough time to forget how things are done. The process is held together with duct tape, I mean a python script, that renders the website image assets from the few screenshots captured using GNOME OS running inside Boxes . Two great invisible things made by amazing individuals that could go away in an instant, and that thought gives me a dose of anxiety.


    This does take a minute to render on a laptop (CPU only Cycles), but is a matter of a single invocation and a git commit. So far it has survived a couple of Blender releases, so fingers crossed for the future.

    Sophie has recently been looking into translations , so we might reconsider that 3D approach if translated screenshots become viable (and have them contained in an SVG similar to how os.gnome.org is done). So far the 3D hero has always been in sync with the release, unlike in our Wordpress days. Fingers crossed.