
      Matthew Garrett: Investigating a forged PDF

      news.movim.eu / PlanetGnome • 24 September • 6 minutes

    I had to rent a house for a couple of months recently, which is long enough in California that it pushes you into proper tenant protection law. As landlords tend to do, they failed to return my security deposit within the 21 days required by law, having already failed to provide the required notification that I was entitled to an inspection before moving out. Cue some tedious argumentation with the letting agency, and eventually me threatening to take them to small claims court.

    This post is not about that.

    Now, under Californian law, the onus is on the landlord to hold and return the security deposit - the agency has no role in this. The only reason I was talking to them is that my lease didn't mention the name or address of the landlord (another legal violation, but the outcome is just that you get to serve the landlord via the agency). So it was a bit surprising when I received an email from the owner of the agency informing me that they did not hold the deposit and so were not liable - I already knew this.

    The odd bit about this, though, is that they sent me another copy of the contract, asserting that it made it clear that the landlord held the deposit. I read it, and instead found a clause reading: “SECURITY: The security deposit will secure the performance of Tenant’s obligations. IER may, but will not be obligated to, apply all portions of said deposit on account of Tenant’s obligations. Any balance remaining upon termination will be returned to Tenant. Tenant will not have the right to apply the security deposit in payment of the last month’s rent. Security deposit held at IER Trust Account.”, where IER is International Executive Rentals, the agency in question. Why send me a contract that says you hold the money while you're telling me you don't? And then I read further down and found this:
    Text reading: ENTIRE AGREEMENT: The foregoing constitutes the entire agreement between the parties and may be modified only in writing signed by all parties. This agreement and any modifications, including any photocopy or facsimile, may be signed in one or more counterparts, each of which will be deemed an original and all of which taken together will constitute one and the same instrument. The following exhibits, if checked, have been made a part of this Agreement before the parties’ execution: ۞ Exhibit 1: Lead-Based Paint Disclosure (Required by Law for Rental Property Built Prior to 1978) ۞ Addendum 1: The security deposit will be held by (name removed) and applied, refunded, or forfeited in accordance with the terms of this lease agreement.
    Ok, fair enough, there's an addendum that says the landlord has it (I've removed the landlord's name, it's present in the original).

    Except. I had no recollection of that addendum. I went back to the copy of the contract I had and discovered:
    The same text as the previous picture, but addendum 1 is empty
    Huh! But obviously I could just have edited that to remove it (there's no obvious reason for me to, but whatever), and then it'd be my word against theirs. However, I'd been sent the document via RightSignature, an online document signing platform, and they'd added a certification page that looked like this:
    A Signature Certificate, containing a bunch of data about the document including a checksum of the original
    Interestingly, the certificate page was identical in both documents, including the checksums, despite the content being different. So, how do I show which one is legitimate? You'd think given this certificate page this would be trivial, but RightSignature provides no documented mechanism whatsoever for anyone to verify any of the fields in the certificate, which is annoying but let's see what we can do anyway.

    First up, let's look at the PDF metadata. pdftk has a dump_data command that dumps the metadata in the document, including the creation date and the modification date. My file had both set to identical timestamps in June, both listed in UTC, corresponding to the time I'd signed the document. The file containing the addendum? The same creation time, but a modification time of this Monday, shortly before it was sent to me. This time, the modification timestamp was in Pacific Daylight Time, the timezone currently observed in California. In addition, the data included two ID fields, ID0 and ID1. In my document both were identical, in the one with the addendum ID0 matched mine but ID1 was different.

    These ID tags are intended to be some form of representation (such as a hash) of the document. ID0 is set when the document is created and should not be modified afterwards; ID1 is initially identical to ID0, but changes when the document is modified. This is intended to allow tooling to identify whether two documents are modified versions of the same document. The identical ID0 indicated that the document with the addendum was originally identical to mine, and the different ID1 that it had been modified.
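To see how tooling can compare these IDs, note that the /ID array can be read straight out of a PDF trailer. Here is a minimal Python sketch, not how pdftk or RightSignature work internally; the regex and the synthetic trailer bytes are illustrative assumptions, and real PDFs may store /ID as literal strings or across several incremental-update trailers:

```python
import re

def pdf_ids(data: bytes):
    """Return (ID0, ID1) from a PDF trailer's /ID array (hex-string form)."""
    m = re.search(rb"/ID\s*\[\s*<([0-9A-Fa-f]+)>\s*<([0-9A-Fa-f]+)>\s*\]", data)
    return (m.group(1).decode(), m.group(2).decode()) if m else None

# Synthetic trailer fragments standing in for the two documents
signed  = b"trailer << /Size 7 /ID [<AAAA1111><AAAA1111>] >>"
altered = b"trailer << /Size 7 /ID [<AAAA1111><BBBB2222>] >>"

id0_a, id1_a = pdf_ids(signed)
id0_b, id1_b = pdf_ids(altered)
print(id0_a == id0_b)  # True: both derive from the same original document
print(id1_a == id1_b)  # False: the second file was modified after creation
```

Running the same comparison over the trailers of the two real files reproduces the ID0/ID1 observation described above.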

    Well, ok, that seems like a pretty strong demonstration. I had the "I have a very particular set of skills" conversation with the agency and pointed these facts out, that they were an extremely strong indication that my copy was authentic and their one wasn't, and they responded that the document was "re-sealed" every time it was downloaded from RightSignature and that would explain the modifications. This doesn't seem plausible, but it's an argument. Let's go further.

    My next move was pdfalyzer, which allows you to pull a PDF apart into its component pieces. This revealed that the documents were identical, other than page 3, the one with the addendum. This page included tags entitled "touchUp_TextEdit", evidence that the page had been modified using Acrobat. But in itself, that doesn't prove anything - obviously it had been edited at some point to insert the landlord's name, and that doesn't tell us whether the edit happened before or after the signing.

    But in the process of editing, Acrobat appeared to have renamed all the font references on that page into a different format. Every other page had a consistent naming scheme for the fonts, and they matched the scheme in the page 3 I had. Again, that doesn't tell us whether the renaming happened before or after the signing. Or does it?

    You see, when I completed my signing, RightSignature inserted my name into the document, and did so using a font that wasn't otherwise present in the document (Courier, in this case). That font was named identically throughout the document, except on page 3, where it was named in the same manner as every other font that Acrobat had renamed. Given the font wasn't present in the document until after I'd signed it, this is proof that the page was edited after signing.

    But eh, this is all very convoluted. Surely there's an easier way? Thankfully yes, although I hate it. RightSignature had sent me a link to view my signed copy of the document. When I went there it presented it to me as the original PDF with my signature overlaid on top. Hitting F12 gave me the network tab, and I could see a reference to a base.pdf. Downloading that gave me the original PDF, pre-signature. Running sha256sum on it gave me an identical hash to the "Original checksum" field. Needless to say, it did not contain the addendum.
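For reference, the sha256sum check is easy to reproduce in a few lines of Python; the file name in the comment is illustrative:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Stream a file through SHA-256, equivalent to `sha256sum <path>`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. file_sha256("base.pdf") matches the certificate's "Original checksum"
# field if and only if the bytes are identical
```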

    Why do this? The only explanation I can come up with (and I am obviously guessing here, I may be incorrect!) is that International Executive Rentals realised that they'd sent me a contract which could mean that they were liable for the return of my deposit, even though they'd already given it to my landlord, and after realising this added the addendum, sent it to me, and assumed that I just wouldn't notice (or that, if I did, I wouldn't be able to prove anything). In the process they went from an extremely unlikely possibility of having civil liability for a few thousand dollars (even if they were holding the deposit it's still the landlord's legal duty to return it, as far as I can tell) to doing something that looks extremely like forgery.

    There's a hilarious followup. After this happened, the agency offered to do a screenshare with me showing them logging into RightSignature and showing the signed file with the addendum, and then proceeded to do so. One minor problem - the "Send for signature" button was still there, just below a field saying "Uploaded: 09/22/25". I asked them to search for my name, and it popped up two hits - one marked draft, one marked completed. The one marked completed? Didn't contain the addendum.


      Sam Thursfield: Status update, 22/09/2025

      news.movim.eu / PlanetGnome • 22 September • 6 minutes

    For the first time in many years I can talk publicly about what I’m doing at work: a short engagement funded by Endless and Codethink to rebuild Endless OS as a GNOME OS derivative, instead of a Debian derivative.

    There is nothing wrong with Debian, of course, just that today GNOME OS aligns more closely with the direction the Endless OS team want to go in. A lot of the innovations from earlier versions of Endless OS over the last decade were copied and re-used in GNOME OS, so in a sense this is work coming full circle.

    I’ll tell you a bit more about the project but first I have a rant about complexity.

    Complexity

    I work for a consultancy and the way consultancy projects work is like this: you agree what the work is, you estimate how long the work will take, you agree a budget, and then you do the work.

    The problem with this approach is that in software engineering, most of your work is research. Endless OS is the work of thousands of different people, and hundreds of millions of lines of code. We reason and communicate about the code using abstractions, and there are hundreds of millions of abstractions too.

    If you ask me “how long will it take to change this thing in that abstraction over there”, I can research those abstractions and come up with an estimate for the job. How long to change a lightbulb? How long to rename a variable? How long to add an option in this command line tool? Some hours of work.

    Most real world tasks involve many abstractions and, by the time you’ve researched them all, you’ve done 90% of the work. How long to port this app to Gtk4? How long to implement this new optimization in GCC? How long to write a driver for this new USB beard trimmer device? Some months or years of work.

    And then you have projects where it’s not even possible to research the related abstractions. So much changed between Debian 12 and GNOME OS 48 that you’d spend a year just writing a comprehensive changelog. So, how can you possibly estimate the work involved when you can’t know in advance what the work is?

    Of course, you can’t, you can only start and see what happens.

    But, allocating people to projects in a consultancy business is also a hard problem. You need to know project start and end dates because you are lining up more projects in advance, and your clients want to know when their work will start.

    So for projects involving such a huge number of abstractions, we have to effectively make up a number and hope for the best. When people say things like “try to do the best estimation you can”, it’s a bit like saying “try to count the sand on this beach as best as you can”.

    Another difficulty is around finding people who know the right abstractions. If you’re adding a feature to a program written in Rust, management won’t assign someone who never touched Rust before. If they do, you can ask for extra time to learn some Rust as part of the project. (Although since software is largely a cowboy industry, there are always managers who will tell you to just learn by doing.)

    But what abstractions do you need to know for OS development and integration? These projects can be harder than programming work, because the abstractions involved are larger, more complicated and more numerous. If you can code in C, can you be a Linux integrator? I don’t know, but can a bus driver fly a helicopter?

    If a project is so complex that you can’t predict in advance which abstractions are going to be problematic and which ones you won’t need to touch, then even if you wanted to include teaching time in your estimation you’ll need a crystal ball to know how much time the work will take.

    For this project, my knowledge of BuildStream and Freedesktop SDK is proving valuable. There’s a good reference manual for BuildStream, but no tutorials on how to use it for OS development. How do we expect people to learn it? Have we solved anything by introducing new abstractions that aren’t widely understood — even if they’re genuinely better in some use cases?

    Endless OS 7

    Given I’ve started with a rant you might ask how the project is going. Actually, quite good progress. Endless OS 7 exists, and it’s being built and pushed as an ostree from eos-build-meta to Endless’ ostree server. You can install it as an update to eos6 if you like to live dangerously — see the “Switch master” documentation. (You can probably install it on other ostree based systems if you like to live really dangerously, but I’m not going to tell you how). I have it running on a ThinkPad P14s laptop. Actually my first time testing any GNOME OS derivative on hardware!

    Thinkpad P14s running Endless OS 7

    For a multitude of reasons the work has been more stressful than it needed to be, but I’m optimistic for a successful outcome. (Where success means, we don’t give up and decide the Debian base was easier after all). I think GNOME OS and Endless OS will both benefit from closer integration.

    The tooling is working well for me: reliability and repeatability were core principles when BuildStream was being designed, and it shows. Once you learn it you can do integration work fast. You don’t get flaky builds. I’ve never deleted my cache to fix a weird problem. It’s an advanced tool, and in some ways it’s less flexible than its friends in the integration tool world, but it’s a really good way to build an operating system.

    I’ve learned a bunch about some important new abstractions on this project too. UEFI and Secure Boot . The systemd-sysusers service and userdb. Dracut and initramfs debugging .

    I haven’t been able to contribute any effort upstream to GNOME OS so far. I did contribute some documentation comments to Freedesktop SDK, and I’m trying to at least document Endless OS 7 as clearly as I can. Nobody has ever had much time to document how GNOME OS is built or tested; hopefully the documentation in eos-build-meta is a useful step forwards for GNOME OS as well.

    As always the GNOME OS community are super helpful. I’m sure it’s a big part of the success of GNOME OS that Valentín is so helpful whenever things break. I’m also privileged to be working with the highly talented engineers at Endless who built all this stuff.

    Abstractions

    Broadly, the software industry is fucked as long as we keep making an infinite number of new abstractions. I haven’t had a particularly good time on any project since I returned to software engineering five years ago, and I suspect it’s because we just can’t control the complexity enough to reason properly about what we are doing.

    This complexity is starting to inconvenience billionaires. In the UK the entire car industry has been stopped for weeks because system owners didn’t understand their work well enough to do a good job of securing systems. I wonder if it’s going to occur to them eventually that simplification is the best route to security. Capitalism doesn’t tend to reward that way of thinking — but it can reward anything that gives you a business advantage.

    I suppose computing abstractions are like living things, with a tendency to boundlessly multiply until they reach some natural limit, or destroy their habitat entirely. Maybe the last year of continual security breaches could be that natural limit. If your system is too complex for anyone to keep it secure, then your system is going to fail.


      Christian Hergert: Directory Listings with Foundry

      news.movim.eu / PlanetGnome • 22 September • 3 minutes

    I took a different approach to directory listings in Foundry. They use GListModel as the interface, but behind the scenes it is implemented with futures and fibers.

    A primary use case for a directory listing is the project tree of an IDE. Since we use GtkListView for efficient trees in GTK, we expose a GListModel. Each item in the directory is represented as a FoundryDirectoryItem, which acts just a bit like a specialized GFileInfo. It contains information about all the attributes requested in the directory listing.

    You can also request some other information that is not traditionally available via GFileInfo . You can request attributes that will be populated by the version control system such as if the file is modified or should be ignored.

    Use a GtkFilterListModel to look at specific properties or attributes. Sort them quickly using GtkSortListModel. All of this makes implementing a project tree browser very straightforward.

    Reining in the Complexity

    One of the most complex things in writing a directory listing is managing updates to the directory. To manage some level of correctness here, Foundry does it with a fiber using the following ordering:

    • Start a file monitor on the directory and start queueing up changes
    • Enumerate children in the directory and add to internal list
    • Start processing monitor events, starting with the backlog
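The ordering matters because events that arrive while the enumeration is still running must not be lost. A rough sketch of the idea using Python's asyncio (all names here are hypothetical; Foundry itself uses fibers via libdex, not asyncio):

```python
import asyncio

async def list_directory(monitor_events, enumerate_children):
    """Sketch of the ordering: monitor already running, enumerate, drain backlog."""
    items = []
    # Step 1 happened before we were called: the file monitor is running and
    # queueing events into monitor_events.
    # Step 2: enumerate children in the directory and add to the internal list.
    items.extend(await enumerate_children())
    # Step 3: process monitor events, starting with the backlog.
    while not monitor_events.empty():
        kind, name = monitor_events.get_nowait()
        if kind == "created" and name not in items:
            items.append(name)
        elif kind == "deleted" and name in items:
            items.remove(name)
    return items

async def demo():
    events = asyncio.Queue()
    # Events that arrived while the enumeration was in flight
    events.put_nowait(("created", "c.txt"))
    events.put_nowait(("deleted", "a.txt"))

    async def enumerate_children():
        return ["a.txt", "b.txt"]

    return await list_directory(events, enumerate_children)

print(asyncio.run(demo()))  # ['b.txt', 'c.txt']
```

Because the monitor starts queueing before enumeration begins, a file created or deleted mid-enumeration still shows up in the backlog and is reconciled afterwards.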

    Performance

    There are a couple tricks to balancing the performance of new items being added and items being removed. We want both to be similarly well performing.

    To do this, new items are always placed at the end of the list. We don’t care about sorting here because that will be dealt with at a higher layer (in the GtkSortListModel). That keeps adding new items quite fast because we can quickly access the end of the list. It also saves us from re-sorting the tree just to handle a removal.

    But if you aren’t sorting the items, how do you make removals quick? Doesn’t that become O(n) ?

    Well, not quite. If we keep a secondary index (in this case a simple GHashTable), then we can store a key (the file’s name) which points to a stable pointer (a GSequenceIter). That lookup is O(1), and removal from a GSequence is O(log n) on average.
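The pairing can be sketched in Python, where a single insertion-ordered dict stands in for both the GSequence and the GHashTable (the class and method names are hypothetical, not Foundry API):

```python
class DirectoryModel:
    """Append-at-end storage plus a name index for O(1) removal lookups.

    A Python analogue of the GSequence + GHashTable pairing described above;
    an insertion-ordered dict plays both roles here.
    """

    def __init__(self):
        self._items = {}              # name -> item, keeps insertion order

    def add(self, name, item):
        self._items[name] = item      # new items always land at the end

    def remove(self, name):
        # Index lookup instead of a linear scan over the list
        return self._items.pop(name, None)

    def snapshot(self):
        # Unsorted on purpose: sorting happens at a higher layer,
        # e.g. a GtkSortListModel wrapping the model
        return list(self._items.values())

model = DirectoryModel()
for name in ("a.txt", "b.txt", "c.txt"):
    model.add(name, name.upper())
model.remove("b.txt")
print(model.snapshot())  # ['A.TXT', 'C.TXT']
```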

    Big O notation is often meaningless when you’re talking about different systems. So let’s be real for a moment. Your callback from GListModel::items-changed() can have a huge impact on performance compared to the internal data structures here.

    Reference Counting and Fibers

    When doing this with a fiber, care must be taken to avoid over-referencing the object, or you risk it never being disposed/finalized.

    One way out of that mess is to use a GWeakRef and only request the object when it is truly necessary. That way, you can make your fiber cancel when the directory listing is disposed. In turn, your monitor and other resources are cleaned up automatically.

    I do this by creating a future that will resolve when there is more work to be done. Then I release my self object followed by awaiting the future. At that point I’ll either resolve or reject from an error (including the fiber being cancelled).

    If I need self again because I got an event, it’s just a g_weak_ref_get() away.

    Future-based File Monitors

    I’m not a huge fan of “signal based” APIs like GFileMonitor so internally FoundryFileMonitor does something a bit different. It still uses GFileMonitor internally for outstanding portability, but the exposed API uses DexFuture .

    This allows you to ask for the next event (which may already be queued). Call foundry_file_monitor_next() to get the next event, and the returned future will resolve when an event has been received. This makes more complex awaiting operations feasible too (such as monitoring multiple directories for the next change).
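The shape of that API can be sketched by wrapping a callback-based monitor in a queue with an awaitable next(). Again a Python analogue with hypothetical names, since the real code returns a DexFuture rather than a coroutine:

```python
import asyncio

class FutureFileMonitor:
    """Wrap a callback-based monitor so events are pulled with an awaitable.

    A Python analogue of FoundryFileMonitor over GFileMonitor; names are
    hypothetical and the real API returns a DexFuture.
    """

    def __init__(self):
        self._events = asyncio.Queue()

    def on_changed(self, event):
        # Invoked by the underlying callback-based monitor
        self._events.put_nowait(event)

    async def next(self):
        # Resolves immediately if an event is already queued,
        # otherwise waits until one arrives
        return await self._events.get()

async def demo():
    monitor = FutureFileMonitor()
    monitor.on_changed(("changed", "config.json"))
    return await monitor.next()

print(asyncio.run(demo()))  # ('changed', 'config.json')
```

Inverting the callback into a pull-style await is what makes operations like "wait for the next change across several directories" composable.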

    foundry-directory-listing.c, foundry-directory-item.c, and foundry-file-monitor.c.


      Christian Hergert: Getting Started with Foundry

      news.movim.eu / PlanetGnome • 21 September • 2 minutes

    In addition to all the Libdex 1.0 fanfare, Foundry has also reached 1.0 for GNOME 49. That doesn’t mean it’s complete, but it does mean that what is there I feel pretty confident about from an API/ABI standpoint.

    If you have a project that works in GNOME Builder, it is a good time to test it out with Foundry! To get started you need to “initialize” a Foundry project similar to what you do with Git. This creates a minimal skeleton structure that Foundry will use for the project.

    cd myapp/
    foundry init

    At this point, you should have a .foundry/ directory in your project. Foundry will try to store everything it knows about your project in there. That includes settings, builds, cache, and tmp files. It also sets up a .gitignore file so that only the things you’re interested in end up getting committed to the project’s git repository.

    Now we can build the project.

    foundry build

    Foundry will do all the same sort of SDK setup, cross-container build pipelines, and build system management that GNOME Builder does.

    To run the project, just run the following. It will handle building first if necessary too.

    foundry run

    If using a Flatpak manifest as your build configuration (you can check with foundry config list) then it will handle all the same sort of details that Builder does.

    Building with Libfoundry

    Of course the command line tool is great for when you need to do something quickly in a terminal. But perhaps you want to integrate Foundry features into your own application or tools? Everything in the command line tool is available via libfoundry.

    To open an existing project with libfoundry, using just a directory to start, we can discover the `.foundry` directory location.

    g_autoptr(GError) error = NULL;
    g_autofree char *state_dir = NULL;
    DexFuture *future;
    
    future = foundry_context_discover (g_get_current_dir (), NULL);
    state_dir = dex_await_string (future, &error);

    We can use that information to load the project.

    g_autoptr(FoundryContext) context = NULL;
    DexFuture *future;
    
    future = foundry_context_new (state_dir, NULL, 0, NULL);
    context = dex_await_object (future, &error);

    To run a build we need to get the build manager service. That is available from the context with the :build-manager property.

    g_autoptr(FoundryBuildManager) build_manager = NULL;
    
    build_manager = foundry_context_dup_build_manager (context);

    One thing you’ll notice in Foundry is that many getters take a reference to the returned value. This is because Foundry heavily uses fibers and it is always better to own your objects when you cross fiber suspension points. This is where the dup_ prefix comes from instead of get_ .

    Building the project is quite simple too. Just await until it has completed.

    dex_await (foundry_build_manager_build (build_manager), &error);

    Running with Libfoundry

    Running is quite similar, except you use the run manager instead of the build manager.

    g_autoptr(FoundryRunManager) run_manager = NULL;
    
    run_manager = foundry_context_dup_run_manager (context);
    dex_await (foundry_run_manager_run (run_manager), &error);

    There are of course more detailed ways to do this if you need precise control over things like where the project is deployed, such as to an external device. Or perhaps you want to control what command is run. The run manager and build manager also allow controlling the PTY that is used for both operations.

    Hopefully that is just enough to get you excited about using the tooling. The API has been obsessed over, but the documentation is still in its infancy.


      This Week in GNOME: #217 Mahjongg Sundays

      news.movim.eu / PlanetGnome • 21 September • 6 minutes

    Update on what happened across the GNOME project in the week from September 14 to September 21.

    GNOME Core Apps and Libraries

    GLib

    The low-level core library that forms the basis for projects such as GTK and GNOME.

    Philip Withnall says

    Tobias Stoeckmann continues to grind away at fixing various integer overflow corner cases in GLib, making the library more reliable for everyone when your program gets into a weird corner case. This week: string utility functions 🎉

    GNOME Circle Apps and Libraries

    Mahjongg

    Match tiles and clear the board

    Tobias Bernard says

    Mahjongg was accepted into Circle! It’s one of the historical GNOME games, but thanks to Mat’s work over the past few cycles it looks very fresh and clean nowadays. Congratulations 🥳

    https://apps.gnome.org/Mahjongg


    Mat says

    Mahjongg 49.0 has been released, and is available on Flathub. This release contains a bunch of improvements:

    • New app icon (Tobias Bernard)
    • Save and restore active games on startup (François Godin)
    • Adjust theme contrast in dark and high contrast modes
    • Shake unselectable tiles when clicking them
    • Replace help docs with Game Rules dialog
    • Add confirmation dialog for layout change during active game
    • Rename ‘Difficult’ layout to ‘Taipei’
    • Remove Date column from Scores dialog to leave more space for player name
    • Fix text entry focus when recycling rows in Scores dialog
    • Reduce frame drops when using the Cairo renderer
    • Use Rsvg directly instead of GdkPixbuf for asset loading
    • Several performance optimizations related to Scores dialog
    • Translation updates


    Déjà Dup Backups

    A simple backup tool.

    Michael Terry announces

    Déjà Dup Backups 49.0 is finally out! This is a big one - Restic by default, restoring by file manager, and a big UI refresh.

    Read more here: https://discourse.gnome.org/t/deja-dup-49-0-released/31441

    Third Party Projects

    Alain says

    Planify 4.14 released

    We are excited to announce the release of Planify 4.14, which comes with major new features, performance improvements, and important bug fixes.

    Highlights of this release include:

    • Rewritten CalDAV backend with support for Radicale, Baïkal, and other CalDAV servers — thanks to @byquanton.
    • Fixed memory leaks when completing or deleting tasks, and during CalDAV synchronization — thanks to @markochk.
    • New view cache system that automatically frees unused memory when views are not in use.
    • Added customizable sidebar filters: Tomorrow, Someday, Recurring, No label, and All tasks.
    • Improved synchronization: Planify now respects the task order from Nextcloud and Todoist, with a new reordering algorithm.
    • Work in progress to make WebKit and Evolution dependencies optional, paving the way for Windows and macOS support — thanks to @byquanton.
    • Spell checker support in task descriptions.
    • Fixed bug when moving tasks with subtasks in Nextcloud/CalDAV projects.
    • Updated themes (Light, Dark, Dark Blue) and improved task design for better readability.
    • Labels and indicators for notes, reminders, and subtasks are now aligned to the right of task titles.
    • Added a quick-add button to every task list.
    • New completion animation and updated sound when finishing tasks.
    • Completion notification added, so users can easily review completed tasks.

    Planify 4.14 is available now on Flathub: https://flathub.org/apps/io.github.alainm23.planify

    Read the full announcement here: https://useplanify.com/blog#planify-414-is-here


    Alexander Vanhee says

    Everyone makes mistakes, and that’s why in Gradia you will be able to edit both the positioning and the properties, like color and size, of already drawn elements. This saves you the trouble of having to redo the placement. Follow future developments on GitHub.

    xjuan reports

    Casilda 1.0 released! A simple wayland compositor widget for GTK 4

    Release notes:

    • Add dmabuf support (Val Packett)
    • Added vapi generation (PaladinDev)
    • Add library soname (Benson Muite)
    • Implement GtkScrollable iface
    • Add get_client_socket_fd()
    • Add spawn_async()
    • Remove bg-color property
    • Render windows and popups directly in snapshot()
    • Position windows on center of widget
    • Improve transient windows handling

    Read more about it at https://blogs.gnome.org/gtk/2025/09/15/casilda-1-0-released/


    Mahyar Darvishi reports

    SSH Studio v1.2.2 is out!

    SSH-Studio is a new desktop app for managing your ~/.ssh/config without needing to dive into terminal editors. It makes working with SSH easier by letting you search, edit, and validate hosts in a clean interface.

    The app also comes with a raw/diff view for advanced edits, inline error checking, quick actions like copy or test connection, and even a simple SSH key manager. Automatic backups ensure your config stays safe while you experiment.

    Check it out on Flathub


    GNOME Websites

    Guillaume Bernard announces

    Starting with GNOME 49, Damned Lies now follows the GNOME release cycle. Even if Damned Lies is continuously deployed, it helps track the changes and will surely motivate developments, as we have a deadline!

    For this cycle, as explained in a previous TWIG, we switched authentication to a 3rd party-based system that works like a charm. You can connect with your GNOME SSO account.

    Amongst a few changes in the user interface and some bug fixes, we now send notifications in both HTML and plain text. Some elements of the Vertimus workflow have also been fixed, resulting in more stable contributions and better error messages to help translators, reviewers, and committers debug the situation themselves without requiring them to open an issue on Damned Lies’ issue tracker.

    Most of the work was done in the background. The code responsible for computing statistics, POT file generation, and Git repositories has been refactored. This is the most important part of this cycle, as it was a requirement to implement other new features: asynchronous Git commits and pushes, and asynchronous refresh of statistics. The number of tests increased from around 300 to more than 500 for the same code coverage.

    The existing code is now more stable, and Damned Lies can be continuously deployed without fear. And that’s the case. Did you even notice all the silent updates in September?

    Miscellaneous

    sadlyascii says

    do you maintain a flatpak of an app that uses media codecs? if you do, please upgrade it to the new runtime org.gnome.Platform//49 (org.freedesktop.Platform//25.08)! the older versions download some codecs directly from cisco’s servers, and cisco has geoblocked some places, including the whole of Ukraine. so your flatpak might be failing to install for people there; see this issue for details: https://github.com/cisco/openh264/issues/3886

    Ada Magicat ❤️🧡🤍🩷💜 says

    Valentin added a firewall to GNOME OS. This protects your computer from other devices on the network. By default we use a lax policy to ensure most applications work, but you can change it for more security. Check out the relevant section in our security hardening guide.

    We also greatly improved our Nvidia driver support. The driver was in poor condition, but after lots of fixes most features should now work.

    If you play games or make use of features like CUDA we would appreciate your help in testing more hardware and applications. GNOME OS is still pre-release software, but if you are interested in testing you can follow the installation instructions here . After installing, enable the Nvidia driver by running:

    sudo updatectl enable nvidia-driver --now

    Next, reboot and test your favourite game or CUDA application. If you find any issues, feel free to report them here .

    Note : We use the new “open” kernel modules by Nvidia, so only 2000 and 1600 series cards or newer work with GNOME OS.

    GNOME Foundation

    Allan Day announces

    Another weekly GNOME Foundation update is available, covering highlights from the last seven days. Highlights include the GNOME 49 release, a new fundraising committee, and self-hosting our Matrix homeserver.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Allan Day: GNOME Foundation Update, 2025-09-19

      news.movim.eu / PlanetGnome • 19 September • 2 minutes

    Another week, another GNOME Foundation update! This week was slightly quieter than the last two, but there’s still plenty going on. Here are some of the highlights.

    GNOME 49

    GNOME 49 was released this week ! Huge congratulations to the GNOME community. It’s a really strong release and the release notes are a testament to everyone’s hard work .

    It’s notable that some of the new features in this release have been in the works for much longer than 6 months – in some cases, probably years. This speaks to an amazing level of dedication by the contributors involved.

    Given that this is a GNOME Foundation update, I am obliged to mention the role our organisation played in the release. By providing the project’s development infrastructure, organising events where planning discussions happen, and providing other general support to the project, we played a significant role in the making of GNOME 49.

    Our goal is to increase the amount of support we provide for GNOME development, but we can only do that with the support of donors. So, if you want to help GNOME, donate today .

    Fundraising committee

    Maria Majadas, who is our current board chair, has organised the first meeting of the new fundraising committee this week. I’m hugely grateful to Maria for taking on this important task, and it’s exciting to see our community fundraising effort starting to take shape. Hopefully there will be more details to share once the committee has had its first meeting.

    Lawyering

    We have a few legal questions that we’re looking to get answers to right now, around requirements for our governance arrangements. That led me to have conversations with a few lawyers this week.

    The GNOME Foundation has the good fortune of being able to tap into a network of legal advisors who are also open source experts, and who are very supportive of our organisation. I find it very humbling that we have this support, and I think we can be very grateful to have friends in the legal space.

    Matrix hosting

    Historically, Element Matrix Services (EMS) has generously hosted GNOME’s Matrix homeserver. However, this week Bart has started the process of moving our Matrix service to our own infrastructure. This will offer a few advantages in the future, such as integration with our SSO. Stay tuned for updates as the migration rolls out.

    Bookkeeper meeting

    We had our monthly meeting with our bookkeeper this week, which was great as usual. We love our bookkeeper! She helps us keep our accounts in order, address issues as they come up, and make sure that we are ready for our annual tax filings. Accounting is a big part of what the GNOME Foundation does, and our books are in great shape thanks to this work.

    Message ends

    Thanks for reading! See you next week.


      Christian Hergert: Fibers in Libdex

      news.movim.eu / PlanetGnome • 19 September • 4 minutes

    I’ve talked a lot about fibers before. But what are they and how do they work?

    From the programmer’s perspective they can feel a lot like a thread. They have a stack just like a real thread. They maintain a program counter just like a real thread. They can spill registers to the stack like a real thread.

    Most importantly, you can call into GTK from your fiber if it is running on the main thread. Your fiber will have been dispatched from the main loop on the same OS thread, so this is a great use of libdex.

    There are downsides too, though. You have to allocate a stack and guard pages for them just like a real thread. Transitioning between stacks has some cost, even if it is fairly low these days. You also need to be mindful to own the lifecycle of pointers on your stack if you intend to await.

    Many fibers may work together on a single thread where each runs a little bit until yielding back to the scheduler. This is called “cooperative multi-tasking” because it is up to the fibers to be cooperative and yield when appropriate.

    That means you generally should not “block” when writing code for fibers. Blocking not only stops your own fiber from making progress but also every other fiber sharing your thread.

    The way around this is to use non-blocking APIs instead of blocking calls like open() or read() . This is where combining a library for Futures and a library for Fibers makes a lot of sense. If you provide asynchronous APIs based on futures you immediately gain a natural point to yield from a fiber back to the scheduler.

    The scheduler maintains two queues for fibers on a thread. That is because fibers exist in one of two (well three sort of) states.

    • The first state is “runnable” meaning the fiber can make progress immediately.
    • The second state is “blocked” meaning the fiber is waiting on a future to complete.
    • The (sort of) third state is “finished” but in libdex it would be removed from all queues here.

    When a fiber transitions from runnable to blocked, its linked-list node migrates from one queue to the other. Naturally, that means it must be waiting for the completion of a future. The scheduler will register itself with the dependent future so it may be notified of completion.

    Upon completion of the dependent future, our fiber will move from the blocked queue to the runnable queue. The next GMainContext iteration will transition into the fiber’s stack, restore register values, set the instruction pointer, and continue running.

    Fibers get their stack from a pool of pre-allocated stacks. When they are discarded they return to the pool for quick re-use. If too many are saved, we release the stacks’ memory back to the system. It’s all just mmap() -based stacks currently.

    You might be wondering how we transition into the fiber’s stack from the thread. Libdex has a few different strategies for that based on the platforms it supports.

    Windows, for example, has native support for fibers in their C runtime. So it uses ConvertThreadToFiber() and ConvertFiberToThread() for transitions.

    On Linux and many other Unix-like systems we can use makecontext() and swapcontext() to transition. There was a time when swapcontext() was quite slow and so people used specialized assembly to do the same thing. These days I found that to be unnecessary (at least on Linux).

    Another way to transition stacks is by using sigaltstack() , but libdex does not use that method.

    Libdex fibers work on Linux, FreeBSD, macOS (both x86_64 and aarch64 ), Windows, and Illumos/Solaris. I believe Hurd also works but I’ve not verified that.

    When your fiber first runs you will be placed at the base of your new stack. So if you found yourself in a debugger at your fiber’s entry point, it might look like you are one function deep. The first function from libdex would be equivalent to your _start on a regular thread.

    If you await while five functions deep, your register state will be saved and your stack set aside. The fiber will transition back to the scheduler where it left off; the original thread state is restored, and the fiber scheduler can continue on to the next work item.

    At the core, the fiber scheduler is really just a GSource within the GMainContext . It knows when to flag itself as runnable. When dispatched it will wake up any number of runnable fibers.

    To make sure that we don’t have to deal with extremely tricky situations fibers may not be migrated across threads. They are always pinned to the thread they were created on. If that becomes a problem it is usually better to break up your work into smaller tasks.

    Another feature that has become handy is implicit fiber cancellation. A fiber is itself a future. If all code awaiting completion of your fiber has discarded interest, then your fiber will be implicitly cancelled.

    Where this works out much better than real thread cancellation is that we already have natural exit points where we yield. So when your fiber calls dex_await() , it will get a DEX_ERROR_FIBER_CANCELLED in a GError . Usually you propagate that rejection by returning the error from your fiber. Easy.

    If you do not want implicit fiber cancellation, you can “disown” your fiber using dex_future_disown() .


      Sebastian Wick: Integrating libdex with GDBus

      news.movim.eu / PlanetGnome • 18 September • 5 minutes

    Writing asynchronous code in C has always been a challenge. Traditional callback-based approaches, including GLib’s async/finish pattern, often lead to the so-called callback hell that’s difficult to read and maintain. The libdex library offers a solution to this problem, and I recently worked on expanding the integration with GLib’s GDBus subsystem.

    The Problem with the Sync and Async Patterns

    Writing C code involving tasks which can take a non-trivial amount of time has traditionally required choosing between two approaches:

    1. Synchronous calls - Simple to write but block the current thread
    2. Asynchronous callbacks - Non-blocking but result in callback hell and complex error handling

    Often the synchronous variant is chosen to keep the code simple, but in many cases, blocking for potentially multiple seconds is not acceptable. Threads can be used to keep the main thread from blocking, but that introduces parallelism and with it the need for locking, and it can potentially create a huge number of threads which mostly sit idle.

    The asynchronous variant has none of those problems, but consider a typical async D-Bus operation in traditional GLib code:

    static void
    on_ping_ready (GObject      *source_object,
                   GAsyncResult *res,
                   gpointer      data)
    {
      g_autofree char *pong = NULL;
    
      if (!dex_dbus_ping_pong_call_ping_finish (DEX_DBUS_PING_PONG (source_object),
                                                &pong,
                                                res, NULL))
        return; // handle error
    
      g_print ("client: %s\n", pong);
    }
    
    static void
    on_ping_pong_proxy_ready (GObject      *source_object,
                              GAsyncResult *res,
                              gpointer      data)
    {
      DexDbusPingPong *pp = dex_dbus_ping_pong_proxy_new_finish (res, NULL);
      if (!pp)
        return; // Handle error
    
      dex_dbus_ping_pong_call_ping (pp, "ping", NULL,
                                    on_ping_ready, NULL);
    }
    

    This pattern becomes unwieldy quickly, especially with multiple operations, error handling, shared data and cleanup across multiple callbacks.

    What is libdex?

    Dex provides Future-based programming for GLib. It provides features for application and library authors who want to structure concurrent code in an easy to manage way. Dex also provides Fibers which allow writing synchronous looking code in C while maintaining the benefits of asynchronous execution.

    At its core, libdex introduces two key concepts:

    • Futures : Represent values that will be available at some point in the future
    • Fibers : Lightweight cooperative threads that allow writing synchronous-looking code that yields control when waiting for asynchronous operations

    Futures alone already simplify dealing with asynchronous code by specifying a call chain ( dex_future_then() , dex_future_catch() , and dex_future_finally() ), or even more elaborate flows ( dex_future_all() , dex_future_all_race() , dex_future_any() , and dex_future_first() ) in one place, without the typical callback hell. It still requires splitting things into a bunch of functions and potentially moving data through them.

    static DexFuture *
    lookup_user_data_cb (DexFuture *future,
                         gpointer   user_data)
    {
      g_autoptr(MyUser) user = NULL;
      g_autoptr(GError) error = NULL;
    
      // the future in this cb is already resolved, so this just gets the value
      // no fibers involved 
      user = dex_await_object (future, &error);
      if (!user)
        return dex_future_new_for_error (g_steal_pointer (&error));
    
      return dex_future_first (dex_timeout_new_seconds (60),
                               dex_future_any (query_db_server (user),
                                               query_cache_server (user),
                                               NULL),
                               NULL);
    }
    
    static void
    print_user_data (void)
    {
      g_autoptr(DexFuture) future = NULL;
    
      future = dex_future_then (find_user (), lookup_user_data_cb, NULL, NULL);
      future = dex_future_then (future, print_user_data_cb, NULL, NULL);
      future = dex_future_finally (future, quit_cb, NULL, NULL);
    
      g_main_loop_run (main_loop);
    }
    

    The real magic of libdex however lies in fibers and the dex_await() function, which allows you to write code that looks synchronous but executes asynchronously. When you await a future, the current fiber yields control, allowing other work to proceed while waiting for the result.

    g_autoptr(MyUser) user = NULL;
    g_autoptr(MyUserData) data = NULL;
    g_autoptr(GError) error = NULL;
    
    user = dex_await_object (find_user (), &error);
    if (!user)
      return dex_future_new_for_error (g_steal_pointer (&error));
    
    data = dex_await_boxed (dex_future_first (dex_timeout_new_seconds (60),
                                              dex_future_any (query_db_server (user),
                                                              query_cache_server (user),
                                                              NULL),
                                              NULL), &error);
    if (!data)
      return dex_future_new_for_error (g_steal_pointer (&error));
    
    g_print ("%s", data->name);
    

    Christian Hergert wrote pretty decent documentation , so check it out!

    Bridging libdex and GDBus

    With the new integration, you can write D-Bus client code that looks like this:

    g_autoptr(DexDbusPingPong) pp = NULL;
    g_autoptr(DexDbusPingPongPingResult) res = NULL;
    
    pp = dex_await_object (dex_dbus_ping_pong_proxy_new_future (connection,
                                                                G_DBUS_PROXY_FLAGS_NONE,
                                                                "org.example.PingPong",
                                                                "/org/example/pingpong"),
                           &error);
    if (!pp)
      return dex_future_new_for_error (g_steal_pointer (&error));
    
    res = dex_await_boxed (dex_dbus_ping_pong_call_ping_future (pp, "ping"), &error);
    if (!res)
      return dex_future_new_for_error (g_steal_pointer (&error));
    
    g_print ("client: %s\n", res->pong);
    

    This code is executing asynchronously, but reads like synchronous code. Error handling is straightforward, and there are no callbacks involved.

    On the service side, if enabled, method handlers will run in a fiber and can use dex_await() directly, enabling complex asynchronous operations within service implementations:

    static gboolean
    handle_ping (DexDbusPingPong       *object,
                 GDBusMethodInvocation *invocation,
                 const char            *ping)
    {
      g_print ("service: %s\n", ping);
    
      dex_await (dex_timeout_new_seconds (1), NULL);
      dex_dbus_ping_pong_complete_ping (object, invocation, "pong");
    
      return G_DBUS_METHOD_INVOCATION_HANDLED;
    }
    
    static void
    dex_dbus_ping_pong_iface_init (DexDbusPingPongIface *iface)
    {
      iface->handle_ping = handle_ping;
    }
    
    pp = g_object_new (DEX_TYPE_PING_PONG, NULL);
    dex_dbus_interface_skeleton_set_flags (DEX_DBUS_INTERFACE_SKELETON (pp),
                                           DEX_DBUS_INTERFACE_SKELETON_FLAGS_HANDLE_METHOD_INVOCATIONS_IN_FIBER);
    

    This method handler includes a 1-second delay, but instead of blocking the entire service, it yields control to other fibers during the timeout.

    The merge request contains a complete example of a client and service communicating with each other.

    Implementation Details

    The integration required extending GDBus’s code generation system. Rather than modifying it directly, the current solution introduces a very simple extension mechanism for the existing generator.

    The generated code includes:

    • Future-returning functions : For every _proxy_new() and _call_$method() function, corresponding _future() variants are generated
    • Result types : Method calls return boxed types containing all output parameters
    • Custom skeleton base class : Generated skeleton classes inherit from DexDBusInterfaceSkeleton instead of GDBusInterfaceSkeleton , which implements dispatching method handlers in fibers

    Besides the GDBus code generation extension system, there are a few more changes required in GLib to make this work. This is not merged at the time of writing, but I’m confident that we can move this forward.

    Future Directions

    I hope that this work convinces more people to use libdex! We have a whole bunch of existing code bases which will have to stick with C for the foreseeable future, and libdex provides tools to make incremental improvements. Personally, I want to start using it in the xdg-desktop-portal project.


      Michael Meeks: 2025-09-18 Thursday

      news.movim.eu / PlanetGnome • 18 September

    • Up early, tech planning call, admin, lunch with Julia. Partner call, dug through bugs and patches, UX/Design stand-up.