
      Andy Wingo: an annoying failure mode of copying nurseries

      news.movim.eu / PlanetGnome • 13 January • 1 minute

    I just found a funny failure mode in the Whippet garbage collector and thought readers might be amused.

    Say you have a semi-space nursery and a semi-space old generation. Both are block-structured. You are allocating live data, say, a long linked list. Allocation fills the nursery, which triggers a minor GC, which decides to keep everything in the nursery another round, because that’s policy: Whippet gives new objects another cycle in which to potentially become unreachable.

    This causes a funny situation!

    Consider that the first minor GC doesn’t actually free anything. But, like, nothing: it’s impossible to allocate anything in the nursery after collection, so you run another minor GC, which promotes everything, and you’re back to the initial situation, wash rinse repeat. Copying generational GC is strictly a pessimization in this case, with the additional insult that it doesn’t preserve object allocation order.

    Consider also that because copying collectors with block-structured heaps are unreliable, any one of your minor GCs might require more blocks after GC than before. Unlike in the case of a major GC in which this essentially indicates out-of-memory, either because of a mutator bug or because the user didn’t give the program enough heap, for minor GC this is just what we expect when allocating a long linked list.

    Therefore we either need to allow a minor GC to allocate fresh blocks – very annoying, and we have to give them back at some point to prevent the nursery from growing over time – or we need to maintain some kind of margin, corresponding to the maximum amount of fragmentation. Or, or, we allow evacuation to fail in a minor GC, in which case we fall back to promotion.
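
    To make that last option concrete, here is a minimal sketch in C (Whippet is written in C, but these names and types are mine, not Whippet’s actual API) of an evacuation step that keeps a survivor in the nursery when there is room and falls back to promotion when the to-space runs out of blocks:

        #include <stdbool.h>
        #include <stddef.h>

        typedef struct { size_t free_blocks; } copy_space;

        /* Try to keep a survivor young; if the nursery's to-space has no room
           (fragmentation can make a minor GC need more blocks than before),
           promote the object to the old generation instead of failing. */
        static bool copy_survivor(copy_space *nursery_to, copy_space *old_gen,
                                  size_t blocks_needed, bool *promoted) {
          if (nursery_to->free_blocks >= blocks_needed) {
            nursery_to->free_blocks -= blocks_needed;  /* survivor stays young */
            *promoted = false;
            return true;
          }
          if (old_gen->free_blocks >= blocks_needed) {
            old_gen->free_blocks -= blocks_needed;     /* evacuation failed: promote */
            *promoted = true;
            return true;
          }
          return false;  /* neither space has room: genuinely out of memory */
        }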

    Anyway, I am annoyed and amused and I thought others might share in one or the other of these feelings. Good day and happy hacking!


      This post is public

      wingolog.org/archives/2025/01/13/an-annoying-failure-mode-of-copying-nurseries


      This Week in GNOME: #182 Updated Crypto

      news.movim.eu / PlanetGnome • 10 January • 3 minutes

    Update on what happened across the GNOME project in the week from January 03 to January 10.

    GNOME Core Apps and Libraries

    nielsdg reports

    gcr, a core library that provides a GObject-oriented interface to several crypto APIs, is preparing for the new 4.4 version with the alpha release 4.3.90. It contains some new APIs for GcrCertificate, such as the new GcrCertificateExtension class that allows you to inspect certificate extensions. 🕵️

    nielsdg says

    GNOME Keyring has now finally moved to Meson and has dropped support for building with autotools. This will be part of the upcoming 48.alpha release.

    Vala

    An object-oriented programming language with a self-hosting compiler that generates C code and uses the GObject system.

    lorenzw says

    Many people might have seen it already, but a while ago we finally officially moved our documentation from the old GNOME wiki to a new website: https://docs.vala.dev ! This has been a long-standing task completed by Colin Kiama. The pages are hosted on https://github.com/vala-lang/vala-docs and everyone is welcome to contribute and improve them. We have already started to file tickets in the issue tracker and assign labels, especially for newcomers, so it’s easy to start helping out! We want to port a lot more docs and code examples from other locations to this new website, and that’s not difficult at all! The website is built similarly to all other new GNOME documentation websites using Sphinx, so you don’t even need to learn a new markup language. Happy docs reading and hacking! :D

    Image Viewer (Loupe)

    Browse through images and inspect their metadata.

    Sophie 🏳️‍🌈 🏳️‍⚧️ (she/her) says

    Image Viewer (Loupe) 48.alpha is now available.

    This new release adds image editing support for PNGs and JPEGs. Images can be cropped (tracking issue), rotated, and flipped. New zoom controls allow setting a specific zoom level and feature a more compact style. Support for additional metadata formats like XMP and new image information fields have been added as well.

    Libadwaita

    Building blocks for modern GNOME apps using GTK4.

    Alice (she/her) reports

    Adaptive preview has received a bunch of updates since the last time: for example, it now shows device bezels and allows taking a screenshot of the app along with the shell panels and bezels.

    GNOME Circle Apps and Libraries

    Shortwave

    Internet radio player with over 30000 stations.

    Felix announces

    At the end of the festive season I was able to implement one more feature: Shortwave now supports background playback, and interacts with the background portal to display the current status in the system menu!

    Third Party Projects

    Fabrix announces

    Confy 0.8.0 has been released. Confy is a conference schedule companion. This release brings an updated UI design, some quality-of-life improvements like a list of recently opened schedules, and fixes to schedule parsing. https://confy.kirgroup.net/

    Parabolic

    Download web video and audio.

    Nick announces

    Parabolic V2025.1.0 is here! This update contains various bug fixes for issues users were experiencing, as well as a new format selection system.

    Here’s the full changelog:

    • Parabolic will now display all available video and audio formats for selection by the user when downloading a single media
    • Fixed an issue where some video downloads contained no audio
    • Fixed an issue where progress was incorrectly reported for some downloads
    • Fixed an issue where downloads would not stop on Windows
    • Fixed an issue where paths with accent marks were not handled correctly on Windows
    • Fixed an issue where the bundled ffmpeg did not work correctly on some Windows systems

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Adetoye Anointing: Demystifying SVG2 Text Layout: Understanding Librsvg

      news.movim.eu / PlanetGnome • 9 January • 3 minutes

    Prerequisite

    Hi! I’m Anointing, your friendly neighborhood software engineer. I’m an Outreachy GNOME intern currently working on the project titled “Implement the SVG2 Text Layout Algorithm in Librsvg.”

    In a previous blog post, I briefly introduced my project and tasks. If you missed it, don’t worry—this article dives deeper into the project and the specific task I’m working on.


    What is Librsvg?

    Librsvg is a lightweight library used to render Scalable Vector Graphics (SVGs), primarily within GNOME projects. It has been around since 2001, initially developed to handle SVG icons and graphical assets for GNOME desktops. Over the years, it has evolved into a versatile tool used for various SVG rendering needs.


    What is SVG2?

    Before understanding SVG2, let’s break down SVG (Scalable Vector Graphics):

    • SVG is an XML-based format for creating two-dimensional graphics.
    • Unlike raster images (e.g., JPEG or PNG), SVG images are scalable, meaning they retain quality regardless of size.
    • They are widely used for web graphics, illustrations, icons, and more because of their scalability and efficiency.

    SVG2 (Scalable Vector Graphics, version 2) is the latest update to the SVG standard, developed by the World Wide Web Consortium (W3C). It builds upon SVG 1.1 with new features, bug fixes, and enhancements to make SVG more powerful and consistent across modern browsers.


    Librsvg’s Current State

    Librsvg supports some parts of the SVG 1.1 specifications for text, including bidirectional text. However, advanced features like per-glyph positioning or text-on-path are not yet implemented.

    The SVG2 specification introduces significant improvements in text layout, such as:

    • Fine-grained glyph positioning
    • Support for right-to-left and bidirectional text
    • Vertical text layout
    • Text-on-path
    • Automatic text wrapping

    Currently, librsvg does not fully implement SVG2’s comprehensive text layout algorithm. My role is to help improve this functionality.


    My Role: Implementing the SVG2 Text Layout Algorithm

    If the above sounds technical, don’t worry—I’ll explain the key tasks in simpler terms.

    1. Support for Basic Text Layout

    This involves ensuring that text in SVG images appears correctly. Imagine a digital poster: every word and letter must be precisely positioned. My task is to make sure librsvg can handle this properly.

    2. Whitespace Handling

    Whitespace refers to the blank space between words and lines. In SVG, whitespace is standardized—extra spaces should not disrupt the layout. I’m implementing an algorithm to handle whitespace effectively.

    3. Left-to-Right (LTR) and Right-to-Left (RTL) Languages

    Languages like English are read from left to right, while Arabic or Hebrew are read from right to left. Librsvg must handle both correctly using a process called the Bidi (Bidirectional) algorithm.

    4. Inter-Glyph Spacing

    In SVG, attributes like x, y, dx, and dy allow precise control over letter spacing. This ensures text looks balanced and beautiful. Additionally, this task involves handling ligatures (e.g., combining characters in Arabic) to ensure the output is correct.
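
    As an illustration (my own minimal example, not taken from the post), the SVG snippet below positions each character of the word “SVG” individually: x gives each glyph an absolute start position and dx adds a relative offset on top, which is exactly the kind of per-glyph control the layout algorithm has to honor:

        <svg xmlns="http://www.w3.org/2000/svg" width="240" height="60">
          <!-- x lists one start position per character; dx shifts each one further. -->
          <text x="10 70 130" dx="0 5 10" y="40" font-size="32">SVG</text>
        </svg>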

    5. Text-on-Path Handling (If Time Permits)

    This feature allows text to follow a specific shape, like a circle or wave. It’s a fancy but useful way to add artistic effects to SVGs.


    Why Does This Matter?

    Improving librsvg’s text layout makes it more powerful and accessible for designers, developers, and artists. Whether creating infographics, digital posters, or interactive charts, these enhancements ensure that text renders beautifully in every language and style.


    Tips for Newbies

    If you’re new to SVG, text layout algorithms, or even Rust (the programming language used in librsvg), here’s what you need to know:

    • Skills Needed : Communication, basic Rust programming, and familiarity with terms like shaping, bidi algorithm, glyphs, ligatures, and baselines.
    • Start Small : Focus on one concept at a time—there’s no need to know everything at once.
    • Resources : The GNOME librsvg project is beginner-friendly and a great way to dive into open-source contributions.

    Resources for Learning and Contributing


    Wrapping Up

    These tasks may seem technical, but they boil down to making librsvg a better tool for rendering SVGs. Whether it’s neat text placement, handling multiple languages, or adding artistic text effects, we’re improving SVG rendering for everyone.

    So far, this project has been a journey of immense learning for me—both in technical skills like Rust programming and soft skills like clear communication.

    Feel free to ask questions or share your thoughts. I’d love to hear from you! 😊


      This post is public

      blogs.gnome.org/yorubad-dev/2025/01/09/demystifying-svg2-text-layout-understanding-librsvg/


      Andy Wingo: ephemerons vs generations in whippet

      news.movim.eu / PlanetGnome • 9 January • 5 minutes

    Happy new year, hackfolk! Today, a note about ephemerons. I thought I was done with them, but it seems they are not done with me. The question at hand is, how do we efficiently and correctly implement ephemerons in a generational collector? Whippet’s answer turns out to be simple but subtle.

    on oracles

    The deal is, I want to be able to evaluate different collector constructions and configurations, and for that I need a performance oracle: a known point in performance space-time against which to compare the unknowns. For example, I want to know how a sticky mark-bit approach to generational collection does relative to the conventional state of the art. To do that, I need to build a conventional system to compare against! If I manage to do a good job building the conventional evacuating nursery, it will have similar performance characteristics as other nurseries in other unlike systems, and thus I can use it as a point of comparison, even to systems I haven’t personally run myself.

    So I am adapting the parallel copying collector I described last July to have generational support: a copying (evacuating) young space and a copying old space. Ideally then I’ll be able to build a collector with a copying young space (nursery) but a mostly-marking nofl old space.

    notes on a copying nursery

    A copying nursery has different operational characteristics than a sticky-mark-bit nursery, in a few ways. One is that a sticky mark-bit nursery will promote all survivors at each minor collection, leaving the nursery empty when mutators restart. This has the pathology that objects allocated just before a minor GC aren’t given a chance to “die young”: a sticky-mark-bit GC over-promotes.

    Contrast that to a copying nursery, which can decide to promote a survivor or leave it in the young generation. In Whippet the current strategy for the parallel-copying nursery I am working on is to keep freshly allocated objects around for another collection, and only promote them if they are live at the next collection. We can do this with a cheap per-block flag, set if the block has any survivors, which is the case if it was allocated into as part of evacuation during minor GC. This gives objects enough time to die young while not imposing much cost in the way of recording per-object ages.

    Recall that during a GC, all inbound edges from outside the graph being traced must be part of the root set. For a minor collection where we just trace the nursery, that root set must include all old-to-new edges, which are maintained in a data structure called the remembered set. Whereas for a sticky-mark-bit collector the remembered set will be empty after each minor GC, for a copying collector this may not be the case. An existing old-to-new remembered edge may be unnecessary, because the target object was promoted; we will clear these old-to-old links at some point. (In practice this is done either in bulk during a major GC, or the next time the remembered set is visited during the root-tracing phase of a minor GC.) Or we could have a new-to-new edge which was not in the remembered set before, but now because the source of the edge was promoted, we must adjoin this old-to-new edge to the remembered set.

    To preserve the invariant that all edges into the nursery are part of the roots, we have to pay special attention to this latter kind of edge: we could (should?) remove old-to-promoted edges from the remembered set, but we must add promoted-to-survivor edges. The field tracer has to have specific logic that applies to promoted objects during a minor GC to make the necessary remembered set mutations.
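
    A minimal sketch of that field-tracer rule (illustrative C with assumed names, not Whippet’s actual code): when a field of a freshly promoted object is traced during a minor GC, the edge is recorded if its target stayed behind in the nursery.

        #include <stdbool.h>
        #include <stddef.h>

        struct gc_obj { bool in_old_gen; };

        struct remembered_set {
          struct gc_obj **edges[1024];  /* locations of old-to-new edges */
          size_t count;
        };

        static void remembered_set_add(struct remembered_set *rs, struct gc_obj **loc) {
          rs->edges[rs->count++] = loc;
        }

        /* Called for each field of an object that was promoted in this minor GC. */
        static void trace_promoted_field(struct remembered_set *rs,
                                         struct gc_obj **field) {
          struct gc_obj *target = *field;
          /* The source is now old; if the target survived in the nursery (still
             young), this promoted-to-survivor edge must join the remembered set
             so the next minor GC treats it as a root. */
          if (target && !target->in_old_gen)
            remembered_set_add(rs, field);
        }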

    other object kinds

    In Whippet, “small” objects (less than 8 kilobytes or so) are allocated into block-structured spaces, and large objects have their own space which is managed differently. Notably, large objects are never moved. There is generational support, but it is currently like the sticky-mark-bit approach: any survivor is promoted. Probably we should change this at some point, at least for collectors that don’t eagerly promote all objects during minor collections.

    finalizers?

    Finalizers keep their target objects alive until the finalizer is run, which effectively makes each finalizer part of the root set. Ideally we would have a separate finalizer table for young and old objects, but currently Whippet just has one table, which we always fully traverse at the end of a collection. This effectively adds the finalizer table to the remembered set. This is too much work—there is no need to visit finalizers for old objects in a minor GC—but it’s not incorrect.

    ephemerons

    So what about ephemerons? Recall that an ephemeron is an object E that associates a key K with a value V, in which there is an edge from E to V if and only if both E and K are live. Implementing this conjunction is surprisingly gnarly; you really want to discover live ephemerons while tracing rather than maintaining a global registry as we do with finalizers. Whippet’s algorithm is derived from what SpiderMonkey does, but extended to be parallel.

    The question is, how do we implement ephemeron-reachability while also preserving the invariant that all old-to-new edges are part of the remembered set?

    For Whippet, the answer turns out to be simple: an ephemeron E is never older than its K or V, by construction, and we never promote E without also promoting (if necessary) K and V. (Ensuring this second property is somewhat delicate.) In this way you never have an old E and a young K or V, so no edge from an ephemeron need ever go into the remembered set. We still need to run the ephemeron tracing algorithm for any ephemerons discovered as part of a minor collection, but we don’t need to fiddle with the remembered set. Phew!
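
    The same invariant can be sketched in a few lines of illustrative C (assumed names again, not Whippet’s real API): when a minor GC promotes an ephemeron, it first promotes the key and value, so no edge from an old ephemeron can point into the nursery.

        #include <stdbool.h>

        struct gc_obj { bool in_old_gen; };

        struct ephemeron {
          struct gc_obj header;
          struct gc_obj *key;
          struct gc_obj *value;
        };

        static void promote(struct gc_obj *obj) { obj->in_old_gen = true; }

        /* Promote key and value before the ephemeron itself, keeping E no older
           than K or V, so none of its edges ever enter the remembered set. */
        static void promote_ephemeron(struct ephemeron *e) {
          if (!e->key->in_old_gen)
            promote(e->key);
          if (!e->value->in_old_gen)
            promote(e->value);
          promote(&e->header);
        }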

    conclusion

    As long as all promoted objects are older than all survivors, and all ephemerons are younger than the objects referred to by their key and value edges, Whippet’s parallel ephemeron tracing algorithm will efficiently and correctly trace ephemeron edges in a generational collector. This applies trivially for a sticky-mark-bit collector, which always promotes and has no survivors, but it also holds for a copying nursery that allows for survivors after a minor GC, as long as all survivors are younger than all promoted objects.

    Until next time, happy hacking in 2025!


      This post is public

      wingolog.org/archives/2025/01/09/ephemerons-vs-generations-in-whippet


      Tobias Bernard: Re-Decentralizing Development

      news.movim.eu / PlanetGnome • 7 January • 8 minutes

    As I’ve already announced internally, I’m stepping down from putting together an STF application for this year. For inquiries about the 2025 application, please contact Adrian Vovk going forward. This is independent of the 2024 STF project, which we’re still in the process of wrapping up. I’m sticking around for that until the end.

    The topic of this blog post is not the only reason I’m stepping down, but it is an important one, and I thought some of this is general enough to be worth discussing more widely.

    In the context of the Foundation issues we’ve had throughout the STF project I’ve been thinking a lot about what structures are best suited for collectively funding and organizing development, especially in the context of a huge self-organized project like GNOME. There are a lot of aspects to this, e.g. I hadn’t quite realized just how important having a motivated, talented organizer like Sonny is to successfully delivering a complex project. But the specific area I want to talk about here is how power and responsibilities should be split up between different entities across the community.

    This is my personal view, based on having worked on GNOME in a variety of structures over the years (volunteer, employee, freelancer, for- and non-profit, worked under grants, organized grants, etc.). I don’t have any easy answers, but I wanted to share how my perspective has shifted as a result of the events of the past year, which I hope will contribute to the wider ongoing discussion around this.

    A Short History

    Unlike many other user-facing free software projects, GNOME had strong corporate involvement since early on in its history, with many different product companies and consultancies paying people to work on various parts of it. The project grew up during the Dotcom bubble (younger readers may not remember this, but “Linux” was the “AI” of that era), and many of our structures date back to this time.

    The Foundation was created in those early days as a neutral organization to hold resources that should not belong to any one of the companies involved (such as the trademark, donation money, or the development infrastructure). A lot of checks and balances were put in place to avoid one group taking over the Foundation or the Foundation itself overshadowing other players. For example, hiring developers via the Foundation was an explicit non-goal, advisory board companies do not get a say in the project’s technical direction, and there is a limit to how many employees of any single company can be on the board. See this episode of Emmanuele Bassi’s History of GNOME Podcast for more details.

    The Dotcom bubble burst and some of those early companies died, but there continued to be significant corporate investment, e.g. from enterprise desktop companies like Sun, and then later via companies from the mobile space during the hype cycles around netbooks, phones, and tablets around 2010.

    Fast forward to today, this situation has changed drastically. In 2025 the desktop is not a growth area for anyone in the industry, and it hasn’t been in over a decade. Ever since the demise of Nokia and the consolidation of the iOS/Android duopoly, most of the money in the ecosystem has been in server and embedded use cases.

    Today, corporate involvement in GNOME is limited to a handful of companies with an enterprise desktop business (e.g. Red Hat), and consultancies who mostly do low-level embedded work (e.g. Igalia with browsers, or Centricular with GStreamer).

    Retaining the Next Generation

    While the current level of corporate investment, in combination with volunteer work from the wider community, has been enough to keep the project afloat in recent years, we have a pretty glaring issue with our new contributor pipeline: There are very few job openings in the field.

    As a result, many people end up dropping out or reducing their involvement after they finish university. Others find jobs on adjacent technologies where they occasionally get work time for GNOME-related stuff, and put in a lot of volunteer time on top. Others still are freelancing, applying for grants, or trying to make Patreon work.

    While I don’t have firm numbers, my sense is that the number of people in precarious situations like these has been going up since I got involved around 2015. The time when you could just get a job at Red Hat was already long gone when I joined, but for a while e.g. Endless and Purism had quite a few people doing interesting stuff.

    In a sense this lack of corporate interest is not unusual for user-facing free software — maybe we’re just reverting to the mean. Public infrastructure simply isn’t commercially profitable. Many other projects, particularly ones without corporate use cases (e.g. Tor) have always been in this situation, and thus have always relied on grants and donations to fund their development. Others have newly moved in this direction in recent years with some success (e.g. Thunderbird).

    Foundational Issues

    I think what many of us in the community have wanted to see for a while is exactly what Tor, Thunderbird, Blender et al. are doing: Start doing development at the Foundation, raise money for it via donations and grants, and grow the organization to pick up the slack from shrinking corporate involvement.

    I know why this idea is so seductive to many of us, and has been for years. It’s in fact so popular, I found four board candidacies (1, 2, 3, 4) from the last few election cycles proposing something like it.

    On paper, the Foundation seems perfect as the legal structure for this kind of initiative. It already exists, it has the same name as the wider project, and it already has the infrastructure to collect donations. Clearly all we need to do is to raise a bit more money, and then use that money to hire community members. Easy!

    However, after having been in the trenches trying to make it work over the past year, I’m now convinced it’s a bad idea, for two reasons: Short/medium term the current structure doesn’t have the necessary capacity, and longer term there are too many risks involved if something goes wrong.

    Lack of Capacity

    Simply put, what we’ve experienced in the context of the STF project (and a few other initiatives) over the past year is that the Foundation in its current form is not set up to handle projects that require significant coordination or operational capacity. There are many reasons for this — historical, personal, structural — but my takeaway after this year is that there need to be major changes across many of the Foundation’s structures before this kind of thing is feasible.

    Perhaps given enough time the Foundation could become an organization that can fund and coordinate significant development, but there is a second, more important reason why I no longer think that’s the right path.

    Structural Risk

    One advantage of GNOME’s current decentralized structure is its resilience. Having a small Foundation at the center which only handles infrastructure, and several independent companies and consultancies around it doing development means different parts are insulated from each other if something goes wrong.

    If there are issues inside e.g. Codethink or Igalia, the maximum damage is limited and the wider project is never at risk. People don’t have to leave the community if they want to quit their current job, ideally they can just move to another company and continue most of their upstream work.

    The same is not true of projects with a monolithic entity at the center. If there’s a conflict in that central monolith it can spiral ever wider if it isn’t resolved, affecting more and more structures and people, and doing drastically more damage.

    This is a lesson we’ve unfortunately had to learn the hard way when, out of the blue, Sonny was banned last year. I’m not going to talk about the ban here (it’s for Sonny to talk about if/when he feels like it), but suffice to say that it would not have happened had we not done the STF project under the Foundation, and many community members including myself do not agree with the ban.

    What followed was, for some of us, maybe the most stressful 6 months of our lives. Since last summer we’ve had to try and keep the STF project running without its main architect, while also trying to get the ban situation fixed, as well as dealing with a number of other issues caused by the ban. Thousands of volunteer hours were probably burned on this, and the issue is not even resolved yet. Who knows how many more will be burned before it’s over. I’m profoundly sad thinking about the bugs we could have fixed, the patches we could have reviewed, and the features we could have designed in those hours instead.

    This is, to me, the most important takeaway and the reason why I no longer believe the Foundation should be the structure we use to organize community development. Even if all the current operational issues are fixed, the risk of something like this happening is too great, the potential damage too severe.

    What are the Alternatives?

    If using the Foundation is too risky, what other options are there for organizing development collectively?

    I’ve talked to people in our community who feel that NGOs are fundamentally a bad structure for development projects, and that people should start more new consultancies instead. I don’t fully buy that argument, but it’s also not without merit in my experience. Regardless though, I think everyone has also seen at one point or another how dysfunctional corporations can be. My feeling is it probably also heavily depends on the people and culture, rather than just the specific legal structure.

    I don’t have a perfect solution here, and I’m not sure there is one. Maybe the future is a bunch of new consulting co-ops doing a mix of grants and client work. Maybe it’s new non-profits focused on development. Maybe we need to get good at Patreon. Or maybe we all just have to get a part time job doing something else.

    Time will tell how this all shakes out, but the realization I’ve come to is that the current decentralized structure of the project has a lot of advantages. We should preserve this and make use of it, rather than trying to centralize everything on the Foundation.


      This post is public

      blogs.gnome.org/tbernard/2025/01/07/re-decentralizing/


      Luis Villa: Reading in 2024—tools

      news.movim.eu / PlanetGnome • 3 January • 3 minutes

    I was going to do a single post on my reading in 2024, but realized it probably makes more sense as a two-parter: the things I used to read (this post), and the things I actually read (at least one post on books, maybe a second on news and feeds).

    Feeds

    I still read a lot of feeds (and newsletters). Mid-way through the year, I switched to Reader by Readwise for RSS and newsletters, after a decade or so with Feedbin. It’s everything I have always wanted from an RSS tool—high polish, multi-platform, and separates inbound stuff to skim from stuff to be gone into at more depth. Expensive, but totally worth it if you’re an addict of my sort.

    Surprise bonuses: has massively reduced my pile of open tabs, and is a nice ebook reader—have read several DRM-free books from Verso and Standard Ebooks in it.

    One minor gripe (that I’ve also had with just about every other feed reader/read later tool): I wish it were more possible to get content out with tools. Currently I use Buffer to get things out of Reader to social, but I’d love to do that in a more sophisticated and automated way (eg by pulling everything in a tag saved in the previous week, and massaging it to stuff into a draft blog post).

    E-ink reading

    I wrote early in the year about my Boox Page, and then promptly put a knee through the screen. I had liked it, but ultimately didn’t love it. In particular, the level of hackery in their mods to Android really bothered me—the thing did not Just Work, which was the last thing I wanted in a distraction-free reading device.

    So I got a Daylight DC1. What can I say, I’m a sucker for new e-ink and e-ink-like devices. I love it but it has a lot of warning signs so I’m not sure I can recommend it to anyone yet.

    Parts I love:

    • Screen is delightfully warm. Doesn’t quite try to be paper (can’t) but is very easy on the eye, both in broad daylight and at night. (Their marketing material, for the screen, is really quite accurate.)
    • Screen texture is great under the finger when reading; feels less like a screen and more like paper. (Can’t speak to the pen; really am using this as a consumption device, with my iPad more for creation. Might change that in the new year, not sure yet.)
    • Battery life is great.
    • Android is Just Android (with a very tasteful launcher as the only significant modification), so you really can run things you want (especially if their output is mostly text). I’ve got mine loaded with pretty much just readers: Kindle, Libby, Kobo, Readwise Reader; all work great.
    • I find myself weirdly in love with the almost pillow-like “case”. It’s silly and inefficient and maybe that’s OK?

    Parts I worry about:

    • Physical build quality is a little spotty—most notably the gap between screen and case is very uneven. Hasn’t affected my day to day use, but makes me wonder about how long it’ll last.
    • The OS is… shifty? Reported itself to the Android app store as a Pixel 5(?), and the launcher is unpaid-for freeware (got a nice little “please give us $5!” note from it, which screams corner-cutting.) Again, works fine, just a red flag in terms of attention to detail and corner-cutting.
    • I found out after I bought that the CEO is a not-so-reformed cryptobro, an organizational red flag.
    • They’re talking a lot about AI for their next OS “release”. That implies a variety of possible dooms: either a small team gets overwhelmed by the hard work of AI, or a large team has lots of VC demands. Neither is good.

    Audio

    Switched from Audible (I know) to Apple Books (I know, again) because it works so much more reliably on my Apple Watch, and running with the Watch is where I consume most of my audiobooks. Banged through a lot of history audiobooks this year as a result.

    Paper

    I still love paper too. 2025 goal: build a better bookshelf. We’ll see how that goes…


      This post is public

      lu.is/2025/01/reading-in-2024-tools/


      Matthew Garrett: The GPU, not the TPM, is the root of hardware DRM

      news.movim.eu / PlanetGnome • 2 January • 5 minutes

    As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
    Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control (from here).
    This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.

    I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.

    What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.

    Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.

    The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.

    The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.

    In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).

    Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow . I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.

    The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.


      Sophie Herold: This was 2024

      news.movim.eu / PlanetGnome • 31 December • 1 minute

    In non-chronological order

      • Earned money for the first time within many many years.
      • Wrote C bindings and GObject introspection annotations for a Rust library for the first time.
      • Wrote 40 weekly updates on Patreon / Ko-Fi.
      • Got formally diagnosed with Autism.
      • Implemented some basic image editing in Loupe.
      • Bought new woodworking tools.
      • Got bold and worked on a few lines of security critical C-code.
      • Confirmed with my doctors that the suspected diagnosis changed from fibromyalgia to ME/CFS.
      • Dove into BuildStream to add more complete Rust support.
      • Released a collection of Rust crates that allow extraction, recomposition, and editing of image data including Exif or XMP for several image formats.
      • Created a website that lists all GNOME components like libraries that are not apps.
      • Called a Taxi for the first time in my life.
      • Wrote Rust bindings for the C bindings of a Rust crate.
      • Stopped occupational therapy and started another psychotherapy.
      • Got interviewed by c’t Open Source Spotlight (German).
      • Started ordering groceries online to have more energy for other things.
      • Was proud (and still am) to be part of a community with such a strong pride month statement.
      • Did a bunch of reviews for potential new GNOME Core apps.
      • Expanded the crowdfunding of my work to Patreon, Ko-Fi, GitHub, and PayPal.
      • Built a coat rack.

    A huge thanks to everyone who supported my work!

    Chisels in a box, carpenter’s square, and a hand plane lying on a table


      This post is public

      blogs.gnome.org/sophieh/2024/12/31/this-was-2024/


      Michael Meeks: 2024-12-27 Friday

      news.movim.eu / PlanetGnome • 27 December

    • Slept a chunk of the day, caught up with various surreal events amplified by fever dreams. Watched Little Women in the evening.

      This post is public

      meeksfamily.uk/~michael/blog/2024-12-27.html