
      Thibault Martin: From VS Code to Helix

      news.movim.eu / PlanetGnome • 29 October • 13 minutes

    I created the website you're reading with VS Code. Behind the scenes I use Astro, a static site generator that gets out of the way while providing nice conveniences.

    Using VS Code was a no-brainer: everyone in the industry seems to at least be familiar with it, every project can be opened with it, and most projects can get enhancements and syntactic helpers in a few clicks. In short: VS Code is free, easy to use, and widely adopted.

    A Rustacean colleague kept singing Helix's praises. I dismissed it because he's much smarter than I am, and I only ever use vim when I need to fiddle with files on a server. I like it when things "Just Work", and I didn't want to bother learning how to use Helix nor how to configure it.

    Today it has become my daily driver. Why did I change my mind? What was preventing me from using it before? And how difficult was it to get there?

    Automation is a double-edged sword

    Automation and technology make work easier; that is why we produce technology in the first place. But it also means you grow more dependent on the tech you use. If the tech is produced transparently by an international team, or by a team you trust, it's fine. But if it's produced by a single large entity that can screw you over, it's dangerous.

    VS Code might be open source, but in practice it's produced by Microsoft. Microsoft has a problematic relationship with consent and is shoving AI products down everyone's throat. I'd rather use tools that respect me and my decisions, and I'd rather not get my tools produced by already monopolistic organizations.

    Microsoft is also based in the USA, and the political climate over there makes me want to depend as little as possible on American tools. I know that's a long, uphill battle, but we have to start somewhere.

    I'm not advocating for a ban against American tech in general, but for more balance in our supply chain. I'm also not advocating for European tech either: I'd rather get open source tools from international teams competing in a race to the top, rather than from teams in a single jurisdiction. What is happening in the USA could happen in Europe too.

    Why I feared using Helix

    I've never found vim particularly pleasant to use, but it's everywhere, so I figured I might just get used to it. One of the things I never liked about vim, though, is the number of moving pieces. By default, vim and neovim are very bare bones. They can be extended and completely modified with plugins, but I really don't like the idea of having extremely customized tools.

    I'd rather have the same editor as everyone else, with a few knobs for minor preferences. I am subject to choice paralysis, so making me configure an editor before I've even started editing is the best way to tank my productivity.

    When my colleague told me about Helix, two things struck me as improvements over vim.

    1. Helix's philosophy is that everything should work out of the box. There are a few configs and themes, but everything should work similarly from one Helix to another. All the language-specific logic is handled in Language Servers that implement the Language Server Protocol standard.
    2. In Helix, you first select text, and then you perform operations on it. So you can visually tell what is going to be changed before you apply the change. It fits my mental model much better.

    But there are major drawbacks to Helix too:

    1. After decades of vim, I was scared to re-learn everything. In practice this wasn't a problem at all because of the very visual way Helix works.
    2. VS Code "Just Works", and Helix sounded like more work than the few clicks from VS Code's extension store. This is true, but not as bad as I had anticipated.

    After a single week of usage, Helix was already very comfortable to navigate. After a few weeks, most of the wrinkles had been ironed out and I now use it as my primary editor. So how did I overcome those fears?

    What Helped

    Just Do It

    I tried Helix. It can sound silly, but the very first step to getting into Helix was not to overthink it. I just installed it on my Mac with brew install helix and gave it a go. I was not too familiar with it, so I looked up the official documentation and noticed there was a tutorial.

    This tutorial alone is what convinced me to try harder. It's an interactive, well-written way to learn how to move around and perform basic operations in Helix. I quickly learned how to move around, select things, and surround them with braces or parentheses. I could see what I was about to do before doing it. This was an epiphany. Helix just worked the way I wanted.
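
    To give a flavor of that select-then-act model, here is the kind of sequence the tutorial teaches, using Helix's default match-mode keys:

    miw    # select inside the word under the cursor
    ms(    # surround the current selection with ( and )

    The selection is highlighted before ms( fires, so you can see exactly what is about to be wrapped.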

    Better: after a few minutes of learning, I could get things done faster than in VS Code. Being a lazy person, I had never bothered looking up VS Code shortcuts. Because the learning curve for Helix is slightly steeper, you have to learn the shortcuts that make moving around feel so easy.

    Not only did I quickly get used to Helix key bindings: my vim muscle-memory didn't get in the way at all!

    Better docs

    The built-in tutorial is a very pragmatic way to get started. You get results fast, you learn hands-on, and it's not that long. But if you want to go further, you have to look for docs. Helix has official docs. They seem to be fairly complete, but they're also impenetrable as a new user: they focus on what the editor supports, not on what I will want to do with it.

    After a bit of browsing online, I stumbled upon this third-party documentation website. The domain didn't inspire me a lot of confidence, but the docs are really good. They are clearly laid out, use-case oriented, and they make the most of Astro Starlight to provide a great reading experience. The author tried to upstream these docs, but that won't happen. It looks like they are upstreaming their docs to the current website instead. I hope this will improve the quality of the upstream docs eventually.

    After learning the basics and finding my way through the docs, it was time to ensure Helix was set up to help me where I needed it most.

    Getting the most out of Markdown and Astro in Helix

    In my free time, I mostly use my editor for three things:

    1. Write notes in markdown
    2. Tweak my website with Astro
    3. Edit yaml to faff around with my Kubernetes cluster

    Helix is a "stupid" text editor. It doesn't know much about what you're typing. But it supports Language Servers that implement the Language Server Protocol. Language Servers understand the document you're editing. They explain to Helix what you're editing, whether you're in a TypeScript function, typing a markdown link, etc. With that information, Helix and the Language Server can provide code completion hints, errors & warnings, and easier navigation in your code.

    In addition to Language Servers, Helix also supports plugging in code formatters. Those are pieces of software that read the document and ensure that it is consistently formatted: that all indentation uses spaces and not tabs, that there is a consistent number of spaces per indent level, that brackets are on the same line as the function, etc. In short: they make the code pretty.

    Markdown

    Markdown is not really a programming language, so it might seem surprising to configure a Language Server for it. But as we said earlier, Language Servers can provide code completion, which is useful when creating links, for example. Marksman does exactly that!

    Since Helix is pre-configured to use marksman for markdown files, we only need to install marksman and make sure it's in our PATH. Installing it with homebrew is enough.

    $ brew install marksman
    

    We can check that Helix is happy with it using the following command:

    $ hx --health markdown
    Configured language servers:
      ✓ marksman: /opt/homebrew/bin/marksman
    Configured debug adapter: None
    Configured formatter: None
    Tree-sitter parser: ✓
    Highlight queries: ✓
    Textobject queries: ✘
    Indent queries: ✘
    

    But Language Servers can also help Helix display errors and warnings, and "code suggestions" to help fix the issues. It means Language Servers are a perfect fit for... grammar checkers! Several grammar checkers exist. The most notable are:

    • LTEX+, the Language Server used by LanguageTool. It supports several languages but is quite resource hungry.
    • Harper, a grammar checker Language Server developed by Automattic, the people behind WordPress, Tumblr, WooCommerce, Beeper and more. Harper only supports English and its variants, but they intend to support more languages in the future.

    I mostly write in English and want to keep a minimalistic setup. Automattic is well funded, and I'm confident they will keep working on Harper to improve it. Since grammar checker Language Servers can easily be swapped later, I've decided to go with Harper for now.

    To install it, homebrew does the job as always:

    $ brew install harper
    

    Then I edited my ~/.config/helix/languages.toml to add Harper as a secondary Language Server in addition to marksman:

    [language-server.harper-ls]
    command = "harper-ls"
    args = ["--stdio"]
    
    
    [[language]]
    name = "markdown"
    language-servers = ["marksman", "harper-ls"]
    

    Finally, I can add a markdown linter to ensure my markdown is formatted properly. Several options exist, and markdownlint is one of the most popular. My colleagues recommended the new kid on the block, a Blazing Fast equivalent: rumdl.

    Installing rumdl was pretty simple on my Mac. I only had to add the maintainer's repository and install rumdl from it.

    $ brew tap rvben/rumdl
    $ brew install rumdl
    

    After that, I added a new language server to my ~/.config/helix/languages.toml and added it to the list of language servers to use for the markdown language.

    [language-server.rumdl]
    command = "rumdl"
    args = ["server"]
    
    [...]
    
    
    [[language]]
    name = "markdown"
    language-servers = ["marksman", "harper-ls", "rumdl"]
    soft-wrap.enable = true
    text-width = 80
    soft-wrap.wrap-at-text-width = true
    

    Since my website already contained a .markdownlint.yaml, I could convert it to the rumdl format with

    $ rumdl import .markdownlint.yaml
    Converted markdownlint config from '.markdownlint.yaml' to '.rumdl.toml'
    You can now use: rumdl check --config .rumdl.toml .
    

    You might have noticed that I've added a little quality of life improvement: soft-wrap at 80 characters.
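
    The same behavior can also be configured editor-wide instead of per-language. A minimal sketch of what that would look like in ~/.config/helix/config.toml, assuming Helix's standard editor options:

    [editor]
    text-width = 80
    
    [editor.soft-wrap]
    enable = true
    wrap-at-text-width = true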

    Now if you add this to your own config.toml, you will notice that the text is completely left-aligned. This is not a problem on small screens, but it rapidly gets annoying on wider ones.

    Helix doesn't support centering the editor. There is a PR tackling the problem, but it has been stale for most of the year. The maintainers are overwhelmed by the number of PRs making their way to them, and it's not clear if or when this PR will be merged.

    In the meantime, a workaround exists, with a few caveats. It is possible to add spaces to the left gutter (the column with the line numbers) so it pushes the content towards the center of the screen.

    To figure out how many spaces are needed, you need to get your terminal width with stty

    $ stty size
    82 243
    

    In my case, when in full screen, my terminal is 243 characters wide. I need to subtract the content column width from it, and divide everything by 2 to get the space needed on each side. In my case, for a 243 character wide terminal with a text width of 80 characters:

    (243 - 80) / 2 = 81
    

    As is, I would add 81 spaces to my left gutter to push the rest of the gutter and the content to the right. But the gutter itself is already a few characters wide, and subtracting those from the total leaves me with 76 characters to add.

    I can open my ~/.config/helix/config.toml to add a new key binding that will automatically add or remove those spaces from the left gutter when needed, to shift the content towards the center.

    [keys.normal.space.t]
    z = ":toggle gutters.line-numbers.min-width 76 3"
    

    Now, when in normal mode, pressing Space, then t, then z will add or remove the spaces. Of course, this workaround only works when the terminal runs in full screen mode.

    Astro

    Astro works like a charm in VS Code. The team behind it provides a Language Server and a TypeScript plugin to enable code completion and syntax highlighting.

    I only had to install those globally with

    $ pnpm install -g @astrojs/language-server typescript @astrojs/ts-plugin
    

    Now we need to add a few lines to our ~/.config/helix/languages.toml to tell it how to use the language server:

    [language-server.astro-ls]
    command = "astro-ls"
    args = ["--stdio"]
    config = { typescript = { tsdk = "/Users/thibaultmartin/Library/pnpm/global/5/node_modules/typescript/lib" }}
    
    [[language]]
    name = "astro"
    scope = "source.astro"
    injection-regex = "astro"
    file-types = ["astro"]
    language-servers = ["astro-ls"]
    

    We can check that the Astro Language Server can be used by Helix with

    $ hx --health astro
    Configured language servers:
      ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
    Configured debug adapter: None
    Configured formatter: None
    Tree-sitter parser: ✓
    Highlight queries: ✓
    Textobject queries: ✘
    Indent queries: ✘
    

    I also like to have a formatter automatically make my code consistent and pretty for me when I save a file. One of the most popular code formatters out there is Prettier. I've decided to go with the fast and easy formatter dprint instead.

    I installed it with

    $ brew install dprint
    

    Then in the projects I want to use dprint in, I do

    $ dprint init
    

    I might edit the dprint.json file to my liking. Finally, I configure Helix to use dprint globally for all Astro projects by appending a few lines to my ~/.config/helix/languages.toml.

    [[language]]
    name = "astro"
    scope = "source.astro"
    injection-regex = "astro"
    file-types = ["astro"]
    language-servers = ["astro-ls"]
    formatter = { command = "dprint", args = ["fmt", "--stdin", "astro"]}
    auto-format = true
    

    One final check, and I can see that Helix is ready to use the formatter as well

    $ hx --health astro
    Configured language servers:
      ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
    Configured debug adapter: None
    Configured formatter:
      ✓ /opt/homebrew/bin/dprint
    Tree-sitter parser: ✓
    Highlight queries: ✓
    Textobject queries: ✘
    Indent queries: ✘
    

    YAML

    For YAML, it's simple and straightforward: Helix is preconfigured to use yaml-language-server as soon as it's in the PATH. I just need to install it with

    $ brew install yaml-language-server
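
    As with markdown and Astro, hx --health can confirm that the server is picked up. On a homebrew install, I would expect the check to show something like this (a sketch, not captured output):

    $ hx --health yaml
    Configured language servers:
      ✓ yaml-language-server: /opt/homebrew/bin/yaml-language-server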
    

    Is it worth it?

    Helix really grew on me. I find it particularly easy and fast to edit code with. It takes a tiny bit more work to get language support than it does in VS Code, but it's nothing insurmountable. There is a slightly steeper learning curve than for VS Code, but I consider that a good thing: it forced me to learn how to move around and edit efficiently, because there is no way to do it inefficiently. Helix remains intuitive once you've learned the basics.

    I am a GNOME enthusiast, and I adhere to the same principles: I like it when my apps work out of the box, and when I have little to do to configure them. This is a strong stance that often attracts vocal opposition. I like products that follow those principles better than those that don't.

    With that said, Helix sometimes feels like it is maintained by one or two people who have a strong vision, but who struggle to onboard more maintainers. As of writing, Helix has more than 350 PRs open. Quite a few bring interesting features, but the maintainers don't have enough time to review them.

    Those 350 PRs mean there is a lot of energy and goodwill around the project. People are willing to contribute. Right now, all that energy is gated, resulting in frustration both from the contributors, who feel like they're working into the void, and from the maintainers, who feel like they're at the receiving end of a fire hose.

    A solution to make everyone happier without sacrificing the quality of the project would be to work on a Contributor Ladder. CHAOSS' Dr Dawn Foster published a blog post about it, listing interesting resources at the end.


      Felipe Borges: Google Summer of Code Mentor Summit 2025

      news.movim.eu / PlanetGnome • 29 October • 4 minutes

    Last week, I took a lovely train ride to Munich, Germany, to represent GNOME at the Google Summer of Code Mentor Summit 2025. This was my first time attending the event, as previous editions were held in the US, which was always a bit too hard to travel to.

    This was also my first time at an event with the “unconference” format, which I found to be quite interesting and engaging. I was able to contribute to a few discussions and hear a variety of different perspectives from other contributors. It seems that when done well, this format can lead to much richer conversations than our usual, pre-scheduled “one-to-many” talks.

    The event was attended by a variety of free and open-source communities from all over the world. These groups are building open solutions for everything from cloud software and climate applications to programming languages, academia, and of course, AI. This diversity was a great opportunity to learn about the challenges other software communities face and their unique circumstances.

    There was a nice discussion with the people behind MusicBrainz. I was happy and surprised to find out that they are the largest database of music metadata in the world, and that pretty much all popular music streaming services, record labels, and similar groups consume their data in some way.

    Funding the project is a constant challenge for them, given that they offer a public API that everyone can consume. They’ve managed over the years by making direct contact with these large companies, developing relationships with the decision-makers inside, and even sometimes publicly highlighting how these highly profitable businesses rely on FOSS projects that struggle with funding. Interesting stories. 🙂

    There was a large discussion about “AI slop,” particularly in GSoC applications. This is a struggle we’ve faced in GNOME as well, with a flood of AI-generated proposals. The Google Open Source team was firm that it’s up to each organization to set its own criteria for accepting interns, including rules for contributions. Many communities shared their experiences, and the common solution seems to be reducing the importance of the GSoC proposal document. Instead, organizations are focusing on requiring a history of small, “first-timer” contributions and conducting short video interviews to discuss that work. This gives us more confidence that the applicant truly understands what they are doing.

    GSoC is not a “pay-for-feature” initiative, neither for Google nor for GNOME. We see this as an opportunity to empower newcomers to become long-term GNOME contributors. Funding free and open-source work is hard, especially for people in less privileged places of the world, and initiatives like GSoC and Outreachy allow these people to participate and find career opportunities in our spaces. We have a large number of GSoC interns who have become long-term maintainers and contributors to GNOME. Many others have joined different industries, bringing their GNOME expertise and tech with them. It’s been a net-positive experience for Google, GNOME, and the contributors over the past decades.

    Our very own Karen Sandler was there and organized a discussion around diversity. This topic is as relevant as ever, especially given recent challenges to these initiatives in the US. We discussed ideas on how to make communities more diverse and better support the existing diverse members of our communities.

    It was quite inspiring. Communities from various other projects shared their stories and results, and to me, it just confirmed my perception: while diverse communities are hard to build, they can achieve much more than non-diverse ones in the long run. It is always worth investing in people.

    As always, the “hallway track” was incredibly fruitful. I had great chats with Carl Schwan (from KDE) about event organizing (comparing notes on GUADEC, Akademy, and LAS) and cross-community collaboration around technologies like Flathub and Flatpak. I also caught up with Claudio Wunder, who did engagement work for GNOME in the past and has always been a great supporter of our project. His insights into the dynamics of other non-profit foundations sparked some interesting discussions about the challenges we face in our foundation.

    I also had a great conversation with Till Kamppeter (from OpenPrinting) about the future of printing in xdg-desktop-portals. We focused on agreeing on a direction for new dialog features, like print preview and other custom app-embedded settings. This was the third time I’ve run into Till at a conference this year! 🙂

    I met plenty of new people and had various off-topic chats as well. The event was only two days long, but thanks to the unconference format, I ended up engaging far more with participants than I usually do in that amount of time.

    I also had the chance to give a lightning talk about GNOME’s long history with Google Summer of Code and how the program has helped us build our community. It was a great opportunity to also share our wider goals, like building a desktop for everyone and our focus on being a people-centric community.

    Finally, I’d like to thank Google for sponsoring my trip, and for providing the space for us all to talk about our communities and learn from so many others.


      Jakub Steiner: USB MIDI Controllers on the M8

      news.movim.eu / PlanetGnome • 28 October • 2 minutes

    The M8 has extensive USB audio and MIDI capabilities, but it cannot act as a USB MIDI host. So it can control other devices through USB MIDI, but you cannot send to it over USB.

    Control Surface & Pots for M8

    Controlling the M8 via USB devices has to be done through the old TRS (A) jacks. There are two devices that can aid in that. I’ve used the RK06, which is very featureful, but comes in a very clumsy plastic case with a USB micro cable that has a splitter for the HOST part and USB power in. It also sometimes doesn’t reset properly when multiple USB devices are attached through a hub. The last bit is why I even bother with this setup.

    The Dirtywave M8 has amazing support for the Novation Launchpad Pro MK3. The majority of people hook it up directly to the M8 using the TRS MIDI cables. The Launchpad lacks any sort of pots or encoders though, thus the need to fuss with USB dongles. What you need is to use the Launchpad Pro as a USB controller and shun the reliable MIDI connection. The RK06 allows combining multiple USB devices attached through an unpowered USB hub. Because I am flabbergasted by how I did things, here’s a schema that works.

    Retrokits RK06 schema

    If it doesn’t work, unplug the RK06 and turn LPPro off and on in the M8. I hate this setup but it is the only compact one that works (after some fiddling that you absolutely hate when doing a gig).

    Intech Knot

    The Hungarians behind the Grid USB controllers (with first class Linux support) have a USB>MIDI device called Knot. It has one great feature: a switch between TRS A/B for the non-standard devices. It is way less fiddly than the RK06, uses a nice aluminium housing, and is sturdier. However, it doesn’t seem to work with the Launchpad Pro via USB, and it seems to be completely confused by a USB hub, so it’s not useful for my use case of multiple USB controllers.

    Clean setup with Knot&Grid

    Non-compact but Reliable

    Novation came out with the Launch Control XL, which sadly replaced the pots of the old one with encoders (absolute vs relative movement), but added MIDI in/out/through, with a MIDI mixer even. That way you can avoid USB altogether and get a reliable setup with control surfaces, encoders, and sliders.

    Launchpad Pro and Intech PO16 via USB handled by RK06

    One day someone will come up with compact MIDI-capable pots to play along with the Launchpad Pro ;) This post has been brought to you by an old man who forgets things.


      Colin Walters: Thoughts on agentic AI coding as of Oct 2025

      news.movim.eu / PlanetGnome • 27 October • 4 minutes

    Sandboxed, reviewed parallel agents make sense

    For coding and software engineering, I’ve used and experimented with various frontends (FOSS and proprietary) to multiple foundation models (mostly proprietary) trying to keep up with the state of the art. I’ve come to strongly believe in a few things:

    • Agentic AI for coding needs strongly sandboxed, reproducible environments
    • It makes sense to run multiple agents at once
    • AI output definitely needs human review

    Why human review is necessary

    Prompt injection is a serious risk at scale

    All AI is at risk of prompt injection to some degree, but it’s particularly dangerous with agentic coding. All the state of the art today knows how to do is mitigate it at best. I don’t think it’s a reason to avoid AI, but it’s one of the top reasons to use AI thoughtfully and carefully for products that have any level of criticality.

    OpenAI’s Codex documentation has a simple and good example of this.

    Disabling the tests and claiming success

    Beyond that, I’ve experienced multiple times different models happily disabling the tests or adding a println!("TODO add testing here") and claiming success. At least this one is easier to mitigate with a second agent doing code review before it gets to human review.

    Sandboxing

    The “can I do X” prompting model that various interfaces default to is seriously flawed. Anthropic has a recent blog post on Claude Code changes in this area.

    My take here is that sandboxing is only part of the problem; the other part is ensuring the agent has a reproducible environment, and especially one that can be run in IaaS environments. I think devcontainers are a good fit.
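
    To make “reproducible environment” concrete, a devcontainer is just a checked-in definition of the image a task runs in. A minimal sketch follows; the image and post-create command are hypothetical placeholders, not something from the post:

    $ mkdir -p .devcontainer
    $ cat > .devcontainer/devcontainer.json <<'EOF'
    {
      "name": "agent-sandbox",
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      "postCreateCommand": "make setup"
    }
    EOF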

    I don’t agree with this statement from Anthropic’s blog:

    without the overhead of spinning up and managing a container.

    I don’t think this is overhead for most projects, and where it feels like it has overhead, we should be working to mitigate it.

    Running code as separate login users

    In fact, one thing I think we should popularize more on Linux is the concept of running multiple unprivileged login users. Personally for the tasks I work on, it often involves building containers or launching local VMs, and isolating that works really well with a full separate “user” identity. An experiment I did was basically useradd ai and running delegated tasks there instead. To log in I added %wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@ to /etc/sudoers.d/ai-login so that my regular human user could easily get a shell in the ai user’s context.
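
    Spelled out as commands, that experiment looks roughly like this (a sketch assembled from the description above; the --create-home flag is my assumption):

    # create the unprivileged user that delegated tasks run as
    $ sudo useradd --create-home ai
    # let wheel members get a shell in that user's context without a password
    $ echo '%wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@' | sudo tee /etc/sudoers.d/ai-login
    # jump into the ai user's session
    $ sudo machinectl shell ai@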

    I haven’t truly “operationalized” this one as juggling separate git repository clones was a bit painful, but I think I could automate it more. I’m interested in hearing from folks who are doing something similar.

    Parallel, IaaS-ready agents…with review

    I’m today often running 2-3 agents in parallel on different tasks (with different levels of success, but that’s its own story).

    It makes total sense to support delegating some of these agents to work off my local system and into cloud infrastructure.

    In looking around in this space, there’s quite a lot of stuff. One offering is Ona (formerly Gitpod). I gave it a quick try and I like where they’re going, but more on this below.

    Github Copilot can also do something similar to this, but what I don’t like about it is that it pushes a model where all of one’s interaction is in the PR. That’s going to be seriously noisy for some repositories, and interaction with LLMs can feel too “personal” sometimes to have permanently recorded.

    Credentials should be on demand and fine grained for tasks

    To me, a huge flaw with Ona, and one shared with other things like Langchain Open-SWE, is basically this:

    Sorry but: no way I’m clicking OK on that button. I need a strong and clearly delineated barrier between tooling/AI agents acting “as me” and my ability to approve and push code or even do basic things like edit existing pull requests.

    Github’s Copilot gets this more right because its bot runs as a distinct identity. I haven’t dug into what it’s authorized to do. I may play with it more, but I also want to use agents outside of Github, and I’m not a fan of deepening dependence on a single proprietary forge either.

    So I think a key thing agent frontends should do here is help grant fine-grained, ephemeral credentials for dedicated write access while an agent is working on a task. This “credential handling” should be a clearly distinct component. (This goes beyond just git forges, of course, to other issue trackers or data sources that may be in context.)
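
    One existing primitive that fits this shape (my example, not something the post names) is the GitHub App installation token: it expires after an hour and can be restricted to specific repositories and permissions at the moment it is minted, so a credential-handling component could request one per task:

    $ curl -s -X POST \
        -H "Authorization: Bearer $APP_JWT" \
        -H "Accept: application/vnd.github+json" \
        -d '{"repositories": ["my-project"], "permissions": {"contents": "write"}}' \
        https://api.github.com/app/installations/$INSTALLATION_ID/access_tokens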

    Conclusion

    There’s so much out there on this, I can barely keep track while trying to do my real job. I’m sure I’m not alone – but I’m interested in others’ thoughts on this!


      Christian Hergert: Status Week 43

      news.movim.eu / PlanetGnome • 27 October • 4 minutes

    Got a bit side-tracked with life stuff, but let’s try to get back to it this week.

    Libdex

    • D-Bus signal abstraction iteration from swick

    • Merge documentation improvements for libdex

    • I got an email from the Bazaar author about a crash they’re seeing when loading textures into the GPU for this app-store.

      Almost every crash I’ve seen from libdex has been from forgetting to transfer ownership. I tried hard to make things ergonomic but sometimes it happens.

      I didn’t have any cycles to really donate so I just downloaded the project and told my local agent to scan the project and look for improper ownership transfer of DexFuture.

      It found a few candidates, which I looked at in detail over about five minutes. I passed it along to the upstream developer and that was that. Their fix-ups seem to resolve the issue. Considering how bad agents are at using proper ownership transfer, it’s interesting that they can also be used to discover such mistakes.

    • Add some more tests to the testsuite for futures, just to give myself some more certainty over incoming issue reports.

    Foundry

    • Added a new Gir parser as FoundryGir so that we can have access to the reflected, but not compiled or loaded into memory, version of Gir files. This will mean that we could have completion providers or documentation sub-system able to provide documentation for the code-base even if the documentation is not being generated.

      It would also mean that we can perhaps get access to the markdown-specific documentation w/o HTML so that it may be loaded into the completion accessory window using Pango markup instead of a WebKitWebView shoved into a GtkPopover.

      Not terribly different from what Builder used to do in the Python plugin for code completion of GObject Introspection.

    • Expanding on the Gir parser to locate gir files in the build, system, and installation prefix for the project. That allows trying to discover the documentation for a keyword (type, identifier, etc), which we can generate as something markdowny. My prime motivation here is to have Shift+K working in Vim for real documentation without having to jump to a browser, but it obviously is beneficial in other areas like completion systems. This is starting to work but needs more template improvements.

    • Make FoundryForgeListing be able to simplify the process of pulling pages from the remote service in order. Automatically populates a larger listmodel as individual pages are fetched.

    • Start on Gitlab forge implementation for querying issues.

      Quickly ran into an issue where gitlab.gnome.org is not servicing requests for the API due to Anubis. Bart thankfully updated things to allow our API requests with PRIVATE-TOKEN to pass through.

      Since validating API authorization tokens is one of the most optimized things in web APIs, this is probably of little concern to the blocking of AI scrapers.

    • Gitlab user, project, issues abstractions

    • Start loading GitlabProject after querying API system for the actual project-id from the primary git remote path part.

    • Support finding current project

    • Support listing issues for current project

    • Storage of API keys in a generic fashion using libsecret. Forges will take advantage of this to set a key for host/service pair.

    • Start on translate API for files in/out of SDKs/build environments. Flatpak and Podman still need implementations.

    • PluginGitlabListing can now pre-fetch pages when the last item has been queried from the list model. This will allow GtkListView to keep populating in the background while you scroll.

    • mdoc command helper to prototype discovery of markdown docs for use in interesting places. Also prototyped some markdown->ansi conversion for a nice man-like replacement in the console.

    • Work on a new file search API for Foundry which matches a lot of what Builder will need for its search panel. Grep as a service basically with lots of GListModel and thread-pool trickery.

    • Add foundry grep which uses the foundry_file_manager_search() API for testing. Use it to try to improve the non-UTF-8 support you can run into when searching files/disks where that isn’t used.

    • Cleanup build system for plugins to make it obvious what is happening

    • Setup include/exclude globbing for file searches (backed by grep)

    • Add abstraction for search providers in FoundryFileSearchProvider with the default fallback implementation being GNU grep. This will allow for future expansion into tooling like rg (ripgrep) which provides some nice performance and tooling like --json output. One of the more annoying parts of using grep is that it is so different per-platform. For example, we really want --null from GNU grep so that we get a \0 between the file path and the content as any other character could potentially fall within the possible filename and/or encoding of files on disk.

      Whereas with ripgrep, we can just get JSON and make each search result point to the parsed JsonNode and inflate properties from that as necessary.

    • Add a new IntentManager, IntentHandler, and Intent system so that we can allow applications to handle policy differently with a plugin model w/ priorities. This also allows for a single system to be able to dispatch differently when opening directories vs files to edit.

      This turned out quite nice and might be a candidate to have lower in the platform for writing applications.

    • Add a FoundrySourceBuffer comment/uncomment API to make this easily available to editors using Foundry. Maybe this belongs in GSV someday.

    Ptyxis

    • Fix shortcuts window issue where it could potentially be shown again after being destroyed. Backport to 48/49.

    • Fix issue with background cursor blink causing transparency to break.

    Builder

    • Various prototype work to allow further implementation of Foundry APIs for the on-foundry rewrite. New directory listing, forge issue list, intent handlers.

    Other

    • Write up the situation with Libpeas and GObject Introspection for GNOME/Fedora.

    Research

    • Started looking into various JMAP protocols. I’d really like to get myself off my current email configuration but it’s been around for decades and that’s a hard transition.


      Sam Thursfield: Avoid the Fedora kernel 6.16.8-200.fc42.x86_64

      news.movim.eu / PlanetGnome • 27 October

    Good morning!

    I spent some time figuring out why my build PC was running so slowly today. Thanks to some help from my very smart colleagues I came up with this testcase in Nushell to measure CPU performance:

    ~: dd if=/dev/random of=./test.in bs=(1024 * 1024) count=10
    10+0 records in
    10+0 records out
    10485760 bytes (10 MB, 10 MiB) copied, 0.0111184 s, 943 MB/s
    ~: time bzip2 test.in
    0.55user 0.00system 0:00.55elapsed 99%CPU (0avgtext+0avgdata 8044maxresident)k
    112inputs+20576outputs (0major+1706minor)pagefaults 0swap

    We are copying 10MB of random data into a file and compressing it with bzip2. 0.55 seconds is a pretty good time to compress 10MB of data with bzip2.

    But! As soon as I ran a virtual machine, this same test started to take 4 or 5 seconds, both on the host and in the virtual machine.

    There is already a new Fedora kernel available, and with that version (6.17.4-200.fc42.x86_64) I don’t see any problems. I guess it was some issue affecting AMD Ryzen virtualization that has already been fixed.

    Have a fun day!


      Aryan Kaushik: Balancing Work and Open Source

      news.movim.eu / PlanetGnome • 26 October • 1 minute

    Work pressure + Burnout == Low contributions?

    Over the past few months, I’ve been struggling with a tough question. How do I balance my work commitments and personal life while still contributing to open source?

    On the surface, it looks like a weird question. I really enjoy contributing and working with contributors, and when I was in college, I always thought... "Why do people ever step back? It is so fun!". It was the thing that brought a smile to my face and took away any "stress". But now that I have graduated, things have taken a turn.

    Now, when work pressure mounts, I use the little time I get not to write code, but to pursue some kind of hobby, learn something new, or spend time with family. Or just endlessly scroll videos and sleep.

    This has led me to my lowest contribution streak, unable to work on all those cool things I imagined: reworking the Pitivi timeline in Rust, finishing that one MR in GNOME Settings that has been stuck for ages, fixing some issues in the GNOME Extensions website, working on my own extension's feature requests, or contributing to the committees I am a part of.

    It's reached a point where I'm genuinely unsure how to balance things anymore, and hence I wanted to give an update to everyone I might not have been able to reply to, or who hasn't seen me around for a long time: I'm still here, just in a dilemma about how to return.

    I believe I'm not the only one who faces this. After guiding my juniors for a long while on how to contribute and study at the same time and still manage time for other things, I am now at a point where I find myself in the same situation. So, if anyone has any insights on how they manage their time, keep up the motivation, and juggle between tasks, do let me know (akaushik [at] gnome [dot] org), I'd really appreciate it :)

    One of them would probably be to put fewer things on my plate?

    Perhaps this is just a new phase of learning? Not about code, but about balance.


      Christian Hergert: Libpeas and Introspection

      news.movim.eu / PlanetGnome • 25 October • 5 minutes

    One of the unintended side-effects of writing applications using language bindings is that you inherit the dependencies of the binding.

    This made things a bit complicated when GIRepository moved from gobject-introspection-1.0 to girepository-2.0 as we very much want language bindings to move to the new API.

    Where this adds great difficulty for maintainers is in projects like Libpeas, which provides plug-in capabilities to GTK application developers across multiple programming languages.

    In practice this has allowed applications like Gedit, Rhythmbox, and GNOME Builder to be written in C but load plugins from languages such as Python, Lua, JavaScript, Rust, C, C++, Vala, or any other language capable of producing a .so / .dylib / .dll .

    A very early version of Libpeas, years before I took over maintaining the library, had support for GObject Introspection baked in. This allowed really interesting (at the time) tooling to perform dynamic method call dispatching using a sort of proxy object implemented at runtime. Practically nobody is using this feature from what I can tell.

    But maintaining ABI being what it is, the support for it has long been part of the libpeas-1.x ABI.

    A couple of years ago I finally released a fresh libpeas-2.x ABI which removed a lot of legacy API. With objects implementing GListModel and PeasPluginInfo becoming a GObject, the need for libpeas-gtk dwindled. It’s extremely easy for your application to do everything provided by the library. Additionally, I removed the unused GObject Introspection support, which means that libpeas-2.x no longer needs to link against gobject-introspection-1.0 (nor girepository-2.0).

    One area where those are still used is the testsuite. This can complicate testing: we want to make sure that APIs work across language bindings, but if a language binding uses a version of GObject Introspection that does not align with what the libpeas project is using for tests, it will of course lead to runtime disasters.

    Such is the case with some language bindings. The Lua LGI project is scarcely maintained right now and still uses gobject-introspection-1.0 instead of girepository-2.0. I submitted patches upstream to port it over, but without an official maintainer well versed in C and language bindings, there isn’t anyone to review them and say “yes, merge this”.

    There is a fork now that includes some of the patches I submitted upstream, but the namespace is different so it isn’t a 1:1 replacement.

    The PyGObject project has moved to girepository-2.0 upstream, and that caused some breakage with applications in Fedora 42 that were still using the legacy libpeas-1.x ABI. For that reason, I believe the older PyGObject was released with Fedora 42.

    If you are using libpeas-2.x in your application, you have nothing to fear from any of the language runtimes integrated with libpeas. Since libpeas doesn’t link against either introspection library, it can’t hurt you. Just make sure your dependencies and language bindings are all in sync and you should be fine.

    If you are still using libpeas-1.x (2.x was released a little over 2 years ago), then you are in much worse shape. Language bindings are moving (or have moved) to girepository-2.0, while libpeas cannot realistically be ported without breaking ABI. Too much is exposed as part of the library itself.

    It is imperative that if you want to keep your application working that you are either on libpeas-2.x or you’re bundling your application in such a way that you can guarantee your dependencies are all on the same version of GObject Introspection.

    Halfway ABI

    There exists a sort of “halfway ABI” that someone could work on with enough motivation, which is to break ABI as a sort of libpeas-1.38. It would move to girepository-2.0 and all the ABI side-effects that come with it. Since the introspection support in libpeas-1.x is rarely used, there should be few side-effects other than recompiling against the new ABI (and the build system transitions that go along with that).

    In my experience maintaining the largest application using libpeas (being Builder), that is really a lot more effort than porting your applications to libpeas-2.x .

    Is my app affected?

    So in short, here are a few questions to ask yourself to know if you’re affected by this.

    • Does my application only use embedded plug-ins or plug-ins from shared modules such as *.so? If so, then you are all set!
    • Do I use libpeas-1.x? If no, then great!
    • Does my libpeas-1.x project use Python for plug-ins? If yes, port to libpeas-2.x (or alternatively, work on the suggested halfway ABI for libpeas).
    • Does my libpeas-1.x or libpeas-2.x project use Lua for plug-ins? If yes, make sure all your dependencies are using gobject-introspection-1.0 only. Any use of girepository-2.0 will end in doom.

    Since JavaScript support with GJS/MozJS was added in libpeas-2.x, if you’re using JavaScript plug-ins you’re already good. GJS has already transitioned to girepository-2.0 and continues to integrate well with libpeas. But do make sure your other dependencies have made the transition.

    How could this have been avoided?

    Without a time machine there are only three options besides what was done and they all create their own painful side-effects for the ecosystem.

    1. Never break ABI even if your library was a stop gap, never change dependencies, never let dependencies change dependencies, never fix anything.
    2. When pulling GObject Introspection into the GLib project, rename all symbols to a new namespace so that both libraries may co-exist in process at the same time. Symbol multi-versioning can’t fix overlapping type name registration in GType.
    3. Don’t fix any of the glaring issues or inconsistencies when pulling GObject Introspection into GLib. Make gobject-introspection-1.0 map to the same thing that girepository-2.0 does.

    All of those have serious side-effects that are equal to, if not worse than, the status quo.

    Those that want to “do nothing” as maintainers of their applications can really just keep shipping them on Flatpak, but with the Runtime pinned to their golden age of choice.

    The moral of the story is that ABIs are hard even when you’re good at it. Doubly so if your library does anything non-trivial.


      Allan Day: GNOME Foundation Update, 2025-10-24

      news.movim.eu / PlanetGnome • 24 October • 1 minute

    It’s Friday, so it’s time for a GNOME Foundation update, and there are some exciting items to share. As ever, these are just the highlights: there’s plenty more happening in the background that I’m not covering.

    Fundraising progress

    I’m pleased to report that, in recent weeks, the number of donors in our Friends of GNOME program has been increasing. These new regular donations add up to a non-trivial rise in income for the Foundation, which is already making a significant difference to us as an organization.

    I’d like to take this moment to thank every person who has signed up with a regular donation. You are all making a major difference to the GNOME Foundation and the future of the GNOME project. Thank you! We appreciate every single donation.

    The new contributions we are receiving are vital, but we want to go further, and we are working on our plans for both fundraising and future investments in the GNOME project.

    New accountant

    This week we secured the services of a new accountant, Dawn Matlak. Dawn is extremely knowledgeable and comes with a huge amount of relevant experience, particularly around fiscal hosting. She’s also great to work with, and we’re looking forward to collaborating with her.

    Dawn is going to be doing a fair amount of work for us in the coming months. In addition to helping us to prepare for our upcoming audit, she is also going to be overhauling some of our finance systems, in order to reduce workloads, increase reliability, and speed up processing.

    GNOME.Asia

    In other news, GNOME.Asia 2025 is happening in Tokyo on 13-15 December, and it’s approaching fast! Talk submissions have been reviewed and accepted, and the schedule is starting to come together. Information is being added to the website, and social activities are being planned. It’s shaping up to be a great event.

    Registration for attendees isn’t open just yet, but it isn’t far off – look out for the announcement.

    That’s it from me this week. I am on vacation next week, so I’ll be skipping next week’s post. See you in two weeks!