Gnome Planet - Latest News

  • Christian Schaller: Fedora Workstation 40 – what are we working on (2024/03/28 18:56)
    Fedora Workstation 40 Beta has just come out, so I thought I’d share a bit about some of the things we are working on for Fedora Workstation currently, and also major changes coming in from the community. Flatpak Flatpaks have been a key part of our strategy for desktop applications for a while now, and we are working on a multitude of things to make Flatpaks an even stronger technology going forward. Christian Hergert is working on figuring out how applications that require system daemons will work with Flatpaks, using his own Sysprof project as the proof-of-concept application. The general idea here is to rely on the work that has happened in systemd around sysext/confext/portablectl, trying to figure out how we can get a system service installed from a Flatpak and the necessary bits wired up properly. The other part of this work, figuring out how to give applications permissions that today are handled with udev rules, is being worked on by Hubert Figuière based on earlier work by Georges Stavracas on behalf of the GNOME Foundation, thanks to sponsorship from the Sovereign Tech Fund. So hopefully we will get both of these important issues resolved soon. Kalev Lember is working on polishing up the Flatpak support in Foreman (and Satellite) to ensure there are good tools for managing Flatpaks when you have a fleet of systems you manage, building on the work of Stephan Bergman. Finally, Jan Horak and Jan Grulich are working hard on polishing up the experience of using Firefox from a fully sandboxed Flatpak. This work is mainly about working with the upstream community to get some needed portals over the finish line and polish up some UI issues in Firefox, like this one. Toolbx Toolbx, our project for handling developer containers, is picking up pace, with Debarshi Ray currently working on getting full NVIDIA binary driver support for the containers. One of our main goals for Toolbx at the moment is making it a great tool for AI development, and thus getting the NVIDIA & CUDA support squared away is critical. Debarshi has also spent quite a lot of time cleaning up the Toolbx website, providing easier access to and updating the documentation there. We are also moving to use the new Ptyxis (formerly Prompt) terminal application created by Christian Hergert in Fedora Workstation 40. This gives us a great GTK 4 terminal, and we also believe we will be able to further integrate Toolbx and Ptyxis going forward, creating an even better user experience. Nova So as you probably know, we have been the core maintainers of the Nouveau project for years, keeping this open source upstream NVIDIA GPU driver alive. We plan to keep doing that, but the opportunities offered by the availability of the new GSP firmware for NVIDIA hardware mean we should now be able to offer a full-featured and performant driver. But co-hosting both the old and the new way of doing things in the same upstream kernel driver has turned out to be counterproductive, so we are now looking to split the driver in two. For older pre-GSP NVIDIA hardware we will keep the old Nouveau driver around as is. For GSP-based hardware we are launching a new driver called Nova. It is important to note here that Nova is thus not a competitor to Nouveau, but a continuation of it.
The idea is that the new driver will be primarily written in Rust, based on work already done in the community. We are also evaluating whether some of the existing Nouveau code should be copied into the new driver, since we already spent quite a bit of time trying to integrate GSP there. Worst case, if we can’t reuse code, we will use the lessons learned from Nouveau with GSP to implement the support in Nova more quickly. Contributing to this effort from our team at Red Hat are Danilo Krummrich, Dave Airlie, Lyude Paul, Abdiel Janulgue and Phillip Stanner. Explicit Sync and VRR Another exciting development that has been a priority for us is explicit sync, which is especially critical for the NVIDIA driver, but which might also provide performance improvements for other GPU architectures going forward. So a big thank you to Michel Dänzer, Olivier Fourdan, Carlos Garnacho, the NVIDIA folks, Simon Ser and the rest of the community for working on this. This work has just finished upstream, so we will look at backporting it into Fedora Workstation 40. Another major Fedora Workstation 40 feature is experimental support for Variable Refresh Rate, or VRR, in GNOME Shell. The feature was mostly developed by community member Dor Askayo, but Jonas Ådahl, Michel Dänzer, Carlos Garnacho and Sebastian Wick have all contributed with code reviews and fixes. In Fedora Workstation 40 you need to enable it using the command: gsettings set org.gnome.mutter experimental-features "['variable-refresh-rate']" PipeWire I already covered PipeWire in my post a week ago, but to quickly summarize here too: using PipeWire for video handling is now finally getting to the stage where it is actually happening. Both Firefox and OBS Studio now come with PipeWire support, and hopefully we can also get Chromium and Chrome to start taking a serious look at merging the patches for this soon. What’s more, Wim spent time fixing FireWire FFADO bugs, so hopefully this makes FireWire equipment fully usable and performant with PipeWire for our pro-audio community users. Wim did point out when I spoke to him, though, that the FFADO drivers had obviously never had any other consumer than JACK, so when he tried to allow for more functionality the drivers quickly broke down; Wim has therefore limited the feature set of the PipeWire FFADO module to be an exact match of how these drivers were being used by JACK. If the upstream kernel maintainer is able to fix the issues found by Wim then we could look at providing a fuller feature set. In Fedora Workstation 40 the de-duplication support for v4l vs libcamera devices should work as soon as we update WirePlumber to the new 0.5 release. To hear more about PipeWire and the latest developments be sure to check out this interview with Wim Taymans by the good folks over at Destination Linux. Remote Desktop Another major feature landing in Fedora Workstation 40 that Jonas Ådahl and Ray Strode have spent a lot of effort on is finalizing the remote desktop support for GNOME on Wayland. There has already been support for remote connections to logged-in sessions, but with these updates you can do the login remotely too, and thus the session does not need to be already running on the remote machine. This work will also enable 3rd party solutions to do remote logins on Wayland systems, so while I am not at liberty to mention names, be on the lookout for more 3rd party Wayland remoting software becoming available this year.
This work is also important to help Anaconda with its Wayland transition, as remote graphical install is an important feature there. So what you should see there is Anaconda using GNOME Kiosk mode and the GNOME remote support to handle this going forward, thus enabling a Wayland-native Anaconda. HDR Another feature we have been working on for a long time is HDR, or High Dynamic Range. We wanted to do it properly and also needed to work with a wide range of partners in the industry to make this happen. So over the last year we have been contributing to improving various standards around color handling and acceleration to prepare the ground, and working on and contributing to key libraries needed to, for instance, gather the needed information from GPUs and screens. Things are coming together now, and Jonas Ådahl and Sebastian Wick are going to focus on getting Mutter HDR-capable. Once that work is done we are by no means finished, but it should put us close to at least being able to start running some simple use cases (like some fullscreen applications) while we work out the finer points to get great support for, for instance, running SDR and HDR applications side by side. PyTorch We want to make Fedora Workstation a great place to do AI development and testing. The first step in that effort is packaging up PyTorch and making sure it can have working hardware acceleration out of the box. Tom Rix has been leading that effort on our end and you will see the first fruits of that labor in Fedora Workstation 40, where PyTorch should work with GPU acceleration on AMD hardware (ROCm) out of the box. We hope and expect to be able to provide the same for NVIDIA and Intel graphics eventually too, but this is definitely a step-by-step effort.
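A quick way to sanity-check that packaged PyTorch build once Fedora Workstation 40 lands is to ask it whether it sees a GPU; on ROCm builds the standard torch.cuda API reports the AMD device. A minimal sketch (illustrative only; whether acceleration is actually available depends on your hardware and driver stack):
python3 -c "import torch; print(torch.__version__)"
python3 -c "import torch; print(torch.cuda.is_available())"       # True when ROCm (or CUDA) acceleration works
python3 -c "import torch; print(torch.cuda.get_device_name(0))"   # prints the GPU name if one was found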
  • Jordan Petridis: Thoughts on employing PGO and BOLT on the GNOME stack (2024/03/26 15:42)
    Christian was looking at PGO and BOLT recently, so I figured I’d write down my notes from the discussions we had on how we’d go about making things faster on our stack, since I don’t have the time or the resources to pursue those plans myself at the moment. First off, let’s start with the basics: PGO (profile-guided optimization) and BOLT (Binary Optimization and Layout Tool) work in similar ways. You capture one or more “profiles” of a workload that’s representative of a use case of your code, and then the tools do their magic to make the common hot paths more efficient/cache-friendly/etc. Afterwards they produce a new binary that is hopefully faster than the old one and functionally identical, so you can just replace it. Now two issues arise here: First of all, we don’t really have any benchmarks in our stack, let alone ones that are rounded enough to account for the majority of use cases. Additionally, we need better instrumentation to capture stats like frames and frame times, and to export them both for sysprof and so we can make the benchmark runners more useful. Once we have the benchmarks we can use them to create the profiles for optimizations and to verify that any changes have the desired effect. We will need multiple profiles of all the different hardware/software configurations. For example, for GTK ideally we’d want to have a matrix of profiles for the different render backends (NGL/Vulkan) along with the Mesa drivers they’d use depending on different hardware (AMD/Intel), and then also different architectures, so additional profiles for the Raspberry Pi 5 and Asahi stacks. We might also want to add a profile captured under qemu+virtio while we are at it. Maintaining the benchmarks and profiles would be a lot of work and very tailored to each project, so they would all have to live in their upstream repositories. On the other hand, the optimization itself has to be done during the Tree/userland/OS composition, and we’d have to aggregate all the profiles from all the projects to apply them. This is easily done when you are in control of the whole deployment, as we can do for the GNOME Flatpak Runtime. It’s also easy to do if you are targeting an embedded deployment where most of the time you have custom images you are in full control of and know exactly the workload you will be running. If we want distros to also apply these optimizations, and for this to be done at scale, we’d have to make the whole process automatic and part of the usual compilation process so there would be no room for error during integration. The downside of this would be that we’d have a lot fewer opportunities for aggregating different use cases/profiles, as projects would either have to own optimizations of the stack beneath them (e.g. GTK being the one relinking Pango) or only relink their own libraries. To conclude, post-link-time optimization would be a great avenue to explore, as it seems to be one of the lower-hanging fruits when it comes to optimizing the whole stack. But it would also be quite the effort and require a decent amount of work to be committed to it. It would be worth it in the long run.
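For readers who haven’t used these tools, the mechanics described above usually boil down to a build-run-rebuild loop. A minimal sketch using Clang’s instrumentation-based PGO and LLVM’s BOLT (the benchmark command and file names are placeholders, and exact flags vary between toolchain versions):
# 1. Build with profile instrumentation, then run a representative workload.
clang -O2 -fprofile-instr-generate -o app main.c
./app --representative-benchmark                 # placeholder workload; writes default.profraw
llvm-profdata merge -output=app.profdata default.profraw
# 2. Rebuild using the collected profile.
clang -O2 -fprofile-instr-use=app.profdata -o app main.c
# 3. Optionally post-process the binary with BOLT, fed by a perf capture.
#    (BOLT additionally wants the binary linked with --emit-relocs.)
perf record -e cycles:u -j any,u -o perf.data -- ./app --representative-benchmark
perf2bolt -p perf.data -o app.fdata app
llvm-bolt app -o app.bolt -data=app.fdata -reorder-blocks=ext-tsp -split-functions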
  • Andy Wingo: hacking v8 with guix, bis (2024/03/26 11:51)
    Good day, hackers. Today, a pragmatic note, on hacking on V8 from a Guix system. I’m going to skip a lot of the background because, as it turns out, I wrote about this already almost a decade ago. But following that piece, I mostly gave up on doing V8 hacking from a Guix machine—it was more important to just go with the flow of the ever-evolving upstream toolchain. In fact, I ended up installing Ubuntu LTS on my main workstations for precisely this reason, which has worked fine; I still get Guix in user-space, which is better than nothing. Since then, though, Guix has grown to the point that it’s easier to create an environment that can run a complicated upstream source management project like V8’s. This is mainly guix shell in the --container --emulate-fhs mode. This article is a step-by-step for how to get started with V8 hacking using Guix. Get the code. You would think this would be the easy part: just git clone the V8 source. But no, the build wants a number of other Google-hosted dependencies to be vendored into the source tree. To perform the initial fetch for those dependencies and to keep them up to date, you use helpers from the depot_tools project. You also use depot_tools to submit patches to code review. When you live in the Guix world, you might be tempted to look into what depot_tools actually does, and to replicate its functionality in a more minimal, Guix-like way. Which, sure, perhaps this is a good approach for packaging V8 or Chromium or something, but when you want to work on V8, you need to learn some humility and just go with the flow. (It’s hard for the kind of person that uses Guix. But it’s what you do.) You can make some small adaptations, though. depot_tools is mostly written in Python, and it actually bundles its own virtualenv support for using a specific python version. This isn’t strictly needed, so we can set the funny environment variable VPYTHON_BYPASS="manually managed python not supported by chrome operations" to just use python from the environment. Sometimes depot_tools will want to run some prebuilt binaries. Usually on Guix this is anathema—we always build from source—but there’s only so much time in the day and the build system is not our circus, not our monkeys. So we get Guix to set up the environment using a container in --emulate-fhs mode; this lets us run third-party pre-built binaries. Note, these binaries are indeed free software! We can run them just fine if we trust Google, which you have to when working on V8. No, really, get the code. Enough with the introduction. The first thing to do is to check out depot_tools:
mkdir src
cd src
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
I’m assuming you have git in your Guix environment already. Then you need to initialize depot_tools. For that you run a python script, which needs to run other binaries – so we need to make a specific environment in which it can run.
This starts with a manifest of packages, conventionally placed in a file named manifest.scm in the project’s working directory; you don’t have one yet, though, so you can just write it into v8.scm or something, anywhere:
(use-modules (guix packages)
             (gnu packages gcc))
(concatenate-manifests
 (list (specifications->manifest
        '("bash" "binutils" "clang-toolchain" "coreutils" "diffutils"
          "findutils" "git" "glib" "glibc" "glibc-locales" "grep" "less"
          "ld-gold-wrapper" "make" "nss-certs" "nss-mdns" "openssh" "patch"
          "pkg-config" "procps" "python" "python-google-api-client"
          "python-httplib2" "python-pyparsing" "python-requests"
          "python-tzdata" "sed" "tar" "wget" "which" "xz"))
       (packages->manifest
        `((,gcc "lib")))))
Then, you guix shell -m v8.scm. But you actually do more than that, because we need to set up a container so that we can expose a standard /lib, /bin, and so on:
guix shell --container --network \
  --share=$XDG_RUNTIME_DIR --share=$HOME \
  --preserve=TERM --preserve=SSH_AUTH_SOCK \
  --emulate-fhs \
  --manifest=v8.scm
Let’s go through these options one by one. --container: This is what lets us run pre-built binaries, because it uses Linux namespaces to remap the composed packages to /bin, /lib, and so on. --network: Depot tools are going to want to download things, so we give them net access. --share: By default, the container shares the current working directory with the “host”. But we need not only the checkout for V8 but also the sibling checkout for depot tools (more on this in a minute); let’s just share the whole home directory. Also, we share the /run/user/1000 directory, which is $XDG_RUNTIME_DIR, which lets us access the SSH agent, so we can check out over SSH. --preserve: By default, the container gets a pruned environment. This lets us pass some environment variables through. --emulate-fhs: The crucial piece that lets us bridge the gap between Guix and the world. --manifest: Here we specify the list of packages to use when composing the environment. We can use short arguments to make this a bit less verbose:
guix shell -CNF --share=$XDG_RUNTIME_DIR --share=$HOME \
  -ETERM -ESSH_AUTH_SOCK -m manifest.scm
I would like it if all of these arguments could somehow be optional, so that I could get a bare guix shell invocation to just apply them, when run in this directory. Perhaps some day. Running guix shell like this drops you into a terminal. So let’s initialize depot tools:
cd $HOME/src
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
export PATH=$HOME/src/depot_tools:$PATH
export SSL_CERT_DIR=/etc/ssl/certs/
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
gclient
This should download a bunch of things, I don’t know what. But at this point we’re ready to go:
fetch v8
This checks out V8, which is about 1.3 GB, and then probably about as much again in dependencies. Build V8. You can build V8 directly:
# note caveat below!
cd v8
tools/dev/gm.py x64.release
This will build fine... and then fail to link. The precise reason is obscure to me: it would seem that by default, V8 uses a whole Debian sysroot for Some Noble Purpose, and ends up linking against it. But it compiles against system glibc, which seems to have replaced fcntl64 with a versioned symbol, or some such nonsense.
It smells like V8 built against a too-new glibc and then failed trying to link to an old glibc. To fix this, you need to go into the args.gn that was generated in out/x64.release and then add use_sysroot = false, so that it links to system glibc instead of the downloaded one:
echo 'use_sysroot = false' >> out/x64.release/args.gn
tools/dev/gm.py x64.release
You probably want to put the commands needed to set up your environment into some shell scripts. For Guix you could make guix-env:
#!/bin/sh
guix shell -CNF --share=$XDG_RUNTIME_DIR --share=$HOME \
  -ETERM -ESSH_AUTH_SOCK -m manifest.scm -- "$@"
Then inside the container you need to set the PATH and such, so we could put this into the V8 checkout as env:
#!/bin/sh
# Look for depot_tools in sibling directory.
depot_tools=`cd $(dirname $0)/../depot_tools && pwd`
export PATH=$depot_tools:$PATH
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
export SSL_CERT_DIR=/etc/ssl/certs/
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
exec "$@"
This way you can run ./guix-env ./env tools/dev/gm.py x64.release and not have to “enter” the container so much. Notes. This all works fine enough, but I do have some meta-reflections. I would prefer it if I didn’t have to use containers, for two main reasons. One is that the resulting build artifacts have to be run in the container, because they are dynamically linked to e.g. /lib, at least for the ELF loader. It would be better if I could run them on the host (with the host debugger, for example). Using Guix to make the container is better than e.g. docker, though, because I can ensure that the same tools are available in the guest as I use on the host. But also, I don’t like adding “modes” to my terminals: are you in or out of this or that environment? Being in a container is not like being in a vanilla guix shell, and that’s annoying. The build process uses many downloaded tools and artifacts, including clang itself. This is a feature, in that I am using the same compiler that colleagues at Google use, which is important. But it’s also annoying and it would be nice if I could choose. (Having the same clang-format though is an absolute requirement.) There are two tests failing in this configuration. It is somehow related to time zones. I have no idea why, but I just ignore them. If the build system were any weirder, I would think harder about maybe using Docker or something like that. Colleagues point to distrobox as being a useful wrapper. It is annoying though, because such a docker image becomes like a little stateful thing to do sysadmin work on, and I would like to avoid that if I can. Welp, that’s all for today. Hopefully if you are contemplating installing Guix as your operating system (rather than just in user-space), this can give you a bit more information as to what it might mean when working on third-party projects. Happy hacking and until next time!
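As a small follow-up to the workflow above: once the link succeeds, the freshly built d8 shell can be exercised through the same two wrapper scripts. A sketch, assuming the usual V8 output location for the binary (treat the path and the -e flag as assumptions and adjust to your build):
./guix-env ./env out/x64.release/d8 -e 'print("hello from d8")'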
  • Jan Lukas Gernert: Newsflash 3.2 (2024/03/25 22:34)
    Another small feature update just in time for GNOME 46. Subscribe via CLI Let’s start with something that already went into version 3.1.4: you can subscribe to feeds via CLI now. The idea is that this is a building block for seamlessly subscribing to websites from within a browser or something similar. Let’s see how this develops further. Scrape all new Articles of a Feed If GitLab upvotes are a valid metric, this feature was the most requested one so far. Feed settings gained a new toggle to scrape the content of new articles. The sync will complete normally and in a second operation Newsflash tries to download the full content of all new articles in the background. This is especially useful when there is no permanent internet connection. Now you can let Newsflash sync & download content while on WiFi and read the complete articles later even without an internet connection. Update Feed URL The local RSS backend gained the ability to update the URL where the feed is located (see the screenshot above). Sadly none of the other services support this via their APIs as far as I know. Clean Database The preferences dialog gained the ability to drop all old articles and “vacuum” the database right away. Depending on the size of the database file this can take a few seconds, which is why it is not done in the background during normal operations yet. (btw: I’m not sure if I should keep the button as “destructive-action”) Internal Refactoring Just a heads up that a lot of code managing the loading of the article list and keeping track of the displayed article and its state was refactored. If there are any regressions, please let me know. Profiling Christian Hergert’s constant stream of profiling blog posts finally got to me. So I fired up sysprof, fully expecting not to be knowledgeable enough to draw any meaningful conclusions from the data. After all, the app is pretty snappy on my machine ™, so any improvements must be hard to find and even harder to solve. But much to my surprise, about 30 minutes later two absolutely noticeable low-hanging-fruit performance problems were discovered and fixed. So I encourage everyone to just try profiling your code. You may be surprised what you find. Adwaita Dialogs & Removing Configurable Shortcuts Of course this release makes use of the new Adwaita Dialogs. For all the dialogs but one: configuring custom keybindings still spawns a new modal window. Multiple overlapping dialogs aren’t the greatest thing in the world. This and another annoying issue made me think about removing the feature from Newsflash completely. The problem is that all shortcuts need to be disabled whenever the user is about to enter text. Otherwise keybindings with a single letter cannot be entered as text. All major feed readers (Feedly, Inoreader, etc.) have a fixed set of cohesive keyboard shortcuts. I’ve been thinking about either having 2-3 shortcut configurations to choose from or just hard-coding keybindings altogether. I’d like to hear your thoughts. Do you use custom shortcuts? Would you be fine with a well thought out but hard-coded set of shortcuts? Would you prefer to choose from a few pre-defined shortcut configurations? Let me know, and help me find the best keybindings for all the actions that can be triggered via keyboard.
  • Christian Hergert: GNOME 45/46 Retrospective (2024/03/25 03:20)
    My creative work is more aligned to GNOME cycles than years. Now that GNOME 46 is out it’s a good time to look back at some of the larger things I did during those cycles. Fedora and Frame Pointers 2023 kicked off with quite a kerfuffle around frame pointers. Many people appear to have opinions on the topic though very few are aware of the trade-offs involved or the surface area of the problem domain. I spent quite some time writing articles to both educate and ultimately convince the Fedora council that enabling them is the single best thing they could do to help us make the operating system significantly faster release-to-release. Much to my surprise both Ubuntu and Arch are choosing to follow. Early this year I published an article in Fedora Magazine on the topic. Futures, Fibers and Await for C I still write a lot of C and have to integrate with a lot of C in my day to day work. Though I really miss asynchronous programming from other languages, like when I was working on Mono all those years ago. Doing that sort of programming in C with the GObject stack was always painful due to the whole async/finish flow. For decades we had other ways in C but none of them integrated well with GObject, and they come with their own sort of foot-guns. So I put together libdex which could do futures/promises, fibers (including on threads), lock-free work-stealing among thread-pools, io_uring integration, asynchronous semaphores, channels, and more. It’s really changed how I write C now, especially with asynchronous workflows. Being able to await on any number of futures which suspend your fiber is so handy. It reminds me a lot of the CCR library out of the Microsoft Robotic Labs way back when. I especially love that I can set up complex “if-these-or-that” style futures and await on them. I think the part I’m most proud of is the global work queue for the thread-pool. Combining eventfd with EFD_SEMAPHORE and using io_uring worked extremely well and doesn’t suffer the thundering herd problem that you’d get if you did that with poll() or even epoll(). Being able to have work-stealing and a single worker wake up as something enters the global queue was not something I could do 20 years ago on Linux. Where this advanced even further is that it was combined with a GMainContext on the worker threads. That too was not something that worked well in the past, meaning that if you used threads you often had to forgo any sort of scheduled/repeated work. Sysprof Now that I was armed with actually functioning stack traces coming from the Linux perf subsystem it was time to beef up Sysprof. Pretty much everything but the capture format was rewritten. Primarily because the way containers are constructed on Linux broke every assumption that profilers written 10 years ago had. Plus this was an opportunity to put libdex through its paces and it turned out great. Correlating marks across GTK, GLib, and Mutter was extremely valuable. Though the most valuable part is probably the addition of flamegraphs. Search Providers To test out the Sysprof rewrite I took a couple weeks to find and fix performance issues before the GNOME release. That uncovered a lot of unexpected performance issues in search providers. Fixes got in, CPU cycles were salvaged. Projects like Photos, Calculator, Characters, Nautilus, and libgweather are all performing well now it seems. I no longer find myself disabling search providers on new GNOME installations. This work also caught JXL wallpapers adding double-digit seconds to our login time.
That got pushed back a release while upstream improved GdkPixbuf loader performance. GNOME Settings Daemon Another little hidden memory stealer was gnome-settings-daemon, because of all the isolated sub-daemons it starts. Each of these was parsing the default theme at startup, which is fallout from the Meson build system handover. I had fixed this many years ago and now it works again, both saving a small amount of memory (maybe 1-3 MB each) for each process and reducing start-up time. libmks At one point I wanted to see what it would take to get Boxes on GTK 4. It seemed like one of the major impediments was a GTK 4 library to display as well as handle input (mouse, keyboard). That was something I worked on considerably in the past during my tenure at a large virtualization company, so the problem domain is one I felt like I could contribute to. I put together a prototype which led me to some interesting findings along the way. Notably, both Qemu and the Linux kernel virtio_gpu drivers were not passing damage regions, which prevented GTK from doing damage efficiently. I pointed out the offending code to both projects and now those are fixed. Now you can use the drm backend in a Qemu VM with VirGL and virtio_gpu and have minimal damage all the way through the host. That work then got picked up by Bilal and Benjamin, which has resulted in new APIs inside of GTK that are consumed from libmks to further optimize damage regions. Qemu however may still need to break its D-Bus API to properly pass DMA-BUF modifiers. GtkExpression Robustness While working on Sysprof I ran into a number of issues with GtkExpression and GObject weak-ref safety. When you do as much threading as Sysprof does you’re bound to break things. Thankfully I had to deal with similar issues in Builder years ago, so I took that knowledge to fix GtkExpression. By combining both a GWeakRef and a GWeakNotify you can more safely track tertiary object disposal without races. GObject Type-System Performance Also out of the Sysprof work came a lot of flamegraphs showing various GType checking overhead. I spent some time diving in and caching the appropriate flags so that we save a non-trivial percentage of CPU there. My nearly decade-long desire to get rid of GSlice finally happened for most platforms. If you want a really fast allocator that can do arenas well, I suggest looking into things like tcmalloc. systemd-oomd Also out of the Sysprof work came a discovery of systemd-oomd waking up too often and keeping my laptops from deeper sleep. That got fixed upstream in systemd. Manuals One prototype I created was around documentation viewers and I’m using it daily now. I want to search/browse documentation a bit differently than how devhelp and other documentation sites seem to work. It indexes everything into SQLite and manages that in terms of SDKs. Therefore things like cross-referencing between releases is trivial. Currently it can index your host, jhbuild, and any org.gnome.Sdk.Docs Flatpak SDK you’ve installed. This too is built on libdex and Gom, which allows for asynchronous SQLite using GObjects which are transparently inflated from the database. Another fun bit of integration work was wrapping libflatpak in a Future-based API. Doing so made writing the miners and indexers much cleaner, as they were written with fibers. Gom A decade or more ago I made a library for automatically binding SQLite records to GObjects. I revamped it a bit so that it would play well with lazy loading of objects from a result set.
More recently it also saw significant performance improvements around how it utilizes the type system (a theme here, I guess). libpanel Writing an application like GNOME Builder is a huge amount of work. Some of that work is just scaffolding for what I lovingly consider Big Fucking Applications. Things like shortcut engines, menuing systems, action management, window groups and workspaces, panels, document grids, and more. A bunch of that got extracted from Builder and put into libpanel. Additionally I made it so that applications which use libpanel can still have modern GNOME sidebar styling. Builder uses this for its workspace windows in GNOME 46 and it contributes to its modern look-and-feel. libspelling One missing stair that was holding people back from porting their applications to GTK 4 was spellcheck. I already had a custom spellchecker for GNOME Text Editor and GNOME Builder which uses an augmented B+Tree I wrote years ago for range tracking. That all was extracted into libspelling. Text Editor One of the harder parts to keep working in a Text Editor, strangely enough, is task cancellation. I took some time to get the details right so that even when closing tabs with documents loading we get those operations cancelled. The trickier bit is GtkTextView doing forward line validation and sizing. But that all appears to work well now. I also tweaked the overview map considerably to make things faster. You need to be extremely careful with widgets that produce so many render nodes which overlap complex clipping. Doubly so when you add fractional scaling and the border of a window can cross two pixels of the underlying display. GNOME Builder got similar performance treatments. Frame Jitters While tracking down GTK renderer performance for GTK 4.14 I spent a lot of time analyzing frame timings in Sysprof. First, I noticed how Mutter was almost never dispatching events on time and mostly around 1 millisecond late. That got fixed with a timerfd patch to Mutter which tightens that up. At 120 Hz and higher that extra millisecond becomes extremely useful! After fixing Mutter I went to the source and made patches taking two different strategies to see which was better. One used timerfd and the other used ppoll(). Ultimately, the ppoll() strategy was better and is now merged in GLib. That will tighten up every GLib-based application, including the GdkFrameClock. I also added support for the Wayland presentation-time protocol in GTK’s Wayland back-end so that predicted frame times are much more accurate. GLib In addition to the ppoll() work above in GLib, I also did some work on speeding up how fast GMainContext can do an iteration of the main loop. We were doing an extraneous read() on our eventfd each pass, resulting in extra syscalls. I also optimized some GList traversals while I was there. I separated the GWeakRef and GWeakNotify lists inside of GObject’s weak ref system so that we could rely on all pointers being cleared before user callback functions are executed. This predictability is essential for building safety at higher levels around weak references. GtkSourceView There were a few more cases in GtkSourceView that needed optimization. Some of them were simpler, like premixing colors to avoid alpha blends on the GPU. In the end, applications like Text Editor should feel a lot snappier in GNOME 46 when combined with GTK 4.14.1 or newer. GTK I spent some time at the end of the 46 cycle aiding in the performance work on NGL/Vulkan.
I tried to lend a hand based on the things I remember helping/hurting/doing nothing while working on the previous GL renderer. In all, I really like where the new NGL/Vulkan renderers are going. While doing some of that work I realized that finalizing our expired cache entries of lines for the textview was reducing our chance of getting our frames submitted before their deadline. So a small patch later to defer that work until the end of the frame cycle helps considerably. Another oddity that got fixed was that we were snapshotting textview child widgets (rarely used) twice in the GtkTextView, thanks to an observant bug reporter. This improved the gutter rendering times for things like GtkSourceView. I also tried to help define some of the GtkAccessibleText API so we can potentially be lazy from widget implementations. The goal here is to have zero overhead if you’re not using accessibility technology and still be fast if you are. I also added a fast path for the old GL renderer for masked textures. But that never saw a release as the default renderer now that NGL is in place, so not such a big deal. It helped for Ptyxis/VTE while it lasted though. GSK also saw a bunch of little fixes to avoid hitting the type system so hard. libpeas 2.0 Libpeas got a major ABI bump which came with a lot of cleaning up of the ABI contracts. But even better, it got a GJS (SpiderMonkey) plugin loader for writing plugins in JavaScript. GNOME Builder uses this for plugins now instead of PyGObject. VTE Performance As my monitor resolution got higher my terminal interactivity in Builder was lessened, especially while on Wayland. It got to the point that latency was causing me to mistype frequently. Thankfully, I had just finished work on Sysprof so I could take a look. Builder is GTK 4 of course, and it turned out VTE was drawing with Cairo and therefore spending significant time on the CPU drawing and significant memory bandwidth uploading the full texture to the GPU each frame. Something I did find funny was how up in arms people got about a prototype I wrote to find the theoretical upper bounds of PTY performance for a terminal emulator. How dare I do actual research before tackling a new problem domain for me. In the end I finally figured out how to properly use GskTextNode directly with PangoGlyphString to avoid PangoLayout. A trick I’d use again in GtkSourceView to speed up line number drawing. Along with modernizing the drawing stack in VTE I took the chance to optimize some of the cross-module performance issues. VTE performance is in pretty good shape today and will certainly get even better in its capable maintainers’ hands. They were extremely friendly and helpful to a newcomer showing up to their project with grand ideas of how to do things. GNOME Terminal To validate all the VTE performance work I also ported the venerable GNOME Terminal to GTK 4. It wasn’t quite ready to ship in concert with GNOME 46, but I’m feeling good about its ability to ship during GNOME 47. Ptyxis For years I had this prototype sitting around for a container-based terminal built on top of the Builder internals. I managed to extract that as part of my VTE performance work. It’s a thing now, and it’s pretty good. I can’t imagine not using it day-to-day now. VTE Accessibility Now that I have a GTK 4 terminal application to maintain, the only responsible thing for me to do is to make sure that everyone can use it. So I wrote a new accessibility layer bridging VteTerminal to GtkAccessibleText in GTK.
Podman and PTY While working on Ptyxis I realized that Podman was injecting an extra PTY into the mix. That makes foreground process tracking extremely difficult, so I advocated for the ability to remove it from Podman. That has now happened, so in future versions of Ptyxis I plan to prod Podman into doing the right thing. GNOME Builder All sorts of nice little quality-of-life improvements happened in Builder. More control over the build pipeline and application runtime environment makes it easier to integrate with odd systems and configurations. The terminal work in Ptyxis came back into Builder so we got many paper cuts triaged. You’ll also notice many new color palettes that ship with Builder which were generated from palettes bundled with Ptyxis. Memory usage has also been reduced even further. Biased Ref Counting I wrote an implementation of Biased Ref Counting to see how it would perform with GObject/GTK. Long story short, the integration complexities probably outweigh most of the gains. Removing Twitter, Mastodon, and Matrix I removed my social media accounts this year and it’s lovely. Cannot recommend it enough.
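An aside for readers who want to reproduce the kind of profiling described throughout this retrospective: the frame-pointer work is what makes perf-based stack unwinding cheap, and you get the same benefit in your own builds by keeping frame pointers around. A minimal sketch (the sysprof-cli option layout is from memory and may differ between versions):
# Build with frame pointers so perf/Sysprof can unwind cheaply (Fedora now does this by default).
gcc -O2 -g -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -o demo demo.c
# Record a capture while the program runs, then open it in the Sysprof UI.
sysprof-cli capture.syscap -- ./demo
sysprof capture.syscap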
  • Christian Hergert: Debug Builds and GPUs (2024/03/24 19:35)
    Decades ago, when you wanted to run debug builds for UI applications, things were incredibly slow. First you’d wait minutes for the application to present a window. Then wait tens of seconds for each frame to render. You were extremely lucky if Valgrind caught the issue while you exercised the UI. Things have gotten much better due to movement in two different directions. In one direction, GCC and Clang got compiler integration for sanitizers like ASAN. Instead of relying on extreme amounts of emulation in Valgrind, compilers can insert the appropriate checks, canaries, and memory mapping tricks to catch all sorts of behavior. In the other direction, we’ve started drawing modern UI toolkits with the GPU. The idea here is that if the work is dispatched to the GPU, there is less for the CPU to run and therefore less work for the sanitizers and/or Valgrind to do. Don’t let that fool you though. A lot of specialized work is still done on the CPU to allow those GPUs to go fast. You trade off framebuffer updates and huge memory bus transfers for more complex diffing, batching and reordering operations, state tracking, occasional texture uploads, and memory bandwidth for vertex buffers and the like. Here I’ve compiled all the hot parts of a GTK application with the address sanitizer. That includes GLib/GIO/GObject, HarfBuzz, Pango, and GTK. The application is also running with GSK_DEBUG=full-redraw to ensure we redraw the entire window every single frame with full damage. We use GDK_DEBUG=no-vsync to let it run as fast as it can rather than block waiting for the next vblank. And still, GTK can produce hundreds of frames per second. Truly magical.
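For anyone who wants to repeat the experiment, a rough sketch of the setup (Meson’s b_sanitize option is the standard way to get an address-sanitizer build; the environment variables are the ones named in the post, and the application name is a placeholder):
# Build the library or app with the address sanitizer enabled.
meson setup build -Db_sanitize=address -Dbuildtype=debugoptimized
ninja -C build
# Force full-window redraws every frame and disable vsync throttling, then watch the frame rate.
GSK_DEBUG=full-redraw GDK_DEBUG=no-vsync ./build/my-gtk-app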
  • Dave Patrick Caberto: Kooha 2.3 Released! (2024/03/24 04:29)
    Kooha is a simple screen recorder for Linux with a minimal interface. You can simply click the record button without having to configure a bunch of settings. While we strive to keep Kooha simple, we also want to make it better. This release, composed of over 300 commits, is focused on quality-of-life improvements and bug fixes. This release includes a refined interface, improved area selection, more informative notifications, and other changes. Read on to learn more about the new features and improvements. New Features and Improvements Refined Interface The main screen now has a more polished look. It now shows the selected format and FPS. This makes it easier to see the current settings at a glance, without having to open the settings window. Other than that, progress is now shown when flushing the recording. This gives a better indication when encoding or saving is taking longer than expected. Furthermore, the preferences window is also improved. It is now more descriptive and selecting FPS is now easier with a dropdown menu. Improved Area Selection The area selection window is now resizable. You can now resize the window to fit your screen better. Additionally, the previously selected area is now remembered across sessions. This means that if you close Kooha and open it again, the area you selected will be remembered. Other improvements include improved focus handling, sizing fixes, better performance, and a new style. More Informative Notifications Record-done notifications now show the duration and size of the recorded video. This is inspired by GNOME Shell screencast notifications. Moreover, the notification actions now work even when the application is closed. Other Changes Besides the mentioned features, this release also includes: Logout and idle are now inhibited while recording. The audio no longer stutters and gets corrupted when recording for a long time. The audio is now recorded in stereo instead of mono when possible. The recordings are no longer deleted when flushing is canceled. Incorrect output video orientation on certain compositors is now fixed. Performance and stability are improved. Getting Kooha 2.3 Kooha is available on Flathub. You can install it from there, and since all of our code is open-source and can be freely modified and distributed according to the license, you can also download and build it from source. Closing Words Thanks to everyone who has supported Kooha, be it through donations, bug reports, translations, or just using it. Your support is what keeps this project going. Enjoy the new release!
  • Tobias Bernard: Mini GUADEC 2024: We have a Venue! (2024/03/22 18:51)
    We’ve had a lot of questions from people planning to attend this year’s edition of the Berlin Mini GUADEC from outside Berlin about where it’s going to happen, so they can book accommodation nearby. We have two pieces of good news on that front: First, we have secured (pending a few last organizational details) a very cool venue, and second: the venue has a hostel next to it, so there’s the possibility to stay very close by for cheap :) Come join us at Regenbogenfabrik The event will happen at Regenbogenfabrik in Kreuzberg (Lausitzerstraße 21a). The venue is a self-organized cultural center with a fascinating history, and consists of, in addition to the event space, a hostel, bike repair and woodworking workshops, and a kindergarten (luckily for us, closed during the GUADEC days). The courtyard at Regenbogenfabrik Some of the perks of this venue: Centrally located (a few blocks from Kottbusser Tor) We can stay as late as we want (no being kicked out at 6pm!) Plenty of space for hacking Lots of restaurants, bars, and cafes nearby Right next to the Landwehrkanal and close to Görlitzer Park There’s a ping pong table! Regenbogenfabrik on Openstreetmap Stay at the venue If you’re coming to Berlin from outside and would like to stay close to the venue there’s no better option than staying directly at the venue: We’ve talked to the Regenbogenfabrik Hostel, and there are still somewhere around a dozen spots available during the GUADEC days (in rooms for 2, 3, or 8 people). Prices range between 20 and 75 Euro per person per night, depending on the size of the room. You can book using the form here (German, but Firefox Translate works well these days :) ). As the organizing team we don’t have the capacity to get directly involved in booking the accommodations, but we’re in touch with the hostel people and can help with coordination. Note: If you’re interested in staying at the hostel act fast, because spots are limited. To be sure to get one of the open spots, please book by next Tuesday (March 26th) and mention the codeword “GNOME” so they know to put you in rooms with other GUADEC attendees. Also, if you’re coming don’t forget to add your name to the attendee list on Hedgedoc, so we know roughly how many people are coming :) If you have any other questions feel free to join our Matrix room. See you in Berlin!
  • Felix Häcker: #140 Forty-six! (2024/03/22 00:00)
    Update on what happened across the GNOME project in the week from March 15 to March 22. This week we released GNOME 46! This new major release of GNOME is full of exciting changes, such as a new global file search, an enhanced Files app, improved online accounts with OneDrive support, remote login via RDP, improved accessibility, experimental variable refresh rate (VRR) support and so much more! See the GNOME 46 release notes and developer notes for more information. Readers who have been following this site will already be aware of some of the new features. If you’d like to follow the development of GNOME 47 (Fall 2024), keep an eye on this page - we’ll be posting exciting news every week! Sovereign Tech Fund Sonny announces As part of the GNOME STF (Sovereign Tech Fund) initiative, a number of community members are working on infrastructure-related projects. Besides helping with the GNOME 46 release (congrats everyone!), here are the highlights for the past week. This week we welcome Jerry, Tom, Neill and Jude of Codethink into the team. Jerry and Tom got started with finishing “sysupdate: Implement dbus service”. This will allow apps such as GNOME Software, KDE Discover, … to support systemd-sysupdate. Neill got started making GNOME openQA more robust with a needle cleanup script for unused/expired needles and work on “GNOME Shell sometimes fails to start”. Julian implemented 9 new properties for notifications in xdg-desktop-portal such as icon (via fd), sound, actions, markup-body, … Julian worked on making notifications in xdg-desktop-portal forward compatible by allowing unknown properties. Dorota is working on an interface for global shortcuts in Mutter/GNOME Shell suitable for the global shortcuts portal (except listing shortcuts). Dhanuka has been testing the Rust DBus Secret Service provider implementation in oo7 to replace GNOME Keyring. Jonas made improvements in audio integration #25 and #26. Alice resumed work on CSS custom properties / variables support in GTK; animations are now supported. Andy made a prototype to allow opening URLs with apps. The goal is for an app such as GNOME Maps to advertise support for and handle openstreetmap.org or google.com/maps URLs. GNOME Core Apps and Libraries GLib The low-level core library that forms the basis for projects such as GTK and GNOME. Philip Withnall announces Christian Hergert added support for sub-millisecond timeouts in GLib using ppoll() (https://gitlab.gnome.org/GNOME/glib/-/merge_requests/3958) Philip Withnall reports Sudhanshu Tiwari has made a start on porting some of the GIO documentation comments to gi-docgen in https://gitlab.gnome.org/GNOME/glib/-/merge_requests/3969 Emmanuele Bassi reports JSON-GLib, the library for parsing and generating JSON data, is now capable of strict compliance with the JSON specification. To avoid breaking backward compatibility, strictness must be explicitly enabled by setting the JsonParser:strict property, or using the --strict option for the json-glib-validate command line tool. To enforce strict compliance, JSON-GLib now includes a whole JSON conformance test suite. GNOME Incubating Apps Sophie (she/her) announces Decibels has been accepted into the GNOME Incubator. The GNOME incubation process is for apps that are designated to be accepted into GNOME Core or GNOME Development Tools if they reach the required maturity.
Decibels is a basic audio player that is supposed to fill the gap of GNOME currently not having a Core app that is designed to open single audio files. The incubation progress will be tracked in this issue. GNOME Circle Apps and Libraries Tobias Bernard reports Railway has been accepted into GNOME Circle. It allows you to easily look up travel information across rail networks and borders without having to use multiple different websites. Congratulations! Workbench A sandbox to learn and prototype with GNOME technologies. Sonny announces Workbench 46 is out on Flathub! Here are the highlights. Everybody is excited about them so I’ll start by saying you can try libadwaita 1.5 adaptive dialogs with the new “Dialog” and “Message Dialogs” demos in the Library. Workbench now shows inline diagnostics for Rust and Python. A new Library demo “Snapshot” was added to demonstrate one of GTK 4’s coolest features. 26 additional demos have been ported to Python, and 5 additional demos have been ported to Vala. The GNOME 46 release notes include all the changes between Workbench 45 and 46. Thank you to all contributors! Fretboard Look up guitar chords Brage Fuglseth reports Happy release week! Like many other apps, Fretboard has been updated to the GNOME 46 platform, taking advantage of the many platform improvements that have happened this cycle. It also recently gained the ability to notify you when there are no available variants of a chord in its internal chord set, prompting you to reach out and help improve it. As always, you can get Fretboard on Flathub. Third Party Projects robert.mader announces Livi 0.1.0 is now available on Flathub. Bundled with GStreamer 1.24 and built against GTK 4.14, it is the first desktop-targeting app to enable zero-copy video playback by default in the Wayland ecosystem. Doing so enables highly power-efficient playback, closing the gap to other OSs or embedded environments. We expect quite a few people to hit driver bugs in the beginning - so in order to pave the way for other apps to pick up the technology, please help by testing on your devices :) Alain reports Planify has received several updates this week, including bug fixes and design enhancements. As part of the effort to apply for GNOME Circle, the user interface has been updated with new icons, design elements, and typography. What’s new: Performance of synchronization with Nextcloud has been improved. It’s now possible to select the Pinboard view as the homepage. You can now add a task to the Pinboard view from the contextual menu. Various reported bugs have been fixed. Akshay Warrier reports Biblioteca 1.3 is now available on Flathub! This release comes with several additions and improvements such as: added docs for GLib/Gio/GObject, added support for web content, improved searching UI, added support for keyboard navigation in the sidebar, added zoom buttons to the primary menu, and added shortcuts to view open tabs and toggle the sidebar. Markus Göllnitz announces Rumour has it there was a recent release of Usage – complete with leaked release screenshots. So far, it looks like it features an indicator for applications running in the background. Apparently, it is even displaying individual Android applications when you run Waydroid, now. That is something. On top of it, I would say, the split of the performance view into processor and memory and the subsequent use of flat header bars works quite well.
Find it at a distro near you. FineFindus says I’m happy to announce the first release of Hieroglyphic, a forked and updated version of TeX-Match, which helps to find LaTeX symbols by drawing them. It’s available for download on Flathub. Kooha Elegantly record your screen. Dave Patrick says Kooha 2.3 is now released on Flathub! While there are no groundbreaking new features, this release is focused more on fixes and quality-of-life improvements. The following features and fixes are the highlights: The area selector window is now resizable, making selecting an area more flexible. The previously selected area is now remembered across sessions. The current video format and FPS configurations are now visible in the main view. The recording done notification now shows the duration and size of the recording. Progress is now shown while flushing the recording. Recording in stereo rather than in mono is now preferred. Audio stutters on long recordings are now properly fixed. The preferences dialog is now more descriptive and provides a more convenient FPS selection box. Incorrect recording orientation on certain compositors is now fixed. For a more detailed changelog, check out the full release notes. Flare Chat with your friends on Signal. schmiddi reports Flare version 0.14.1 was released. This release includes updating the dialogs to the new Adwaita adaptive dialogs. Furthermore, we also have a new “new channel” dialog and channel information dialog. This release also contains a hotfix for newly linked devices not working with groups and another minor fix for an error in certain groups. Blueprint A markup language for app developers to create GTK user interfaces. Sonny announces Blueprint, the markup language and tooling for GTK, is out in version 0.12. Here are the highlights ✨ a brand-new formatter to keep files tidy; AdwAlertDialog is supported; warnings are emitted for deprecated features in GTK, GLib, etc.; and new IDE integration features: document symbols, “Go to definitions”, and a code action for importing a missing namespace. We also celebrate 70 applications on Flathub built with Blueprint. Events Deepesha Burse reports The deadline for the GUADEC 2024 Call for Participation is closing soon! This year’s conference will take place in Denver, Colorado, from July 19th to July 24th, and we encourage all interested contributors, speakers, and participants to submit their proposals before the deadline on 24th March. This is an excellent opportunity to share your insights, experiences, and ideas with the GNOME community and contribute to the success of GUADEC 2024. Please visit guadec.org to submit your proposals. If you have any questions or need assistance, feel free to reach out to the organizing committee at guadec@gnome.org. We look forward to receiving your submissions and seeing you at GUADEC 2024 in Denver and online! That’s all for this week! See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
  • Michael Meeks: 2024-03-20 Wednesday (2024/03/20 21:00)
    Up early, out for a run with J. Early customer call, sync with Lily, mail chew, all hands meeting. Poked at socket lifecycle management, and SocketPoll workings in much more detail. Band practice, worked until late.
  • Sam Thursfield: Status update, 20/03/2024 – TinySPARQL and Tracker Miners (2024/03/20 16:59)
    GNOME 46 was just released, and with it come TinySPARQL 3.7 (aka Tracker SPARQL) and Tracker Miners 3.7. Here’s what I’ve been involved with this month in those projects. Google Summer of Code It wasn’t my intention to prepare another internship before the last one was even finished. It seems that in GNOME we have fewer projects and mentors than ever – only eight ideas this year, compared to fourteen confirmed projects back in 2020. So I proposed an idea for TinySPARQL, and here we are. The idea, in brief: I’ve been working a bit with GraphQL recently, which doesn’t live up to the hype, but does have nice query frontends such as GraphQL Playground and graphiql that let you develop and test queries in realtime. This is a screenshot of graphiql. In TinySPARQL, we have a commandline tool tracker3 sparql which can run queries and print the results. This is handy for developing and testing queries independently of the app logic, but it’s only useful if you’re already something of a SPARQL expert. What if TinySPARQL had a web interface similar to the GraphQL Playground? Besides running queries and showing the output, this could have example queries, resource browsing, as-you-type error checks, integrated documentation, and more fun things listed in this issue. My hope is this would encourage more folk to play around with the data by running interesting queries, and would help to visualize what you can do with a detailed metadata index for your local content. I think a lot of people see Tracker Miner FS as a black box that does basic string matching, and not the flexible database that it actually is. Lots of schools teach HTML and JavaScript, so this project seems like a great opportunity for an intern to take ownership of and show their skills. Applications are open until 2nd April, and we’ll be running a couple of online meetups later this week (Thursday 21st and/or Friday 22nd March) to help you create a good application. Join the #tracker:gnome.org Matrix room if you’re interested. By the way, it’s only recently been possible to separate your queries from the rest of your app’s code. I wrote about this here: Standalone SPARQL Queries. The TrackerSparqlStatement class is flexible and fun, and you can read your SPARQL statements straight from a GResource file. If you used libtracker-sparql around 1.x you’ll remember a horrible thing named TrackerSparqlBuilder – the query developer experience has come a long way since then. New security features There are some new features this cycle thanks to hard work by Carlos. I’ll let him write up the fun parts. One part that’s not much fun is the increased security protections for tracker-extract. The background here is that tracker-extract uses many different media parsing libraries, and if any one of those libraries shipped by your distro contains a vulnerability, that could potentially be exploited by getting you to download a malicious file which would then be processed by tracker-extract. We have no evidence that anyone’s ever actually done this. But there was a writeup recently on how it could happen using a vulnerability in a library named libcue which nobody is maintaining, including a clever bypass of the existing SECCOMP protection. Carlos did a writeup of this on his blog: On CVE-2023-43641. With Tracker Miners 3.7, Carlos extended the existing SECCOMP sandbox to cover the entire extractor process rather than just the processing thread, which prevents that theoretical line of attack.
And, he added an additional layer of sandboxing using a new kernel API called Landlock, which lets a process block itself from accessing any files except those it specifically needs. From my perspective it’s rather draining to help maintain the sandboxing. When it works, nobody notices. When the sandboxing causes issues, we hear about it straight away. And there are plenty of issues! Even the build-time configuration for Landlock seems to need hours of debate. SECCOMP works by denying access to any kernel APIs except those legitimately needed by the extractor process and the libraries it uses. Linux has 450+ syscalls and counting, and we maintain an explicit allowlist. Any change to GLibc, GIO, GStreamer or any media parsing library may then change what syscall gets used. If an unexpected syscall is called, the tracker-extract process is killed with SIGSYS, which gets reported as a crash in just the same way as segfaults caused by programming errors. It’s draining to support something that can break randomly because of things that are out of our control. What else can we do though? What’s next? It might seem like openQA testing and desktop search are unrelated, but there is a clear connection. Making reproducible integration tests for a search engine is a very hard problem. Back last decade I worked on the project’s Gitlab CI setup and “functional tests”. These tests live in the tracker-miners.git source tree, and run the real crawler and extractor, testing that we can create a file named hello.txt, wait for it to be indexed and search for its contents. Quite a step forwards from unreproducible “works on my machine” testing that came before, but not representative of real use cases. Real GNOME users do not have a single file in their home dir named hello.txt. Rather they have GBs or TBs of content to be indexed, and they have expectations about what constitutes the “best match” for a given search term. I’m not interested in working to solve this kind of thing until we can build regression tests so that things don’t just work, but keep working in the long term. Hence, the work-in-progress gnome_search test for openQA, and the example-desktop-content repo. This is at the “working prototype” stage, and is now ready for some deeper thinking about what specific scenarios we want to test. Some other things that may or may not happen next cycle in desktop search, depending on whether people care to help push them forwards: beginning the rename: this won’t happen all at once, but we want to start calling the database TinySPARQL, and the indexer something else, still to be decided. (Ideas welcome!) a ‘limiter’ to detect when a directory contains so much content that the indexer would burn significant CPU and IO resource trying to index everything up front (which requires corresponding UI changes so that there’s a way to “opt in” to indexing such locations on demand) indexing the whole $HOME directory (which I personally don’t want to land without the ‘limiter’ in place, but let’s see) One thing is certain: next month things are going to slow down for me… I’m on holiday for two full weeks over Easter, spring is coming and I plan to spend most of my time relaxing in a hammock. Hopefully we’ve sown a lot of seeds this month which will soon turn into flowers.
  • Jussi Pakkanen: Color management and API design (2024/03/20 15:52)
    API design is hard. This is not a smashingly new revelation, but let's look at a sample issue I have been working on for CapyPDF. The main problem we are trying to solve is creating "print quality" PDFs. That is, ones that can be used to print things like books, magazines, posters and other high quality materials. A core component of this is color management, specifically the handling of ICC profiles for raster images. There are at least four slightly conflicting design goals. Fine-grained control: An advanced user knows and understands the PDF spec and knows exactly how they want it to come out. The library should provide for this and not do, for example, unexpected color conversions behind the user's back. Easy to use for basic cases: OTOH, if your needs are simple, you just want to load images from files on disk and convert them to the output colorspace (almost certainly CMYK) with minimal fuss. Simplicity: The API should be simple and readable. Even more importantly it should be understandable in the sense that when the user calls certain functions, they should be able to "know" what is going to happen and the behaviour should be the same over multiple invocations. Safety: The API should prevent you from doing invalid things, such as using an uncalibrated RGB image in a CMYK document. A wild real world appears! Thus far things seem simple, but they get awfully complex. PDF is used in many different ways and all of those have their own requirements. For high quality printing specifically there is a specification called PDF/X that many printing shops use. Some might not even accept material that is not in this format. One of the requirements of PDF/X is that all raster images must be color managed. It would seem that a simple approach would be to convert all images to the output color space on load. And this is where things break down. For you see, PDF does not have a single color managed pipeline; logically it has two. Grayscale images are "different" from full color images. A PDF generator must never convert grayscale raster images (or colors in general, but we'll focus on images now) to "color" images. Not even if the end result were "mathematically equivalent". In high quality printing that is not enough. Suppose you have a pixel whose gray value is 10. Converting that to CMYK can lead to (at least) two different values, (10, 10, 10, 0) and (0, 0, 0, 10). You'd think that the latter would always happen, but in testing LittleCMS produced the former (it also has custom gray-preserving transforms, but I did not try those). Even though these values are mathematically equivalent they (may) produce different output when printed. The latter is pure gray while the former can look muddled, and if there are any registration problems the constituent colors might be visible. The RIP cannot know whether the "grayscale looking color" was intentional or not. Under some circumstances it might be exactly what the creator intended, thus it can't really be post processed away. The only correct way is to keep the image in the gray color space so the RIP has maximal information to do its thing. But this causes its own problem, because most grayscale images are not color managed. What should you do with those? Requiring color profiles would not be a nice UI, because then most images would break. For 1-bit grayscale images a color profile would not even make any sense.
Not to mention that the grayscale image might not be printed at all but is instead used as an image mask for graphics composition operations (basically it would be used as the alpha channel). In that case you definitely want to use raw pixel values to obtain linear mixing. Doing gamma correction on your transparency channel could lead to some funky effects. Things get more complicated once you realize that there are 7 variations of PDF/X that permit and prohibit different things. I tried to work out the workflow by writing a full table on color modes and output spaces and what should happen with every combination. Halfway through I got a headache and had to stop. Current status: The original plan was to make things happen automatically and try to validate the semantics of the output document as much as possible. That got simplified a whole lot. Because the state space is just so massive, it might turn out that eventually CapyPDF only provides you the tools to do color conversions yourself and then writes out the result without trying to do anything fancy to it. It would then be the responsibility of the user to validate all semantic requirements. All of this is to say that if you are currently using CapyPDF, just be aware that in the next version all APIs dealing with raster images have changed completely.
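To make the grayscale ambiguity concrete, here is a small illustrative Python sketch (not CapyPDF code, just the arithmetic from the example earlier in the post) showing the two CMYK encodings of the same 8-bit gray value:

# Two CMYK encodings of the same gray pixel value. They are "mathematically
# equivalent" in the sense described above, yet they behave differently on
# press: the first puts ink on the C, M and Y plates, the second only on K.
def gray_as_rich_cmyk(gray):
    # What a generic colorimetric conversion may produce.
    return (gray, gray, gray, 0)

def gray_as_black_only(gray):
    # The pure black-plate encoding a print shop usually expects for gray.
    return (0, 0, 0, gray)

print(gray_as_rich_cmyk(10))   # (10, 10, 10, 0)
print(gray_as_black_only(10))  # (0, 0, 0, 10)

A PDF generator cannot tell from the numbers alone which encoding the author intended, which is why keeping grayscale images in their own color space, as argued above, preserves the most information for the RIP.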
  • Ondřej Holý: What’s new in GVfs for GNOME 46? (2024/03/20 07:05)
    It has been 3 years since my last post with release news for GVfs. This is mainly because previous releases were more or less just bug fixes. In contrast, GVfs 1.54 comes with two new backends. Let’s take a look at them. OneDrive One of the backends adds OneDrive support thanks to Jan-Michael Brummer. This requires setting up a Microsoft 365 account through the Online Accounts panel in the Settings application. Then the OneDrive share can be accessed from the sidebar of the Files application. However, creating the account is a bit tricky now. You need to register on the Microsoft Entra portal to get a client ID. The specific steps can be found in the gnome-online-accounts#308 issue. Efforts are underway to register a client ID for GNOME, so this step will soon be unnecessary. WS-Discovery The other backend brings WS-Discovery support. It automatically discovers the shared SMB folders of the Windows devices available on your network. You can find them in the Other Locations view of the Files application. This has not worked since the NT1 protocol was deprecated. For more information on this topic, see my previous post. You won’t find the Windows Network folder in the Other Locations view; all the discovered shares are directly listed in the Networks section now. Finally, I would like to thank all the GVfs contributors. Let me know in the comments if you like the new backends. I hope the next releases will also bring some great news.
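For application developers, the nice thing about these backends is that nothing GVfs-specific is needed: the discovered shares are reached through ordinary GIO URIs. Here is a rough, untested Python sketch of mounting and listing one of the SMB shares; the hostname and share name are made up for the example.

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

loop = GLib.MainLoop()
# A share advertised via WS-Discovery shows up as a plain smb:// location.
share = Gio.File.new_for_uri("smb://desktop-pc.local/Public")

def on_mounted(source, result):
    try:
        source.mount_enclosing_volume_finish(result)
        # Once the GVfs SMB backend has mounted it, the share can be
        # enumerated like any local directory.
        enumerator = source.enumerate_children(
            "standard::name", Gio.FileQueryInfoFlags.NONE, None)
        info = enumerator.next_file(None)
        while info is not None:
            print(info.get_name())
            info = enumerator.next_file(None)
    except GLib.Error as err:
        print("Mount failed:", err.message)
    loop.quit()

share.mount_enclosing_volume(Gio.MountMountFlags.NONE,
                             Gio.MountOperation(), None, on_mounted)
loop.run()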
  • Michael Meeks: 2024-03-19 Tuesday (2024/03/19 21:00)
    Mail chew, planning call, sync with Naomi & Lily, lunch, monthly mgmt meeting, poked at some overdue 24.04 coding treats - discovered a corner-case performance issue by lengthening our fallback poll loop waits substantially; fun.
  • Arun Raghavan: Asymptotic: A 2023 Review (2024/03/19 14:54)
    It’s been a busy several months, but now that we have some breathing room, I wanted to take stock of what we have done over the last year or so. This is a good thing for most people and companies to do of course, but being a scrappy, (questionably) young organisation, it’s doubly important for us to introspect. This allows us to both recognise our achievements and ensure that we are accomplishing what we have set out to do. One thing that is clear to me is that we have been lagging in writing about some of the interesting things that we have had the opportunity to work on, so you can expect to see some more posts expanding on what you find below, as well as some of the newer work that we have begun. (note: I write about our open source contributions below, but needless to say, none of it is possible without the collaboration, input, and reviews of members of the community) WHIP/WHEP client and server for GStreamer If you’re in the WebRTC world, you likely have not missed the excitement around standardisation of HTTP-based signalling protocols, culminating in the WHIP and WHEP specifications. Tarun has been driving our client and server implementations for both these protocols, and in the process has been refactoring some of the webrtcsink and webrtcsrc code to make it easier to add more signaller implementations. You can find out more about this work in his talk at GstConf 2023 and we’ll be writing more about the ongoing effort here as well. Low-latency embedded audio with PipeWire Some of our work involves implementing a framework for very low-latency audio processing on an embedded device. PipeWire is a good fit for this sort of application, but we have had to implement a couple of features to make it work. It turns out that doing timer-based scheduling can be more CPU intensive than ALSA period interrupts at low latencies, so we implemented an IRQ-based scheduling mode for PipeWire. This is now used by default when a pro-audio profile is selected for an ALSA device. In addition to this, we also implemented rate adaptation for USB gadget devices using the USB Audio Class “feedback control” mechanism. This allows USB gadget devices to adapt their playback/capture rates to the graph’s rate without having to perform resampling on the device, saving valuable CPU and latency. There is likely still some room to optimise things, so expect to hear more on this front soon. Compress offload in PipeWire Sanchayan has written about the work we did to add support in PipeWire for offloading compressed audio. This is something we explored in PulseAudio (there’s even an implementation out there), but it’s a testament to the PipeWire design that we were able to get this done without any protocol changes. This should be useful in various embedded devices that have both the hardware and firmware to make use of this power-saving feature. GStreamer LC3 encoder and decoder Tarun wrote a GStreamer plugin implementing the LC3 codec using the liblc3 library. This is the primary codec for next-generation wireless audio devices implementing the Bluetooth LE Audio specification. The plugin is upstream and can be used to encode and decode LC3 data already, but will likely be more useful when the existing Bluetooth plugins to talk to Bluetooth devices get LE audio support. QUIC plugins for GStreamer Sanchayan implemented a QUIC source and sink plugin in Rust, allowing us to start experimenting with the next generation of network transports.
For the curious, the plugins sit on top of the Quinn implementation of the QUIC protocol. There is a merge request open that should land soon, and we’re already seeing folks using these plugins. AWS S3 plugins We’ve been fleshing out the AWS S3 plugins over the years, and we’ve added a new awss3putobjectsink. This provides a better way to push small or sparse data to S3 (subtitles, for example), without potentially losing data in case of a pipeline crash. We’ll also be expecting this to look a little more like multifilesink, allowing us to arbitrarily split up data and write to S3 directly as multiple objects. Update to webrtc-audio-processing We also updated the webrtc-audio-processing library, based on more recent upstream libwebrtc. This is one of those things that becomes surprisingly hard as you get into it — packaging an API-unstable library correctly, while supporting a plethora of operating system and architecture combinations. Clients We can’t always speak publicly of the work we are doing with our clients, but there have been a few interesting developments we can (and have) spoken about. Both Sanchayan and I spoke a bit about our work with WebRTC-as-a-service provider, Daily. My talk at the GStreamer Conference was a summary of the work I wrote about previously about what we learned while building Daily’s live streaming, recording, and other backend services. There were other clients we worked with during the year with similar experiences. Sanchayan spoke about the interesting approach to building SIP support that we took for Daily. This was a pretty fun project, allowing us to build a modern server-side SIP client with GStreamer and SIP.js. An ongoing project we are working on is building AES67 support using GStreamer for FreeSWITCH, which essentially allows bridging low-latency network audio equipment with existing SIP and related infrastructure. As you might have noticed from previous sections, we are also working on a low-latency audio appliance using PipeWire. Retrospective All in all, we’ve had a reasonably productive 2023. There are things I know we can do better in our upstream efforts to help move merge requests and issues, and I hope to address this in 2024. We have ideas for larger projects that we would like to take on. Some of these we might be able to find clients who would be willing to pay for. For the ideas that we think are useful but may not find any funding, we will continue to spend our spare time to push forward. If you made it this far, thank you, and look out for more updates!
  • Sam Thursfield: Status update, 19/03/2024 – GNOME OS and openQA (2024/03/19 09:46)
    Looking back at this month, it’s been very busy indeed. In fact, so busy that I’m going to split this status update into two parts. Possibly this is the month where I worked the whole time yet did almost no actual software development. I guess it’s part of getting old that you spend more time organising people and sharing knowledge than actually making new things yourself. But I did still manage one hack I’m very proud of, which I go into below. GNOME OS & openQA testing This month we wrapped up the Outreachy internship on the GNOME OS end-to-end tests. I published a writeup here on the Codethink blog. You can also read first hand from Dorothy and Tanju. We had a nice end-of-internship party on meet.gnome.org last week, and we are trying to arrange US visas & travel sponsorship so we can all meet up at GUADEC in Denver this summer. There are still a few loose ends from the internship. Firstly, if you need people for Linux QA testing work, or anything else, please contact Dorothy and Tanju who are both now looking for jobs, remote or otherwise. Secondly, the GNOME OS openQA tests currently fail about 50% of the time, so they’re not very useful. This is due to a race condition that causes the initial setup process to fail so the machine doesn’t finish booting. It’s the sort of problem that takes days rather than hours to diagnose so I haven’t made much of a dent in it so far. The good news is, as announced on Friday, there is now a team from Codethink working full time on GNOME OS, funded partly by the STF grant and partly by Codethink, and this issue is high on the list of priorities to fix. I’m not directly involved in this work, as I am tied up on a multi-year client project that has nothing to do with open source desktops, but of course I am helping where I can, and hopefully the end-to-end tests will be back on top form soon. ssam_openqa If you’ve looked at the GNOME openQA tests you’ll see that I wrote a small CLI tool to drive the openQA test runner and egotistically named it after myself. (The name also makes it clear that it’s not something built or supported by the main openQA project). I used Rust to write ssam_openqa so it’s already pretty reliable, but until now it lacked proper integration tests. The blocker was this: openQA is for testing whole operating systems, which are large and slow to deploy. We need to use the real openQA test runner in the ssam_openqa tests, otherwise the tests don’t really prove that things are working, but how can we get a suitably minimal openQA scenario? During some downtime on my last trip to Manchester I got all the pieces in place. First, a Buildroot project to build a minimal Linux ISO image, with just four components: bootloader (isolinux) kernel (Linux) libc (musl) shell and tools (Busybox) The output is 6MB – small enough to commit straight to Git. (By the way, using the default GNU libc the image size increases to 15MB!) The next step was to make a minimal test suite. This is harder than it sounds because there isn’t documentation on writing openQA test suites “from the ground up”. By copying liberally from the openSUSE tests I got it down to the following pieces: config/scenario_definitions.yaml, with QEMU and machine config.
lib/minimaldistribution.pm, defining two consoles that use QEMU’s virtio terminal interface lib/serial_terminal.pm, which implements a helper function to login as root on the virtio terminal tests/minimal.pm, a simple test that runs a command and asserts that it exits with code 0 (success) main.pm, the entry point for the test runner. That’s it – a fully working test that can boot Linux and run commands. The whole point is we don’t need the openQA web UI in this workflow, so it’s much more comfortable for commandline-driven development. You can of course run the same testsuite with the web UI when you want. The final piece for ssam_openqa is some integration tests that trigger these tests and assert they run, and we get suitable messages from the frontend, for now implemented in tests/real.rs. I’m happy that ssam_openqa is less likely to break now when I hack on it, but I’m actually more excited about having figured out how to make a super minimal openQA test. The openQA test runner, isotovideo, does something similar in its test suite using TinyCore Linux. I didn’t reuse this; firstly because it uses the Linux framebuffer console instead of virtio terminal, which makes it impossible to read output of the commands in tests; and secondly because it’s got a lot of pieces that I’m not interested in. You can anyway see it here. Have you used ssam_openqa yet? Let me know if you have! It’s fun to be able to interact with and debug an entire GNOME OS VM using this tool, and I have a few more feature ideas to make this even more fun in future.
  • Christian Schaller: PipeWire camera handling is now happening! (2024/03/15 16:30)
    We hit a major milestone this week with the long-worked-on adoption of PipeWire camera support finally starting to land! Not long ago Firefox was released with experimental PipeWire camera support thanks to the great work by Jan Grulich. Then this week OBS Studio shipped with PipeWire camera support thanks to the great work of Georges Stavracas, who cleaned up the patches and pushed to get them merged based on earlier work by himself, Wim Taymans and Columbarius. This means we now have two major applications out there that can use PipeWire for camera handling and thus two applications whose video streams can be interacted with through patchbay applications like Helvum and qpwgraph. These applications are important and central enough that having them use PipeWire is in itself useful, but they will now also provide two examples of how to do it for application developers looking at how to add PipeWire camera support to their own applications; there is no better documentation than working code. The PipeWire support is also paired with camera portal support. The use of the portal also means we are getting closer to being able to fully sandbox media applications in Flatpaks, which is an important goal in itself. Which reminds me, to test out the new PipeWire support be sure to grab the official OBS Studio Flatpak from Flathub. PipeWire camera handling with OBS Studio, Firefox and Helvum. Let me explain what is going on in the screenshot above as it is a lot. First of all you see Helvum there on the right showing all the connections made through PipeWire, both the audio and, in yellow, the video. So you can see how my Logitech BRIO camera is feeding a camera video stream into both OBS Studio and Firefox. You also see my Magewell HDMI capture card feeding a video stream into OBS Studio and finally gnome-shell providing a screen capture feed that is being fed into OBS Studio. On the left you see on the top Firefox running their WebRTC test app capturing my video, then just below that you see the OBS Studio image with the direct camera feed on the top left corner, the screencast of Firefox just below it and finally the ‘no signal’ image is from my HDMI capture card since I had no HDMI device connected to it as I was testing this. For those wondering, work is also underway to bring this into Chromium and Google Chrome browsers where Michael Olbrich from Pengutronix has been pushing to get patches written and merged; he did a talk about this work at FOSDEM last year as you can see from these slides, with this patch being the last step to get this working there too. The move to PipeWire also prepared us for the new generation of MIPI cameras being rolled out in new laptops and helps push work on supporting those cameras towards libcamera, the new library for dealing with the new generation of complex cameras. This of course ties well into the work that Hans de Goede and Kate Hsuan have been doing recently, along with Bryan O’Donoghue from Linaro, on providing an open source driver for MIPI cameras and of course the incredible work by Laurent Pinchart and Kieran Bingham from Ideas on Board on libcamera itself. The PipeWire support is of course fresh and I am sure we will find bugs and corner cases that need fixing as more people test out the functionality in both Firefox and OBS Studio, and there are some interface annoyances we are working to resolve.
For instance, since PipeWire supports both V4L and libcamera as a backend you do atm get double entries in your selection dialogs for most of your cameras. Wireplumber has implemented de-duplication code which will ensure only the libcamera listing will show for cameras supported by both v4l and libcamera, but it is only part of the development version of Wireplumber and thus it will land in Fedora Workstation 40, so until that is out you will have to deal with the duplicate options. Camera selection dialog We are also trying to figure out how to better deal with infrared cameras that are part of many modern webcams. Obviously you usually do not want to use an IR camera for your video calls, so we need to figure out the best way to identify them and ensure they are clearly marked and not used by default. Another good recent PipeWire tidbit: with the PipeWire 1.0.4 release, PipeWire maintainer Wim Taymans also fixed up the FireWire FFADO support. The FFADO support had been in there for some time, but after seeing Venn Stone do some thorough tests and find issues we decided it was time to bite the bullet and buy some second hand Firewire hardware for Wim to be able to test and verify himself. Focusrite firewire device. Once the Focusrite device I bought landed at Wim’s house he got to work and cleaned up the FFADO support and made it both work and be performant. For those unaware, FFADO is a way to use Firewire devices without going through ALSA and is popular among pro-audio folks because it gives lower latencies. Firewire is of course a relatively old technology at this point, but the audio equipment is still great and many audio engineers have a lot of these devices, so with this fixed you can plop a Firewire PCI card into your PC and suddenly all those old Firewire devices get a new lease on life on your Linux system. And you can buy these devices on places like ebay or facebook marketplace for a fraction of their original cost. In some sense this demonstrates the same strength of PipeWire as the libcamera support: in the libcamera case it allows Linux applications a way to smoothly transition to a new generation of hardware, and in this Firewire case it allows Linux applications to keep using older hardware with new applications. So all in all it’s been a great few weeks for PipeWire and for Linux Audio AND Video, and if you are an application maintainer be sure to look at how you can add PipeWire camera support to your application and of course get that application packaged up as a Flatpak for people using Fedora Workstation and other distributions to consume.
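For application developers wondering what the portal side of this looks like, here is a rough, untested Python sketch using libportal (the Xdp introspection namespace). The app asks the camera portal for access and then receives a PipeWire remote file descriptor that its media stack (for example a GStreamer pipewiresrc) can use to find and capture the camera nodes; error handling is simplified here.

import gi
gi.require_version("Xdp", "1.0")
from gi.repository import GLib, Xdp

loop = GLib.MainLoop()
portal = Xdp.Portal.new()

def on_camera_access(portal, result):
    if portal.access_camera_finish(result):
        # A PipeWire connection restricted to camera nodes; hand this fd to
        # your media stack (e.g. a GStreamer pipewiresrc "fd" property).
        fd = portal.open_pipewire_remote_for_camera()
        print("PipeWire camera remote fd:", fd)
    else:
        print("Camera access was denied")
    loop.quit()

# Ask the camera portal for permission; the desktop shows the usual dialog.
portal.access_camera(None, Xdp.CameraFlags.NONE, None, on_camera_access)
loop.run()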
  • Alice Mikhaylenko: Libadwaita 1.5 (2024/03/15 15:58)
    Well, another cycle has passed. This one was fairly slow, but nevertheless has a major new feature. Adaptive Dialogs The biggest feature this time is the new dialog widgetry. Traditionally, dialogs have been separate windows. While this approach generally works, we never figured out how to reasonably support that on mobile. There was a downstream patch for auto-maximizing dialogs, which in turn required them to be resizable, which is not great on desktop, and the patch was hacky and never really supported upstream. Another problem is close buttons – we want to keep them in dialogs instead of needing to go to overview to close every dialog, and that’s why mobile gnome-shell doesn’t hide close buttons at all atm. Ideally we want to keep them in dialogs, but be able to remove them everywhere else. While it would be possible to have shell present dialogs differently, another approach is to move them to the client instead. That’s not a new approach; here are some existing examples: This has both upsides and downsides. One upside is that the toolkit/app has much more control over them. For example, it’s very easy to ensure their size doesn’t exceed the parent window. While this is possible with windows (AdwMessageDialog does this), it’s hacky and can still break fairly easily with e.g. maximize – in fact, I’m not confident it works across compositors and in both Wayland and X11. Having dialogs not exceed the parent’s size means not needing to limit their size quite so aggressively – previously it was needed so that the dialog doesn’t get ridiculously large on top of a small window. The dimming behind the dialog can also vary between light and dark styles – shell cannot do that because it doesn’t know if this particular window is light or dark, only what the whole system prefers. In future this should also make it possible to support per-tab dialogs. For apps like web browsers, a background tab spawning a dialog that takes over the whole window is not great. Meanwhile the main downside is the same thing as was listed in upsides: these dialogs cannot exceed the parent window’s size. Sometimes it’s still needed, e.g. if the parent window is really small. Bottom Sheets So, how does that help on mobile? Well, aside from just implementing the existing size constraints on AdwMessageDialog more cleanly, it allows presenting these dialogs as bottom sheets on mobile, instead of centered floating sheets. A previous design presented dialogs as pages with back buttons, but that had many other problems, especially on small windows on desktop. For example, what happens if you close the window? A dialog and a “regular” subpage would look identical, so you’d probably expect the close button to close the entire window? But if it’s floating above a larger window? Bottom sheets avoid this issue – you still see the parent window with its own close button, so it’s obvious that they are closed separately – while still being allowed to take full width like a subpage. They can also be swiped down, though because of GTK limitations this does not work together with scrolling content. It’s still possible to swipe down from header bar or the empty space above the sheet. And the fact they are attached to the bottom edge makes them easier to reach on huge phones. Meanwhile, AdwHeaderBar always shows a close button within dialogs, regardless of the system layout. The only hint it takes from the system is whether to display the close button on the right or left side. API For the most part they are used similarly to GtkWindow.
The main differences are with presenting and closing dialogs. The :transient-for property has been replaced with a parameter in adw_dialog_present(). It also doesn’t necessarily take a window anymore, but can accept any widget within that window as well. Currently it just fetches the root widget, but once we have per-tab dialogs, that can be controlled with a simple flag instead of needing a new variant of adw_tab_present() that would take a tab page instead of a window. The ::close-request signal has been replaced as well. Because the dialogs can be swiped down on mobile, we need to know if they can be closed before the gesture starts. So, instead there’s a :can-close property that apps set ahead of time if there’s unsaved data or some other reason to prevent closing. For close confirmation, there’s a ::close-attempt signal, which will be fired when trying to close a dialog using a close button or a shortcut while :can-close is set to FALSE (or calling adw_dialog_close()). For actual closing, there’s ::closed instead. Finally, adw_dialog_force_close() closes the dialog while ignoring :can-close. It can be used to close the dialog after confirmation without needing to fiddle with :can-close or repeat ::close-attempt emissions. If this works well, AdwWindow may have something similar in future. The rest is fairly straightforward and is modelled after GtkWindow. See AdwDialog docs and migration guide for more details. Since AdwPreferencesWindow and other widgets can’t be ported to new dialogs without a significant API break, they have been replaced: AdwPreferencesWindow with AdwPreferencesDialog AdwAboutWindow with AdwAboutDialog AdwMessageDialog with AdwAlertDialog For the most part they are identical, with a few differences: AdwPreferencesDialog has search disabled by default, and gets rid of deprecated subpage API AdwAlertDialog can scroll contents, so apps that add their own scrolled windows may want to remove them Since the new widgets landed right at the end of the cycle, the old widgets are not deprecated yet. However, they will be deprecated next cycle, so it’s recommended to migrate your apps anyway. Standalone bottom sheets (like in audio players) are not available yet either, but will be in future. Esc to Close Traditionally, dialogs have been done via GtkDialog which handled this automatically. But for the last few years, apps have been steadily moving away from GtkDialog and by now it’s deprecated. While that’s not really a problem on its own, one thing that GtkDialog was doing automatically and custom dialogs don’t is closing when pressing Esc. While it’s pretty easy to add that manually, a lot of apps forget to do so. But since we have dedicated dialog API again, Esc to close is once again automatic. What about standalone dialogs? Some dialogs don’t have a parent window. Those are still presented as a window. Note that it still doesn’t work well on mobile: while there will be a close button, the sizing will work just as badly as before, so it’s recommended to avoid them. Dialogs will also be presented as a window if you try to add them to a parent that can’t host dialogs (anything that’s not an AdwWindow or AdwApplicationWindow), or the parent is not resizable. The reason for the last one is to accommodate apps like Emblem, which has a small non-resizable window, where dialogs won’t fully fit, and since it’s non-resizable, it doesn’t work on mobile anyway.
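As a rough illustration of the flow described above, here is a minimal, untested Python sketch against the libadwaita 1.5 bindings; the dialog contents and names are made up for the example.

import gi
gi.require_version("Adw", "1")
gi.require_version("Gtk", "4.0")
from gi.repository import Adw, Gtk

def present_editor_dialog(parent_widget):
    dialog = Adw.Dialog(title="Edit Profile", content_width=400)
    dialog.set_child(Gtk.Label(label="Imagine a form with unsaved changes"))

    # Prevent plain closing while there is unsaved data...
    dialog.set_can_close(False)

    # ...and intercept close attempts (close button, Esc, swipe-down on
    # mobile) to ask for confirmation instead.
    def on_close_attempt(dialog):
        alert = Adw.AlertDialog(heading="Discard changes?")
        alert.add_response("cancel", "Cancel")
        alert.add_response("discard", "Discard")
        alert.set_response_appearance(
            "discard", Adw.ResponseAppearance.DESTRUCTIVE)

        def on_response(alert, response):
            if response == "discard":
                # Close for real, ignoring :can-close.
                dialog.force_close()
        alert.connect("response", on_response)
        alert.present(dialog)

    dialog.connect("close-attempt", on_close_attempt)

    # Instead of :transient-for, the parent is passed when presenting;
    # any widget inside the target window will do.
    dialog.present(parent_widget)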
What about “Attach Modal Dialogs”? Since we have the window-backed mode, it would be fairly easy to support that preference… except there’s no way to read it from sandboxed apps. What about portals? This approach obviously doesn’t work for portals, since they are in a separate process. We do have a plan for them, involving a private protocol in mutter, but it didn’t make it for 46. So, next time. What about GTK built-in dialogs? Those will be replaced as well, but it takes time. For now yes, GtkShortcutsWindow etc. won’t match other dialogs. Other Changes As usual, there are some smaller changes. Jamie added the :text-length property to AdwEntryRow. AdwMessageDialog now has remove_response(). While this widget is to be deprecated, AdwAlertDialog has an equivalent as well. AdwBreakpointBin now allows breakpoints to be removed programmatically. AdwSwipeTracker now has a flag to allow swiping over header bars – used in bottom sheets. Shade colors are now lighter in dark style. This was needed for dialog dimming to look good, but also it was previously too dark elsewhere, e.g. in scroll undershoots. As always, thanks to all the contributors who helped to make this release happen.
  • Marcus Lundblad: Maps and GNOME 46 (2024/03/15 12:35)
    It's that time again: a new GNOME release is just around the corner. The news in Maps for GNOME 46: A lot of the new things we've been working on for the 46 release have already been covered, but here are a few recaps. The new map style: The map style used for the vector-based, client-side rendered map, which is still considered experimental in 46, has been switched over to our new “GNOME-themed” style, which also supports a dark mode (enabled when the global dark mode is enabled). The vector map still needs to be explicitly enabled via the “layers menu” (the second headerbar button from the left). This also requires the backing installation of libshumate to be built with vector renderer support (which is the case when using the Flatpak from Flathub, and also libshumate will default to building the vector renderer from the 1.2.0 release, so distributions should likely have it enabled in their 46 installations). The current plan looks like we're leaning towards flipping it on by default after the 46 release, so by 47 it will probably mean the old raster tiles from openstreetmap.org will be retired. Also, icons on the map (such as POIs) are now directly clickable. And labels should be localized to the user's language (when the appropriate language tags are available in the OpenStreetMap data). Other visual improvements: For 46 the zoom control buttons have been revamped (again), and put in the lower corner (as also shown in the above screenshots). The pin used to mark places selected from search results, and other things like pin-pointed locations in GeoJSON files, has gained a new modernized design by Jakub Steiner. The dialog for adding an OpenStreetMap account to edit POIs gained a refresh sporting the new libadwaita dialog and widgets by Felipe Kinoshita. Also, information about which floor a place is located at is shown in the place bubbles when available. This can be useful to find your way around for example big shopping malls and the like (this was an idea that came when looking for a café in a galleria in Riga last summer…). The favorites menu has also gotten a revamp. Instead of just showing a greyed-out inactive button when there are no favored places, it now has an “empty state” hinting at the ability to “star” places. And favorites can be removed directly from the list without having to open them (and animate to that place to show the bubble). Looking further on: For the next cycle, aside from continuing the refinements to the new map style and making the vector map the main thing, another cool project that was initiated during FOSDEM in February has caught my attention: Transitous. Transitous aims to set up a free and open public transit routing service: https://github.com/public-transport/transitous It is using the MOTIS project (https://github.com/motis-project/motis) as the backend, with a crowd-sourcing approach to collect data feeds for timetable data. The routing can already be tested out at https://transitous.org.
Currently it only handles “station to station” routing, so there is not yet support for walking instructions. Also, unlike the current public transit plugin support we have in Maps, with Transitous you would also be able to do cross-border planning (utilizing timetables from different data feeds). When it becomes a bit more mature we should make use of it in Maps ☺. So this is another area to help out, by creating PRs adding transit schedule feeds for your local area that could potentially benefit both Maps and other FOSS projects (such as KDE Itinerary). Problems ahead: And now to something of a problem. The location service backend that we are using, GeoClue (used not just by Maps, but also by other parts like Weather and automatic timezone handling), has been using Mozilla's location service API (MLS). This will unfortunately be retired: https://github.com/mozilla/ichnaea/issues/2065 So there will be a need to come up with alternative solutions: https://gitlab.freedesktop.org/geoclue/geoclue/-/issues/186 Maybe, in the worst case, we'd have to disable showing the current location in Maps unless the device has an actual GPS unit.
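For context on what is at stake, this is roughly how applications ask GeoClue for a position today via the libgeoclue convenience API; a rough, untested Python sketch with a made-up desktop ID. GeoClue decides internally whether the fix comes from GPS, Wi-Fi based lookups (the part that relied on MLS) or other sources.

import gi
gi.require_version("Geoclue", "2.0")
from gi.repository import Geoclue

# Synchronously obtain a location fix on behalf of a (made-up) application.
simple = Geoclue.Simple.new_sync(
    "org.example.WhereAmI",          # desktop/application ID
    Geoclue.AccuracyLevel.EXACT,     # ask for the most precise fix available
    None)                            # no cancellable

location = simple.get_location()
print("latitude: ", location.get_property("latitude"))
print("longitude:", location.get_property("longitude"))
print("accuracy (m):", location.get_property("accuracy"))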
  • Felix Häcker: #139 Just Before the Release (2024/03/15 00:00)
    Update on what happened across the GNOME project in the week from March 08 to March 15. Sovereign Tech Fund Sonny announces As part of the GNOME STF (Sovereign Tech Fund) initiative, a number of community members are working on infrastructure related projects. Here are the highlights for the past week: We have been working hard on helping with and solving last minute issues for GNOME 46. This is the first GNOME release since we started the GNOME STF initiative and we are very excited about our work rolling out to millions of users. Sophie opened a PR to support git dependencies in the Cargo buildstream plugin. This will make it much easier to work with GNOME core applications written in Rust. Julian drafted images/sound support for notification portal V2. libportal xdg-desktop-portal xdg-desktop-portal-gtk Matt now has an end to end prototype for the Wayland-native accessibility stack he’s been working on. He published an update and instructions to run it. Jonas landed New gestures (part 2): Introduce ClutterGesture. This is one of the building blocks present in the GNOME Shell mobile project that we are working on upstreaming. Alice released libadwaita 1.5 Sam made a website for Orca to replace the wiki page. This week we welcome and thank Codethink for partnering with us. Codethink has been a long time supporter of the GNOME project and will be helping us improve developer and quality assurance tooling, with a focus on immutable / image based operating systems. GNOME Core Apps and Libraries Libadwaita Building blocks for modern GNOME apps using GTK4. Alice (she/her) announces Libadwaita 1.5.0 is out! See the announcement blog post for details Calendar A simple calendar application. Hari Rana | TheEvilSkeleton (any/all) announces Daniel Garcia Moreno submitted several merge requests to GNOME Calendar that allowed us to close 25 timezone-related issues! All of these changes are expected to land in GNOME 46. https://gitlab.gnome.org/GNOME/gnome-calendar/-/merge_requests/370 https://gitlab.gnome.org/GNOME/gnome-calendar/-/merge_requests/372 https://gitlab.gnome.org/GNOME/gnome-calendar/-/merge_requests/373 https://gitlab.gnome.org/GNOME/gnome-calendar/-/merge_requests/375 GNOME Circle Apps and Libraries Switcheroo Convert and manipulate images. Khaleel Al-Adhami says Switcheroo now supports exporting multiple images into one PDF file in update 2.1.0! Pika Backup Keep your data safe. Sophie (she/her) announces Pika Backup 0.7.1 is out. It fixes a bug that prevented backup processes from lowering their CPU priority. A UI issue with scheduled backups was fixed as well. If you missed the 0.7 release because we missed posting it on TWIG, you can learn more about it in my blog post. There is also a great video by Dreams of Autonomy that gives a wonderful introduction to Pika Backup. You can support Pika’s development on Open Collective. Note that we are not affected by the Open Collective Foundation shutting down since our financial host is the Open Source Collective. The same is the case for almost all other open source projects. So please continue supporting them. Impression Create bootable drives. Khaleel Al-Adhami reports Impression has received a new update 3.1.0 to support .xz compressed file format and fix a bug that was causing slow download speeds Third Party Projects Arjan says This week @lazka (Christoph Reiter) released PyGObject 3.48.1.
This release contains a couple of noteworthy changes: This is the first release using meson-python, and thus meson, instead of setuptools for PEP-517 installations, i.e. when installing via pip or similar. PyGObject finally has proper support for fundamental types. That means that you can now work with things like GSK nodes directly from Python. The documentation for PyGObject is now hosted on our GNOME hosting environment at https://gnome.pages.gitlab.gnome.org/pygobject/. We aim to have all PyGObject related documentation in one place. Nokse says I have released a new version of ASCII Draw with many improvements: Greatly improved performance, now you can use bigger canvases Improved design to better match the GNOME style Added stepped line and merged all lines and arrows into one tool Added move tool to easily move part of your drawings Improved default character list by dividing it into palettes Added custom palettes Added primary and secondary character Guido says I’ve released livi 0.1.0. Thanks to Robert Mader the mobile focused video player now supports DMABuf import and can use GTK’s new GraphicsOffload widget to render videos more efficiently (given all other components in the stack support this properly already). Aaron Erhardt announces Version 0.8 of Relm4, an idiomatic GUI library based on gtk4-rs, was released on Wednesday with many improvements. The release includes several unifications in our API, more idiomatic abstractions and updated gtk-rs dependencies. Find out more details in our release blog post. Martín Abente Lahaye says Gameeky 0.6.0 is out! This new release comes with improved compatibility with other platforms, several usability additions and improvements like: An integrated development environment for Python. An easier way to share projects. New desktop icon thanks to @jimmac and @bertob. Improved compatibility with other platforms. And more… Check the release blog post to learn more. GNOME Foundation Rosanna announces This week, in between the minutiae of everyday things, I have also been looking into some of our policies and updating them. Things like the employee handbook and travel policy are high on the list of things to update, to both keep aligned with regulations and best practices as well as streamlined for practicality. I am currently in Pasadena, California attending SCaLE. I am going to be on a panel (https://www.socallinuxexpo.org/scale/21x/presentations/where-does-linux-desktop-go-here) on Saturday at 2:30PM. It’s going to be a great time! I will also be staffing the GNOME booth there. Drop on by to discuss all things GNOME. We also posted an opening for an Administrative Support Contractor (https://foundation.gnome.org/careers/). This person would be working with me to keep GNOME running and I am very much looking forward to reading all the applications! That’s all for this week! See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
  • Matthew Garrett: Digital forgeries are hard (2024/03/14 09:11)
    Closing arguments in the trial between various people and Craig Wright over whether he's Satoshi Nakamoto are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for both sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed. One of the pieces of evidence entered is screenshots of data from Mind Your Own Business, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled a meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well. One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the database in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023. This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data?
So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024. And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This was created by the client, so was consistent with the email having been sent in 2024, not being sent in 2019 and somehow getting stuck somewhere before delivery. The date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019. But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The first included a header inserted by gmail that included a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019. This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument that the Bitcoin white paper was written in LaTeX when general consensus is that it's written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue. Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. It's fantastically difficult to generate an artifact now that precisely corresponds to how it would look in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document. (References: this Dropbox, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")
  • Martín Abente Lahaye: Gameeky 0.6.0 (2024/03/13 16:54)
    After a busy month, a new release is out! This new release comes with improved compatibility with other platforms, several usability additions and improvements. It’s no longer necessary to run terminal commands. The most noticeable change in this release is the addition of a properly-integrated development environment for Python. With this, the LOGO-like user experience was greatly improved. The LOGO-like programming interface is also a bit richer. A new Rotate action was added and the general interface was simplified to further improve the user experience. It’s easier to share projects. A simple dialog to export and import projects was added, available through the redesigned project cards in the launcher. As shown above, Gameeky now has a cute desktop icon thanks to @jimmac and @bertob. It should be easier to run Gameeky on other platforms now. Under the hood, many things have changed to support other platforms, e.g., macOS. The sound backend was changed to GStreamer, the communication protocol was simplified, and the use of WebKit is now optional. There are no installers for other platforms yet but, if anyone is experienced and interested in making these, that would be an awesome contribution. As a small addition, it’s now possible to select a different entity as the user’s character. Recently, my nephews decided they wanted their character to be a small boulder. They had a blast with their boulder-hero narrative, and it convinced me there should be more additions like that. There’s more, so check the full list of changes. On the community side of things, I already started building alliances with different organizations, e.g., the first-ever Gameeky workshop is planned for March 23 in Encarnación, Paraguay and it’s being organized by the local Python community. If you’re in Paraguay or nearby in Argentina, feel free to contact me to participate!
  • Dorothy Kabarozi: Overall experience: My Outreachy internship with GNOME (2024/03/12 16:16)
    Embarking on an Outreachy internship is a great start into the heart of open source, a journey I’ve longed to undertake. December 2023 to March 2024 marked this exhilarating chapter of my life, where I had the honor of diving deep into the GNOME world as an Outreachy intern. In this blog, I’m happy to share my experiences, painting a vivid picture of the growth, challenges, and invaluable experiences that have shaped my journey. Discovering GNOME: A Gateway to Open-Source Excellence At its core, GNOME (GNU Network Object Model Environment) is a graphical user interface (GUI) and set of computer desktop applications for users of the Linux operating system. GNOME brings companies, volunteers, professionals, and non-profits together from around the world. We make GNOME, a completely free software solution for everyone. Why GNOME Captured My Heart The Outreachy internship presented a couple of projects to choose from, but my fascination with operating system functionalities—booting, scheduling, memory management, user interface and beyond—drew me irresistibly to GNOME. My mission? To work on the implementation of end-to-end tests, a challenge I embraced head-on as I dived into the project documentation to understand the project better. From the moment I introduced myself on the GNOME community channel in the first days of the contribution phase, the warmth and promptness of their welcome were unmatched, shattering the myth of the “busy, distant mentor.” This immediate sense of belonging fueled my determination, despite the initial difficulties of setup procedures and technical trials. My advice to future Outreachy aspirants, from my experience, is to start early: zero in on a project and try to set up early, as this took me almost 2 weeks before I could finally make a merge request to the project. Secondly, ask questions publicly, as this helps you get unblocked faster in cases when your mentor is busy. Milestones and Mastery: The GNOME Journey Our collective goal for the internship was to implement tests for accessibility features for the GNOME desktop and also test some core apps on mobile. The creation of the gnome_accessibility test suite marked our first victory, followed by the genesis of the gnome-locales and gnome_mobile test suites. Daily stand-ups and weekly mentor meetings became our compass, guiding our efforts and honing our focus on the different tasks. Check out more details here and share any feedback with us on discourse. Technically, I learned a lot about version control and Git workflows, how to actually contribute to a project with a large code base, writing clean, readable and efficient code and ensuring code is thoroughly tested for bugs and errors before pushing it. Some of the soft skills I learned were collaboration, communication skills and the continuous desire to learn new things and being teachable. Overcoming Obstacles: Hardware Hurdles and Beyond The revelation that my iOS-based machine was ill-equipped for the task at hand was a stark challenge. The lesson was clear: understanding project specifications is crucial, and adaptability is key. This obstacle, while daunting, taught me the value of preparation and the importance of choosing the right tools for the task.
    Beyond Coding: Community, Engagement, and Impact
    I have not only interacted with my mentors for the project but also participated in sharing the work we have done on TWIG, where I highlighted the work we had done writing tests for accessibility features, i.e., High Contrast, Large Text, Overlay Scrollbars, Screen Reader, Zoom, Over-Amplification, Visual Alerts and On-Screen Keyboard, and added more details on the Discourse channel too. I have had public engagements on contributing to Outreachy over Twitter Spaces in my community, where I shared how to apply to Outreachy and how to prepare for the contribution phase, and shared more about my internship with GNOME during the GNOME AFRICA Preparatory Boot Camp for GSoC & Outreachy; check out my presentation here, where I shared more about how to stand out as an Outreachy applicant and my experience working with GNOME. These experiences have not only boosted my technical skills but have also embedded in me a sense of community and courage to tackle the unknown.
    A Heartfelt Thank You
    As this chapter of my journey with GNOME and Outreachy draws to a close, I am overwhelmed with gratitude. To my selfless mentors, Sam Thursfield and Sonny Piers: thank you for the guidance and mentorship. I appreciate you all for what you have planted in us. To Tanjuate: you have been the most amazing co-intern I could ever ask for. To Kristi Progri and Felipe Borges: thank you for coordinating this internship with Outreachy and the GNOME community. To Outreachy, thank you for this opportunity. And to every soul who has walked this path with me: your support has been amazing. As I look forward to converging paths at GUADEC in July and beyond, I carry with me not just skills and knowledge, but a heart full of memories, ready to embark on new adventures in the open-source world. Here’s to infinite learning, enduring friendships, and the unwavering spirit of contribution. May the journey continue to unfold, with success, learning, and boundless possibilities. Here are some of the accessibility tests for the gnome_accessibility test suite that we added during the internship with GNOME. Click here to take a more detailed look.
  • Peter Hutterer: Enforcing a touchscreen mapping in GNOME (2024/03/12 04:33)
    Touchscreens are quite prevalent by now but one of the not-so-hidden secrets is that they're actually two devices: the monitor and the actual touch input device. Surprisingly, users want the touch input device to work on the underlying monitor which means your desktop environment needs to somehow figure out which of the monitors belongs to which touch input device. Often these two devices come from two different vendors, so mutter needs to use ... */me holds torch under face* .... HEURISTICS! :scary face: Those heuristics are actually quite simple: same vendor/product ID? same dimensions? is one of the monitors a built-in one? [1] But unfortunately in some cases those heuristics don't produce the correct result. In particular external touchscreens seem to be getting more common again and plugging those into a (non-touch) laptop means you usually get that external screen mapped to the internal display.
    Luckily mutter does have a configuration for it, though it is not exposed in the GNOME Settings (yet). But you, my $age $jedirank, can access this via a commandline interface to at least work around the immediate issue. But first: we need to know the monitor details and you need to know about gsettings relocatable schemas.
    Finding the right monitor information is relatively trivial: look at $HOME/.config/monitors.xml and get your monitor's vendor, product and serial from there. e.g. in my case this is:
    <monitors version="2">
      <configuration>
        <logicalmonitor>
          <x>0</x>
          <y>0</y>
          <scale>1</scale>
          <monitor>
            <monitorspec>
              <connector>DP-2</connector>
              <vendor>DEL</vendor>             <--- this one
              <product>DELL S2722QC</product>  <--- this one
              <serial>59PKLD3</serial>         <--- and this one
            </monitorspec>
            <mode>
              <width>3840</width>
              <height>2160</height>
              <rate>59.997</rate>
            </mode>
          </monitor>
        </logicalmonitor>
        <logicalmonitor>
          <x>928</x>
          <y>2160</y>
          <scale>1</scale>
          <primary>yes</primary>
          <monitor>
            <monitorspec>
              <connector>eDP-1</connector>
              <vendor>IVO</vendor>
              <product>0x057d</product>
              <serial>0x00000000</serial>
            </monitorspec>
            <mode>
              <width>1920</width>
              <height>1080</height>
              <rate>60.010</rate>
            </mode>
          </monitor>
        </logicalmonitor>
      </configuration>
    </monitors>
    Well, so we know the monitor details we want. Note there are two monitors listed here, in this case I want to map the touchscreen to the external Dell monitor. Let's move on to gsettings. gsettings is of course the configuration storage wrapper GNOME uses (and the CLI tool with the same name). GSettings follow a specific schema, i.e. a description of a schema name and possible keys and values for each key. You can list all those, set them, look up the available values, etc.:
    $ gsettings list-recursively
    ... lots of output ...
    $ gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'
    $ gsettings range org.gnome.desktop.peripherals.touchpad click-method
    enum
    'default'
    'none'
    'areas'
    'fingers'
    Now, schemas work fine as-is as long as there is only one instance. Where the same schema is used for different devices (like touchscreens) we use a so-called "relocatable schema" and that requires also specifying a path - and this is where it gets tricky. I'm not aware of any functionality to get the specific path for a relocatable schema so often it's down to reading the source. In the case of touchscreens, the path includes the USB vendor and product ID (in lowercase), e.g. in my case the path is:
    /org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
    In your case you can get the touchscreen details from lsusb, libinput record, /proc/bus/input/devices, etc. Once you have it, gsettings takes a schema:path argument like this:
    $ gsettings list-recursively org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
    org.gnome.desktop.peripherals.touchscreen output ['', '', '']
    Looks like the touchscreen is bound to no monitor. Let's bind it with the data from above:
    $ gsettings set org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output "['DEL', 'DELL S2722QC', '59PKLD3']"
    Note the quotes so your shell doesn't misinterpret things. And that's it. Now I have my internal touchscreen mapped to my external monitor which makes no sense at all but shows that you can map a touchscreen to any screen if you want to.
    [1] Probably the one that most commonly takes effect since it's the vast vast majority of devices
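    One more practical note: to double-check the value later, or to go back to letting mutter's heuristics decide, the usual gsettings subcommands should work against the same schema:path pair (reusing my example path here; substitute your own touchscreen's vendor:product ID):
    $ gsettings get org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output
    $ gsettings reset org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output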
  • Ismael Olea: Misadventures modelling Andalusian heritage (2024/03/10 23:00)
    Related to the previous post, I also still have to publish my notes about my experience importing, sorting and cleaning the descriptions of the rich Andalusian historical heritage. Our friends at Wikimedia Portugal invited and sponsored my travel to Lisboa to share my, sometimes sad, practical experience in their excellent Wikidata Days 2023 meeting. I used this invitation as a motivation to finally compile all the experience I gained when importing the Digital Guide to the Cultural Heritage of Andalusia into Wikidata. The presentation format is really suboptimal. To be honest I’m a bit tired of working a lot on slides with a very short life (many times just one occasion). But I still think the contents are useful enough.
  • Ismael Olea: Modelling protected areas talks (2024/03/10 23:00)
    Just catching up on a talk I gave twice last year. I’m very proud of the work but never shared it here nor on my Wikidata User:Olea page. As a brief introduction: for some time I did significant work importing the CDDA database of European protected areas into Wikidata, as I found we had them completely underrepresented. I have previous experience with historical heritage but this turned out to be harder work. I have collected some thoughts about lessons learned and a potential standardizing proposal for natural protected areas, but never structured them in a comprehensive way until I was invited to give a couple of talks about this. The first talk was in Lisboa, invited and sponsored by our friends at Wikimedia Portugal, in their Wikidata Days 2023. To be honest, my talk was a little disaster because I didn’t prepare it with enough time, but at least I could present a complete draft of the idea. Then I had the opportunity to talk again about the same issue at the subsequent Data Modelling Days 2023 virtual event. I shared the session with VIGNERON: he talked about historical heritage and I talked about natural heritage/natural protected areas. For this session I was able to rewrite my proposal with the quality a communication requires. Now the video recording of the full session is available: And my slides, which would be the final text of my intended proposal: As a conclusion: yes, I should promote this in Wikidata, but the amount of work it requires (editions and discussions) is, for the moment, outside my interest for my free time.
  • Emmanuele Bassi: Accessibility improvements in GTK 4.14 (2024/03/08 09:30)
    GTK 4.14 brings various improvements on the accessibility front, especially for applications showing complex, formatted text; for WebKitGTK; and for notifications.
    Accessible text interface
    The accessibility rewrite for 4.0 provided an implementation for complex, selectable, and formatted text in widgets provided by GTK, like GtkTextView, but out of tree widgets would not be able to do the same, as the API was kept private while we discussed what ATs (assistive technologies) actually needed, and while we were looking at non-Linux implementations. For GTK 4.14 we finally have a public interface that out of tree widgets can implement to provide complex, formatted text to ATs: GtkAccessibleText. GtkAccessibleText allows widgets to provide the text contents at given offsets; the text attributes applied to the contents; and to notify assistive technologies of changes in the text, caret position, or selection boundaries. Text widgets implementing GtkAccessibleText should notify ATs in these cases:
    - if the text widget has a caret cursor, it needs to call gtk_accessible_text_update_caret_position() every time the caret moves
    - if the text widget has a selection, it needs to call gtk_accessible_text_update_selection_bound() every time the selection changes
    - when the text changes, the widget needs to call gtk_accessible_text_update_contents() with the description of what changed, and the boundaries of the change
    Text attributes are mainly left to applications to implement—both in naming and serialization; GTK provides support for common text attributes already in use by various toolkits and assistive technologies, and they are available as constants under the GTK_ACCESSIBLE_ATTRIBUTE_* prefix in the API reference. The GtkAccessibleText interface is a requirement for implementing the accessibility of virtual terminals; the most common GTK-based library for virtual terminals, VTE, has been ported to GTK4 thanks to the efforts of Christian Hergert and in GNOME 46 will support accessibility through the new GTK interface.
    Bridging AT-SPI trees
    There are cases when a library or an application implements its own accessible tree using AT-SPI, whether in the same process or out of process. One such library is WebKitGTK, which generates the accessible object tree from the web tree inside separate processes. These processes do not use GTK, so they cannot use the GtkAccessible API to describe their contents. Thanks to the work of Georges Stavracas, GTK can now bridge those accessibility object trees under the GTK widget’s own, allowing ATs to navigate into a web page using WebKit from the UI. Currently, like the rest of the accessibility API in GTK, this is specific to the AT-SPI protocol on Linux, which means it requires libraries and applications that wish to take advantage of it to ensure that the API is available at compile time, through the use of a pkg-config file and a separate C header, similarly to how the printing API is exposed.
    Notifications
    Applications using in-app notifications that are decoupled from the current widget’s focus, like AdwToast in libadwaita, can now raise the notification message to ATs via the gtk_accessible_announce() method, thanks to Lukáš Tyrychtr, in a way that is respectful of the current AT output.
    Other improvements
    GTK 4.12 ensured that the computed accessible labels and descriptions were up to date with the ARIA specification; GTK 4.14 iterates on those improvements, by removing special cases and duplicates.
Thanks to the work of Michael Weghorn from The Document Foundation, there are new roles for text-related accessible objects, like paragraphs and comments, as well as various fixes in the AT-SPI implementation of the accessibility API. The accessibility support in GTK4 is incrementally improving with every cycle, thanks to the contributions of many people; ideally, these improvements should also lead to a better, more efficient protocol for toolkits and assistive technologies to share. We are still exploring the possibility of adding backends for other accessibility platforms, like UIAutomation; and for other libraries, like AccessKit.
  • Matthias Clasen: On fractional scales, fonts and hinting (2024/03/07 03:36)
    GTK 4.14 will be released very soon, with new renderers that were introduced earlier this year. The new renderers have much improved support for fractional scaling—on my system, I now use 125% scaling instead of the ‘Large Text’ setting, and I find that works fine for my needs.
    Magical numbers
    Ever since 4.0, GTK has been advocating for linear layout. The idea is that we just place glyphs where the coordinates tell us, and if that is a fractional position somewhere between pixels, so be it, we can render the outline at that offset just fine. This approach works—if your output device has a high-enough resolution (anything above 240 dpi should be ok). Sadly, we don’t live in a world where most laptop screens have that kind of resolution, so we can’t just ignore pixels. Consequently, we added the gtk-hint-font-metrics setting that forces text layout to round things to integer positions. This is not a great fit for fractional scaling, since the rounding happens in application pixels, and we really need integral device pixel positions to produce crisp results.
    Application vs. device pixels
    The common fractional scales are 125%, 150%, 175%, 200% and 225%. At these scales (with the exception of 200%), most application pixel boundaries do not align with device pixel boundaries.
    What now?
    The new renderers gave us an opportunity to revisit the topic of font rendering and do some research on the mechanics of hinting options, and how they get passed down the stack from GTK through Pango and cairo, and then end up in freetype as a combination of render target + load flags.
    [Diagram: hint style and antialiasing options translate to render mode and load flags.]
    The new renderers recognize that there are two basic modes of operation when it comes to glyphs: optimize for uniform spacing, or optimize for crisp rendering. The former leads to subpixel positioning and unhinted rendering, the latter to hinted rendering and glyphs that are placed at integral pixel positions (since that is what the autohinter expects). We determine which case we’re in by looking at the font options. If they tell us to do hinting, we round the glyph position to an integral device pixel in the y direction. Why only y? The autohinter only applies hinting in the vertical direction, and the horizontal direction is where the increased resolution of subpixel positions helps most. If we are not hinting, then we use subpixel positions for both x and y, just like the old renderer (with the notable difference that the new renderer uses subpixel positions in device pixels).
    A comparison
    Text rendering differences are always subtle and, to some degree, a matter of taste and preference. So these screenshots should be taken with a grain of salt—it is much better to try the new renderers for yourself.
    [Screenshots: text rendered at 125%, old renderer vs. new renderer.]
    Both of these renderings were done at a scale of 125%, with hinting enabled (but note that the old renderer handles 125% by rendering at 200% and relying on the compositor to scale things down). Here is a look at some details: the horizontal bars of T and e are consistent across lines, even though we still allow the glyphs to shift by subpixel positions horizontally.
    [Screenshots: instances of T and e, old renderer vs. new renderer, showing consistent vertical placement.]
    Summary
    The new renderers in GTK 4.14 should produce more crisp font rendering, in particular with fractional scaling. Please try it out and tell us what you think.
    Update: On subpixel rendering
    I should have anticipated that this question would come up, so here is a quick answer: We are not using subpixel rendering (aka Cleartype, or rgb antialiasing) in GTK 4, since our compositing does not have component alpha. Our antialiasing for fonts is always grayscale. Note that subpixel rendering is something separate from subpixel positioning.
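    If you want to experiment with the rounding described above, gtk-hint-font-metrics is a regular GtkSettings property, so it should be possible to toggle it for all GTK 4 applications via ~/.config/gtk-4.0/settings.ini (the usual settings.ini mechanism; set it to 0 to turn the rounding off again):
    [Settings]
    gtk-hint-font-metrics=1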
  • Jakub Steiner: GNOME 46 Wallpapers (2024/03/06 23:00)
    GNOME 46 is on its final stretch to be released. It’s been a custom to blog a little about the wallpaper selection, which is a big part of GNOME’s visual identity. The first notable change in 46 is that we’re finally delivering on the promise of bringing you a next generation image file format. Lots of performance issues had to be addressed first, apologies for the delay. While efficiency and filesize requirements might not be too high on the list outside of the geek crowd, there is one aspect of JPEG-XL that I am very excited about. JPEG-XL allows the use of client-side synthesized grain, a method pioneered by Netflix/AV1, I believe. Compression algorithms struggle with high frequency detail, which often introduces visible artifacts. JPEG-XL allows the grain component to be decoupled from the actual image data. This allows for significantly more efficient compression of images that inherently require noise, such as those in gnome-backgrounds — smooth gradients that would otherwise be susceptible to color banding. To achieve similar fidelity with the grain baked in, a classic format like JPEG would need an order of magnitude larger filesize. Having the grain in the format itself also allows skipping various techniques in the rendering or compositing in the 3D software. Instead of compressing a noisy image, JPEG-XL generates film-like grain as part of the decoding process. This synthesized grain combats issues like color banding while allowing much more efficient compression of the original image data. In essence, client-side grain in JPEG-XL isn’t simply added noise, but a sophisticated strategy for achieving both efficient compression and visually pleasing image quality, especially for images that would otherwise require inherent noise. The fresh batch of wallpapers includes evolutions of the existing assets as well as new additions. A few material/shape studies have been added as well as simple 2D shape textures. Thanks to the lovely JPEG-XL grain described earlier, it’s not just Inkscape and Blender that were used. I hope you’re going to pick at least one of the wallpapers as your favorite when GNOME 46 releases later next week. Let me know on the fediverse! Previously, Previously, Previously, Previously, Previously
  • Jussi Pakkanen: CapyPDF 0.9.0 released (2024/03/04 17:09)
    I have just released CapyPDF 0.9.0. It can be obtained either via GitHub or PyPI. There is no single big feature in this release. The most notable is probably the ability to create structured (or "tagged") PDF files. The code supports using both the builtin tags as well as defining your own. Feel free to try it, just know that the API is guaranteed to change. As a visual example, here is the full Python source code for one of the unit tests. When run, it creates a tagged PDF file. Adobe Acrobat reports that the document has the following logical structure. As you can (hopefully) tell, structure and content are the same in both of them.
  • Sophie Herold: Pika Backup Hopping Through Milestones (2024/03/03 12:11)
    Pika Backup is an app focused on backups of personal data. It’s internally based on BorgBackup and provides fast incremental backups. Last year, Pika Backup crossed the mark of 100,000 downloads on Flathub. These are numbers I couldn’t have imagined when submitting Pika Backup to Flathub only about three years ago. Thanks to everyone who has supported the project along the way. Be it with incredibly kind feedback, helpful issue reports, or financial contributions on Open Collective. It has been a blast so far. A special thanks goes to BorgBase, who have generously been giving financial support to the project development for over a year now. While we still have a bunch of features planned for Pika Backup, our focus remains stability and keeping the codebase maintainable. The project was started over five years ago. Since these were still the early ages of Rust as a programming language within GNOME, a lot has changed in the way app code is commonly structured. This means that we are also planning some refactoring work to help with the maintainability and readability of the code for future contributors. After being blocked by a nasty bug for a while, we are finally releasing Pika Backup 0.7 today. Like the previous release, the new release has been substantially driven by Fina since I have been busy with other projects, including moving flats. I’m thrilled that the project has two maintainers who are familiar with the codebase. The new release contains over 20 significant changes and fixes. The most noticeable new features are: a new preferences window to rename backup configurations and allow scheduled backups with the system running on battery; the ability to automatically run scripts before and after creating a backup; and a new feature to check the backup repositories’ integrity. You can financially support development on Open Collective or GitHub. If you want to support my general GNOME endeavors and get some behind-the-scenes updates, you can support me on my new Patreon. If you want to try out BorgBase for hosting your backup you can get 10 GB storage for free on borgbase.com. A guide for setting up Pika Backup with BorgBase is available as well.
  • Debarshi Ray: Toolbx is a release blocker for Fedora 39 onwards (2024/03/01 11:44)
    This is the second instalment of my 2023 retrospective series on Toolbx. 1 One very important thing that we did behind the scenes was to make Toolbx a release blocker for Fedora 39 and onwards. This means that the registry.fedoraproject.org/fedora-toolbox OCI image is considered a release-blocking deliverable, and there are release-blocking test criteria to ensure that the toolbox RPM is usable. Why do that? Earlier, there was no formal requirement for Toolbx to be usable when a new Fedora was released. That was a problem for a tool that’s so popular and provides something as fundamental as an interactive command line environment for software development and troubleshooting the host operating system. Everybody expects their CLI environment to just work even under very adverse conditions, and Toolbx should be no different. Except that Toolbx is slightly more complicated than running Bash or Z shell directly on the host OS, and, therefore, requires a bit more diligence. Toolbx has two parts — an OCI image, which defaults to registry.fedoraproject.org/fedora-toolbox on Fedora hosts, and the toolbox RPM. The OCI image is pulled by the RPM to set up a containerized interactive CLI environment. Let’s look at each separately. The image First, we wanted to ensure that there is an up to date fedora-toolbox OCI image published on registry.fedoraproject.org as a release-blocking deliverable at critical points in the development schedule, just like the installation ISOs for the Editions from download.fedoraproject.org. For example, when an upcoming Fedora release is branched from Rawhide, and for the Beta and Final releases. One of the recurring complaints that we used to get were from users of Fedora Rawhide Toolbx containers, when Rawhide gets branched in preparation for the Beta for the next Fedora release. At this point, the previous Rawhide version becomes the Branched version, and the current Rawhide version increases by one. If the fedora-toolbox images aren’t part of the mass branching performed by Fedora Release Engineering, then someone has to quickly step in after they have finished to refresh the images to ensure consistency. This sort of ad hoc manual co-ordination rarely works, and it left users in the lurch. With this change, the fedora-toolbox image is part of the nightly Fedora composes, and the branching is handled by Fedora Release Engineering just like any other release-blocking deliverable. This makes the image as readily available and updated as the fedora and fedora-minimal OCI images or any other deliverable, and we hope that it will improve the user experience for Rawhide Toolbx containers. If someone installs the Fedora Beta or the Final on their host, and creates a Toolbx container using the default image, then, barring exceptions, the host and the container now have the same RPM versions for all packages. Just like Fedora Silverblue and Workstation are released with the same versions. This ensures greater consistency in terms of bug-fixes, features and pending updates. In the past, this wasn’t the case and it led to occasional surprises. For example, the change to make RPM use a Sequoia based OpenPGP parser made it impossible to install third party RPMs in the fedora-toolbox image, even long after the actual bug was fixed. The RPM Second, we wanted to have release-blocking test criteria to ensure that the toolbox RPM is usable at critical points in the development schedule. 
This is to ensure that changes in the Toolbx stack, and future changes in other parts of the operating system do not break Toolbx — at least not for the Beta and Final releases. It’s good to have the fedora-toolbox image be more readily available and updated, but it’s better if Toolbx works more reliably as a whole. Examples of changes in the Toolbx stack causing breakage can be FUSE preventing RPMs with file capabilities from being installed inside Toolbx containers, Toolbx bind mounts preventing RPMs with %attr() from being installed or causing systemd-tmpfiles(8) to throw errors, etc.. Examples of changes in other parts of the OS can be changes to Fedora’s Kerberos stack causing Kerberos to stop working inside Toolbx containers, changes to the sysctl(8) configuration breaking ping(8), changes in Mutter breaking graphical applications, etc.. The test criteria for the toolbox RPM also implicitly tests the fedora-toolbox image, and co-ordinates several disparate groups of developers to ensure that the containerized interactive command line Toolbx environments on Fedora are just as reliable as those running directly on the host OS. Tooling changes This does come with a significant tooling change that isn’t obvious at first. The fedora-toolbox OCI image is no longer defined as a layered image through a Container/Dockerfile. Instead, it’s built as a base image through Kickstarts and Pungi, just like the fedora and fedora-minimal images. This was necessary because the nightly Fedora composes work with Kickstarts and Pungi, not Container/Dockerfiles. Moreover, building Fedora OCI images from a Dockerfile with fedpkg container-build uses an ancient unmaintained version of OpenShift Build Service that requires equally unmaintained ancient versions of Fedora to run, and the fedora-toolbox image was the only thing using Container/Dockerfiles in Fedora. We either had to update the Fedora infrastructure to use OpenShift Build Service 2.x; or use Kickstarts and Pungi, which uses Image Factory, to build the fedora-toolbox image. We chose the latter, because updating the infrastructure would be a significant effort, and by using Kickstarts and Pungi we get to stay close to the fedora and fedora-minimal images and simplify the infrastructure. The Fedora Flatpaks were also being built using the same ancient and unmaintained version of OpenShift Build Service, and they too are in the process being migrated. However, that’s outside the scope of this post. One big benefit of fedora-toolbox not being a layered image based on top of the fedora image is that it removes the constant fight against the efforts to minimize the size of the latter. The fedora-toolbox image is designed for interactive command line use in long-lived containers, and not for deploying server-side applications and services in ephemeral ones. This means that dictionaries, documentation, locales, iconv converter modules, translations, etc. are more important than reducing the size of the images. Now that the image is built from scratch, it has full control over what goes into it. Unfortunately, Image Factory is weakly maintained and setting it up on one’s local machine is a lot more complicated than using podman build. One can do scratch builds on the Fedora infrastructure with koji image-build --scratch, but only if they have been explicitly granted permissions, and then they have to download the tarball and use skopeo copy to place them in containers-storage so that Podman can see it. 
All that is again more complicated than doing a podman build. Due to this difficulty of untangling the image build from the Fedora infrastructure, we haven’t published the sources of the fedora-toolbox image for recent Fedora versions upstream. We do have a fedora-toolbox:39 image defined through a Container/Dockerfile, but that was done purely as a contingency during the Fedora 39 development cycle. This does degrade the developer experience of working on the fedora-toolbox image, but, given all the other advantages, we think that it’s worth it. As of this writing, there’s a Fedora 40 Change to switch to using KIWI to build the OCI images, including fedora-toolbox, instead of Image Factory. KIWI seems more strongly maintained and a lot easier to set up locally, which is fantastic. So, it should be all rainbows and unicorns, once we soldier through another port of the fedora-toolbox image to a different tooling and source language. Acknowledgements Last but not the least, getting all this done on time required a good deal of co-ordination and help from several different individuals. I must thank Sumantro for leading the effort; Kevin, Tomáš and Samyak for all the infrastructure and release engineering work; and Adam and Kamil for all the testing and validation. Toolbx now offers built-in support for Arch Linux and Ubuntu ︎
  • Andy Wingo: on the impossibility of composing finalizers and ffi (2024/02/26 10:05)
    While poking the other day at making a Guile binding for Harfbuzz, I remembered why I don’t much do this any more: it is impossible to compose GC with explicit ownership. Allow me to illustrate with an example.
    Harfbuzz has a concept of blobs, which are refcounted sequences of bytes. It uses these in a number of places, for example when loading OpenType fonts. You can get a peek at the blob’s contents back with hb_blob_get_data, which gives you a pointer and a length.
    Say you are in LuaJIT. (To think that for a couple years, I wrote LuaJIT all day long; now I can hardly remember.) You get a blob from somewhere and want to get its data. You define a wrapper for hb_blob_get_data:
    local hb = ffi.load("harfbuzz")
    ffi.cdef [[
    typedef struct hb_blob_t hb_blob_t;
    const char * hb_blob_get_data (hb_blob_t *blob, unsigned int *length);
    ]]
    Presumably you then arrange to release LuaJIT’s reference on the blob when GC collects a Lua wrapper for a blob:
    ffi.cdef [[
    void hb_blob_destroy (hb_blob_t *blob);
    ]]
    function adopt_blob(ptr)
       return ffi.gc(ptr, hb.hb_blob_destroy)
    end
    OK, so let’s say we get a blob from somewhere, and want to copy out its contents as a byte string.
    function blob_contents(blob)
       local len_out = ffi.new('unsigned int[1]')
       local contents = hb.hb_blob_get_data(blob, len_out)
       local len = len_out[0];
       return ffi.string(contents, len)
    end
    The thing is, this code is as correct as you can get it, but it’s not correct enough. In between the call to hb_blob_get_data and, well, anything else, GC could run, and if blob is not used in the future of the program execution (the continuation), then it could be collected, causing the hb_blob_destroy finalizer to release the last reference on the blob, freeing contents: we would then be accessing invalid memory. Among GC implementors, it is a truth universally acknowledged that a program containing finalizers must be in want of a segfault. The semantics of LuaJIT do not prescribe when GC can happen and what values will be live, so the GC and the compiler are not constrained to extend the liveness of blob to, say, the entirety of its lexical scope. It is perfectly valid to collect blob after its last use, and so at some point a GC will evolve to do just that.
    I chose LuaJIT not to pick on it, but rather because its FFI is very straightforward. All other languages with GC that I am aware of have this same issue. There are but two work-arounds, and neither are satisfactory: either develop a deep and correct knowledge of what the compiler and run-time will do for a given piece of code, and then pray that knowledge does not go out of date, or attempt to manually extend the lifetime of a finalizable object, and then pray the compiler and GC don’t learn new tricks to invalidate your trick. This latter strategy takes the form of “remember-this” procedures that are designed to outsmart the compiler. They have mostly worked for the last few decades, but I wouldn’t bet on them in the future.
    Another way to look at the problem is that once you have a system working—though, how would you know it’s correct?—then you either never update the compiler and run-time, or you become fast friends with whoever maintains your GC, and probably your compiler too.
    For more on this topic, as always Hans Boehm has the first and last word; see for example the 2002 Destructors, finalizers, and synchronization. These considerations don’t really apply to destructors, which are used in languages with ownership and generally run synchronously.
    Happy hacking, and be safe out there!
  • Flathub Blog: Introducing App Brand Colors (2024/02/26 00:00)
    We're gearing up to launch curated banners on the Flathub home page! However, before we can do that there's one more blocker: banners need a background color for each app, and many apps don't provide this metadata yet. This is why today we're expanding our MetaInfo quality guidelines and quality checks on the website. If you haven't yet, please add these colors to your app's MetaInfo file using the <branding/> appstream tag, and read on to learn more about brand colors.
    What are brand colors?
    App brand colors are an easy and effective way for app developers to give their listing a bit more personality in app stores. In combination with the app icon and name, they allow setting a tone for the app without requiring a lot of extra work, unlike e.g. creating and maintaining additional image assets.
    Why now?
    This idea was first implemented in elementary AppCenter, and later standardized as part of the AppStream specification. While it has been in AppStream itself for a few years, it was unfortunately not possible for Flathub's backend to pick it up until the recent port to libappstream. This is why many apps are still not providing this metadata—even if it was available from the app side we were unable to display it until now. Now that we can finally pick these colors up from AppStream MetaInfo files, we want to make use of them—and they are essential for the new banners.
    Adding brand colors
    Apps are expected to provide two different brand colors for light and dark. Here's an example of a MetaInfo file in the wild including brand colors. This is the snippet you need to include in your MetaInfo file:
    <branding>
      <color type="primary" scheme_preference="light">#faa298</color>
      <color type="primary" scheme_preference="dark">#7f2c22</color>
    </branding>
    In choosing the colors, try to make sure the colors work well in their respective context (e.g. don't use a light yellow for the dark color scheme), and look good as a background behind the app icon (e.g. avoid using exactly the same color to maintain contrast). In most cases it's recommended to pick a lighter tint of a main color from the icon for the light color scheme, and a darker shade for the dark color scheme. Alternatively you can also go with a complementary color that goes well with the icon's colors.
    What's next?
    Today we've updated the MetaInfo quality guidelines with a new section on app brand colors. Going forward, brand colors will be required as part of the MetaInfo quality review. If you have an app on Flathub, check out the guidelines and update your MetaInfo with brand colors as soon as possible. This will help your app look as good as possible, and will make it eligible to be featured when the new banners ship. Let's make Flathub a more colorful, exciting place to find new apps!
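    If you want to sanity-check the tags before pushing a build, the AppStream command line tool should be able to validate your MetaInfo file, branding block included (the app ID below is just a placeholder):
    appstreamcli validate --pedantic com.example.App.metainfo.xml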
  • Dorothy Kabarozi: Conversations in Open Source: Insights from Informal chats with Open Source contributors. (2024/02/21 05:06)
    Introduction
    Open source embodies collaboration, innovation, and accessibility within the technological realm. Seeking personal insights behind the collaborative efforts, I engaged in conversations with individuals integral to the open source community, revealing the diversity, challenges, and impacts of their work.
    Conversations Summary
    Venturing beyond my comfort zone, I connected with seasoned open source contributors, each offering unique perspectives and experiences. Their roles varied from project maintainers to mentors, working on everything from essential libraries to innovative technologies.
    Das: Shared valuable insights on securing roles in open source, including resources for applications and tips for academic writing and conference speaking. The best part with Das was that she also reviewed my resume, shared many ways I could make it outstanding, and shared templates to use for this. We had a really great chat.
    Samuel: A seasoned C/C++ programmer working mainly on open-source user-space driver development. He was kind enough to share his 20-year-long journey, how he started working with open source, and how he loves working with low-level hardware. He also commended Outreachy as a great opportunity and my contributions with GNOME in QA testing. He encouraged me to apply for roles at the company he’s working with, and highlighted: “Even if they say NO now, next time they will say YES”. Samuel also encouraged me to find my passion, as this will guide me to learn faster and create my personal brand, and encouraged me to submit some conference talks.
    Dustin: Shared his 20-year journey and we mostly talked about programming and software engineering in general. Highlighted the significance of networking and adaptability to learn quickly in open source. He shared a story of how he “printed out code on one of his first jobs and learnt a skill of figuring out early what you don’t need to understand when faced with a big code base”. This is one skill I needed at the start instead of drowning in documentation trying to understand the project and where to start.
    Stefan: Discussed his transition from a GSoC participant to a mentor, shared open source job links, and commended Outreachy as a big plus. He highlighted not to set yourself up by mentally blocking yourself into thinking you can’t do anything, because you can. He encouraged me to submit talks at conferences, network, and publish my work.
    These interactions showcased the wide-ranging backgrounds and motivations within the open source community and have deepened my respect for the open source community and its contributors. I have some homework to do with my resume and the links to opportunities that were shared with me. Open source welcomes contributors at all levels, offering a platform for innovation and collective achievement. Feel free to be an Outreachy intern in one of the upcoming cohorts to start your journey. Best of luck.
  • Flathub Blog: Improved build validation, increased moderation, and the long-awaited switch to libappstream (2024/02/21 00:00)
    Flathub's automatic build validation is more thorough now, and includes checks for issues we previously would have only flagged manually. There is a chance that if your app has been passing the continuous integration checks previously, it will fail now; here's why, and what to do about it. If your application no longer passes the build validation stage in either Buildbot (for apps maintained on GitHub) or flat-manager (for direct uploads), make sure to look for specific messages in the log. Explanations for various error messages can be found in the documentation. If you are interested in running the linter locally or integrating it with your own CI, please refer to the project page. We have also started moderating all permission changes and some critical MetaInfo changes. For example, if a build adds or removes a static permission (as seen in the finish-args array in the manifest) or changes the app’s user-facing name, it will be withheld for manual human review. Reviewers may reject a build and reach out for clarification about the change. Flathub has also switched to a modern, well-maintained AppStream library, known as libappstream. This enables developers to use all features described in the AppStream 1.0 specification, including specifying supported screen sizes for mobile devices, or video snippets to accompany static screenshots. It also improves the validation of AppStream metadata. Many thanks to Philip Withnall, Luna Dragon and Hubert Figuière for their work on this across the Flatpak stack, and Matthias Klumpp for implementing knobs needed by Flathub in the library itself. This work has been ongoing since 2021. At one point along the way we briefly switched over to libappstream, but had to revert due to unexpected breakage; however, today we are finally ready with all blocking issues addressed! While we were focused on closing the gaps to prevent potentially broken builds from being published, we regret that we failed to provide a heads-up about the coming validation changes. Any future breaking changes will be properly announced on this blog, and going forward we will also inform maintainers of affected apps about required changes in advance.
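    For reference, the linter is shipped inside the org.flatpak.Builder flatpak, so a local run should look roughly like the following (the manifest name is a placeholder; check the project page for the exact, current invocation):
    flatpak install flathub org.flatpak.Builder
    flatpak run --command=flatpak-builder-lint org.flatpak.Builder manifest com.example.App.json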
  • Carlos Garcia Campos: A Clarification About WebKit Switching to Skia (2024/02/20 18:11)
    In the previous post I talked about the plans of the WebKit ports currently using Cairo to switch to Skia for 2D rendering. Apple ports don’t use Cairo, so they won’t be switching to Skia. I understand the post title was confusing, I’m sorry about that. The original post has been updated for clarity.
  • Matthew Garrett: Debugging an odd inability to stream video (2024/02/19 22:30)
    We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. 
    You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link where your packet exceeds the link MTU it'll send back a packet telling the remote that it's too big, what the actual MTU is, and the remote will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything. What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped. All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray! (Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking.) I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that? (Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets.)
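    For anyone wanting to reproduce the workaround: the conventional shape of that rule is an MSS clamp on TCP SYN packets on the VPN endpoint, something like the line below (clamping to the path MTU; a fixed --set-mss value sized for the tunnel works too, and this is the generic recipe rather than necessarily the exact rule I used):
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu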
  • Carlos Garcia Campos: WebKitGTK and WPEWebKit Switching to Skia for 2D Graphics Rendering (2024/02/19 13:27)
    In recent years we have had an ongoing effort to improve graphics performance of the WebKit GTK and WPE ports. As a result of this we shipped features like threaded rendering, the DMA-BUF renderer, or proper vertical retrace synchronization (VSync). While these improvements have helped keep WebKit competitive, and even perform better than other engines in some scenarios, it has been clear for a while that we were reaching the limits of what can be achieved with a CPU based 2D renderer. There was an attempt at making Cairo support GPU rendering, which did not work particularly well due to the library being designed around stateful operation based upon the PostScript model—resulting in a convenient and familiar API, great output quality, but hard to retarget and with some particularly slow corner cases. Meanwhile, other web engines have moved more work to the GPU, including 2D rendering, where many operations are considerably faster. We checked all the available 2D rendering libraries we could find, but none of them met all our requirements, so we decided to try writing our own library. At the beginning it worked really well, with impressive results in performance even compared to other GPU based alternatives. However, it proved challenging to find the right balance between performance and rendering quality, so we decided to try other alternatives before continuing with its development. Our next option had always been Skia. The main reason why we didn’t choose Skia from the beginning was that it didn’t provide a public library with API stability that distros can package and we can use like most of our dependencies. It still wasn’t what we wanted, but now we have more experience in WebKit maintaining third party dependencies inside the source tree like ANGLE and libwebrtc, so it was no longer a blocker either. In December 2023 we made the decision to give Skia a try internally and see whether it would be worth the effort of maintaining the project as a third party module inside WebKit. In just one month we had implemented enough features to be able to run all MotionMark tests. The results on the desktop were quite impressive, doubling the global MotionMark score. We still had to do more tests on embedded devices, which are the actual target of WPE, but it was clear that, at least on the desktop, with this very initial implementation that was not even optimized (we kept our current architecture that is optimized for CPU rendering) we got much better results. We decided that Skia was the option, so we continued working on it and doing more tests on embedded devices. On the boards that we tried we also got better results than CPU rendering, but the difference was not as big, which means that with less powerful GPUs and with our current architecture designed for CPU rendering we were not that far from CPU rendering. That’s the reason why we managed to keep WPE competitive on embedded devices, but Skia will not only bring performance improvements, it will also simplify the code and allow us to implement new features. So, we had enough data to make the final decision of going with Skia. In February 2024 we reached a point in which our Skia internal branch was in an “upstreamable” state, so there was no reason to continue working privately. We met with several teams from Google, Sony, Apple and Red Hat to discuss our intention to switch from Cairo to Skia, upstreaming what we had as soon as possible.
We got really positive feedback from all of them, so we sent an email to the WebKit developers mailing list to make it public. And again we only got positive feedback, so we started to prepare the patches to import Skia into WebKit, add the CMake integration and the initial Skia implementation for the WPE port that already landed in main. We will continue working on the Skia implementation in upstream WebKit, and we also have plans to change our architecture to better support the GPU rendering case in a more efficient way. We don’t have a deadline, it will be ready when we have implemented everything currently supported by Cairo, we don’t plan to switch with regressions. We are focused on the WPE port for now, but at some point we will start working on GTK too and other ports using cairo will eventually start getting Skia support as well.
  • Federico Mena-Quintero: Rustifying libipuz: character sets (2024/02/18 04:21)
    It has been, what, like four years since librsvg got fully rustified, and now it is time to move another piece of critical infrastructure to a memory-safe language. I'm talking about libipuz, the GObject-based C library that GNOME Crosswords uses underneath. This is a library that parses the ipuz file format and is able to represent various kinds of puzzles. Libipuz is an interesting beast. The ipuz format is JSON with a lot of hair: it needs to represent the actual grid of characters and their solutions, the grid's cells' numbers, the puzzle's clues, and all the styling information that crossword puzzles can have (it's more than you think!). { "version": "http://ipuz.org/v2", "kind": [ "http://ipuz.org/crossword#1", "https://libipuz.org/barred#1" ], "title": "Mephisto No 3228", "styles": { "L": {"barred": "L" }, "T": {"barred": "T" }, "TL": {"barred": "TL" } }, "puzzle": [ [ 1, 2, 0, 3, 4, {"cell": 5, "style": "L"}, 6, 0, 7, 8, 0, 9 ], [ 0, {"cell": 0, "style": "L"}, {"cell": 10, "style": "TL"}, 0, 0, 0, 0, {"cell": 0, "style": "T"}, 0, 0, {"cell": 0, "style": "T"}, 0 ] # the rest is omitted ], "clues": { "Across": [ {"number":1, "clue":"Having kittens means losing heart for home day", "enumeration":"5", "cells":[[0,0],[1,0],[2,0],[3,0],[4,0]] }, {"number":5, "clue":"Mostly allegorical poet on writing companion poem, say", "enumeration":"7", "cells":[[5,0],[6,0],[7,0],[8,0],[9,0],[10,0],[11,0]] }, ] # the rest is omitted } } Libipuz uses json-glib, which works fine to ingest the JSON into memory, but then it is a complete slog to distill the JSON nodes into C data structures. You need iterate through each node in the JSON tree and try to fit its data into yours. Get me the next node. Is the node an array? Yes? How many elements? Allocate my own array. Iterate the node's array. What's in this element? Is it a number? Copy the number to my array. Or is it a string? Do I support that, or do I throw an error? Oh, don't forget the code to meticulously free the partially-constructed thing I was building. This is not pleasant code to write and test. Ipuz also has a few mini-languages within the format, which live inside string properties. Parsing these in C unpleasant at best. Differences from librsvg While librsvg has a very small GObject-based API, and a medium-sized library underneath, libipuz has a large API composed of GObjects, boxed types, and opaque and public structures. Using libipuz involves doing a lot of calls to its functions, from loading a crossword to accessing each of its properties via different functions. I want to use this rustification as an exercise in porting a moderately large C API to Rust. Fortunately, libipuz does have a good test suite that is useful from the beginning of the port. Also, I want to see what sorts of idioms appear when exposing things from Rust that are not GObjects. Mutable, opaque structs can just be passed as a pointer to a heap allocation, i.e. a Box<T>. I want to take the opportunity to make more things in libipuz immutable; currently it has a bunch of reference-counted, mutable objects, which are fine in single-threaded C, but decidedly not what Rust would prefer. For librsvg it was very beneficial to be able to notice parts of objects that remain immutable after construction, and to distinguish those parts from the mutable ones that change when the object goes through its lifetime. Let's begin! In the ipuz format, crosswords have a character set or charset: it is the set of letters that appear in the puzzle's solution. 
Internally, GNOME Crosswords uses the charset as a histogram of letter counts for a particular puzzle. This is useful information for crossword authors. Crosswords uses the histogram of letter counts in various important algorithms, for example, the one that builds a database of words usable in the crosswords editor. That database has a clever format which allows answering questions like the following quickly: what words in the database match ?OR?? — WORDS and CORES will match.

IPuzCharset is one of the first pieces of code I worked on in Crosswords, and it later got moved to libipuz. Originally it didn't even keep a histogram of character counts; it was just an ordered set of characters that could answer the question, "what is the index of the character ch within the ordered set?". I implemented that ordered set with a GTree, a balanced binary tree. The keys in the key/value tree were the characters, and the values were just unused. Later, the ordered set was turned into an actual histogram with character counts: keys are still characters, but each value is now a count of the corresponding character.

Over time, Crosswords started using IPuzCharset for different purposes. It is still used while building and accessing the database of words; but now it is also used to present statistics in the crosswords editor, and as part of the engine in an acrostics generator. In particular, the acrostics generator has been running into some performance problems with IPuzCharset. I wanted to take the port to Rust as an opportunity to change the algorithm and make it faster.

Refactoring into mutable/immutable stages

IPuzCharset started out with these basic operations:

    /* Construction; memory management */
    IPuzCharset *ipuz_charset_new   (void);
    IPuzCharset *ipuz_charset_ref   (IPuzCharset *charset);
    void         ipuz_charset_unref (IPuzCharset *charset);

    /* Mutation */
    void     ipuz_charset_add_text    (IPuzCharset *charset, const char *text);
    gboolean ipuz_charset_remove_text (IPuzCharset *charset, const char *text);

    /* Querying */
    gint  ipuz_charset_get_char_index (const IPuzCharset *charset, gunichar c);
    guint ipuz_charset_get_char_count (const IPuzCharset *charset, gunichar c);
    gsize ipuz_charset_get_n_chars    (const IPuzCharset *charset);
    gsize ipuz_charset_get_size       (const IPuzCharset *charset);

All of those are implemented in terms of the key/value binary tree that stores a character in each node's key, and a count in the node's value.

I read the code in Crosswords that uses the ipuz_charset_*() functions and noticed that in every case, the code first constructs and populates the charset using ipuz_charset_add_text(), and then doesn't modify it anymore — it only does queries afterwards. The only place that uses ipuz_charset_remove_text() is the acrostics generator, but that one doesn't do any queries later: it uses the remove_text() operation as part of another algorithm, but only that.

So, I thought of doing this:

    • Split things into a mutable IPuzCharsetBuilder that has the add_text / remove_text operations, and also has a build() operation that consumes the builder and produces an immutable IPuzCharset.
    • IPuzCharset is immutable; it can only be queried.
    • IPuzCharsetBuilder can work with a hash table, which turns the "add a character" operation from O(log n) to O(1) amortized. build() is O(n) on the number of unique characters and is only done once per charset.
    • Make IPuzCharset work with a different hash table that also allows for O(1) operations.
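To make that split concrete before diving into the implementation, here is a minimal usage sketch of my own from the Rust side; it assumes the CharsetBuilder type and the query methods that the rest of this post describes:

    // Sketch: populate a builder once, then freeze it into an immutable charset.
    let mut builder = CharsetBuilder::default();
    builder.add_text("WORDS");
    builder.add_text("CORES");

    let charset = builder.build();                     // consumes the builder
    assert_eq!(charset.get_char_count('O'), Some(2));  // queries are O(1) from here on
    assert_eq!(charset.get_char_count('Z'), None);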
Basics of IPuzCharsetBuilder

IPuzCharsetBuilder is mutable, and it can live on the Rust side as a Box<T> so it can present an opaque pointer to C.

    use std::collections::HashMap;

    #[derive(Default)]
    pub struct CharsetBuilder {
        histogram: HashMap<char, u32>,
    }

    // IPuzCharsetBuilder *ipuz_charset_builder_new (void);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_new() -> Box<CharsetBuilder> {
        Box::new(CharsetBuilder::default())
    }

For extern "C", Box<T> marshals as a pointer. It's nominally what one would get from malloc().

Then, simple functions to create the character counts:

    impl CharsetBuilder {
        /// Adds `text`'s character counts to the histogram.
        fn add_text(&mut self, text: &str) {
            for ch in text.chars() {
                self.add_character(ch);
            }
        }

        /// Adds a single character to the histogram.
        fn add_character(&mut self, ch: char) {
            self.histogram
                .entry(ch)
                .and_modify(|e| *e += 1)
                .or_insert(1);
        }
    }

The C API wrappers:

    use std::ffi::{c_char, CStr};

    // void ipuz_charset_builder_add_text (IPuzCharsetBuilder *builder, const char *text);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_add_text(
        builder: &mut CharsetBuilder,
        text: *const c_char,
    ) {
        let text = CStr::from_ptr(text).to_str().unwrap();
        builder.add_text(text);
    }

CStr is our old friend that takes a char * and can wrap it as a Rust &str after validating it for UTF-8 and finding its length. Here, the unwrap() will panic if the passed string is not UTF-8, but that's what we want; it's the equivalent of an assertion that what was passed in is indeed UTF-8.

    // void ipuz_charset_builder_add_character (IPuzCharsetBuilder *builder, gunichar ch);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_add_character(
        builder: &mut CharsetBuilder,
        ch: u32,
    ) {
        let ch = char::from_u32(ch).unwrap();
        builder.add_character(ch);
    }

Somehow, the glib-sys crate doesn't have gunichar, which is just a guint32 for a Unicode code point. So, we take in a u32 and check that it is in the appropriate range for Unicode code points with char::from_u32(). Again, a panic in the unwrap() means that the passed number is out of range; equivalent to an assertion.

Converting to an immutable IPuzCharset

    pub struct Charset {
        /// Histogram of characters and their counts plus derived values.
        histogram: HashMap<char, CharsetEntry>,

        /// All the characters in the histogram, but in order.
        ordered: String,

        /// Sum of all the counts of all the characters.
        sum_of_counts: usize,
    }

    /// Data about a character in a `Charset`.  The "value" in a key/value pair
    /// where the "key" is a character.
    #[derive(PartialEq)]
    struct CharsetEntry {
        /// Index of the character within the `Charset`'s ordered version.
        index: u32,

        /// How many of this character in the histogram.
        count: u32,
    }

    impl CharsetBuilder {
        fn build(self) -> Charset {
            // omitted for brevity; consumes `self` and produces a `Charset` by adding
            // the counts for the `sum_of_counts` field, and figuring out the sort
            // order into the `ordered` field.
        }
    }

(A rough sketch of what such a build() might look like, along with the query methods it enables, follows below.)

Now, on the C side, IPuzCharset is meant to also be immutable and reference-counted. We'll use Arc<T> for such structures.
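The post omits the body of build() for brevity, and only exercises the Charset queries indirectly through the C wrappers and tests further down. Purely as an illustrative sketch of my own, assuming the fields shown above and picking code-point order for the ordered string, those pieces could look roughly like this:

    impl CharsetBuilder {
        /// Sketch only: consume the builder and derive the immutable `Charset`.
        fn build(self) -> Charset {
            // Fix an order for the characters; sorting by code point is one option.
            let mut chars: Vec<(char, u32)> = self.histogram.into_iter().collect();
            chars.sort_by_key(|&(ch, _)| ch);

            let ordered: String = chars.iter().map(|&(ch, _)| ch).collect();
            let sum_of_counts: usize = chars.iter().map(|&(_, count)| count as usize).sum();

            let histogram = chars
                .into_iter()
                .enumerate()
                .map(|(index, (ch, count))| (ch, CharsetEntry { index: index as u32, count }))
                .collect();

            Charset { histogram, ordered, sum_of_counts }
        }
    }

    impl Charset {
        /// Number of distinct characters in the charset.
        fn get_n_chars(&self) -> usize {
            self.histogram.len()
        }

        /// Total number of characters counted, i.e. the sum of all the counts.
        fn get_size(&self) -> usize {
            self.sum_of_counts
        }

        /// Count for one character, or None if it is not in the charset.
        fn get_char_count(&self, ch: char) -> Option<u32> {
            self.histogram.get(&ch).map(|entry| entry.count)
        }

        /// Index of the character within the ordered set.
        fn get_char_index(&self, ch: char) -> Option<u32> {
            self.histogram.get(&ch).map(|entry| entry.index)
        }
    }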
One cannot return an Arc<T> to C code; it must first be converted to a pointer with Arc::into_raw():

    // IPuzCharset *ipuz_charset_builder_build (IPuzCharsetBuilder *builder);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_build(
        builder: *mut CharsetBuilder,
    ) -> *const Charset {
        let builder = Box::from_raw(builder); // get back the Box from a pointer
        let charset = builder.build();        // consume the builder and free it
        Arc::into_raw(Arc::new(charset))      // wrap the charset in Arc and get a pointer
    }

Then, implement ref() and unref():

    // IPuzCharset *ipuz_charset_ref (IPuzCharset *charset);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_ref(charset: *const Charset) -> *const Charset {
        Arc::increment_strong_count(charset);
        charset
    }

    // void ipuz_charset_unref (IPuzCharset *charset);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_unref(charset: *const Charset) {
        Arc::decrement_strong_count(charset);
    }

The query functions need to take a pointer to what really is the Arc<Charset> on the Rust side. They reconstruct the Arc with Arc::from_raw() and wrap it in ManuallyDrop so that the Arc doesn't lose a reference count when the function exits:

    // gsize ipuz_charset_get_n_chars (const IPuzCharset *charset);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_get_n_chars(charset: *const Charset) -> usize {
        let charset = ManuallyDrop::new(Arc::from_raw(charset));
        charset.get_n_chars()
    }

Tests

The C tests remain intact; these let us test all the #[no_mangle] wrappers. The Rust tests can just be for the internals, similar to this:

    #[test]
    fn supports_histogram() {
        let mut builder = CharsetBuilder::default();

        let the_string = "ABBCCCDDDDEEEEEFFFFFFGGGGGGG";

        builder.add_text(the_string);
        let charset = builder.build();

        assert_eq!(charset.get_size(), the_string.len());

        assert_eq!(charset.get_char_count('A').unwrap(), 1);
        assert_eq!(charset.get_char_count('B').unwrap(), 2);
        assert_eq!(charset.get_char_count('C').unwrap(), 3);
        assert_eq!(charset.get_char_count('D').unwrap(), 4);
        assert_eq!(charset.get_char_count('E').unwrap(), 5);
        assert_eq!(charset.get_char_count('F').unwrap(), 6);
        assert_eq!(charset.get_char_count('G').unwrap(), 7);
        assert!(charset.get_char_count('H').is_none());
    }

Integration with the build system

Libipuz uses meson, which is not particularly fond of cargo. Still, cargo can be used from meson with a wrapper script and a few easy hacks. See the merge request for details.

Further work

I've left the original C header file ipuz-charset.h intact, but ideally I'd like to automatically generate the headers from Rust with cbindgen. Doing it that way lets me check that my assumptions about the extern "C" ABI are correct ("does foo: &mut Foo appear as Foo *foo on the C side?"), and it's one fewer C-ism to write by hand. I need to see what to do about inline documentation; gi-docgen can consume C header files just fine, but I'm not yet sure how to make it work with headers generated by cbindgen.

I still need to modify the CI's code coverage scripts to work with the mixed C/Rust codebase. Fortunately I can copy those incantations from librsvg.

Is it faster?

Maybe! I haven't benchmarked the acrostics generator yet. Stay tuned!