Gnome Planet - Latest News

  • Matthew Garrett: SSH agent extensions as an arbitrary RPC mechanism (2024/06/12 02:57)
    A while back, I wrote about using the SSH agent protocol to satisfy WebAuthn requests. The main problem with this approach is that it required starting the SSH agent with a special argument and also involved being a little too friendly with the implementation - things worked because I could provide an arbitrary public key and the implementation never validated that, but it would be legitimate for it to start doing so and then break everything. It also only worked for keys stored on tokens that ssh supports - there was no way to extend this to other keystores on the client (such as the Secure Enclave on Macs, or TPM-backed keys on PCs). I wanted a better solution. It turns out that it was far easier than I expected. The ssh agent protocol is documented here, and the interesting part is the extension mechanism: you can declare an extension and then just tunnel whatever you want over it. As before, my go-to was the Go ssh agent package, which conveniently implements both the client and server side of this. Implementing the local agent is trivial - look up SSH_AUTH_SOCK, connect to it, create a new agent client that can communicate with it by calling NewClient, then implement the ExtendedAgent interface, create a new socket, and call ServeAgent against that. Most of the ExtendedAgent functions should simply call through to the original agent, with the exception of Extension(). Just add a case statement against extensionType, define some reasonably namespaced extension, and you're done. Now you need to use this agent. You probably don't want to use it for arbitrary hosts (agent forwarding should only be enabled for remote systems you trust, not arbitrary machines you connect to - if you enabled agent forwarding for github and github got compromised, github would be able to use any private keys loaded into your agent, and you probably don't want that).
So the right approach is to add a Host entry to the ssh config with a ForwardAgent stanza pointing at the socket you created in your new agent. That way the configured subset of remote hosts will automatically talk to this new custom agent, while forwarding for anything else remains at the user's discretion. For the remote end, things are even easier. Look up SSH_AUTH_SOCK and call NewClient as before, and then simply call client.Extension(). Whatever you stick in the contents argument will end up being received at the client end. You now have a communication channel between the remote system and the local client, and what you do with it is up to you. I'm using it to allow a remote system to obtain auth tokens from Okta and forward WebAuthn challenges that can either be satisfied via a local WebAuthn token or by passing the query off to Mac TouchID, but there are fundamentally no constraints whatsoever on what can be done here. (If you want to do this on Windows and still have everything work with existing clients you'll need to take this into account - Windows didn't really do Unix sockets until recently, so everything there is awful.)
  • Lennart Poettering: Announcing systemd v256 (2024/06/11 22:00)
    Yesterday evening we released systemd v256 into the wild. While other projects, such as Firefox, are just about to leave the 7bit world and enter 8bit territory, we already entered 9bit version territory! For details about the release, see our announcement mail. In the weeks leading up to this release I have posted a series of serieses of posts to Mastodon about key new features in this release. Mastodon has its goods and its bads. Among the latter is probably that it isn't that great for posting listings of serieses of posts. Hence let me provide you with a list of the relevant first post in each series here:
    Post #1: .v/ Directories
    Post #2: User-Scoped Encrypted Service Credentials
    Post #3: X_SYSTEMD_UNIT_ACTIVE= sd_notify() Messages
    Post #4: System-wide ProtectSystem=
    Post #5: run0 As sudo Replacement
    Post #6: System Credentials
    Post #7: Unprivileged DDI Mounts + Unprivileged systemd-nspawn
    Post #8: ssh into systemd-homed Accounts
    Post #9: systemd-vmspawn
    Post #10: Mutable systemd-sysext
    Post #11: Network Device Ownership
    Post #12: systemctl sleep
    Post #13: systemd-ssh-generator
    Post #14: systemd-cryptenroll without device argument
    Post #15: dlopen() ELF Metadata
    Post #16: Capsules
    I intend to do a similar series of serieses of posts for the next systemd release (v257), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity. And while I have you: note that the All Systems Go 2024 Conference (Berlin) Call for Papers ends 😲 THIS WEEK 🤯! Hence, HURRY, and get your submissions in now, for the best low-level Linux userspace conference around!
  • Miguel de Icaza: 11 Jun 2024 (2024/06/11 20:09)
    SwiftNavigation To celebrate that RealityKit is coming to MacOS, iOS and iPadOS and is no longer limited to VisionOS, I am releasing SwiftNavigation for RealityKit. Last year, as I was building a game for VisionPro, I wanted the 3D characters I placed in the world to navigate it: go from one point to another, avoid obstacles, and avoid each other. Almost every game engine in the world uses the C++ RecastNavigation library to do this - Unity, Unreal and Godot all use it. SwiftNavigation was born: both a Swift wrapper to the underlying C++ library, leveraging Swift's C++ interoperability capabilities extensively, and a direct integration into the RealityKit entity system. This library is magical: you create a navigation mesh from the world that you capture, and then you can query it for paths to navigate from one point to another, or you can create a crowd controller that will automatically move your objects. Until I have the time to write full tutorials, your best bet is to look at the example project that uses it.
  • Andy Holmes: GNOME 46 and Beyond (2024/06/11 18:27)
    With GNOME 46.2 released, it seems like a good time to write a post about goings-on in GNOME Online Accounts and other STF-funded initiatives. There's a lot to be excited about this cycle and most of it is leading to more improvements in the near future. # GNOME Online Accounts A lot happened in GNOME 46 for GNOME Online Accounts, including two new providers, a port to GTK4 and Adwaita, authentication in the desktop browser, and a large refactoring towards contemporary platform conventions. The new WebDAV and Microsoft 365 providers contrast quite a bit, although both made progress in the general direction we want to move. The existing Nextcloud provider was a good starting point for WebDAV, but support for more implementations and auto-discovery were important goals for our push towards open protocols and local-first principles. # WebDAV The WebDAV provider seemed like it would be fairly straightforward; however, feedback from the community has shown that several popular services, Fastmail among them, offer support for features like custom domain names and app passwords restricted by content type. These are great features, but not supported by a naive implementation of standard service discovery. The large refactoring in GNOME 46 de-duplicated a lot of code and is much easier to use, but a lot of the account setup process is still tightly coupled to the user interface. The new work being done to support more service configurations and improve the user experience is separate from the provider code, which has already led to the easy integration of some nice features: In the video above, you can see the implementation of Mail Autoconfig at work, detecting settings for Fastmail from the email address, in the otherwise unchanged Email setup dialog. I'd like to thank Tyler and the rest of the development team at Fastmail for extending me an account for additional testing this cycle.
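Mail Autoconfig can resolve settings from nothing but the email address by probing a handful of well-known locations. A minimal sketch of that candidate-URL derivation, assuming the lookup locations documented for Thunderbird's Autoconfig (the actual GNOME Online Accounts code is C, and the exact probe order here is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// autoconfigCandidates returns the URLs an Autoconfig-style lookup
// would try for an email address: the ISP's autoconfig subdomain,
// the well-known path on the domain itself, and Thunderbird's ISPDB.
func autoconfigCandidates(email string) []string {
	at := strings.LastIndex(email, "@")
	if at < 0 {
		return nil // not an email address
	}
	domain := email[at+1:]
	return []string{
		fmt.Sprintf("https://autoconfig.%s/mail/config-v1.1.xml?emailaddress=%s", domain, email),
		fmt.Sprintf("https://%s/.well-known/autoconfig/mail/config-v1.1.xml", domain),
		fmt.Sprintf("https://autoconfig.thunderbird.net/v1.1/%s", domain),
	}
}

func main() {
	for _, u := range autoconfigCandidates("user@fastmail.com") {
		fmt.Println(u)
	}
}
```

The first candidate that returns a valid config-v1.1.xml document wins; since no authentication is involved, the whole probe can run before the user has typed a password.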
By design, Mail Autoconfig doesn't need authentication, but this account was very helpful while improving support for content-restricted app passwords. These app passwords are especially relevant to WebDAV, which received a few adjustments to adopt an internal API compatible with the Mail Autoconfig implementation. While the WebDAV setup dialog hasn't landed the same UI feedback improvements yet, there is a work-in-progress that does: In this video, you can see an early prototype for a goal of a lot of this work. Thanks to the research and work of Éloi Rivard, we have an excellent whiteboard for Generic Service Providers, detailing a large amount of the subject matter. The immediate goal, then, is to offer an account type that's easy to set up, supports the common set of services (mail, calendar, contacts and files) and adapts to the available services using open protocols and standards. We still have longer-term changes planned to support a sandbox-friendly ecosystem, but it's not yet clear what form GNOME Online Accounts will take or how applications will interact with it. For this reason, all the new code supporting Mail Autoconfig and WebDAV was written for potential reuse later, without investing in becoming yet another Accounts SSO. # Microsoft 365 Much of the recent work on GNOME Online Accounts was funded by the Sovereign Tech Fund, while the Microsoft 365 provider is the work of Jan-Michael Brummer. The initial support includes access to OneDrive with Files and GVfs, with support for email, contacts and calendar planned for the near future. While our focus remains on open protocols and local-first principles, it's still a goal to support services that people use in their work and that provide benefit to the community. Something interesting the project gained from Jan's work is the first OAuth provider with support for user-provided client IDs.
Currently, every distributor and fork of GNOME Online Accounts has been using the GNOME Foundation's client IDs for OAuth providers. This can be problematic depending on the terms of service, and restrictive for those using enterprise or organizational accounts. Unfortunately, the first iterations of the setup dialog did not result in a good user experience. Microsoft's services come with their own concepts and terminology, which resulted in a lack of clear direction in the design of the interface during an already busy release cycle. The tight coupling between logic and user interface did not help here either, as evidenced by the double-modal: The feedback and issues reported have been the most help here, as we learn how Microsoft 365 is being used by the open-source community in both personal and work-related environments. Support for more services like email, calendar and contacts is planned, and hopefully some better support for organizational accounts. # Orca and Spiel Something I am thrilled to have the opportunity to take part in is Spiel, a new speech synthesis service by Eitan Isaacson that's a bit reminiscent of MPRIS. New speech providers can be easily written in C, Rust or any other language binding, and libspiel will aggregate them for the client. An interesting difference from Speech Dispatcher is that while the speech provider takes responsibility for speech synthesis, the client application takes responsibility for the audio output. Internally, the speech provider returns a file descriptor over D-Bus, and GStreamer is used to output the audio on the client side. While Spiel does have some exciting possibilities outside of screen readers, including new synthesizers like Piper, you may be surprised by the speech rate at which many users operate a screen reader. Léonie Watson has an excellent blog post titled Notes on synthetic speech, with plenty of audio clips and insights into how screen readers are used by real people.
I was fortunate enough to have the opportunity to integrate Spiel into Orca, which was a great learning experience and re-sparked my interest in accessibility in general. Something else to watch out for is Matt Campbell's Newton project, bringing a modern accessibility stack to Wayland. # Acknowledgements I'd like to again thank the Sovereign Tech Fund for investing in the GNOME project. Their targeted funding of infrastructure and accessibility has empowered a lot of overdue improvements for open source and the GNOME platform. I'd also like to thank the development team at Fastmail, who upon request graciously granted us an extended account to continue testing support for their service. The support staff at another service also extended a trial period as a show of good faith. It's been really encouraging to have these companies show support for the community - thank you! As always, GNOME's community of contributors and users has been the most help, with diligent reporting, code reviews and advice. There are so many things happening in open source, I really wouldn't be able to keep up without all your help.
  • Michael Meeks: 2024-06-10 Monday (2024/06/10 19:40)
    Mail chew; packed for OW2con; 1:1s with Andras and Miklos, M. drove me to the station, train(s) to Paris, caught up with admin. Eventually got to the hotel; met up with Meven for dinner - and back to work.
  • Michael Meeks: 2024-06-09 Sunday (2024/06/09 21:00)
    Up; All Saints, played Guitar with Mary and iSing; home for pizza lunch; Hopper wedding - played with Cedric and Sian & Mary - reception afterwards. Home to rest; finished All the light you cannot see or somesuch - good; then also About Time - unsuitable but well meant.
  • Alice Mikhaylenko: CSS Happenings (2024/06/07 13:33)
    This cycle GTK got a lot of updates to its CSS engine. I started this work as part of the Sovereign Tech Fund initiative, and later on Matthias Clasen joined in and we ended up splitting the work on colors. Let’s go through the changes, as well as how they affect GTK, libadwaita and apps. Variables The most notable addition is CSS variables, more correctly known as custom properties. Since before GTK switched to CSS for styles, it has had named colors, a non-standard addition providing similar functionality. You could define a color, then refer to it by name. Unfortunately, named colors have big limitations. First, they are global. You can only define colors for the whole stylesheet, and if you want to override them for a single widget subtree – you’re out of luck. Second, they are obviously only for colors. The only option to have them local to a widget was to use gtk_style_context_add_provider() on a GtkStyleContext obtained via gtk_widget_get_style_context(). However, that had an even bigger problem: being local to a widget, it didn’t propagate to children, which made it practically useless. Even widgets that seemingly don’t have children are affected: for example, if you add a provider to a GtkEntry, it won’t affect its text, or the icons, or the progress bar – they are all separate widgets within the entry. So, it shouldn’t be a big surprise that this API is deprecated. Variables, meanwhile, don’t have any of these limitations. They can be defined anywhere, propagate to children, and they can be anything – colors, numbers, dimensions, etc.
    For example, we can override them for a specific widget as follows:
    :root { --some-color: red; }
    my-widget { --some-color: green; }
    After defining a variable, it can be accessed with the var() function:
    my-widget { color: var(--some-color); }
    This function also allows specifying a fallback in case a variable doesn’t exist, as follows:
    my-widget { color: var(--some-color, var(--some-other-color, red)); }
    Here it will try, in order, --some-color, then --some-other-color, and if neither exists, red. The standard place to declare global variables is the :root selector, and so it’s now supported too. While GTK root widgets usually have the window selector, that’s not always the case. For example, GtkCheckButtons within a GtkTreeView are their own toplevels, and targeting window would not include them. While tree view is deprecated and hopefully on its way out, we have to support it anyway for now, and who knows what other root widgets may appear in the future – having :root solves it all nicely. Variables can even be animated, though that’s not particularly useful: since they can be anything and you may potentially be animating red into 2px, the only way to interpolate them is to have a jump from the initial value to the final value at 50%. Either way, the spec allows that and we implement it. You can still use them within another property’s value and animate that property instead – that will work as expected. There are some things we don’t support at the moment, like the @property at-rule. It allows declaring variable types, optionally preventing inheriting, and specifying the default value. Having a type not only provides type checking (and hence more informative error messages), but also allows actually interpolating variables in animations. See, if a variable can be anything, we can’t interpolate it other than with a jump in the middle. But if we can guarantee it’s a color, or a dimension, or a number, or whatever, at every keyframe, then we can.
    But, while that would be neat to have, it’s really niche compared to having variables themselves, and so not very important. Another thing is that it’s not possible to use variables within named colors, like this:
    @define-color my_color var(--something); /* this is an error */
    Since named colors will be going away in the future, that’s not a big deal, but it is worth mentioning anyway. The other way around does work though:
    @define-color something red;
    :root { --my-color: @something; /* this is perfectly fine */ }
    (and that’s fairly important, as it allows us to switch to variables without breaking backwards compatibility) Colors Next, colors. A lot of features from CSS Color Module Level 4 and Level 5 were implemented as well. Modern syntax In the past, CSS has used the following syntax for defining colors:
    rgb(255, 0, 0)
    rgba(255, 0, 0, 0.5)
    hsl(0, 100%, 50%)
    hsla(0, 100%, 50%, 0.5)
    That’s still supported, but modern CSS also has a simpler and more flexible syntax:
    rgb(255 0 0)
    rgb(255 0 0 / 50%)
    hsl(0 100 50)
    hsl(0 100 50 / 50%)
    It allows freely mixing percentages and numbers, and makes alpha an optional parameter separated by a solidus instead of having separate functions. And now GTK supports it too. Modern syntax also supports specifying missing components, for example: hsl(none 100% 50%). In most situations they behave the same as 0, but they make a difference for interpolation, so we’ll talk about that in more detail later.
    Additionally, it supports specifying hue in hsl() as an angle instead of a number (meaning that it’s possible to use units like turn instead of degrees), as well as calc() anywhere within these functions:
    hsl(calc(0.5turn - 10deg) 100% 50% / calc(1 / 2))
    More color spaces GTK also supports a bunch more color spaces now, all using the modern syntax:
    Color space   CSS
    Linear sRGB   color(srgb-linear 1 0 0)
    HWB           hwb(0deg 0 0)
    Oklab         oklab(62.8% 0.22 0.13)
    Oklch         oklch(62.8% 0.25 29)
    I won’t be describing the color spaces in detail, but that information can be easily found online. For example, Oklab and Oklch are very well described in their creator’s blog post. color() also supports sRGB, but it works the same as rgb(), except the channels have a 0-1 range instead of 0-255. color() in the spec supports a lot more color spaces, for example Display P3, but since we don’t have HDR support in GTK just yet, supporting them wouldn’t be of much use at the moment, so that’s omitted. Also omitted are Lab, LCh and various XYZ color spaces for now. Oklab/Oklch work better than Lab/LCh for UI colors anyway, and XYZ is fairly niche and not widely used in this context. However, just defining colors in different color spaces isn’t that interesting to me. What’s more interesting is deriving new colors from existing ones in those color spaces, so let’s look at that. Color mixing First, we have support for the color-mix() function. While GTK has had a non-standard mix() function for more than a decade, color-mix() is not only standard CSS, but also a whole lot more flexible. Most important is the fact that it allows mixing colors in different color spaces instead of just sRGB – all of the ones mentioned above:
    color-mix(in oklch, red, green)
    For HSL, HWB and Oklch, it’s possible to specify the hue interpolation mode too – for example, color-mix(in hsl longer hue, red, green). color-mix() also allows more sophisticated mixing via missing components.
    They allow some channels to be taken from one of the colors without taking the other one into account at all. For example, the following mix:
    color-mix(in srgb, rgb(100% none 100%), rgb(none 50% 0))
    takes the red channel from the first color, the green channel from the second color, and mixes the blue channel from both colors, resulting in the following color: color(srgb 1 0.5 0.5). For comparison, the same mix but with none replaced by 0:
    color-mix(in srgb, rgb(100% 0 100%), rgb(0 50% 0))
    mixes every channel and produces color(srgb 0.5 0.25 0.5). While mix() specifies a single fraction for the mix, color-mix() specifies two percentages. Usually they are normalized so that they add up to 100%, but if their sum is lower than that, it’s used as an alpha multiplier, allowing you to add transparency to the mix:
    color-mix(in srgb, red 30%, blue 20%) /* color(srgb 0.6 0 0.4 / 0.5) */
    Percentages are optional though. When one is omitted, it’s assumed to be the other one subtracted from 100%. When both are omitted, they are assumed to be 50%/50%. Relative colors GTK now also supports relative colors. Unlike mixing, these take one color and change its individual channels. For example, we can create a complementary color by inverting the hue:
    hsl(from var(--some-color) calc(h + 0.5turn) s l)
    Change a color’s lightness to a given value:
    oklab(from var(--some-color) 0.9 a b)
    Or add transparency to a color, like the SASS transparentize() function:
    rgb(from var(--some-color) r g b / calc(alpha - 25%))
    Combined with calc() and variables, this is a very flexible system, and I’m really curious to see how it will get used in the future. Math functions There’s also support for a lot of math functions:
    min(), max(), clamp()
    round(), rem(), mod()
    sin(), cos(), tan(), asin(), acos(), atan(), atan2()
    pow(), sqrt(), hypot(), log(), exp()
    abs(), sign()
    e, pi, infinity, NaN
    They can also be used in calc(), of course. I already found an interesting use case for two of them in libadwaita.
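As a hypothetical illustration of how relative colors, variables and the math functions compose (the selector and the exact numbers below are made up, not a GTK or libadwaita convention):

```css
/* Hypothetical: derive a hover shade from an accent variable by
   nudging the Oklab lightness down, clamped so that extreme accent
   colors still produce a usable result. */
my-widget:hover {
  background-color: oklab(from var(--accent-bg-color)
                          clamp(0.2, calc(l - 0.07), 0.9) a b);
}
```

Because the derivation lives in the stylesheet, any widget that overrides --accent-bg-color gets the matching hover shade for free.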
    Misc changes The opacity property can now accept percentages in addition to numbers. This wouldn’t matter much, but it means it can accept the same values as color-mix(). Now let’s look at other changes all of this allowed. GTK deprecations GTK has a bunch of non-standard additions to CSS, related to colors: most notably, @define-color and named colors, but also the alpha(), mix(), shade(), lighter() and darker() functions. They allow manipulating colors. Some are more useful than others, but, for example, alpha() is used extensively in libadwaita and modern GNOME apps – especially in combination with currentColor. For example, libadwaita buttons have the following background color: alpha(currentColor, 0.1); and can work with both light and dark backgrounds – they just follow the text color. As useful as they are, CSS now has standard ways to replace every single one of them, and apps are encouraged to do so. Named colors @define-color and named colors can be replaced with variables. For example, this snippet:
    @define-color my_color red;
    my-widget { color: @my_color; }
    becomes:
    :root { --my-color: red; }
    my-widget { color: var(--my-color); }
    mix() This is straightforward. color-mix() works exactly the same when using the sRGB color space.
    /* mix(red, blue, .3) */
    color-mix(in srgb, red 30%, blue)
    alpha() There are multiple ways to replace it. Both color-mix() and relative colors will do the job:
    /* alpha(currentColor, 0.15); */
    color-mix(in srgb, currentColor 15%, transparent)
    rgb(from currentColor r g b / calc(alpha * 0.15))
    Note that only the latter works for factors larger than 1. alpha() is also often used with hardcoded colors, usually black. In that case just defining the color as-is is sufficient:
    /* alpha(black, .8) */
    rgb(0 0 0 / 80%)
    shade() Shading is less obvious. One might assume that it would work the same as mixing with black or white, but… not quite.
    shade() converts the color to HSL, multiplies lightness and saturation, and converts back to sRGB. As such, mixing wouldn’t exactly work here; it produces close but subtly different results, even when done in the HSL color space. For some color/factor combinations it would be the same, but other times it will be different. Relative colors, meanwhile, allow manipulating each channel separately, and together with calc() we can do exactly the same thing:
    /* shade(red, 1.1) */
    hsl(from red h calc(s * 1.1) calc(l * 1.1))
    This is a lot more verbose, but it produces the same result as shade() for any color and factor. Of course, one could go further and use color spaces like Oklch or Oklab instead, and have more consistent results due to their perceptual nature. More on that later. lighter() and darker() Well, these are simple. They are just shade() with a hardcoded factor parameter: 1.3 for lighter() and 0.7 for darker(). As such, they are replaced the same way:
    /* lighter(red) */
    hsl(from red h calc(s * 1.3) calc(l * 1.3))
    /* darker(red) */
    hsl(from red h calc(s * 0.7) calc(l * 0.7))
    Libadwaita changes This has allowed cleaning up a lot of things in libadwaita styles too. Named colors? First of all, all of the existing named colors are available as variables too now. For backwards compatibility reasons their default values reference the old colors, for example:
    :root { --window-bg-color: @window_bg_color; --window-fg-color: @window_fg_color; }
    They are named the same way as the old colors, but with dashes in the names instead of underscores. One exception is that @borders became --border-color instead, for consistency with other colors. However, being variables, they can still be overridden per widget, have fallbacks and so on. NOTE: since they reference named colors, using these variables will produce deprecation warnings with GTK_DEBUG=css.
    This is a known issue, but there isn’t much that can be done here, other than breaking compatibility for all existing apps with custom styles. Also, overriding these variables won’t affect styles that use the old named colors directly. This isn’t a big issue if you’re porting your app (because you can replace both definitions and mentions at once), but may be a problem with libraries like libpanel which provide their own styles and use a lot of named colors. Once those libraries are ported though, it will work fine for both apps that have and haven’t been updated. All of the available CSS variables are documented on their page, just like the old named colors. If you’re using libadwaita 1.5 or below, the old docs are still where they were. It should be noted that compatibility colors like @theme_bg_color don’t have equivalent variables. Those colors only exist for compatibility, and apps using libadwaita shouldn’t be using them in the first place – specifically, that color can be replaced by --window-bg-color, and so on. This can be a problem for libraries like WebKitGTK, however, so maybe there should be an agreed-upon set of variables too. For now though, named colors still exist and won’t go away until GTK5, but this is something that needs to be figured out at some point. New variables There are a few additions as well: The opacity used for borders is also available separately now, as --border-opacity. The opacities used for the .dim-label style class and disabled widgets are available as well, as --dim-opacity and --disabled-opacity respectively. --window-radius refers to the window’s current corner radius. Normally it’s 12px, but it becomes 0 when the window is maximized, tiled and so on.
    It can be used for things like rounding focus rings near window corners only for floating windows, instead of using a long and unwieldy selector like this:
    window:not(.maximized):not(.tiled):not(.tiled-left):not(.tiled-right):not(.tiled-top):not(.tiled-bottom):not(.fullscreen):not(.solid-csd) my-widget { /* yikes */ }
    Style changes There were a few things that we wanted to do for a while, but that needed variables to be feasible. Now they are feasible, and are implemented. For example, the .osd style overrides the accent color to be light grey. Previously this was done with separate overrides for every single widget that uses accent, and in some cases blue accents sneaked through anyway. It also didn’t work with custom widgets defined in apps, unless they special-cased it themselves. Now it just overrides the accent color variables, and is both much simpler and actually consistent. Destructive buttons previously had blue focus rings, and now they are red, matching the buttons themselves. Moreover, apps can easily get custom-colored widgets like this themselves, with matching focus rings. Since previously we couldn’t override the accent color per widget, the way to recolor buttons (as well as checks, switches, etc.) was to just set the background-color and color properties manually. It worked, but obviously didn’t affect focus rings, and focus rings are complicated, so doing it manually wouldn’t be feasible. They have a very specific color with a specific opacity, a different opacity for the high contrast mode, and the actual selector changes a lot between widgets. In the case of buttons it’s button:focus:focus-visible, for entries it’s entry:focus-within, and so on. Not very good, so we didn’t encourage apps to change focus rings, and didn’t change them on destructive buttons either. But now that we have variables, people can just override the accent color, and focus rings will follow suit.
    For example, to make a green button, an app can just apply .suggested-action and override the accent color on it to green. So, for example, Calculator can now make its big result button orange and have a matching focus ring on it without changing the accent color in the whole app. Because of that, the .opaque style class has been deprecated. Instead, apps are encouraged to use .suggested-action like above. Meanwhile, GtkEntry with the .error, .warning and .success style classes has had red focus rings, as an exception. Later, AdwEntryRow gained the same styles, but that was messy overall. Now these style classes just set the accent color in addition to the text color, so these styles aren’t special cases anymore – they will work with any widget. Deriving accent colors Since 1.0, libadwaita has had two separate accent colors: --accent-bg-color and --accent-color. The former is suitable for use as a background on widgets like buttons, checks and switches, but usually has too low contrast to be used as a text color. The latter has a higher contrast, suitable for use as text, but it doesn’t work well as a background. That’s a bit confusing, and I’ve seen a lot of apps misusing them. Some apps set a custom accent color and just set both to the same value, so they don’t have enough contrast. Some apps mix up the colors and use the wrong one. And so on. It would be really nice if we could generate one from the other. Previously this was firmly in the realm of pipe dreams, but now that we have relative colors, it’s a lot more feasible. For example, the following functions produce consistently good colors for the light and dark styles respectively:
    /* light style */
    --accent-color: oklab(from var(--accent-bg-color) min(l, 0.5) a b);
    /* dark style */
    --accent-color: oklab(from var(--accent-bg-color) max(l, 0.85) a b);
    Here are some examples, for both light and dark style: Unlike the HSL color space, Oklab is perceptual, and lightness stays uniform regardless of the other channels.
    So, as simple as it sounds, just limiting the lightness does the job. In fact, we could even do this:
    /* light style */
    --accent-color: oklab(from var(--accent-bg-color) 0.5 a b);
    /* dark style */
    --accent-color: oklab(from var(--accent-bg-color) 0.85 a b);
    The downside is that --accent-bg-color: black; in the light style would produce a dark gray --accent-color instead of black, and --accent-bg-color: white; in the dark style would produce a light gray accent instead of white. This may be fine, but the goal here is to ensure minimum contrast, not to prevent too much contrast. These functions are also in the docs, but there’s one problem. So why not do it automatically? Say, we define these variables at :root, as follows:
    :root { --accent-bg-color: var(--blue-3); --accent-color: oklab(from var(--accent-bg-color) min(l, 0.5) a b); }
    Then, we override the accent color for a specific widget:
    my-green-widget { --accent-bg-color: var(--green-3); }
    But --accent-color would still be blue, so we would need to re-declare it – not very good. There is a way to get around that – use a wildcard:
    * { --accent-color: oklab(from var(--accent-bg-color) min(l, 0.5) a b); }
    But that has its own downsides – it may impact performance and memory use, like wildcards in CSS tend to do. That said, I haven’t profiled it and maybe it’s not that bad. Either way, for now we’re keeping --accent-color separate, but apps overriding it are welcome to use the above functions themselves, instead of picking the color by hand. Maybe in the future we’ll figure something out. Future There are a lot more things we could do if we didn’t need to care about backwards compatibility. For example, instead of having colors like --shade-color that are always supposed to be partially-transparent black, we could just provide the opacity as a variable. (or maybe even derive it from the background color lightness?) We could specify the accent color as a hue and/or chroma in the Oklch color space, while keeping the lightness consistent.
And so on. Can we drop SCSS entirely? A lot of people asked this or assumed that these additions were the few missing pieces for dropping it. As much as I’d like to, not yet. While this helps with reducing dependency on it a bit, there are still a lot of things that cannot be replicated, like @mixin and @extend. It also allows us to have private variables and control what gets exposed as API, which is pretty important for a library. In fact, there are situations where we had to add more SASS-specific things, tho mostly because we’re using the older (and unmaintained) sassc compiler instead of dart-sass. Being old and unmaintained, sassc doesn’t support the modern rgb() syntax, and errors out. There is a way to placate it, by using RGB() instead, but that’s a hack. It also doesn’t like the slash symbol (well, solidus) and thinks it’s division. This can be worked around with string interpolation, but it’s a hack too. So, instead of this: rgb(0 0 0 / if($contrast == 'high', 80%, 5%)) we have to do this: RGB(0 0 0 / #{if($contrast == 'high', 80%, 5%)}) (and then keep in mind that SASS won’t see this as a color, so you can’t use functions like transparentize() on it, not that we really need to) Finally, the opacity() function (used within the filter property) kept getting replaced with alpha(), but only when it contained a variable instead of a literal: filter: opacity(55%); /* SASS output: filter: opacity(55%); */ filter: opacity(var(--disabled-opacity)); /* SASS output: filter: alpha(var(--disabled-opacity)); */ I worked around this by once again manipulating case. opacity() was affected, but Opacity() wasn’t. All of this could be solved by migrating to dart-sass. This is difficult, however – GNOME SDK doesn’t have any Dart components at the moment, and it would need to build that first. A lot of manifests I’ve seen on Flathub just use pre-built SASS compiler, but I doubt this would be acceptable for the SDK. So, for now we’re stuck with sassc. 
Help is very much welcome, particularly with the SDK side of things. Programmatic API? Another question I see fairly often: is it possible to read variables programmatically? The short answer is: no. The longer answer is: variables can be anything. The only way to read them would be as a string. At that point, you’d need to have a CSS parser in your app to actually parse them into a useful value. For example, let’s say we have a variable declaring a color, like red. One might think: so what, just use gdk_rgba_parse(). Now, what about rgb(0 0 0 / 50%), the modern syntax? gdk_rgba_parse() doesn’t support that, but one might argue it should. Alright, what about this: color-mix(in oklab, currentColor, var(--something) calc(var(--base-percentage) * 0.5)) This is a potentially valid color, and yet there’s no way gdk_rgba_parse() would be able to parse this – how would it know what currentColor is when you don’t supply it a widget? It would need to resolve other variables somehow. And it would also need a full-blown calc() parser. Yeah, colors get complicated fast. It’s fine when we’re already inside a CSS parser, but gdk_rgba_parse() isn’t one. Now, could we have a function to get a variable value specifically as a color? Sure, except variables don’t have to be colors. You might also get 3px, 0 or asjdflaskjd. They don’t even have to be valid values, they can also be parts of the values, or even random gibberish. CSS doesn’t even try to compute variables on their own, only insert them into other values when you reference them with var() and then parse and compute that once every reference is resolved. The following is perfectly valid: my-widget { --space: srgb-linear; --r: 100%; --gb: 0 0; color: color(var(--space) var(--r) var(--gb) / 50%); } So, with that in mind, would API for fetching them be useful? IMO not really. There are very specific cases where it may work (e.g. 
if you define a variable containing an integer and then treat it as pixels in your code), but they are far too limited, break easily, and arguably are a misuse of CSS anyway. It would be a bit more feasible if we had @property, as then you can at least guarantee the variable type. Even then the type system is very flexible and one can define things like <color># | <integer>#. The valid variables with this type would be comma-separated lists of colors, and comma-separated lists of integers, so good luck with that. And we don’t even need to go that far: take a look at <image>. It can be a gradient (linear (possibly repeating), radial or conic), a URL, image() which produces a solid color image from a given color, or many other things. At best I can see getters being limited to, say, colors, numbers and dimensions. Many thanks to: Matthias Clasen for reviews and implementing HWB, Oklch and Oklab color spaces, relative colors, color interpolation and math functions. STF for funding this work. Sonny Piers and Tobias Bernard for organizing everything.
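To make the substitution model above concrete: CSS never computes a variable on its own, it only splices the variable’s raw text into the referencing value and parses the assembled result. Here is a toy Python sketch of that substitution step – purely illustrative, not GTK’s parser, and it ignores fallbacks and edge cases:

```python
import re

def substitute_vars(value, variables):
    """Splice var(--name) references in as raw, uninterpreted text.

    Only after every reference is resolved would a real CSS engine
    parse the assembled value. Repeats until no var() remains, since
    variables may reference other variables.
    """
    pattern = re.compile(r"var\((--[A-Za-z0-9-]+)\)")

    def lookup(match):
        return variables[match.group(1)]

    while pattern.search(value):
        value = pattern.sub(lookup, value)
    return value

# The example from the post: fragments of a color() value stored in
# separate variables, meaningless until assembled.
css_vars = {
    "--space": "srgb-linear",
    "--r": "100%",
    "--gb": "0 0",
}
print(substitute_vars("color(var(--space) var(--r) var(--gb) / 50%)", css_vars))
# color(srgb-linear 100% 0 0 / 50%)
```

This is why a getter like gdk_rgba_parse() on a single variable cannot work: "srgb-linear" or "0 0" in isolation is not a color, only the assembled string is.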
  • Felix Häcker: #151 Pride Month (2024/06/07 00:00)
Update on what happened across the GNOME project in the week from May 31 to June 07. This Week in GNOME is dedicated to the struggles of all lesbian, gay, trans, inter, bi, pan, asexual, aromantic, non-binary, and queer people. LGBTQIA+ people have always been an integral part of our project, across all different roles. We wish to take this opportunity to thank you all from the bottom of our hearts for all your contributions to GNOME and to our community as a whole. We as a project have a fundamental responsibility to ensure the safety and well-being of everyone in our community. This can only be accomplished by prioritizing marginalized people’s safety over privileged people’s comfort. To accomplish this we expect everyone to follow our Code of Conduct, we moderate our chat rooms via cross-community moderation lists, and we believe in speaking up whenever something is not right. This takes a remarkable effort every single day, and we still have a lot to learn and to improve. Let us learn and grow together, to become the best we can be. Our thoughts go out to all the people that have to fight for their right to exist. We stand with you and we fight with you. Sovereign Tech Fund Sonny reports As part of the GNOME STF (Sovereign Tech Fund) initiative, a number of community members are working on infrastructure related projects. Here are the highlights for the past two weeks: Tooling and QA Martin published a post on Discourse: Towards a better way to hack and test your system components. Feedback welcome! Martin made an initial implementation of sysext-utils based on the proposal ahead. Abderrahim made a GNOME OS single installer image that can install both the ostree and the sysupdate variants. The ostree variant can be dropped when sysupdate is no longer considered experimental. Platform Alice wrote a blog post about all of the CSS work. Alice landed AdwMultiLayoutView in libadwaita. Sophie added an option to apply image orientation to texture data in glycin. 
Sophie split the C bindings into libglycin and libglycin-gtk4 and added necessary features such that projects that don’t want/need to depend on GTK (like mutter) can use glycin without depending on it. Flatpak Dorota added an internal API to GNOME Shell that provides additional functionality for managing key grabs, suitable for the GlobalShortcuts portal. Hub fixed a leak in Flatpak that was spoiling the CLI output. Hub wrote the Flatpak docs chapter on fallback devices. António landed many refactors and part 1 of the FileChooser portal implementation for Nautilus. Accessibility Matt implemented support for the GtkTextView multi-line text widget in the GTK AccessKit implementation. Matt fixed a bug in libadwaita that caused the accessibility tree under an AdwTabView or AdwViewStack to be inconsistent, leading to a crash in AccessKit. Matt started key grabbing protocol support in Mutter/Orca to improve screen reader compatibility in Wayland. Georges fixed a bug in WebKit where accessible events aren’t notified – a multidisciplinary fix, pending reviews, but the relevant people are MIA (WebKit, Flatpak, xdg-dbus-proxy). Secrets Felix added a file monitor to Key Rack to watch for Flatpak app keyring changes. Felix landed a major refactor in Key Rack to merge logic for system and Flatpak items. Dhanuka implemented the secret prompt interface in oo7: the org.gnome.keyring.Prompter interface, the org.freedesktop.Secret.Prompt interface, and integration of the secret prompt into oo7-daemon. Interoperability Andy landed tons of things on GNOME Online Accounts: SRV lookup, parallel SRV lookups for DAV, IMAP/SMTP autoconfig, Kerberos KEYRING support (big battery improvements), a port to AdwDialog, and no longer showing account details in Settings when setup completes. Neill Whillans says More visible CVE reports for gnome-build-meta Recently we have made a few changes to the generation and accessibility of CVE reports for gnome-build-meta. 
These changes include the generation of additional CVE reports for both the vm and vm-secure manifests, for master and all future, supported release branches. We also make use of GitLab Pages to automatically publish not only the CVE reports for the master branch, but also reports for each currently supported release branch. A list of these individual reports can be obtained through a newly added badge that can be found at the top of gnome-build-meta’s README. This work was carried out as part of Codethink’s collaboration with the GNOME Foundation, through the Sovereign Tech Fund (STF). GNOME Core Apps and Libraries GTK Cross-platform widget toolkit for creating graphical user interfaces. Alice (she/her) reports I wrote a blog post about the recent work in GTK to modernize its CSS engine, and the implications for apps using GTK and libadwaita: GNOME Incubating Apps Sophie (she/her) reports Showtime has been accepted into the GNOME Incubator. The GNOME incubation process is for apps that are designated to be accepted into GNOME Core or GNOME Development Tools if they reach the required maturity. Showtime is slated to replace Totem, GNOME’s current video app. The reason is that Totem has now been unmaintained for nearly a year and still uses GTK 3. You can help improve Showtime by testing the nightly version and contributing. The incubation progress will be tracked in this issue. GNOME Circle Apps and Libraries Railway Travel with all your train information in one place. Markus Göllnitz says We have some great news about Railway! A contributor has done a tremendous job at shoring up the backend code of the application, as well as its library counterpart. Railway is no longer limited to the HAFAS API. 
🎉 That is not just a theoretical improvement: as the first newly supported network, you can test the Swiss SBB. If you use Railway and want to test this new network, or want to help us make sure we did not introduce obscure regressions, be sure to test our Railway beta release on Flathub beta: flatpak install If you do test it: Thank You! And feel free to share feedback with us at our issue tracker. Aaaand… they even went so far as to implement support for MOTIS. You never heard of that? It is used by an open data based routing project: the Transitous project. As they say about themselves, “Transitous is a community-run provider-neutral international public transport routing service.” If you hate the need to change the network in Railway, care about FLOSS server-side as much as client-side, and want to see open data in open formats, such as GTFS, you will love this. In the beta release, you can find it as “Worldwide (beta) using Transitous,” but bear in mind that official providers are still more accurate, more up-to-date, and will stay for a reason. If you want to see that changed, you can get involved with said project. gtk-rs Safe bindings to the Rust language for fundamental libraries from the GNOME stack. Bilal Elmoussaoui says It took a bit too long but we have finally released a blog post that covers the changes that landed in the latest Rust bindings release. Third Party Projects Nokse reports I have released Exhibit, an app to view 3D models, powered by F3D. Some of the features are: support for many file formats, including glTF, USD, STL, STEP, PLY, OBJ, FBX, Alembic and many more; many rendering and visual settings; HDRI or colored background; export render to image; drag and drop files. It installs a mimetype and icon for supported file types. For now it only works with X11/Xwayland because the library can only be compiled to work with one compositor at a time. 
Your browser doesn't support embedded videos, but don't worry, you can download it and watch it with your favorite video player! Daniel Wood announces Design - 2D CAD for GNOME has received an update with lots of changes. This update forms a foundation for more advanced drawing entities like dimensions. Highlights include: Manage text and line styles from the preferences window Update text and line style from the properties window Highlight design entities on mouse hover Fix opening files as arguments e.g. ‘open with’ from file browser Improve Trim and Extend command selection highlighting Fix behaviour when no commands have been performed Prevent selection of items on locked or frozen layers Fix ortho and polar snaps when typed point data is entered Update rotate command to match typical workflow Underline shortcut letter of command options Use consistent prompt for command with options Remove custom headerbar buttons from layers window Use RGB colours internally for all colour definitions Support DXF true colours Fix issue with layers being updated with incorrect state when editing layer data Draw entities using correct colour when part of a block, supporting colour ‘ByBlock’ Add Block command, enabling creation of blocks Add Explode command, enabling decomposition of blocks into primitive entities Add Hatch command and common hatch patterns Update to the latest GNOME dev kit icons Add Purge command, clearing unused blocks, layers and line types Show suggestions for invalid command input Get it on Flathub: Keypunch Practice your typing skills Brage Fuglseth reports How fast can you type? Find out with my new app, Keypunch! I’ve worked on this for the last couple of months, and it’s finally out. Keypunch lets you practice your typing skills with automatically generated pseudo-text in your language of choice. Alternatively, you can supply it with your own text, such as song lyrics, Wikipedia articles, and quotes. Get ready to accelerate your typing speed! 
Get it on Flathub: Shell Extensions Arca says Day Progress: an extension that shows a progress bar of your day to help you track your time. I actually released this last week, but it took another week for it to become stable enough to call it a stable release. Here’s how it works: you set the start and reset (end) times in the extension preferences (for example, the start and end of your workday) and it displays a progress bar of the current time in the top panel. Features: Customisable start and end times, including the ability to wrap around midnight Option to show time elapsed instead of remaining Customisable bar width and height Option to display a bar with smooth, curved ends (experimental) Customisable panel position and index Shows percentage and time elapsed and remaining in the tooltip (see image below) You can download the extension at: The GitHub repository is at: Internships Pablo Correa Gomez announces As part of GSoC, camelCaseNick landed support for floating zoom buttons in Papers! This design pattern was part of the mockups for more than 5 years, but was never implemented! This is a first step to making the top bar fit on mobile! Events Asmit Malakannawar reports Hello everyone, We’re thrilled to announce that the call for volunteers and the call for BoFs for GUADEC 2024 are now open! If you’re passionate about making GUADEC 2024 a success and want to lend a helping hand, please fill out the volunteer form. For those interested in presenting a workshop or leading a Birds of a Feather (BoF) session, we’d love to hear from you! Please fill out the BoFs form to submit your proposal. Volunteer Form Link BoFs Form Link GNOME Foundation Rosanna reports My week has been very busy with financial things. I had meetings with our bookkeeper and our accountant. With their support, our books have never looked as good as they do now. In addition, working within our board-approved balanced budget has meant that our numbers look healthier as well. 
This point was made more apparent to me when our accountant asked if we had considered finding higher interest accounts for our funds. More research on this is now on my to-do list. The Foundation’s Strategic Plan Draft is still open for feedback. Please submit using the link on that page. While we value all feedback, having it all in one place makes it easier for us to make sure we don’t lose any of your views. If you haven’t registered for GUADEC yet, please do so! The earlier we know how many people are coming, the easier it will be for us to plan. And don’t forget to register for the baseball game as well. Go Red Sox! We are still looking for more sponsors for GUADEC. If you or your employer are interested, please let us know! Caroline Henriksen reports In honor of Pride Month, we’d like to give a special shout out to our LGBTQIA+ community. Thank you for contributing your voices and efforts to GNOME, we appreciate all of your hard work! We’d also like to invite everyone to join us in celebrating Pride Month at a special GNOME Social Hour on June 24 at 17:30 UTC. Come chat with contributors and Foundation members, and share ideas on how we can better support our LGBTQIA+ community. That’s all for this week! See you next week, and be sure to stop by with updates on your own projects!
  • Peter Hutterer: goodbye xsetwacom, hello gsetwacom (2024/06/06 06:22)
Back in the day when presumably at least someone was young, the venerable xsetwacom tool was commonly used to configure wacom tablet devices on Xorg [1]. This tool is going dodo in Wayland because, well, a tool that is specific to an X input driver kinda stops working when said X input driver is no longer being used. Such is technology, let's go back to sheep farming. There's nothing hugely special about xsetwacom, it's effectively identical to the xinput commandline tool except for the CLI that guides you towards the various wacom driver-specific properties and knows the right magic values to set. Like xinput, xsetwacom has one big peculiarity: it is a fire-and-forget tool and nothing is persistent - unplugging the device or logging out would vanish the current value without so much as a "poof" noise [2]. It also somewhat clashes with GNOME (or any DE, really). GNOME configuration works so that GNOME Settings (gnome-control-center) and GNOME Tweaks write the various values to the gsettings. mutter [3] picks up changes to those values and in response toggles the X driver properties (or in Wayland the libinput context). xsetwacom short-cuts that process by writing directly to the driver but properties are "last one wins" so there were plenty of use-cases over the years where changes by xsetwacom were overwritten. Anyway, there are plenty of use-cases where xsetwacom is actually quite useful, in particular where tablet behaviour needs to be scripted, e.g. switching between pressure curves at the press of a button or key. But xsetwacom cannot work under Wayland because a) the xf86-input-wacom driver is no longer in use, b) only the compositor (i.e. mutter) has access to the libinput context (and some behaviours are now implemented in the compositor anyway) and c) we're constantly trying to think of new ways to make life worse for angry commenters on the internets. So if xsetwacom cannot work, what can we do? 
Well, most configurations possible with xsetwacom are actually available in GNOME. So let's make those available to a commandline utility! And voila, I present to you gsetwacom, a commandline utility to toggle the various tablet settings under GNOME: $ gsetwacom list-devices devices: - name: "HUION Huion Tablet_H641P Pen" usbid: "256C:0066" - name: "Wacom Intuos Pro M Pen" usbid: "056A:0357" $ gsetwacom tablet "056A:0357" set-left-handed true $ gsetwacom tablet "056A:0357" set-button-action A keybinding "<Control><Alt>t" $ gsetwacom tablet "056A:0357" map-to-monitor --connector DP-1 Just like xsetwacom was effectively identical to xinput but with a domain-specific CLI, gsetwacom is effectively identical to the gsettings tool but with a domain-specific CLI. gsetwacom is not intended to be a drop-in replacement for xsetwacom, the CLI is very different. That's mostly on purpose because I don't want to have to chase bug-for-bug compatibility for something that is very different after all. I almost spent more time writing this blog post than on the implementation so it's still a bit rough. Also, (partially) due to how relocatable schemas work, error checking is virtually nonexistent - if you want to configure Button 16 on your 2-button tablet device you can do that. Just don't expect 14 new buttons to magically sprout from your tablet. This could all be worked around with e.g. libwacom integration but right now I'm too lazy for that [4]. Oh, and because gsetwacom writes the gsettings configuration it is persistent, GNOME Settings will pick up those values and they'll be re-applied by mutter after unplug. And because mutter-on-Xorg still works, gsetwacom will work the same under Xorg. It'll also work under the GNOME derivatives as long as they use the same gsettings schemas and keys. The utility is dead, long live the utility! [1] The git log claims libwacom was originally written in 2009. By me. That was a surprise... 
[2] Though if you have the same speakers as I do you at least get a loud "pop" sound whenever you log in/out and the speaker gets woken up [3] It used to be gnome-settings-daemon but with mutter now controlling the libinput context this all moved to mutter [4] Especially because I don't want to write Python bindings for libwacom right now
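Since gsetwacom is "gsettings with a domain-specific CLI", its persistence boils down to writing keys under a relocatable schema path keyed by the tablet's USB id. A minimal Python sketch of that mapping – the exact schema name and path layout here are an assumption for illustration, not gsetwacom's actual code:

```python
def gsettings_set_command(usbid, key, value):
    """Build the gsettings invocation for one tablet setting.

    Assumes GNOME's per-device relocatable schema layout
    (org.gnome.desktop.peripherals.tablet with a path keyed by the
    lowercased USB id) -- treat schema and path as illustrative.
    """
    schema = "org.gnome.desktop.peripherals.tablet"
    path = "/org/gnome/desktop/peripherals/tablets/%s/" % usbid.lower()
    # gsettings addresses relocatable schemas as "schema:path".
    return ["gsettings", "set", "%s:%s" % (schema, path), key, value]

cmd = gsettings_set_command("056A:0357", "left-handed", "true")
print(" ".join(cmd))
```

Because the value lands in dconf rather than a driver property, it survives unplugs and logouts, which is exactly the persistence the post describes.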
  • Christian Hergert: Manuals on Flathub (2024/06/05 19:13)
Manuals contains the documentation engine from Builder as a standalone application. Not only does it browse documentation organized by SDK, it can also install additional SDKs. This is done using the same techniques Builder uses to manage your project SDKs. It should feel very familiar if you’re already using the documentation tooling in Builder. In the past, we would just parse all the *.devhelp2 files up-front when loading. GMarkupParseContext is fast enough that it isn’t too much overhead at start-up for a couple hundred files. However, once you start dealing with SDKs and multiple versions of all these files the startup performance can take quite a hit. So Manuals indexes these files into SQLite using GOM and performs queries using that instead. It conveniently makes cross-referencing easy too so you can jump between SDK revisions for a particular piece of documentation. Enjoy!
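The index-once-query-later approach can be sketched with stdlib pieces. This is not Manuals' actual code (which uses GOM and GMarkupParseContext); the table schema is invented and the tiny *.devhelp2 snippet is a hand-written example of the keyword entries such files contain:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A minimal hand-written stand-in for a *.devhelp2 file.
DEVHELP2 = """<book xmlns="http://www.devhelp.net/book" title="GLib">
  <functions>
    <keyword type="function" name="g_strdup" link="glib-Strings.html#g-strdup"/>
    <keyword type="function" name="g_free" link="glib-Memory.html#g-free"/>
  </functions>
</book>"""

NS = "{http://www.devhelp.net/book}"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE keywords (book TEXT, name TEXT, link TEXT)")

# One-time indexing pass: parse the XML, store the keywords.
root = ET.fromstring(DEVHELP2)
book = root.get("title")
rows = [(book, kw.get("name"), kw.get("link"))
        for kw in root.iter(NS + "keyword")]
db.executemany("INSERT INTO keywords VALUES (?, ?, ?)", rows)

# Later lookups are cheap queries instead of re-parsing every
# *.devhelp2 file at startup.
hits = db.execute(
    "SELECT name, link FROM keywords WHERE name LIKE ?", ("g_str%",)
).fetchall()
print(hits)
```

With hundreds of files across several SDK versions, the parse cost is paid once at index time and every subsequent search (or cross-reference between SDK revisions) is a database query.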
  • Alberto Garcia: More ways to install software in SteamOS: Distrobox and Nix (2024/06/05 15:53)
    Introduction In my previous post I talked about how to use systemd-sysext to add software to the Steam Deck without modifying the root filesystem. In this post I will give a brief overview of two additional methods. Distrobox distrobox is a tool that uses containers to create a mutable environment on top of your OS. With distrobox you can open a terminal with your favorite Linux distro inside, with full access to the package manager and the ability to install additional software. Containers created by distrobox are integrated with the system so apps running inside have normal access to the user’s home directory and the Wayland/X11 session. Since these containers are not stored in the root filesystem they can survive an OS update and continue to work fine. For this reason they are particularly suited to systems with an immutable root filesystem such as Silverblue, Endless OS or SteamOS. Starting from SteamOS 3.5 the system comes with distrobox (and podman) preinstalled and it can be used right out of the box without having to do any previous setup. For example, in order to create a Debian bookworm container simply open a terminal and run this: $ distrobox create -i debian:bookworm debbox Here debian:bookworm is the image that this container is created from (debian is the name and bookworm is the tag, see the list of supported tags here) and debbox is the name that is given to this new container. Once the container is created you can enter it: $ distrobox enter debbox Or from the ‘Debian’ entry in the desktop menu -> Lost & Found. Once inside the container you can run your Debian commands normally: $ sudo apt update $ sudo apt install vim-gtk3 Nix Nix is a package manager for Linux and other Unix-like systems. It has the property that it can be installed alongside the official package manager of any distribution, allowing the user to add software without affecting the rest of the system. 
Nix installs everything under the /nix directory, and packages are made available to the user through a new entry in the PATH and a ~/.nix-profile symlink stored in the home directory. Nix is many more things, including the basis of the NixOS operating system. Explaining Nix in more detail is beyond the scope of this blog post, but for SteamOS users these are perhaps its most interesting properties: Nix is self-contained: all packages and their dependencies are installed under /nix. Unlike software installed with pacman, Nix survives OS updates. Unlike podman / distrobox, Nix does not create any containers. All packages have normal access to the rest of the system, just like native SteamOS packages. Nix has a very large collection of packages, here is a search engine: The only thing that Nix needs from SteamOS is help to set up the /nix directory so its contents are not stored in the root filesystem. This is already happening starting from SteamOS 3.5 so you can install Nix right away in single-user mode: $ sudo chown deck:deck /nix $ wget $ sh ./install --no-daemon This installs Nix and adds a line to ~/.bash_profile to set up the necessary environment variables. After that you can log in again and start using it. Here’s a very simple example (refer to the official documentation for more details): # Install and run Midnight Commander $ nix-env -iA $ mc # List installed packages $ nix-env -q mc-4.8.31 nix-2.21.1 # Uninstall Midnight Commander $ nix-env -e mc-4.8.31 What we have seen so far is how to install Nix in single-user mode, which is the simplest one and probably good enough for a single-user machine like the Steam Deck. The Nix project however recommends a multi-user installation, see here for the reasons. 
Unfortunately the official multi-user installer does not work out of the box on the Steam Deck yet, but if you want to go the multi-user way you can use the Determinate Systems installer: Conclusion Distrobox and Nix are useful tools and they give SteamOS users the ability to add additional software to the system without having to modify the base operating system. While for graphical applications the recommended way to install third-party software is still Flatpak, Distrobox and Nix give the user additional flexibility and are particularly useful for installing command-line utilities and other system tools.
  • Christian Hergert: Red Hat Day of Learning (2024/06/04 22:26)
Occasionally at Red Hat we have a “Day of Learning” where we get to spend time learning about technology of our choice. I spent some time listening to various AI explanations which were suggested readings for the day. Nothing too surprising but also not exactly engaging to me. Maybe that’s because I grew up with a statistics professor for a father. So while that was playing I spent a little time learning how the GitLab API works. Immediately it stood out that one of the primary challenges in presenting UI for such an API would be in bridging GListModel to their implementation. So I spent a little time on the architecture for how you might want to do that in the form of a Gitlab-GLib library. Pagination There are essentially two modes of pagination supported by GitLab depending on the result set size. Both methods use HTTP headers to denote information about the result set. Importantly, any design would want to have an indirection object (in this case GitlabListItem) which can have its resource dynamically loaded. Resources (such as a GitlabProject, or GitlabIssue) are “views” into the JSON result set. They are only used for reading, not for mutating or creating resources on the API server. Offset-based Pagination This form is somewhat handy because you can know the entire result-set size up front. Then you can back-fill entries as accessed by the user using bucketed lazy-loading. When the bucketed page loads, the indirection objects are supplied with their resource object which provides a typed API over the JsonNode backing it. This is the “ideal” form from the consuming standpoint but can put a great deal of load on the GitLab server instance. Progressive Pagination The other form of pagination lets you fetch the next page after each subsequent request. The header provides the next page number. One might expect that you could use this to still jump around to the appropriate page. 
However, if the server does not provide the “number of pages” header, there is not much you can do to clamp your page range. Conclusion This was a fun little side-project to get to know some of the inner workings of the API backing what those of us in GNOME use every day. I have no idea if anything will come of it, but it certainly could be useful from Builder if anyone has time to run with it. For example, imagine having access to common GitLab operations from the header bar.
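The two pagination modes above can be distinguished purely from GitLab's documented pagination headers (X-Total-Pages, X-Next-Page): when the total is present, any page can be fetched directly for bucketed lazy-loading; when it is omitted, only forward walking is possible. A hedged sketch of that decision, with the strategy names being my own labels rather than anything from the Gitlab-GLib prototype:

```python
def plan_pagination(headers):
    """Pick a fetch strategy from GitLab's pagination headers.

    GitLab reports X-Total-Pages for small result sets (offset
    pagination: any page can be fetched on demand) and omits it for
    large ones, leaving only X-Next-Page (progressive pagination:
    you can only walk forward one page at a time).
    """
    total_pages = headers.get("X-Total-Pages")
    if total_pages:
        # Offset-based: total known up front, so pages can be
        # back-filled lazily as list items scroll into view.
        return ("offset", int(total_pages))
    next_page = headers.get("X-Next-Page")
    if next_page:
        # Progressive: only the next page number is known.
        return ("progressive", int(next_page))
    # No more pages to fetch.
    return ("done", None)

print(plan_pagination({"X-Total-Pages": "12", "X-Next-Page": "2"}))
print(plan_pagination({"X-Next-Page": "5"}))
```

In a GListModel bridge, the "offset" branch is what lets indirection objects report the full list length immediately while their resources load in the background.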
  • GNOME Foundation News: GUADEC 2024 Call for BoFs and Workshops (2024/06/04 14:55)
We have opened an additional call for submissions for Birds-of-a-Feather (BoF) sessions and Workshops for GUADEC 2024! BoF and Workshop sessions will be scheduled in one or two-hour blocks on Monday, July 22, and Tuesday, July 23. If you’re interested in hosting a session on either of these days, please fill out this form to apply. If you previously submitted a proposal during the call for proposals, your request has already been included in the conference schedule. The deadline for new submissions is 15th June. GUADEC 2024 will take place at Tivoli Student Union in the Auraria Campus. For more information, visit the GUADEC website. Please note that BoFs and Workshops will not be live-streamed. Sessions can be held either remotely or in person; GUADEC BoF rooms will be available for each track. If you are hosting in person and want to allow remote attendance you will need to use your own computers, cameras, and microphones to connect to the remote conference room. We look forward to receiving your BoF and Workshop proposals and making GUADEC 2024 a memorable event for everyone. For more details or to register, please visit the GUADEC website. We can’t wait to see you at GUADEC 2024! Submit a Proposal
  • Hari Rana: Libadwaita: Splitting GTK and Design Language (2024/06/03 00:00)
    Introduction Recently, the Linux Mint Blog published Monthly News – April 2024, which goes into detail about wanting to fork and maintain older GNOME apps in collaboration with other GTK-based desktop environments. Despite the good intentions of the author, Clem, many readers interpreted this as an attack against GNOME. Specifically: GTK, libadwaita, the relationship between them, and their relevance to any desktop environment or desktop operating system. Unfortunately, many of these readers seem to have a lot of difficulty understanding what GTK is trying to be, and how libadwaita helps. In this article, we’ll look at the history of why and how libadwaita was born, the differences between GTK 4 and libadwaita in terms of scope of support, their relevance to each desktop environment and desktop operating system, and the state of GTK 4 today. What Is GTK? First of all, what is GTK? GTK is a cross-platform widget toolkit from the GNOME Project, which means it provides interactive elements that developers can use to build their apps. The latest major release of GTK is 4, which brings performance improvements over GTK 3. GTK 4 also removes several widgets that were part of the GNOME design language, which became a controversy. In the context of application design, a design language is the visual characteristics that are communicated to the user. Fonts, colors, shapes, forms, layouts, writing styles, spacing, etc. are all elements of the design language.(Source) Unnecessary Unification of the Toolkit and Design Language In general, cross-platform toolkits tend to provide general-purpose/standard widgets, typically with a non-opinionated styling, i.e. widgets and design patterns that are used consistently across different operating systems (OSes) and desktop environments. However, GTK had the unique case of bundling GNOME’s design language into GTK, which made it far from generic, leading to problems of different lexicons, mainly philosophical and technical problems. 
Clash of Philosophies

When we look at apps made for the GNOME desktop (hereafter “GNOME apps”) as opposed to non-GNOME apps, we notice that they’re distinctive: GNOME apps tend to have hamburger buttons, header bars, larger buttons, and larger padding and margins, while most non-GNOME apps tend to be more compact and use menu bars, standard title bars, and many other design metaphors that may not appear in GNOME apps. This is because, from a design philosophy standpoint, GNOME’s design patterns tend to go in a different direction than most apps. As a brand and product, GNOME has a design language it adheres to, codified in the GNOME Human Interface Guidelines (HIG). As a result, GTK and GNOME’s design language clashed: instead of being as general-purpose as possible, GTK as a cross-platform toolkit contained an entire design language intended to be used only by a specific desktop, thus defeating the purpose of a cross-platform toolkit. For more information on GNOME’s design philosophy, see “What is GNOME’s Philosophy?”.

Inefficient Diversion of Resources

The unnecessary unification of the toolkit and design language also diverted a significant amount of effort and maintenance: instead of focusing solely on the general-purpose widgets that could be used across all desktop OSes and environments, much of the focus was on the widgets intended to conform to the GNOME HIG. Many of the general-purpose widgets also included features and functionality that were only relevant to the GNOME desktop, making them less general-purpose. As a result, the general-purpose widgets were implemented and improved slowly, and the large codebase made the GNOME widgets and design language difficult to maintain, change, and adapt. In other words, almost everything was hindered by the lack of independence on both sides.
Libhandy: the Predecessor

Because of the technical bottlenecks caused by these philosophical decisions, libhandy was created in 2017, with the first experimental version released in 2018. As described on its website, libhandy is a collection of “[b]uilding blocks for modern adaptive GNOME applications.” In other words, libhandy provides additional widgets that can be used by GNOME apps, especially those that use GTK 3. For example, Boxes uses libhandy, and many GNOME apps that used GTK 3 also used libhandy. However, some of the problems remained: since libhandy was relatively new at the time, most GNOME widgets were still part of GTK 3, which continued to suffer from the consequences of merging the toolkit and design language. Furthermore, GTK 4 was released at the end of December 2020 — after libhandy. Since libhandy was created before the initial release of GTK 4, it made little sense to fully address these issues in GTK 3, especially when doing so would have caused major breakage and inconvenience for GTK, libhandy, and app developers; it wasn’t worth the effort. With these issues in mind, the best course of action was to introduce all these major changes and breakages in GTK 4, use libhandy as an experiment to gain experience, and properly address these issues in a successor.

Libadwaita: the Successor

Because of all the above problems, libadwaita was created: libhandy’s successor, accompanying GTK 4. GTK 4 was initially released in December 2020, and libadwaita was released one year later, in December 2021. With the experience gained from libhandy, libadwaita managed to become extensible and easy to maintain. Libadwaita is a platform library accompanying GTK 4. A platform library is a library used to complement a specific platform; in the case of libadwaita, the platform it targets is the GNOME desktop.
Porting Widgets to Libadwaita

Some GNOME widgets from GTK 3 (or earlier versions of GTK 4) were removed or deprecated in GTK 4 and were reimplemented in, or transferred to, libadwaita, for example:

GtkDialog → AdwDialog 1
GtkInfoBar → AdwBanner

These widgets only benefited GNOME apps, as they were strictly designed to conform to the GNOME HIG. Non-GNOME apps usually didn’t use them, so they were practically irrelevant to everyone else. In addition, libadwaita introduced several widgets as counterparts to GTK 4 widgets, to comply with the HIG:

GtkHeaderBar → AdwHeaderBar
GtkAlertDialog → AdwAlertDialog
GtkAboutDialog → AdwAboutDialog

The GTK 4 widgets here (the ones starting with Gtk) are not designed to comply with the GNOME HIG. Since GTK 4 widgets are supposed to be general-purpose, they should not be platform-specific; the HIG no longer has any influence on GTK, only on the development of libadwaita.

Scope of Support

The main difference between GTK 4 and libadwaita is the scope of support, specifically the priorities in terms of the GNOME desktop, and desktop environment and OS support. While most resources are dedicated to GNOME desktop integration, GTK 4 is not nearly as focused on the GNOME desktop as libadwaita: GTK 4, while opinionated, still tries to stay close to the traditional desktop metaphor by providing general-purpose widgets, while libadwaita provides custom widgets that conform to the GNOME HIG. Since libadwaita is made only for the GNOME desktop, and the GNOME desktop is officially supported primarily on Linux, libadwaita primarily supports Linux. In contrast, GTK is officially supported on all major operating systems (Windows, macOS, Linux). However, since GTK 4 is mostly developed by GNOME developers, it works best on Linux and GNOME; hence “opinionated”.
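For reference, the widget moves above can be collapsed into plain data. The following illustrative Python snippet merely restates the article’s mapping; it is documentation-as-data, not an API of GTK or libadwaita:

```python
# GNOME-specific widgets that left GTK 4 for libadwaita...
MOVED_OR_REPLACED = {
    "GtkDialog": "AdwDialog",   # GtkDialog's core dialog machinery moved to GtkWindow
    "GtkInfoBar": "AdwBanner",
}

# ...and GTK 4 general-purpose widgets that gained HIG-styled counterparts.
HIG_COUNTERPARTS = {
    "GtkHeaderBar": "AdwHeaderBar",
    "GtkAlertDialog": "AdwAlertDialog",
    "GtkAboutDialog": "AdwAboutDialog",
}

# Every libadwaita type follows the Adw naming prefix convention.
assert all(
    name.startswith("Adw")
    for name in {**MOVED_OR_REPLACED, **HIG_COUNTERPARTS}.values()
)
```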
State of GTK 4 Today

Thanks to the removal of GNOME widgets from GTK 4, GTK developers can continue to work on general-purpose widgets without being influenced or restricted by the GNOME HIG. Developers of cross-platform GTK 3 apps that rely exclusively on general-purpose widgets can be more confident that GTK 4 won’t remove these widgets, and can hopefully enjoy the benefits that GTK 4 offers. At the time of writing, several cross-platform apps have either successfully ported to GTK 4 or are currently in the process of doing so. To name a few: the Freeciv gtk4 client, HandBrake, Inkscape, Transmission, and PulseAudio Volume Control. The LibreOffice developers are working on the GTK 4 port, behind the gtk4 VCL plugin option; for example, the libreoffice-fresh package from Arch Linux has it enabled. Here are screenshots of the aforementioned apps:

[Screenshots: the Freeciv gtk4 client in the game view; HandBrake in the main view; a development version of Inkscape in the main view; LibreOffice Writer with the experimental gtk4 VCL plugin in the workspace view.]
[Screenshots: Transmission in the main view; PulseAudio Volume Control in the Output Devices view.]

A GNOME App Remains a GNOME App, Unless Otherwise Stated

This is a counter-response to Thom Holwerda’s response to this article. An app targeting a specific platform will typically run best on that platform and will naturally struggle to integrate with other platforms. Whether the libraries change over time or stay the same forever, if the developers are invested in the platform they are targeting, the app will follow the direction of the platform and continue to struggle to integrate with other platforms; at best, it will integrate with other platforms by accident. In this case, developers who have targeted and will continue to target the GNOME desktop will actively adapt their apps to follow the GNOME philosophy, for better or worse. Hamburger buttons, header bars, typography, and distinct design patterns were already present a decade ago (2014). (Source) Since other platforms were (and still are) adhering to different design languages, with or without libhandy/libadwaita, the GTK 3 apps targeting GNOME were already distinguishable a decade ago. Custom solutions such as theming were (and still are) inadequate, as there was (and still is) no 🪄 magical 🪄 solution that converts GNOME’s design patterns into their platform-agnostic counterparts. Whether the design language is part of the toolkit or a separate library has no effect on integration, because GNOME apps already looked very different long before libhandy was created, and non-GNOME apps already looked “out of place” in GNOME as well.
Apps targeting a specific platform that unintentionally integrate with other platforms will eventually stop integrating with them as the target platform progresses and the apps adapt. In rare cases, developers may decide to no longer adhere to the GNOME HIG.

Alternate Platforms

While libadwaita is the most popular and widely used platform library accompanying GTK 4, there are several alternatives: Granite is developed and maintained by elementary, Inc., and focuses on elementary OS and the Pantheon desktop; apps that use Granite can be found in the elementary AppCenter. Libhelium is developed and maintained by Fyra Labs, and focuses on tauOS; apps using libhelium can be found in the “libhelium” topics on GitHub. There are also several alternatives to libhandy: libxapp is developed and maintained by Linux Mint, and focuses on multiple GTK desktop environments, such as Cinnamon, MATE, and Xfce; libxfce4ui is developed and maintained by Xfce, and focuses on Xfce. Just like libadwaita and libhandy, these platform libraries offer custom widgets and styling that differ from GTK and are built for their respective platforms, so it’s important to realize that GTK is meant to be paired with a complementary platform library that extends its functionality when targeting a specific platform. Similarly, Kirigami from KDE accompanies Qt to build Plasma apps; MauiKit from the Maui Project (another KDE project) also accompanies Qt, but targets Nitrux; and libcosmic by System76 accompanies iced to build COSMIC apps.

Conclusion

A cross-platform toolkit should primarily provide general-purpose widgets. Third parties should be able to extend the toolkit as they see fit through a platform library when they want to target a specific platform. As we’ve seen throughout the philosophical and technical issues with GTK, a lot of effort has gone into moving GNOME widgets from GTK 4 to libadwaita.
GTK 4 will continue to provide these general-purpose widgets for apps intended to run on any desktop or OS, while platform libraries such as libadwaita, Granite and libhelium provide styling and custom widgets that respect their respective platforms. Libadwaita is targeted exclusively at the GNOME ecosystem, courtesy of the GNOME HIG. Apps built with libadwaita are intended to run best on GNOME, while GTK 4 apps that don’t come with a platform library are intended to run everywhere. The core functionality of GtkDialog, i.e. creating dialogs, has been moved to GtkWindow. ↩
  • GNOME Foundation News: Exciting Updates on the GNOME Development Initiative and Sovereign Tech Fund (2024/05/31 19:26)
    GNOME Foundation Executive Director Holly Million had a call this week with Tara Tarakiyee, our program manager at Sovereign Tech Fund (STF), providing him with an update on the project work taking place under the Foundation’s current contract with STF and the Foundation’s plans to continue and expand the work. We’re thrilled to share those updates with the greater community here!

Key Updates

The contracted work continues to progress, and the Sovereign Tech Fund is very encouraged by what has been accomplished to date. The areas of work currently being funded by STF are planned to continue and to be strengthened and expanded as part of our new, permanent GNOME Development Initiative, as described in our draft strategic plan:

- The Foundation is reorganizing the project and hiring an additional program manager to work with current managers on the new Initiative. We are finalizing a contract for transitional work with the new manager and will make a formal announcement next week.
- We hope to significantly increase the amount of development work happening through the Initiative with a process that allows community suggestions for needed work and an application process for grants for proposed work.
- The Foundation recently applied to the Open Tech Fund to strengthen the Initiative, including proposing to hire a permanent full-time program manager and to invest in other important work to support our community.
- The Foundation will apply for a new round of contract funding when the Sovereign Tech Fund reopens for applications in mid-June.
- We have launched the GNOME Development Fund, which will raise additional support from the community to fuel the development work possible through the Initiative. Starting immediately, all donations made through the Fund will build the Initiative. This Fund page will continue to develop, with counters, a backer list, tiered benefits for backers at differing levels, and badges coming next.

Donate today to support the future of GNOME.
In other exciting news, the Foundation has new professional bookkeeping systems in place, completed a financial review in preparation for a required financial audit next year, and at the completion of the second quarter of this fiscal year, the Foundation is performing under budget and is on track in our commitment to having a non-deficit year. We will share more details, including graphs and financial details in a separate update soon. Learn More about the GNOME Development Fund
  • Felix Häcker: #150 Multiple Layouts (2024/05/31 00:00)
    Update on what happened across the GNOME project in the week from May 24 to May 31. GNOME Core Apps and Libraries Libadwaita Building blocks for modern GNOME apps using GTK4. Alice (she/her) announces libadwaita now has AdwMultiLayoutView, which allows defining multiple layouts and reparenting children between them. This makes it possible to completely reorganize a UI (say, turn a sidebar into a bottom bar, or a grid into a vertical box) by changing a single property, e.g. via a breakpoint setter. GNOME Circle Apps and Libraries Workbench A sandbox to learn and prototype with GNOME technologies. Sonny says Workbench is now available on the GNOME Nightly repository, and we welcomed 2 GSoC students. You can read about it here. NewsFlash feed reader Follow your favorite blogs & news sites. Jan Lukas says Newsflash can now play video attachments like those in video podcasts or YouTube subscriptions. This is thanks to the amazing video player Clapper, which is now also available as a library. In the process of integrating libclapper I generated Rust bindings for it, which are available here. Fretboard Look up guitar chords Brage Fuglseth announces The 1st of June is right around the corner, so why not pick up the guitar and learn some catchy tunes for the summer? I’ve just published version 7.0 of Fretboard, which brings more accurate chord name prediction, note names on hover for the neck top toggles, and a couple of fixes for various small issues encountered since the last release. Get Fretboard on Flathub! Third Party Projects Alain announces Planify 4.8: New Features to Enhance Your Task and Project Management! We are excited to announce the new update for Planify, our task and project management app. With version 4.8, we’ve added features that will help you organize and visualize your tasks more efficiently. Here are the most notable updates: 1.
Markdown Support in Task Descriptions Bring your descriptions to life! You can now use Markdown to format text in your task descriptions. This means you can add bold, italics, links, lists, and much more, allowing for greater customization and clarity in your descriptions. 2. Markdown Support in Task Titles Want to make your titles more striking and descriptive? With the new Markdown support in task titles, you can do just that. Use Markdown to highlight important parts of the title, making your tasks easier to identify at a glance. 3. New All Tasks View We know managing multiple projects and tasks can be challenging. That’s why we’ve introduced a new view that lets you see all your tasks and projects in one place. This feature provides you with a comprehensive overview of all your work, making it easier to plan and track your progress. 4. Expand or Collapse Tasks Functionality For better visual management, you can now expand or collapse tasks. This functionality allows you to quickly show or hide task details, helping you focus on what’s most important at any given moment without getting lost in the information. What to Expect from These Improvements? With these new features, Planify 4.8 becomes an even more powerful tool for managing your tasks and projects. Markdown support offers flexibility in how you present information, while the new view and the ability to expand or collapse tasks enhance the app’s usability and efficiency. Update to version 4.8 and discover how these new features can boost your productivity and organization! Cleo Menezes Jr. says Introducing Aurea, an app for Flatpak application developers. Aurea is a simple banner previewer that reads metainfo files and displays them as they will appear in Flathub banners, making app publication easier. Kudos to Tobias Bernard for the design. Get it on Flathub DaKnig says DewDuct 0.2.2: Wow! Couldn’t imagine I would get so many users! Thanks for all the feedback! 
New features this week: Import your NewPipe subscriptions (but better)! Or maybe manually subscribe to channels! The subscription list is saved and reloaded automatically. Regarding flatpak, I will work on it once the main feature is implemented: recent videos from your subscriptions. Please help via PR so that the flatpak will come out sooner. The new version is available in Alpine, on the testing repo, or on postmarketOS edge. Check it out if you have a Linux phone! Turtle Manage git repositories in Nautilus. Philipp says Overcoming flatpak limitations Turtle 0.9 has been released, which finally brings better flatpak support. The turtle service is now available in the flatpak version and uses D-Bus to communicate with the Nautilus extension. A new plugin installer dialog, which can be opened directly from the Settings dialog, allows you to install the Nautilus plugin file with a single click. Additionally, it is now possible to sign commits via the Seahorse D-Bus interface. Crosswords A crossword puzzle game and creator. jrb announces Crosswords 0.3.13 was released! This version features several keyboard behavior cleanups for the main game, providing a more natural feel when entering puzzles. For the editor, we moved the autofill functionality in-line instead of keeping it in a modal dialog. Autofilling has been made faster and more correct as well. Check out the release announcement for more information and download it from Flathub! GNOME Foundation Caroline Henriksen announces Final location proposals for GUADEC 2025 are due today! Make sure to submit yours by end-of-day or contact the Foundation if you have additional questions or need more time. More details and links to the submission form can be found here: Holly Million announces We have some exciting updates from the GNOME Foundation.
Executive Director Holly Million had a call this week with Tara Tarakiyee, our program manager at Sovereign Tech Fund, providing him with an update on the project work taking place under the Foundation’s current contract with STF and the Foundation’s plans to continue and expand the work. The updates included: The contracted work continues to progress, and the Sovereign Tech Fund is very encouraged by what has been accomplished to date. The areas of work currently being funded by STF are planned to continue and to be strengthened and expanded as part of our new, permanent GNOME Development Initiative, as described in our draft strategic plan. The Foundation is reorganizing the project and hiring an additional program manager to work with current managers on the new Initiative. We are finalizing a contract for transitional work with the new manager and will make a formal announcement next week. We hope to significantly increase the amount of development work happening through the Initiative with a process that allows community suggestions for needed work and an application process for grants for proposed work. The Foundation recently applied to the Open Tech Fund to strengthen the Initiative, including proposing to hire a permanent full-time program manager and to invest in other important work to support our community. The Foundation will apply for a new round of contract funding when the Sovereign Tech Fund reopens for applications in mid-June. We have launched the GNOME Development Fund, which will raise additional support from the community to fuel the development work possible through the Initiative. Starting immediately, donations made through the Fund will build the Initiative. This Fund page will continue to develop, with counters, a backer list, tiered benefits for backers at differing levels, and badges coming next. Donate today to support the future of GNOME.
In other exciting news, the Foundation has new professional bookkeeping systems in place, completed a financial review in preparation for a required financial audit next year, and at the completion of the second quarter of this fiscal year, the Foundation is performing under budget and is on track in our commitment to having a non-deficit year. We will share more details, including graphs and financial details soon. That’s all for this week! See you next week, and be sure to stop by with updates on your own projects!
  • Daniel García Moreno: rpmlint: Google Summer of Code 2024 (2024/05/29 22:00)
    I'm glad to say that I'll participate again in GSoC, as a mentor. This year we will continue the work done during the past year, as part of the openSUSE project. So this summer I'll be mentoring an intern and we'll continue working on improving the testing framework of the rpmlint project. This year we have a better testing framework, thanks to the work done during the past Summer of Code by Afrid. So the goal for this year is to try to modernize existing tests and remove as many files as possible from the test/binary directory, replacing them with mock packages defined in Python code. The selected intern is Luz Marina Montilla Marín. She has done some initial work in the rpmlint project, creating the mock packages for some tests, and we've just started with the work to do during the GSoC program, evaluating the tests that we have right now and planning where to start. She studies at Córdoba, Spain, my hometown. Every year I try to reach young people at different local universities here in Andalucía, and sometimes I'm able to convince some students to participate, like in GSoC 2020, when Alejandro Dominguez, from Seville, was working on Fractal. So I'm happy that I'm increasing the number of free software developers in my local community :D I'm sure that she will be able to achieve great things during these three months, so I'm looking forward to starting to code and seeing how far we can go.
  • Tamnjong Larry Tabeh: Hello Planet Gnome! (2024/05/29 03:24)
    Hello World! My name is Tamnjong Larry, and I live in a moderately sized town in Cameroon called Bamenda. My primary experience in the tech field has been in backend development using .NET and Java. Despite my focus on backend development, I believe the true magic in software happens when users interact with the interface — clicking buttons, scrolling bars, switching tabs, and more. Seeing users interact with systems, and how intuitive designs make their lives easier, I found a deeper fulfillment. I have always been interested in open source. My new passion for design and understanding user behavior gave me an added boost to get involved in open source, which is the best place to learn from actual experts building interfaces that millions of people use. I applied to the Outreachy May to August 2024 cohort. I found the GNOME project, “Conduct a series of short user research exercises, using a mix of research methods,” to be the best match for my goal of learning about the UI/UX design process. I love interacting with people, asking questions, and understanding them. I believe in:

Adventure: I should try out new things.
Contribution: I should be able to give back to the world something of value, no matter how small it is.
Optimism: Tomorrow is always better than today as long as I don’t give up today.

I am happy to be part of the amazing GNOME community. Over the next few months, I will share my journey here. I am excited to be working with amazing mentors, Allan Day and Aryan Kaushik.
  • Sonny Piers: Workbench News (2024/05/28 14:22)
    Nightly

Workbench is now available on the GNOME Nightly repository. Please prefer Workbench from Flathub, but if you're a GNOME contributor, Workbench nightly can come in handy:

flatpak remote-add --if-not-exists gnome-nightly https://nightly.gnome.org/gnome-nightly.flatpakrepo
flatpak install gnome-nightly re.sonny.Workbench.Devel

It is the first app on GitHub to be available on GNOME Nightly. Thanks to Jordan Petridis and Bilal Elmoussaoui for the help.

GSoC

I'm very happy to announce that as of yesterday we are mentoring 2 students on Workbench. Angelo Verlain (aka vixalien) is a student from Kigali, Rwanda. Angelo is already a GNOME Foundation member and has made significant contributions, including an audio player app “Decibels” that is being incubated to join the GNOME core apps. Bharat Tyagi is a student from Jaipur, India. Bharat made great contributions to Workbench during the GSoC contribution period, and I'm looking forward to seeing more of it. You can read their introduction here. Angelo is working on TypeScript support in Workbench and GNOME. Bharat is working on porting the remaining demos to Vala, redesigning the Library, and adding code search to it. Very happy to be working with both of them!
  • Felipe Borges: GNOME will have two Outreachy interns conducting a series of short user research exercises (2024/05/28 08:51)
    We are happy to announce that GNOME is sponsoring two Outreachy internship projects for the May-August 2024 Outreachy internship round, during which the interns will be conducting a series of short user research exercises using a mix of research methods. Udo Ijibike and Tamnjong Larry Tabeh will be working with mentors Allan Day and Aryan Kaushik. Stay tuned to Planet GNOME for future updates on the progress of this project!
  • Cassidy James Blaede: Recovering the BIOS on a Dell XPS 13 (9310) (2024/05/28 00:00)
    It seems like every few months my work Dell XPS 13 just… dies. Or at least, it seems like it for a few minutes, making me panic about being able to do my work and wondering about backups. And then I remember that it just likes to play dead for no reason—maybe I recently did a BIOS update, maybe I had to open its chassis for some reason (like a jittery trackpad… another blog post I need to write), maybe the battery was totally discharged, or maybe it just wants to troll me. Idk. But it happens more regularly than it should. The symptoms are more or less the same:

- It doesn’t turn on
- If it does turn on (e.g. after leaving it plugged in for a while and/or holding the power button), I get a Dell logo and some blinking lights
- If I disconnect the battery and try to boot up, I get other blinky lights and/or an exciting RGB rave on the display
- The BIOS is just… inaccessible

Perhaps the most frustrating thing about this situation is that Dell’s own documentation is terrible. They lean too heavily on forums with bad advice, their support site is hard to search, and different documentation will tell you to do different things for no discernible reason. The service manual is hard to find and has some useful information, but then too often tells you to contact their technical support—who will refuse to help if it’s out of warranty. I last documented how to recover from this on Mastodon when it happened to me in December, but surprise, it happened to me again! At this point I figured I’d make it more easily searchable/findable by documenting it in a blog post. Here goes.

Charge the Battery

First, if your Dell XPS is failing to boot, try plugging it into power for an hour or so, then try booting after that.
It’s possible the battery is just low and the laptop is refusing to boot—I’ve had this more than once, and the laptop frustratingly does a bad job of telling you what’s happening—and it seems to require being charged up more than just a little bit to turn back on after this happens. I don’t know why you can’t just connect power and boot up with a dead battery; I guess that would be too easy, and Dell prefers you to waste an hour without being able to use your computer. Neat stuff! If you’ve done this and still just get the Dell logo (and maybe some blinky lights from the battery/status LED strip below the trackpad), carry on. But note that you need at least 10–15% battery charge to do the BIOS recovery, anyway, so don’t skip this first step.

Discharge (a.k.a. “flea power release”)

I’m not sure how critical this is, but several versions of Dell’s documentation told me to do it.

1. Unscrew the bottom panel (with a T5 torx bit)
2. Disconnect the internal battery cable
3. Hold the power button for 30 secs

Yes, you will look like a fool doing this with the laptop propped up on its display so you can reach things. Next:

Important! Keep the laptop display open until told otherwise! The XPS 13 9310 (and probably newer) power on when the display is opened, which will undo your hard work.

1. Re-connect the internal battery cable
2. Carefully snap the bottom panel on—I’m not 100% sure this is needed, but I think it is because there’s a hardware tamper switch that tells the computer when the bottom panel is off

Remember to keep the display open this whole time, too—I know, it’s a pain.

Download the Latest BIOS Version

Search Dell’s site for your model and find the latest BIOS file; for my XPS 13 9310 (not 2-in-1), it’s here. Don’t just Google for the model/BIOS, because Dell’s support site has terrible SEO/tagging. Instead, head to the Dell website, go to the support section for drivers and downloads, and search for your specific serial number/model to find the very latest BIOS version.
Download the latest BIOS file in the Recovery Image (.rcv) format if offered, otherwise rename the .exe to BIOS_IMG.rcv. Format a USB flash drive as FAT32, then drop the BIOS file at the root of the drive. You’ll also want a USB-C to USB-A adapter handy (unless you’re using a fancy USB-C flash drive).

Recover the BIOS

Important! If you can’t get this step to work, repeat the previous step, trying with a different flash drive.

1. While keeping the laptop powered off, plug in your flash drive; again, I’m not sure how much this matters, but I seemed to only have success when using the right-hand USB port (near the power button) for the flash drive—it could just be a coincidence, but it seems to be the case for me
2. While still keeping the laptop powered off, hold the Ctrl+Esc keys on the keyboard
3. While still holding the keys, plug in your AC adapter (e.g. USB-C charger); once the Caps Lock key light lights up, release the keys
4. If all goes well, you should eventually get a BIOS recovery screen; it might reboot/flash a few times before getting to this screen, so be patient

Let me know on Mastodon if this worked for you, or if you have other tips. I still don’t know why I have to do this every once in a while, but it’s honestly making me reconsider Dells altogether. I really like the hardware, otherwise, but these sorts of issues have just been pretty common across models and years—I think for my next laptop, I’m eyeing that Star Labs StarFighter. 👀
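As a small postscript: the “rename the .exe and drop it at the drive root” step above can be scripted. This is a minimal sketch using only Python’s standard library; the function name and both paths are placeholders of mine, not anything from Dell or the post:

```python
import shutil
from pathlib import Path

def prepare_recovery_file(bios_download: Path, usb_root: Path) -> Path:
    """Copy the downloaded BIOS file to the (FAT32) flash drive root under
    BIOS_IMG.rcv, the exact filename the Dell recovery loader looks for."""
    target = usb_root / "BIOS_IMG.rcv"
    shutil.copyfile(bios_download, target)
    return target
```

Usage would look something like `prepare_recovery_file(Path("XPS_9310_BIOS.exe"), Path("/run/media/you/USBSTICK"))`, with the real download name and mount point substituted in.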
  • Andy Wingo: cps in hoot (2024/05/27 12:36)
Good morning good morning! Today I have another article on the Hoot Scheme-to-Wasm compiler, this time on Hoot’s use of the continuation-passing-style (CPS) transformation.

calls calls calls

So, just a bit of context to start out: Hoot is a Guile, Guile is a Scheme, Scheme is a Lisp, one with “proper tail calls”: function calls are either in tail position, syntactically, in which case they are tail calls, or they are not in tail position, in which case they are non-tail calls. A non-tail call suspends the calling function, putting the rest of it (the continuation) on some sort of stack, and will resume when the callee returns. Because non-tail calls push their continuation on a stack, we can call them push calls.

(define (f)
  ;; A push call to g, binding its first return value.
  (define x (g))
  ;; A tail call to h.
  (h x))

Usually the problem in implementing Scheme on other language run-times comes in tail calls, but WebAssembly supports them natively (except on JSC / Safari; should be coming at some point though). Hoot’s problem is the reverse: how to implement push calls?

The issue might seem trivial but it is not. Let me illustrate briefly by describing what Guile does natively (not compiled to WebAssembly). Firstly, note that I am discussing residual push calls, by which I mean to say that the optimizer might remove a push call in the source program via inlining: we are looking at those push calls that survive the optimizer. Secondly, note that native Guile manages its own stack instead of using the stack given to it by the OS; this allows for push-call recursion without arbitrary limits. It also lets Guile capture stack slices and rewind them, which is the fundamental building block we use to implement exception handling, Fibers and other forms of lightweight concurrency.

The straightforward function call will have an artificially limited total recursion depth in most WebAssembly implementations, meaning that many idiomatic uses of Guile will throw exceptions.
Unpleasant, but perhaps we could stomach this tradeoff. The greater challenge is how to slice the stack. That I am aware of, there are three possible implementation strategies.

generic slicing

One possibility is that the platform provides a generic, powerful stack-capture primitive, which is what Guile does. The good news is that one day, the WebAssembly stack-switching proposal should provide this too. And in the meantime, the so-called JS Promise Integration (JSPI) proposal gets close: if you enter Wasm from JS via a function marked as async, and you call out to JavaScript to a function marked as async (i.e. returning a promise), then on that nested Wasm-to-JS call, the engine will suspend the continuation and resume it only when the returned promise settles (i.e. completes with a value or an exception). Each entry from JS to Wasm via an async function allocates a fresh stack, so I understand you can have multiple pending promises, and thus multiple wasm coroutines in progress. It gets a little gnarly if you want to control when you wait, for example if you might want to wait on multiple promises; in that case you might not actually mark promise-returning functions as async, and instead import an async-marked async function waitFor(p) { return await p } or so, allowing you to use Promise.race and friends. The main problem though is that JSPI is only for JavaScript. Also, its stack sizes are even smaller than the default stack size.

instrumented slicing

So much for generic solutions. There is another option, to still use push calls from the target machine (WebAssembly), but to transform each function to allow it to suspend and resume. This is what I think of as Joe Marshall’s stack trick (also see §4.2 of the associated paper). The idea is that although there is no primitive to read the whole stack, each frame can access its own state. If you insert a try/catch around each push call, the catch handler can access local state for activations of that function.
You can slice a stack by throwing a SaveContinuation exception, in which each frame’s catch handler saves its state and re-throws. And if we want to avoid exceptions, we can use checked returns as Asyncify does.

I never understood, though, how you resume a frame. The Generalized Stack Inspection paper would seem to indicate that you need the transformation to introduce a function to run “the rest of the frame” at each push call, which becomes the Invoke virtual method on the reified frame object. To avoid code duplication you would have to make normal execution flow run these Invoke snippets as well, and that might undo much of the advantages. I understand the implementation that Joe Marshall was working on was an interpreter, though, which bounds the number of sites needing such a transformation.

cps transformation

The third option is a continuation-passing-style transformation. A CPS transform results in a program whose procedures “return” by tail-calling their “continuations”, which themselves are procedures. Taking our previous example, a naïve CPS transformation would reify the following program:

(define (f' k)
  (g' (lambda (x)
        (h' k x))))

Here f' (“f-prime”) receives its continuation as an argument. We call g', for whose continuation argument we pass a closure. That closure is the return continuation of g', binding a name to its result, and then tail-calling h' with respect to f'. We know the continuation of h' is the same as the continuation of f' because it is the same binding, k.

Unfortunately we can’t really slice arbitrary ranges of a stack with the naïve CPS transformation: we can only capture the entire continuation, and can’t really inspect its structure. There is also no way to compose a captured continuation with the current continuation.
And, in a naïve transformation, we would be constantly creating lots of heap allocations for these continuation closures; a push call effectively pushes a frame onto the heap as a closure, as we did above for g'.

There is also the question of when to perform the CPS transform; most optimizing compilers would like a large first-order graph to work on, which is out of step with the way CPS transformation breaks functions into many parts. Still, there is a nugget of wisdom here. What if we preserve the conventional compiler IR for most of the pipeline, and only perform the CPS transformation at the end? In that way we can have nice SSA-style optimizations. And, for return continuations of push calls, what if instead of allocating a closure, we save the continuation data on an explicit stack? As Andrew Kennedy notes, closures introduced by the CPS transform follow a stack discipline, so this seems promising; we would have:

(define (f'' k)
  (push! k)
  (push! h'')
  (g'' (lambda (x)
         (define h'' (pop!))
         (define k (pop!))
         (h'' k x))))

The explicit stack allows for generic slicing, which makes it a win for implementing delimited continuations.

hoot and cps

Hoot takes the CPS transformation approach with stack-allocated return closures. In fact, Hoot goes a little farther, too far probably:

(define (f''')
  (push! k)
  (push! h''')
  (push! (lambda (x)
           (define h'' (pop!))
           (define k (pop!))
           (h'' k x)))
  (g'''))

Here instead of passing the continuation as an argument, we pass it on the stack of saved values. Returning pops off from that stack; for example, (lambda () 42) would transform as (lambda () ((pop!) 42)). But some day I should go back and fix it to pass the continuation as an argument, to avoid excess stack traffic for leaf function calls.

There are some gnarly details though, which I know you are here for!

splits

For our function f, we had to break it into two pieces: the part before the push-call to g and the part after.
If we had two successive push-calls, we would instead split into three parts. In general, each push-call introduces a split; let us use the term tails for the components produced by a split. (You could also call them continuations.) How many tails will a function have? Well, one for the entry, one for each push call, and one any time control-flow merges between two tails. This is a fixpoint problem, given that the input IR is a graph. (There is also some special logic for call-with-prompt but that is too much detail for even this post.)

where to save the variables

Guile is a dynamically-typed language, having a uniform SCM representation for every value. However in the compiler and run-time we can often unbox some values, generally as u64/s64/f64 values, but also raw pointers of some specific types, some GC-managed and some not. In native Guile, we can just splat all of these data members into 64-bit stack slots and rely on the compiler to emit stack maps to determine whether a given slot is a double or a tagged heap object reference or what. In WebAssembly though there is no sum type, and no place we can put either a u64 or a (ref eq) value. So we have not one stack but three (!) stacks: one for numeric values, implemented using a Wasm memory; one for (ref eq) values, using a table; and one for return continuations, because the func type hierarchy is disjoint from eq. It’s.... it’s gross? It’s gross.

what variables to save

Before a push-call, you save any local variables that will be live after the call. This is also a flow analysis problem. You can leave off constants, and instead reify them anew in the tail continuation. I realized, though, that we have some pessimality related to stacked continuations. Consider:

(define (q x)
  (define y (f))
  (define z (f))
  (+ x y z))

Hoot’s CPS transform produces something like:

(define (q0 x)
  (save! x)
  (save! q1)
  (f))

(define (q1 y)
  (restore! x)
  (save! x)
  (save! y)
  (save! q2)
  (f))

(define (q2 z)
  (restore! x)
  (restore! y)
  ((pop!) (+ x y z)))

So q0 saved x, fine, indeed we need it later. But q1 didn’t need to restore x uselessly, only to save it again on q2’s behalf. Really we should be applying a stack discipline for saved data within a function. Given that the source IR is a graph, this means another flow analysis problem, one that I haven’t thought about how to solve yet. I am not even sure if there is a solution in the literature, given that SSA-like flow graphs plus tail calls / CPS is a somewhat niche combination.

calling conventions

The continuations introduced by CPS transformation have associated calling conventions: return continuations may have the generic varargs type, or the compiler may have concluded they have a fixed arity that doesn’t need checking. In any case, for a return, you call the return continuation with the returned values, and the return point then restores any live-in variables that were previously saved. But for a merge between tails, you can arrange to take the live-in variables directly as parameters; it is a direct call to a known continuation, rather than an indirect call to an unknown call site.

cps soup?

Guile’s intermediate representation is called CPS soup, and you might wonder what relationship that CPS has to this CPS. The answer is not much. The continuations in CPS soup are first-order; a term in one function cannot continue to a continuation in another function. (Inlining and contification can merge graphs from different functions, but the principle is the same.)

It might help to explain that it is the same relationship as it would be if Guile represented programs using SSA: the Hoot CPS transform runs at the back-end of Guile’s compilation pipeline, where closure representations have already been made explicit. The IR is still direct-style, just that syntactically speaking, every call in a transformed program is a tail call.
We had to introduce save and restore primitives to implement the saved variable stack, and some other tweaks, but generally speaking, the Hoot CPS transform ensures the run-time all-tail-calls property rather than altering the compile-time language; a transformed program is still CPS soup.

fin

Did we actually make the right call in going for a CPS transformation? I don’t have good performance numbers at the moment, but from what I can see, the overhead introduced by CPS transformation can impose some penalties, even 10x penalties in some cases. But some results are quite good, improving over native Guile, so I can’t be categorical.

But really the question is, is the performance acceptable for the functionality, and there I think the answer is more clear: we have a port of Fibers that I am sure Spritely colleagues will be writing more about soon, we have good integration with JavaScript promises while not relying on JSPI or Asyncify or anything else, and we haven’t had to compromise in significant ways regarding the source language. So, for now, I am satisfied, and looking forward to experimenting with the stack-switching proposal as it becomes available.

Until next time, happy hooting!
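The two flavors of CPS in the article—naïve closure-allocating CPS, and the explicit stack of saved continuation data—can be sketched outside Scheme. This is my own illustration in Python, not Hoot’s code; Python has no proper tail calls, so it only shows the shape of the transformation. The f/g/h names follow the article’s example, with g producing 21 and h doubling it as placeholder behavior:

```python
# Naïve CPS: every procedure "returns" by tail-calling its continuation k,
# and the return continuation of a push call is a freshly allocated closure.
def f_cps(k):
    return g_cps(lambda x: h_cps(k, x))  # the push call to g becomes a tail call

def g_cps(k):
    return k(21)            # g "returns" by calling its continuation

def h_cps(k, x):
    return k(x * 2)

# Explicit-stack variant: instead of closing over the continuation,
# we push the "rest of the function" onto a stack we control.
# Controlling the stack is what makes slicing it possible.
stack = []

def f_stk():
    stack.append(h_stk)      # save what runs after g returns
    return g_stk()

def g_stk():
    return stack.pop()(21)   # "returning" pops and calls the saved continuation

def h_stk(x):
    return x * 2
```

Both `f_cps(lambda v: v)` and `f_stk()` compute the original `(h (g))` composition.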
  • Andy Wingo: hoot's wasm toolkit (2024/05/24 10:37)
Good morning! Today we continue our dive into the Hoot Scheme-to-WebAssembly compiler. Instead of talking about Scheme, let’s focus on WebAssembly, specifically the set of tools that we have built in Hoot to wrangle Wasm. I am peddling a thesis: if you compile to Wasm, probably you should write a low-level Wasm toolchain as well.

(Incidentally, some of this material was taken from a presentation I gave to the Wasm standardization organization back in October, which I think I haven’t shared yet in this space, so if you want some more context, have at it.)

naming things

Compilers are all about names: definitions of globals, types, local variables, and so on. An intermediate representation in a compiler is a graph of definitions and uses in which the edges are names, and the set of possible names is generally unbounded; compilers make more names when they see fit, for example when copying a subgraph via inlining, and remove names if they determine that a control or data-flow edge is not necessary. Having an unlimited set of names facilitates the graph transformation work that is the essence of a compiler.

Machines, though, generally deal with addresses, not names; one of the jobs of the compiler back-end is to tabulate the various names in a compilation unit, assigning them to addresses, for example when laying out an ELF binary. Some uses may refer to names from outside the current compilation unit, as when you use a function from the C library. The linker intervenes at the back-end to splice in definitions for dangling uses and applies the final assignment of names to addresses.

When targeting Wasm, consider what kinds of graph transformations you would like to make. You would probably like for the compiler to emit calls to functions from a low-level run-time library written in wasm. Those functions are probably going to pull in some additional definitions, such as globals, types, exception tags, and so on.
Then once you have your full graph, you might want to lower it, somehow: for example, you choose to use the stringref string representation, but browsers don’t currently support it; you run a post-pass to lower to UTF-8 arrays, but then all your strings are not constant, meaning they can’t be used as global initializers; so you run another post-pass to initialize globals in order from the start function. You might want to make other global optimizations as well, for example to turn references to named locals into unnamed stack operands (not yet working :).

Anyway what I am getting at is that you need a representation for Wasm in your compiler, and that representation needs to be fairly complete. At the very minimum, you need a facility to transform that in-memory representation to the standard WebAssembly text format, which allows you to use a third-party assembler and linker such as Binaryen’s wasm-opt. But since you have to have the in-memory representation for your own back-end purposes, probably you also implement the names-to-addresses mapping that will allow you to output binary WebAssembly as well. Also it could be that Binaryen doesn’t support something you want to do; for example Hoot uses block parameters, which are supported fine in browsers but not in Binaryen.

(I exaggerate a little; Binaryen is a more reasonable choice now than it was before the GC proposal was stabilised. But it has been useful to be able to control Hoot’s output, for example as the exception-handling proposal has evolved.)

one thing leads to another

Once you have a textual and binary writer, and an in-memory representation, perhaps you want to be able to read binaries as well; and perhaps you want to be able to read text.
Reading the text format is a little annoying, but I had implemented it already in JavaScript a few years ago; and porting it to Scheme was a no-brainer, allowing me to easily author the run-time Wasm library as text.

And so now you have the beginnings of a full toolchain, built just out of necessity: reading, writing, in-memory construction and transformation. But how are you going to test the output? Are you going to require a browser? That’s gross. Node? Sure, we have to check against production Wasm engines, and that’s probably the easiest path to take; still, it would be nice if this were optional. Wasmtime? But that doesn’t do GC.

No, of course not, you are a dirty little compilers developer, you are just going to implement a little wasm interpreter, aren’t you. Of course you are. That way you can build nice debugging tools to help you understand when things go wrong. Hoot’s interpreter doesn’t pretend to be high-performance—it is not—but it is simple and it just works. Massive kudos to Spritely hacker David Thompson for implementing this. I think implementing a Wasm VM also had the pleasant side effect that David is now a Wasm expert; implementation is the best way to learn.

Finally, one more benefit of having a Wasm toolchain as part of the compiler: %inline-wasm. In my example from last time, I had this snippet that makes a new bytevector:

(%inline-wasm
 '(func (param $len i32) (param $init i32)
    (result (ref eq))
    (struct.new $mutable-bytevector
                (i32.const 0)
                (array.new $raw-bytevector
                           (local.get $init)
                           (local.get $len))))
 len init)

%inline-wasm takes a literal as its first argument, which should parse as a Wasm function. Parsing guarantees that the wasm is syntactically valid, and allows the arity of the wasm to become apparent: we just read off the function’s type.
Knowing the number of parameters and results is one thing, but we can do better, in that we also know their type, which we use for intentional types, requiring in this case that the parameters be exact integers which get wrapped to the signed i32 range. The resulting term is spliced into the CPS graph, can be analyzed for its side effects, and ultimately when written to the binary we replace each local reference in the Wasm with a reference of the appropriate local variable. All this is possible because we have the tools to work on Wasm itself.

fin

Hoot’s Wasm toolchain is about 10K lines of code, and is fairly complete. I think it pays off for Hoot. If you are building a compiler targeting Wasm, consider budgeting for a 10K SLOC Wasm toolchain; you won’t regret it.

Next time, an article on Hoot’s use of CPS. Until then, happy hacking!
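A binary writer of the kind described above—the names-to-addresses side of the toolchain—spends much of its time emitting LEB128-encoded integers, which the Wasm binary format uses for indices, section sizes, and most immediates. A minimal unsigned-LEB128 encoder, written here in Python as an illustration of the sort of low-level helper such a toolchain needs (not Hoot’s actual Scheme code):

```python
def uleb128(n: int) -> bytes:
    """Encode a non-negative integer as unsigned LEB128, the
    variable-length integer encoding used throughout the
    WebAssembly binary format (section sizes, indices, ...)."""
    assert n >= 0
    out = bytearray()
    while True:
        byte = n & 0x7F      # low seven bits
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)
```

For example, `uleb128(624485)` yields the classic three-byte sequence `0xE5 0x8E 0x26`.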
  • Jonathan Blandford: Crosswords 0.3.13: Side Quests (2024/05/23 14:45)
It’s time for another Crosswords release. I’ll keep this update short and sweet. I had grand plans last cycle to work on the word data and I did work a little on it — just not in the way I intended. Instead, a number of new contributors showed up which sent me in a different direction. I’m always happy to get new contributors and wanted to make sure they had a good experience. It ended up being a fun set of side quests before returning to the main set of features in the editor.

Cursor behavior

New contributor Adam filed a series of absolutely fantastic bug reports about the cursor behavior in the game. Adam fixed a couple bugs himself, and then pointed out that undo/redo behavior is uniquely weird with crosswords. Unlike text boxes, cursors have a natural direction associated with them that matters for the flow of editing. In a nutshell, when you undo a word you want the cursor to be restored at the same orientation as where it was at the start of the guess. On the other hand, when redoing a guess, you want the cursor to advance normally, which might be in a different place or orientation. It’s subtle, but it’s the kind of user touch that you would normally never notice. It just feels “clunky” without a fix. With all these changes, the cursor behavior feels a lot more natural. Can you spot the difference?

Selections and Intersections

Another side quest was to change the Autofill dialog to operate in-place. I foolishly thought that this would be a relatively quick change, but it ended up being a lot more work than expected. I’ll spare the details, but along the way, I also had to add three more features as dependencies. First, I’ve wanted a way to leave pencil markings for a long time. These would be transient markings that show possibilities for a square without marking it. We use them to show the results of the in-place autofill operation.

Autofilling a section of the in-place selection. Potential grids are written in pencil.
Second, I fixed an old bug that I’ve wanted to fix for a long time. Previously, the word list showed all possible words independently. Now it only shows words that work in both directions. As an example, in the grid below we don’t show “ACID — (80)” in the Down list, as that final “D” would mean the Across clue would have “WD” as its prefix. The acid test: WD-40 isn’t in our dictionary. This required writing code to efficiently calculate the intersection of two lists. It sounds easy enough, but the key word here is “efficient”. While I’m sure the implementation could be improved, it’s fast enough for now for it to be used synchronously. Finally, I was able to use the intersection function to optimize the autofill algorithm itself. It’s significantly faster and also more correct than the previous implementation, which means that the resulting boards will be more usable. It still can’t do a full 15×15 grid in a reasonable time, but it can solve about 1/3 of a grid.

Other

Federico and I are working with Pranjal as a GSoC student for the summer. He’s going to work on porting libipuz to Rust, and we spent a good amount of time planning the approach for that as well as prepping the library. Tanmay has continued to work on the acrostic generator as part of last summer’s GSoC project. I’m so proud of his continued efforts in this space. Check out his recent post! Gwyneth showed up with support for UTF-8 embedding in puz files as well as support for loading .xd crossword files. I updated our use of libadwaita widgets to the latest release, and enabled style settings per-cell in the editor. Until next time!
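The list-intersection problem mentioned above can be solved in linear time when both lists are kept sorted, as dictionary word lists usually are. A minimal sketch of the classic two-pointer merge, my own illustration rather than the actual Crosswords implementation:

```python
def intersect_sorted(a, b):
    """Intersection of two sorted lists via a two-pointer merge.

    Runs in O(len(a) + len(b)), versus O(len(a) * len(b)) for the
    naive nested-loop version -- the difference between usable
    synchronously and not, on large word lists.
    """
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1      # a is behind; advance it
        else:
            j += 1      # b is behind; advance it
    return out
```

For already-sorted inputs this avoids building hash sets and preserves order, which is convenient when the result feeds straight into a displayed word list.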
  • Patrick Griffis: Introducing the WebKit Container SDK (2024/05/23 04:00)
Developing WebKitGTK and WPE has always had challenges, such as the number of dependencies or its fairly complex C++ codebase, which not all compiler versions handle well. To help with this we’ve made a new SDK to make it easier.

Current Solutions

There have always been multiple ways to build WebKit and its dependencies on your host, however this was never a great developer experience. Only very specific hosts could be “supported”, you often had to build a large number of dependencies, and the end result wasn’t very reproducible for others. The current solution used by default is a Flatpak based one. This was a big improvement for ease of use and excellent for reproducibility, but it introduced many challenges doing development work. As it has a strict sandbox and provides read-only runtimes, it was difficult to use complex tooling/IDEs or develop third party libraries in it. The new SDK tries to take a middle ground between those two alternatives, isolating itself from the host to be somewhat reproducible, yet being a mutable environment to be flexible enough for a wide range of tools and workflows.

The WebKit Container SDK

At the core it is an Ubuntu OCI image with all of the dependencies and tooling needed to work on WebKit. On top of this we added some scripts to run/manage these containers with podman and aid in developing inside of the container. Its intention is to be as simple as possible and not change traditional development workflows. You can find the SDK and follow the quickstart guide on our GitHub: The main requirement is that this only works on Linux with podman 4.0+ installed, for example Ubuntu 23.10+. In the most simple case, once you clone, using the SDK can be a few commands:

source /your/path/to/webkit-container-sdk/
wkdev-create --create-home
wkdev-enter

From there you can use WebKit’s build scripts (./Tools/Scripts/build-webkit --gtk) or CMake.
As mentioned before it is an Ubuntu installation, so you can easily install your favorite tools directly, like VSCode. We even provide a wkdev-setup-vscode script to automate that.

Advanced Usage

Disposability

A workflow that some developers may not be familiar with is making use of entirely disposable development environments. Since these are isolated containers you can easily make two. This allows you to do work in parallel that would interfere with each other while not worrying about it, as well as being able to get back to a known good state easily:

wkdev-create --name=playground1
wkdev-create --name=playground2

podman rm playground1 # You would stop first if running.
wkdev-enter --name=playground2

Working on Dependencies

An important part of WebKit development is working on the dependencies of WebKit rather than WebKit itself, either for debugging or for new features. This can be difficult or error-prone with previous solutions. In order to make this easier we use a project called JHBuild which isn’t new but works well with containers and is a simple solution to work on our core dependencies. Here is an example workflow working on GLib:

wkdev-create --name=glib
wkdev-enter --name=glib

# This will clone glib main, build, and install it for us.
jhbuild build glib

# At this point you could simply test if a bug was fixed in a different version of glib.

# We can also modify and debug glib directly. All of the projects are cloned into ~/checkout.
cd ~/checkout/glib

# Modify the source however you wish then install your new version.
jhbuild make

Remember that containers are isolated from each other, so you can even have two terminals open with different builds of glib. This can also be used to test projects like Epiphany against your build of WebKit if you install it into the JHBUILD_PREFIX.

To Be Continued

In the next blog post I’ll document how to use VSCode inside of the SDK for debugging and development.
  • Daniel García Moreno: Python 3.13 Beta 1 (2024/05/21 22:00)
Python 3.13 beta 1 is out, and I've been working on the openSUSE Tumbleweed package to get it ready for the release.

Installing python 3.13 beta 1 in Tumbleweed

If you are adventurous enough to want to test Python 3.13 and you are using openSUSE Tumbleweed, you can give it a try and install the current devel package:

# zypper addrepo -p 1000
# zypper refresh
# zypper install python313

What's new in Python 3.13

The Python interpreter is pretty stable nowadays and it doesn't change too much, to keep code compatible between versions, so if you are writing modern Python, your code should continue working with this new version. But it's actively developed and new versions have cool new functionalities:

- New and improved interactive interpreter: colorized prompts, multiline editing with history preservation, interactive help with F1, history browsing with F2, paste mode with F3.
- A set of performance improvements.
- Removal of many deprecated modules: aifc, audioop, chunk, cgi, cgitb, crypt, imghdr, mailcap, msilib, nis, nntplib, ossaudiodev, pipes, sndhdr, spwd, sunau, telnetlib, uu, xdrlib, lib2to3.

Enabling Experimental JIT Compiler

Python 3.13 will arrive with experimental functionality to improve performance. We're building with --enable-experimental-jit=yes-off, so it's disabled by default, but it can be enabled with an environment variable before launching:

$ PYTHON_JIT=1 python3.13

Free-threaded CPython

Python 3.13 has another build option to disable the Global Interpreter Lock (--disable-gil), but we're not enabling it because in this case it's not possible to keep the same behavior. Building with --disable-gil will break compatibility. In any case, maybe it's interesting to be able to provide another version of the interpreter with the GIL disabled, for specific cases where performance is critical, but that's something to evaluate.
We can think about having a python313-nogil package, but it's not trivial to have python313 and python313-nogil installed at the same time on the same system, so I'm not planning to work on that for now.
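Code that still imports one of the removed deprecated modules can detect the situation and degrade gracefully rather than crash on 3.13. A small stdlib-only sketch of my own (the module list below is the subset of removals named in the post):

```python
import importlib.util
import sys

# Deprecated modules removed in Python 3.13 (subset from the list above).
REMOVED_IN_313 = {
    "aifc", "audioop", "cgi", "cgitb", "crypt", "imghdr", "mailcap",
    "nntplib", "pipes", "sndhdr", "spwd", "sunau", "telnetlib",
    "uu", "xdrlib",
}

def module_available(name: str) -> bool:
    """Return True if `name` can be imported on this interpreter."""
    return importlib.util.find_spec(name) is not None

def missing_modules(needed):
    """Return the subset of `needed` that is absent here.

    On 3.13+ the removed modules are gone outright; on older
    interpreters we just probe for each one.
    """
    if sys.version_info >= (3, 13):
        return set(needed) & REMOVED_IN_313
    return {m for m in needed if not module_available(m)}
```

A script can call `missing_modules({"telnetlib", ...})` at startup and print a pointer to a replacement (e.g. a PyPI backport) instead of dying with ModuleNotFoundError mid-run.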
  • Bharat Tyagi: GSoC 2024: An Introductory Post (2024/05/21 17:45)
From Me to You

I’m Bharat Tyagi, a third-year Computer Science student. I enjoy exploring things, which is probably why I am here, writing this post for you all :). I’ll be working on Workbench this summer with my mentors Sonny Piers and Lorenz Wildberg.

My Journey with Linux and GNOME

I stumbled upon Linux when my phone needed a revival and only Linux had drivers for it. I only extensively started using Ubuntu at the beginning of college, and since then it has been a fun journey. The terminal is beautiful, and knowing your way around it is, I think, an essential part of the ecosystem.

GNOME had been with me all along (considering I used Ubuntu and then Fedora) but always in the shadows, until recently when one of my extensions stopped working because it needed to be compiled for a more updated version of GNOME. I went looking into GitHub repositories and got to know about this wonderful desktop environment. I always enjoyed my distros because of how easy they were to use. Currently, I have settled on Fedora as my main, and I am enjoying what it has to offer.

Let's dive into some of the good stuff

I’ll be working on three things inside of Workbench, the first being porting the existing demos to Vala (an object-oriented programming language built for GNOME developers).

Secondly, building a new Library for Workbench. As the Library grows bigger, the vertical stack of demos makes it harder to look for a specific demo. To amend this we will be working on a new design that will make it easier and faster for everyone to navigate and search through demos.

The final piece of this puzzle will be implementing a more optimized code search using SQLite (blazing fast searches on the way!)

To Conclude

I think if it weren’t for the wonderful community members helping me during my initial contributions with their insights, I wouldn’t have been able to learn and progress at all.
Therefore, I’d love to give back and help as much as I can. I am excited about GSoC and thankful to my mentors for the opportunity. I also appreciate you taking the time to read this far into the post. I’ll be targeting bi-weekly updates, so stay tuned for those! Until then
  • Jiri Eischmann: Fedora 40 Release Party in Prague (2024/05/20 11:08)
Last Friday I organized a Fedora 40 release party in Prague. A month ago I got a message from Karel Ziegler of Etnetera Core asking if we could do a Fedora release party in Prague again. Etnetera Core is a mid-sized company that does custom software development and uses Red Hat technologies. They have a really cool office in Prague which we used as a venue for release parties several times in pre-covid times. We got a bit unlucky with the date this time. The weather in Prague on Friday was really terrible; it was pouring outside. Moreover, the Ice Hockey World Championship is taking place in Prague now and the Czech team played against Austria at the time of the release party. These two things contributed to the lower-than-expected attendance, but in the end roughly 15 people showed up. Fedora swag for party attendees. The talk part was really interesting. In the end it took almost 4 hours because there was a lot of discussion. The first talk was mine, traditionally on Fedora Workstation, which turned into a long discussion about Distrobox vs Toolbx. As a result of that, Luboš Kocman of SUSE got interested in Ptyxis, saying that it’s something they may actually adopt, too. Lukáš Kotek talking about his legacy computing machine. The second talk was delivered by Lukáš Kotek, who talked about building a retro-gaming machine based on Fedora and Atomic Pi. Several people asked us to provide his slides online, so here they are: kotek-fedora-legacy-computing (download). The third talk was delivered by Karel Ziegler, who spoke on the new release of his favorite desktop environment, Plasma 6. The last talk was supposed to be delivered by Ondřej Kolín, but at the beginning of the party we were not sure if he’d make it because he was travelling from Berlin and was stuck in Friday traffic. The first three talks took so long due to interesting discussions that Ondřej arrived just in time for his talk. He spoke about his experience building a simple app and distributing it on Flathub.
This again started an interesting discussion about new and traditional models of Linux app distribution. In the middle of the party we were joined by Andre Klapper, a long-time GNOME contributor living in Prague, and Keywan Tonekaboni, a German open source journalist who is currently on his holidays travelling on trains around the Czech Republic. We found out that we were taking the same train to Brno the next day, so on Saturday we had another two hours for Linux software topics. I’d like to thank the Fedora Project for sponsoring my travel to Prague to organize the event, and also big thanks to Etnetera Core for providing the perfect venue for the party and for sponsoring the refreshments (they even had a beer tap!) and the party cake. Fedora 40 cake.
  • Pablo Correa Gomez: Analysis of GNOME Foundation’s public economy: concerns and thoughts (2024/05/19 22:11)
    Apart from software development, I also have an interest in governance and finances. Therefore, last July, I was quite happy to attend my first Annual General Meeting (AGM), taking place at GUADEC in Riga. I was a bit surprised by the format, as I was expecting something closer to an assembly than to a presentation with a Q&A at the end. It was still interesting to witness, but I was even more shocked by the huge negative cash flow (difference between revenue and expenditures). With the numbers presented, the foundation had lost approximately 650 000 USD in the 2021 exercise, and 300 000 USD in the 2022 exercise. And nobody seemed worried about it. I would have expected such a difference to be the consequence of a great investment aimed at improving the situation of the foundation long-term. However, nothing like that was part of the AGM. This left me thinking, and a bit worried about what was going on with the financials and organization of the foundation. After asking a member of the Board in private, and getting no satisfactory response, I started doing some investigation. Public information research The GNOME Foundation (legally GNOME Foundation Inc) has 501(c)(3) status, which means it is tax exempt. As part of such status, the tax payments, economic status and dealings of the GNOME Foundation Inc are public. So I had a look at the tax filing declarations of the last few years. These contain detailed information about income and expenses, net assets (e.g.: money in bank accounts), remuneration of the Board, Executive Director, and key employees, the amount of money spent on fulfilling the goals of the foundation, and lots of other things. Despite their broad scope, the tax filings are not very hard to read, and it’s easy to learn how much money the foundation made or spent. 
Looking at the details, I found several worrying things, like the fact that revenue and expenses in the Annual Report presented at the AGM did not match those in the tax reports, that most expenses were aggregated in sections that required no explanation, or that some explanations for expenses were required but missing. So I moved on to open a confidential issue with the Board team in GitLab expressing my concerns. The answer mostly covered an explanation of the big deficits in the previous years (which would have been great to have in the Annual Report), but was otherwise generally disappointing. Most of my concerns (all of which are detailed below) were answered with nicely-written variations of: “that’s a small problem, we are aware of it and working on it”, or “this is not common practice and you can find unrelated information in X place”. It has been 6 months, a new tax statement and annual report are available, but the problems persist. So I am sharing my concerns publicly, with several goals: Make these concerns available to the general GNOME community. Even though everything I am presenting comes from public sources, it is burdensome to research, and requires some level of experience with bureaucracy. Show my interest in the topic, as I plan to stand for the Board of Directors in the next elections. My goal is to become part of the Finance Committee to help improve the transparency and efficiency of accounting. Make the Board aware of my concerns (and hopefully show that others also share them), so things can be improved regardless of whether or not I get elected to the Board. Analysis of data and concerns The first analysis I did some months ago was not very detailed, and quite manual. This time, I gathered information in more detail, and compiled it in a spreadsheet that I made publicly available. All the numbers are taken from GNOME’s Annual Reports and from the tax declarations available on ProPublica. 
I am very happy to get those values reviewed, as there could always be mistakes. I am still fairly certain that small errors won’t change my concerns, since those are based on patterns and not on one-time problems. So, to my concerns: Non-matching values between reports and taxes: in the last 3 years, for revenue and expenses, only the revenue presented for Fiscal Year 2021/2022 matches what is actually declared. For the rest, differences vary, but go up to close to 9%. I was told that some difference is expected (as these numbers are crafted a bit earlier than the taxes), that the Board had worked on it, and that the last year (the only one with at least revenue matching) is certainly better. But there is still something like 18 000 USD of mismatch in expenses. For me, this is a clear sign that something is going wrong with the accounting of the foundation, even if it improved in the last year. Non-matching values between reports from different years: each Annual Report contains not only the results for that year, but also those from the previous one. However, the numbers only match half of the time. This is still the case for the latest report in 2023, where suddenly 10 000 USD disappeared from 2022’s expenses, growing the difference from what was declared that year to 27 000 USD. This again shows accounting issues, as previous years’ numbers should certainly not diverge even further from the tax declarations than the initial numbers. Impossibility of matching tax declarations and Annual Reports: the way the annual reports are presented makes it impossible to get a more detailed picture of how expenses and revenue are split. For example, more than 99% of the revenue in 2023 is grouped under a single tax category, while the previous year at least 3 were used. However, the split in the Annual Reports remains roughly the same. So either the accounting is wrong in one of those years, or the split of expenses for the Annual Report was crafted from different data sources. 
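The mismatch percentages discussed above are simple relative differences between a figure in an Annual Report and the corresponding figure in a tax filing. As a sketch of the arithmetic (my own illustration; the amounts below are invented and are not the foundation’s real numbers):

```rust
// Illustrative only: computes the percentage mismatch between a figure
// from an Annual Report and the corresponding declared figure in a tax
// filing. The example amounts are hypothetical.

fn mismatch_percent(reported: f64, declared: f64) -> f64 {
    ((reported - declared).abs() / declared) * 100.0
}

fn main() {
    // e.g. a hypothetical 18 000 USD gap on 1 000 000 USD of declared expenses
    let pct = mismatch_percent(1_018_000.0, 1_000_000.0);
    println!("mismatch: {pct:.1}%"); // prints "mismatch: 1.8%"
}
```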
Another example is how “Staff” makes up the greatest expense until it ceases to exist in the latest report. However, staff-related expenses in the taxes do not make up for the “Staff” expense in the reports. The chances are that part of that is due to subcontracting, and thus counted under “Fees for services, Other” in the taxes. Unfortunately, that category has its own issues. Missing information in the tax declaration: most remarkably, in the tax filings of fiscal years 2020/2021 and 2021/2022, the category “Fees for services, Other” represents more than 10% of the expenses, which the form clearly states should be explained in a later part of the tax filing. However, it is not. I was told 6 months ago that this might have to do with some problem with ProPublica not getting the data, and that they would try to fix it. But I was not provided with the information, and 6 months later the public tax filings still have not been amended. Lack of transparency on expenses: first, in the last 2 tax filings, more than 50% of expenses fall under “Other salaries and wages” and “Fees for services, Other”. These fields do not provide enough transparency (maybe they would if the previous point was addressed), which means most of the expenses actually go unexplained. Second, in the Annual Reports for the previous 2 years, the biggest expense was by far “Staff”. There exists a website with the staff and their roles, but there is no clear explanation of which money goes to whom or why. This can be a great problem if some part of the community does not feel supported in its affairs by the foundation. Compare, for example, with Mastodon’s Annual Report, where everybody on the payroll or freelancing is accounted for, with how much they earn written down. This is made worse since the current year’s Annual Report has completely removed that category in favor of others. 
Tax filings (once available) will, however, provide more context if proper explanations regarding “Fees for services, Other” are finally available. Different categories and reporting formats: the reporting format changed completely in 2021/2022 compared to previous years, and changed completely again this year. This is a severe issue for transparency, since continually changing formats make it hard to compare between years (which, as noted above, is useful!). One can of course understand that formats need to be updated to improve things, but such drastic changes do not help with transparency. There are certainly other small things I noticed that caught my attention. However, I hope these examples are enough to get my point across. And there is no need to make this blog post even longer! Conclusions My main conclusion from the analysis is that the foundation’s accounting and decision-making regarding expenses has been sub-par in the last years. It is also a big issue that there is a huge lack of transparency regarding the economic status and decision-making of the foundation. I learned more about the economic status of the foundation by reading tax filings than by reading Annual Reports. Unfortunately, opening an issue with the Board six months ago to share these concerns has not made things better. It may be that things are much better than they look from the outside, but the lack of transparency makes it hard to tell. I hope that I can join the Finance Committee, and help address these issues in the short term!
  • Sam Thursfield: Status update, 19/05/2024 – GNOME OS and more (2024/05/19 16:36)
    Seems this is another of those months where I did enough stuff to merit two posts. (See Thursday’s post on async Rust). Sometimes you just can’t get out of doing work, no matter how you try. So here is part 2. A few weeks ago I went to the USA for a week to meet a client team who I’ve been working with since late 2022. This was actually the first time I left Europe since 2016*. It’s wild how a euro is now pretty much equal in value to a US dollar, but everything costs about double compared to Europe. It was fun though, and good practice for another long trip to the Denver GUADEC in July. * The UK is still part of Europe, it hasn’t physically moved, has it? GNOME OS stuff The GNOME OS project has at least 3 active maintainers and a busy Matrix room, which makes it fairly healthy as GNOME modules go. There’s no ongoing funding for maintenance though, and everyone who contributes is doing so mostly as a volunteer — at least, as far as I’m aware. So there are plenty of plans and ideas for how it could develop, but many of them are incomplete and nobody has the free time to push them to completion. We recently announced some exciting collaboration between Codethink, GNOME and the Sovereign Tech Fund. This stint of full-time work will help complete several in-progress tasks. Particularly interesting to me is finishing the migration to systemd-sysupdate (issue 832), and creating a convenient developer workflow and supporting tooling (issue 819) so we can finally kill jhbuild. Plus, of course, making the openQA tests great again. Getting to the point where the team could start work took a lot of effort, most of which isn’t visible to the outside world. Discussions go back at least to November 2023. 
Several people worked over months on scoping, estimates, contracts and resourcing the engineering team before any of the coding work started: Sonny Piers working to represent GNOME, and on the Codethink side, Jude Onyenegecha and Weyman Lo, along with Abderrahim Kitouni and Javier Jardón (who are really playing for both teams ;-)). I’m not working directly on the project, but I’m helping out where I can on the communications side. We have at least 3 IRC + Matrix channels where communication happens every day, each with a different subset of people, and documentation is scattered all over the place. Some of the Codethink team are seasoned GNOME contributors, others are not, and the collaborative nature of the GNOME OS project – there is no “BDFL” figure who takes all the decisions – means it’s hard to get clear answers around how things should be implemented. Hopefully my efforts will mean we make the most of the time available. You can read more about the current work here on the Codethink blog: GNOME OS and systemd-sysupdate. The team will hopefully be posting regular progress updates to This Week In GNOME, and Martín Abente Lahaye (who very recently joined the team on the Codethink side \o/) is opening public discussions around the next generation developer experience for GNOME modules – see the discussion here. Tiny SPARQL, Twinql, Sqlite-SPARQL, etc. We’re excited to welcome Demigod and Rachel to the GNOME community, working on a SPARQL web IDE as part of Google Summer of Code 2024. Since this is going to hopefully shine a new light on the SPARQL database project, it seems like a good opportunity to start referring to it by a better name than “Tracker SPARQL”, even while we aren’t going to actually rename the whole API and release 4.0 any time soon. There are a few name ideas already, the front runners being Tiny SPARQL and Twinql; I still can’t quite decide which I prefer. 
The former is unique but rather utilitarian, while the latter is a nicer name but is already used by a few other (mostly abandoned) projects. Which do you prefer? Let me know in the comments. Minilogues and Minifreaks I picked up a couple of hardware synthesizers, the Minilogue XD and the Minifreak. I was happy for years with my OP-1 synth, but after 6 years of use it has so many faults that it is unplayable, and replacing it would cost more than a second-hand car; plus it’s a little too tiny for on-stage use. The Minilogue XD is one of the only mainstream synths to have an open SDK for custom oscillators and effects; full respect to Korg for their forward thinking here … although their Linux tooling is a closed-source binary with a critical bug that they won’t fix, so, still some way to go before they get 10/10 for openness. The Minifreak, by contrast, has a terrible Windows-only firmware update system, which works so poorly that I already had to return the synth once to Arturia after a firmware update caused it to brick itself. There’s a stark lesson here in having open protocols, which hopefully Arturia can pick up on. This synth has absolutely incredible sound design capabilities though, so I decided to keep it and just avoid ever updating the firmware. Here’s a shot of the Minifreak next to another mini freak:
  • Allan Day: GNOME maintainers: here’s how to keep your issue tracker in good shape (2024/05/17 15:29)
    One of the goals of the new GNOME project handbook is to provide effective guidelines for contributors. Most of the guidelines are based on recommendations that GNOME already had, which were then improved and updated. These improvements were based on input from others in the project, as well as by drawing on recommendations from elsewhere. The best example of this effort was around issue management. Before the handbook, GNOME’s issue management guidelines were seriously out of date, and were incomplete in a number of areas. Now we have shiny new issue management guidelines which are full of good advice and wisdom! The state of our issue trackers matters. An issue tracker with thousands of open issues is intimidating to a new contributor. Likewise, lots of issues without a clear status or resolution makes it difficult for potential contributors to know what to do. My hope is that, with effective issue management guidelines, GNOME can improve the overall state of its issue trackers. So what magic sauce does the handbook recommend to turn an out-of-control and burdensome issue tracker into a source of calm and delight, I hear you ask? The formula is fairly simple:
  • Review all incoming issues, and regularly conduct reviews of old issues, in order to weed out reports which are ambiguous, obsolete, duplicates, and so on
  • Close issues which haven’t seen activity in over a year
  • Apply the “needs design” and “needs info” labels as needed
  • Close issues that have been labelled “needs info” for 6 weeks
  • Issues labelled “needs design” get closed after 1 year of inactivity, like any other
  • Recruit contributors to help with issue management
To some readers this is probably controversial advice, and likely conflicts with their existing practice. However, there’s nothing new about these issue management procedures. The current incarnation has been in place since 2009, and some aspects of them are even older. 
Also, personally speaking, I’m of the view that effective issue management requires taking a strong line (being strong doesn’t mean being impolite, I should add – quite the opposite). From a project perspective, it is more important to keep the issue tracker focused than it is to maintain a database of every single tiny flaw in its software. The guidelines definitely need some more work. There will undoubtedly be some cases where an issue needs to be kept open despite it being untouched for a year, for example, and we should figure out how to reflect that in the guidelines. I also feel that the existing guidelines could be simplified, to make them easier to read and consume. I’d be really interested to hear what changes people think are necessary. It is important for the guidelines to be something that maintainers feel they can realistically implement. The guidelines are not set in stone. That said, it would also be awesome if more maintainers were to put the current issue management guidelines into practice in their modules. I do think that they represent a good way to get control of an issue tracker, and this could be a really powerful way for us to make GNOME more approachable to new contributors.
  • Sam Thursfield: Status update, 16/05/2024 – Learning Async Rust (2024/05/16 12:10)
    This is another month where too many different things happened to stick them all in one post together. So here’s a ramble on Rust, and there’s more to come in a follow-up post. I first started learning Rust in late 2020. It took 3 attempts before I could start to make functional commandline apps, and the current outcome of this is the ssam_openqa tool, which I work on partly to develop my Rust skills. This month I worked on some intrusive changes to finally start using async Rust in the program. How it started Out of all the available modern languages I might have picked to learn, I picked Rust partly for the size and health of its community: every community has its issues, but Rust has no “BDFL” figure and no one corporation that employs all the core developers, both signs of a project that can last a long time. Look at GNOME, which is turning 27 this year. Apart from the community, learning Rust improved the way I code in all languages, by forcing more risks and edge cases to the surface and making me deal with them explicitly in the design. The ecosystem of crates has most of what you could want (although there is quite a lot of experimentation and therefore “churn”, compared to older languages). It’s kind of addictive to know that when you’ve resolved all your compile-time errors, you’ll have a program that reliably does what you want. There are still some blockers to me adopting Rust everywhere I work (besides legacy codebases). The “cycle time” of the edit+compile+test workflow has a big effect on my happiness as a developer. The fastest incremental build of my simple CLI tool is 9 seconds, which is workable, and when there are compile errors (i.e. most of the time) it’s usually even faster. However, a release build might take 2 minutes. This is 3000 lines of code with 18 dependencies. I am wary of embarking on a larger project in Rust where the cycle time could be problematically slow. 
Binary size is another thing, although I’ve learned several tricks to keep ssam_openqa at “only” 1.8MB: use a minimal arg parser library instead of clap, use minreq for HTTP, and follow the min-sized-rust guidelines. It’s easy to pull in one convenient dependency that brings in a tree of 100 more things, unless you are careful. (This is a problem for C programmers too, but dependency handling in C is traditionally so horrible that we are already conditioned to avoid too many external helper libraries.) The third thing I’ve been unsure about until now is async Rust. I never immediately liked the model used by Rust and Python of having a complex event loop hidden in the background, and a magic async keyword that completely changes how a function is executed and requires all other functions to be async, such that you effectively have two *different* languages: the async variant and the sync variant; and when writing library code you might need to provide two completely different APIs to do the same thing, one async and one sync. That said, I don’t have a better idea for how to do async. Complicating matters in Rust are the error messages, which can be mystifying if you hit an edge case (see below for where this bit me). So until now I learned to just use thread::spawn for background tasks, with a std::sync::mpsc channel to pass messages back to the main thread, and use blocking IO everywhere. I see other projects doing the same. How it’s going My blissful ignorance came to an end due to changes in a dependency. I was using the websocket crate in ssam_openqa, which embeds its own async runtime so that callers can use a blocking interface in a thread. I guess this is seen as a failed experiment, as the library is now “sluggishly” maintained, the dependencies are old, and the developers recommend tungstenite instead. 
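As an aside, the thread-plus-channel pattern described above can be sketched with just the standard library. This is a minimal illustration, not code from ssam_openqa; the Event type and the loop contents are invented:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative message type; a real program would carry domain events.
enum Event {
    Progress(u32),
    Done,
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Background task: plain blocking calls, no async runtime needed.
    thread::spawn(move || {
        for i in 1..=3 {
            // ... blocking IO would happen here ...
            tx.send(Event::Progress(i)).unwrap();
        }
        tx.send(Event::Done).unwrap();
    });

    // Main thread consumes events as they arrive; recv() blocks until
    // the next message or until all senders are dropped.
    loop {
        match rx.recv().unwrap() {
            Event::Progress(i) => println!("progress: {i}"),
            Event::Done => break,
        }
    }
}
```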
Tungstenite seems unusable from sync code for anything more than toy examples; you need an async wrapper such as async-tungstenite (shout out to slomo for this excellent library, by the way). So, I thought, I will need to port my *whole codebase* to use an async runtime and an async main loop. I tried, and spent a few days lost in a forest of compile errors, but it’s never the correct approach to try and port code “in one shot” and without a plan. To make matters worse, websocket-rs embeds an *old* version of Rust’s futures library. Nobody told me, but there is “futures 0.1” and “futures 0.3”. Only the latter works with the await keyword; if you await a future from futures 0.1, you’ll get an error about not implementing the expected trait. The docs don’t give any clues about this; eventually I discovered the Compat01As03 wrapper, which lets you convert types from futures 0.1 to futures 0.3. Hopefully you never have to deal with this, as you’ll only see futures 0.1 on libraries with outdated dependencies, but, now you know. Even better, I then realized I could keep the threads and blocking IO around, and just start an async runtime in the websocket processing thread. So I did that in its own MR, gaining an integration test and squashing a few bugs in the process. The key piece is here:

    use tokio::runtime;
    use std::thread;
    ...
    thread::spawn(move || {
        let runtime = runtime::Builder::new_current_thread()
            .enable_io()
            .build()
            .unwrap();
        runtime.block_on(async move {
            // Websocket event loop goes here

This code uses the tokio new_current_thread() function to create an async main loop out of the current thread, which can then use block_on() to run an async block and wait for it to exit. It’s a nice way to bring async “piece by piece” into a codebase that otherwise uses blocking IO, without having to rewrite everything up front. 
I have some more work in progress to use async for the two main loops in ssam_openqa: these currently have manual polling loops that periodically check various message queues for events and then call thread::sleep(250), which works fine in practice for processing low-frequency control and status events, but it’s not the slickest nor most efficient way to write a main loop. The classy way to do it is using the tokio::select! macro. When should you use async Rust? I was hoping for a simple answer to this question, so I asked my colleagues at Codethink, where we have a number of Rust experts. The problem is, cooperative task scheduling is a very complicated topic. If I convert my main loop to async, but I use the std library’s blocking IO primitives to read from stdin rather than tokio’s async IO, can Rust detect that and tell me I did something wrong? Well no, it can’t – you’ll just find that event processing stops while you’re waiting for input. Which may or may not even matter. There’s no way to automatically detect “syscall which might wait for user input” vs “syscall which might take a lot of CPU time to do something” vs “user-space code which might not defer to the main loop for 10 minutes”; and each of these has the same effect of causing your event loop to freeze. The best advice I got was to use tokio-console to monitor the event loop and see if any tasks are running longer than they should. This looks like a really helpful debugging tool and I’m definitely going to try it out. So I emerge from the month a bit wiser about async Rust, no longer afraid to use it in practice, and best of all, wise enough to know that it’s not an “all or nothing” switch – it’s perfectly valid to mix sync and async in different places, depending on what performance characteristics you’re looking for.
  • Jussi Pakkanen: Generative non-AI (2024/05/14 17:47)
    In last week's episode of the Game Scoop podcast an idea was floated that modern computer game names are uninspiring and that better ones could be made by picking random words from existing NES titles. This felt like a fun programming challenge, so I went and implemented it. Code and examples can be found in this GH repo. Most of the game names created in this way are word-salad gobbledygook or literally translated obscure anime titles (Prince Turtles Blaster Family). Running it a few times does give results that are actually quite interesting. They range from games that really should exist (Operation Metroid) to surprisingly reasonable (Gumshoe Foreman's Marble Stadium), to ones that actually made me laugh out loud (Punch-Out! Kids). Here's a list of some of my favourites:
  • Ice Space Piano
  • Castelian Devil Rainbow Bros.
  • The Lost Dinosaur Icarus
  • Mighty Hoops, Mighty Rivals
  • Rad Yoshi G
  • Snake Hammerin'
  • MD Totally Heavy
  • Disney's Die! Connors
  • Monopoly Ransom Manta Caper!
  • Revenge Marble
  • Kung-Fu Hogan's F-15
  • Sinister P.O.W.
  • Duck Combat Baseball
I emailed my findings back to the podcast host and they actually discussed it in this week's show (video here, starting at approximately 35 minutes). All in all this was an interesting exercise. However, pretty quickly after finishing the project I realized that doing things yourself is no longer what the cool kids are doing. Instead this is the sort of thing that is seemingly tailor-made for AI. All you have to do is to type in a prompt like "create 10 new titles for video games by only taking words from existing NES games" and post that to tiktokstagram. I tried that and the results were absolute garbage. Since the prompt has to have the words "video game" and "NES", and LLMs work solely on the basis of "what is the most common thing (i.e. popular)", the output consists almost entirely of the most well known NES titles with maybe some words swapped. I tried to guide it by telling it to use "more random" words. 
The end result was a list of ten games, of which eight were alliterative. So much for randomness. But more importantly, every single one of the recommendations the LLM created was boring. Uninspired. Bland. A waste of electricity, basically. Thus we find that creating a list of game names with an LLM is easy, but the end result is worthless and unusable. Doing the same task by hand did take a bit more effort, but the end result was miles better, because it found new and interesting combinations that a "popularity first" estimator seems unable to match. Which matches the preconception I had about LLMs from prior tests and from seeing how other people have used them.
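The by-hand approach described in the post fits in a short sketch. This is not the author's implementation (that lives in his GH repo); the sample titles, the three-word name length, and the dependency-free pseudo-random generator below are all illustrative:

```rust
use std::collections::HashSet;

// Sketch of the idea: build a word pool from existing NES titles, then
// join randomly chosen words into new names. A real program would read
// the full title list from a file and use the `rand` crate.

fn new_title(titles: &[&str], words_per_name: usize, seed: &mut u64) -> String {
    // Collect every distinct word appearing in the input titles.
    let pool: Vec<&str> = {
        let mut seen = HashSet::new();
        titles
            .iter()
            .flat_map(|t| t.split_whitespace())
            .filter(|w| seen.insert(*w))
            .collect()
    };
    // Tiny linear congruential generator, to keep the sketch
    // dependency-free; not suitable for anything serious.
    let mut next = || {
        *seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (*seed >> 33) as usize
    };
    (0..words_per_name)
        .map(|_| pool[next() % pool.len()])
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let titles = ["Metroid", "Duck Hunt", "Kid Icarus", "Punch-Out!", "Ice Climber"];
    let mut seed = 42;
    println!("{}", new_title(&titles, 3, &mut seed));
}
```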
  • Sudhanshu Tiwari: GSoC Introductory Post (2024/05/11 22:06)
    My journey as a GNOME user started in 2020 when I first set up Ubuntu on my computer, dual-booting it with Windows. Although I wasn't aware of GNOME back then, what I found fascinating was that despite Ubuntu being open source, its performance and UI were comparable to Windows. I became a regular user of Ubuntu and loved the way the GNOME Desktop Environment seamlessly performed different tasks. I could run multiple instances of various applications at the same time without it lagging or crashing, which was often a problem in Windows. A beginning in open source The first time I came across the term "open source" was while installing the MinGW GCC compiler for C++ from SourceForge. I had a rough idea of what the term meant, but being a complete noob at the time, I didn't decide whether to start contributing. When I felt I had enough skills to contribute, I was introduced to p5.js, which is a JavaScript library for creative coding. With my familiarity with JavaScript, the codebase of p5.js was easy to understand, and thus began my journey as an open source contributor. Opening my first PR in p5.js gave me a feeling of accomplishment that reminded me of the time I compiled my first C++ program. I started contributing more, began to learn about the GNOME environment, and wanted to contribute to the desktop environment I had been a user of. Contributing to GNOME I learnt about the libraries GLib and GTK that empower programmers to build apps using modern programming techniques. I scrambled through documentation and watched some introductory videos about GLib, GObject, and GObject Introspection, and diving deeper into this repository of knowledge I found myself wanting to learn more about how GNOME apps are built. The GNOME Preparatory Bootcamp for GSoC & Outreachy conducted by GNOME Africa prepared me to become a better contributor. 
Thanks to Pedro Sader Azevedo and Olosunde Ayooluwa for teaching us about setting up the development environment and getting started with the contribution process. It was around this time that I found out about the programming language Vala and a prospective GSoC project that piqued my interest. I was always fascinated by the low-level implementation details of how compilers work, and this project was related to the Vala compiler. Learning the Vala language Vala is an object-oriented programming language built on top of the GObject type system. It contains many high-level abstractions which the native C ABI does not provide, thus making it an ideal language for building GNOME applications. Vala is not widely used, so there are few online resources to learn it; however, the Vala tutorial provides robust documentation and is a good starting point for beginners. The best way to learn something is by doing, so I decided to learn Vala by building apps using GTK and Libadwaita. However, being completely new to the GNOME environment, this approach brought me limited success. I haven't yet learnt GTK or Libadwaita, but I did manage to understand Vala language constructs by reading through the source code of some Vala applications. I worked on some issues in the Vala repository, and this gave me a sneak peek into the workings of the Vala compiler. I got to learn about how it builds the Vala AST and compiles Vala code into GObject C, although I still have a lot to learn to understand how it is all put together. My GSoC Project As part of my GSoC project, we have to add support for the latest GIR attributes to the Vala compiler and the Vala API generator. We can do this by including these attributes in the parsing and generation of GIR files, and linking them with Vala language constructs if needed. This also involves adding test cases for these attributes to the test suite, to make sure that the .gir and .vapi files are generated correctly. 
Once this is done, we need to work on Valadoc. Valadoc currently parses documentation in the Gtkdoc format, and this project involves making it parse documentation in the GI-Docgen format too. Adding this support will require creating some new files and modifying the documentation parser in Valadoc. After implementing this support, the plan is to modernise the appearance of the Valadoc website. The website was clearly built a while ago and needs a redesign to make it more interactive and user friendly. This will require changing some CSS styles and JavaScript code on the website. With the completion of this project, the look of the website will be brought on par with the online documentation of other programming languages.

Thanks to my mentor Lorenz Wildberg, I now have a coherent idea of what needs to be done in the project, and we have a workable plan to achieve it. I'm very optimistic about the project, and I'm sure that we will be able to meet all the project goals within the stipulated timeline. In the coming days I plan to read the Vala documentation and understand the codebase so that I can get started on the project objectives in the coding period.
  • Marcus Lundblad: May Maps (2024/05/11 13:23)
     It's about time for the spring update on goings-on in Maps! There have been some changes since the release of 46.

Vector Map by Default

The vector map is now used by default, and with it Maps supports dark mode. The old raster tiles have also been retired, though the hidden feature of running with a local tile directory still exists (it was never really intended for general use, but rather as a way to experiment with offline map support; the plan is to eventually support proper offline maps, with a way to download areas in a more user-friendly and organized way than providing a raw path…).

Dark Improvements

Following the introduction of dark map support, the default rendering of public transit routes and lines has been improved in dark mode to give better contrast (something that was trickier before, when the map view was always light even when the rest of the UI, such as the sidebar itinerary, was shown in dark mode).

More Transit Mode Icons

Jakub Steiner and Sam Hewitt have been working on designing icons for some additional modes of transit, such as trolleybuses, taxis, and monorail.

Trolley bus routes

This screenshot was something I “mocked” by changing the icon for regular buses to temporarily use the newly designed trolleybus icon, as we don't currently have any supported transit route provider in Maps that exposes trolleybus routes. I originally made this for an excursion with a vintage trolleybus I was going to attend, but it was cancelled at the last minute because of technical issues.

Showing a taxi station

And above we have the new taxi icon (this could be used both for showing on-demand communal taxi transit and for taxi stations on the map). These icons have not yet been merged into Maps, as there's still some work going on finalizing their design. But I thought I'd still show them here…

Brand Logos

For a long time we have shown a title image from Wikidata or Wikipedia for places when available. 
Now we also show a logo image (using the Wikidata reference for the brand of a venue) when one is available and the place has no dedicated article.

Explaining Place Types

Sometimes it can be a bit hard to determine the exact type of a place from the icons shown on the map, especially for more generic types, such as shops, where we have dedicated icons for some and a generic icon for the rest. We now show the type in the place bubble as well (using the translations extracted from the iD OSM editor). Places with a name show the icon and type description below the name, dimmed. For unnamed places we show the icon and type instead of the name, in the same bold style the name would normally use.

Additional Things

Another detail worth mentioning is that you can now clear the currently shown route from the context menu, so you won't have to open the sidebar again and manually erase the filled-in destinations. Another improvement is that if you have already entered a starting point with “Route from Here”, or entered an address in the sidebar, and then use the “Directions” button from a place bubble, that starting point will now be used instead of the current location. Besides this, some old commented-out code was also removed… but there are no screenshots of that, I'm afraid ☺
  • Tanmay Patil: Acrostic Generator: Part one (2024/05/11 05:15)
    It’s been a while since my last blog post, which was about my Google Summer of Code project. Even though it has been months since I completed GSoC, I have continued working on the project, extending acrostic support in Crosswords. We’ve added support for loading acrostic puzzles in Crosswords, but now it’s time to create some acrostics. Now that Crosswords has acrostic support, I can use screenshots to help explain what an acrostic is and how the puzzle works. Let’s load an acrostic in Crosswords first.

Acrostic Puzzle loaded in Crosswords

The main grid here represents the QUOTE: “CARNEGIE VISITED PRINCETON…”, and if we read the first letter of each clue answer (displayed on the right), it forms the SOURCE. For example, in the image above, the name of the author is “DAVID ….”. Now, the interesting part is that the answers to the clues are formed from the letters of the QUOTE. Let’s consider another small example:

QUOTE: “To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment.”
AUTHOR: “Ralph Waldo Emerson”

If you look carefully, the letters of the SOURCE here are all part of the QUOTE. One set of answers to the clues could be:

Solutions generated using AcrosticGenerator. Read the first letter of each answer from top to bottom: it forms ‘Ralph Waldo Emerson’.

Coding the Acrostic Generator

As seen above, to create an acrostic we need two things: the QUOTE and the SOURCE string. These will be our inputs from the user. Additionally, we need to set some constraints on the generated word size. By default, we have set MIN_WORD_SIZE to 3 and MAX_WORD_SIZE to 20, but users are allowed to change these settings.

Step 1: Check if we can create an acrostic from the given input

You must have already guessed it: we check whether the characters of the SOURCE are available in the QUOTE string. To do this, we utilize the IPuzCharset data structure. 
Without going into much detail, it simply stores characters and their frequencies. For example, for the string “MAX MIN”, its charset looks like [{‘M’: 2}, {‘A’: 1}, {‘X’: 1}, {‘I’: 1}, {’N’: 1}]. First, we build a charset of the source string and then iterate through it. For the source string to be valid, the count of every character in the source charset should be less than or equal to the count of that character in the quote charset.

for (iter = ipuz_charset_iter_first (source_charset);
     iter;
     iter = ipuz_charset_iter_next (iter))
  {
    IPuzCharsetIterValue value;
    value = ipuz_charset_iter_get_value (iter);

    if (value.count > ipuz_charset_get_char_count (quote_charset, value.c))
      {
        // Source characters are missing in the provided quote
        return FALSE;
      }
  }

return TRUE;

Since we now have a word-size constraint, we need to add one more check. Let’s understand this through an example.

QUOTE: LIFE IS TOO SHORT
SOURCE: TOLSTOI
MIN_WORD_SIZE: 3
MAX_WORD_SIZE: 20

Since MIN_WORD_SIZE is set to 3, the generated answers should have a minimum of three letters in them. Possible solutions, considering every solution has a length equal to the minimum word size:

T _ _
O _ _
L _ _
S _ _
T _ _
O _ _
I _ _

If we take the sum of the number of letters in the above solutions, it’s 21, which is greater than the number of letters in the QUOTE string (14). So we can’t create an acrostic from this input.

if ((n_source_characters * min_word_size) > n_quote_characters)
  {
    // Quote text is too short to accommodate the specified
    // minimum word size for each clue
    return FALSE;
  }

While writing this blog post, I found out we called this error “SOURCE_TOO_SHORT”. It should be “QUOTE_TOO_SHORT” or “SOURCE_TOO_LARGE”. Stay tuned for further implementation details in the next post!
  • Sudhanshu Tiwari: Being a beginner open source contributor (2024/05/11 03:26)
    In October 2023, with quite a bit of experience in web development and familiarity with programming concepts in general, I was in search of avenues where I could put this experience to good use. But where could a beginner programmer get the opportunity to work with experienced developers? And that too, on a real-world project with a user base in the millions... Few people would hire a beginner! We all know the paradox of companies intent on hiring experienced people for entry-level roles. That's where it gets tricky, because we can't really gain experience without being hired. Well... maybe we can :)

What is open source software?

Open source software is software whose source code is made available to the public, allowing anyone to view, modify, and distribute it. It is free, and the people who work to improve it are most often not paid. The source code of open source software provides a good opportunity for a beginner to understand, work on, and modify a project to improve its usability.

Why contribute to open source?

Open source contribution has many benefits, and it is especially beneficial for a beginner; working on open source projects hones one's skills as a developer and provides a good foundation for a future career in the software industry. Here are some of the major benefits that open source contribution provides:

Experience of working on large projects
Open source projects often have large and complex codebases. Working on such a project requires one to understand the ins and outs of the codebase: how things are put together and how they work to result in fully functioning software.

Reading code written by others
The source code of open source software can be read and modified by anyone, which allows hundreds of people to make code contributions ranging from fixing bugs and adding new features to updating the documentation. To do any of this, we need to read and understand the code written by others and then implement our own changes. 
This allows the contributor to learn good programming practices, like writing readable and well-documented code and using version control tools like Git and GitHub correctly.

Ability to work in a team
An open source project is a joint endeavour and requires collaboration. Any code that is written in the project must be readable and understandable by every other contributor, and this team effort results in efficient and correctly functioning software. Often, when enhancing the software by adding a new feature or updating legacy code, people need to reach a consensus on what features need to be implemented, how they should be implemented, and what platforms and libraries should be used. This requires discussion with contributors and users, and it hones one's ability to work in a team, an invaluable skill in software development.

Opportunity to work with experienced developers
Many projects have been around for a long time and have maintainers with many years of experience who have been writing and fixing the codebase for years. This is a good opportunity for a beginner to learn the best programming practices from people with more experience, and it helps them become employable and gain the "experience" that companies demand from prospective employees.

Using programming skills to benefit end users
Large projects often have millions of dedicated users who use the software on a daily basis. Projects like VLC Media Player or Chromium are quite popular and have a loyal fanbase. If anyone contributes to making the software better, it improves the user experience for millions of people. 
This contribution might be a small optimization that makes the software load faster, or a new feature that users have been requesting; in any case, it ends up improving the experience for day-to-day users and makes a meaningful impact on the community.

A chance to network with others
Contributing to open source is a fun and pleasant experience. It allows us to meet people from different backgrounds and with different levels of experience. Contributors are often geographically distributed but share the same goal: to ensure the success of the project by benefiting its end users. This common goal allows us to connect and interact with people from diverse backgrounds and with different opinions. It ends up being an enriching learning journey that broadens our perspectives and makes us better developers.

Interested in contributing to open source? This article provides a step-by-step guide on how you can get started with open source contribution. In case of any doubts, please feel free to contact me by email or connect with me on LinkedIn!
  • Peter Hutterer: libwacom and Huion/Gaomon devices (2024/05/09 00:01)
    TLDR: Thanks to José Exposito, libwacom 2.12 will support all [1] Huion and Gaomon devices when running on a 6.10 kernel.

libwacom, now almost 13 years old, is a C library that provides a bunch of static information about graphics tablets that is not otherwise available by looking at the kernel device. Basically, it's a set of APIs in the form of libwacom_get_num_buttons and so on. This is used by various components to be more precise about initializing devices, even though libwacom itself has no effect on whether the device works. It's only a library for historical reasons [2]; if I were to rewrite it today, I'd probably ship libwacom as a set of static JSON or XML files with a specific schema. Here are a few examples of how this information is used: libinput uses libwacom to query information about tablet tools. The kernel event node always supports tilt, but the individual tool that is currently in proximity may not; libinput can get the tool ID from the kernel, query libwacom, and then initialize the tool struct correctly so the compositor and Wayland clients get the right information. GNOME Settings uses libwacom's information to e.g. detect whether a tablet is built into the display or external (to decide whether to show you the "Map to Monitor" button), and GNOME's mutter uses the SVGs provided by libwacom to show you an OSD where you can assign keystrokes to the buttons. All these features require that the tablet is supported by libwacom.

Huion and Gaomon devices [3] were not well supported by libwacom because they re-use USB IDs, i.e. different tablets from seemingly different manufacturers have the same vendor and product ID. This is understandable: the 16-bit product ID only allows for 65535 different devices, and if you're a company that thinks about more than just the current quarterly earnings, you realise that if you release a few devices every year (let's say 5-7), you may run out of product IDs in about 10000 years. Need to think ahead! 
Between the 140 Huion and Gaomon devices we now have in libwacom, I only counted 4 different USB IDs. Nine years ago we added name matching too to work around this (i.e. the vid/pid/name combo must match) but, lo and behold, we may run out of unique strings before the heat death of the universe, so device names are re-used too! [4] Since we had no other information available to userspace, this meant that if you plugged in e.g. a Gaomon M106, it was detected as an S620 and given wrong button numbers, a wrong SVG, etc.

A while ago José got himself a tablet and started contributing to DIGIMEND (and upstreaming a bunch of things). At some point we realised that the kernel actually had the information we needed: the firmware version string from the tablet, which conveniently gives us the tablet model too. With this kernel patch scheduled for 6.10, this is now exported as the uniq property (HID_UNIQ in the uevent), which means it's available to userspace. After a bit of rework in libwacom, we can now match on the trifecta of vid/pid/uniq or the quadrella of vid/pid/name/uniq. So hooray, for the first time we can actually detect Huion and Gaomon devices correctly.

The second thing José did was to extract all model names from the .deb packages Huion and Gaomon provide and auto-generate libwacom descriptions for all supported devices, which meant that in one pull request we added around 130 devices. Nice!

As said above, this requires the future kernel 6.10, but you can apply the patches to your current kernel if you want. If you do have one of the newly added devices, please verify the .tablet file for your device and let us know, so we can remove the "this is autogenerated" warnings and fix any issues with the file. Some of the new files may now take precedence over the old hand-added ones, so over time we'll likely have to merge them. But meanwhile, for a brief moment in time, things may actually work. 
[1] fsvo of all, but it should be all current and past ones, provided they were supported by Huion's driver [2] anecdote: in 2011 Jason Gerecke from Wacom and I sat down and decided on a generic tablet handling library independent of the xf86-input-wacom driver. libwacom was supposed to be that library, but it never turned into more than a static description library; libinput is now what our original libwacom idea was. [3] and XP Pen and UCLogic, but we don't yet have a fix for those at the time of writing [4] names like "HUION PenTablet Pen"...
  • Ismael Olea: This website now has GPDR friendly statistics (2024/05/05 22:00)
    Now this website uses a simple statistics system which is GDPR compatible and privacy friendly. It uses Libre Counter, which needs neither user registration nor any configuration beyond adding some code like this:

<a href="" target="_blank">
  <img src="" alt="GDPR friendly statistics" width="14"
       style="filter: grayscale(1);" title="GDPR friendly statistics"
       referrerpolicy="unsafe-url"/>
</a>

No cookies either. Thanks Pinchito!
  • Justin W. Flory: Outreachy May 2024: A letter to Fedora applicants (2024/05/02 13:05)
    The post Outreachy May 2024: A letter to Fedora applicants appeared first on /home/jwf/. /home/jwf/ - Free & Open Source, technology, travel, and life reflections

To all Outreachy May 2024 applicants to the Fedora Project: Today is May 2nd, 2024. The Outreachy May 2024 round results will be published in a few short hours. This year, participation in Fedora for Outreachy May 2024 was record-breaking. Fedora will fund three internships this year. During the application and contribution phase, over 150 new contributors appeared in our Mentored Project contribution channels. For the project I am mentoring specifically, 38 applicants recorded contributions and 33 applicants submitted final applications. This is my third time mentoring, but this Outreachy May 2024 round has been a record-breaker across all the projects I have mentored until now. But breaking records is not what this letter is about. This day can be either enormously exciting or enormously disappointing. It is a tough day for me. There are so many Outreachy applicants who are continuing to contribute after the final applications were due. I see several applicants from my project who are contributing across the Fedora community, and actually leveling up to even bigger contributions than during the application period. It is exciting to see people grow in their confidence and capabilities in an open source community like Fedora. Mentoring is a rewarding task for me, and I feel immensely proud of the applicants we have had in the Fedora community this round. But the truth is difficult. Fedora has funding for three interns, hard and simple. Hard decisions have to be made. If I had unlimited funding, I would have hired so many of our applicants. But funding is not unlimited. Three people will receive great news today, and most people will receive sad news. 
Throughout this entire experience in the application phase, I wanted to design Joseph Gayoso's and my project so that even folks who were not selected would have an enriching experience. We wanted to put something real in the hands of our applicants at the end. We also wanted to boost their confidence in showing up in a community and guide them on how to roll up their sleeves and get started. Looking at the portfolios that applicants to our project submitted, I admire how far our applicants have come since the day the projects were announced. Most applicants had never participated in an open source community before. And for some, you would never have known that either! So, if you receive disappointing news today, remember that it does not reflect badly on you. The Outreachy May 2024 round was incredibly competitive. Literally record-breaking. We have to say no to many people who have proved that they have what it takes to be a capable Fedora Outreachy intern. I hope you can look at all the things you learned and built over these past few months, and use this as a step up to the next opportunity awaiting you. Maybe it is an Outreachy internship in a future round, or maybe it is something else. If there is anything I have learned, it is that life takes us on the most unexpected journeys sometimes. And whatever is meant to happen will happen. I believe that there is a reason for everything, but we may not realize what that reason is until much later in the future. Thank you to all of the Fedora applicants who put in immense effort over the last several months. I understand if you choose to stop contributing to Fedora. I hope that you will not be discouraged from open source generally, though, and that you will keep trying. If you do choose to continue contributing to Fedora, I promise we will find a place for you to continue on. Regardless of your choice, keep shining and be persistent. 
Don’t give up easily, and remember that what you learned in these past few months can give you a leading edge on the next opportunity waiting around the corner for you. Freedom, Friends, Features, First! — Justin