GNOME Planet - Latest News

  • Gotam Gorabh: GSoC 2023 [Week 2 Report]: Add “Remote Desktop” page into the system panel (2023/06/04 10:00)
    Project Title: Create a New “System” panel in GNOME Settings
Mentor: Felipe Borges
Contributor: Gotam Gorabh

Introduction
Implementing a Remote Desktop page will allow users to access and control a computer or a desktop environment remotely from another location. It enables users to connect to a remote computer or server and interact with it as if they were physically present in front of it. This week I will implement only the GUI of the Remote Desktop page.

Remote Desktop Mockup
I will address this issue (#2241) and implement this mockup in this project. For more details, here is my Proposal.

Week 1 Goals:
- Add remote_desktop_row using CcListRow into the empty system panel.
- Implement the content page for the remote_desktop_row using AdwLeaflet.
- Inside the content page, implement the GUI according to the Remote Desktop mockup.
- Take feedback from my mentors and design team members on any modifications apart from the mockups.
- Eliminate errors and fix coding-standard issues.

Progress Made:
- Added remote_desktop_row using CcListRow into the empty system panel.
- Implemented the content page for the remote_desktop_row using AdwLeaflet.
- Implemented the GUI according to the Remote Desktop mockup.

The current look of the Remote Desktop page:

Deliverables:
To create the Remote Desktop page, I added three files named cc-system-remote-page.c, cc-system-remote-page.h, and cc-system-remote-page.ui inside the system folder. I also modified the cc-system-panel.c, cc-system-panel.ui, system.gresource.xml, and system/meson.build files.

Related Merge Request:
system: Add 'Remote Desktop' page into the system panel !1818

Plan for the Next Week:
Next week, I will move the About panel as a page into the new system panel. This week is still ongoing, so stay tuned for more updates.
  • Hari Rana: Response to “Developers are lazy, thus Flatpak” (2023/06/04 00:00)
    Introduction
Recently, the article “Developers are lazy, thus Flatpak”, by Martijn Braam, was published to criticize a few things regarding Flatpak. I want to go over the article and address some points that were raised. While Martijn, the author, contrasted Flatpak with Alpine Linux, I’m going to be contrasting Flatpak with popular Linux distributions, as, to me, it makes sense to contrast Flatpak with some of the most used distributions. I recommend reading his article before my response, as I won’t be replying to every point raised.

“Flatpak is a distribution”

“While the developers like to pretend real hard that Flatpak is not a distribution, it’s still suspiciously close to one. It lacks a kernel and a few services and it lacks the standard Linux base directory specification but it’s still a distribution you need to target. Instead of providing seperate packages with a package manager it provides a runtime that comes with a bunch of dependencies. Conveniently it also provides multiple runtimes to make sure there’s not actually a single base to work on. Because sometimes you need Gnome libraries, sometimes you need KDE libraries. Since there’s no package manager those will be in seperate runtimes.”

I’m not really sure who denies that Flatpak is a distribution, but I’m certainly not one of them. If anything, this seems to be a really good thing to me, because we’re bundling a distribution and deploying it to users’ systems, meaning that they’re getting the exact same builds that we, the developers, tested against. It makes it even easier for us to troubleshoot, because the environment is near-identical. To quote from Flatpak’s documentation: “Flatpak is a framework for distributing desktop applications across various Linux distributions.” Even Flatpak agrees that it’s a distribution.

“No built in package manager”

“If you need a dependency that’s not in the runtime there’s no package manager to pull in that dependency. The solution is to also package the dependencies you need yourself and let the flatpak tooling build this into the flatpak of your application. So now instead of being the developer for your application you’re also the maintainer of all the dependencies in this semi-distribution you’re shipping under the disguise of an application.”

Separating every dependency would be really inconvenient, just like managing graphical apps in a Linux distribution. If, for example, a flatpak named XYZ needs 10 dependencies, and one of those dependencies is updated to a version that the developers of the XYZ flatpak have not tested, it could lead to bugs or even breakages that are really difficult for developers to trace back, as they would need to frequently contact each packager and investigate together. So, as mentioned by the author, the solution is to let the developers bundle everything, as they test their apps against an environment that is easier to understand. Separating all dependencies is easier for smaller programs, as they have a small volume of dependencies and thus are not as difficult to troubleshoot. However, Flatpak targets graphical apps, which are typically large and sophisticated and need a model that can scale well. In this case, letting developers manage the dependencies is a better approach, as it’s easier to maintain and troubleshoot.[1]

“And one thing is for sure, I don’t trust application developers to maintain dependencies.”

That’s completely fair, as you’re entitled to your own opinion.
Likewise, I don’t trust the worryingly large number of non-programmers/developers that package dependencies in larger distributions, especially when they have no real-world experience in developing apps. Furthermore, I trust even less the package managers that allow dependencies to self-destruct in case something goes wrong.

“This gets really nuts by looking at some software that deals with multimedia. Lets look at the Audacity flatpak. It builds as dependency:
- wxwidgets
- ffmpeg
- sqlite
- chrpath
- portaudio
- portmidi
So lets look at how well dependencies are managed here. Since we’re now almost exactly half a year into 2023 I’ll look at the updates for the last 6 months and compare it to the same dependencies in Alpine Linux.
- audacity has been updated 4 times in the flatpak. It has been updated 5 times on Alpine.
- ffmpeg has been updated to 6.0 in both the flatpak and Alpine, but the ffmpeg package has had 9 updates because if codecs that have been updated.
- sqlite hasn’t been updated in the flatpak and has been updated 4 times in Alpine
- wxwidgets hasn’t been updated in the flatpak and has been updated 2 times in Alpine
- chrpath hasn’t had updates
- portaudio hasn’t had updates in flatpak and Alpine.
- portmidi hasn’t had updates
This is just a random package I picked and it already had a lot more maintainance of the dependencies than the flatpak has. It most likely doesn’t scale to have all developers keep track of all the dependencies of all their software.”

The main issue here is that Audacity has many technical limitations. To name one, in Audacity’s source code, it’s documented that “Audacity on Linux uses vanilla version of wxWidgets, we require that version 3.1.3 is used. This version is not available in most of the distributions.” This is why wxWidgets in the Audacity flatpak wasn’t updated. If they had updated to a newer major version, they might have run into issues. Alpine Linux, on the other hand, packages wxWidgets version 3.2.2.1, which, as seen above, is outside of the “required” version. Even then, in this case, the amount of updates doesn’t signify which is better maintained, as context absolutely matters. It is best to consult the maintainer of the flatpak about their decision, rather than cherry-picking and misinforming yourself without doing a thorough investigation.

“The idea of isolation”

This whole section is criticizing GNOME Software, not Flatpak itself. GNOME Software is a frontend that decides how flatpaks should be displayed to users. If the GNOME Software developers wanted, they could display a dialog that warns the user when they try to install the app. In this case, they went against it and put it under a separate button. In my opinion, it is really unfair to blame a backend utility when the frontend does it “wrong”.[2]

“So what about traditional distributions”

“I’ve heard many argument for Flatpaks by users and developers but in the end I can’t really say the pros outweigh the cons. I think it’s very important that developers do not have the permissions to push whatever code they want to everyone under the disguise of a secure system. And that’s my opinion as a software developer. Software packaged by distributions has at least some degree of scrutiny and it often results in at least making sure build flags are set to disable user tracking and such features.”

As user u/natermer explains really well on Reddit: “[T]he reviewing and security benefits of distribution packaging is heavily overstated by many people defending it.
Only a small number of high profile packages ever receive serious security scrutiny. For the most part the packaging authors put in just enough to work to get the software to build correctly and the dependencies to install without breaking anything and then it is up to the end users to report on any problems that crop up. Which means that the guarantees are not really much better then with stuff like “pip”.”

In the Audacity example discussed above, I mentioned that Alpine Linux packages wxWidgets version 3.2.2.1 alongside Audacity, even though Audacity has explicitly stated that version 3.1.3 should be used. This is a good example of “the packaging authors put in just enough to work to get the software to build correctly and the dependencies to install without breaking anything and then it is up to the end users to report on any problems that crop up”. The app launching isn’t enough to conclude that it works well. While the author’s point was regarding security, I think that scrutiny beyond security is equally important and should be mentioned. Furthermore, this isn’t only about scrutiny, but also maintenance, or the lack thereof.

Let’s look at the number of packages that some popular distributions contain in their repositories. As of writing this article:
- Debian has over 1,200 orphaned packages (no maintainer), many of which have been orphaned for over 1,000 days!
- Fedora Linux has over 800 orphaned packages (no maintainer).
- Arch Linux has over 400 packages flagged as outdated.

So, while the author is concerned that developers will “mishandle” dependencies with Flatpak, we observe that the more worrying bit is the number of unmaintained packages in the distributions you run on your system; packages that are installed on your host. So you either choose a mishandled package on your host, or a mishandled dependency inside a container. I will happily take the latter.

“I also believe software in general is better if it’s made with the expectation that it will run outside of Flatpak. It’s not that hard to make sure you don’t depend on bleeding edge versions of libraries while that’s not needed. It’s not that hard to have optional dependencies in software. It’s not that hard to actually follow XDG specifications instead of hardcoding paths.”

I’ve written “Traditional Packaging is not Suitable for Modern Applications”, which goes in depth into the problems caused by the lack of robust dependency management. To summarize, with traditional packaging systems, developers cannot cherry-pick dependencies, and distributions cannot provide consistent experiences and environments if they’re not making use of the same containers or similar. Take Audacity as an example: on a traditional packaging system, if the distribution chooses to ship the latest version of wxWidgets, 3.2.2.1, then the packagers cannot cherry-pick 3.1.3 or any other version, because that version would conflict with the existing one. This isn’t only about versioning; it goes as far as applying custom patches. For example, as covered in the article I linked, OBS Studio requires a few patches to ffmpeg for it to integrate neatly with OBS Studio. This, again, cannot be done if the distribution ships an unpatched ffmpeg, or one that doesn’t have the patches OBS Studio requires to work properly; otherwise they have to find all sorts of workarounds (refer to my article). Another problem that traditional packaging systems cannot solve is providing a consistent and predictable environment.
For example, Bottles is a “fragile” piece of software, because it needs Wine and other utilities to run in secure, contained, and predictable environments. I’ve written a long comment that explains why supporting traditional packaging systems is a burden for us, while being infeasible at best. Steam, another example, uses steam-runtime-tools, which uses bubblewrap, a utility that originated from Flatpak, to contain and isolate games. Even though Steam is available and “supported” on many distributions, they all originate from the same archive, which means that they’re all the same binaries and thus are somewhat consistent, just like Flatpak. As Linus Torvalds said, “[…] I guarantee you Valve will not make 15 different binaries […]” — in reality, developers who have other things to worry about cannot spend their time building their software 5 times for 5 different Linux distributions and continuously supporting them. To be fair, “5” is an understatement. Instead, they want one binary that they can continuously and thoroughly test and ship to users.

“But packaging for distributions is hard”

“That’s the best thing! Developers are not supposed to be the ones packaging software so it’s not hard at all. It’s not your task to get your software in all the distributions, if your software is useful to people it tends to get pulled in. I have software that’s packaged in Alpine Linux, ALT Linux, Archlinux AUR, Debian, Devuan, Fedora, Gentoo, Kali, LiGurOS, Nix, OpenMandriva, postmarketOS, Raspbian, Rosa, Trisquel, Ubuntu and Void. I did not have to package most of this. The most I notice from other distributions packaging my software is patches from maintainers that improve the software, usually in dealing with some edge case I forgot with a hardcoded path somewhere. The most time I’ve ever spent on distribution packaging is actually the few pieces of software I’ve managed to push to Flathub. Dealing with differences between distributions is easy, dealing with differences between runing inside and outside Flatpak is hard.”

Masochism aside, as the author wrote in the beginning of the article, “[s]adly there’s reality”. That reality is that developers do not and literally cannot deal with all the burden that distributions cause. They want something that is easy for them to maintain, while having the control to bundle everything they want, however they see fit, because developers know best how their programs work. They can test their apps in environments and ship the same environments to users. If the distribution does not put in the effort to make it easy for developers to package, test, and ship their apps to users, then the distribution has failed to appeal to the majority of developers. The decreased difficulty that Flatpak and Flathub offer is precisely why many distributions are starting to include and use Flathub by default, like SteamOS; it’s why GNOME and KDE have been focusing on Flatpak as the primary distribution method; and it’s also why Flatpak and Flathub have grown in popularity really quickly.

“But Flatpaks are easier for end users”

“A second issue I’ve had on my Pinebook Pro is that it has a 64GB rootfs. Using too many flatpaks is just very wasteful of space. In theory you have a runtime that has your major dependencies and then a few Megabytes of stuff in your application flatpak. In practice I nearly have an unique platform per flatpak installed because the flatpaks depend on different versions of that platform or just on different platforms.”
While I have a 256 GB SSD, I probably have more graphical apps than the average user, because I test and contribute to several apps. I would go as much as saying that my Flatpak setup is cursed, but I digress. Many of the apps use different runtimes and different versions of runtimes. I believe that I fall into the “I nearly have an unique platform per flatpak installed because the flatpaks depend on different versions of that platform or just on different platforms” category, although I believe his statement was hyperbolic. First, we’ll check the amount of storage all flatpaks take. I’ll be using flatpak-dedup-checker to measure the storage usage: $ ./flatpak-dedup-checker --user Directories: /var/home/TheEvilSkeleton/.local/share/flatpak/{runtime,app} Size without deduplication: 89.31 GB Size with deduplication: 37.73 GB (42% of 89.31 GB) Size with compression: 27.66 GB (30% of 89.31 GB; 73% of 37.73 GB) We notice that all flatpaks, without compression, take 37.73 GB in total. Let’s look at how many apps I have installed: $ flatpak list --app --user | wc -l 173 173 graphical apps — including major browsers, such as Firefox, LibreWolf, Tor Browser, Chromium, ungoogled-chromium, Google Chrome, Brave, Microsoft Edge, and Epiphany. If you’re curious, feel free to look at all my installed apps: $ flatpak list --app --user Name Application ID Version Branch Origin Dialect app.drey.Dialect 2.1.1 stable flathub Elastic app.drey.Elastic 0.1.3 stable flathub Cambalache ar.xjuan.Cambalache 0.10.3 stable flathub Valent ca.andyholmes.Valent master valent-origin Dconf Editor ca.desrt.dconf-editor 43.0 stable flathub Decoder com.belmoussaoui.Decoder 0.3.3 stable flathub ASHPD Demo com.belmoussaoui.ashpd.demo 0.3.0 stable flathub Bitwarden com.bitwarden.desktop 2023.5.0 stable flathub Boxy SVG com.boxy_svg.BoxySVG 3.96.0 stable flathub Brave Browser com.brave.Browser 1.52.117 stable flathub Tally for Plausible com.cassidyjames.plausible 3.0.1 stable flathub Discord Canary com.discordapp.DiscordCanary 0.0.156 beta flathub-beta Mindustry com.github.Anuken.Mindustry 144.3 stable flathub ungoogled-chromium com.github.Eloston.UngoogledChromium 113.0.5672.126 stable flathub Gradience com.github.GradienceTeam.Gradience 0.4.1 stable flathub Desktop Files Creator com.github.alexkdeveloper.desktop-files-creator 1.2.2 stable flathub Eyedropper com.github.finefindus.eyedropper 0.6.0 stable flathub Rnote com.github.flxzt.rnote 0.6.0 stable flathub Wike com.github.hugolabe.Wike 2.0.1 stable flathub Text Pieces com.github.liferooter.textpieces 3.4.1 stable flathub Tor Browser Launcher com.github.micahflee.torbrowser-launcher 0.3.6 stable flathub G4Music com.github.neithern.g4music 1.13 stable flathub Czkawka com.github.qarmin.czkawka 5.1.0 stable flathub Clapper com.github.rafostar.Clapper 0.5.2 stable flathub Logisim-evolution com.github.reds.LogisimEvolution 3.8.0 stable flathub Avvie com.github.taiko2k.avvie 2.3 stable flathub Flatseal com.github.tchx84.Flatseal 2.0.1 stable flathub Frog com.github.tenderowl.frog 1.3.0 stable flathub Video Downloader com.github.unrud.VideoDownloader 0.12.4 stable flathub Easy Effects com.github.wwmm.easyeffects 7.0.4 stable flathub NewsFlash com.gitlab.newsflash 2.3.0 stable flathub Google Chrome com.google.Chrome 113.0.5672.126-1 stable flathub Inochi Creator com.inochi2d.inochi-creator 0.8.0 stable flathub qView com.interversehq.qView 5.0 stable flathub GtkStressTesting com.leinardi.gst 0.7.5 stable flathub Forge Sparks com.mardojai.ForgeSparks 0.1.1 stable flathub Extension 
Manager com.mattjakeman.ExtensionManager 0.4.0 stable flathub Microsoft Edge com.microsoft.Edge 114.0.1823.37-1 stable flathub OBS Studio com.obsproject.Studio 29.1.2 stable flathub Share Preview com.rafaelmardojai.SharePreview 0.3.0 stable flathub Black Box com.raggesilver.BlackBox 0.13.2 stable flathub Geopard com.ranfdev.Geopard 1.4.0 stable flathub Transmission com.transmissionbt.Transmission 4.0.3 stable flathub Bottles com.usebottles.bottles 51.6 master bottles-origin Bottles com.usebottles.bottles 51.6 stable flathub Steam com.valvesoftware.Steam 1.0.0.75 stable flathub Visual Studio Code com.visualstudio.code 1.78.2-1683731010 stable flathub Fragments de.haeckerfelix.Fragments 2.1.1 stable flathub Tubefeeder de.schmidhuberj.tubefeeder v1.9.4 master tubefeeder-origin Tubefeeder de.schmidhuberj.tubefeeder v1.9.6 stable flathub Tuba dev.geopjr.Tuba 0.3.2 stable flathub Forecast dev.salaniLeo.forecast 0.1.0 stable flathub HandBrake fr.handbrake.ghb 1.6.1 stable flathub Metadata Cleaner fr.romainvigier.MetadataCleaner 2.5.2 stable flathub Cartridges hu.kramo.Cartridges 1.5.4 stable flathub Element im.riot.Riot 1.11.31 stable flathub Cinny in.cinny.Cinny 2.2.6 stable flathub Komikku info.febvre.Komikku 1.21.1 stable flathub Amberol io.bassi.Amberol 0.10.3 stable flathub Bavarder io.github.Bavarder.Bavarder 0.2.3 stable flathub AdwSteamGtk io.github.Foldex.AdwSteamGtk 0.6.0 stable flathub Youtube Downloader Plus io.github.aandrew_me.ytdn 3.14.0 stable flathub Epic Asset Manager io.github.achetagames.epic_asset_manager 3.8.4 stable flathub GPU-Viewer io.github.arunsivaramanneo.GPUViewer 2.26 stable flathub Celluloid io.github.celluloid_player.Celluloid 0.25 stable flathub Escambo io.github.cleomenezesjr.Escambo 0.1.1 stable flathub PinApp io.github.fabrialberio.pinapp 1.1.7 stable flathub Monitorets io.github.jorchube.monitorets 0.10.0 stable flathub AppImage Pool io.github.prateekmedia.appimagepool 5.1.0 stable flathub Kooha io.github.seadve.Kooha 2.2.3 stable flathub Mousai io.github.seadve.Mousai 0.7.5 stable flathub WebCord io.github.spacingbat3.webcord 4.2.0 stable flathub Converter io.gitlab.adhami3310.Converter 1.6.1 stable flathub Sudoku Solver io.gitlab.cyberphantom52.sudoku_solver 1.0.1 stable flathub Letterpress io.gitlab.gregorni.ASCIIImages 1.3.0 stable flathub Calligraphy io.gitlab.gregorni.Calligraphy 1.0.0 stable flathub LibreWolf io.gitlab.librewolf-community 113.0.2-1 stable flathub Upscaler io.gitlab.theevilskeleton.Upscaler master upscaler3-origin Upscaler io.gitlab.theevilskeleton.Upscaler 1.1.2 stable flathub Upscaler io.gitlab.theevilskeleton.Upscaler 1.1.2 test upscaler1-origin Devel io.gitlab.theevilskeleton.Upscaler.Devel master devel-origin Dev Toolbox me.iepure.devtoolbox 1.0.2 stable flathub Passes me.sanchezrodriguez.passes 0.7 stable flathub Lutris net.lutris.Lutris 0.5.13 stable flathub Mullvad Browser net.mullvad.MullvadBrowser 102.9.0esr-12.0-2-build1 stable flathub Poedit net.poedit.Poedit 3.3.1 stable flathub RPCS3 net.rpcs3.RPCS3 0.0.28-1-33558d14 stable flathub Live Captions net.sapples.LiveCaptions 0.4.0 stable flathub Color Picker nl.hjdskes.gcolor3 2.4.0 stable flathub Audacity org.audacityteam.Audacity 3.3.2 stable flathub Chromium Web Browser org.chromium.Chromium 114.0.5735.90 stable flathub Chromium application base org.chromium.Chromium.BaseApp 21.08 flathub Electron2 application base org.electronjs.Electron2.BaseApp 21.08 flathub Electron2 application base org.electronjs.Electron2.BaseApp 22.08 flathub Flatpak External Data Checker 
org.flathub.flatpak-external-data-checker stable flathub Builder org.flatpak.Builder stable flathub Piper org.freedesktop.Piper 0.7 stable flathub VulkanInfo org.freedesktop.Platform.VulkanInfo 22.08 flathub appstream-glib org.freedesktop.appstream-glib 0.8.1 stable flathub Feeds org.gabmus.gfeeds 2.2.0 stable flathub Giara org.gabmus.giara 1.1.0 stable flathub Zola org.getzola.zola 0.17.2 stable flathub GNU Image Manipulation Program org.gimp.GIMP 2.99.14 beta flathub-beta GNU Image Manipulation Program org.gimp.GIMP 2.10.34 master gnome-nightly GNU Image Manipulation Program org.gimp.GIMP 2.10.34 stable flathub Adwaita Demo org.gnome.Adwaita1.Demo 1.4.alpha master gnome-nightly Boxes org.gnome.Boxes 44.2 stable flathub Builder org.gnome.Builder 44.2 stable flathub Calculator org.gnome.Calculator 44.0 stable flathub Calendar org.gnome.Calendar 44.0 stable flathub Contacts org.gnome.Contacts 44.0 stable flathub Devhelp org.gnome.Devhelp 43.0 stable flathub Web org.gnome.Epiphany 44.3 stable flathub File Roller org.gnome.FileRoller 43.0 stable flathub Firmware org.gnome.Firmware 43.2 stable flathub Fractal org.gnome.Fractal.Devel 5~beta1-c3d77b7 master gnome-nightly Geary org.gnome.Geary 43.0 stable flathub Glade org.gnome.Glade 3.40.0 stable flathub Lollypop org.gnome.Lollypop 1.4.37 stable flathub Loupe org.gnome.Loupe 44.3 stable flathub Maps org.gnome.Maps 44.2 stable flathub Files org.gnome.NautilusDevel 44.1 master gnome-nightly Notes org.gnome.Notes 40.1 stable flathub Document Scanner org.gnome.SimpleScan 44.0 stable flathub Text Editor org.gnome.TextEditor 44.0 stable flathub Endeavour org.gnome.Todo 43.0 stable flathub Videos org.gnome.Totem 43.0 stable flathub Weather org.gnome.Weather 44.0 stable flathub Pika Backup org.gnome.World.PikaBackup 0.6.1 stable flathub Secrets org.gnome.World.Secrets 7.3 stable flathub Clocks org.gnome.clocks 44.0 stable flathub App Icon Preview org.gnome.design.AppIconPreview 3.3.0 stable flathub Contrast org.gnome.design.Contrast 0.0.8 stable flathub Emblem org.gnome.design.Emblem 1.2.0 stable flathub Icon Library org.gnome.design.IconLibrary 0.0.16 stable flathub Lorem org.gnome.design.Lorem 1.2 stable flathub Color Palette org.gnome.design.Palette 2.0.2 stable flathub Symbolic Preview org.gnome.design.SymbolicPreview 0.0.8 stable flathub Typography org.gnome.design.Typography 0.2.0 stable flathub Fonts org.gnome.font-viewer 44.0 stable flathub Identity org.gnome.gitlab.YaLTeR.Identity 0.5.0 stable flathub Iotas org.gnome.gitlab.cheywood.Iotas 0.1.16 stable flathub Apostrophe org.gnome.gitlab.somas.Apostrophe 2.6.3 stable flathub GTK Demo org.gtk.Demo4 master gnome-nightly Inkscape org.inkscape.Inkscape 1.2.2 stable flathub JDownloader org.jdownloader.JDownloader 2.0 stable flathub Dolphin org.kde.dolphin 23.04.1 stable flathub Kdenlive org.kde.kdenlive 23.04.1 stable flathub Krita org.kde.krita 5.1.5 stable flathub NeoChat org.kde.neochat 23.04.1 stable flathub Xwayland Video Bridge org.kde.xwaylandvideobridge master xwaylandvideobridge-origin KeePassXC org.keepassxc.KeePassXC 2.7.5 stable flathub LibreOffice org.libreoffice.LibreOffice 7.5.3.2 stable flathub Thunderbird org.mozilla.Thunderbird 102.11.2 stable flathub Firefox org.mozilla.firefox 113.0.2 stable flathub Tagger org.nickvision.tagger 2022.11.2 stable flathub Nicotine+ org.nicotine_plus.Nicotine 3.2.9 stable flathub ONLYOFFICE Desktop Editors org.onlyoffice.desktopeditors 7.3.3 stable flathub Helvum org.pipewire.Helvum 0.4.0 stable flathub qBittorrent org.qbittorrent.qBittorrent 4.5.3 
stable flathub Tenacity org.tenacityaudio.Tenacity 1.3-beta3 test tenacity-origin Wine org.winehq.Wine 7.0 stable-21.08 flathub Wine org.winehq.Wine 8.0 stable-22.08 flathub Zrythm org.zrythm.Zrythm 1.0.0-beta.4.9.1 stable flathub Imaginer page.codeberg.Imaginer.Imaginer 0.2.2 stable flathub Atoms pm.mirko.Atoms 1.1.1 stable flathub Commit re.sonny.Commit 4.0 stable flathub Oh My SVG re.sonny.OhMySVG 1.2 stable flathub Playhouse re.sonny.Playhouse 1.1 stable flathub Workbench re.sonny.Workbench 44.1 stable flathub Graphs se.sjoerd.Graphs 1.5.2 stable flathub Cawbird uk.co.ibboard.cawbird 1.5 stable flathub ArmCord xyz.armcord.ArmCord 3.2.0 stable flathub Alright, let’s look at the amount of runtimes installed: $ flatpak list --runtime --user | wc -l 97 And the runtimes themselves: $ flatpak list --runtime --user Name Application ID Version Branch Origin Codecs com.github.Eloston.UngoogledChromium.Codecs stable flathub Proton (community build) com.valvesoftware.Steam.CompatibilityTool.Proton 7.0-6 beta flathub-beta Proton (community build) com.valvesoftware.Steam.CompatibilityTool.Proton 7.0-5 stable flathub Proton experimental (community build) com.valvesoftware.Steam.CompatibilityTool.Proton-Exp 7.0-20230208 stable flathub Proton-GE (community build) com.valvesoftware.Steam.CompatibilityTool.Proton-GE 8.3-1 beta flathub-beta Proton-GE (community build) com.valvesoftware.Steam.CompatibilityTool.Proton-GE 8.3-1 stable flathub gamescope com.valvesoftware.Steam.Utility.gamescope 3.11.51 stable flathub Codecs org.audacityteam.Audacity.Codecs stable flathub Codecs org.chromium.Chromium.Codecs stable flathub Calf org.freedesktop.LinuxAudio.Plugins.Calf 0.90.3 22.08 flathub LSP org.freedesktop.LinuxAudio.Plugins.LSP 1.2.6 22.08 flathub MDA org.freedesktop.LinuxAudio.Plugins.MDA 1.2.10 22.08 flathub TAP-plugins org.freedesktop.LinuxAudio.Plugins.TAP 1.0.1 22.08 flathub ZamPlugins org.freedesktop.LinuxAudio.Plugins.ZamPlugins 4.1 22.08 flathub SWH org.freedesktop.LinuxAudio.Plugins.swh 0.4.17 22.08 flathub Freedesktop Platform org.freedesktop.Platform 21.08.18 21.08 flathub Freedesktop Platform org.freedesktop.Platform 22.08.12.1 22.08 flathub i386 org.freedesktop.Platform.Compat.i386 21.08 flathub i386 org.freedesktop.Platform.Compat.i386 22.08 flathub Mesa org.freedesktop.Platform.GL.default 21.3.9 21.08 flathub Mesa org.freedesktop.Platform.GL.default 23.1.1 22.08 flathub Mesa (Extra) org.freedesktop.Platform.GL.default 23.1.1 22.08-extra flathub Mesa git snapshot org.freedesktop.Platform.GL.mesa-git 23.0-branchpoint-4408-g4ac56e3e5a4 23.08beta flathub-beta default org.freedesktop.Platform.GL32.default 21.08 flathub Mesa org.freedesktop.Platform.GL32.default 23.1.1 22.08 flathub Mesa (Extra) org.freedesktop.Platform.GL32.default 23.1.1 22.08-extra flathub Mesa git snapshot org.freedesktop.Platform.GL32.mesa-git 23.0-branchpoint-4408-g4ac56e3e5a4 23.08beta flathub-beta MangoHud org.freedesktop.Platform.VulkanLayer.MangoHud 0.6.9-1 22.08 flathub vkBasalt org.freedesktop.Platform.VulkanLayer.vkBasalt 0.3.2.9 22.08 flathub ffmpeg-full org.freedesktop.Platform.ffmpeg-full 21.08 flathub ffmpeg-full org.freedesktop.Platform.ffmpeg-full 22.08 flathub i386 org.freedesktop.Platform.ffmpeg_full.i386 21.08 flathub i386 org.freedesktop.Platform.ffmpeg_full.i386 22.08 flathub openh264 org.freedesktop.Platform.openh264 2.1.0 2.0 flathub openh264 org.freedesktop.Platform.openh264 2.1.0 2.0beta flathub-beta openh264 org.freedesktop.Platform.openh264 2.1.0 2.2.0 flathub openh264 org.freedesktop.Platform.openh264 
2.1.0 2.2.0beta gnome-nightly Freedesktop SDK org.freedesktop.Sdk 21.08.18 21.08 flathub Freedesktop SDK org.freedesktop.Sdk 22.08.12.1 22.08 flathub i386 org.freedesktop.Sdk.Compat.i386 21.08 flathub i386 org.freedesktop.Sdk.Compat.i386 22.08 flathub .NET Core SDK extension org.freedesktop.Sdk.Extension.dotnet6 6.0.408 21.08 flathub Free Pascal Compiler and Lazarus org.freedesktop.Sdk.Extension.freepascal 3.2.2 21.08 flathub Go programming language Sdk extension org.freedesktop.Sdk.Extension.golang 1.20.2 21.08 flathub OpenJDK 11 SDK Extension org.freedesktop.Sdk.Extension.openjdk11 21.08 flathub OpenJDK 17 SDK Extension org.freedesktop.Sdk.Extension.openjdk17 22.08 flathub Rust stable org.freedesktop.Sdk.Extension.rust-stable 1.67.0 21.08 flathub Rust stable org.freedesktop.Sdk.Extension.rust-stable 1.70.0 22.08 flathub toolchain-i386 org.freedesktop.Sdk.Extension.toolchain-i386 21.08 flathub toolchain-i386 org.freedesktop.Sdk.Extension.toolchain-i386 22.08 flathub toolchain-i386 org.freedesktop.Sdk.Extension.toolchain-i386 22.08beta flathub-beta GNOME Boxes Osinfo DB org.gnome.Boxes.Extension.OsinfoDb 20230518 stable flathub HEIC org.gnome.Loupe.HEIC stable flathub GNOME Application Platform version 41 org.gnome.Platform 41 flathub GNOME Application Platform version 42 org.gnome.Platform 42 flathub GNOME Application Platform version 43 org.gnome.Platform 43 flathub GNOME Application Platform version 44 org.gnome.Platform 44 flathub GNOME Application Platform version Nightly org.gnome.Platform master gnome-nightly i386 org.gnome.Platform.Compat.i386 41 flathub i386 org.gnome.Platform.Compat.i386 43 flathub i386 org.gnome.Platform.Compat.i386 44 flathub GNOME Software Development Kit version 41 org.gnome.Sdk 41 flathub GNOME Software Development Kit version 42 org.gnome.Sdk 42 flathub GNOME Software Development Kit version 43 org.gnome.Sdk 43 flathub GNOME Software Development Kit version 44 org.gnome.Sdk 44 flathub GNOME Software Development Kit version Nightly org.gnome.Sdk master gnome-nightly i386 org.gnome.Sdk.Compat.i386 41 flathub i386 org.gnome.Sdk.Compat.i386 42 flathub i386 org.gnome.Sdk.Compat.i386 43 flathub i386 org.gnome.Sdk.Compat.i386 44 flathub i386 org.gnome.Sdk.Compat.i386 master gnome-nightly Codecs org.gnome.Totem.Codecs stable flathub yt-dl totem-pl-parser plugin org.gnome.Totem.Videosite.YouTubeDl stable flathub Adwaita dark GTK theme org.gtk.Gtk3theme.Adwaita-dark 3.22 flathub adw-gtk3 Gtk Theme org.gtk.Gtk3theme.adw-gtk3 3.22 flathub adw-gtk3-dark Gtk Theme org.gtk.Gtk3theme.adw-gtk3-dark 3.22 flathub Adwaita theme org.kde.KStyle.Adwaita 6.4 flathub Kvantum theme engine org.kde.KStyle.Kvantum 1.0.6 5.15-21.08 flathub KDE Application Platform org.kde.Platform 5.15-21.08 flathub KDE Application Platform org.kde.Platform 5.15-22.08 flathub KDE Application Platform org.kde.Platform 6.4 flathub QGnomePlatform org.kde.PlatformTheme.QGnomePlatform 5.15-21.08 flathub QGnomePlatform org.kde.PlatformTheme.QGnomePlatform 5.15-22.08 flathub QGnomePlatform org.kde.PlatformTheme.QGnomePlatform 6.4 flathub QtSNI org.kde.PlatformTheme.QtSNI 5.15-21.08 flathub KDE Software Development Kit org.kde.Sdk 5.15-21.08 flathub KDE Software Development Kit org.kde.Sdk 6.4 flathub QGnomePlatform-decoration org.kde.WaylandDecoration.QGnomePlatform-decoration 5.15-21.08 flathub QGnomePlatform-decoration org.kde.WaylandDecoration.QGnomePlatform-decoration 5.15-22.08 flathub QGnomePlatform-decoration org.kde.WaylandDecoration.QGnomePlatform-decoration 6.4 flathub Codecs org.kde.krita.Codecs 
stable flathub DXVK org.winehq.Wine.DLLs.dxvk 1.10.3 stable-21.08 flathub DXVK org.winehq.Wine.DLLs.dxvk 1.10.3 stable-22.08 flathub Gecko org.winehq.Wine.gecko stable-21.08 flathub Gecko org.winehq.Wine.gecko stable-22.08 flathub Mono org.winehq.Wine.mono stable-21.08 flathub Mono org.winehq.Wine.mono stable-22.08 flathub In that output, here are some of the interesting bits: Name Application ID Version Branch Origin […] GNOME Application Platform version 41 org.gnome.Platform 41 flathub GNOME Application Platform version 42 org.gnome.Platform 42 flathub GNOME Application Platform version 43 org.gnome.Platform 43 flathub GNOME Application Platform version 44 org.gnome.Platform 44 flathub GNOME Application Platform version Nightly org.gnome.Platform master gnome-nightly […] KDE Application Platform org.kde.Platform 5.15-21.08 flathub KDE Application Platform org.kde.Platform 5.15-22.08 flathub KDE Application Platform org.kde.Platform 6.4 flathub […] DXVK org.winehq.Wine.DLLs.dxvk 1.10.3 stable-21.08 flathub DXVK org.winehq.Wine.DLLs.dxvk 1.10.3 stable-22.08 flathub Gecko org.winehq.Wine.gecko stable-21.08 flathub Gecko org.winehq.Wine.gecko stable-22.08 flathub Mono org.winehq.Wine.mono stable-21.08 flathub Mono org.winehq.Wine.mono stable-22.08 flathub We can observe that, just like the author, I have many different versions of runtimes. Even with an unusual amount of runtimes and apps, Flatpak somehow manages to use 37.73 GB, even with most browsers installed as a flatpak. I imagine that most users have a small selection of apps installed, which only come with a few runtimes and a version apart, which also means that my setup is possibly one of the worst case scenarios. Even with that amount of torture, Flatpak still manages to handle storage fairly well. “Flatpak does have it’s uses” I wouldn’t say Flatpak is completely useless. For certain usecases it is great to have available. It think Flatpak makes most sense for when closed source software would need to be distributed. This is something that is really important to address: Flatpak (Flathub) makes even more sense for free and open-source (FOSS) apps than closed source, because they make apps easily discoverable. For example, Upscaler, an app I developed as a final assignment in CS50x and published it on Flathub on November 16 2022, was featured on OMG! Ubuntu! on that same day (hover over the date for the published date). Another example, gregorni, a friend of mine, published Calligraphy on June 1 2023, which was featured on OMG! Linux! on that same day. While it makes a lot of sense for closed source apps to publish their apps on Flathub, in my opinion, FOSS apps get even more benefits than closed source apps, because news outlets, especially the FOSS targeted ones, will quickly discover your apps, and might even write an article about them. This also means that more users will discover your apps, which helps it grow in popularity. This also means that you’re not forced to rely on GitHub to make your app discoverable. You could actually use Codeberg and have your apps easily discoverable if they are published on stores that are designed to be discovered. Imaginer and Bavarder are some GTK4+libadwaita apps that are primarily available on Codeberg, yet both gain more than 100 downloads a day on average, which is a pretty big achievement in my opinion. Conclusion In the end, I respect the opinion of disliking Flatpak, as we all have different opinions and disagree on many things. 
However, there is a difference between having an opinion and spreading misinformation to viewers or readers, especially when it can potentially hurt the community that is working really hard on addressing genuine issues on the Linux desktop. As an app developer, I cannot predict users’ setups; I prefer not to waste my time learning which version of XYZ dependency Fedora Linux, Debian, Ubuntu, Arch Linux, etc. package. I prefer not to waste my time referring users to Bugzilla, mailing lists, IRC, or other inconvenient platforms that no one wants to use, only to later figure out that they’re packaging a 6-month-old version of my app. Instead, I prefer to devote my time to fixing actual bugs, adding new features, tweaking some designs, working on other projects, writing articles, or, you know, touching grass for once.

Footnotes
[1] Unless it’s more practical to collaborate and create BaseApps. BaseApps’ use case is bundling dependencies that frameworks or the like need, to reduce the amount of duplicated effort. ↩
[2] In my humble opinion, I prefer GNOME Software’s approach, as it’s less obstructive and doesn’t get in my way. ↩
  • Michael Meeks: 2023-06-02 Friday (2023/06/02 20:59)
    Mail chew, partner call, sales meeting, tested collaborative editing in a call; lunch, partner call. More mail chew, syncing etc. partner call. Continued combing through CVs for various new roles at Collabora. Good to see LibreOffice in the Flatpak app-store announced as the future on RHEL.
  • Christian Hergert: GJS plugins for libpeas-2.0 (2023/06/02 18:22)
    One of the main features I want to land for the libpeas-2.0 ABI break is support for plugins in JavaScript. With the right set of patches, you can get that. Thanks to Philip Chimento, GJS will hopefully soon land support for running code in a SpiderMonkey realm. Philip also did us a solid and wrote the code to exfiltrate enough GType information from an imported JavaScript module. That allows libpeas to correlate which GTypes are provided by a plugin. With the GJS realm support in place, we can land the new GJS loader for libpeas-2.0.

My personal goal for this is to enable JavaScript-based plugins in GNOME Builder. With how much GJS has improved over the years to support GNOME Shell, it is probably our most-maintained language binding for a dynamic language with modern JIT features.

For example, if you wanted to make an addin in Builder which responded to changes of a file within the editor, you might write something like this as your plugin. Keep in mind I’m not a JavaScript developer and GJS developers may tell you there are fancy new language features you can use to simplify this code further.

    import GObject from 'gi://GObject';
    import Ide from 'gi://Ide';

    export var TestBufferAddin = GObject.registerClass({
        Implements: [Ide.BufferAddin],
    }, class TestBufferAddin extends GObject.Object {
        vfunc_language_set(buffer, language_id) {
            print('language set to', language_id);
        }

        vfunc_file_loaded(buffer, file) {
            print(file.get_uri(), 'loaded');
        }

        vfunc_save_file(buffer, file) {
            print('before saving buffer to', file.get_uri());
        }

        vfunc_file_saved(buffer, file) {
            print('after buffer saved to', file.get_uri());
        }

        vfunc_change_settled(buffer) {
            print('spurious changes have settled');
        }

        vfunc_load(buffer) {
            print('load buffer addin');
        }

        vfunc_unload(buffer) {
            print('unload buffer addin');
        }

        vfunc_style_scheme_changed(buffer) {
            let scheme = buffer.get_style_scheme();
            print('style scheme changed to', scheme ? scheme.get_id() : scheme);
        }
    });

You can easily correlate that to the IdeBufferAddin interface definition.
  • Hubert Figuière: Niepce May 2023 updates (2023/06/02 00:00)
    This is the May 2023 update for Niepce. Life comes at you fast. And hits hard. tl;dr: not as much progress as I wished; I had to put this project a bit more on the sideline. This will be a short update.

The importer
Some small bits: some fixes in the metadata processing on import, notably better handling of raw files. It turns out the previous logic broke getting metadata from video files. Also fixed some rexiv2 / gexiv2-sys bugs that are mostly memory leaks. I am pondering binding Exiv2 directly, now that 0.28 got rid of auto_ptr<>, which had been deprecated for over a decade. cxx should make this easier. This would automatically resolve the problem above, and I don't need to bind all the API, just what I need.

Forecast
To move the importer forward, I need to fix the recursive creation of folders (it currently flattens them to a single level).

Misc
There are always the other fixes.

Fedora 38
I pulled the trigger and updated to Fedora 38. Niepce then failed to build because of a bug in bindgen with clang 16 (one that had already been fixed), triggered by libgphoto2-sys. I submitted the PR for the crate and we are good to go. The short version is that clang 16 sends different data for anonymous enums, which bindgen 0.60 couldn't handle; bindgen 0.65, however, was fine. This is the reason why I always check the generated bindings into git and update them as needed, instead of generating them at build time.

Application ID
I have been using org.gnome.Niepce as an application ID. While it is hosted in the main namespace repositories (it was back in the days of Subversion, for which I am thankful), Niepce is not a core app. The GNOME project's policies do not allow using that namespace for non-core apps. So I had to perform a global rename. Not a big deal, and it doesn't change anything for users since there is no release. I just needed to get this out of the way.

libopenraw
Pushed a bit on the Rust port to implement metadata extraction. This is driven by the goal of not having three different libraries. There are still some parity gaps with the C++ code, but it's closing in. I hope to be able to release 0.4.0 based on the Rust code.

Thank you for reading.
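As a small aside on the Forecast item above: recreating the source folder hierarchy instead of flattening it to a single level is conceptually a simple recursion. Here is a minimal, self-contained sketch of that idea; it is illustrative only, not Niepce's importer code, and the paths are placeholders.

    // Illustrative sketch (not Niepce's code): mirror a source folder tree
    // under a destination root instead of flattening it to one level.
    use std::{fs, io, path::Path};

    fn mirror_tree(src: &Path, dest: &Path) -> io::Result<()> {
        fs::create_dir_all(dest)?;
        for entry in fs::read_dir(src)? {
            let entry = entry?;
            if entry.file_type()?.is_dir() {
                // Recurse so nested folders keep their structure.
                mirror_tree(&entry.path(), &dest.join(entry.file_name()))?;
            }
            // A real importer would also handle the files in each folder here.
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        mirror_tree(Path::new("/tmp/camera-card"), Path::new("/tmp/imported-library"))
    }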
  • Felix Häcker: #98 Fast Searching (2023/06/02 00:00)
    Update on what happened across the GNOME project in the week from May 26 to June 02.

GNOME Core Apps and Libraries

Files
Providing a simple and integrated way of managing your files and browsing your file system.

antoniof says: Files search is faster. A series of performance optimizations by Carlos Garnacho gave momentum to a testing and developing team effort. There are further optimizations ahead, but the difference can already be felt in the Nightly flatpak.

Libadwaita
Building blocks for modern GNOME apps using GTK4.

Alice (she/they) announces: I just landed AdwNavigationSplitView, the other part of replacing AdwLeaflet. This widget displays sidebar and content side by side or inside an AdwNavigationView. Meanwhile, AdwHeaderBar automatically hides redundant window buttons when used inside a split view. AdwNavigationSplitView also manages sidebar width as a percentage of the full width when possible, and finally allows implementing the style from the original mockups that was impossible with AdwLeaflet.

Third Party Projects

Rafael Mardojai CM announces: I just released Forge Sparks, a simple git forges (GitHub, Gitea, Forgejo) notifier app. You can get it on Flathub, and help translate it on Weblate.

gregorni reports: This week, I released Calligraphy, an app that turns your text into large ASCII banners. Spice up your online conversations and add that extra oomph to your messages!

Iman Salmani announces: IPlan 1.3.0 is now out! This week's changes include code refactoring and UI improvements:
- New widgets for selecting date and time
- Changing the duration of the new record by setting the end time
- Added a Brazilian Portuguese translation, thanks to Fúlvio Leo
You can get it on Flathub.

Tube Converter
Get video and audio from the web.

Nick announces: Tube Converter V2023.6.0 is here! We have been hard at work implementing many new features, configuration options, user interface improvements, and plenty of bug fixes for this update. I’d like to especially thank @fsobolev (my right-hand man), @soumyaDghosh (our wonderful snap maintainer), @DaPigGuy (our continuous tester and feature implementer), and all of the other testers, contributors and especially translators who put in countless time to make Tube Converter be the best that it can be! We wouldn’t be here if it wasn’t for all of your support ❤️ Here’s the full changelog:
- Added the ability to upload a cookie file to use for media downloads that require a login
- Added support for downloading media as M4A
- Added more configurable options for aria2 downloader
- Added options to configure when completed download notifications are shown
- Added the ability to clear completed downloads
- Added the ability to disallow conversions and simply download the appropriate video/audio format for the selected quality without converting to other formats
- Overwrite Existing Files is now a global option in Preferences instead of an individual setting in the AddDownloadDialog
- Tube Converter will check the clipboard for a valid media url and when AddDownloadDialog is opened
- Fixed an issue that prevented downloading m3u8 streams
- Fixed an issue that prevented downloading media to NTFS drives
- Updated UI/UX
- Updated translations (Thanks everyone on Weblate!)

Phosh
A pure wayland shell for mobile devices.

Guido says: Phosh 0.28.0 is out. This is a “small things matter” release:
- Several transitions have been smoothed out.
- Notifications can unblank the screen, and you can set at which urgency this happens.
- The <super> key can open the overview.
- Pressing and holding down the volume button works now (no more tapping multiple times).
- The lockscreen works on smaller displays, and libcall-ui was updated to 0.1.0, bringing in some visual improvements.

Miscellaneous

feborges says: GNOME will have an Outreachy intern working on “Make GNOME platform demos for Workbench”! We are happy to announce that GNOME is sponsoring an Outreachy internship project for the May-August cohort where the intern will be working on “Make GNOME platform demos for Workbench”. Jose Hunter will be working with mentor Sonny Piers. Stay tuned to Planet GNOME for future updates on the progress of this project!

GNOME Foundation

Rosanna announces: This week has been a lot of churn of what used to be called paperwork. With the help of some expert bookkeepers we were able to get the info the accountant needed to file our taxes by their deadline. I am always grateful when we can get expert help; it saves me a lot of time trying to figure it out on my own and I can be more confident in the results. Speaking of deadlines, today is also the deadline for Executive Director applicants. I’ve been collecting the applications and collating them for the search committee to go through. I will spend this weekend and early next week going through them before the next committee meeting. The travel committee had a lot more requests for GUADEC this year than our budget could handle. Because GUADEC is such a priority for us and being able to send interns is so fundamental to getting them to integrate with our community, I liaised between the committee and the Board to request additional funding. Hoping to be able to process the rest of the travel sponsorships soon. Speaking of GUADEC, the call for volunteers is still ongoing. If you are attending and have ever wondered what is necessary behind the scenes to get everything running smoothly, this is a great opportunity. The quick form is located here: https://events.gnome.org/event/101/surveys/14 — Make sure you have also registered for GUADEC first! Deadline for GUADEC BoFs or Workshops: June 12 — If you want to run a BoF or Workshop at GUADEC this year but have not submitted a request yet, please do so by June 12. More details here: https://foundation.gnome.org/2023/05/25/guadec-2023-call-for-bofs/

That’s all for this week! See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
  • Michael Meeks: 2023-06-01 Thursday (2023/06/01 21:00)
    Up, had some meetings. Technical planning call, COOL community call, catch up with Kendy, 1:1 with Miklos. Encouraging marketing call.
  • Luis Villa: The brief guide to MSCD 5 (2023/05/31 19:50)
    My primary goal when trying to improve a contract’s drafting is not “plain English”. The goal is simplicity, clarity, and consistency, because complexity is a source of errors. As a pleasant side-effect, contracts drafted with rigorous attention to consistency and clarity are generally shorter, and almost always much easier to read.

Ken Adams’s Manual of Style for Contract Drafting has helped me immeasurably in reaching that primary goal, both by teaching me habits of mind and by being a reference for better linguistic structures. But it’s also hefty. So I prepared this mini-guide to MSCD for an outside counsel: it’s what I recommend first-timers read, skim, and skip. I share it here in hopes it may be useful for others. All section references are to the Fifth Edition, but at least at the chapter level most should apply to earlier versions as well.

MSCD core concepts
These are the most impactful sections of MSCD.

Introduction
If you are confused about why I recommend the book, read “About this manual”, “Traditional Contract Language is Dysfunctional”, and “Excuses for Sticking With Traditional Contract Language”. Skip or skim the rest.

Ch. 1: Characteristics of Optimal Contract Language; Ch. 17: Drafting as Writing
These can be skimmed, since in some sense most of it will be obvious to anyone who has given any thought to better contract language. But both are good, and brief, and explain (in part) why traditional drafting is so bad. So if you find yourself confused by something in other chapters, come back to these two—they may help explain things. While reading, I particularly recommend these sections, which have had the biggest impact on my personal drafting style and are, in my experience, the biggest red flags for errors that stem from bad “traditional” drafting:
- “Contract Language Shouldn’t Be Legalistic” (1.7-1.28)
- “Contract Language Should Express Ideas without Redundancy” (1.35-1.56)
- “Contract Language Should Be Consistent” (1.57-1.60)

Ch. 2: Front of the Contract; Ch. 5: Back of the Contract
In my practice area, the front and back matter are not usually the source of critical errors, so I do not religiously follow these sections. However, I find that attention to detail on these topics is both a good introduction to MSCD’s style of thinking, and (when I do them in a contract) helps put me in a rigorous frame of mind for the rest of the document. So these chapters are worth skimming and then consistently using as a reference. If you are pressed for time, 95% of chapter 2’s value can be obtained by:
- reading 2.129-2.150, Recitals. Once you get used to improving the drafting of the recitals, it sets a good tone for the rest of the document.
- reading 2.159-2.164, Wording of the lead-in. Understanding, and sticking with, Ken’s recommended use of “agree” here sets up Ch. 3’s critical “categories of contract language”.
- skimming Samples 1 and 2, and Appendix A’s front matter, to get the flavor of the kinds of improvements that can be made to most contracts.
Ch. 5 is similar—just skim the back matter of Appendix A, then refer to the samples in the chapter for models.

Ch. 3: Categories of Contract Language
If you only read one chapter, read this one. This chapter is the core of MSCD’s style of thought: the core of clear and correct drafting is that different parts of contracts do different things, and you should be rigorous about using specific language to implement each of those things.
You can do a lot to improve a contract simply by looking at each table in this section, reading the relevant section to understand the tables, and cleaning up every reference in a contract from the table’s disfavored language to the favored language. Doing this consistently across the entire contract will force rewrites that leave the resulting language much clearer, more precise, and more accurate. Two words discussed in this chapter are particular warning signs for me when I see them misused in a contract: “shall” (particularly 3.115-3.132 and Table 2) and “agrees” (3.27-3.30). Again, simply cleaning these up is a great way to do a “first pass” edit of a document—doing this helps you think about what each clause of a contract is actually trying to do, and often leads to more correct, more readable drafting.

Ch. 13: Selected Usages
This is a reference section, not something to read end-to-end. But it’s invaluable. When you find something that “feels wrong” in a contract, you’ll often be able to come to this section and find a much better, clearer way to state it. You may have to rewrite things substantially, but you’ll have a more logically consistent, correct, clearly readable document when you’re done. I suggest skimming the table of contents, and then reading a few examples to give you the flavor of it. A couple of my favorites:
- For the Avoidance of Doubt: 13.312-319
- “Including”—long, but a great analysis of the complexity and nuances of a single word: 13.359-407

Ch. 14: Numbers
Read the brief part on “Words or Digits” to learn why there should be zero (0) instances of number (numeral). Skip or skim the rest.

Advanced Drafting to Reduce Ambiguity
These chapters are useful primarily for the most complex drafting problems. I tend to do a “first pass” edit based on the “core concepts” chapters I identified previously. If that first edit pass identifies a particularly complex/problematic section, then the concepts in these chapters can be quite helpful in making the challenging contract sections more correct.
- Ch. 7: Sources of Uncertain Meaning: 95% of the value of this chapter is one concept: the distinction between ambiguity (bad, unintentional, should be eliminated) and vagueness (intentional, may be strategic, must be deployed carefully).
- Ch. 11: Ambiguity of the Part v. The Whole: most useful in commercial contracts where it’s important to be precise about what is being sold (or not).
- Ch. 12: Syntactic Ambiguity: a collection of techniques that are frequently useful in deconstructing, and then reconstructing, extremely long or complex sentences.

Interesting but not useful (to me/my practice)
These chapters are intellectually interesting but I have not found them to be particularly high-impact on my practice. Might be different for your practice!

Ch. 6: Defined Terms
I do not find definitions to be a significant source of problems, as a practical matter, so don’t find this chapter hugely valuable. However, there are a few sections that cover errors that I find frequently, so are more useful than the rest:
- “Be Consistent” (6.8-6.13)
- “stuffed definitions” (6.49-58)
- Mistakes: 6.110-6.122
I don’t usually follow the advice of “where to put the definition section” (6.96-6.98), but I think it’s at least worth trying in many documents.

Other interesting-but-less-useful-to-me chapters
These chapters are all interesting for drafting nerds, and may be relevant to some practices, but not frequently an issue for me.
- Ch. 8: Reasonable Efforts
- Ch. 9: Materiality
- Ch. 18: Amendments; Ch. 19: Letter Agreements

Neither interesting nor useful (to me/my practice)
I mostly ignore these—they’re not wrong, but the bang-for-buck of rewriting with them in mind is pretty low, in my experience.

Ch. 4: Layout; Ch. 16: Typography
I tend to rely on Butterick’s Typography for Lawyers for these topics.

Ch. 10: Time
Suspect this is most relevant for certain types of commercial contracts where extreme specificity about time (delivery, etc.) is a common source of disputes.

Ch. 15: Internal rules of interpretation
Have never seen one of these in the wild that I can recall.
  • Tim F. Brüggemann: Bi-weekly GSoC Update: Reaching FlatSync's MVP (2023/05/31 00:00)
    In this post, I want to sum up the latest events regarding GSoC and FlatSync, and what's been done to reach our project's MVP goal. # Latest GSoC Updates We participants were invited to a Contributor Summit where tips and tricks regarding GSoC and open-source involvement were shared. We heard talks from previous contributors, mentors as well as Google employees regarding OSS and its development flow. Alongside many other topics, the importance of communication was highlighted a lot. But other than just keeping up-to-date with our mentors, we were encouraged to also engage in a wider range of communication, so e.g. within the org's community, be that through chats like Matrix rooms, project issues and MRs or blogging. Many other topics were being discussed as well, but this would probably go a little too far for this blog entry. # FlatSync Development Progress To reach our MVP goal, we only had one remaining issue left open: autostarting FlatSync's daemon on user login, and adding the ability to toggle this behavior. # Implementing Autostart Functionality What sounds easy at first turned out to be quite a bit of a challenge already. At first, we started by writing a custom .desktop file that was meant to install to $XDG_CONFIG_HOME/autostart (or just $HOME/.config/autostart). After a bit of trial and error, we managed to code a working implementation. But since the actual function to install the autostart file resides inside the daemon executable and is called via D-Bus (so that this code can easily be shared between CLI and GUI later on), we needed a way to automatically start up the daemon when trying to call the D-Bus interface from our CLI. For that, we implemented a D-Bus service, which took quite a bit of trial and error actually, but in the end, we managed to get the (un-)installation of our .desktop file implemented and working correctly. That's what we managed to pull off for a native build, though. We quickly realized that this method does not work within a Flatpak build, and since we're building an app to assist with Flatpaks anyway, we need to make sure this also works properly as a Flatpak. There were two underlying problems with our previous approach: We need permanent rw access to the user's autostart directory which maybe could be avoided. The autostart file that's installed is not a Flatpak-configured one. Whilst the first point is pretty straightforward, the second may not be. By default, our autostart file has the following content: [Desktop Entry] Name=FlatSync Daemon Comment=Start Flatsync Daemon Type=Application Exec=flatsync-daemon Icon=app.drey.FlatSync.Devel Terminal=false Categories=GNOME;GTK; # Translators: Search terms to find this application. Do NOT translate or localize the semicolons! The list MUST also end with a semicolon! Keywords=Gnome;GTK; # Translators: Do NOT translate or transliterate this text (this is an icon file name)! StartupNotify=false NoDisplay=true The Exec=flatsync-daemon line is what's interesting here. As long as flatsync-daemon is in our $PATH (which should be the case when installing natively), this executes properly. However, since we need to call the flatsync-daemon executable that's within our Flatpak sandbox, a file like this won't do us any good. Luckily, Flatpak automatically rewrites all the .desktop files to properly wrap the command and saves them to $XDG_DATA_HOME/flatpak/exports/share/applications (or /var/lib/flatpak/exports for system-wide installations). 
A properly configured version of the file above would then look like the following: [Desktop Entry] Name=FlatSync Daemon Comment=Start Flatsync Daemon Type=Application Exec=/usr/bin/flatpak run --branch=master --arch=x86_64 --command=flatsync-daemon app.drey.FlatSync.Devel Icon=app.drey.FlatSync.Devel Terminal=false Categories=GNOME;GTK; Keywords=Gnome;GTK; StartupNotify=false NoDisplay=true X-Flatpak=app.drey.FlatSync.Devel As we can see, we're now passing flatsync-daemon as the command to run inside our application's Flatpak sandbox. Now, the problem here is that all the .desktop files within our app are not the exported but the default ones! We could hack our way around this, but both my mentor and I were not satisfied with that, so I looked around in search of a different approach. I ended up finding out about Portals. Portals are API methods exposed via D-Bus that are meant to assist with the permissions of sandboxed applications. Furthermore, they expose org.freedesktop.portal.Background, which is used to allow applications to reside in the background and also auto-start, so just what we need! Sadly, the Portal API currently doesn't play nice with native apps all the time, so we currently just fall back to our previous approach when running natively. Other than that, with the help of ashpd, implementing the required autostart functionality was very easy, here's a code snippet (the install bool is used to switch between installation and uninstallation): async fn autostart_file_sanbox(&self, install: bool) -> Result<(), Error> { // `dbus_activatable` has to be set to false, otherwise this doesn't work for some reason. // I guess this has something to do with the fact that in our D-Bus service file we call `app.drey.FlatSync.Daemon` instead of `app.drey.FlatSync`? Background::request() .reason("Enable autostart for FlatSync's daemon") .auto_start(install) .command(&["flatsync-daemon"]) .dbus_activatable(false) .send() .await?; Ok(()) } By calling this method, a file with the following contents is created within our autostart directory: [Desktop Entry] Type=Application Name=app.drey.FlatSync.Devel Exec=flatpak run --command=flatsync-daemon app.drey.FlatSync.Devel X-Flatpak=app.drey.FlatSync.Devel And that's basically all there's to it! We now have proper autostart functionality for both native as well as Flatpak builds. # Fixing Default Flatpak Permissions As we previously only tested native implementations, we didn't notice that we were missing some required permission sets within our Flatpak sandbox environment. We specifically needed the following: Enabling communication via our own D-Bus interface Getting our installed Flatpak applications Communicating via the host's network All of this was actually pretty easy to fix by extending finish-args within our build manifest: D-Bus communication was enabled by setting --own-name=app.drey.FlatSync.Daemon. This tells Flatpak that our app owns this interface and lets us communicate via D-Bus without problems. By default, only our own application was saved and pushed into our GH Gist file. To fix this issue, the app now has read-only access to the user's as well as the system's Flatpak applications. This was done by setting --filesystem=xdg-data/flatpak:ro as well as --filesystem=/var/lib/flatpak:ro. Network communication was simply added by setting --share=network. With all that done, our application now also worked properly within a Flatpak environment!
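For readers less familiar with Flatpak packaging: those flags end up in the finish-args section of the app's build manifest. A minimal sketch of what that section might look like for FlatSync, using only the flags mentioned above (the surrounding manifest structure is assumed, not copied from the project):

    "finish-args": [
        "--own-name=app.drey.FlatSync.Daemon",
        "--filesystem=xdg-data/flatpak:ro",
        "--filesystem=/var/lib/flatpak:ro",
        "--share=network"
    ]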
  • Jussi Pakkanen: A4PDF release 0.2.0 (2023/05/30 17:14)
    I have just tagged release 0.2.0 of A4PDF, the fully color managed PDF generation library. There are not that many new exposed features added in the public API since 0.1.0. The main goal of this release has been to make the Python integration work, and thus the release is also available in Pypi. Due to reasons the binary wheel is available only for Windows. Trying it out: On Linux you probably want to do a Git checkout and run it from there. On Windows the easiest way is to install the package via Pip. On macOS you can't do anything, because Apple's compiler toolchain is too old to build the code. What functionality does it provide? There is no actual documentation, so the best bet is to look at the unit test file. There is a lot more functionality in the C++ code, but it is not exposed in the public API yet. This includes things like (basics of) tagged PDF generation, annotations and external file embedding. Impending name change: There is an official variant of PDF called PDF/A. There are several versions of it, including PDF/A-4. I did not know that when deciding the name. Because having a library called A4PDF that produces PDF/A-4 files is confusing, the name needs to be changed. The new name has not been decided yet; suggestions welcome.
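As mentioned above, on Windows the easiest way to try it out is via Pip; that should be as simple as the following (assuming the PyPI project name matches the library name; check the project's PyPI page for the exact name):

    pip install a4pdf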
  • Emmanuele Bassi: Configuring portals (2023/05/29 17:31)
    One of the things I’ve been recently working on at Igalia is the desktop portals implementation, the middleware layer of API for application and toolkit developers that allows sandboxed applications to interact with the host system. Sandboxing technologies like Flatpak and Snap expose the portal D-Bus interfaces inside the sandbox they manage, to handle user-mediated interactions like opening a file that exists outside of the locations available to the sandboxed process, or talking to privileged components like the compositor to obtain a screenshot. Outside of allowing dynamic permissions for sandboxed applications, portals act as a vendor-neutral API for applications to target when dealing with Linux as an OS; this is mostly helpful for commercial applications that are not tied to a specific desktop environment, but don’t want to re-implement the layer of system integration from the first principles of POSIX primitives. The architecture of desktop portals has been described pretty well in a blog post by Peter Hutterer, but to recap: desktop portals are a series of D-Bus interfaces; toolkits and applications call methods on those D-Bus interfaces; there is a user session daemon called xdg-desktop-portal that provides a service for the D-Bus interfaces; xdg-desktop-portal implements some of those interfaces directly; for the interfaces that involve user interaction, or interaction with desktop-specific services, we have separate services that are proxied by xdg-desktop-portal: GNOME has xdg-desktop-portal-gnome, KDE has xdg-desktop-portal-kde, Sway and wlroots-based compositors have xdg-desktop-portal-wlr, and so on, and so forth. There’s also xdg-desktop-portal-gtk, which acts a bit as a reference portal implementation and a shared desktop portal implementation for a lot of GTK-based environments. Ideally, every desktop environment should have its own desktop portal implementation, so that applications using the portal API can be fully integrated with each desktop’s interface guidelines and specialised services. One thing that is currently messy is the mechanism by which xdg-desktop-portal finds the portal implementations available on the system, and decides which implementation should be used for a specific interface.
Up until the current stable version of xdg-desktop-portal, the configuration worked this way: each portal implementation (xdg-desktop-portal-gtk, -gnome, -kde, …) ships a ${NAME}.portal file. The file is a simple INI-like desktop entry file with the following keys: DBusName, which contains the service name of the portal, for instance org.freedesktop.impl.portal.desktop.gnome for the GNOME portals; this name is used by xdg-desktop-portal to launch the portal implementation. Interfaces, which contains a list of D-Bus interfaces under the org.freedesktop.impl.portal.* namespace that are implemented by the desktop-specific portal; xdg-desktop-portal will match the portal implementation with the public facing D-Bus interface internally. UseIn, which contains the name of the desktop to be matched with the contents of the $XDG_CURRENT_DESKTOP environment variable. Once xdg-desktop-portal starts, it finds all the .portal files in a well-known location and builds a list of portal implementations currently installed on the system, containing all the interfaces they implement as well as their preferred desktop environment. Whenever something calls a method on an interface in the org.freedesktop.portal.* namespace, xdg-desktop-portal checks the current desktop using the XDG_CURRENT_DESKTOP environment variable and looks for a portal with a UseIn key that matches the current desktop. Once there’s a match, xdg-desktop-portal activates the portal implementation and proxies the calls made on the org.freedesktop.portal interfaces over to the org.freedesktop.impl.portal ones. This works perfectly fine for the average case of a Linux installation with a single session, using a single desktop environment, and a single desktop portal. Where things get messy is the case where you have multiple sessions on the same system, each with its own desktop and portals, or even no portals whatsoever. In a bad scenario, you may get the wrong desktop portal just because its name sorts before the one you’re interested in, so you get the GTK “reference” portals instead of the KDE-specific ones; in the worst case scenario, you may get a stall when launching an application just because the wrong desktop portal is trying to contact a session service that simply does not exist, and you have to wait 30 seconds for a D-Bus timeout. The problem is that some desktop portal implementations are shared across desktops, or cover only a limited number of interfaces; a mandatory list of desktop environments is far too coarse a tool to deal with this. Additionally, xdg-desktop-portal has to have enough fallbacks to ensure that, if it cannot find any implementation for the current desktop, it will proxy to the first implementation it can find in order to give a meaningful answer. Finally, since the supported desktops are shipped by the portals themselves, there’s no way for packagers, admins, or users to override this information. After iterating over the issue, I ended up writing support for a new configuration file. Instead of having portals say what kind of desktop environment they require, we have desktop environments saying which portal implementations they prefer.
Now, each desktop should ship a ${NAME}-portals.conf INI-like desktop entry file listing each interface, and what kind of desktop portal should be used for it; for instance, the GNOME desktop should ship a gnome-portals.conf configuration file that specifies a default for every interface: [preferred] default=gnome On the other hand, you could have a Foo desktop that relies on the GTK portal for everything, except for specific interfaces that are implemented by the “foo” portal: [preferred] default=gtk org.freedesktop.impl.portal.Screenshot=foo org.freedesktop.impl.portal.Screencast=foo You could also disable all portals except for a specific interface (and its dependencies): [preferred] default=none org.freedesktop.impl.portal.Account=gtk org.freedesktop.impl.portal.FileChooser=gtk org.freedesktop.impl.portal.Lockdown=gtk org.freedesktop.impl.portal.Settings=gtk Or, finally, you could disable all portal implementations: [preferred] default=none A nice side effect of this work is that you can configure your own system, by dropping a portals.conf configuration file inside the XDG_CONFIG_HOME/xdg-desktop-portal directory; this should cover all the cases in which people assemble their desktop out of disparate components. By having desktop environments (or, in a pinch, the user themselves) owning the kind of portals they require, we can avoid messy configurations in the portal implementations, and clarify the intended behaviour to downstream packagers; at the same time, generic portal implementations can be adopted by multiple environments without necessarily having to know which ones upfront. In a way, the desktop portals project is trying to fulfill the original mission of freedesktop.org’s Cross-desktop Group: a set of API that are not bound to a single environment, and can be used to define “the Linux desktop” as a platform. Of course, there’s a lot of work involved in creating a vendor-neutral platform API, especially when it comes to designing both the user and the developer experiences; ideally, more people should be involved in this effort, so if you want to contribute to the Linux ecosystem, this is an area where you can make the difference.
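As a concrete illustration of the per-user configuration mentioned above, a user could drop a file at $XDG_CONFIG_HOME/xdg-desktop-portal/portals.conf with contents like the following (a sketch reusing only the portal and interface names shown above) to keep the GNOME portals as the default but use the GTK file chooser portal instead:

    [preferred]
    default=gnome
    org.freedesktop.impl.portal.FileChooser=gtk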
  • Gotam Gorabh: GSoC 2023 [Week 1 Report]: Create a New “System” panel in GNOME Settings (2023/05/29 15:44)
    Project Title: Create a New “System” panel in GNOME Settings. Mentor: Felipe Borges. Contributor: Gotam Gorabh. Introduction: This summer I’m working on a project titled Create a New “System” panel in GNOME Settings, which aims to create a new System panel. This blog summarizes my progress during the bonding period and the first week of the Google Summer Of Code 2023. I will address this Issue (#2241) and implement this mockup in this project. For more details here is my Proposal. Week 1 Goals: Add an empty system panel to the panel list of GNOME Settings. Eliminate errors and wrong coding standards. Progress Made: Completed the setup of the development environment and configured the project structure. Gathered all the necessary documents that will help in this project, such as GObject, GTK, etc. Took inspiration from other panels (especially the Accessibility panel) to implement a new system panel. Successfully implemented a new empty system panel. Current status of system panel: Deliverables: To create a new system panel, I created a new folder named system inside the gnome-control-center/panels folder. Below is the structure of the files and folders inside the system folder. gnome-system-panel.desktop.in is a new desktop file for the system panel which needs to be installed into the system path to load the panel. org.gnome.Settings-gear-symbolic.svg provides a gear symbolic icon to the new system panel. Also modified gnome-control-center/panels/meson.build, gnome-control-center/shell/cc-panel-list.c, and gnome-control-center/shell/cc-panel-loader.c files. Related Merge Requests: Create initial base structure for the new "System" panel !1800. Add title to system panel !1815. Plan for the Next Week: In the next week, I will move the Remote Desktop panel as a page into the new system panel. This week is not over yet, so stay tuned for more updates.
  • Jose Hunter: Introducing myself! (2023/05/27 17:09)
    Hello, I'm Jos. I'm an open source fanatic and I'll be working on Workbench this summer. I'm required to blog about this experience so here we go! I'm currently a college student studying Computer Science. I've been programming for fun for years. I switched to Linux 2 years ago and I've been getting into programming using GTK and Vala and now I'm getting paid to do exactly that. Which is crazy because I'm still very new to this, but I'm enjoying it so much. "Core Values" I'm required to talk about core values in this week's blog post. So core values are "important personality traits" which describe yours truly. Having gone through the list provided to me. I've decided to pick only one... Challenge I like this word. It's not a personality trait or ideal that I hold on to tightly, but una palabra que me describe. Like most people, I've faced challenges and I'm still facing them. I am a disabled minority. I am proud to be a statistical anomaly, and I would not be here without programs like Outreachy. Why I applied to Outreachy because I need to pay rent and I'm not working another horrible food service job. In actuality, I applied to Outreachy after seeing a post on Mastodon. Which makes me even more grateful that I joined FOSS Community. I've met many cool people and they are the only reason I'm here now. Outreachy and programs like it are allowing me to start my career while I'm still in college. I can't wait to see where this journey goes and good luck to all my fellow GSOC/Outreachy Interns.
  • Felix Häcker: #97 GNOME Latam 2023 (2023/05/26 00:00)
    Update on what happened across the GNOME project in the week from May 19 to May 26. Martín Abente Lahaye reports Today, Friday May 26th, and tomorrow, Saturday May 27th, come join us at GNOME Latam 2023! This is a two days event celebrating our Latin GNOME community with speakers from all over the Americas. GNOME Core Apps and Libraries Libadwaita Building blocks for modern GNOME apps using GTK4. Alice (she/they) announces I merged AdwNavigationView. This is a widget that implements page-based navigation with an easier to use API than AdwLeaflet, and will eventually replace it Your browser doesn't support embedded videos, but don't worry, you can download it and watch it with your favorite video player! Calendar A simple calendar application. Georges Stavracas (feaneron) says GNOME Calendar just received a small facelift on top of the new widgets provided by libadwaita. It now features a better delineated sidebar, and the views have a uniform look too. GNOME Circle Apps and Libraries Sonny reports This week, Cartridges joined GNOME Circle. Cartridges is a simple game launcher for all of your games. Congratulations! Cartridges Launch all your games kramo says I just released Cartridges 1.5! Extra Steam libraries are now detected and added automatically, executables are now passed directly to the shell, allowing for more complex arguments, and a lot of UX improvements have been made. Oh yeah, and the app is now part of GNOME Circle! Check it out on Flathub! Amberol Plays music, and nothing else. Emmanuele Bassi reports Amberol 0.10.3 is now out! Not a lot of changes, but two nice bug fixes: the waveform for short songs is now appropriately sized; and the cover art with a portrait orientation does not get squished any more. Plus, as usual, lots of translation updates. Third Party Projects Iman Salmani reports IPlan 1.2.0 is now out! Changes: Subtasks, Bug fixes, and UI improvements Task Window which contains task info, subtasks, and records Translation to Persian and Turkish languages. thanks to Sabri Ünal New toast message for moving back task from the tasks done list Users are now able to pick emojis for their projects in the edit window Projects and Tasks now can have descriptions 0xMRTT reports Imaginer 0.2.2 has been released with the ability to use stable diffusion running locally. It’s also now possible to customize the filename and some bugs has been fixed. 0xMRTT says Bavarder 0.2.3 has been released with the ability to use a custom model which enable you to use either a model running on your computer or a custom API providing the model. The loading mechanism is now faster and some bugs has been fixed. You can download Bavarder from Flathub or from either GitHub or Codeberg. tfuxu says I’ve released Halftone, a simple app for lossy image compression using quantization and dithering techniques. Give your images a pixel art-like style and reduce the file size in the process with Halftone. You can check it out on Github. JumpLink reports I am pleased to announce the release of ts-for-gir v3.0.0 🚀 ts-for-gir is a powerful tool for generating TypeScript type definitions for GJS and GObject Introspection-based libraries. In this release, I have focused on introducing NPM packages 📦. These packages contain pre-generated TypeScript types that can be easily integrated into your GJS projects. By utilizing these types, you can benefit from TypeScript’s strong typing and improved code navigation, whether you are working with JavaScript or TypeScript. 
The pre-generated NPM packages can be accessed directly on GitHub, or you can find them on NPM, such as the Gtk-4.0 package. I encourage you to explore ts-for-gir / the NPM packages and provide your valuable feedback. Your input is greatly appreciated! 🤗️ Bilal Elmoussaoui reports I have added multi windows support to Contrast because why not. Along with other fixes in v0.0.8 Tube Converter Get video and audio from the web. Nick says Tube Converter V2023.5.0 is here! This week’s release features a brand new backend that makes Tube Converter much more stable! Besides the new backend, we added the ability to stop all downloads and retry all failed downloads, as well as clear all queued downloads. This release also introduces the feature to crop the thumbnail of a download as square (useful for downloading music) and the ability to choose specific resolutions for video downloads instead of the qualities Best, Good, Worst. Here’s the full changelog: Added the per-download option to crop the thumbnail of a video as a square Added the ability to stop all downloads Added the ability to retry all failed downloads Added the ability to clear queued download When downloading a video, the quality options will be the specific resolutions of the video instead of Best, Good, Worst Fixed an issue where some downloads could not be stopped and retried Fixed an issue where some users would experience random crashing when downloading Updated translations (Thanks everyone on Weblate!) Pods A podman desktop application. marhkb says I have released version 1.2.0 of Pods with the following new features: Usabilty and UX has been improved in many places. Pruning of containers and pods is now possible. Container terminals are now detachable and can be used in parallel. Images can be pushed to a registry. The CPU utilization now takes the number of cores into account. Denaro Manage your personal finances. Nick says Your favorite pizza man is back with another Denaro release! 🍕 Denaro V2023.5.0 is here! This week’s release features many fixes for bugs users were experiencing across the app. Denaro will also now show error messages if it attempts to access inaccessible files instead of crashing. Here’s the full changelog: Fixed an issue where Denaro would crash on systems with unconfigured locales Fixed an issue where PDF exporting failed for accounts with many receipts Fixed an issue where a group’s filter was reactivated when a transaction was added to that group Error messages will be shown if Denaro attempts to access inaccessible files instead of crashing Updated translations (Thanks to everyone on Weblate)! Documentation Sonny reports libmanette dev documentation is back online and will be included in org.gnome.Sdk.Docs https://gnome.pages.gitlab.gnome.org/libmanette/ libmanette offers painless access to game controllers, from any programming language and with little dependencies. GNOME Foundation Kristi Progri says After we successfully wrapped up LinuxAppSummit with a great team, we are moving forward with GUADEC preparations. The schedule is out now; you can check it at https://events.gnome.org/event/101/timetable/#20230726 We are pleased to inform you that the call for registrations is now open as well https://events.gnome.org/event/101/registrations/, get your ticket now and keep up with all the latest news and announcements we make. 
You can still submit your Bof/workshop even if you missed the original deadline at: https://events.gnome.org/event/101/surveys/13 If you would like to volunteer and help us, consider signing up as a volunteer during the event time https://events.gnome.org/event/101/surveys/14 That’s all for this week! See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
  • Sriyansh Shivam: 10 Web Development Tools Every Developer Should Know About (2023/05/24 23:29)
    Web development has become an integral part of building modern websites and applications. To streamline the development process, enhance productivity, and ensure high-quality output, developers rely on a range of web development tools. In this article, we'll explore ten essential web development tools that every developer should know about. These tools are designed to simplify various aspects of web development, from coding and testing to performance optimization and collaboration. Introduction Web development tools refer to software applications or utilities that aid developers in designing, coding, testing, and maintaining websites and web applications. These tools offer features that automate repetitive tasks, provide debugging capabilities, optimize code performance, and enhance collaboration among team members. By leveraging the right web development tools, developers can streamline their workflow, save time, and deliver high-quality web projects. Essential Web Development Tools 1. Text Editors and Integrated Development Environments (IDEs) Text editors and IDEs are the starting point for any web development project. They provide a platform to write, edit, and manage code efficiently. Popular text editors include Visual Studio Code, Sublime Text, and Atom, while well-known IDEs like JetBrains' WebStorm and Eclipse offer more comprehensive development environments. 2. Version Control Systems Version control systems such as Git enable developers to track changes in their codebase, collaborate with others, and easily revert to previous versions if needed. Git, with platforms like GitHub and GitLab, has become the industry standard for version control. 3. Package Managers Package managers like npm (Node Package Manager) and Yarn simplify the installation and management of external libraries and frameworks. They automate the process of fetching dependencies, ensuring that developers can easily integrate and update third-party code in their projects. 4. Command Line Tools Command line tools provide developers with a powerful interface to execute various tasks quickly. Tools like npm scripts, Gulp, and Grunt allow developers to automate repetitive tasks, such as bundling and minifying code, running tests, and deploying projects. 5. Browser Developer Tools Browser developer tools, such as Chrome DevTools and Firefox Developer Tools, are indispensable for web development. They enable developers to inspect and debug HTML, CSS, and JavaScript code, analyze network requests, optimize performance, and test responsive layouts. 6. Testing and Debugging Tools Testing and debugging are crucial for ensuring the functionality and stability of web projects. Tools like Jest, Mocha, and Jasmine provide frameworks for automated testing, while debuggers like Chrome DevTools and VS Code's debugger help identify and fix code errors. 7. Task Runners Task runners like Gulp and Grunt automate repetitive tasks in web development workflows. They can handle tasks such as compiling CSS preprocessors, transpiling JavaScript code, optimizing images, and live-reloading the browser during development. 8. CSS Preprocessors CSS preprocessors like Sass and Less extend the capabilities of CSS by introducing variables, mixins, and nested syntax. They improve code maintainability and make it easier to write modular and reusable CSS code. 9. 
JavaScript Frameworks and Libraries JavaScript frameworks and libraries, such as React, Angular, and Vue.js, provide developers with tools and components to build interactive and dynamic web applications more efficiently. They offer a structured approach to web development, promoting code reusability and maintainability. 10. Performance Optimization Tools Performance optimization tools help optimize website speed and improve user experience. Tools like Google PageSpeed Insights, Lighthouse, and WebPageTest analyze and provide suggestions for optimizing page load times, minimizing file sizes, and improving caching strategies. Additional Web Development Tools Apart from the essential tools mentioned above, developers can also benefit from other web development tools that cater to specific needs or enhance productivity further. Some of these additional tools include: 1. Wireframing and Prototyping Tools Wireframing and prototyping tools like Adobe XD and Sketch assist in creating visual mockups and interactive prototypes, allowing developers and designers to validate their ideas before diving into actual coding. 2. Code Editors with Live Preview Code editors with live preview features, such as Brackets and CodePen, provide real-time visual feedback as developers write code, making it easier to see how changes affect the appearance and behavior of the web page. 3. Responsive Design Testing Tools Responsive design testing tools, like Responsive Design Checker and BrowserStack, help developers ensure their websites look and function correctly across different screen sizes and devices. 4. Content Management Systems (CMS) Content Management Systems like WordPress and Drupal offer a user-friendly interface for creating and managing website content. They are particularly useful for non-technical users who want to update website content without diving into code. 5. SEO and Analytics Tools SEO and analytics tools, such as Google Analytics and Moz, help developers monitor website performance, track user behavior, and optimize websites for search engines. These tools provide valuable insights for improving website visibility and attracting organic traffic. 6. Collaboration and Project Management Tools Collaboration and project management tools like Slack, Trello, and Jira facilitate communication, task tracking, and collaboration among team members, ensuring smooth project execution and efficient workflow. 7. Security Testing Tools Security testing tools like OWASP ZAP and Burp Suite assist in identifying potential security vulnerabilities in web applications. They simulate attacks and provide insights to strengthen the security of websites and protect sensitive data. 8. Image Optimization Tools Image optimization tools like TinyPNG and ImageOptim help reduce image file sizes without compromising quality, improving website performance and reducing bandwidth usage. 9. Deployment Tools Deployment tools like Netlify and Heroku simplify the process of deploying web applications to production environments. They automate tasks such as building and deploying code, managing server configurations, and scaling applications. 10. Code Quality and Analysis Tools Code quality and analysis tools like ESLint and SonarQube help maintain code consistency, identify potential bugs, and enforce coding best practices. They analyze code for issues related to code style, security vulnerabilities, and maintainability. 
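To make the npm scripts idea above concrete, here is a minimal, hypothetical package.json that wires a linter and a test runner (ESLint and Jest, both mentioned above) into scripts; the package names are real, the versions are illustrative:

    {
      "name": "example-app",
      "private": true,
      "scripts": {
        "lint": "eslint .",
        "test": "jest"
      },
      "devDependencies": {
        "eslint": "^8.0.0",
        "jest": "^29.0.0"
      }
    }

With this in place, npm run lint and npm test execute the locally installed tools, which is exactly the kind of repetitive-task automation described above.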
Conclusion In today's rapidly evolving web development landscape, using the right tools can make a significant difference in productivity, code quality, and project success. The ten essential web development tools mentioned in this article provide developers with a solid foundation for efficient and effective web development. Additionally, exploring additional tools that cater to specific needs can further enhance the development process and deliver exceptional results. FAQs What are web development tools? Web development tools are software applications or utilities that aid developers in designing, coding, testing, and maintaining websites and web applications. Why are web development tools important? Web development tools simplify tasks, automate repetitive processes, enhance collaboration, optimize performance, and improve code quality, thereby streamlining the development process and delivering high-quality web projects. Can I use multiple web development tools together? Absolutely! Developers often use a combination of tools that work together seamlessly to enhance their workflow and achieve desired outcomes. Are web development tools free? Many web development tools have free versions or open-source alternatives available. However, premium versions or enterprise-level tools may come with a cost. How can I learn to use web development tools effectively? To learn and master web development tools, you can explore online tutorials, documentation, and community forums. Additionally, hands-on practice and real-world projects will further solidify your skills and familiarity with these tools.
  • Jussi Pakkanen: Advanced dependency management and building Python wheels with Meson (2023/05/24 20:26)
    One of the most complex pieces of developing C and C++ programs (and most other languages) is dependency management. When developing A4PDF I have used Ubuntu's default distro dependencies. This is very convenient because you typically don't need to fiddle with getting them built and they are battle tested and almost always work. Unfortunately you can't use those in most dev environments, especially Windows. So let's see how much work it takes to build the whole thing on Windows using only Visual Studio and to bundle the whole thing into a Python wheel so it can be installed and distributed. I would have wanted to also put it in Pypi but currently there is a lockdown caused by spammers so no go on that front. Seems like a lot of effort? Let's start by listing all the dependencies: fmt, Freetype, LibTIFF, LibPNG, LibJPEG (turbo), LittleCMS2 and Zlib. These are all available via WrapDB, so each one can be installed by executing a command like the following: meson wrap install fmt. With that done Meson will automatically download and compile the dependencies from source. No changes need to be done in the main project's meson.build files. Linux builds will keep using system deps as if nothing happened. Next we need to build a Python extension package. This is different from a Python extension module, as the project uses ctypes for Python <-> C interop. Fortunately thanks to the contributors of Meson-Python this comes down to writing an 18 line toml file. Everything else is automatically handled for you. Installing the package is then a question of running this command: pip install . After a minute or two of compilation the module is installed. Here is a screenshot of the built libraries in the system Python's site-packages. Now we can open a fresh terminal and start using the module. Random things of note: Everything here uses only Meson. There are no external dependency managers, unix userland emulators or special terminals that you have to use. In theory this could work on macOS too, but the code is implemented in C++23 and Apple's toolchain is too old to support it. The build definitions for A4PDF take only 155 lines (most of which is source and program name listings). If, for whatever reason, you can't use WrapDB, you can host your own.
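The 18-line toml file mentioned above is the project's pyproject.toml. A stripped-down sketch of what a Meson-Python based pyproject.toml generally looks like (the metadata values here are illustrative, not copied from A4PDF):

    [build-system]
    build-backend = "mesonpy"
    requires = ["meson-python"]

    [project]
    name = "a4pdf"
    version = "0.2.0"

With that in place, pip install . invokes the mesonpy build backend, which runs Meson and packages the result into a wheel.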
  • Hans de Goede: Fedora IPU6 camera support now available in rpmfusion-nonfree (2023/05/24 10:24)
    InstallationI am happy to announce that Intel's IPU6 camera stack has been packaged in rpmfusion and now can be installed under Fedora 37 and newer with a single `dnf install` command.Note since this uses an out of tree kernel module build as unsigned akmod you need to disable secureboot for this to work; or alternatively sign the kmod with your own local key (instructions here).First enable both the rpmfusion-free and rpmfusion-nonfree repositories, for instructions see https://rpmfusion.org/ConfigurationThe IPU6 support requires kernel >= 6.3.1 which is in updates-testing for now and v4l2loopback also needs to be updated to the latest version (in case you already have it installed):sudo dnf update \  --enablerepo=updates-testing \  --enablerepo=rpmfusion-free-updates-testing \  --enablerepo=rpmfusion-nonfree-updates-testing \  'kernel*' '*v4l2loopback'And now things are ready to install the IPU6 driver stack:sudo dnf install \  --enablerepo=updates-testing \  --enablerepo=rpmfusion-free-updates-testing \  --enablerepo=rpmfusion-nonfree-updates-testing \  akmod-intel-ipu6After this command reboot and you should be able to test your camera with https://mozilla.github.io/webrtc-landing/gum_test.html under firefox now.This relies on Intel's partly closed-source hw-enablement for the IPU6, as such this known to not work on laptop models which are not covered by Intel's hw-enablement work. If your laptop has an option to come with Linux pre-installed and that SKU uses the IPU6 cameras then this should work. Currently known to work models are:Dell Latitude 9420 (ov01a1s sensor)Dell Precision 5470 (ov01a10 sensor)Dell XPS 13 Plus 9320 (ov01a10 sensor)Lenovo ThinkPad X1 Carbon Gen 10 (ov2740 sensor)Lenovo ThinkPad X1 Nano Gen 2 (ov2740 sensor)Lenovo ThinkPad X1 Yoga Gen 7 (ov2740 sensor)If the IPU6 driver works for you on an unlisted model please drop mean email at so that the above list can be updated.Description of the stackThe IPU6 camera stack consists of the following layers:akmod-intel-ipu6 the IPU6 kernel drivers. These are currently out of tree. Work is ongoing on getting various IO-expander, sensor drivers and the CSI2 receiver patches upstream. This is a slow process though and currently there is no clear path to getting the actual ISP part of the IPU supported upstream.ipu6-camera-bins this is a set of closed-source userspace libraries which the rest of the userspace stack builds on top of. There is a separate set of libraries for each IPU6 variant. Currently there are 2 sets, "ipu6" for Tiger Lake and "ipu6ep" for Alder Lake.ipu6-camera-hal this is a library on top of the set of libraries in ipu6-camera-bins. This needs to be built separately for the "ipu6" and "ipu6ep" library sets from ipu6-camera-bins.gstreamer1-plugins-icamerasrc a gstreamer plugin built on top of ipu6-camera-hal. This allows using the camera through gstreamer.akmod-v4l2loopback + v4l2-relayd. Most apps don't use gstreamer for camera access and even those that do don't know they need to use the icamerasrc element. v4l2-relayd will monitor a v4l2loopback /dev/video0 node and automatically start a gstreamer pipeline to send camera images into the loopback when e.g. 
firefox opens the /dev/video0 node to capture video.Packaging challenges and technical detailsakmod-intel-ipu6: There were 2 challenges to overcome before the IPU6 kernel drivers could be packaged:The sensor drivers required patches to the main kernel package, specifically to the INT3472 driver which deals with providing GPIO, clk, regulator and LED resources to the sensor drivers. Patches have been written for both the main kernel, including some LED subsystem core additions, as well as patches to the IPU6 sensor drivers to bring them inline with mainline kernel conventions for GPIOs, clks and LEDs. All the necessary patches for this are upstream now, allowing the latest ipu6-drivers code to work with an unmodified mainline kernel.Until now the IPU6 drivers seem to have been used with a script which manually loads the modules in a specific order. Using automatic driver loading by udev exposed various probe-ordering issues. Requiring numerous patches (all upstreamed to Intel's github repo) to fix.ipu6-camera-bins: Since there were 2 sets of libraries for different IPU6 versions, these are now installed in separate /usr/lib64/ipu6[ep] directories and the headers and pkgconfig files are also installed in 2 different variants.ipu6-camera-hal: This needs to be built twice against the 2 different sets of ipu6-camera-bins libraries. Letting the user pick the right libcamhal.so build to install is not very user friendly, to avoid the user needing to manually chose:Both builds are installed in separate /usr/lib64/ipu6[ep] directories.The libcamhal.so under these directories is patched to include the directory it is installed in as RPATH, so that dynamic-linking against that libcamhal.so will automatically link against the right set of ipu6-camera-bins libraries.To make all this all work transparently the actual /usr/lib64/libcamhal.so is a symlink to /run/libcamhal.so and /run/libcamhal.so is set by udev rules to point to /usr/lib64/ipu6[ep]/libcamhal.so depending on the actual hw. The /run/libcamhal.so indirection is there so that things will also work with an immutable /usr .ipu6-camera-hal's udev rules also set a /run/v4l2-relayd config file symlink to configure the gstreamer pipeline use by v4l2-relayd to match the ipu6 vs ipu6ep capabilities.akmod-v4l2loopback + v4l2-relayd: Getting this to work with Firefox was somewhat tricky, there were 2 issues which had to be solved:Firefox does not accept the NV12 image format generated by ipu6ep pipelines. To work around this a conversion to YUV420 has been added to the v4l2-relayd pipeline feeding into v4l2loopback. This workaround can be dropped once Firefox 114, which will have NV12 support, is released.Gstreamer sends v4l2-buffers with a wrong bytesused field value into v4l2loopback causing Firefox to reject the video frames. A patch has been written and merged upstream to make v4l2loopback fix up the bytesused value, fixing this.Many thanks to my colleague Kate Hsuan for doing most of the packaging work for this.And also a big thank you to the rpmfusion team for maintaining the rpmfusion repo and infrastructure which allows packaging software which does not meet Fedora's strict guidelines outside of the Fedora infra.
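To see which HAL build ended up active on a given machine, you can follow the symlink chain described above, for example (a sketch; the ipu6ep path is what you would expect on Alder Lake hardware, ipu6 on Tiger Lake):

    readlink /usr/lib64/libcamhal.so   # -> /run/libcamhal.so
    readlink /run/libcamhal.so         # -> /usr/lib64/ipu6ep/libcamhal.so, set up by the udev rules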
  • Philip Withnall: Getting from the UK to Riga for GUADEC 2023 (2023/05/23 12:48)
    I’ve just booked travel for getting to Riga, Latvia for GUADEC 2023, and I thought I’d quickly write up my travel plans in case it saves someone else some time in planning travel. I am not flying, because planes are too polluting. Instead, I am taking the train to Lübeck in Germany, then an overnight ferry to Liepāja, and then a bus the following morning to Riga. It’s a bit slower, but means it’s a bit easier to get some hacking done, stretch my legs and move around, and not fund fossil fuel companies as much. I’ll get enough stopover time in Köln, Hamburg and Lübeck to quickly look round, and a night in Liepāja to see it. Overall the travel time is just over 2 days, with half of that spent on trains, and half on a ferry. By comparison, a flight is about 7 hours (5 hours flying, 2 hours faffing in airports) plus travel time to the airport. The carbon emissions (140kgCO2e return) are roughly a quarter of those from flying (520kgCO2e), and interestingly a significant part of those emissions (46kgCO2e) is the 3 hour bus journey to get from Liepāja to Riga, though that’s quite sensitive to the occupancy level of the bus. The financial cost (£800 return) is about two times that of flying (£380), though I have not factored in the costs of getting to/from airports and have not fully explored the hidden fees for baggage and other essentials so the ratio might be a little lower. This is quite upsetting. A disproportionate part of the cost (£178 return) is the Eurostar, because it’s oversubscribed and I missed the early ticket releases due to waiting for grant approval. Perhaps I should not wait next time. The journey On 2023-07-22: Eurostar from London to Brussels-Midi, departing 07:04 Train from Brussels-Midi to Lübeck-Travemünde Skandinavienkai, departing 10:25 (ICE 15, ICE 200, RE 11428, RB 11528) Nice walk from there to the ferry terminal for half an hour Overnight ferry from Lübeck/Travemünde to Liepāja, departing 23:30 On 2023-07-23: On the ferry all day, then stay overnight in Liepāja On 2023-07-24: Bus from Liepāja to Riga, departing early morning Alternatives I strongly looked at taking the train from Hamburg to Stockholm, and then the ferry from there to Ventspils. Unfortunately, it has limited capacity and there is track maintenance planned for around my travel dates, so I could not get suitable tickets. It would have made the timings a little more convenient overall, for about the same overall carbon emissions and cost. Join me If anybody else is going overland from the UK or far-western Europe, this is hopefully a sensible route for you to take, and it would be lovely if you wanted to join me. I will be arriving 2 days early for GUADEC (as we’re having an Endless OS Foundation meetup), but if you wanted to do the same journey 1 or 2 days later then it shouldn’t differ significantly. In any case, I can put you in touch with others making this journey if you want.
  • Andy Wingo: approaching cps soup (2023/05/20 07:10)
    Good evening, hackers. Today's missive is more of a massive, in the sense that it's another presentation transcript-alike; these things always translate to many vertical pixels.In my defense, I hardly ever give a presentation twice, so not only do I miss out on the usual per-presentation cost amortization and on the incremental improvements of repetition, the more dire error is that whatever message I might have can only ever reach a subset of those that it might interest; here at least I can be more or less sure that if the presentation would interest someone, that they will find it.So for the time being I will try to share presentations here, in the spirit of, well, why the hell not. CPS Soup A functional intermediate language 10 May 2023 – Spritely Andy Wingo Igalia, S.L. Last week I gave a training talk to Spritely Institute collaborators on the intermediate representation used by Guile's compiler. CPS Soup Compiler: Front-end to Middle-end to Back-end Middle-end spans gap between high-level source code (AST) and low-level machine code Programs in middle-end expressed in intermediate language CPS Soup is the language of Guile’s middle-end An intermediate representation (IR) (or intermediate language, IL) is just another way to express a computer program. Specifically it's the kind of language that is appropriate for the middle-end of a compiler, and by "appropriate" I meant that an IR serves a purpose: there has to be a straightforward transformation to the IR from high-level abstract syntax trees (ASTs) from the front-end, and there has to be a straightforward translation from IR to machine code.There are also usually a set of necessary source-to-source transformations on IR to "lower" it, meaning to make it closer to the back-end than to the front-end. There are usually a set of optional transformations to the IR to make the program run faster or allocate less memory or be more simple: these are the optimizations."CPS soup" is Guile's IR. This talk presents the essentials of CPS soup in the context of more traditional IRs. How to lower? High-level: (+ 1 (if x 42 69)) Low-level: cmpi $x, #f je L1 movi $t, 42 j L2 L1: movi $t, 69 L2: addi $t, 1 How to get from here to there? Before we dive in, consider what we might call the dynamic range of an intermediate representation: we start with what is usually an algebraic formulation of a program and we need to get down to a specific sequence of instructions operating on registers (unlimited in number, at this stage; allocating to a fixed set of registers is a back-end concern), with explicit control flow between them. What kind of a language might be good for this? Let's attempt to answer the question by looking into what the standard solutions are for this problem domain. 1970s Control-flow graph (CFG) graph := array<block> block := tuple<preds, succs, insts> inst := goto B | if x then BT else BF | z = const C | z = add x, y ... BB0: if x then BB1 else BB2 BB1: t = const 42; goto BB3 BB2: t = const 69; goto BB3 BB3: t2 = addi t, 1; ret t2 Assignment, not definition Of course in the early days, there was no intermediate language; compilers translated ASTs directly to machine code. 
It's been a while since I dove into all this but the milestone I have in my head is that it's the 70s when compiler middle-ends come into their own right, with Fran Allen's work on flow analysis and optimization.In those days the intermediate representation for a compiler was a graph of basic blocks, but unlike today the paradigm was assignment to locations rather than definition of values. By that I mean that in our example program, we get t assigned to in two places (BB1 and BB2); the actual definition of t is implicit, as a storage location, and our graph consists of assignments to the set of storage locations in the program. 1980s Static single assignment (SSA) CFG graph := array<block> block := tuple<preds, succs, phis, insts> phi := z := φ(x, y, ...) inst := z := const C | z := add x, y ... BB0: if x then BB1 else BB2 BB1: v0 := const 42; goto BB3 BB2: v1 := const 69; goto BB3 BB3: v2 := φ(v0,v1); v3:=addi t,1; ret v3 Phi is phony function: v2 is v0 if coming from first predecessor, or v1 from second predecessor These days we still live in Fran Allen's world, but with a twist: we no longer model programs as graphs of assignments, but rather graphs of definitions. The introduction in the mid-80s of so-called "static single-assignment" (SSA) form graphs mean that instead of having two assignments to t, we would define two different values v0 and v1. Then later instead of reading the value of the storage location associated with t, we define v2 to be either v0 or v1: the former if we reach the use of t in BB3 from BB1, the latter if we are coming from BB2.If you think on the machine level, in terms of what the resulting machine code will be, this either function isn't a real operation; probably register allocation will put v0, v1, and v2 in the same place, say $rax. The function linking the definition of v2 to the inputs v0 and v1 is purely notational; in a way, you could say that it is phony, or not real. But when the creators of SSA went to submit this notation for publication they knew that they would need something that sounded more rigorous than "phony function", so they instead called it a "phi" (φ) function. Really. 2003: MLton Refinement: phi variables are basic block args graph := array<block> block := tuple<preds, succs, args, insts> Inputs of phis implicitly computed from preds BB0(a0): if a0 then BB1() else BB2() BB1(): v0 := const 42; BB3(v0) BB2(): v1 := const 69; BB3(v1) BB3(v2): v3 := addi v2, 1; ret v3 SSA is still where it's at, as a conventional solution to the IR problem. There have been some refinements, though. I learned of one of them from MLton; I don't know if they were first but they had the idea of interpreting phi variables as arguments to basic blocks. In this formulation, you don't have explicit phi instructions; rather the "v2 is either v1 or v0" property is expressed by v2 being a parameter of a block which is "called" with either v0 or v1 as an argument. It's the same semantics, but an interesting notational change. Refinement: Control tail Often nice to know how a block ends (e.g. to compute phi input vars) graph := array<block> block := tuple<preds, succs, args, insts, control> control := if v then L1 else L2 | L(v, ...) | switch(v, L1, L2, ...) | ret v One other refinement to SSA is to note that basic blocks consist of some number of instructions that can define values or have side effects but which otherwise exhibit fall-through control flow, followed by a single instruction that transfers control to another block. 
We might as well store that control instruction separately; this would let us easily know how a block ends, and in the case of phi block arguments, easily say what values are the inputs of a phi variable. So let's do that. Refinement: DRY Block successors directly computable from control Predecessors graph is inverse of successors graph graph := array<block> block := tuple<args, insts, control> Can we simplify further? At this point we notice that we are repeating ourselves; the successors of a block can be computed directly from the block's terminal control instruction. Let's drop those as a distinct part of a block, because when you transform a program it's unpleasant to have to needlessly update something in two places.While we're doing that, we note that the predecessors array is also redundant, as it can be computed from the graph of block successors. Here we start to wonder: am I simpliying or am I removing something that is fundamental to the algorithmic complexity of the various graph transformations that I need to do? We press on, though, hoping we will get somewhere interesting. Basic blocks are annoying Ceremony about managing insts; array or doubly-linked list? Nonuniformity: “local” vs ‘`global’' transformations Optimizations transform graph A to graph B; mutability complicates this task Desire to keep A in mind while making B Bugs because of spooky action at a distance Recall that the context for this meander is Guile's compiler, which is written in Scheme. Scheme doesn't have expandable arrays built-in. You can build them, of course, but it is annoying. Also, in Scheme-land, functions with side-effects are conventionally suffixed with an exclamation mark; after too many of them, both the writer and the reader get fatigued. I know it's a silly argument but it's one of the things that made me grumpy about basic blocks.If you permit me to continue with this introspection, I find there is an uneasy relationship between instructions and locations in an IR that is structured around basic blocks. Do instructions live in a function-level array and a basic block is an array of instruction indices? How do you get from instruction to basic block? How would you hoist an instruction to another basic block, might you need to reallocate the block itself?And when you go to transform a graph of blocks... well how do you do that? Is it in-place? That would be efficient; but what if you need to refer to the original program during the transformation? Might you risk reading a stale graph?It seems to me that there are too many concepts, that in the same way that SSA itself moved away from assignment to a more declarative language, that perhaps there is something else here that might be more appropriate to the task of a middle-end. Basic blocks, phi vars redundant Blocks: label with args sufficient; “containing” multiple instructions is superfluous Unify the two ways of naming values: every var is a phi graph := array<block> block := tuple<args, inst> inst := L(expr) | if v then L1() else L2() ... expr := const C | add x, y ... I took a number of tacks here, but the one I ended up on was to declare that basic blocks themselves are redundant. Instead of containing an array of instructions with fallthrough control-flow, why not just make every instruction a control instruction? 
(Yes, there are arguments against this, but do come along for the ride, we get to a funny place.) While you are doing that, you might as well unify the two ways in which values are named in an MLton-style compiler: instead of distinguishing between basic block arguments and values defined within a basic block, we might as well make all names into basic block arguments. Arrays annoying Array of blocks implicitly associates a label with each block Optimizations add and remove blocks; annoying to have dead array entries Keep labels as small integers, but use a map instead of an array graph := map<label, block> In the traditional SSA CFG IR, a graph transformation would often not touch the structure of the graph of blocks. But now, having given each instruction its own basic block, we find that transformations of the program necessarily change the graph. Consider an instruction that we elide; before, we would just remove it from its basic block, or replace it with a no-op. Now, we have to find its predecessor(s), and forward them to the instruction's successor. It would be useful to have a more capable data structure to represent this graph. We might as well keep labels as small integers, but allow for sparse maps and growth by using an integer-specialized map instead of an array. This is CPS soup graph := map<label, cont> cont := tuple<args, term> term := continue to L with values from expr | if v then L1() else L2() ... expr := const C | add x, y ... SSA is CPS This is exactly what CPS soup is! We came at it "from below", so to speak; instead of the heady fumes of the lambda calculus, we get here from down-to-earth basic blocks. (If you prefer the other way around, you might enjoy this article from a long time ago.) The remainder of this presentation goes deeper into what it is like to work with CPS soup in practice. Scope and dominators BB0(a0): if a0 then BB1() else BB2() BB1(): v0 := const 42; BB3(v0) BB2(): v1 := const 69; BB3(v1) BB3(v2): v3 := addi v2, 1; ret v3 What vars are "in scope" at BB3? a0 and v2. Not v0; not all paths from BB0 to BB3 define v0. a0 always defined: its definition dominates all uses. BB0 dominates BB3: All paths to BB3 go through BB0. Before moving on, though, we should discuss what it means in an SSA-style IR that variables are defined rather than assigned. If you consider variables as locations to which values can be assigned and which initially hold garbage, you can read them at any point in your program. You might get garbage, though, if the variable wasn't assigned something sensible on the path that led to reading the location's value. It sounds bonkers but it is still the C and C++ semantic model. If we switch instead to a definition-oriented IR, then a variable never has garbage; the single definition always precedes any uses of the variable. That is to say that all paths from the function entry to the use of a variable must pass through the variable's definition, or, in the jargon, that definitions dominate uses. This is an invariant of an SSA-style IR: all variable uses must be dominated by their associated definition. You can flip the question around to ask what variables are available for use at a given program point, which might be read equivalently as which variables are in scope; the answer is: all definitions from all program points that dominate the use site. The "CPS" in "CPS soup" stands for continuation-passing style, a dialect of the lambda calculus, which also has a history of use as a compiler intermediate representation.
But it turns out that if we use the lambda calculus in its conventional form, we end up needing to maintain a lexical scope nesting at the same time that we maintain the control-flow graph, and the lexical scope tree can fail to reflect the dominator tree. I go into this topic in more detail in an old article, and if it interests you, please do go deep. CPS soup in Guile Compilation unit is intmap of label to cont cont := $kargs names vars term | ... term := $continue k src expr | ... expr := $const C | $primcall ’add #f (a b) | ... Conventionally, entry point is lowest-numbered label Anyway! In Guile, the concrete form that CPS soup takes is that a program is an intmap of label to cont. A cont is the smallest labellable unit of code. You can call them blocks if that makes you feel better. One kind of cont, $kargs, binds incoming values to variables. It has a list of variables, vars, and also has an associated list of human-readable names, names, for debugging purposes.A $kargs contains a term, which is like a control instruction. One kind of term is $continue, which passes control to a continuation k. Using our earlier language, this is just goto *k*, with values, as in MLton. (The src is a source location for the term.) The values come from the term's expr, of which there are a dozen kinds or so, for example $const which passes a literal constant, or $primcall, which invokes some kind of primitive operation, which above is add. The primcall may have an immediate operand, in this case #f, and some variables that it uses, in this case a and b. The number and type of the produced values is a property of the primcall; some are just for effect, some produce one value, some more. CPS soup term := $continue k src expr | $branch kf kt src op param args | $switch kf kt* src arg | $prompt k kh src escape? tag | $throw src op param args Expressions can have effects, produce values expr := $const val | $primcall name param args | $values args | $call proc args | ... There are other kinds of terms besides $continue: there is $branch, which proceeds either to the false continuation kf or the true continuation kt depending on the result of performing op on the variables args, with immediate operand param. In our running example, we might have made the initial term via:(build-term ($branch BB1 BB2 'false? #f (a0))) The definition of build-term (and build-cont and build-exp) is in the (language cps) module.There is also $switch, which takes an unboxed unsigned integer arg and performs an array dispatch to the continuations in the list kt, or kf otherwise.There is $prompt which continues to its k, having pushed on a new continuation delimiter associated with the var tag; if code aborts to tag before the prompt exits via an unwind primcall, the stack will be unwound and control passed to the handler continuation kh. If escape? is true, the continuation is escape-only and aborting to the prompt doesn't need to capture the suspended continuation.Finally there is $throw, which doesn't continue at all, because it causes a non-resumable exception to be thrown. And that's it; it's just a handful of kinds of term, determined by the different shapes of control-flow (how many continuations the term has).When it comes to values, we have about a dozen expression kinds. We saw $const and $primcall, but I want to explicitly mention $values, which simply passes on some number of values. 
Often a $values expression corresponds to passing an input to a phi variable, though $kargs vars can get their definitions from any expression that produces the right number of values. Kinds of continuations Guile functions untyped, can multiple return values Error if too few values, possibly truncate too many values, possibly cons as rest arg... Calling convention: contract between val producer & consumer both on call and return side Continuation of $call unlike that of $const When a $continue term continues to a $kargs with a $const 42 expression, there are a number of invariants that the compiler can ensure: that the $kargs continuation is always passed the expected number of values, that the vars that it binds can be allocated to specific locations (e.g. registers), and that because all predecessors of the $kargs are known, that those predecessors can place their values directly into the variable's storage locations. Effectively, the compiler determines a custom calling convention between each $kargs and its predecessors.Consider the $call expression, though; in general you don't know what the callee will do to produce its values. You don't even generally know that it will produce the right number of values. Therefore $call can't (in general) continue to $kargs; instead it continues to $kreceive, which expects the return values in well-known places. $kreceive will check that it is getting the right number of values and then continue to a $kargs, shuffling those values into place. A standard calling convention defines how functions return values to callers. The conts cont := $kfun src meta self ktail kentry | $kclause arity kbody kalternate | $kargs names syms term | $kreceive arity kbody | $ktail $kclause, $kreceive very similar Continue to $ktail: return $call and return (and $throw, $prompt) exit first-order flow graph Of course, a $call expression could be a tail-call, in which case it would continue instead to $ktail, indicating an exit from the first-order function-local control-flow graph.The calling convention also specifies how to pass arguments to callees, and likewise those continuations have a fixed calling convention; in Guile we start functions with $kfun, which has some metadata attached, and then proceed to $kclause which bridges the boundary between the standard calling convention and the specialized graph of $kargs continuations. (Many details of this could be tweaked, for example that the case-lambda dispatch built-in to $kclause could instead dispatch to distinct functions instead of to different places in the same function; historical accidents abound.)As a detail, if a function is well-known, in that all its callers are known, then we can lighten the calling convention, moving the argument-count check to callees. In that case $kfun continues directly to $kargs. Similarly for return values, optimizations can make $call continue to $kargs, though there is still some value-shuffling to do. High and low CPS bridges AST (Tree-IL) and target code High-level: vars in outer functions in scope Closure conversion between high and low Low-level: Explicit closure representations; access free vars through closure CPS soup is the bridge between parsed Scheme and machine code. It starts out quite high-level, notably allowing for nested scope, in which expressions can directly refer to free variables. Variables are small integers, and for high-level CPS, variable indices have to be unique across all functions in a program. 
CPS gets lowered via closure conversion, which chooses specific representations for each closure that remains after optimization. After closure conversion, all variable access is local to the function; free variables are accessed via explicit loads from a function's closure. Optimizations at all levels Optimizations before and after lowering Some exprs only present in one level Some high-level optimizations can merge functions (higher-order to first-order) Because of the broad remit of CPS, the language itself has two dialects, high and low. The high level dialect has cross-function variable references, first-class abstract functions (whose representation hasn't been chosen), and recursive function binding. The low-level dialect has only specific ways to refer to functions: labels and specific closure representations. It also includes calls to function labels instead of just function values. But these are minor variations; some optimization and transformation passes can work on either dialect. Practicalities Intmap, intset: Clojure-style persistent functional data structures Program: intmap<label,cont> Optimization: program→program Identify functions: (program,label)→intset<label> Edges: intmap<label,intset<label>> Compute succs: (program,label)→edges Compute preds: edges→edges I mentioned that programs were intmaps, and specifically in Guile they are Clojure/Bagwell-style persistent functional data structures. By functional I mean that intmaps (and intsets) are values that can't be mutated in place (though we do have the transient optimization).I find that immutability has the effect of deploying a sense of calm to the compiler hacker -- I don't need to worry about data structures changing out from under me; instead I just structure all the transformations that you need to do as functions. An optimization is just a function that takes an intmap and produces another intmap. An analysis associating some data with each program label is just a function that computes an intmap, given a program; that analysis will never be invalidated by subsequent transformations, because the program to which it applies will never be mutated.This pervasive feeling of calm allows me to tackle problems that I wouldn't have otherwise been able to fit into my head. One example is the novel online CSE pass; one day I'll either wrap that up as a paper or just capitulate and blog it instead. Flow analysis A[k] = meet(A[p] for p in preds[k]) - kill[k] + gen[k] Compute available values at labels: A: intmap<label,intset<val>> meet: intmap-intersect<intset-intersect> -, +: intset-subtract, intset-union kill[k]: values invalidated by cont because of side effects gen[k]: values defined at k But to keep it concrete, let's take the example of flow analysis. For example, you might want to compute "available values" at a given label: these are the values that are candidates for common subexpression elimination. For example if a term is dominated by a car x primcall whose value is bound to v, and there is no path from the definition of V to a subsequent car x primcall, we can replace that second duplicate operation with $values (v) instead.There is a standard solution for this problem, which is to solve the flow equation above. I wrote about this at length ages ago, but looking back on it, the thing that pleases me is how easy it is to decompose the task of flow analysis into manageable parts, and how the types tell you exactly what you need to do. 
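To make that recipe concrete, here is a naive fixed-point solver for the flow equation above, sketched in Rust with ordinary HashMap and HashSet rather than Guile's persistent intmaps and intsets. The function name, parameters, and driver loop are mine, so read it as an illustration of the idea, not as Guile's implementation, which relies on the structure-sharing operations described just below.

    use std::collections::{HashMap, HashSet};

    type Label = u32;
    type Val = u32;

    /// Solve A[k] = meet(A[p] for p in preds[k]) - kill[k] + gen[k] for an
    /// "available values" style must-analysis by iterating to a fixed point.
    /// Illustrative only: Guile's version works on persistent intmaps and
    /// intsets with structure sharing rather than this naive re-sweep.
    fn available_values(
        preds: &HashMap<Label, Vec<Label>>,   // predecessors of each label
        kills: &HashMap<Label, HashSet<Val>>, // values invalidated at a label
        gens: &HashMap<Label, HashSet<Val>>,  // values defined at a label
        all_vals: &HashSet<Val>,              // the "top" element of the lattice
    ) -> HashMap<Label, HashSet<Val>> {
        // Must-analysis: start every label at top and shrink toward the fixed point.
        let mut a: HashMap<Label, HashSet<Val>> =
            preds.keys().map(|&k| (k, all_vals.clone())).collect();
        loop {
            let mut changed = false;
            for (&k, ps) in preds {
                // meet = intersection over predecessors; a label with no
                // predecessors (the entry) meets to the empty set.
                let mut new: HashSet<Val> = match ps.split_first() {
                    None => HashSet::new(),
                    Some((first, rest)) => {
                        let mut acc = a[first].clone();
                        for p in rest {
                            let other = &a[p];
                            acc.retain(|v| other.contains(v));
                        }
                        acc
                    }
                };
                if let Some(ks) = kills.get(&k) {
                    for v in ks { new.remove(v); }  // - kill[k]
                }
                if let Some(gs) = gens.get(&k) {
                    new.extend(gs.iter().copied()); // + gen[k]
                }
                if new != a[&k] {
                    a.insert(k, new);
                    changed = true;
                }
            }
            if !changed { return a; }
        }
    }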
It's easy to compute an initial analysis A, easy to define your meet function when your maps and sets have built-in intersect and union operators, easy to define what addition and subtraction mean over sets, and so on. Persistent data structures FTW meet: intmap-intersect<intset-intersect> -, +: intset-subtract, intset-union Naïve: O(nconts * nvals) Structure-sharing: O(nconts * log(nvals)) Computing an analysis isn't free, but it is manageable in cost: the structure-sharing means that meet is usually trivial (for fallthrough control flow) and the cost of + and - is proportional to the log of the problem size. CPS soup: strengths Relatively uniform, orthogonal Facilitates functional transformations and analyses, lowering mental load: “I just have to write a function from foo to bar; I can do that” Encourages global optimizations Some kinds of bugs prevented by construction (unintended shared mutable state) We get the SSA optimization literature Well, we're getting to the end here, and I want to take a step back. Guile has used CPS soup as its middle-end IR for about 8 years now, enough time to appreciate its fine points while also understanding its weaknesses.On the plus side, it has what to me is a kind of low cognitive overhead, and I say that not just because I came up with it: Guile's development team is small and not particularly well-resourced, and we can't afford complicated things. The simplicity of CPS soup works well for our development process (flawed though that process may be!).I also like how by having every variable be potentially a phi, that any optimization that we implement will be global (i.e. not local to a basic block) by default.Perhaps best of all, we get these benefits while also being able to use the existing SSA transformation literature. Because CPS is SSA, the lessons learned in SSA (e.g. loop peeling) apply directly. CPS soup: weaknesses Pointer-chasing, indirection through intmaps Heavier than basic blocks: more control-flow edges Names bound at continuation only; phi predecessors share a name Over-linearizes control, relative to sea-of-nodes Overhead of re-computation of analyses CPS soup is not without its drawbacks, though. It's not suitable for JIT compilers, because it imposes some significant constant-factor (and sometimes algorithmic) overheads. You are always indirecting through intmaps and intsets, and these data structures involve significant pointer-chasing.Also, there are some forms of lightweight flow analysis that can be performed naturally on a graph of basic blocks without looking too much at the contents of the blocks; for example in our available variables analysis you could run it over blocks instead of individual instructions. In these cases, basic blocks themselves are an optimization, as they can reduce the size of the problem space, with corresponding reductions in time and memory use for analyses and transformations. Of course you could overlay a basic block graph on top of CPS soup, but it's not a well-worn path.There is a little detail that not all phi predecessor values have names, since names are bound at successors (continuations). But this is a detail; if these names are important, little $values trampolines can be inserted.Probably the main drawback as an IR is that the graph of conts in CPS soup over-linearizes the program. 
There are other intermediate representations that don't encode ordering constraints where there are none; perhaps it would be useful to marry CPS soup with sea-of-nodes, at least during some transformations.Finally, CPS soup does not encourage a style of programming where an analysis is incrementally kept up to date as a program is transformed in small ways. The result is that we end up performing much redundant computation within each individual optimization pass. Recap CPS soup is SSA, distilled Labels and vars are small integers Programs map labels to conts Conts are the smallest labellable unit of code Conts can have terms that continue to other conts Compilation simplifies and lowers programs Wasm vs VM backend: a question for another day :) But all in all, CPS soup has been good for Guile. It's just SSA by another name, in a simpler form, with a functional flavor. Or, it's just CPS, but first-order only, without lambda.In the near future, I am interested in seeing what a new GC will do for CPS soup; will bump-pointer allocation palliate some of the costs of pointer-chasing? We'll see. A tricky thing about CPS soup is that I don't think that anyone else has tried it in other languages, so it's hard to objectively understand its characteristics independent of Guile itself.Finally, it would be nice to engage in the academic conversation by publishing a paper somewhere; I would like to see interesting criticism, and blog posts don't really participate in the citation graph. But in the limited time available to me, faced with the choice between hacking on something and writing a paper, it's always been hacking, so far :)Speaking of limited time, I probably need to hit publish on this one and move on. Happy hacking to all, and until next time.
  • Philip Chimento: December of Rust Project, Part 2: The Assembler Macro (2023/05/19 06:26)
    Welcome back! This is the second post of a series which turned out to be more occasional than I thought it would be. You might remember that I originally called it December of Rust 2021. Look how that worked out! Not only is it not December 2021 anymore, but also it is not December 2022 anymore. In the previous installment, I wrote about writing a virtual machine for the LC-3 architecture in Rust, that was inspired by Andrei Ciobanu’s blog post Writing a simple 16-bit VM in less than 125 lines of C. The result was a small Rust VM with a main loop based on the bitmatch crate. By the end of the previous post, the VM seemed to be working well. I had tested it with two example programs from Andrei Ciobanu’s blog post, and I wanted to write more programs to see if they would work as well. Unfortunately, the two test programs were tedious to create; as you might remember, I had to write a program that listed the hex code of each instruction and wrote the machine code to a file. And that was even considering that Andrei had already done the most tedious part, of assembling the instructions by hand into hex codes! Not to mention that when I wanted to add one instruction to one of the test programs, I had to adjust the offset in a neighbouring LEA instruction, which is a nightmare for debugging. This was just not going to be good enough to write any more complicated programs, because (1) I don’t have time for that, (2) I don’t have time for that, and (3) computers are better at this sort of work anyway. The real question is, can the LC-3 running on modern hardware emulate this badboy in all its Eurostile Bold Extended glory?? (Image by Viktorya Sergeeva, Pexels) In this post, I will tell the story of how I wrote an assembler for the LC-3. The twist is, instead of making a standalone program that processes a text file of LC-3 assembly language into machine code1, I wanted to write it as a Rust macro, so that I could write code like this, and end up with an array of u16 machine code instructions: asm! { .ORIG x3000 LEA R0, hello ; load address of string PUTS ; output string to console HALT hello: .STRINGZ "Hello World!\n" .END } (This example is taken from the LC-3 specification, and I’ll be using it throughout the post as a sample program.) The sample already illustrated some features I wanted to have in the assembly language: instructions, of course; the assembler directives like .ORIG and .STRINGZ that don’t translate one-to-one to instructions; and most importantly, labels, so that I don’t have to compute offsets by hand. Learning about Rust macros This could be a foolish plan. I’ve written countless programs over the years that process text files, but I have never once written a Rust macro and have no idea how they work. But I have heard they are powerful and can be used to create domain-specific languages. That sounds like it could fit this purpose. Armed with delusional it’s-all-gonna-work-out overconfidence, I searched the web for “rust macros tutorial” and landed on The Little Book of Rust Macros originally by Daniel Keep and updated by Lukas Wirth. After browsing this I understood a few facts about Rust macros: They consist of rules, which match many types of single or repeated Rust tokens against patterns.2 So, I should be able to define rules that match the tokens that form the LC-3 assembly language. They can pick out their inputs from among any other tokens. You provide these other tokens in the input matching rule, so you could do for example: macro_rules! 
longhand_add { ($a:literal plus $b:literal) => { $a + $b }; } let two = longhand_add!{ 1 plus 1 }; This is apparently how you can create domain-specific languages with Rust macros, because the tokens you match don’t have to be legal Rust code; they just have to be legal tokens. In other words, plus is fine and doesn’t have to be the name of anything in the program; but foo[% is not. They substitute their inputs into the Rust code that is in the body of the rule. So really, in the end, macros are a way of writing repetitive code without repeating yourself. A tangent about C macros This last fact actually corresponds with one of the uses of C macros. C macros are useful for several purposes, for which the C preprocessor is a drastically overpowered and unsafe tool full of evil traps. Most of these purposes have alternative, less overpowered, techniques for achieving them in languages like Rust or even C++. First, compile-time constants: #define PI 3.1416 for which Rust has constant expressions: const PI: f64 = 3.1416; Second, polymorphic “functions”: #define MIN(a, b) (a) <= (b) ? (a) : (b) for which Rust has… well, actual polymorphic functions: fn min<T: std::cmp::PartialOrd>(a: T, b: T) -> T { if a <= b { a } else { b } } Third, conditional compilation: #ifdef WIN32 int read_key(void) { // ... } #endif // WIN32 for which Rust has… also conditional compilation: #[cfg(windows)] fn read_key() -> i32 { // ... } Fourth, redefining syntax, as mentioned above: #define plus + int two = 1 plus 1; which in C you should probably never do except as a joke. But in Rust (as in the longhand_add example from earlier) at least you get a clue about what is going on because of the the longhand_add!{...} macro name surrounding the custom syntax; and the plus identifier doesn’t leak out into the rest of your program. Lastly, code generation, which is what we want to do here in the assembler. In C it’s often complicated and this tangent is already long enough, but if you’re curious, here and here is an example of using a C preprocessor technique called X Macros to generate code that would otherwise be repetitive to write. In C, code generation using macros is a way of trading off less readability (because macros are complicated) for better maintainability (because in repeated blocks of very similar code it’s easy to make mistakes without noticing.) I imagine in Rust the tradeoff is much the same. Designing an LC-3 assembler macro You may remember in the previous post, in order to run a program with the VM, I had to write a small, separate Rust program to write the hand-assembled words to a file consisting of LC-3 bytecode. I could then load and run the file with the VM’s ld_img() method. I would like to be able to write a file with the assembler macro, but I would also like to be able to write assembly language directly inline and execute it with the VM, without having to write it to a file. Something like this: fn run_program() -> Result<()> { let mut vm = VM::new(); vm.ld_asm(&asm! { .ORIG x3000 LEA R0, hello ; load address of string PUTS ; output string to console HALT hello: .STRINGZ "Hello World!\n" .END }?); vm.start() } My first thought was that I could have the asm macro expand to an array of LC-3 bytecodes. However, writing out a possible implementation for the VM.ld_asm() method shows that the asm macro needs to give two pieces of data: the origin address as well as the bytecodes. pub fn ld_asm(&mut self, ???) { let mut addr = ???origin???; for inst in ???bytecodes??? 
{ self.mem[addr] = Wrapping(*inst); addr += 1; } } So, it seemed better to have the asm macro expand to an expression that creates a struct with these two pieces of data in it. I started an assembler.rs submodule and called this object assembler::Program. #[derive(Debug)] pub struct Program { origin: u16, bytecode: Vec<u16>, } impl Program { pub fn origin(&self) -> usize { self.origin as usize } pub fn bytecode(&self) -> &[u16] { &self.bytecode } } Next, I needed to figure out how to get from LC-3 assembly language to the data model of Program. Obviously I needed the address to load the program into (origin), which is set by the .ORIG directive. But I also needed to turn the assembly language text into bytecodes somehow. Maybe the macro could do this … but at this point, my hunch from reading about Rust macros was that the macro should focus on transforming the assembly language into valid Rust code, and not so much on processing. Processing can be done in a method of Program using regular Rust code, not macro code. So the macro should just extract the information from the assembly language: a list of instructions and their operands, and a map of labels to their addresses (“symbol table”).3 #[derive(Clone, Copy, Debug)] pub enum Reg { R0, R1, R2, R3, R4, R5, R6, R7 } #[derive(Debug)] pub enum Inst { Add1(/* dst: */ Reg, /* src1: */ Reg, /* src2: */ Reg), Add2(/* dst: */ Reg, /* src: */ Reg, /* imm: */ i8), And1(/* dst: */ Reg, /* src1: */ Reg, /* src2: */ Reg), And2(/* dst: */ Reg, /* src: */ Reg, /* imm: */ i8), // ...etc Trap(u8), } pub type SymbolTable = MultiMap<&'static str, u16>; With all this, here’s effectively what I want the macro to extract out of the sample program I listed near the beginning of the post: let origin: u16 = 0x3000; let instructions = vec![ Inst::Lea(Reg::R0, "hello"), Inst::Trap(0x22), Inst::Trap(0x25), Inst::Stringz("Hello world!\n"), ]; let symtab: SymbolTable = multimap!( "hello" => 0x3003, ); (Remember from Part 1 that PUTS and HALT are system subroutines, called with the TRAP instruction.) As the last step of the macro, I’ll then pass these three pieces of data to a static method of Program which will create an instance of the Program struct with the origin and bytecode in it: Program::assemble(origin, &instructions, &symtab) You may be surprised that I picked a multimap for the symbol table instead of just a map. In fact I originally used a map. But it’s possible for the assembly language code to include the same label twice, which is an error. I found that handling duplicate labels inside the macro made it much more complicated, whereas it was easier to handle errors in the assemble() method. But for that, we have to store two copies of the label in the symbol table so that we can determine later on that it is a duplicate. Demystifying the magic At this point I still hadn’t sat down to actually write a Rust macro. Now that I knew what I want the macro to achieve, I could start.4 The easy part was that the assembly language code should start with the .ORIG directive, to set the address at which to load the assembled bytecode; and end with the .END directive. 
Here’s a macro rule that does that: ( .ORIG $orig:literal $(/* magic happens here: recognize at least one asm instruction */)+ .END ) => {{ use $crate::assembler::{Inst::*, Program, Reg::*, SymbolTable}; let mut instructions = Vec::new(); let mut symtab: SymbolTable = Default::default(); let origin: u16 = $orig; $( // more magic happens here: push a value into `instructions` for // each instruction recognized by the macro, and add a label to // the symbol table if there is one )* Program::assemble(origin, &instructions, &symtab) }}; Easy, right? The hard part is what happens in the “magic”! You might notice that the original LC-3 assembly language’s .ORIG directive looks like .ORIG x3000, and x3000 is decidedly not a Rust numeric literal that can be assigned to a u16. At this point I had to decide what tradeoffs I wanted to make in the macro. Did I want to support the LC-3 assembly language from the specification exactly? It looked like I might be able to do that, x3000-formatted hex literals and all, if I scrapped what I had so far and instead wrote a procedural macro5, which operates directly on a stream of tokens from the lexer. But instead, I decided that my goal would be to support a DSL that looks approximately like the LC-3 assembly language, without making the macro too complicated. In this case, “not making the macro too complicated” means that hex literals are Rust hex literals (0x3000 instead of x3000) and decimal literals are Rust decimal literals (15 instead of #15). That was good enough for me. Next I had to write a matcher that would match each instruction. A line of LC-3 assembly language looks like this:6 instruction := [ label : ] opcode [ operand [ , operand ]* ] [ ; comment ] \n So I first tried a matcher like this: $($label:ident:)? $opcode:ident $($operands:expr),* $(; $comment:tt) There are a few problems with this. The most pressing one is that “consume tokens until newline” is just not a thing in Rust macro matchers, so it’s not possible to ignore comments like this. Newlines are just treated like any other whitespace. There’s also no fragment specifier7 for “any token”; the closest is tt but that matches a token tree, which is not actually what I want here — I think it would mean the comment has to be valid Rust code, for one thing! Keeping my tradeoff philosophy in mind, I gave up quickly on including semicolon-delimited comments in the macro. Regular // and /* comments would work just fine without even having to match them in the macro. Instead, I decided that each instruction would end with a semicolon, and that way I’d also avoid the problem of not being able to match newlines. $($label:ident:)? $opcode:ident $($operands:expr),*; The next problem is that macro matchers cannot look ahead or backtrack, so $label and $opcode are ambiguous here. If we write an identifier, it could be either a label or an opcode and we won’t know until we read the next token to see if it’s a colon or not; which is not allowed. So I made another change to the DSL, to make the colon come before the label. With this matcher expression, I could write more of the body of the macro rule:8 ( .ORIG $orig:literal; $($(:$label:ident)? $opcode:ident $($operands:expr),*;)+ .END; ) => {{ use $crate::assembler::{Inst::*, Program, Reg::*, SymbolTable}; let mut instructions = Vec::new(); let mut symtab: SymbolTable = Default::default(); let origin: u16 = $orig; $( $(symtab.insert(stringify!($label), origin + instructions.len() as u16);)* // still some magic happening here... 
)* Program::assemble(origin, &instructions, &symtab) }}; For each instruction matched by the macro, we insert its label (if there is one) into the symbol table to record that it should point to the current instruction. Then at the remaining “magic”, we have to insert an instruction into the code vector. I originally thought that I could do something like instructions.push($opcode($($argument),*));, in other words constructing a value of Inst directly. But that turned out to be impractical because the ADD and AND opcodes actually have two forms, one to do the operation with a value from a register, and one with a literal value. This means we actually need two different arms of the Inst enum for each of these instructions, as I listed above: Add1(Reg, Reg, Reg), Add2(Reg, Reg, i8), I could have changed it so that we have to write ADD1 and ADD2 inside the asm! macro, but that seemed to me too much of a tradeoff in the wrong direction; it meant that if you wanted to copy an LC-3 assembly language listing into the asm! macro, you’d need to go over every ADD instruction and rename it to either ADD1 or ADD2, and same for AND. This would be a bigger cognitive burden than just mechanically getting the numeric literals in the right format. Not requiring a 1-to-1 correspondence between assembly language opcodes and the Inst enum also meant I could easily define aliases for the trap routines. For example, HALT could translate to Inst::Trap(0x25) without having to define a separate Inst::Halt. But then what to put in the “magic” part of the macro body? It seemed to me that another macro expansion could transform LEA R0, hello into Inst::Lea(Reg::R0, "hello")! I read about internal rules in the Little Book, and they seemed like a good fit for this. So, I replaced the magic with this call to an internal rule @inst: instructions.push(asm! {@inst $opcode $($operands),*}); And I wrote wrote a series of @inst internal rules, each of which constructs an arm of the Inst enum, such as: (@inst ADD $dst:expr, $src:expr, $imm:literal) => { Add2($dst, $src, $imm) }; (@inst ADD $dst:expr, $src1:expr, $src2:expr) => { Add1($dst, $src1, $src2) }; // ... (@inst HALT) => { Trap(0x25) }; // ... (@inst LEA $dst:expr, $lbl:ident) => { Lea($dst, stringify!($lbl)) }; Macro rules have to be written from most specific to least specific, so the rules for ADD first try to match against a literal in the third operand (e.g. ADD R0, R1, -1) and construct an Inst::Add2, and otherwise fall back to an Inst::Add1. But unfortunately I ran into another problem here. The $lbl:ident in the LEA rule is not recognized. I’m still not 100% sure why this is, but the Little Book’s section on fragment specifiers says, Capturing with anything but the ident, lifetime and tt fragments will render the captured AST opaque, making it impossible to further match it with other fragment specifiers in future macro invocations. So I suppose this is because we capture the operands with $($operands:expr),*. I tried capturing them as token trees (tt) but then the macro becomes ambiguous because token trees can include the commas and semicolons that I’m using for delimitation. So, I had to rewrite the rules for opcodes that take a label as an operand, like this: (@inst LEA $dst:expr, $lbl:literal) => { Lea($dst, $lbl) }; and now we have to write them like LEA R0, "hello" (with quotes). This is the one thing I wasn’t able to puzzle out to my satisfaction, that I wish I had been. Finally, after writing all the @inst rules I realized I had a bug. 
When adding the address of a label into the symbol table, I calculated the current value of the program counter (PC) with origin + code.len(). But some instructions will translate into more than one word of bytecode: BLKW and STRINGZ.9 BLKW 8, for example, reserves a block of 8 words. This would give an incorrect address for a label occurring after any BLKW or STRINGZ instruction. To fix this, I wrote a method for Inst to calculate the instruction’s word length: impl Inst { pub fn word_len(&self) -> u16 { match *self { Inst::Blkw(len) => len, Inst::Stringz(s) => u16::try_from(s.len()).unwrap(), _ => 1, } } } and I changed the macro to insert the label into the symbol table pointing to the correct PC:10 $( symtab.insert( stringify!($lbl), origin + instructions.iter().map(|i| i.word_len()).sum::<u16>(), ); )* At this point I had something that looked and worked quite a lot like how I originally envisioned the inline assembler. For comparison, the original idea was to put the LC-3 assembly language directly inside the macro: asm! { .ORIG x3000 LEA R0, hello ; load address of string PUTS ; output string to console HALT hello: .STRINGZ "Hello World!\n" .END } Along the way I needed a few tweaks to avoid making the macro too complicated, now I had this: asm! { .ORIG 0x3000; LEA R0, "hello"; // load address of string PUTS; // output string to console HALT; :hello STRINGZ "Hello World!\n"; .END; } Just out of curiosity, I used cargo-expand (as I did in Part One) to expand the above use of the asm! macro, and I found it was quite readable: Now we’re ready to save the bytecode to tape. (Image by Bruno /Germany from Pixabay) { use crate::assembler::{Inst::*, Program, Reg::*, SymbolTable}; let mut instructions = Vec::new(); let mut symtab: SymbolTable = Default::default(); let origin: u16 = 0x3000; instructions.push(Lea(R0, "hello")); instructions.push(Trap(0x22)); instructions.push(Trap(0x25)); symtab.insert("hello", origin + instructions.iter().map(|i| i.word_len()).sum::<u16>()); code.push(Stringz("Hello World!\n")); Program::assemble(origin, &instructions, &symtab) } Assembling the bytecode I felt like the hard part was over and done with! Now all I needed was to write the Program::assemble() method. I knew already that the core of it would work like the inner loop of the VM in Part One of the series, only in reverse. Instead of using bitmatch to unpack the instruction words, I matched on Inst and used bitpack to pack the data into instruction words. Most of them were straightforward: match inst { Add1(dst, src1, src2) => { let (d, s, a) = (*dst as u16, *src1 as u16, *src2 as u16); words.push(bitpack!("0001_dddsss000aaa")); } // ...etc The instructions that take a label operand needed a bit of extra work. I had to look up the label in the symbol table, compute an offset relative to the PC, and pack that into the instruction word. This process may produce an error: the label might not exist, or might have been given more than once, or the offset might be too large to pack into the available bits (in other words, the instruction is trying to reference a label that’s too far away.) 
fn integer_fits(integer: i32, bits: usize) -> Result<u16, String> { let shift = 32 - bits; if integer << shift >> shift != integer { Err(format!( "Value x{:04x} is too large to fit in {} bits", integer, bits )) } else { Ok(integer as u16) } } fn calc_offset( origin: u16, symtab: &SymbolTable, pc: u16, label: &'static str, bits: usize, ) -> Result<u16, String> { if let Some(v) = symtab.get_vec(label) { if v.len() != 1 { return Err(format!("Duplicate label \"{}\"", label)); } let addr = v[0]; let offset = addr as i32 - origin as i32 - pc as i32 - 1; Self::integer_fits(offset, bits).map_err(|_| { format!( "Label \"{}\" is too far away from instruction ({} words)", label, offset ) }) } else { Err(format!("Undefined label \"{}\"", label)) } } The arm of the core match expression for such an instruction, for example LEA, looks like this: Lea(dst, label) => { let d = *dst as u16; let o = Self::calc_offset(origin, symtab, pc, label, 9) .unwrap_or_else(append_error); words.push(bitpack!("1110_dddooooooooo")); } Here, append_error is a closure that pushes the error message returned by calc_offset() into an array: |e| { errors.push((origin + pc, e)); 0 } Lastly, a couple of arms for the oddball instructions that define data words, not code: Blkw(len) => words.extend(vec![0; *len as usize]), Fill(v) => words.push(*v), Stringz(s) => { words.extend(s.bytes().map(|b| b as u16)); words.push(0); } At the end of the method, if there weren’t any errors, then we successfully assembled the program: if words.len() > (0xffff - origin).into() { errors.push((0xffff, "Program is too large to fit in memory".to_string())); } if errors.is_empty() { Ok(Self { origin, bytecode: words, }) } else { Err(AssemblerError { errors }) } Bells and whistles Next thing was to add a few extensions to the assembly language to make writing programs easier. (Writing programs is what I’m going to cover in Part 3 of the series.) While researching the LC-3 for Part 1, I found a whole lot of lab manuals and other university course material. No surprise, since the LC-3 is originally from a textbook. One document I stumbled upon was “LC3 Language Extensions” from Richard Squier’s course material at Georgetown. In it are a few handy aliases for opcodes: MOV R3, R5 – copy R3 into R5; can be implemented as ADD R3, R5, 0, i.e. R5 = R3 + 0 ZERO R2 – clear (store zero into) R2; can be implemented as AND R2, R2, 0 INC R4 – increment (add one to) R4; can be implemented as ADD R4, R4, 1 DEC R4 – decrement (subtract one from) R4; can be implemented as ADD R4, R4, -1 Besides these, the LC-3 specification itself names RET as an alias for JMP R7. Finally, an all-zero instruction word is a no-op so I defined an alias NOP for it.11 These are straightforward to define in the macro: (@inst MOV $dst:expr, $src:expr) => { Add2($dst, $src, 0) }; (@inst ZERO $dst:expr) => { And2($dst, $dst, 0) }; (@inst INC $dst:expr) => { Add2($dst, $dst, 1) }; (@inst DEC $dst:expr) => { Add2($dst, $dst, -1) }; (@inst RET) => { Jmp(R7) }; (@inst NOP) => { Fill(0) }; I wrote the last one as Fill(0) and not as Br(false, false, false, 0) which might have been more instructive, because Br takes a &' static str for its last parameter, not an address. So I would have had to make a dummy label in the symbol table pointing to address 0. Filling a zero word seemed simpler and easier. The final improvement I wanted was to have AssemblerError print nice error messages. 
I kind of glossed over AssemblerError earlier, but it is an implementation of Error that contains an array of error messages with their associated PC value: #[derive(Debug, Clone)] pub struct AssemblerError { errors: Vec<(u16, String)>, } impl error::Error for AssemblerError {} I implemented Display such that it would display each error message alongside a nice hex representation of the PC where the instruction failed to assemble: impl fmt::Display for AssemblerError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { writeln!(f, "Error assembling program")?; for (pc, message) in &self.errors { writeln!(f, " x{:04x}: {}", pc, message)?; } Ok(()) } } This still left me with a somewhat unsatisfying mix of kinds of errors. Ideally, the macro would catch all possible errors at compile time! At compile time we can catch several kinds of errors: // Unsyntactical assembly language program asm!{ !!! 23 skidoo !!! } // error: no rules expected the token `!` HCF; // Nonexistent instruction mnemonic // error: no rules expected the token `HCF` ADD R3, R2; // Wrong number of arguments // error: unexpected end of macro invocation // note: while trying to match `,` // (@inst ADD $dst:expr, $src:expr, $imm:literal) => { Add2($dst, $sr... // ^ ADD R3, "R2", 5; // Wrong type of argument // error[E0308]: mismatched types // ADD R3, "R2", 5; // ^^^^ expected `Reg`, found `&str` // ...al) => { Add2($dst, $src, $imm) }; // ---- arguments to this enum variant are incorrect // note: tuple variant defined here // Add2(/* dst: */ Reg, /* src: */ Reg, /* imm: */ i8), // ^^^^ BR R7; // Wrong type of argument again // error: no rules expected the token `R7` // note: while trying to match meta-variable `$lbl:literal` // (@inst BR $lbl:literal) => { Br(true, true,... // ^^^^^^^^^^^^ These error messages from the compiler are not ideal — if I had written a dedicated assembler, I’d have made it output better error messages — but they are not terrible either. Then there are some errors that could be caught at compile time, but not with this particular design of the macro. Although note that saying an error is caught at runtime is ambiguous here. Even if the Rust compiler doesn’t flag the error while processing the macro, we can still flag it at the time of the execution of assemble() — this is at runtime for the Rust program, but at compile time for the assembly language. It’s different from a LC-3 runtime error where the VM encounters an illegal opcode such as 0xDEAD during execution. Anyway, this sample program contains one of each such error and shows the nice output of AssemblerError: asm! { .ORIG 0x3000; LEA R0, "hello"; // label is a duplicate PUTS; LEA R0, "greet"; // label doesn't exist PUTS; LEA R0, "salute"; // label too far away to fit in offset ADD R3, R2, 0x7f; // immediate value is out of range HALT; :hello STRINGZ "Hello World!\n"; :hello STRINGZ "Good morning, planet!\n"; BLKW 1024; :salute STRINGZ "Regards, globe!\n"; BLKW 0xffff; // extra space makes the program too big .END; }? // Error: Error assembling program // x3000: Duplicate label "hello" // x3002: Undefined label "greet" // x3004: Label "salute" is too far away from instruction (1061 words) // x3005: Value x007f is too large to fit in 5 bits // xffff: Program is too large to fit in memory To check that all the labels are present only once, you need to do two passes on the input. 
In fact, the macro effectively does do two passes: one in the macro rules where it populates the symbol table, and one in assemble() where it reads the values back out again. But I don’t believe it’d be possible to do two passes in the macro rules themselves, to get compile time checking for this. The out-of-range value in ADD R3, R2, 0x7f is an interesting case though! This could be caught at compile time if Rust had arbitrary bit-width integers.12 After all, TRAP -1 and TRAP 0x100 are caught at compile time because the definition of Inst::Trap(u8) does not allow you to construct one with those literal values. I tried using types from the ux crate for this, e.g. Add2(Reg, Reg, ux::i5). But there is no support for constructing custom integer types from literals, so I would have had to use try_from() in the macro — in which case I wouldn’t get compile time errors anyway, so I didn’t bother. My colleague Andreu Botella suggested that I could make out-of-range immediate values a compile time error by using a constant expression — something I didn’t know existed in Rust. (@int $bits:literal $num:literal) => {{ const _: () = { let converted = $num as i8; let shift = 8 - $bits; if converted << shift >> shift != converted { panic!("Value is too large to fit in bitfield"); } }; $num as i8 }}; (@inst ADD $dst:expr, $src:expr, $imm:literal) => { Add2($dst, $src, asm!(@int 5 $imm)) }; I found this really clever! But on the other hand, it made having a good error message quite difficult. panic! in a constant expression cannot format values, it can only print a string literal. So you get a compile time error like this: error[E0080]: evaluation of constant value failed asm! { .ORIG 0x3000; ADD R0, R0, 127; .END; }.unwrap_err(); ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'Value is too large to fit in bitfield' The whole assembly language program is highlighted as the source of the error, and the message doesn’t give any clue which value is problematic. This would make it almost impossible to locate the error in a large program with many immediate values. For this reason, I decided not to adopt this technique. I found the runtime error preferable because it gives you the address of the instruction, as well as the offending value. But I did learn something! Conclusion At this point I was quite happy with the inline assembler macro! I was able to use it to write programs for the LC-3 without having to calculate label offsets by hand, which is all I really wanted to do in the first place. Part 3 of the series will be about some programs that I wrote to test the VM (and test my understanding of it!) I felt like I had successfully demystified Rust macros for myself now, and would be able to write another macro if I needed to. I appreciated having the chance to gain that understanding while working on a personal project that caught my interest.13 This is — for my learning style, at least — my ideal way to learn. I hope that sharing it helps you too. Finally, you should know that this writeup presents an idealized version of the process. In 2020 I wrote a series of posts where I journalled writing short Rust programs, including mistakes and all, and those got some attention. This post is not that. I tried many different dead ends while writing this, and if I’d chronicled all of them this post would be even longer. Here, I’ve tried to convey approximately the journey of understanding I went through, while smoothing it over so that it makes — hopefully — a good read. 
Many thanks to Andreu Botella, Angelos Oikonomopoulos, and Federico Mena Quintero who read a draft of this and with their comments made it a better read than it was before. [1] As might be the smarter, and certainly the more conventional, thing to do [2] Tokens as in what a lexer produces: identifiers, literals, … [3] A symbol table usually means something different, a table that contains information about a program’s variables and other identifiers. We don’t have variables in this assembly language; labels are the only symbols there are [4] What actually happened, I just sat down and started writing and deleting and rewriting macros until I got a feel for what I needed. But this blog post is supposed to be a coherent story, not a stream-of-consciousness log, so let’s pretend that I thought about it like this first [5] Conspicuously, still under construction in the Little Book [6] This isn’t a real grammar for the assembly language, just an approximation [7] Fragment specifiers, according to the Little Book, are what the expr part of $e:expr is called [8] Notice that .ORIG and .END now have semicolons too, but no labels; they aren’t instructions, so they can’t have an address [9] Eagle-eyed readers may note that in the LC-3 manual these are called .BLKW and .STRINGZ, with leading dots. I elided the dots, again to make the macro less complicated [10] This now seems wasteful rather than keeping the PC in a variable. On the other hand, it seems complicated to add a bunch of local variables to the repeated part of the macro [11] In fact, any instruction word with a value between 0x0000-0x01ff is a no-op. This is a consequence of BR‘s opcode being 0b0000, which I can only assume was chosen for exactly this reason. BR, with the three status register bits also being zero, never branches and always falls through to the next instruction [12] Or even just i5 and i6 types [13] Even if it took 1.5 years to complete it
  • Luis Villa: Announcing the Upstream podcast (2023/05/16 21:21)
    Open is 1️⃣ all over and 2️⃣ really interesting and yet 3️⃣ there’s not enough media that takes it seriously as a cultural phenomenon, growing out of software but now going well beyond that. And so, announcement: I’m trying to fill that hole a little bit myself. Tidelift’s new Upstream podcast, which I’m hosting, will: Pull from across open, not just from software. That’s not because software is bad or uninteresting, but because it’s the best-covered and best-networked of the many opens. So I hope to help create some bridges with the podcast. Tech will definitely come up—but it’ll be in service to the people and communities building things. Bring interesting people together. I like interview-style podcasts with guests who have related but distinct interests—and the magic is their interaction. So that’s what we’ll be aiming for here. Personal goal: two guests who find each other so interesting that they schedule coffee after the recording. Happened once so far! Be, ultimately, optimistic. It’s very easy, especially for experienced open folks, to get cynical or burnt out. I hope that this podcast can talk frankly about those challenges—but also be a recharge for those who’ve forgotten why open can be so full of hope and joy for the future. So far I’ve recorded on: The near past (crypto?) and near future (machine learning?) of open, with Molly White of Web 3 Is Going Great and Stefano Maffuli of the Open Source Initiative. Get it here! (Transcripts coming soon…) The joy of open. At Tidelift, we often focus on the frustrating parts of open, like maintainer burnout, so I wanted to refresh with a talk about how open can be fun. Guests are Annie Rauwerda of the amazing Depths of Wikipedia, and Sumana Harihareswara—who among many other things, has performed plays and done standup about open software. Will release this tomorrow! The impact of open on engineering culture, particularly at the intersection of our massively complex technology stacks, our tools, and our people. But we are often so focused on how culture impacts tech (the other way around) that we overlook this. I brought on Kellan Elliot-McCrea of Flickr, Etsy, and Adobe, and Adam Jacob of Chef and the forthcoming System Initiative to talk about those challenges—and opportunities. The relationship of open to climate and disasters. To talk about how open intersects with some of the most pressing challenges of our time, I talked with Monica Granados, who works on climate at Creative Commons, and Heather Leson, who does digital innovation — including open — at the IFRC’s Solferino Academy. I learned a ton from this one—so excited to share it out in a few weeks. Future episodes are still in the works, but some topics I’m hoping to cover include: open and regulation: what is happening in Brussels and DC, anyway? Think of this as a follow-up to Tidelift’s posts on the Cyber Resilience Act. open and water: how does open’s thinking on the commons help us think about water, and vice-versa? open and ethics: if we’re not technolibertarians, what are we anyway? I’m very open to suggestions! Let me know if there’s anyone interesting I should be talking to, or topics you want to learn more about. We’ll be announcing future episodes through the normal Places Where You Get Your Podcasts and the Tidelift website.
  • Sam Thursfield: Status update, 16/05/2023 (2023/05/16 15:26)
I am volunteering a lot of time to work on testing at the moment. When you start out as a developer this seems like the most boring kind of open source contribution that you can do. Once you become responsible for maintaining existing codebases, though, it becomes very interesting as a way to automate some of the work involved in reviewing merge requests and responding to issue reports. Most of the effort has been on the GNOME OpenQA tests. We started with a single gnomeos test suite, which started from the installer ISO and ran every test available, taking around 8 minutes to complete. We now have two test suites: gnome_install and gnome_apps. The gnome_install testsuite now only runs the install process, and takes about 3 minutes. The gnome_apps testsuite starts from a disk image, so while it still needs to run through gnome-initial-setup before starting any apps, we save a couple of minutes of execution. And now the door is open to expand the set of OpenQA tests much more, because during development we can choose to run only one of the testsuites and keep the developer cycle time to a minimum. Big thanks to Jordan Petridis for helping me to land this change (I didn't exactly get it right the first time), and to the rest of the crew in the #gnome-os chat. I don't plan on adding many more testsuites myself. The next step is to teach the hardworking team of GNOME module maintainers how to extend the openQA test suites with tests that are useful to them, without — and this is very important — without just causing frustration when we make big theming or design changes. (See a report from last month when Adwaita header bars changed colour). Hopefully I can spread the word effectively at this year's GUADEC in Riga, Latvia. I will be speaking there on Thursday 27th July. The talk is scheduled at the same time as a very interesting GTK status update so I suppose the talk itself will be for a few dedicated cats. But there should be plenty of time to repeat the material in the bar afterwards to anyone who will listen. My next steps will be around adding an OpenQA test suite for desktop search – something we've never properly integration tested, which works as well as it does only because of hard-working maintainers and helpful downstream bug reports. I have started collecting some example desktop content which we can load and index during the search tests. I'm on the lookout for more; in fact I recently read about the LibreOffice "bug documents" collection and am currently fetching that to see if we can reuse some of it. One more cool thing before you go – we now have a commandline tool for checking openQA test status and generating useful bug reports. And we have a new project-wide label, "9. Integration test failure", to track GNOME bugs that are detected by the OpenQA integration tests. 〉utils/pipeline_report.py --earlier=1 05/16/2023 05:24:30 pm Latest gnome/gnome-build-meta pipeline on default branch is 528262. Pipeline status: success Pipeline 1 steps earlier than 528262 is 528172.
Pipeline status: success Project: * Repo: gnome/gnome-build-meta * Commit: dc71a6791591616edf3da6c757d736df2651e0dc * Commit date: 2023-05-16T13:20:48.000+02:00 * Commit title: openqa: Fix up casedir Integration tests status (Gitlab): * Pipeline: https://gitlab.gnome.org/gnome/gnome-build-meta/-/pipelines/528172 * test-s3-image job: https://gitlab.gnome.org/gnome/gnome-build-meta/-/jobs/2815294 * test-s3-image job status: failed * test-s3-image job finished at: 2023-05-16T14:46:54.519Z Integration tests status (OpenQA): * gnome_apps testsuite - job URL: https://openqa.gnome.org/tests/1012 * 3/31 tests passed * Failed: gnome_desktop * gnome_install testsuite - job URL: https://openqa.gnome.org/tests/1013 * 4/6 tests passed * Failed: gnome_desktop The format can be improved here, but it already makes it quicker for me to turn a test failure into a bug report against whatever component probably caused the failure. If you introduce a critical bug in your module maybe you’ll get to see one soon
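For the curious, here is a rough sketch of the kind of query utils/pipeline_report.py performs, written against GitLab's public REST API. This is not the actual tool; the URL-encoded project path and the branch name are assumptions made for illustration.

    #!/usr/bin/env python3
    # Hedged sketch of checking recent pipeline/job status on GitLab,
    # in the spirit of utils/pipeline_report.py (not the real script).
    import json
    import urllib.request

    # Assumption: the project is addressed by its URL-encoded path, and
    # "master" is assumed to be the default branch name.
    API = "https://gitlab.gnome.org/api/v4/projects/gnome%2Fgnome-build-meta"

    def get(path):
        with urllib.request.urlopen(API + path) as resp:
            return json.load(resp)

    # The two most recent pipelines on the default branch, newest first.
    for pipeline in get("/pipelines?ref=master&per_page=2"):
        print(f"Pipeline {pipeline['id']}: {pipeline['status']} ({pipeline['web_url']})")
        # Individual jobs (e.g. test-s3-image) and their statuses.
        for job in get(f"/pipelines/{pipeline['id']}/jobs"):
            print(f"  {job['name']}: {job['status']}")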
  • Pratham Gupta: Hello GNOME Community (2023/05/13 18:48)
    Namaste 🙏 My name is Pratham Gupta, currently in my second year at the Indian Institute of Technology (IIT) Mandi, pursuing a Bachelor of Technology in Computer Science. I am working on Crosswords Editor as a part of Google Summer of Code (GSoC) 2023 under my mentor Jonathan Blandford. How I found GNOME: My journey with GNOME began when I was looking to learn app development for Ubuntu and similar distributions. While searching for resources, I came across the GNOME Newcomers page, which proved to be an invaluable resource for me. The step-by-step guide for developers helped me become familiar with the technologies used in the domain, and I was able to take my first steps towards contributing to open source. Here comes GNOME Crosswords: One project that caught my eye was GNOME Crosswords, and I decided to start my contribution journey there. While it was a smooth but curvy road, I was able to learn a lot from looking at the code and understanding what each part of it was doing. This gave me the confidence I needed and I made my first contribution to the project. About my project: GNOME Crosswords has two parts: the Crosswords Player and the Editor. The project is focused on the Crosswords Editor and aims to add anagram search support with a word list. I plan to implement a feature that will display anagrams for a selected word to the user. It will be a complete implementation, from searching for anagrams to displaying them to the user.
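The editor itself is implemented in C inside GNOME Crosswords, but the core idea of anagram search is easy to sketch. A common approach (an illustration only, not necessarily what the project will end up using) is to index the word list by each word's sorted letters:

    # Minimal anagram-search sketch: index a word list by sorted letters.
    # This illustrates the general technique, not Crosswords' implementation.
    from collections import defaultdict

    def build_index(words):
        index = defaultdict(list)
        for word in words:
            index["".join(sorted(word.lower()))].append(word)
        return index

    def anagrams(index, word):
        key = "".join(sorted(word.lower()))
        # Exclude the queried word itself from its own anagram list.
        return [w for w in index[key] if w.lower() != word.lower()]

    index = build_index(["listen", "silent", "enlist", "google", "banana"])
    print(anagrams(index, "listen"))  # ['silent', 'enlist']

Building the index once makes every subsequent lookup a single dictionary access, which matters when the word list is large.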
  • Sriyansh Shivam: GSoC 2023 (2023/05/13 07:58)
    What is GSoC? Google Summer of Code is a global, online program focused on bringing new contributors into open source software development. GSoC contributors work with an open source organization on a 12+ week programming project under the guidance of mentors. GNOME Foundation: the GNOME Foundation is a non-profit organization based in Orinda, California, United States, coordinating the efforts in the GNOME project. The GNOME Foundation works to further the goal of the GNOME project: to create a computing platform for use by the general public that is composed entirely of free software. Source: Wikipedia. About Me: Hello everyone 🙂, I am Sriyansh Shivam, a 2023 GSoC intern @ GNOME Foundation and a sophomore pursuing a Bachelor's in Computer Science and Engineering from Bharati Vidyapeeth (Deemed University) College of Engineering, Pune 🧑🏻🎓. I am focused on practicing DSA (Python) and building cool projects 📘, inclined towards web development 🌐 and UI/UX 🖱, have completed 14+ certifications and over 20+ projects 🎓, have experience with 6+ languages, and am comfortable working on both Windows and Linux 💻. I am an open source contributor and technical writer. When not working, you could find me playing video games 🎮 or listening to songs 🎶. My Journey so Far: My participation in Hacktoberfest 2022 marked the start of my open-source journey. I contributed to several web development projects and made no-code and low-code contributions. I learned how the Git process works: how you make a PR, sometimes receive review and change requests on your commit, and then your commit gets merged after implementing those changes. I learned about git branching and much more. I heard about Google Summer of Code after discovering the open-source community. Despite having used Git for almost a year, I began using it much more regularly after Hacktoberfest. I started learning more about open source by watching YouTube videos, reading blogs, and enhancing my open-source project-contributing abilities. I occasionally use GitHub to learn about new technologies and to improve my coding abilities by using it as a reference. In February, I found the Workbench project, immediately contacted my mentor, and became the first GSoC applicant to begin working on it. At first, I had a lot of challenges, like installing Linux and learning new technologies, but with the help of my mentor's continuous encouragement, I was able to get past those challenges. I continued to research the project, made my first PR, got feedback, and implemented changes. On Element, I discovered the GNOME community and interacted with my mentors, asked questions, and got involved in discussions related to the project. Working on this project not only taught me new skills, but also taught me about GJS, GLib, and other technologies; I am learning and developing not only as a contributor but also as an individual developer. When I was in my freshman year, I remember watching tonnes of tutorials on YouTube and LinkedIn posts about how people cracked GSoC and the great experience they had, and getting accepted into this program the following year is nothing but a dream come true for me. I feel honored to be interning at GNOME. My Project: I will be working on Workbench to create demos and also implement functionality that helps ease the development workflow. Below are more specific details about the project.
About Workbench: Learn and prototype with GNOME technologies. Workbench's goal is to let you experiment with GNOME technologies, no matter if you are tinkering for the first time or building and testing a GTK user interface. Among other things, Workbench comes with: a real-time GTK/CSS preview, a library of examples, JavaScript and Vala support, XML and Blueprint for describing user interfaces, syntax highlighting, undo/redo, autosave, session restore, a code linter and formatter, terminal output, and 1000+ icons. Goals of my project: create beginner-friendly and easy-to-understand examples/demos for all widgets of GTK 4.10 and Libadwaita 1.3 to help newcomers understand how to use them effectively; provide ready-to-use code snippets of the widgets/APIs covered, making it easier for developers to integrate them into their projects; cover GLib/GIO and libportal APIs and create relevant examples to help developers understand how to use them in their applications; create demos while taking UI and UX design concepts into account, to showcase how to make aesthetically pleasing and functional user interfaces; cover GNOME HIG patterns to ensure that the examples and demos follow the GNOME Human Interface Guidelines, making them consistent with other GNOME applications and user-friendly; implement a search function in Workbench; and implement a keyboard-shortcuts feature in Workbench to ease the development workflow. This project will be beneficial in showcasing GNOME platform capabilities, providing an alternative to lengthy tutorials and dense API references, providing quick and ready-to-use snippets for GNOME developers, and helping onboard new developers into the community. Project Details: Project Repository. I'd also be blogging more regularly than before, so consider following me and subscribing to my newsletter to ensure you don't miss any of my upcoming blogs. If you liked this blog and want to connect, here are my socials: GitHub, LinkedIn, Twitter, Gmail.
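(Coming back to the Library goals for a moment: Workbench demos themselves are written in JavaScript or Vala with Blueprint, but the shape of a minimal widget demo looks roughly like the following PyGObject sketch. It is purely illustrative, written in Python rather than GJS, and is not one of the planned Library entries.)

    # Illustrative only: a minimal GTK 4 + Libadwaita "widget demo" in Python.
    # Workbench's actual demos use GJS/Vala and Blueprint.
    import sys
    import gi
    gi.require_version("Gtk", "4.0")
    gi.require_version("Adw", "1")
    from gi.repository import Adw, Gtk

    class Demo(Adw.Application):
        def __init__(self):
            super().__init__(application_id="org.example.WidgetDemo")

        def do_activate(self):
            window = Adw.ApplicationWindow(application=self, title="Demo")
            button = Gtk.Button(label="Click me", margin_top=12, margin_bottom=12,
                                margin_start=12, margin_end=12)
            button.connect("clicked", lambda *_: print("clicked"))
            # A header bar stacked above the demo widget, libadwaita style.
            box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
            box.append(Adw.HeaderBar())
            box.append(button)
            window.set_content(box)
            window.present()

    Demo().run(sys.argv)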
  • Nathan Willis: Friends, Romans, Italics: lend me your eyes! (for just a few minutes) (2023/05/12 15:24)
    I should really subtitle this “part 1,” to fit into my established idiom of starting what’s meant to be a series and then letting the follow-ons stack up like cordwood in the Drafts folder. It’s worked well so far. I’m looking at several of them now … they seem so happy…. ANNYWAY, what I actually wanted to say is that I’ve got kind of an “ask.” I’m running an online survey as part of my PhD research, and I’d really be grateful if you’d give it a try. It’s about fonts; what you do is look at text samples and mark any stuff that looks bad. It’s designed to be something that you could finish entirely in less than half an hour (if you do all five samples), but you can do fewer if you want. If you’re game and want to jump at it, the survey site is letter.fit — please go for it, and thanks! The gist of this is that we want to collect responses from as wide a variety of readers as we can. It’s not about right answers or wrong answers. I’d love it if you’d give it a go and maybe spread the word, but if you don’t, that’s totally cool. Just to be less self-promotey (although I certainly don’t reap any profit from it) for the unintrigued, and also to make for a more informative blog-read, I’ll say a tad more about what the survey does and why it’s a question. My apologies if you’ve listened to this song & dance twice…. What? If you haven’t heard, I’ve been doing research about letter spacing in fonts: “good” spacing in text is a big part of how readability and legibility come together. But if you want to say anything “quantitative” about it — which would be nice, in these modern times — then you have to show people real text and get a feel for what they think of it. Those online games where you kern some letters in a big, full-window-width word can be fun, but they can’t tell you this sort of stuff. Thus, the test; designed as just a “look at this sample, highlight things that stick out” deal; the value is found in a wide spectrum of samples & lots of variety! You highlight letters, you tag ’em, we all win. Does everybody see the same letter-pairs as incorrect? In the same order? What about eyesight or time-of-day? Let’s find out. So, the back-end has experimental test-fonts in it. When you visit it, it shows you some randomly chosen sample pages with randomly-chosen fonts from the test pool. What’s different about the fonts? Just how the spacing works. I can’t tell you more than that! You have to go into it unbiased or else it doesn’t add anything! Is this about spacing algorithms or has something ELSE been varied in the fonts and layouts? No telling! Do *I* think some algorithms are better than others? Maybe but maybe not! Are all the letters changed, or are some of them the same? That’s off limits! Is it actually totally random and all you’re really measuring is reader frustration in online media as the seasons change? I can’t tell you! (I can tell you it has samples in English, German, and French, and it ought to work with any browser, mobile or desktop. If you find a bug, please let me know!) Experimentifying: Using it is as simple as looking at the random text-and-font samples it shows you, then highlighting anything that you think looks too tightly spaced or too far apart. That’s it. You don’t have to give reasons, you don’t have to spend any particular amount of time on it. In early trials, most people averaged 5 min with a sample; after that you’re not likely to see new stuff. But you can certainly take an hour if that’s what you want; there is no clock on it. 
It lets you look at five samples. After that, it stops. You can restart it if you like, even immediately, but the “take a break” bit is built-in. There again, in earlier trials we noticed that it’s helpful to take a little breather time; you can start to get fatigued after too many samples in a row. Figuring out that kind of stuff (and how to tune the trials so people can do it in a certain amount of time X) is all about test-design, and balancing the potential variables; it’s a pretty interesting rabbit hole to go down. I think the samples probably get easier if you do at least two — that is, at least if you’re not accustomed to inspecting font samples. Because once you see some differences, it “clicks” a little. The site is totally anonymous, although it does ask you some general demographic & experience questions. It’s not tied to you in any way and there are no cookies (unless you’re eating some, which is definitely allowed). Simulation of the data-analysis stage. Not to scale. But the questions would let us compare & contrast on those various variables, which would allow us to see any patterns in what people report seeing — if such patterns emerge. Which they might… or might not…. Do type-pros and laypeople see different issues? Or see the same but at different speeds? Or is there perhaps some effect but it’s way less significant than the noise level so it doesn’t affect anything? That’s why you have to measure. Software Anyway, that’s where we’re at. I’ve been running these tests for a while now, but I’m getting to the end of my allotted time, so I kinda want to push for making sure we collect as diverse of a data set as we can. The trends & the commonalities & differentiations are all what appears in the data-analysis stage, so the more people who give it a try, the better the information-crunching is. It’s been super interesting putting the testing apparatus and the analytical pieces together; I can assure you that those aspects of it are going to be fodder for conference talks from me for quite some time. E.g., there are a lot of data-science libraries out there; take a wild guess how many of them support vectors of non-numeric characters as array indices for their plotting functions…. Or can deal with one-story and two-story “g” as being different tokens. Or have some method to graph heatmap data onto a string object such as a text-page. It’s fun stuff. Is it going to result in me submitting new modules to Plotly or matplotlib? Not sure. Typographic research hooks might get in a lot of people’s way there so it’s rarely a simple patch; it also remains to be seen how best to generalize some of those bits and pieces. Are you interested in a whole other rabbit-hole? Ask me about getting consistent color-schemes out of matplot, iGraph, Inkscape, and LibreOffice…. I also had to write a custom storage back-end for the mark DB. That part was a lot less fun. It involved JavaScript and other uncivilized notions. But that’s for another time. If you want to take a look, please go ahead; I’d be grateful. Letter.fit. Your input is absolutely valuable — and I mean that, no matter who you are or how much/little you care about the minute details of typography. Oh, and also, please do feel free to share the site link around; it’s got OpenGraph and TwitterCards and stuff in it, so it looks fancy. I’m glad to get people from the FOSS universe represented in the eventual data set, but tell people everywhere. I’d be second-degree polynomially grateful if you shared the word. 
If not, absolutely no hard feelings. Till next time; stay spaced out there. N (P.S. Comments are off on the actual blog site because some WordPress plugin in there is junk and mangles the CSS. ETA 2024 before I fix that. Or sooner if I nuke it & repave. Questions or comments, hit me on Mastodon, etc.)
  • Gurmannat Sohal: Embarking on a Summer of Coding Adventure (2023/05/12 09:19)
    GNOME + GSoC = ❤️ Hello everyone! My name is Gurmannat Sohal and I am excited to announce that I will participate in Google Summer of Code this year with the GNOME Foundation. I am pursuing a bachelor’s degree in Electrical Engineering from the Indian Institute of Technology, Roorkee. My interaction with GNOME started the day I quit Windows to work on a Linux distribution and happened to choose Fedora. For the past few months, I’ve been making open-source contributions to GNOME and absolutely love the community and work. My project for GSoC this year is to implement backlog search in the Polari IRC client. The title is self-explanatory; the goal of my project is to implement a backlog search feature, which will greatly benefit GNOME and its community through an improved application experience and an increased user base for the platform. I will be working closely with my mentors Carlos Garnacho and Florian Müllner throughout the program to ensure that I am meeting project milestones and delivering high-quality code. Throughout the summer, I plan to achieve several milestones. You can follow my progress on my project repository at GNOME/Polari. Additionally, I will be posting updates on my blog and my social media profiles. I am incredibly grateful for the opportunity to participate in GSoC this year and I look forward to collaborating with the GNOME community to achieve our project goals. Please feel free to reach out to me with any questions or feedback — I would love to hear from you!
  • Tanmay Patil: Hello, It’s me! (2023/05/12 06:39)
    Hi, everyone! I’m Tanmay (txnmxy) Patil, a Computer Engineering student at College of Engineering, Pune in India. I’ll be working on adding Acrostic support to GNOME Crosswords as part of Google Summer of Code (GSoC) and will be mentored by Jonathan Blandford. FOSS and me: Taking a trip down memory lane, a year and a half ago I used Linux for the first time in college and was really moved by it. Everything was new for me: the desktop, the terminal, etc. It being highly customizable really fascinated me. After six months, I finally took a leap of faith and installed Ubuntu on my PC. I became a full-time Linux user. Later, I learned about different distributions and desktop environments, and eventually realized that I was using GNOME. I fell in love with the philosophy of Free and Open-Source Software and its unwavering commitment to user freedom. I became a member of COEP’s Free Software Users Group, which is a group of people who promote the use of Free and Open-Source Software. In due course, I began using numerous pieces of software and started making small contributions to them. About GSoC: Being a GNOME user, I always wanted to do something for the GNOME community. I wasn’t even aware a thing like GSoC existed a year ago. After being introduced to it back in November, I saw it as a great opportunity to become a part of this wonderful community. During this summer, I’ll be working on adding Acrostic puzzle support to the Crosswords application. For those who don’t know about Acrostics, it is a type of word puzzle that consists of two parts. The first part is a set of clues with corresponding numbered blanks to represent the answers, and the second part is a long series of numbered blanks and spaces representing a text into which the answers for the clues fit. The project will involve extending the library libipuz to load acrostic puzzles from disk and then doing some widget work to add support to the game. Looking forward to an exciting summer! Additional… I still use the same distribution I installed initially; I never liked the idea of “distro-hopping”. Thank you for reading!
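To make the two-part structure a little more concrete, here is a tiny sketch of how the numbered blanks in the quote can point back to letters of the clue answers. It is purely illustrative and looks nothing like libipuz's actual C data model.

    # Illustrative only: not libipuz's data model.
    # Each clue has an answer; each cell of the quote refers to
    # (clue letter, index into that answer), or None for a space.
    answers = {
        "A": "OPEN",
        "B": "GNOME",
    }
    quote_cells = [("B", 0), ("B", 2), None,
                   ("A", 0), ("A", 1), ("A", 2), ("A", 3)]

    quote = "".join(
        " " if cell is None else answers[cell[0]][cell[1]]
        for cell in quote_cells
    )
    print(quote)  # "GO OPEN"

Solving clues therefore fills in letters of the quote, and vice versa, which is exactly why the puzzle needs both views wired together in the UI.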
  • Tim F. Brüggemann: Starting Out With GSoC 2023 (2023/05/12 00:00)
    With GSoC right around the corner, I recently stumbled upon a project running under the GNOME Foundation that piqued my interest. # Getting Interested in GSoC 2023 I have both used and learned from quite a few different FOSS applications and projects in the past, but I never really contributed to one before, which always bugged me. I either never really found the time, lacked the required skill(s), or just didn't find anything that interested me enough to get started. However, I then stumbled across FlatSync, a tool to keep your Flatpaks in sync across reinstalls and/or multiple machines. Switching over from NixOS to Silverblue, I missed declaring all my packages in one module and then having all of them available after a single nixos-rebuild switch, especially since I use 2 different machines regularly. Needless to say, I was instantly hooked on the idea and began setting up a dev environment to start working on the project. # First Contribution to the Project I reached out to the project's mentor on Matrix and got to work on implementing basic interaction with GitHub's Gist API. After a bit of chatting and mentoring (or getting mentored, I guess), I got my first MR ready and merged. # Applying to GSoC After successfully getting involved in the project, I decided to sign up for GSoC. I had mixed feelings about this, as I only have some basic experience with Rust and D-Bus. Fast-forward to today, and I've been accepted into the program. I am both very excited and grateful to be a part of such a big project, and I'm very much looking forward to engaging in development! # Start of GSoC With GSoC's Community Bonding Phase now starting, I'm currently working on setting up required socials and getting into the project again. # Project Progress I've reached out to the project's mentor once again and we laid out some issues for the first week to tackle. For this first week, we mostly did some refactoring work. There was a big MR against main open, but it was stale for about a month and had some major merge conflicts with the current state of main. As such, we resolved conflicts locally and refactored the code to accurately represent the current project's state. This was mostly unifying structs and other data types, as there have been different variations of essentially the same data models in both branches. Getting the MR merged meant we now had a basic daemon-to-CLI connection via D-Bus. Furthermore, we now serialize libflatpak's installation information to JSON and push that to GH Gist instead of just the application's name, which should make version management and different remotes way easier to handle in the future. We can also create a basic diff between local and remote already, which should help us later down the road when we actually want to merge together both states. # Community Bonding As a GSoC student participating alongside the GNOME Foundation, you are meant to pursue strong community bonding. This means getting on to Planet GNOME, joining GNOME Discourse as well as getting to know the other GSoC projects and their members. As you can see from this post, I'm currently doing exactly that. I played with the idea of setting up a blog in the past already, and, well, now is definitely a good time to start. :p I feel both excited and honored to be given the opportunity of such a strong integration into the GNOME community, and I very much look forward to it! 
# Future Outlook I very much look forward to this opportunity and hope that FlatSync as well as all other projects will be having a great and successful time. I hope to learn a lot about our used software stack (Rust, DBus, GTK) as well as the general development cycle of open-source projects, GNOME projects in particular, and I hope to continue working on this and maybe other projects after GSoC is over!
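As a footnote to the local-vs-remote diff mentioned above: FlatSync itself is written in Rust and serializes richer libflatpak installation data, but the basic idea can be sketched in a few lines of Python. The JSON layout below is invented purely for illustration.

    # Hypothetical sketch of diffing a local and a remote Flatpak list.
    # FlatSync's real data model (and its Rust implementation) is richer;
    # the JSON shape here is made up for illustration.
    import json

    local = json.loads('{"apps": ["org.gnome.TextEditor", "org.gnome.Calculator"]}')
    remote = json.loads('{"apps": ["org.gnome.TextEditor", "org.mozilla.firefox"]}')

    local_ids, remote_ids = set(local["apps"]), set(remote["apps"])
    print("only local :", sorted(local_ids - remote_ids))   # candidates to push to the Gist
    print("only remote:", sorted(remote_ids - local_ids))   # candidates to install locally
    print("in both    :", sorted(local_ids & remote_ids))

Merging the two states later is then mostly a question of deciding which side wins for each of those three buckets.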
  • Dave Patrick Caberto: GSoC 2023: Introduction (2023/05/11 15:17)
    Hello there! I am Dave Patrick Caberto, a first-year Electronics Engineering student from Bataan Peninsula State University, Philippines. This summer, I will be working on the Rust and GTK 4 rewrite of Bustle, a D-Bus activity visualizer, with the guidance of my mentors, Bilal Elmoussaoui and Maximiliano Sandoval. Me and my bizarre open-source journey: Weirdly enough, I first discovered Linux five years ago on a failed attempt to do Hackintosh. I got so bored with Windows that I decided to try something new and different, and that's when I eventually stumbled upon Linux, specifically Elementary OS. It did not last for me since I still depended a lot on applications that are only available on Windows. Fast-forward two years, and I decided to give Linux another try with KDE Neon and migrate my workflows to open-source applications such as LibreOffice, Inkscape, and Kdenlive. I remember feeling adventurous and breaking my system numerous times, though I considered it part of the learning process. I was intrigued by the idea of being a part of a community of passionate developers and enthusiastic users. I also liked the customization it had. However, as I grew with it, I started to realize that I had spent way too much time configuring things and was forgetting to focus on what was really important, and that is when I discovered GNOME. When I first used GNOME, it was a totally different workflow, and I liked it. It was simple; the UI made sense. At that time, there was some software I was missing, particularly a screen recording application that works nicely on Wayland. That was the time I discovered RecApp. It worked well; however, it did not look at home, even for a GTK application. One of my first contributions was creating a mockup for the redesign, and I also took that opportunity to learn Python and GTK and implement my mockup. I was not necessarily proud of my code, but I was proud and happy about having the chance to contribute to the software I use and the community. A few months later, as I learned more about programming and the GNOME developer ecosystem, there were many more things I would have liked to improve and change in RecApp, some of which other contributors did not agree with. That's when Kooha was born, with a total in-and-out redesign of RecApp and a different focus and ideology. Almost a month later was also the birth of Mousai, a song recognizer application. Since then, I have been maintaining these two applications. There are definitely a lot of things I missed, but that was a quick summary of my journey with open source. More about Bustle and the project: For those who have not heard of Bustle, it is an application used to visualize D-Bus activities. It shows signal emissions, method calls, method returns, and errors, which is useful in observing traffic, debugging, and optimizing performance in D-Bus applications. Although the current implementation of Bustle in Haskell and GTK 3 is functional, there are noteworthy reasons to consider a rewrite in Rust. Some of these advantages include having access to a range of libraries, such as zbus, gtk4-rs, and LibPCAP bindings. Aside from this, the growing Rust community and the availability of the Rust SDK in Flathub would make the tool more accessible to contributors and simpler to distribute to the users. 
On the other hand, porting Bustle to GTK 4 would offer benefits such as the newer and more modern Libadwaita widgets and the ListView API, which would make it easier for the tool to comply with the HIG and benefit from the latest developments in the platform. Altogether, this rewrite and port will contribute to the maintainability, accessibility, and future-proofing of Bustle through the use of newer and arguably more ergonomic technologies. I will be posting updates about the project every few weeks for the next few months, including more details about the plans for this GSoC project. If you'd like to talk about something, feel free to contact me on Matrix at @sedve:matrix.org. Thanks for reading!
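(Bustle itself captures full method calls, returns, and errors, today in Haskell and soon in Rust with zbus. As a far more modest taste of what "observing D-Bus activity" means, here is a hedged PyGObject sketch, unrelated to Bustle's code, that simply prints every signal seen on the session bus.)

    # Minimal sketch: print every signal on the session bus.
    # Nowhere near what Bustle does (no method calls/returns, no pcap
    # capture, no timeline UI); purely for illustration.
    from gi.repository import Gio, GLib

    def on_signal(conn, sender, path, interface, member, params, *user_data):
        print(f"{sender} {path} {interface}.{member} {params}")

    bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
    # Passing None for sender/interface/member/path/arg0 matches everything.
    bus.signal_subscribe(None, None, None, None, None,
                         Gio.DBusSignalFlags.NONE, on_signal)
    GLib.MainLoop().run()

Run it, then do something that emits signals (plug in a device, change a setting) and watch the output scroll by.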
  • Hari Rana: Overview of Flatpak’s Permission Models (2023/05/11 00:00)
    Introduction Flatpak’s permissions can be confusing. Some are technical and need knowledge on how they work, and others are self-explanatory. Some are added before the app starts, known as static permissions, and some are requested when the user runs the app, known as dynamic permissions. Many may also criticize Flatpak for lacking Android-style permissions while being unaware of the existence of XDG Desktop Portals. In this article, I’m going to explain: What static and dynamic permissions are The differences between static and dynamic permissions The issues with static permissions What XDG Desktop Portals are and how they work Why static permissions exist in the first place Keep in mind that I won’t be going into low level details in this article. This is a simplified overview of the Flatpak permissions system for the less technical. Static Permissions Static permissions are permissions that come with the app by default and cannot be changed while the app is running. Suppose an app has the permission to read and write to $HOME/Downloads (the Downloads directory). If you revoke that permission, the app keeps the permission until you completely close it and reopen it. Once you reopen it, you’ll notice that it can’t access the Downloads directory anymore, whether it is through the file picker or by dragging and dropping – assuming it does not make use of dynamic permissions. Dynamic Permissions Dynamic permissions are permissions that can be changed while the app is running. In other words, resources are only accessed when the user allows it on demand, and can be revoked while the app is running. This is analogous to the Android-style permission model, where a prompt appears on-screen and asks you whether you want to allow the app to access certain resources (a file, hardware, etc.) or deny it. Most well known desktop environments, like GNOME and KDE Plasma, support these permissions, and will actively suggest that apps use them. Decoder, for example, uses dynamic permissions to access the user’s camera: A prompt appears to allow or deny camera permissions for Decoder; I press Deny to deny the permission; I open Settings and enable the Camera permission, and Decoder can now access my camera; I deny once again and Decoder cannot access my camera anymore. Using dynamic permissions to allow and deny camera permissions for Decoder. Differences Between Static and Dynamic Permissions Dynamic permissions are the opposite of static permissions. Static permissions are often viewed as a hack, in other words a workaround, whereas dynamic permissions as a solution. Dynamic permissions are meant to replace static permissions and address their issues. Static permissions are implicit, because they come with the app by default. They need to be learned when using Flatpak to some extent, as some apps can come with unsuitable permissions, e.g. not enough permissions. Dynamic permissions are explicit, because the user allows them on demand. They’re mostly abstracted away from the user, similarly to Android. The Issues With Static Permissions There are several issues with static permissions. As mentioned previously, static permissions are implicit, need to be learned and are considered hacks; however, there are many more issues associated with them. User Experience Complications Since static permissions are implicit and need to be learned, if an app comes with unsuitable permissions, then it can be unusable or inconvenient by default – assuming it doesn’t use dynamic permissions. 
For example, if a text editor does not come with the filesystem=host permission, i.e. read-write access to the host system, including user and external devices’ directories, then the app would be effectively useless, because it can’t access any of your files, let alone write. To work around this inconvenient default, the user needs to manually set additional permissions, to make it useful. In this case, they’d manually have to add filesystem=host. Another workaround would have the packager add this permission, but the actual solution would be to use dynamic permissions to be able to read and write files anywhere and anytime, as long as the user allows it. Insecure There are several reasons why static permissions are insecure, namely the inability to filter resources properly, and insecure permissions being shipped by default. Inability to Filter Resources Static permissions have no proper method to filter resources. The model’s philosophy is “punch holes in the sandbox whenever needed”, which means that you are effectively making the sandbox weaker with each additional permission you give it. For example, if an app has read-write access to the Downloads directory (or any directory), then it can view and write all files located in it at any time. In contrast, dynamic permissions are designed to be selective, so you only access the files you absolutely need. For example, Upscaler uses dynamic permissions to retrieve files by either dragging and dropping or selecting one from the file picker:1 In a file manager, in the '/var/home/TheEvilSkeleton/Pictures/Upscaler' directory, I grab an image named 'test.png' and drop it into an app called Upscaler. Upscaler returns '/run/user/1000/doc/98b40428/test.png' in the logs and imports the image into the app. Then, using the file picker, I open a file 'test2.jpg'. Upscaler returns '/run/user/1000/doc/92c6053f/test2.jpg' and shows the image in the app. Selecting files by dragging and dropping, and by using the file picker. The illusion here is Upscaler comes with read and write capabilities to all directories and files by default. This is untrue. Instead, dynamic permissions automatically retrieve the data the user provides and act accordingly. In the previous example, opening a file from the Pictures directory exported the file to a private location that the sandbox can access (/run/user/1000/doc normally). This means that only the files I provided can be interacted with by Upscaler, and nothing else. If there was a test3.jpg file in the same directory, then Upscaler won’t be able to access it, as I never provided it. Insecure Defaults With static permissions, you get a pre-configured set of permissions that the app can access once installed (unless manually changed). This means, apps can come with enough permissions by default to read and write the user directory, or even worse, it can access external devices, such as USBs, webcams, microphones, etc. This is by nature insecure and confusing as the default permissions are inconsistent and vary per app. Some apps come with little to no permissions, whereas others come with many. Dynamic permissions, on the other hand, come with no permissions by default. Once needed, a prompt appears and asks the user for explicit permissions. Refer to the Decoder and Upscaler examples. XDG Desktop Portals XDG Desktop Portals, shortened to Portals, are a collection of APIs that implement dynamic permissions; they allow sandboxed environments to conveniently and securely access resources by using host tools. 
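To make that a little more concrete, here is a hedged PyGObject sketch, not taken from this article, that talks to the FileChooser portal directly over D-Bus. Real apps normally get this for free from their toolkit, and the sketch skips details (such as passing a handle_token and subscribing before calling, which avoids a race) that proper clients handle.

    # Illustration only: calling the FileChooser portal by hand.
    from gi.repository import Gio, GLib

    loop = GLib.MainLoop()
    bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)

    def on_response(conn, sender, path, iface, signal, params, *user_data):
        # params is (response_code, results); "uris" holds the chosen files.
        code, results = params.unpack()
        print("response code:", code, "uris:", results.get("uris"))
        loop.quit()

    chooser = Gio.DBusProxy.new_sync(
        bus, Gio.DBusProxyFlags.NONE, None,
        "org.freedesktop.portal.Desktop",
        "/org/freedesktop/portal/desktop",
        "org.freedesktop.portal.FileChooser", None)

    # OpenFile returns the object path of a Request; the actual result
    # arrives later as a Response signal on that object.
    handle = chooser.call_sync(
        "OpenFile",
        GLib.Variant("(ssa{sv})", ("", "Pick a file", {})),
        Gio.DBusCallFlags.NONE, -1, None).unpack()[0]

    bus.signal_subscribe(
        "org.freedesktop.portal.Desktop", "org.freedesktop.portal.Request",
        "Response", handle, None, Gio.DBusSignalFlags.NONE, on_response)
    loop.run()

For a sandboxed app, whatever file the user picks then shows up under a document-portal path, like the /run/user/1000/doc/… ones in the Upscaler example above.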
Each portal is specific to the use case – there is a portal for accessing the camera; a portal for accessing files and directories using the host file picker; or to assist dragging and dropping; and many more. Using Decoder’s example again, it was specifically using the Camera portal. In Upscaler’s example, when I dragged and dropped test.png (at 0:06), it was using the FileTransfer portal. When I opened test2.jpg (at 0:13), it was using the FileChooser portal. However, Portals do more than just implement dynamic permissions; they’re also designed to integrate cleanly with the desktop. A famous problem with apps used to be the file picker problem, where an app that used a toolkit would use their corresponding file picker. For example, a Qt app would use the Qt file picker and a GTK app would open a GTK file picker. Frameworks like Electron would only use the GTK file picker, even on Plasma, because there wasn’t a convenient method to use the system’s file picker. This was solved by the FileChooser portal, which uses the system’s file picker, or a generic GTK file picker implementation as a fallback. That is because Portals are interfaces that systems can integrate and interact with. To explain it simply, a portal only provides basic information to the system. The system grabs this information and displays it to the user however it sees fit. Firefox, for example, was one of those apps that only used the GTK file picker. Nowadays, if you use Firefox on GNOME, it will continue to use the GTK file picker. However, if you use Firefox on Plasma, then it will use the Qt file picker instead. Should System76 create their own file picker for COSMIC in the future, Firefox will open COSMIC’s file picker on COSMIC. In short, it’s thanks to Portals that we have dynamic permissions in the first place. Additionally, Portals helped apps use host resources instead of whatever the toolkit or framework provides. This means that apps can better integrate with the system, and do so easily. Why Do Static Permissions Still Exist? Unfortunately, Portals aren’t perfect. While it is the closest to the idealistic user experience, there are a few advantages that static permissions have over dynamic permissions: no implementation is required and less functional limitations. With dynamic permissions, toolkits and frameworks need to implement each portal. At the time of writing this article, GTK supports the FileChooser and FileTransfer Portals. Qt, Firefox, Chromium, and Electron support the FileChooser portal, but not the FileTransfer portal. This means that dragging and dropping files from outside of the sandbox will not work. What’s worse, wxWidgets doesn’t support FileChooser or FileTransfer Portals, which means that we need to resort to static permissions for file access, and only the GTK file picker will be used. Portals may have technical limitations. Bottles, an app that allows you to run Windows software on Linux, is one of the apps that is affected by Portals’ limitations, specifically the FileChooser and FileTransfer Portals. Many Windows apps require some additional files next to the executable file (e.g. libraries), for example the Windows version of Mindustry. If a user downloads and runs the Mindustry executable from Bottles strictly with dynamic permissions, then it won’t work, because Mindustry can’t access the files it needs; the sandbox will only have read and write access to the Mindustry executable, and not the additional files. 
As a workaround, Bottles comes with the static permission xdg-download, i.e. read-write access to the Downloads directory, where most users store downloaded content. However, this doesn’t fix the issue entirely, because it won’t work outside of the Downloads directory (unless manually changed). These limitations could go as far as making Portals unsuitable for certain apps, or making them inconvenient for developers and users.2 When I explain the premise of static permissions, I like to explain it like Xwayland and Wayland. Xwayland acts as a drop-in replacement for X11, for use in Wayland, but it comes with the design flaws of X11. Similarly, static permissions are used as a drop-in replacement for traditional system access, for use in Flatpak. Static permissions are intended to be a short-term workaround, as dynamic permissions are meant to be convenient, especially as more toolkits and frameworks implement them. However, XDG Desktop Portals are a relatively new technology, so they need time to mature and be adopted. Conclusion: To summarize everything, here are some important details about the permission models.
Static permissions are implicit permissions that cannot be changed while the application is running.
Pros:
 * No implementation required
Cons:
 * A workaround, not a solution
 * Difficult to use and understand
 * Permissions vary between apps on a case-by-case basis
 * Permissions cannot be changed while the app is running
 * The model revolves around punching holes in the sandbox, which is inherently insecure
 * Does not make use of system resources, like the system file picker
Dynamic permissions (Portals) are explicit permissions that the user allows on demand.
Pros:
 * Easy to use
 * Permissions are disabled by default, i.e. the Principle of Least Privilege
 * The user explicitly allows or denies every permission, but they’re disabled by default
 * Each permission is selective; for example, opening a file will only access that file, nothing more
 * Permissions can be changed while the app is running
 * Integrates well with the desktop
 * Considered a standard for accessing resources
 * Sounds cool (image previews with my DE’s file picker in a GTK3 app? Yes please!)
Cons:
 * Requires code changes in toolkits/frameworks/apps made without Portals in mind
 * Portals might not be suitable for some app use cases yet, or can be a nuisance
Unfortunately, transitioning from one model or paradigm to another in the realm of technology isn’t easy. Oftentimes, transitioning to the tech is more difficult than developing the tech itself. We currently rely on static permissions because Portals need to be implemented and do not cover all use cases. We’ve seen a lot of progress in recent years with Portals being adopted in toolkits and frameworks. Hopefully, Portals will only get better in the future as more people use them. If you want to experiment with XDG Desktop Portals, feel free to take a look at ASHPD Demo. Please do note that the app may misbehave if your desktop environment does not support the portal or does so improperly. Further Reading: If you want to learn more about static permissions, feel free to take a look at Flatseal’s documentation. For the offline version, you can install Flatseal, click the hamburger menu (☰) and click “Documentation”. If you want a technical overview of Flatpak’s permission models, feel free to take a look at Flatpak High-Level Overview. Credits: Thanks to Oro (Mastodon and Website) for proofreading this article, and to Flatpak High-Level Overview and Flatseal for documenting the permission models.
Footnotes:
 1. At the time of writing this article, it’s not available on the stable version of Upscaler. ↩
 2. Luckily, this issue is being discussed with “neighboring files” in this discussion. xdg-download won’t be needed anymore once it has been addressed. ↩
  • Michael Catanzaro: GNOME Core Apps Update (2023/05/10 21:32)
    It’s been a while since my big core app reorganization for GNOME 3.22. Here is a history of core app changes since then: GNOME 3.26 (September 2017) added Music, To Do (which has since been renamed to Endeavor), and Document Scanner (simple-scan). (I blogged about this at the time, then became lazy and stopped blogging about core app updates, until now.) To Do was removed in GNOME 3.28 (March 2018) due to lack of consensus over whether it should really be a core app.  As a result of this, we improved communication between GNOME release team and design team to ensure both teams agree on future core app changes. Mea culpa. Documents was removed in GNOME 3.32 (March 2019). A new Developer Tools subcategory of core was created in GNOME 3.38 (September 2020), adding Builder, dconf Editor, Devhelp, and Sysprof. These apps are only interesting for software developers and are not intended to be installed by default in general-purpose operating systems like the rest of GNOME core. GNOME 41 (September 2021) featured the first larger set of changes to GNOME core since GNOME 3.22. This release removed Archive Manager (file-roller), since Files (nautilus) is now able to handle archives, and also removed gedit (formerly Text Editor). It added Connections and a replacement Text Editor app (gnome-text-editor). It also added a new Mobile subcategory of core, for apps intended for mobile-focused operating systems, featuring the dialer app Calls. (To date, the Mobile subcategory has not been very successful: so far Calls is the only app included there.) GNOME 42 (March 2022) featured a second larger set of changes. Screenshot was removed because GNOME Shell gained a built-in screenshot tool. Terminal was removed in favor of Console (kgx). We also moved Boxes to the Developer Tools subcategory, to recommend that it no longer be installed by default in general purpose operating systems. GNOME 43 (September 2022) added D-Spy to Developer Tools. OK, now we’re caught up on historical changes. So, what to expect next? New Process for Core Apps Changes Although most of the core app changes have gone smoothly, we ran into some trouble replacing Terminal with Console. Console provides a fresher and simpler user interface on top of vte, the same terminal backend used by Terminal, so Console and Terminal share much of the same underlying functionality. This means work of the Terminal maintainers is actually key to the success of Console. Using a new terminal app rather than evolving Terminal allowed for bigger changes to the default user experience without upsetting users who prefer the experience provided by Terminal. I think Console is generally nicer than Terminal, but it is missing a few features that Fedora Workstation developers thought were important to have before replacing Terminal with Console. Long story short: this core app change was effectively rejected by one of our most important downstreams. Since then, Console has not seen very much development, and accordingly it is unlikely to be accepted into Fedora Workstation anytime soon. We messed up by adding the app to core before downstreams were comfortable with it, and at this point it has become unclear whether Console should remain in core or whether we should give up and bring back Terminal. Console remains for now, but I’m not sure where we go from here. Help welcome. 
To prevent this situation from happening again, Sophie developed a detailed and organized process for adding or removing core apps, including a new Incubator category designed to provide notice to downstreams that we are considering adding new apps to GNOME core. The new Incubator is much more structured than my previous short-lived Incubator attempt in GNOME 3.22. When apps are added to Incubator, I’ve been proactively asking other Fedora Workstation developers to provide feedback to make sure the app is considered ready there, to avoid a repeat of the situation with Console. Other downstreams are also welcome to watch the Incubator/Submission project and provide feedback on newly-submitted apps, which should allow plenty of heads-up so downstreams can let us know sooner rather than later if there are problems with Incubator apps. Hopefully this should ensure apps are actually adopted by downstreams when they enter GNOME core. Imminent Core App Changes Currently there are two apps in Incubator. Loupe is a new image viewer app developed by Chris and Sophie to replace Image Viewer (eog). Snapshot is a new camera app developed by Maximiliano and Jamie to replace Cheese. These apps are maturing rapidly and have received primarily positive feedback thus far, so they are likely to graduate from Incubator and enter GNOME core sooner rather than later. The time to provide feedback is now. Don’t be surprised if Loupe is included in core for GNOME 45. In addition to Image Viewer and Cheese, we are also considering removing Photos. Photos is one of our “content apps” designed to allow browsing an entire collection of files independently of their filesystem locations. Historically, the other two content apps were Documents and Music. The content app strategy did not work very well for Documents, since a document browser doesn’t really offer many advantages over a file browser, but Photos and Music are both pretty decent at displaying your collection of pictures or songs, assuming you have such a collection. We have been discussing what to do with Photos and the other content apps for a very long time, at least since 2015. It took a very long time to reach some rough consensus, but we have finally agreed that the design of Photos still makes sense for GNOME: having a local app for viewing both local and cloud photos is still useful. However, Photos is no longer actively maintained. Several basic functionality bugs imperiled timely release of Fedora 37 last fall, and the app is less useful than previously because it no longer integrates with cloud services like Google Photos. (The Google integration depends on libgdata, which was removed from GNOME 44 because it did not survive the transition to libsoup 3.) Photos has failed the new core app review process due to lack of active maintenance, and will soon be removed from GNOME core unless a new maintainer steps up to take care of it. Volunteers welcome.
If you do have local music, Music is pretty decent at handling it, but there are prominent bugs and missing features (like the ability to select which folders to index) detracting from the user experience. We do not have consensus on whether having a core app to play local music files still makes sense, since most users probably do not have a local music collection anymore. But perhaps all that is a moot point, because Videos (totem) 3.38 removed support for opening audio files, leaving us with no core apps capable of playing audio for the past 2.5 years. Previously, our default music player was Videos, which was really weird, and now we have none; Music can only play audio files that you’ve navigated to using Music itself, so it’s impossible for Music to be our default music player. My suggestion to rename Videos to Media Player and handle audio files again has not been well-received, so the most likely solution to this conundrum is to teach Music how to open audio files, likely securing its future in core. A merge request exists, but it does not look close to landing. Fedora Workstation is still shipping Rhythmbox rather than Music specifically due to this problem. My opinion is this needs to be resolved for Music to remain in core. It would be nice to have an email client in GNOME core, since everybody uses email and local clients are much nicer than webmail. The only plausible candidate here is Geary. (If you like Evolution, consider that you might not like the major UI changes and many, many feature removals that would be necessary for Evolution to enter GNOME core.) Geary has only one active maintainer, and adding a big application that depends on just one person seems too risky. If more developers were interested in maintaining Geary, it would feel like a safer addition to GNOME core. Contacts feels a little out of place currently. It’s mostly useful for storing email addresses, but you cannot actually do anything with them because we have no email application in core. Like Photos, Contacts has had several recent basic functionality bugs that imperiled timely Fedora releases, but these seem to have been largely resolved, so it’s not causing urgent problems. Still, for Contacts to remain in the long term, we’re probably going to need another maintainer here too. And perhaps it only makes sense to keep if we add Geary. Finally, should Maps move to the Mobile category? It seems clearly useful to have a maps app installed by default on a phone, but I wonder how many desktop users really prefer to use Maps rather than a maps website. GNOME 44 Core Apps I’ll end this blog post with an updated list of core apps as of GNOME 44. Here they are: Main category (26 apps): Calculator Calendar Characters Cheese Clocks Connections Console (kgx) Contacts Disks (gnome-disk-utility) Disk Usage Analyzer (baobab) Document Scanner (simple-scan) Document Viewer (evince) Files (nautilus) Fonts (gnome-font-viewer) Help (yelp) Image Viewer (eog) Logs Maps Music Photos Software System Monitor Text Editor Videos (totem) Weather Web (epiphany) Developer Tools (6 apps): Boxes Builder dconf Editor Devhelp D-Spy sysprof Mobile (1 app): Calls
  • Akshay Warrier: GSoC 2023: Introductory Post (2023/05/10 21:04)
    Hi! I’m Akshay Warrier, a second-year student from the Indian Institute of Information Technology Kottayam, India, studying Electronics and Communication Engineering. This summer I’ll be working on Workbench with my mentors Sonny Piers and Andy Holmes. Me, Linux, and GNOME: It was about two years ago that I decided to give Linux a try on a whim. I installed Ubuntu and played around with it for a bit, and it wasn’t long before I had completely switched over. Everything was new and different to me, but I remember really liking the unified look and feel of the UI. There were still things I wasn’t accustomed to, such as using the terminal. I would later learn about desktop environments and realize that the apps and software I had been using, and had come to know and love, are from GNOME. A few months later, I had gone down the rabbit hole of distro-hopping. I installed and tried out various different Linux distros and in the process learned a lot about the Linux community, following all sorts of forums and subreddits. I eventually stumbled across Arch Linux. I had my apprehensions about installing it, but my curiosity took over and I gave it a shot anyway. It has stuck with me since then. Workbench and Library Demos: I knew very early on that I wanted to be part of the GNOME community as a contributor, but I found it very difficult to find a project that was the right fit for me. This was mainly because I had no prior experience working with GTK, and most of my exposure before this was working on small projects involving Python or JavaScript. But I remember coming across Workbench and seeing that it didn’t have GTK as a prerequisite and also used JavaScript, which worked well in my favor. I started making contributions to Workbench, and after several contributions I got myself familiarized with GJS and Blueprint, the UI markup language used by Workbench. Workbench has a feature called the “Library”, which is a collection of demos that show the various widgets, APIs, and design patterns of GTK. This is really useful for people who just want to quickly look up the basic usage of a widget or an API but don’t have the time to go through piles of documentation. Currently, the Library already includes around 23 demos, and our goal is to cover most, if not all, commonly used widgets and APIs by the end of the internship. In the end… I believe I owe a lot to the open-source community for all the software and applications I use daily, and therefore being able to contribute and give back to the community through GSoC means a lot to me. I hope to learn a lot on this new and exciting journey of mine. Thank you for reading!
  • Pedro Sader Azevedo: Hello Planet GNOME! (2023/05/09 20:10)
    Hi, everyone! I’m Pedro Sader Azevedo, a Computer Engineering student at Universidade Estadual de Campinas (Unicamp) in Brazil. In the next few months I’ll be working as a Google Summer of Code (GSoC) intern to integrate the functionality of GNOME Network Displays into GNOME Settings, with the guidance of Felipe Borges, Claudio Wunder, Jonas Ådahl, and Anupam Kumar. More about me: During the pandemic, when every aspect of life was moved to the digital realm, I learned about Free Software and was won over by its principled stance on technology. I decided to take back my computing and became a user (and vocal advocate) of GNU/Linux! Soon after, I helped organize The Week of Computing at my university. I invited a representative of GNOME (the one and only Georges Stavracas!), whose talk was especially impressive to me as I thoroughly identified with the values of freedom, collaboration, and inclusivity embodied by the project. I started contributing with translations (which I still do a lot!) and was so warmly welcomed that I soon started programming as well. I also contribute to community engagement, by helping the Free and Open Source Software study group LKCAMP organize events that promote usage of and contribution to FOSS projects. More about my internship: The goal of the internship is to add network displays (i.e. “screen casting”) to GNOME Settings. I plan to do that by separating the backend of GNOME Network Displays into a library, then using that library to implement the new feature in the Settings app. This change will contribute to the general impression of using a “first-class” operating system that seamlessly integrates with others. While it is true that the app already provides said feature, finding it directly in Settings is much more reassuring and convenient than having to install a separate program just for that. If there’s anything you’d like to discuss, feel free to reach out to me using any of the communication platforms listed here (I’m most active on Matrix though). Thank you for reading!
  • Richard Hughes: MSI and Insecure KMs (2023/05/09 13:40)
    As some of you may know, MSI suffered a data breach which leaked a huge amount of source code, documentation and low-level firmware PRIVATE KEYS. This is super bad as it now allows anyone to sign a random firmware image and install it as an official MSI firmware. It’s even more super bad than that, as the certificates leaked seem to be the KeyManifest keys, which actually control the layer below SecureBoot, this little-documented and even less well understood thing called BootGuard. I’ll not overplay the impact here, but there is basically no firmware security on most modern MSI hardware now. We already detect the leaked test keys from Lenovo and notify the user via the HSI test failure and I think we should do the same thing for MSI devices too. I’ve not downloaded the leak for obvious reasons, and I don’t think the KM hashes would be easy to find either. So what can you do to help? Do you have an MSI laptop or motherboard affected by the leak? The full list is here (source: Binarly) and if you have one of those machines I’d ask if you could follow the instructions below, run MEInfo and attach it to the discussion please. As for how to get MEInfo, Intel doesn’t want to make it easy for us. The Intel CSME System Tools are all different binaries, and are seemingly all compiled one-by-one for each specific MEI generation — and available only from a semi-legitimate place unless you’re an OEM or ODM. Once you have the archive of tools you either have to work out what CSME revision you have (e.g. Ice Point is 13.0) or do what I do and extract all the versions and just keep running them until one works. e.g. choosing the wrong one will get you:
sudo ./CSME\ System\ Tools\ v13.50\ r3/MEInfo/LINUX64/MEInfo
Intel (R) MEInfo Version: 13.50.15.1475
Copyright (C) 2005 - 2021, Intel Corporation. All rights reserved.
Error 621: Unsupported hardware platform. HW: Cometlake Platform. Supported HW: Jasplerlake Platform.
And choosing the right one will get you:
Intel (R) MEInfo Version: 14.1.60.1790
Copyright (C) 2005 - 2021, Intel Corporation. All rights reserved.
General FW Information
…
OEM Public Key Hash FPF 2B4D5D79BD7EE3C192412A4501D88FB2066C853FF7B1060765395D671B15D30C
Now, how to access these hashes is what Intel keeps a secret, for no reason at all. I literally need to know what integer index to use when querying the HECI device. I’ve asked Intel, but I’ve been waiting since October 2022. For instance:
sudo strace -xx -s 4096 -e openat,read,write,close ./CSME\ System\ Tools\ v14.0.20+\ r20/MEInfo/LINUX64/MEInfo
…
write(3, "\x0a\x0a\x00\x00\x00\x23\x00\x40\x00\x00\x00\x00\x20\x00\x00\x00\x00", 17) = 17
read(3, "\x0a\x8a\x00\x00\x20\x00\x00\x00\x2b\x4d\x5d\x79\xbd\x7e\xe3\xc1\x92\x41\x2a\x45\x01\xd8\x8f\xb2\x06\x6c\x85\x3f\xf7\xb1\x06\x07\x65\x39\x5d\x67\x1b\x15\xd3\x0c", 4096) = 40
…
That contains all the information I need – the Comet Lake READ_FILE_EX ID is 0x40002300 and there’s a SHA256 hash that matches what the OEM Public Key Hash FPF console output said above. There are actually three accesses to get the same hash in three different places, so until I know why I’d like the entire output from MEInfo. The information I need uploading to the bug is then just these two files:
sudo ./THE_CORRECT_PATH/MEInfo/LINUX64/MEInfo &> YOUR_GITHUB_USERNAME-meinfo.txt
sudo strace -xx -s 4096 -e openat,read,write,close ./THE_CORRECT_PATH/MEInfo/LINUX64/MEInfo &> YOUR_GITHUB_USERNAME-meinfo-strace.txt
If I need more info I’ll ask on the ticket. Thanks!
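For what it’s worth, the interesting part of that strace capture is easy to pick apart by hand: the reply is a short header followed by the 32-byte SHA256 OEM key hash. The sketch below assumes the layout visible in the trace (bytes 4-7 look like a little-endian payload length), which is an inference from the captured bytes, not something taken from Intel documentation.

    # Hedged sketch: pull the OEM Public Key Hash out of a HECI reply like
    # the one captured above. The 8-byte header layout is inferred from the
    # trace itself, not from any official documentation.
    import struct

    reply = bytes.fromhex(
        "0a8a000020000000"                                                  # header
        "2b4d5d79bd7ee3c192412a4501d88fb2066c853ff7b1060765395d671b15d30c"  # hash
    )

    header, payload = reply[:8], reply[8:]
    (length,) = struct.unpack_from("<I", header, 4)   # 0x20 == 32 bytes
    assert length == len(payload) == 32
    print("OEM Public Key Hash FPF:", payload.hex().upper())

That prints the same 2B4D5D79… value MEInfo reported above, which is reassuring but still doesn’t answer the real question of which integer index to use when querying the device.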
  • Peter Hutterer: libei and a fancy protocol (2023/05/09 00:51)
    libei is the library for Emulated Input - see this post for an introduction. Like many projects, libei was started when it was still unclear if it could be the right solution to the problem. In the years (!) since, we've upgraded the answer to that question from "hopefully" to "yeah, I reckon" - doubly so since we added support for receiver contexts and got InputLeap working through the various portal changes. Emulating or capturing input needs two processes to communicate for obvious reasons so the communication protocol is a core part of it. But initially, libei was a quickly written prototype and the protocol was hacked up on an as-needed let's-get-this-working basis. The rest of the C API got stable enough but the protocol was the missing bit. Long-term the protocol must be stable - without a stable protocol updating your compositor may break all flatpaks still shipping an older libei. Or updating a flatpak may not work with an older compositor. So in the last weeks/months, a lot of work has gone into making the protocol stable. This consisted of two parts: drop protobuf and make the various features interface-dependent, unashamedly quite like the Wayland protocol which is also split into a number of interfaces that can be independently versioned. Initially, I attempted to make the protocol binary compatible with Wayland but dropped that goal eventually - the benefits were minimal and the effort and limitations (due to different requirements) were quite significant. The protocol is defined in a single XML file and can be used directly from language bindings (if any). The protocol documentation is quite extensive but it's relatively trivial in principle: the first 8 bytes of each message are the object ID, then we have 4 bytes for the message length in bytes, then 4 for the object-specific opcode. That opcode is one of the requests or events in the object's interface - which is defined at object creation time. Unlike Wayland, the majority of objects in libei are created server-side (the EIS implementation decides which seats are available and which devices in those seats). The remainder of the message is the arguments. Note that unlike other protocols the message does not carry a signature - prior knowledge of the message is required to parse the arguments. This is a direct effect of initially making it Wayland-compatible and I didn't really find it worth the effort to add this. Anyway, long story short: swapping the protocol out didn't initially have any effect on the C library but with the changes came some minor updates to remove some of the warts in the API. Perhaps the biggest change is that the previous capabilities of a device are now split across several interfaces. Your average mouse-like emulated device will have the "pointer", "button" and "scroll" interfaces, or maybe the "pointer_absolute", "button" and "scroll" interfaces. The touch and keyboard interfaces were left as-is. Future interfaces will likely include gestures and tablet tools; I have done some rough prototyping locally and it will fit in nicely enough with the current protocol. At the time of writing, the protocol is not officially stable but I have no intention of changing it short of some bug we may discover. Expect libei 1.0 very soon.
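    Not from the post, but as a concrete illustration of the header layout described above, here is a minimal parsing sketch. The field sizes (8-byte object ID, 4-byte message length, 4-byte opcode) come from the text; the little-endian byte order and the assumption that the length covers the whole message including the header are mine, so check the XML protocol definition for the authoritative details.

      # Minimal sketch of splitting a libei message into its header fields as
      # described above. Byte order and the exact meaning of the length field are
      # assumptions; the protocol's XML definition is authoritative.
      import struct

      HEADER = struct.Struct("<QII")   # object ID (8), message length (4), opcode (4)

      def split_message(buf: bytes):
          object_id, length, opcode = HEADER.unpack_from(buf)
          # Arguments carry no signature; parsing them requires prior knowledge of
          # the interface and opcode, exactly as the post explains.
          args = buf[HEADER.size:length]
          rest = buf[length:]              # remaining bytes of the stream, if any
          return object_id, opcode, args, rest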
  • Pedro Sader Azevedo: Making 4 GNOME Shell extensions compatible with GNOME 44 (2023/05/08 22:14)
    I absolutely love extensions, in general. I remember being blown away every time I found notably extensible software, from Firefox add-ons in my early days of surfing the web to Neovim plugins in my latest coding endeavors. This is why, once I started using GNU/Linux, I was amazed by GNOME Shell extensions. Being able to customize the appearance and behavior of my desktop environment as easily as I used to with VS Code extensions was unimaginable coming from Windows and I loved every bit of it. Since then, I’ve been a GNOME Shell extension fanatic and only upgraded my workstation when all extensions were made compatible with the latest version of GNOME. The addition of an upgrade assistant to Extension Manager made the wait easier, though not shorter. Now that my current distro of choice (Fedora Silverblue) allows upgrading early and safely, I decided to do something different: upgrade right away and contribute to the extensions with GNOME 44 support! These were the extensions I chose to contribute to: ddterm A drop down terminal extension for GNOME Shell. With tabs. Works on Wayland natively. I use it every day as my music player, with ncspot. Focus changer Change focus between windows in all directions, using vim-like mappings (Super + h, j, k, l). Can’t beat that muscle memory! Focused Window dbus Exposes a dbus method to get active window title and class. Required to get ActivityWatch working on mutter. Reading Strip Works as a reading guide for computer and this is really useful for people affected by dyslexia. Useful to avoid mixing up table rows and columns! Here’s how that experiment went. Upgrading to GNOME 44 As I said, version upgrades are safe and easy on Fedora Silverblue! All I had to do was pin my current deployment with: sudo ostree admin pin 0 Then rebase to latest Fedora release: rpm-ostree rebase fedora:fedora/38/x86_64/silverblue Locating extension directories With GNOME 44 installed on my machine, it was time to locate where the GNOME Shell extensions were installed. Extensions installed via EGO or Extension Manager are located at $HOME/.local/share/gnome-shell/extensions. Each extension has a dedicated folder, whose name looks like extension-name@author.example.com. Editing the metadata.json file In each of these directories there’s a metadata.json that includes a list called “shell-version” with the versions of GNOME supported by the extension. My first test was to simply add “44” to that list and see what happened, like so: "shell-version": [ "40", "41", "42", - "43" + "43", + "44" ], Restarting the session Wayland requires logging out to restart the session, so I logged out and logged in again. Tweaking, if needed To my surprise, three out of four extensions worked by simply changing the metadata.json file! The Focused Window dbus extension loaded properly, but when I tried to actually call its dbus method I got an error saying that the canshade window property didn’t exist. Since that didn’t seem essential for the extension’s functionality I removed the code related to that and tried again. This time, it worked perfectly! Contributing After using the extensions on GNOME 44 for a few days without any issues I decided to contribute to them. Some extensions use a specific commit tag for changes made to metadata.json, so I made sure to use them in my contributions. I think this is a simple but valuable contribution, making it an excellent gateway into contributing to GNOME Extensions and FOSS projects in general. Give it a shot! 🧩❤️
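    The following is not from the post; it is a small sketch that automates the manual metadata.json edit described above, appending "44" to the "shell-version" list of every extension installed under the user directory mentioned in the post. Back up the directory first.

      # Sketch automating the manual edit described above: add "44" to the
      # "shell-version" list of every locally installed extension's metadata.json.
      import json
      from pathlib import Path

      ext_root = Path.home() / ".local/share/gnome-shell/extensions"

      for metadata in ext_root.glob("*/metadata.json"):
          data = json.loads(metadata.read_text())
          versions = data.setdefault("shell-version", [])
          if "44" not in versions:
              versions.append("44")
              metadata.write_text(json.dumps(data, indent=2) + "\n")
              print(f"updated {metadata.parent.name}")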
  • Felipe Borges: GNOME will be mentoring 9 new contributors in Google Summer of Code 2023 (2023/05/05 09:15)
    We are happy to announce that GNOME was assigned nine slots for Google Summer of Code projects this year! GSoC is a program focused on bringing new contributors into open source software development. A number of long term GNOME developers are former GSoC interns, making the program a very valuable entry point for new members in our project. In 2023 we will be mentoring the following projects (Project Title / Contributor / Assigned Mentor(s)): Make GNOME platform demos for Workbench / Akshay Warrier / Sonny Piers, Andy Holmes; Rust and GTK 4 Bustle Rewrite / Dave Patrick Caberto / Bilal Elmoussaoui, Maximilian; Create a New “System” panel in GNOME Settings / Gotam Gorabh / Felipe Borges; Implement backlog search in Polari IRC client / Gurmannat Sohal / Carlos Garnacho, Florian Müllner; Integrate GNOME Network Displays features into GNOME Settings / Pedro Sader Azevedo / Felipe Borges, Claudio Wunder, Jonas Ådahl, Anupam Kumar; GNOME Crosswords Anagram Support / Pratham Gupta / jrb; Make GNOME Platform Demos for Workbench / Sriyansh Shivam / Sonny Piers, Andy Holmes; Add Acrostic Puzzles to GNOME Crosswords / Tanmay Patil / jrb; Flatpak synching between machines / Tim FB / Rasmus Thomsen. As part of the contributors’ acceptance into GSoC they are expected to actively participate in the Community Bonding period (May 4 – 28). The Community Bonding period is intended to help prepare contributors to start contributing at full speed starting May 29. The new contributors will soon get their blogs added to Planet GNOME making it easy for the GNOME community to get to know them and the projects that they will be working on. We would like to also thank our mentors for supporting GSoC and helping new contributors enter our project. If you have any questions, feel free to reply to this Discourse topic or message us privately at soc-admins@gnome.org ** This is a repost from https://discourse.gnome.org/t/announcement-gnome-will-be-mentoring-9-new-contributors-in-google-summer-of-code-2023/15232
  • Matthew Garrett: Twitter's e2ee DMs are better than nothing (2023/05/04 21:49)
    (Edit 2023-05-10: This has now launched for a subset of Twitter users. The code that existed to notify users that device identities had changed does not appear to have been enabled - as a result, in its current form, Twitter can absolutely MITM conversations and read your messages) Elon Musk appeared in an interview with Tucker Carlson last month, with one of the topics being the fact that Twitter could be legally compelled to hand over users' direct messages to government agencies since they're held on Twitter's servers and aren't encrypted. Elon talked about how they were in the process of implementing proper encryption for DMs that would prevent this - "You could put a gun to my head and I couldn't tell you. That's how it should be." tl;dr - in the current implementation, while Twitter could subvert the end-to-end nature of the encryption, it could not do so without users being notified. If any user involved in a conversation were to ignore that notification, all messages in that conversation (including ones sent in the past) could then be decrypted. This isn't ideal, but it still seems like an improvement over having no encryption at all. More technical discussion follows. For context: all information about Twitter's implementation here has been derived from reverse engineering version 9.86.0 of the Android client and 9.56.1 of the iOS client (the current versions at time of writing), and the feature hasn't yet launched. While it's certainly possible that there could be major changes in the protocol between now and launch, Elon has asserted that they plan to launch the feature this week so it's plausible that this reflects what'll ship. For it to be impossible for Twitter to read DMs, they need to not only be encrypted, they need to be encrypted with a key that's not available to Twitter. This is what's referred to as "end-to-end encryption", or e2ee - it means that the only components in the communication chain that have access to the unencrypted data are the endpoints. Even if the message passes through other systems (and even if it's stored on other systems), those systems do not have access to the keys that would be needed to decrypt the data. End-to-end encrypted messengers were initially popularised by Signal, but the Signal protocol has since been incorporated into WhatsApp and is probably much more widely used there. Millions of people per day are sending messages to each other that pass through servers controlled by third parties, but those third parties are completely unable to read the contents of those messages. This is the scenario that Elon described, where there's no degree of compulsion that could cause the people relaying messages to and from people to decrypt those messages afterwards. But for this to be possible, both ends of the communication need to be able to encrypt messages in a way the other end can decrypt. This is usually performed using AES, a well-studied encryption algorithm with no known significant weaknesses. AES is a form of what's referred to as symmetric encryption, one where encryption and decryption are performed with the same key. This means that both ends need access to that key, which presents us with a bootstrapping problem. Until a shared secret is obtained, there's no way to communicate securely, so how do we generate that shared secret? A common mechanism for this is something called Diffie-Hellman key exchange, which makes use of asymmetric encryption. 
In asymmetric encryption, an encryption key can be split into two components - a public key and a private key. Both devices involved in the communication combine their private key and the other party's public key to generate a secret that can only be decoded with access to the private key. As long as you know the other party's public key, you can now securely generate a shared secret with them. Even a third party with access to all the public keys won't be able to identify this secret. Signal makes use of a variation of Diffie-Hellman called Extended Triple Diffie-Hellman that has some desirable properties, but it's not strictly necessary for the implementation of something that's end-to-end encrypted. Although it was rumoured that Twitter would make use of the Signal protocol, and in fact there are vestiges of code in the Twitter client that still reference Signal, recent versions of the app have shipped with an entirely different approach that appears to have been written from scratch. It seems simple enough. Each device generates an asymmetric keypair using the NIST P-256 elliptic curve, along with a device identifier. The device identifier and the public half of the key are uploaded to Twitter using a new API endpoint called /1.1/keyregistry/register. When you want to send an encrypted DM to someone, the app calls /1.1/keyregistry/extract_public_keys with the IDs of the users you want to communicate with, and gets back a list of their public keys. It then looks up the conversation ID (a numeric identifier that corresponds to a given DM exchange - for a 1:1 conversation between two people it doesn't appear that this ever changes, so if you DMed an account 5 years ago and then DM them again now from the same account, the conversation ID will be the same) in a local database to retrieve a conversation key. If that key doesn't exist yet, the sender generates a random one. The message is then encrypted with the conversation key using AES in GCM mode, and the conversation key is then put through Diffie-Hellman with each of the recipients' public device keys. The encrypted message is then sent to Twitter along with the list of encrypted conversation keys. When each of the recipients' devices receives the message it checks whether it already has a copy of the conversation key, and if not performs its half of the Diffie-Hellman negotiation to decrypt the encrypted conversation key. Once it has the conversation key it decrypts the message and shows it to the user. What would happen if Twitter changed the registered public key associated with a device to one where they held the private key, or added an entirely new device to a user's account? If the app were to just happily send a message with the conversation key encrypted with that new key, Twitter would be able to decrypt that and obtain the conversation key. Since the conversation key is tied to the conversation, not any given pair of devices, obtaining the conversation key means you can then decrypt every message in that conversation, including ones sent before the key was obtained. (An aside: Signal and WhatsApp make use of a protocol called Sesame which involves additional secret material that's shared between every device a user owns, hence why you have to do that QR code dance whenever you add a new device to your account. I'm grossly over-simplifying how clever the Signal approach is here, largely because I don't understand the details of it myself. 
The Signal protocol uses something called the Double Ratchet Algorithm to implement the actual message encryption keys in such a way that even if someone were able to successfully impersonate a device they'd only be able to decrypt messages sent after that point even if they had encrypted copies of every previous message in the conversation) How's this avoided? Based on the UI that exists in the iOS version of the app, in a fairly straightforward way - each user can only have a single device that supports encrypted messages. If the user (or, in our hypothetical, a malicious Twitter) replaces the device key, the client will generate a notification. If the user pays attention to that notification and verifies with the recipient through some out of band mechanism that the device has actually been replaced, then everything is fine. But, if any participant in the conversation ignores this warning, the holder of the subverted key can obtain the conversation key and decrypt the entire history of the conversation. That's strictly worse than anything based on Signal, where such impersonation would simply not work, but even in the Twitter case it's not possible for someone to silently subvert the security. So when Elon says Twitter wouldn't be able to decrypt these messages even if someone held a gun to his head, there's a condition applied to that - it's true as long as nobody fucks up. This is clearly better than the messages just not being encrypted at all in the first place, but overall it's a weaker solution than Signal. If you're currently using Twitter DMs, should you turn on encryption? As long as the limitations aren't too limiting, definitely! Should you use this in preference to Signal or WhatsApp? Almost certainly not. This seems like a genuine incremental improvement, but it'd be easy to interpret what Elon says as providing stronger guarantees than actually exist.
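    The sketch below is not Twitter's code; it is a rough model of the flow described above, using the Python cryptography library: a random per-conversation AES-GCM key encrypts messages, and that key is wrapped for each recipient device via ECDH on NIST P-256. The HKDF step and the wrapping format are assumptions, since the post does not say how the shared secret becomes a wrapping key.

      # Rough model (not Twitter's implementation) of the scheme described above.
      import os
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      # each device registers the public half of a P-256 keypair
      sender_priv = ec.generate_private_key(ec.SECP256R1())
      recipient_priv = ec.generate_private_key(ec.SECP256R1())

      # one conversation key, reused for every message in the conversation
      conversation_key = AESGCM.generate_key(bit_length=256)

      # encrypt the message itself with the conversation key (AES in GCM mode)
      msg_nonce = os.urandom(12)
      ciphertext = AESGCM(conversation_key).encrypt(msg_nonce, b"hello", None)

      # wrap the conversation key for the recipient device: ECDH, then a KDF
      # (HKDF here is an assumption), then AES-GCM over the conversation key
      shared = sender_priv.exchange(ec.ECDH(), recipient_priv.public_key())
      wrap_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                      info=b"dm key wrap").derive(shared)
      wrap_nonce = os.urandom(12)
      wrapped_key = AESGCM(wrap_key).encrypt(wrap_nonce, conversation_key, None)

      # Anyone who controls a registered public key can run the same exchange,
      # recover conversation_key, and decrypt every message in the conversation,
      # past and future - which is exactly the weakness discussed above.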
  • Jonas Ådahl: Vivid colors in Brno (2023/05/04 19:09)
    Co-authored by Sebastian Wick & Jonas Ådahl. During April 24 to 26 Red Hat invited people working on compositors and display drivers to come together to collaborate on bringing the Linux graphics stack to the next level. There were three high level topics that were discussed at length: Color Management, High Dynamic Range (HDR) and Variable Refresh Rate (VRR). This post will go through the discussions that took place, and occasional rough consensus reached among the people who attended. The event itself aimed to be both as inclusive and engaging as possible, meaning participants could attend both in person, in the Red Hat office in Brno, Czech Republic, or remotely via a video link. The format of the event was structured in a way aiming to give remote attendees and physical attendees an equal opportunity to participate in discussions. While the hallway track can be a great way to collaborate, discussions accessible remotely were prioritized by having two available rooms with their own video link. This meant that if the main room wanted to continue on the same topic, while some wanted to do a breakout session, they could go to the other room, and anyone attending remotely could tag along by connecting to the other video link. In the end, the breakout room became the room where people collaborated on various things in a less structured manner, leaving the main room to cover the main topics. A reason for this is that the microphones in both rooms were a bit too good, effectively catching any conversation anyone had anywhere in the room. Making one of the rooms a bit more chaotic, while the other focused, also allowed for both ways of collaborating. For the kernel side, people working on AMD, Intel and NVIDIA drivers were among the attendees, and for user space there was representation from gamescope, GNOME, KDE, smithay, Wayland, weston and wlroots. Some of those people are community contributors and some of them were attending on behalf of Red Hat, Canonical, System76, sourcehut, Collabora, Blue Systems, Igalia, AMD, Intel, Google, and NVIDIA. We had a lot of productive discussion, ending up in total with a 20 (!) page document of notes. Color management & HDR: Wayland: Color management in the Linux graphics stack is shifting in the way it is implemented, away from the style used in X11 where the display server (X.org) takes a hands-off approach and the end result is dependent on individual client capabilities, to an architecture where the Wayland display server takes an active role to ensure that all clients, be they color aware or not, show up on screen correctly. Pekka Paalanen and Sebastian Wick gave a summary of the current state of digital color on Linux and Wayland. For full details, see the Color and HDR documentation repository. They described the in-development color-representation and color-management Wayland protocols. The color-representation protocol lets clients describe the way color channels are encoded and the color-management protocol lets clients describe the color channels’ meaning to completely describe the appearance of surfaces. It also gives clients information about how they can optimize their content to the target monitor capabilities to minimize the color transformations in the compositor. Another key aspect of the Wayland color protocols in development is that compositors will be able to choose what they want to support. This allows, for example, implementing HDR without involving ICC workflows. 
There is already a broad consensus that this type of active color management aligns with the Wayland philosophy and while work is needed in compositors and client toolkits alike, the protocols in question are ready for prototyping and review from the wider community. Colors in kernel drivers & compositors: There are two parts to HDR and color management for compositors. The first one is to create content from different SDR and HDR sources using color transformations. The second is signaling the monitor to enter the desired mode. Given the current state of kernel API capabilities, compositors are in general required to handle all of their color transformations using shaders during composition. For the short term we will focus on removing the last blockers for HDR signaling and in the long term work on making it possible to offload color space conversions to the display hardware which should ideally make it possible to power down the GPU while playing e.g. a movie. Short term: Entering HDR mode is done by setting the colorimetry (KMS Colorspace property) and overriding the transfer characteristics (KMS HDR_OUTPUT_METADATA property). Unfortunately the design of the Colorspace property does not mix well with the current broader KMS design where the output format is an implementation detail of the driver. We’re going to tweak the behavior of the Colorspace property such that it doesn’t directly control the InfoFrame but lets the driver choose the correct variant and transparently convert to YCC using the correct matrix if required. This should allow AMD to support HDR signaling upstream as well. The HDR_OUTPUT_METADATA property is a bit weird as well and should be documented. Changing it might require a mode set and changing the transfer characteristics part of the blob will make monitors glitch, while changing other parameters must not require a mode set and must not glitch. Both landing support upstream for the AMD driver and improvements to the documentation should happen soon, enabling proper upstream HDR signaling. Vendor specific uAPI for color pipelines: Recently a proposal for adding vendor-specific properties for exposing hardware color pipelines via KMS has been posted, and while it is great to see work being done to improve the situation in the Linux kernel, there are concerns that this opens up for per-vendor APIs that end up being necessary for compositors to implement, effectively reintroducing per-vendor GPU drivers in userspace outside of mesa. Still, upstream support in the kernel has its upsides, as it for example makes it much easier to experiment. A way forward discussed is to propose that vendor-specific color pipeline properties should be handled with care, by requiring them to be clearly documented as experimental, and disabled by default both with a build configuration option and an off-by-default module parameter. A proposal for this will be sent by Harry Wentland to the relevant kernel mailing lists. Color pipelines in KMS: Long term, KMS should support color pipelines without any experimental flags, and there is wide agreement that it should be done with a vendor-agnostic API. To achieve this, a proposal was discussed at length, but to summarize it, the goal is to introduce a new KMS object for color operations. A color operation object exposes a low-level mathematical function (e.g. matrix multiplication, 1D or 3D look-up tables) and a link to the next operation. 
To declare a color pipeline, drivers construct a linked list of these operations, for example 1D LUT → Matrix → 1D LUT to describe the current DEGAMMA_LUT → CTM → GAMMA_LUT KMS pipeline (a toy illustration of this linked-list idea is included at the end of this entry). The discussions primarily focused on per-plane color pipelines for the pre-blending stage, but the same concept should be reusable for the post-blending stage on the CRTC. Eventually this work should also make it possible to cleanly separate KMS properties which change the colors (i.e. color operations) from properties changing the mode and signaling to sinks, such as Broadcast RGB, Colorspace, max_bpc. It was also agreed that user space needs more control over the output format, i.e. what is transmitted over the wire. Right now this is a driver implementation detail and chosen such that the bandwidth requirements of the selected mode will be satisfied. In particular making it possible to turn off YCC subsampling, specifying the minimum bit depth and specifying the compression strength for DCC seems to have consensus. There are a lot more details that handle all the quirks that hardware may have. For more details and further discussion about the color pipeline proposal, head over to the RFC that Simon Ser just sent to the relevant mailing lists. Testing & VKMS: Testability of color pipelines and KMS in general was a topic that was brought up as well, with two areas of interest: testing compositors and the generic DRM layer in the kernel using VKMS, and testing actual kernel drivers. The state of VKMS is to some degree problematic; it currently lacks a large enough pool of established contributors that can take maintainership responsibilities, i.e. reviewing and landing code, but at the same time, there is an urge to make it a more central part of GPU driver development in general, where it can take a more active role in ensuring cross-driver conformance. How to create more incentive for both kernel developers and compositor developers to help out was also discussed, and while the ability to test compositors is a relatively good incentive, an idea discussed was to require new DRM properties to always get a VKMS implementation as well to be able to land. This is, however, not easy, since a significant amount of bootstrapping is needed to make that viable. Some ideas were thrown around, and hopefully something will come out of it; keep an eye on the relevant mailing lists for something related to this area. For testing actual drivers, the usage of Chamelium was discussed, and while everyone agreed it’s something that is definitely nice to have, it takes a significant amount of resources to maintain wired-up CI runners for the community to rely on. Ideally a setup that can be shared across the different compositors and GPU drivers would be great, but it’s a significant task to handle. Variable Refresh Rate: Smoothing out refresh rate changes: Variable Refresh Rate monitors driven at a certain mode have a minimum and maximum refresh cycle duration and the actual duration can be chosen for every refresh cycle. One problem with most existing VRR monitors however is that when the refresh duration changes too quickly, they tend to produce visible glitches. They appear as brightness changes for a fraction of a second and can be very jarring. To avoid them, each refresh cycle must change the duration only up to some fixed amount. The amount however varies between monitors, with some having no restriction at all. 
A VESA certification is currently being deployed aiming to certify monitors where any change in the refresh cycle duration does not result in glitches. For all other monitors, the increase and decrease in duration which does not result in glitches is unknown if not provided by optional EDID/DisplayID data blocks. Driving monitors glitch-free without machine-readable information therefore requires another approach. One idea is to make the limits configurable. Requiring all users to tweak and fiddle to make it work well enough, however, is not very user friendly, so another idea that was discussed is to maintain a database similar to the one used by libinput, but in libdisplay-info, that contains the required information about monitors, even if there is no such information made available by the vendor. With all of the required information, the smoothing of refresh rate changes still needs to happen somewhere. It was debated whether this should be handled transparently by the kernel, or if it should be completely up to user space. There are pros and cons to both ways, for example better timing ability in the kernel, but less black box magic if handled by user space. In the end, the conclusion is for user space components (i.e. compositors) to handle this themselves first, and then reconsider at some point in the future whether that is enough, or whether new kernel uAPI is needed. Low Framerate Compensation: The usual frame rates that a VRR monitor can achieve typically do not cover a bunch of often used low frame rates, such as 30, 25, or 24 Hz. To still be able to show such content without stutter, the display can be driven at a multiple of the target frame rate and present new content on every n-th refresh cycle (a small worked example is included at the end of this entry). Right now this Low Framerate Compensation (LFC) feature is built into the kernel driver, and when VRR is enabled, user space can transparently present content at refresh rates even lower than what the display supports. While this seems like a good idea, there are problems with this approach. For example the cursor can only be updated when there is a content update, making it very sluggish because of the low rate of content updates even though the screen refreshes multiple times. This either requires a special KMS commit which does not result in an immediate page flip but ends up on the refresh cycles inserted by LFC, or implementing LFC in user space instead. Like with the refresh rate change smoothing talked about earlier, moving LFC to user space might be possible but also might require some help from the kernel to be able to time page flips well enough. Wayland: For VRR to work, applications need to provide content updates on a surface in a semi-regular interval. GUI applications for example often only draw when something changed which makes the updates irregular, driving VRR to its minimum refresh rate until e.g. an animation is playing and VRR is ramping up the refresh rate over multiple refresh cycles. This results in choppy mouse cursor movements and animations for some time. GUI applications sometimes do provide semi-regular updates, e.g. during animations or video playback. Some applications, like games, always provide semi-regular updates. Currently there is no[1] Wayland protocol letting applications advertise that a surface works with VRR at a moment in time, or at all. There is no way for a compositor to automatically determine if an app or a surface is suitable for VRR either. 
For Wayland-native applications, a protocol to communicate this information could be created, but there are a lot of applications out there which would work fine with VRR but will not get updated to support this protocol. Maintaining a database similar to the one mentioned above, but for applications, was discussed, but there is no clear winner in how to do so, and where to store the data. Maintaining a list is cumbersome, and complicates the ability for applications to work with VRR on release, or on distributions with out-of-date databases. Another idea was a desktop file entry stating support, but this too has its downsides. All in all, there is no clear path forward in how to actually enable VRR for applications transparently without causing issues. [1] Except for a protocol proposal. Wrap-up: The hackfest was a huge success! Not only was this a good opportunity to get everyone up to speed and learn about what everyone is doing, having people with different backgrounds in the discussions made it possible to discuss problems, ideas and solutions spanning all the way from clients, through compositors, to drivers and hardware. Especially on the color and HDR topics we came up with good, actionable consensus and a clear path to where we want to go. For VRR we managed to pin-point the remaining issues and know which parts require more experimentation. For GNOME, color management, HDR and VRR are all topics that are being actively worked on, and the future is both bright and dynamic, not only when it comes to luminescence and color intensity, but also when it comes to the rate at which monitors present all these intense colors. Dor Askayo, who has been working on bringing VRR to GNOME, attended part of the hackfest, and together we can hopefully bring experimental VRR to GNOME soon. There will be more work needed to iron out the overall experience, as covered above, but getting the fundamental building blocks in place is a critical first step. For HDR, work has been going on to attach color state information to the scene graph, and at the hackfest Georges Basile Stavracas, Sebastian Wick and Jonas Ådahl sat down and sketched out a new Clutter rendering API that aims to replace the current Clutter paint nodes API that is used in Mutter and GNOME Shell, which will make color transformations a first-class citizen. We will initially focus on using shaders for everything, but down the road, the goal is to utilize the future color pipeline KMS uAPI for both performance and power consumption improvements. We’d like to thank Red Hat for organizing and hosting the hackfest and for allowing us to work on these interesting topics, Red Hat and Collabora for sponsoring food and refreshments, and especially Carlos Soriano Sanchez and Tomas Popela for actually doing all the work making the event happen. It was great. Also thanks to Jakub Steiner for the illustration, and Carlos Soriano Sanchez for the photo from the hackfest. For another great hackfest write-up, head over to Simon Ser’s blog post.
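    As referenced earlier in this entry, here is a purely illustrative toy model (emphatically not the proposed KMS uAPI) of the linked-list-of-color-operations idea: each operation exposes one mathematical function and a link to the next operation.

      # Toy model of a color pipeline as a linked list of color operations.
      # Illustrative only; the real proposal is a KMS uAPI, not Python objects.
      from dataclasses import dataclass
      from typing import Callable, Optional, Tuple

      Pixel = Tuple[float, float, float]

      @dataclass
      class ColorOp:
          kind: str                          # e.g. "1d_lut", "3d_lut", "matrix"
          apply: Callable[[Pixel], Pixel]    # the low-level mathematical function
          next: Optional["ColorOp"] = None   # link to the next operation

      def run_pipeline(op: Optional[ColorOp], pixel: Pixel) -> Pixel:
          while op is not None:
              pixel = op.apply(pixel)
              op = op.next
          return pixel

      # DEGAMMA_LUT -> CTM -> GAMMA_LUT expressed as 1D LUT -> matrix -> 1D LUT;
      # identity functions are used here just to show the shape of the pipeline.
      gamma = ColorOp("1d_lut", lambda p: p)
      ctm = ColorOp("matrix", lambda p: p, next=gamma)
      degamma = ColorOp("1d_lut", lambda p: p, next=ctm)

      print(run_pipeline(degamma, (0.5, 0.25, 0.1)))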
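    Also as mentioned above, here is a small worked example (not from the post) of the Low Framerate Compensation idea: pick an integer multiple of the content rate that fits inside the monitor's VRR range and present new content every n-th refresh cycle. The 48-144 Hz range below is made up for illustration.

      # Worked example of LFC: find the multiplier n so that n * content rate
      # falls inside the (made-up) VRR range, then present every n-th cycle.
      def lfc_multiplier(content_hz: float, vrr_min_hz: float, vrr_max_hz: float) -> int:
          n = 1
          while n * content_hz < vrr_min_hz:
              n += 1
          if n * content_hz > vrr_max_hz:
              raise ValueError("content rate cannot be matched within the VRR range")
          return n

      for content in (24, 25, 30, 60):
          n = lfc_multiplier(content, 48, 144)
          print(f"{content} Hz content -> drive at {content * n} Hz, new frame every {n} cycles")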