Gnome Planet - Latest News

  • Christian Hergert: Ptyxis Progress Support (2024/12/03 20:30)
    The upcoming systemd v257 release will have support for a feature originating from ConEmu (a terminal emulator for Windows) which was eventually adopted by Windows Terminal. Specifically, it is an OSC (Operating System Command) escape sequence which defines progress state. Various systemd tools will support it natively. Terminal emulators that do not support it simply ignore the OSC sequence, while those that do may provide additional UI for the application. Lennart discussed this briefly in his ongoing systemd v257 features series on Mastodon, so I took a quick stab at implementing the sequence parsing for VTE-based terminals. That work has since been iterated upon and landed in VTE. Additionally, Ptyxis now has corresponding code to support it. Once GNOME CI is back up and running smoothly this will be available in the Ptyxis nightly build.
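The sequence in question is the ConEmu-originated `OSC 9;4` progress report. As a rough sketch of what an application emits (the state values follow ConEmu's documented convention: 0 = remove, 1 = normal, 2 = error, 3 = indeterminate, 4 = paused; the helper names here are illustrative, not from systemd or VTE):

```python
import sys

# ConEmu-style progress OSC: ESC ] 9 ; 4 ; <state> ; <percent> ST
# States: 0=remove, 1=normal, 2=error, 3=indeterminate, 4=paused.
def progress_sequence(state: int, percent: int = 0) -> str:
    return f"\x1b]9;4;{state};{percent}\x1b\\"

def report_progress(percent: int) -> None:
    # Terminals without support silently ignore the sequence,
    # which is why tools can emit it unconditionally.
    sys.stdout.write(progress_sequence(1, percent))
    sys.stdout.flush()
```

A long-running job could call `report_progress(n)` after each step and emit `progress_sequence(0)` on completion to clear the indicator.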
  • Michael Meeks: 2024-12-02 Monday (2024/12/02 14:14)
    Mail chew, 1:1's with Miklos & Lily, lunch. Set off to Paris via the Eurostar.
  • Michael Meeks: 2024-12-01 Sunday (2024/12/01 21:00)
    All Saints, family service + Baptism in the morning, caught up with wider church family; home for Pizza lunch with E. Put up Christmas decorations left & right. Slugged variously, watched Night Action, some Fallout - gratuitous gore, but interesting effects & characters.
  • This Week in GNOME: #176 Command History (2024/11/29 00:00)
    Update on what happened across the GNOME project in the week from November 22 to November 29.
    GNOME Core Apps and Libraries
    GJS: Use the GNOME platform libraries in your JavaScript programs. GJS powers GNOME Shell, Polari, GNOME Documents, and many other apps. ptomato reports: This week, Gary added a feature to the GJS command-line interpreter: command history is now saved between runs. Look for this in GNOME 48.
    Third Party Projects
    xjuan says: Cambalache version 0.94.0 is out! Release notes: Gtk 4 and Gtk 3 accessibility support; support for property subclass override defaults; AdwDialog placeholder support; improved object description in hierarchy; lots of bug fixes and minor UI improvements. Read more about it at https://blogs.gnome.org/xjuan/2024/11/26/cambalache-0-94-released/
    nokyan announces: Resources 1.7 has been released with support for Neural Processing Units (NPUs), the ability to select multiple processes, and swap usage columns for the Apps and Processes views. Additionally, temperatures are now also displayed as graphs and there are a couple of improvements for AMD GPUs regarding media engine and compute utilization. The update is of course available on Flathub. Enjoy!
    Jan-Michael Brummer says: TwoFun 0.5.1 has been released. It’s a two-player game for touch devices, a smaller game to kill some time. This time the user interface gained some small improvements and bugfixes. Enjoy!
    GNOME Foundation
    Rosanna announces: Some sad news to report this week: the GNOME shop is currently closed to new orders. If you have an outstanding order that has not yet arrived and have not already contacted me, please let me know by forwarding the order to info@gnome.org. If you have any experience with running an online shop like the one we have and have the time and patience to help me troubleshoot and explain it to me, please reach out as well! I would be most grateful. Hope everyone in the US celebrating this week has had a wonderful Thanksgiving!
    That’s all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
  • Udo Ijibike: Outreachy Internship Series: Files Usability Test Report (2024/11/27 11:55)
    During my Outreachy internship with GNOME, Tamnjong Larry Tabeh and I conducted user research exercises under the inspiring mentorship of Allan Day and Aryan Kaushik. In this blog post, I’ll discuss the usability test we conducted for GNOME’s Files, also known as Nautilus. This post will introduce the study, outline our methodology, and present our key findings from the usability test. I’ve also attached a downloadable report at the end of this post that discusses in detail our observations and recommendations for each task performed in the usability test. Without further ado, let’s jump right in!
    1. Introduction
    Files is the default file manager of the GNOME desktop. It provides a simple and integrated way of managing files on a Linux-based OS, supporting all the basic functions of a file manager and more. With recent GNOME releases introducing significant changes to the Files user experience, and more improvements planned for subsequent releases, the design team wanted to assess the effectiveness of these updates and learn more about other aspects of the user experience. To support these efforts, we executed a user research project to identify areas for improvement and gather actionable insights from observed user behaviours that can inform design decisions when addressing identified issues.
    1.1. Research Goals
    Our research goals were to:
    - Assess the effectiveness of the new menu structure and the discoverability of the following menu items: Icon Size editors, Properties, Select All, Undo/Redo, Sort, Open Item Location, Show Hidden Files, Add To Bookmark.
    - Evaluate the ease of use of Files’s Search feature and the intuitiveness of its Search Filters.
    - Investigate the extent to which any difficulty experienced when right-clicking an empty space in List View impacts the user experience when accessing a folder context menu.
    1.2. Research Questions
    Upon completion of the study, we wanted to be able to answer the following questions:
    - Menu Structure: Is the current organization of the menus effective? Can people find the buttons they need for basic tasks when they need them?
    - Search Experience: Do people understand how to search in Files? Do people understand the search filters and how to use them? Are the search filters effective for their context of use?
    - List View Layout: Is it challenging for people to access the folder context menu in list view when they have a lot of files? Does the current design meet user expectations when accessing the folder context menu in list view?
    2. Study Design
    2.1. Approach
    To answer our research questions, we opted for a moderated, task-based usability test. This approach meant that we could simulate typical usage conditions and observe participants interacting with Files. This made it easy for us to identify pain points and gaps in the specific aspects of the Files user experience we were interested in, and allowed us to engage participants in discussions that deepened our understanding of the challenges they experienced with Files. To plan the study, we started by defining the ideal participant for our research goals. Next, we established an optimal sequence for the tasks we wanted participants to perform, then crafted a scenario for each, after which we designed the testing environment. We concluded preparations with a pilot test to identify weaknesses in the study plan and implement revisions where necessary before testing with recruited participants.
    2.2. Recruitment Criteria
    To generate the data we needed, we had to observe individuals who were unfamiliar with the Files menu structure. This requirement was crucial, as previous use of Files could influence a participant’s interactions, which would have made it difficult for us to discern valid usability issues from their interactions.
    We also needed participants to be able to perform basic computing tasks independently: tasks like navigating software and managing files on their computer. This proficiency was important for ensuring that any challenges observed during the study were specifically related to the Files user experience, rather than stemming from a lack of general computer skills. Therefore, we defined our recruitment criteria as follows:
    - Has never used GNOME prior to their usability test session.
    - Is able to use a computer moderately well.
    2.3. Testing Environment
    During testing, participants interacted with development versions of Files, specifically versions 47.rc-7925df1ba and 47.rc-3faeec25e. Both versions were the latest available at the time of testing and had identical implementations of the features we were targeting. To elicit natural interactions from the participants, we enhanced the testing environment with a selection of files and folders that were strategically organized, named, and hidden, to create states in Files that encouraged and facilitated the tasks we planned to observe.
    3. Participant Overview
    We recruited and tested with six first-time GNOME users, aged twenty-one to forty-seven, from diverse backgrounds, with varying levels of computer expertise. This diversity in the sample helped us keep our findings inclusive by ensuring that we considered a broad range of experiences in the usability test. Although the majority of the participants reported current use of Windows 11, a few also reported previous use of macOS and earlier versions of Windows.
    4. Methodology
    For this usability test, we conducted in-person sessions with six computer users who met our selection criteria. The moderating researcher followed a test script and concluded each session with a brief, semi-structured interview.
    Participants attempted eleven tasks in the following order:
    1. Change the icon size
    2. Find the size of a folder with Properties
    3. Select all files in a folder with “Select All”
    4. Undo an action with the “Undo” button
    5. Change the sort order
    6. Change Files display from grid view to list view
    7. Create a new folder while in list view
    8. Find a file using the search feature, with filters
    9. Go to a file’s location from search results with “Open Item Location”
    10. Reveal hidden items in a folder with “Show Hidden Files”
    11. Add a folder to the sidebar with “Add to Bookmarks”
    Participants were encouraged to think aloud continuously while performing the tasks, and each session lasted at least 40 minutes. All sessions were recorded with participant consent and were later transcribed for analysis. To analyze the collected data, we summarized the participants’ experiences for each task using Jim Hall’s heat map technique and synthesized findings from our session notes through thematic analysis.
    5. Usability Test Results
    Applying Jim Hall’s heat map technique, we summarized the observed experience for all tasks performed in the usability test. The heat map shows the completion rate for each task and the level of difficulty participants experienced when performing them. The tasks are in rows and the participants in columns; the cell where a row (task) intersects with a column (participant) captures the task outcome and the relative difficulty that participant experienced during their attempt. A cell is green if the participant completed the task without any difficulty, yellow if with very little difficulty, orange if with moderate difficulty, red if with severe difficulty, black if the participant was unable to complete the task, and gray if the participant’s approach was outside the scope of the study.
    6. Key Insights
    1. Menu structure
    The menu structure was generally easy for participants to navigate. Despite using GNOME and Files for the first time during their testing sessions, they adapted quickly and were able to locate most of the buttons and menu items required to complete the tasks. The best performing tasks were “Change the sort order” and “Reveal hidden items in a folder”; the worst performing were “Change the icon size” and “Add a folder to Bookmarks”. Overall, the participants easily found the following menu items when needed: Sort, Show Hidden Files, Properties, Open Item Location. But they struggled to find these menu items when needed: Icon size editors, Select All, Undo/Redo, Add To Bookmark.
    In situations where participants were familiar with a shortcut or gesture for performing a task, they almost never considered checking the designated menus for a button. We observed this behavior in every participant across several of the tasks. Nonetheless, Files excelled in this area with its remarkable support for widely used shortcuts and cross-platform conventions. We also observed that when these actions worked as expected, it had the following effects on the user’s experience:
    - It reduced feelings of apprehension in participants and encouraged them to engage more confidently with the software.
    - It made it possible for the participants to discover Files’s menu structure without sacrificing their efficiency.
    2. Search
    The “Search Current Folder” task flow was very intuitive for all participants. The search filters were also very easy to use, and they effectively supported participants during the file search. However, we found that the clarity of some filter labels could be reasonably improved by tailoring them to the context of a file search.
    3. List View Layout
    The current List View layout did not effectively support typical user behavior when accessing the folder context menu.
    4. General Observations
    When the participants engaged in active discovery of Files, we observed behaviour patterns linked to the following aspects of the design:
    Familiarity: When participants attempted familiar tasks, they looked for familiar cues in the UI. When a UI component looked familiar to participants, they interacted without hesitation, expecting the interaction to lead to the same outcomes they were accustomed to from prior experience with similar software. When a UI component was unfamiliar, participants were more restrained and cautious when interacting with it. For example, participants interacted differently with the “Search Current Folder” button compared to the “List/Grid View” and “View Options” buttons. With the “Search Current Folder” button, participants took longer to identify the icon, and half of the sample checked the tooltip for confirmation before clicking, because the icon was unfamiliar. In contrast, participants reacted much more quickly during the first task, instinctively clicking the “List/Grid View” or “View Options” icons without checking the tooltip. Some even assumed the two buttons were part of a single control and interacted with them as if they were combined, because they were familiar with the icons and the design pattern.
    Tooltips: With many icon buttons in the Files UI, we observed participants relying heavily on tooltips to discover the UI, mostly as a way to validate their assumptions about the functionality of an icon button, as highlighted above.
    Clear and effective labels: We observed that the more abstract or vague a label was, the more participants struggled to interpret it correctly.
    In the “Open Item Location” task, we guided the participants who were unable to find the menu item to the file’s context menu, then asked them whether they thought there was a button that could have helped them complete the task. Both participants who had given up on the task instantly chose the correct option. In the “Add To Bookmarks” task, by contrast, almost everyone independently found the menu item, but the majority were hesitant to click on it because of the word “Bookmarks” in the label.
    Layout of Files: By the end of most sessions, participants had concluded that controls in the white (child) section of the layout affected elements within that section, while controls in the sidebar were relevant only to elements in the sidebar, even though this isn’t always how the Files menu structure is actually organized. Therefore, when participants needed to perform an action they believed would affect elements in the child section of the layout, most of them instinctively checked that same section for an appropriate control.
    7. Conclusion, Reflections and Next Steps
    If you’d like to learn about our findings and the identified usability issues for each task, here is a detailed report that discusses how the participants interacted, alongside our recommendations: Detailed Report for Files Usability Test.
    Overall, the usability test effectively supported our research goals and provided qualitative insights that directly addressed our research questions. Beyond these insights, we also noted that users have preferences for performing certain tasks. Future research can build on this by exploring the usage patterns of Files users to inform decisions around the most effective ways to support them.
    Reflecting on the study’s limitations, a key aspect that may have influenced our results was the participant sample: although unintended, it was predominantly composed of Windows 11 users.
    Ideally, a more diverse group that included current users of different operating systems could have further enriched our findings by providing a broader range of experiences to consider. We partly mitigated this limitation by recognizing that the participants who had previous experience with other operating systems brought knowledge from those interactions into their use of Files, which likely influenced their behaviors and expectations during the test.
    8. Acknowledgements
    I gained a lot of valuable skills from my internship with GNOME: I significantly improved my communication skills, learned practical skills for designing and executing user research projects using different qualitative and quantitative research methods, and developed a sense for the more nuanced but critical considerations necessary for ensuring the reliability and validity of research findings through the various phases of a study, and how to address them in research planning and execution. So, I’d like to conclude by expressing my profound gratitude to everyone who made this experience so impactful. I’d like to thank my mentors, Allan Day and Aryan Kaushik, for their guidance, insightful feedback, and encouragement throughout and beyond the internship; the GNOME community, for the warm welcome and support; and Outreachy, for making it possible for me to have this experience. I greatly enjoyed working on this project and I expect to make more user research contributions to GNOME. Thank you!
  • Juan Pablo Ugarte: Cambalache 0.94 Released! (2024/11/27 00:29)
    Hello, I am pleased to announce a new Cambalache stable release.
    Version 0.94.0 – Accessibility Release!
    - Gtk 4 and Gtk 3 accessibility support
    - Support property subclass override defaults
    - AdwDialog placeholder support
    - Improved object description in hierarchy
    - Lots of bug fixes and minor UI improvements
    How it started
    A couple of months ago I decided to make a poll on Mastodon about which feature people would like to see next. To my surprise, GtkExpression did not come up first and GResources was not last.
    Data Model
    First things first: how do we store a11y data in the project? So what are we trying to store? From the Gtk documentation:
    GtkWidget allows defining accessibility information, such as properties, relations, and states, using the custom <accessibility> element:
    <object class="GtkButton" id="button1">
      <accessibility>
        <property name="label">Download</property>
        <relation name="labelled-by">label1</relation>
      </accessibility>
    </object>
    These look a lot like regular properties, so my first idea was to store them as properties in the data model. I decided to create one custom/fake interface class for each type of a11y data: CmbAccessibleProperty, CmbAccessibleRelation and CmbAccessibleState. These are hardcoded in the cmb-catalog-gen tool and look like this:
    # Property name: (type, default value, since version)
    self.__a11y_add_ifaces_from_enum([
        (
            "Property",
            "GtkAccessibleProperty",
            {
                "autocomplete": ["GtkAccessibleAutocomplete", "none", None],
                "description": ["gchararray", None, None],
                ...
            }
        ),
        (
            "Relation",
            "GtkAccessibleRelation",
            {
                "active-descendant": ["GtkAccessible", None, None],
                "controls": ["CmbAccessibleList", None, None],  # Reference List
                "described-by": ["CmbAccessibleList", None, None],  # Reference List
                ...
            }
        ),
        (
            "State",
            "GtkAccessibleState",
            {
                "busy": ["gboolean", "False", None],
                "checked": ["CmbAccessibleTristateUndefined", "undefined", None],
                "disabled": ["gboolean", "False", None],
                "expanded": ["CmbBooleanUndefined", "undefined", None],
                ...
            }
        )
    ])
    This function will create the custom interface with all the properties and make sure all values in the GtkEnumeration are covered. One fundamental difference from regular properties is that some a11y relations can be used more than once to specify multiple values. To cover this I created a new value type called CmbAccessibleList, which is simply a comma-separated list of values. This way the import and export code can handle loading and exporting a11y data into the Cambalache data model.
    Editing a11y data in the UI
    Since these interfaces are not real (no actual widget implements them), they won't show up automatically in the UI. This is easily solved by adding a new “a11y” tab to the object editor which only shows a11y interface properties. At this point it is possible to create and edit accessibility metadata for any UI, but as Emmanuele pointed out, not every a11y property and relation is valid for every role. To know what is valid you need to read the WAI-ARIA specs, or write a script that pulls all the metadata from them. With this metadata handy it is easy to filter properties and relations depending on the a11y role. BTW, keep in mind that the accessible-role property should not be changed under normal circumstances.
    Where to get it?
    From Flathub:
    flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub ar.xjuan.Cambalache
    or directly from gitlab:
    git clone https://gitlab.gnome.org/jpu/cambalache.git
    Matrix channel
    Have any questions? Come chat with us at #cambalache:gnome.org
    Mastodon
    Follow me on Mastodon at @xjuan for news related to Cambalache development.
    Happy coding!
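Since CmbAccessibleList is described as a plain comma-separated string, the export step presumably splits it back into one repeated element per value, matching the `<accessibility>` example above. A minimal sketch of that idea (the function name and flow are illustrative, not Cambalache's actual export code):

```python
# Sketch: expand a comma-separated CmbAccessibleList value into the
# repeated <relation> elements GtkBuilder expects. Illustrative only;
# not Cambalache's real implementation.
import xml.etree.ElementTree as ET

def export_relations(obj: ET.Element, name: str, cmb_list_value: str) -> None:
    a11y = obj.find("accessibility")
    if a11y is None:
        a11y = ET.SubElement(obj, "accessibility")
    # Split the stored list, drop empty entries, emit one element per value.
    for value in filter(None, (v.strip() for v in cmb_list_value.split(","))):
        rel = ET.SubElement(a11y, "relation")
        rel.set("name", name)
        rel.text = value

obj = ET.Element("object", {"class": "GtkButton", "id": "button1"})
export_relations(obj, "described-by", "label1, label2")
```

The import side would do the reverse: collect repeated `<relation>` elements with the same name and join their values with commas into a single stored string.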
  • Daniel García Moreno: Hackweek 24 (2024/11/22 11:00)
    It's time for a new Hack Week. Hack Week 24 ran from November 18th to November 22nd, and I decided to join the New openSUSE-welcome project this time. The idea of this project is to revisit the existing openSUSE welcome app, and I've been trying to help here, specifically for the GNOME desktop installation.
    openSUSE-welcome
    Right now, after installing any openSUSE distribution with a graphical desktop, the user is welcomed on first login by a custom welcome app. This custom application is Qt/QML with some basic information and useful links. The same generic application is used for all desktops, but upstream applications for this purpose now exist for the popular desktops, so we talked about it on Monday morning and decided to use desktop-specific apps. For GNOME, we can use the GNOME Tour application.
    gnome-tour
    GNOME Tour is a simple Rust/GTK 4 application with some fancy images in a slideshow. The application is generic and just shows information about the GNOME desktop, so I created a fork for openSUSE to do some openSUSE-specific customization and use this application as the openSUSE welcome in the GNOME desktop for Tumbleweed and Leap.
    Desktop patterns, the welcome workflow
    After some testing and investigation of the current workflow for the welcome app:
    - The x11_enhanced pattern recommends the opensuse-welcome app. We can add a Recommends: gnome-tour to the gnome pattern.
    - The application runs via XDG autostart, so the gnome-tour package should put the file in /etc/xdg/autostart and set it to hidden on close.
    - On a system with multiple desktops installed, we can choose the specific welcome app using the OnlyShowIn/NotShowIn keys in the desktop file.
    So I've created a draft PR to not show the openSUSE-welcome app in GNOME, and I also have the gnome-tour fork in my home OBS project. I've been testing this configuration in Tumbleweed with GNOME, KDE and XFCE installed and it works as expected. The openSUSE-welcome app is shown in KDE and XFCE, and the gnome-tour app is only shown in GNOME.
    Next steps
    The next steps to make GNOME Tour the default welcome app for openSUSE GNOME installations are:
    - Send the forked gnome-tour package to the GNOME:Next project in OBS.
    - Add the Recommends: gnome-tour to patterns-gnome in the GNOME:Next project in OBS.
    - Make sure that no other welcome application is shown in GNOME.
    - Review openQA tests that expect opensuse-welcome and adapt them for the new application.
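The autostart arrangement described above can be sketched as an XDG autostart desktop entry; a minimal, illustrative example (the file name and Name/Exec values are assumptions, not the exact packaged file):

```ini
# /etc/xdg/autostart/org.gnome.Tour.desktop (illustrative)
[Desktop Entry]
Type=Application
Name=Tour
Exec=gnome-tour
# Show this welcome app only in GNOME sessions; the generic
# opensuse-welcome entry would carry NotShowIn=GNOME; instead,
# so KDE and XFCE keep the existing app.
OnlyShowIn=GNOME;
```

"Hidden on close" is typically achieved by the app writing a copy of this entry with Hidden=true to ~/.config/autostart, which overrides the system-wide file on subsequent logins.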
  • This Week in GNOME: #175 Magic (2024/11/22 00:00)
    Update on what happened across the GNOME project in the week from November 15 to November 22.
    GNOME Core Apps and Libraries
    GJS: Use the GNOME platform libraries in your JavaScript programs. GJS powers GNOME Shell, Polari, GNOME Documents, and many other apps. ptomato says: Gary Li added support for source maps to GJS. If you use build tools such as TypeScript for source code transformation, you can ship source map files alongside your built JS files and make sure your build tool adds the magic source map comment. You will then get the original source locations printed in stack traces and in the debugger.
    Third Party Projects
    Konstantin Tutsch announces: Lock v1.1.0 is here! The user experience of encrypting data has drastically improved, with manual entry of a key’s UID now obsolete. You can now simply choose the key you want to encrypt for from a list of all available keys with a single click! Fingerprint access of keys has also improved: you can now copy a key’s fingerprint by clicking on its row during keyring management. Available on Flathub.
    Mateus R. Costa reports: Today I have released version 1.1.1 of bign-handheld-thumbnailer. This new version is a patch release with some small tweaks, and I believe there won’t be much to add to the program for a while. bign-handheld-thumbnailer, for those who haven’t heard of it, is a thumbnailer for Nintendo DS and 3DS ROMs. It was created as a replacement for gnome-nds-thumbnailer (which only created thumbnails for NDS ROMs and has recently been archived), but also gained the ability to generate thumbnails for 3DS ROMs. The new 1.1.1 version is available in a copr for Fedora 40, 41 and Rawhide (Fedora 39 is left out as it will be EOL in a few days). For the next steps, and since gnome-nds-thumbnailer has been archived, it should be a good moment to try to get the project added into official distro repos. I will be attempting to go through the Fedora process; if you are on a different distro and this thumbnailer is useful to you, consider asking for an official package. (Another possibility is shipping the thumbnailer as a Flatpak in the near future, if that functionality is ever added…)
    Turtle: Manage git repositories in Nautilus. Philipp says: Turtle 0.11 has been released.
    - Clean and reset dialog: a clean and reset dialog has been added to clean a repository of untracked files or reset the current branch to a specific reference.
    - Diff and log updates: it is now possible to compare the working directory to a reference in the diff dialog. Renamed files are now shown as one entry in the commit table instead of a removed and an added entry. Additionally, the context menu in the log dialog has been extended with a push action to push the selected branch.
    - Minor updates: there are more minor updates; for the full list see the changelog.
    Parabolic: Download web video and audio. Nick announces: Parabolic V2024.11.1 is here! This update contains fixes for various bugs users were experiencing. Here’s the full changelog:
    - Fixed an issue where file names that included a period were truncated
    - Fixed an issue where long file names caused the application to crash
    - Fixed an issue where external subtitle files were created although embedding was supported
    - Fixed an issue where generic downloads were unable to be opened upon completion
    - Updated yt-dlp to 2024.11.18
    That’s all for this week! See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
  • Sam Thursfield: Status update, 21/11/2024 (2024/11/21 12:44)
    A month of low energy here. My work day is spent on a corporate project which I can’t talk about due to NDAs, although I can say it’s largely grunt work at the moment. I’m not much of a gamer but sometimes you need a dose of solid fun. I finally started playing through Age of Empires II: The Conquerors, which has aged 24 years at this point, like a fine whisky. I played the original AoE when it came out, and I loved the graphics and the scenarios but got burned out by how dumb the units would be. All that was greatly improved in the second edition; although your guys will still sometimes wander off to chip away single-handedly at an enemy castle with their tiny sword, the new keyboard shortcuts make it less frustrating to micro-manage an army. I guess this is old news for everyone except me but, what a game. I’m preparing some QA-related talk submissions for FOSDEM 2025. I haven’t had time or energy to work on QA testing in GNOME, but I still have a clear idea of how we can move forwards, and I’ll keep talking about it a little longer to see if we can really go somewhere. There is still a small community joining our monthly call, which gives a certain momentum to the project. In terms of search, unfortunately I don’t feel much momentum at the moment. Besides the heroic contributions from Carlos, we did get a new IDE from Demigod’s and Rachel’s summer of code project. Somehow that hasn’t even made its way into Fedora 41, despite being part of the latest GNOME 47 release, so it’s still tricky to use in demos. I looked at submitting a desktop search related talk to FOSDEM but there’s not a single dev room I can see where such a proposal would fit. There we are. One final thought: this was a pretty engaging 20-minute talk by Cabel Sasser on a bizarre mural he saw at a McDonald’s restaurant in Washington, USA. I recommend watching it if you haven’t, but you can also jump straight to the website about the mural’s artist, Wes Cook.
Something jumped out at me in the middle of the talk when he said “We all want to be seen.” Most folk want some recognition of the work we do and some understanding. In the software industry it’s very difficult because what we do is so specialised. But we’re now at a point where graphical desktops have been mainstream for nearly 30 years. Everyone uses one in their job. Smartphones are 15 years old and the tech isn’t hugely evolving. A lot of this stuff is based on open source projects with 15+ year histories, and the people who can tell the stories of how they were made and how they work are all still around and still active. It could be worth spending some effort recognising what we have, and talking about it, and the people who make it happen. (If only we could have a desktop room at FOSDEM again…)
  • Adrien Plazas: Capitole du Libre and Discrimination (2024/11/20 23:00)
    On the weekend of November 16 and 17, 2024, I had the pleasure of attending Capitole du Libre (CdL), a lovely conference held every year in Toulouse, and I want to look back on it. CdL brings together the French free software community, with notable representation from the non-profit world. There is an associative village covering a wide swath of the French libre movement beyond software, technical talks accessible to all levels of knowledge, and more political talks that invite us to reflect on and evolve the free software movement. That makes it a very jovial and convivial conference that you can attend with your family.
    I have taken part in CdL before, running booths to talk about GNOME on smartphones in 2018, 2019 and 2022. In 2019 I also gave a talk on my work to bring GNOME to smartphones, and in 2022 I had the pleasure of interviewing David Revoy in the conference hallways. It's a conference I appreciate and enjoy taking part in, and 2024 is the first edition I attended as a visitor.
    Memorable talks
    For once, I came to CdL to attend talks, and a few of them left a pleasant mark on me. Here is a brief recap, in chronological order, of the ones that were the high points of my visit.
    On Saturday at 15:00, Valentin Deniaud gave a biting and very funny talk on the fight of the free software publisher Entr'ouvert against Orange, recounting how the telecom giant was convicted, after decades of struggle, for failing to respect a free software license. The talk, full of anecdotes and humor, traces a laborious journey through various areas of law and the French justice system.
    The same afternoon at 16:30, Armony Altinier and Jean-Philippe Mengual demonstrated the accessibility limitations of the Moodle learning platform for blind users.
Above all, the talk explains how the people who use this platform to offer educational material can adapt the teaching content and assessment methods for greater inclusivity. On Sunday at 10:30, a speaker representing the Skeptikón association drew the links between free software, skepticism and politics. The talk invites us to face the dark times heralded by the global rise of fascism, presenting various means of confronting it, such as skepticism, the libre movement and unionism, with particular emphasis on the need to fight feelings of futility and fatalism by building a sense of "us". It was followed at 11:30 by a talk by Isabella Vanni describing how April works and its journey toward better gender inclusion. Starting from the origins of computing as devalued, feminized work, up to its revaluation and reappropriation by men in the 1980s, she laid out the mechanisms holding back the re-inclusion of women in the field, and ways to fight them. The conference resumed at 14:00 with a talk by Khrys offering a dive into the feminine origins of computing, weaving links to artificial intelligence and Luddism. Through the lens of the Prometheus myth and its various science-fiction reinterpretations, she contrasted patriarchal and feminine visions of technical innovation, inviting us to be technocritical of artificial intelligence through a feminist approach. Good intentions on inclusion, without follow-through CdL has a code of conduct that every participant must respect, whether visitor, guest speaker or organizer. 
The code of conduct states that the organizers want to prevent any kind of discrimination, that failure to respect these rules of decency can lead to exclusion from the event, and it invites us to report any discrimination we are victim to or witness of. All of this is laudable, but its enforcement raises some problems for me. How do you give feedback when it is the organization of the event as a whole that is discriminatory, shutting entire segments of the population out? How do you give feedback when the discrimination happens on the main stage, caused by speakers during talks or the round table, under the eyes of an organization that lets it happen without responding? Because there is discrimination at CdL, a lot of it in fact, but not necessarily where the organization expects it. While this was not my first CdL, it was my first as a visitor, and comparing those two experiences made me aware of the blinders you can wear about how an event unfolds when you are busy running it, whether out of a desire not to make waves or because your head is down in the work. I am therefore convinced of the honesty of the CdL organizers' intentions, just as I am convinced that the problems are structural, and common to free-software circles and to far too many socialist circles. Go to any conference and you will find some of these problems, and probably others besides. For these reasons, I decided to report the problems I witnessed not to the CdL organizers directly, but through this article, to call on the entire libre community to question itself. In the rest of this article, I will try to explain what I see as the event's problems and offer some avenues for fixing them. 
Erasing the struggles of free-software workers Saturday closed with a round table on governance models for libre projects. The discussion was interesting, but we heard several times that there are no layoffs in free software. I do not understand where such a claim can come from, except perhaps from a very narrow view of what happens in our circles. I know far too many people who were laid off while working for free-software multinationals, but also for non-profits, cooperatives, and small associations. The waves of layoffs at Red Hat and Mozilla in recent years should be enough to convince you, to cite only the most publicized examples. And beyond layoffs, free-software workers are also subjected to harassment and sidelining. That is without even mentioning the precarity of the field, greater than in other parts of the software industry: precarious contracts, disguised employment, freelancing, exploitation of passion work, volunteer labour, alternation between employment and non-employment, all without necessarily any unemployment benefits in between. All of this is very common. I saw a great deal of myself in reading Te plains pas, c'est pas l'usine; even though I have never worked in the non-profit sector, there is a very similar exploitation in libre circles, an exploitation founded on a sense of duty to the common good and injunctions to self-denial and self-sacrifice. These are struggles, certainly, but they are above all labour relations within a capitalist economy. We talk all the time about free licenses, but far too little about free software's laid-off workers, and this kind of discourse, disconnected from the reality of our circles, contributes to erasing their struggles. Out of solidarity, we should pay attention to them. 
Perhaps this misstep would not have happened if the table had actually been round and had included the audience, instead of being a top-down discussion between four big names of the scene. Discomfort for disabled people For all sorts of reasons, not all of us have enough energy to get through a full day, let alone a full day of conference. The only places to rest were the inner courtyard and some concrete benches in the noisy, busy, brightly lit main hall. Conferences can be exhausting for me, and when at times I had to find a quiet spot to rest away from the cold, the best I could find was the floor of a busy corridor, which, as you can imagine, is not ideal. There are plenty of inexpensive things that could improve this. A somewhat isolated rest room, with chairs, quiet, dimly lit and clearly signposted, would be more than welcome. In addition, or at a minimum, a few chairs scattered here and there would allow more frequent, better-quality rest. Additional bar tables near the refreshment stand would also let people eat their crêpe and drink their coffee without occupying one of the rare seats simply because they need their hands free. I must praise the CdL organizers for providing clear, audible sound, which, as a hard-of-hearing person, I appreciate. That said, the sound was sometimes far too loud, notably during the screening of promotional videos whose audio added nothing relevant. Being auditorily hypersensitive as well, I must admit those rare moments were particularly painful and added to my fatigue. I did not have time to reach for my earplugs; the sound was so loud and unpleasant that I had no choice but to block my ears with my fingers and wait for it to pass. 
Paying more attention to sound levels would be welcome. Stigmatizing psychiatrized people During one talk, a slide declared that "technical progress is like an axe in the hands of a psychopath", underscored by an image of Elon Musk smashing through a door with an axe, in reference to the film The Shining. The presenter asserted that this is a quote from Albert Einstein, as if to lend the statement gravitas. A little later in the talk, unless my memory fails me, the term was reused to describe people who use misogynistic language. "Psychopath" is a sanist slur that stigmatizes psychiatrized people. It disqualifies a person by attaching a shameful psychological defect to them, one deserving at best contempt and rejection, at worst confinement or death. Merely using the term helps legitimize the sanist system that psychiatrized people suffer under. The term medicalizes behaviours perceived as deviant; applying it to Elon Musk to describe his actions amounts to explaining his fascism as a mental defect. Yet his behaviour is perfectly well explained by his social status, and trying to extract it from that context to explain it some other way depoliticizes the situation. Elon Musk behaves the way he does because he is a fascist billionaire, a libertarian influencer, a white colonizer, a transphobe. The same goes for misogynists: I think any feminist would agree that patriarchy is not a matter of pathology, and I am convinced that psychiatrized feminists do not appreciate being lumped in with misogynists. This psychiatrization of politics also takes the form of the injunction to "go see a therapist". Yet by sustaining sanism, we sustain a system that aims, to a large extent, to lock up people fighting for their emancipation. 
Slaves fighting for their liberation were psychiatrized, women fighting patriarchy were psychiatrized, homosexual people were psychiatrized, political opponents were and still are psychiatrized, trans people are still very actively psychiatrized. To maintain hierarchies, we medicalize, psychiatrize, confine, medicate, straitjacket and strip of all agency the people who revolt against the domination they endure. That is what hides behind the word "psychopath". No fascist has ever been locked up for his ideas, no man for his misogyny. When, after the talk, I went briefly to ask the presenter to stop using the term, she justified it as being a quote from Einstein. Comparing psychiatrized people to fascists and misogynists is not Einstein's doing, any more than it was Einstein who put the term in the slides. Einstein said many things; why, of all of them, keep that one? And if the goal was an apt quotation, why choose a sanist one from 1917, ignoring more than a century of struggles? Any other quote could have been chosen, or none at all, but that is the one that was kept. The quote could even have been used with commentary, noting that it is problematic and why, though at that point it might as well be dropped, since it was beside the talk's point. Above all, the term could have gone unused later on, outside of any quotation serving as an excuse for it. Psychiatrized people are not punchlines. In another talk we learn that at the Nuremberg trials, the Nazi Hermann Göring was, against all expectations, judged perfectly sane by the psychologists. 
The presenter was trying to show that political ideas are not a matter of mental health, but did so without dismantling the very notion of mental health, leaving the impression that these people being judged sane was an anomaly. It therefore seems important to me to go further and dismantle the very notion of mental health, which, as I tried to explain above, is first and foremost a tool of oppression. To go deeper, I recommend reading the article L'abolition carcérale doit inclure la psychiatrie. To be clear, fighting psychiatry does not mean denying the neurological or psychological difficulties people may face, nor the fact that the psychiatric system can sometimes help them. But if the psychiatric system can help, it is because it is currently the only means at our disposal for mental care, mainly for reasons of legality; that must not be used to deny that the psychiatric system is above all a system of control over bodies and minds, and an extension of the carceral system. Care must happen despite psychiatry, not thanks to it. That the Nazis were found sane by the Nuremberg psychologists completes and illustrates what I said above about psychiatry's role as a tool of domination. We learn at the same time that the psychologists measured Hermann Göring's IQ at 138, noting with surprise that Nazis are not necessarily stupid. Beyond the fact that the very notion of intelligence is highly debatable, IQ is intrinsically a ranking tool, designed by white bourgeois men to set themselves above others, reducing people to a single number that hides its method of calculation and the biases built into it, while offering an illusion of scientific rigour. 
IQ has mainly been used in support of racism, ranking the intelligence of "races" to justify colonialism; one look at a world map of IQ scores is enough to be convinced. No surprise, then, that a white bourgeois man scores a high IQ: the tool works as designed. And beyond IQ, reducing fascism to a question of intelligence once again depoliticizes the subject and stigmatizes the people who are in fact its victims. I want to apologize to the first person mentioned in this section, whom I went to speak to briefly in private after an otherwise laudable talk; I hope I did not add to the stress of the event, but out of anti-ableism I could not let the use of such a term pass. Likewise, I want to apologize to the second person and their audience for monopolizing the floor during the short time allotted for questions, again after an excellent talk. Stigmatizing racialized people During Saturday evening's round table, one participant brought up the security backdoor recently injected into the xz software after a long infiltration. They seem to have retained one thing above all from that episode, namely the Chinese nationality of the infiltrator, since they thought it relevant to comment, feigning embarrassment, that since we were just among ourselves they could admit it: they are afraid when they see contributions to the free software they maintain coming from people from the East, from Russia, from South Asia. This person knew perfectly well that the round table was being filmed and would be published, and they could say this without the slightest reaction either from the other people on stage or from the event's organizers. As for the audience, it was never given the floor during the entire round table, preventing any third-party response within the conference. 
Erratum, 24 November 2024: there was indeed a round of questions, which I had ended up forgetting, despite an intervention I found salutary in response to a very liberal vision of inclusion shared on stage. Virtually every country, every state practices infiltration, backdoor injection and viral attacks, including Western states, including France. I would be tempted to say that Western states are probably the main source of them; one need only look at the reach of the American Stuxnet virus to be convinced. Yet it was not software developed by the CIA or the French army that this person called into question, as if computing should remain above all a Western concern. The problem, apparently, is South Asians. Let us say it plainly: what we witnessed during this round table is nothing less than shameless xenophobia, racism. Worst of all, this racist filth was uttered in response to another panelist praising the contributions of people from war-torn regions of the Middle East. Plenty of our libre comrades come from Russia or China; others live there and endure those states' fascism daily. And what of our libre comrades in the countries repressed by Western states, and by the USA in particular? Maintainers of the Linux kernel were very recently removed from the project for being Russian, yet there are no layoffs in free software, we were told at that same round table. Nor any racism, apparently, since the CdL organizers did not react. Later, this same person proudly announced that they mentor for the Google Summer of Code. I have been a mentor for that program twice, and I know that people from South Asia are notably represented among the intern candidates. 
That worries me, both for the selection of candidates this person may carry out and for the quality of supervision and the treatment of the interns. Excluding deaf people Beyond being very interesting, Armony Altinier and Jean-Philippe Mengual's talk was surreal for a very simple reason: deaf people had come to attend the only talk of the entire conference about accessibility, and absolutely nothing had been put in place by CdL to include them. Armony and Jean-Philippe found themselves making up for the gap by opening a text editor and passing the keyboard back and forth to transcribe, as best they could, what the other was saying. Even though this approach hit its limits during the demos, when the text editor had to be hidden and the keyboard was in use, their initiative and adaptability were more than laudable in compensating for the conference organizers' failings. Picture the scene: two people with different disabilities forced to improvise to compensate for the conference's inaccessibility to people with yet another category of disability! And all of that, again, during the only talk of the whole conference related to disability, and more precisely to the lack of accessibility. Yet solutions exist. Ideally: French Sign Language interpreters to include deaf people, and people producing live transcription as subtitles to include hard-of-hearing people. These solutions can admittedly be expensive in labour, but even without resources things can be improvised. Free transcription software exists, such as Live Captions, and even though these tools are imperfect, running them on a dedicated external screen or on the presenters' machine would limit the exclusion. 
And if those free tools are judged insufficient, there should be no hesitation in using proprietary software: inclusion must come before purism. Moreover, writing the subtitles live and on site would compensate for audio capture problems, and would allow the video recordings to be published with their subtitles sooner, to better include deaf and hard-of-hearing people. Finally, they are not the only disabled people to benefit from subtitles: many people with attention disorders follow along better when there is both audio and subtitles, so live subtitles would ease their participation and reduce their fatigue. Addendum, 24 November 2024: it has been pointed out to me that French subtitles include people whose first language is not French better than French audio alone does. I am well placed to know: in early October I managed to follow a documentary thanks to its Italian subtitles even though I do not know the language. Likewise, it has been pointed out to me that French Sign Language interpretation includes people whose first language is not French better than French subtitles do. Excluding people who fear for their health, and spreading epidemics Hey, look under the rug: it's covid. It never left; we swept it under there and act as if it no longer exists. Yet it remains a major cause of death, and long covid continues to disable a great many people. I have friends and libre comrades who acquired sometimes very severe disabilities after "little flus" or covid, and these were not people considered "at risk". I am talking about permanent loss of smell, severe chronic fatigue, or greatly reduced mobility. 
And that is without mentioning other diseases such as whooping cough, or the deaths. We carry on as if nothing were happening; we learned nothing from the start of the covid pandemic, and we are turning eugenicist again for the sake of an imagined comfort. I can wear an FFP2 all I want during the conference: it only protects you from the viruses I might spread; it does not protect me from the ones you spread. Yet preventing the spread of airborne diseases, the harm they cause, the disabilities and the deaths does not take much. Ventilating enclosed spaces as much as possible, and masking in busy, confined places such as public transport, supermarkets or conferences, would be enough to greatly reduce the number of infections. But for that to work, we have to protect one another. Refusing to mask is eugenicist: it is deciding that disease is for other people, that you are strong, and that the disabilities and deaths are acceptable. Above all, you must not wait for symptoms before masking; you can be an asymptomatic carrier and help spread diseases, whether or not you ever develop them. Moreover, by the time you develop an illness such as covid or the flu, you have already been spreading it for several days. Wearing a mask when you can and when it is relevant has become a radical act of community care, which is depressing. The CdL organization contributes to this situation: I saw no recommendation to mask, no filtration system, no ventilation, not even open windows, which literally cost nothing. And it is not for lack of awareness-raising, documentation and action from the Association pour la Réduction des Risques Aéroportés. Cabrioles and Autodéfense Sanitaire also provide resources on the subject. 
Self-defence cannot be individual: without collective action everyone is vulnerable, and without the event's organizers taking health risks seriously, no one is protected. Beyond that, by gathering a sizeable number of participants from all over the country in packed rooms without the slightest preventive measure, CdL actively contributes to the spread of epidemics. Erratum, 24 November 2024: FFP2 masks do protect their wearers, but they are only truly effective over a whole day if everyone plays along. Addendum, 24 November 2024: not everyone who would like to mask is able to, which is why everyone who can mask must do so to protect them. The goal is not to do things perfectly but to do them as best we can, and right now we are collectively pitiful. The event's organizers have the power to help turn the tide, to raise awareness in our circles, to protect and include our comrades. A conference of white guys The talks I cited as memorable might suggest gender parity among the speakers, but nothing could be further from the truth. Anti-patriarchal activists unfortunately have to do enormous work for their inclusion, and the CdL organizers were already called out a few years ago for rejecting talks by women while, at the same time, granting several to men. During his keynote, the presenter explained that AI is in fashion, that this is where the funding is right now, and that consequently, and for their own good, free-software projects should follow VLC's example and add AI features. The demonstration left me unconvinced. 
The next day, in the same room and the same time slot, Khrys gave a talk inviting us to be technocritical of AI from a feminist angle, underlining the necessity of software freedom. Khrys's talk was relevant, punchy, interesting, stimulating and salutary, inviting us to push back against capitalism and patriarchy rather than accommodate them as the previous day's keynote had. Similar subject, different angle: the second talk was, to my mind, far better, but it was to a man rather than a woman that the organizers chose to give the keynote slot, a slot with no competing talks. Khrys, who had to share the audience with the many other talks in her slot, still managed to fill the room. Similarly, while Isabella Vanni's talk on the inclusion of women and gender minorities at April was allocated the amphitheatre, it was scheduled against a talk on internet censorship in France that filled its room, leaving the amphitheatre nearly deserted for Isabella. Even though the internet censorship talk was cancelled at the last minute, this scheduling failure was noticed and criticized. The conference closed on a slide announcing 1,200 "participants" at this edition, in the French grammatical masculine. We will not learn how many "participantes", how many women, there were. The people who made those slides probably did not attend Isabella Vanni's talk, which explained precisely how the "neutral" masculine actively contributes to the absence of women from computing and libre circles. Another sadly notable point is the whiteness of the speakers. The conference is a genuinely white in-group; no wonder racist remarks could be made during the round table without raising the slightest reaction, and it is a safe bet that this plays a part in racialized people not attending in greater numbers. 
I gather that some conferences actively seek out speakers, genuinely putting the diversity of the community front and centre and thereby helping to restore it by normalizing the presence, visibility and voices of minoritized people. I have heard good things about MiXiT and Paris Web from an inclusivity standpoint; perhaps there is something to learn from how they organize? I am not saying the CdL organizers do not care about speaker diversity, but I am convinced that others achieve it far better, and that those conferences should serve as reference points. CdL runs many tracks in parallel, and I wonder whether that quantity does not come at the expense of the conference's quality, despite the diversity of topics covered. Perhaps it would be better to have fewer tracks but fuller rooms, notably the amphitheatre hosting the main track? I can accept that reducing the number of tracks would increase the load on the already packed classrooms, which would indeed be a problem, but perhaps there are other amphitheatres to use? I imagine that if the CdL organizers could have had more, they would not have turned them down, and so they only have the one. Note that if half of the talks that made an impression on me were given by women, despite a very large majority of male speakers, it means that on average I found the talks given by women to be of higher quality. If I were feeling mischievous, I would suggest that reducing the number of talks while giving priority to people who are not white men would raise the quality of the conference. Come on, let's be mischievous: I do suggest it. 
But let us be clear: I am not saying that parity and inclusivity must be achieved in order to have a better conference; I am simply noting that a better conference would be a beneficial side effect of achieving them. Conclusion I have deliberately left out the names of the people who committed these missteps, because I do not want to go after them but after the problems. We live in patriarchal, racist, sanist, ableist societies, and the libre world is no exception. I do not want to fight people but systems, and the discourses that sustain them. I hope the people who recognize themselves in this article will take my remarks not as attacks but as a call to be more careful. There are certainly plenty of other problems I did not see, either because I was not aware of them, or because I did not witness them, or because I lack the distance or the lived experience. I have no idea, for example, how accessible the conference is by wheelchair. In any case, my goal is not to make an exhaustive list but to give an account of my experience of the conference, pointing at things I find serious and that I believe should be taken seriously. I also want to apologize for not sourcing and referencing my claims more: this article was written in a hurry, writing it exhausted me, and I no longer have the energy for more research. I sincerely believe in CdL's desire for inclusion, just as I know we live in societies where oppression is so normalized that it is invisible to the majority. I nevertheless call on the CdL organizers to question themselves: intentions of inclusivity must not remain words on a web page; they must be actively put into practice. 
I do not particularly want to throw stones at its organizers; CdL is a conference I sincerely love, and this kind of problem is unfortunately extremely widespread, not only in society but also in libre circles. In saying that, I mean to point at the entirety of the free-software and free-culture movements. A major libre conference, FOSDEM welcomes thousands of people, probably more than 10,000, in an incredibly undersized space. A highly international event, its participants come from all over the world. FOSDEM is a veritable international epidemic exchange, where people half-jokingly say that you have not fully experienced the conference if you do not go home with the FOSDEM flu. The FOSDEM organizers deliberately turn a blind eye to the problem and have absolutely no health policy, making them actively complicit in the spread of epidemics and pandemics. To that complicity must be added that of the free-software companies that encourage, if not require, their employees to attend the Brussels conference. Though it was an intimate event of some fifty participants, I caught covid again at Berlin Mini GUADEC 2024. The protective measures in place were once again insufficient; from memory, only four of us masked, while we had to spend entire days in the same poorly ventilated space. Once again, I helped protect people who refused to grant me the same protection by not masking, and the organizers are responsible for the inadequacy of the measures in place. I am not asking for conferences to be perfect, none ever will be, and I certainly do not claim I could do as well, let alone better. 
I want to commend the CdL's organizers for having put on an event good enough that we want to see it move forward, even if it has to be shaken up a bit to become truly inclusive. I hope the CdL team won't take the problems I'm raising as attacks, just as I hope other free software conferences will make sure not to repeat the same mistakes. I also hope the avenues for improvement I've suggested will help; I don't claim they're all easy to implement, but I'm willing, at my own scale and with the energy I have, to make myself available to help the organizers of the CdL, or of another conference, figure out how to fix the situation.
  • Peter Hutterer: hidreport and hut: two crates for handling HID Report Descriptors and HID Reports (2024/11/19 01:54)
    A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two. Both crates were intentionally written with minimal dependencies; they currently only depend on thiserror, and arguably even that dependency can be removed. HID Usage Tables (HUT) As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:

let gd_x = GenericDesktop::X;
let usage_page = gd_x.usage_page();
assert!(matches!(usage_page, UsagePage::GenericDesktop));

Or the more likely need: convert from a numeric page/id tuple to a named usage.

let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
println!("Usage is {}", usage.name());

90% of this crate is the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values but the actual functionality is relatively simple. 
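As a quick illustration of the page/id split described above, here is a minimal, self-contained sketch of the standard numeric encoding: the 32-bit "extended usage" carries the Usage Page in the high 16 bits and the Usage ID in the low 16 bits. The function name here is mine for illustration; it is not the hut crate's API.

```rust
// Sketch: the 32-bit extended-usage encoding behind HID usages.
// Usage Page occupies bits 16..=31, Usage ID occupies bits 0..=15.
fn extended_usage(page: u16, id: u16) -> u32 {
    ((page as u32) << 16) | id as u32
}

fn main() {
    // Generic Desktop (0x01) / X (0x30), matching the example above.
    let usage = extended_usage(0x01, 0x30);
    assert_eq!(usage, 0x0001_0030);
    // Recovering the parts is just the inverse shift/mask.
    assert_eq!((usage >> 16) as u16, 0x01);
    assert_eq!((usage & 0xffff) as u16, 0x30);
    println!("extended usage: {usage:#010x}");
}
```

The hut crate wraps exactly this kind of conversion in named enums so you never deal with the raw numbers directly.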
hidreport - Report Descriptor parsing The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents. Or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So let's assume we have a bunch of bytes that are a HID report descriptor read from the device (or sysfs); we can do this:

let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();

I'm not going to copy/paste the code to run through this report descriptor but suffice to say it will give us access to the input, output and feature reports on the device together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):

let input_report_bytes = read_from_device();
let report = rdesc.find_input_report(&input_report_bytes).unwrap();
let field = report.fields().first().unwrap();
match field {
    Field::Variable(var) => {
        let val: u32 = var.extract(&input_report_bytes).unwrap().into();
        println!("Field {:?} is of value {}", field, val);
    },
    _ => {}
}

The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present. hid-recorder The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already. Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools, and now we have the third version written in Rust. Which has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output of that is below and it shows that you can get all the information out of the device via the hidreport and hut crates. 
$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01, // Usage Page (Generic Desktop) 0
# 0x09, 0x02, // Usage (Mouse) 2
# 0xa1, 0x01, // Collection (Application) 4
# 0x05, 0x01, // Usage Page (Generic Desktop) 6
# 0x09, 0x02, // Usage (Mouse) 8
# 0xa1, 0x02, // Collection (Logical) 10
# 0x85, 0x1a, // Report ID (26) 12
# 0x09, 0x01, // Usage (Pointer) 14
# 0xa1, 0x00, // Collection (Physical) 16
# 0x05, 0x09, // Usage Page (Button) 18
# 0x19, 0x01, // UsageMinimum (1) 20
# 0x29, 0x05, // UsageMaximum (5) 22
# 0x95, 0x05, // Report Count (5) 24
# 0x75, 0x01, // Report Size (1) 26
... omitted for brevity
# 0x75, 0x01, // Report Size (1) 213
# 0xb1, 0x02, // Feature (Data,Var,Abs) 215
# 0x75, 0x03, // Report Size (3) 217
# 0xb1, 0x01, // Feature (Cnst,Arr,Abs) 219
# 0xc0, // End Collection 221
# 0xc0, // End Collection 222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
# Report size: 80 bits
# | Bit: 8 | Usage: 0009/0001: Button / Button 1 | Logical Range: 0..=1 |
# | Bit: 9 | Usage: 0009/0002: Button / Button 2 | Logical Range: 0..=1 |
# | Bit: 10 | Usage: 0009/0003: Button / Button 3 | Logical Range: 0..=1 |
# | Bit: 11 | Usage: 0009/0004: Button / Button 4 | Logical Range: 0..=1 |
# | Bit: 12 | Usage: 0009/0005: Button / Button 5 | Logical Range: 0..=1 |
# | Bits: 13..=15 | ######### Padding |
# | Bits: 16..=31 | Usage: 0001/0030: Generic Desktop / X | Logical Range: -32767..=32767 |
# | Bits: 32..=47 | Usage: 0001/0031: Generic Desktop / Y | Logical Range: -32767..=32767 |
# | Bits: 48..=63 | Usage: 0001/0038: Generic Desktop / Wheel | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# | Bits: 64..=79 | Usage: 000c/0238: Consumer / AC Pan | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# ------- Input Report -------
# Report ID: 31
# Report size: 24 bits
# | Bits: 8..=23 | Usage: 000c/0238: Consumer / AC Pan | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# ------- Feature Report -------
# Report ID: 18
# Report size: 16 bits
# | Bits: 8..=9 | Usage: 0001/0048: Generic Desktop / Resolution Multiplier | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 10..=11 | Usage: 0001/0048: Generic Desktop / Resolution Multiplier | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 12..=15 | ######### Padding |
# ------- Feature Report -------
# Report ID: 23
# Report size: 16 bits
# | Bits: 8..=9 | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 10..=11 | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bit: 12 | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range: 0..=1 | Physical Range: 0..=0 |
# | Bits: 13..=15 | ######### Padding |
##############################################################################
# Recorded events below in format:
# E: . [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
# Button 1: 0 | Button 2: 0 | Button 3: 0 | Button 4: 0 | Button 5: 0 | X: 5 | Y: 0 |
# Wheel: 0 |
# AC Pan: 0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
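To make the recording above concrete, here is a minimal, self-contained sketch of the kind of bit extraction hidreport performs when pulling a field such as X (bits 16..=31, little-endian signed 16-bit) out of the event bytes. The helper name is mine, only the byte layout comes from the recording, and this simplified version handles byte-aligned fields only.

```rust
// Sketch: extract a little-endian signed 16-bit HID field from raw
// report bytes at a given bit offset. This mirrors what a field
// extraction does internally; the function is illustrative, not the
// hidreport crate's API.
fn extract_i16(report: &[u8], bit_offset: usize) -> i16 {
    assert!(bit_offset % 8 == 0, "sketch handles only byte-aligned fields");
    let byte = bit_offset / 8;
    i16::from_le_bytes([report[byte], report[byte + 1]])
}

fn main() {
    // Bytes from the recorded event above: report ID 0x1a, one byte of
    // buttons/padding, then X at bits 16..=31.
    let report = [0x1a, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00];
    let x = extract_i16(&report, 16);
    println!("X = {x}"); // X = 5, matching the decoded event above
}
```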
  • Richard Hughes: Firmware SBOMs for open source projects (2024/11/14 15:58)
    You might be surprised to hear that closed source firmware typically contains open source dependencies. In the case of EDK II (probably the BIOS of the x64 machine you’re using now) it’s about 20 different projects, and in the case of coreboot (hopefully the firmware of the machine you’ll own in the future) it’s about another 10 — some overlapping with EDK II. Examples here would be things like libjpeg (for the OEM splash image) or libssl (for crypto, but only the good kind). It makes no sense for each person building firmware to write the same SBOM for the OSS code. Moving the SBOM upstream means it can be kept up to date by the same team writing the open source code. It’s very similar to what we encouraged desktop application developers to do with AppStream metadata a decade or so ago. That was wildly successful, so maybe we can do the same trick again here. My proposal would be to submit a sbom.cdx.json to each upstream project in CycloneDX format, stored in a location amenable to the project — e.g. in ./contrib, ./data/sbom or even in the root project folder. The location isn’t important, only the file suffix needs to be predictable. Notice the word CycloneDX there, not SPDX — the latter is great for open source license compliance, but I was only able to encode 43% of our “example firmware SBOM” into SPDX format, even with a lot of ugly hacks. I spent a long time trying to jam a round peg in a square hole and came to the conclusion it’s not going to work very well. SPDX works great as an export format to ensure license compliance (and the uswid CLI can already do that now…) but SPDX doesn’t work very well as a data source. CycloneDX is just a better designed format for a SBOM, sorry ISO. Let’s assume we check in a new file to ~30 projects. With my upstream-maintainer hat on, nobody likes to manually edit yet-another-file when tagging releases, so I’m encouraging projects shipping a CycloneDX sbom.cdx.json to use some of the auto-substituted tokens, e.g. 
@VCS_TAG@ → git describe --tags --abbrev=0, e.g. 1.2.3
@VCS_VERSION@ → git describe --tags, e.g. 1.2.3-250-gfa2371946
@VCS_BRANCH@ → git rev-parse --abbrev-ref HEAD, e.g. staging
@VCS_COMMIT@ → git rev-parse HEAD, e.g. 3090e61ee3452c0478860747de057c0269bfb7b6
@VCS_SBOM_AUTHORS@ → git shortlog -n -s -- sbom.cdx.json, e.g. Example User, Another User
@VCS_SBOM_AUTHOR@ → @VCS_SBOM_AUTHORS@[0], e.g. Example User
@VCS_AUTHORS@ → git shortlog -n -s, e.g. Example User, Another User
@VCS_AUTHOR@ → @VCS_AUTHORS@[0], e.g. Example User

Using git in this way during the build process allows us to also “fixup” SBOM files with missing details, or to cover the case where the downstream ODM patches the project to do something upstream wouldn’t be happy shipping. For fwupd (which I’m using as a cute example, it’s not built into firmware…) the sbom.cdx.json file would be something like this:

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "version": 1,
  "metadata": {
    "authors": [
      { "name": "@VCS_SBOM_AUTHORS@" }
    ]
  },
  "components": [
    {
      "type": "library",
      "bom-ref": "pkg:github/fwupd/fwupd@@VCS_TAG@",
      "cpe": "cpe:2.3:a:fwupd:fwupd:@VCS_TAG@:*:*:*:*:*:*:*",
      "name": "fwupd",
      "version": "@VCS_VERSION@",
      "description": "Firmware update daemon",
      "supplier": {
        "name": "fwupd developers",
        "url": [ "https://github.com/fwupd/fwupd/blob/main/MAINTAINERS" ]
      },
      "licenses": [
        { "license": { "id": "LGPL-2.1-or-later" } }
      ],
      "externalReferences": [
        { "type": "website", "url": "https://fwupd.org/" },
        { "type": "vcs", "url": "https://github.com/fwupd/fwupd" }
      ]
    }
  ]
}

Putting it all together means we can do some pretty clever things, assuming we have a recursive git checkout using either git modules, sub-modules or sub-projects:

$ uswid --find ~/Code/fwupd --fixup --save sbom.cdx.json --verbose
Found:
 - ~/Code/fwupd/contrib/sbom.cdx.json
 - ~/Code/fwupd/venv/build/contrib/sbom.cdx.json
 - ~/Code/fwupd/subprojects/libjcat/contrib/spdx.json
Substitution required in ~/Code/fwupd/contrib/sbom.cdx.json:
 - @VCS_TAG@ → 2.0.1
 - @VCS_VERSION@ → 2.0.1-253-gd27804fbb
Fixup required in ~/Code/fwupd/subprojects/libjcat/spdx.json:
 - Add VCS commit → db8822a01af89aa65a8d29c7110cc86d78a5d2b3
Additional dependencies added:
 - pkg:github/hughsie/libjcat@0.2.1 → pkg:github/hughsie/libxmlb@0.2.1
 - pkg:github/fwupd/fwupd@2.0.1 → pkg:github/hughsie/libjcat@0.2.1
~/Code/fwupd/venv/build/contrib/sbom.cdx.json was merged into existing component pkg:github/fwupd/fwupd@2.0.1

And then we have a sbom.cdx.json that we can use as an input file for building the firmware blob. If we can convince EDK2 to merge the additional sbom.cdx.json for each built module then it all works like magic, and we can build the 100% accurate external SBOM into the firmware binary itself with no additional work. Comments most welcome.
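A minimal, self-contained sketch of the @VCS_*@ token substitution described above. The substitution helper is hypothetical; in a real build the values would come from the git commands listed earlier, and the real uswid tool does considerably more (fixups, dependency merging).

```rust
// Sketch: replace @VCS_*@ tokens in an sbom.cdx.json template with
// values obtained at build time. Illustrative only, not uswid's code.
use std::collections::HashMap;

fn substitute(template: &str, values: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (token, value) in values {
        out = out.replace(token, value);
    }
    out
}

fn main() {
    let mut values = HashMap::new();
    // In a real build these would come from e.g.
    // `git describe --tags --abbrev=0` and `git describe --tags`.
    values.insert("@VCS_TAG@", "2.0.1");
    values.insert("@VCS_VERSION@", "2.0.1-253-gd27804fbb");
    let json = r#"{"name": "fwupd", "version": "@VCS_VERSION@"}"#;
    println!("{}", substitute(json, &values));
}
```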
  • Jussi Pakkanen: PDF/AAAARGH (2024/11/08 15:01)
    Note: the PDF/A specification is not freely available, so everything here is based on reverse engineering. It might be complete bunk. There are many different "subspecies" of PDF. The most common are PDF/X and PDF/A. CapyPDF can already do PDF/X, so I figured it's time to look into PDF/A. Like, how much worse could it possibly be? Specifying that a PDF file is PDF/X is straightforward. Each PDF has a Catalog dictionary that defines properties of the document. All you need to do is to add an OutputIntent dictionary and link it to the Catalog. The dictionary has a key that specifies the subtype. Setting that to /GTS_PDFX does the trick. There are many different versions of PDF/X so you need to define that as well. A simple solution would be to have a second key in that dictionary for specifying the subtype. Half of that expectation is correct. There is indeed a key you can set, but it is in a completely different part of the object tree called the Information dictionary. It's a bit weird but you implement it once and then forget about it. PDF/A has four different versions, namely 1, 2, 3 and 4, and each of these has several conformance levels that are specified with a single letter. Thus the way you specify that the file is a PDF/A document is that you write the value /GTS_PDFA1 to the intent dictionary. Yes, regardless of which version of PDF/A you want, this dictionary will say it is PDFA1. What would be the mechanism, then, to specify the sub version?

1. In the Information dictionary, just like with PDF/X?
2. In some other PDF object dictionary?
3. In a standalone PDF object that is in fact an embedded XML document?
4. Something even worse?

Depending on your interpretation, the correct answer is either 3 or 4. Here is the XML file in question as generated by LibreOffice. The payload parts are marked with red arrows. The other bits are just document metadata, replicated. PDF version 2.0 has gone even further and deprecated storing PDF metadata in PDF's own data structures. 
The structures that have been designed specifically for PDF documents, which all PDF processing software already knows how to handle, and which tens of billions (?) of documents already use and which can thus never be removed? Those ones. As Sun Tzu famously said: "A man with one metadata block in his file format always knows what his document is called. A man with two can never be sure." Thus far we have only been at level 3. So what more could possibly be added to this to make it even worse? Spaces. Yes, indeed. The screenshot does not show it, but the recommended way to use this specific XML format is to add a whole lot of whitespace below the XML snippet so it can be edited in place later if needed. This is highly suspicious for PDF/A for two main reasons. First of all, PDF/A is meant for archival usage. Documents in it should not be edited afterwards. That is the entire point. Secondly, the PDF file format already has a way of replacing objects with newer versions. The practical outcome of all this is that every single PDF/A document has approximately 5 kilobytes of fluff to represent two bytes of actual information. Said object cannot even be compressed, because the RDF document must be stored uncompressed to be editable. Even though in PDF/A documents it will never be edited.
  • Martin Pitt: Learning web components and PatternFly Elements (2024/11/08 00:00)
    Today at Red Hat is Day of Learning again! I used the occasion to brush up my knowledge about web components and take a look at PatternFly Elements. I’ve leered at that for a long time already – using “regular” PatternFly requires React, and thus all the npm, bundler, build system etc. baggage around it. In Cockpit we support writing your own plugins with a simple static .html and .
  • Jiri Eischmann: We’re More Offline at Conferences, and That’s Probably a Good Thing (2024/11/07 17:00)
    I’ve just been to two traditional Czech open source conferences – LinuxDays and OpenAlt – and I’ve noticed one interesting shift: the communication on social media during the conferences has disappeared. After 2010, we suddenly all had a device in our pocket that we could easily use to share experiences and observations from anywhere. And at least at IT events, people started doing this a lot. Under the hashtag of the given conference, there was a stream of messages from participants about which talks they liked, where they could find a good place to eat in the area, what caught their attention among the booths. The event organizers used this to inform visitors, and the booth staff to attract people to their booth. I remember writing about the interesting things we had at our booth, and people actually came to have a look based on that. At the peak of this trend, the popular so-called Twitter walls were in use. These were typically web applications that displayed the latest messages under a given hashtag, and they ran on screens in the corridors or were projected directly in the lecture rooms, so that even those who weren’t following it on their mobile phones could keep track. And today, all of this has practically disappeared. When I counted it after LinuxDays, there were a total of 14 messages with the #linuxdays hashtag on Mastodon during the conference, and only 8 on Twitter. During OpenAlt, there were 20 messages with the #openalt hashtag on Mastodon and 8 on Twitter. I also checked whether anything was happening on Bluesky. There were a few messages with the hashtags of both conferences there, but except for one, they were all bridged from Mastodon. In any case, these are absolutely negligible numbers compared to what we used to see ten years ago. Where did it all go? I thought about it and came up with four reasons: Microblogging is much more fragmented today than it was ten years ago. Back then, we were all on Twitter. That is now in decline. 
The open-source community has largely moved to Mastodon, but not entirely. Some are still on LinkedIn, some on Bluesky, etc. When there is no single place where everyone is, the effect of a universal communication channel disappears. Conference communication has partly shifted to instant messaging. This trend started 8-9 years ago. A group (typically on Telegram) was created for conference attendees, and it served for conference communication. Compared to a microblogging platform, this has the advantage that it is not entirely open communication. What happens at the conference, stays at the conference. It doesn’t take the form of publicly searchable messages. For some, this is a safer space than a social network. It’s also faster, with features like location sharing, etc. However, this mode of communication has also declined a lot. During OpenAlt, there were only 20 messages in its Telegram group. People are much more passive on social media today. Rather than sharing their own posts from the conference, they’d rather leave it to some influencer who will make a cool video from there, which everyone will then watch and like. All the major social networks have shifted towards a small group creating content for a passive majority. New platforms like TikTok have been functioning this way from the start. After Covid, people simply don’t have the same need to share their conference experiences online. They are somewhat saturated with it after the Covid years, and when they go somewhere, they don’t want to tap messages into their phone about how they’re doing there. Overall, I don’t see it as a bad thing. Yes, it had its charm, and it was easier during the conference to draw attention to your booth or talk, but in today’s digital age, any shift towards offline is welcome. After all, conferences are there for people to meet in person. Otherwise, we could just watch the streams from home and write about them on social media. 
We’ve been there before, and it wasn’t quite right. How do you see it? Do you also notice that you share less online from conferences?
  • Arun Raghavan: GStreamer Conference 2024 (2024/11/06 17:52)
    All of us at Asymptotic are back home from the exciting week at GStreamer Conference 2024 in Montréal, Canada last month. It was great to hang out with the community and see all the great work going on in the GStreamer ecosystem. There were some visa-related adventures leading up to the conference, but thanks to the organising team (shoutout to Mark Filion and Tim-Philipp Müller), everything was sorted out in time and Sanchayan and Taruntej were able to make it. This conference was also special because this year marks the 25th anniversary of the GStreamer project! Happy birthday to us! Talks We had 4 talks at the conference this year. GStreamer & QUIC (video) Sanchayan spoke about his work with the various QUIC elements in GStreamer. We already have the quinnquicsrc and quinnquicsink upstream, with a couple of plugins to allow (de)multiplexing of raw streams as well as an implementation of RTP-over-QUIC (RoQ). We’ve also started work on Media-over-QUIC (MoQ) elements. This has been a fun challenge for us, as we’re looking to build out a general-purpose toolkit for building QUIC application-layer protocols in GStreamer. Watch this space for more updates as we build out more functionality, especially around MoQ. Clock Rate Matching in GStreamer & PipeWire (video) My talk was about an interesting corner of GStreamer, namely clock rate matching. This is a part of live pipelines that is often taken for granted, so I wanted to give folks a peek under the hood. The idea of doing this talk was born out of some recent work we did to allow splitting up the graph clock in PipeWire from the PTP clock when sending AES67 streams on the network. I found the contrast between the PipeWire and GStreamer approaches thought-provoking, and wanted to share that with the community. 
GStreamer for Real-Time Audio on Windows (video) Next, Taruntej dove into how we optimised our usage of GStreamer in a real-time audio application on Windows. We had some pretty tight performance requirements for this project, and Taruntej spent a lot of time profiling and tuning the pipeline to meet them. He shared some of the lessons learned and the tools he used to get there. Simplifying HLS playlist generation in GStreamer (video) Sanchayan also walked us through the work he’s been doing to simplify HLS (HTTP Live Streaming) multivariant playlist generation. This should be a nice feature to round out GStreamer’s already strong support for generating HLS streams. We are also exploring the possibility of reusing the same code for generating DASH (Dynamic Adaptive Streaming over HTTP) manifests. Hackfest As usual, the conference was followed by a two-day hackfest. We worked on a few interesting problems: Sanchayan addressed some feedback on the QUIC muxer elements, and then investigated extending the HLS elements for SCTE-35 marker insertion and DASH support. Taruntej worked on improvements to the threadshare elements, specifically to bring some ts-udpsrc element features in line with udpsrc. I spent some time reviewing a long-pending merge request to add soft-seeking support to the AWS S3 sink (so that it might be possible to upload seekable MP4s, for example, directly to S3). I also had a very productive conversation with George Kiagiadakis about how we should improve the PipeWire GStreamer elements (more on this soon!) All in all, it was a great time, and I’m looking forward to the spring hackfest and conference in the latter part of next year!
  • Tim Janik: JJ-FZF - a TUI for Jujutsu (2024/11/04 02:32)
    JJ-FZF is a TUI (Terminal-based User Interface) for Jujutsu, built on top of fzf. It centers around the jj log view, providing key bindings for common operations on JJ/Git repositories. About six months ago, I revisited JJ, drawn in by its promise of Automatic rebase and conflict resolution. I have…
  • Mario Sanchez Prada: Igalia and WebKit: status update and plans (2024) (2024/11/03 17:20)
    It’s been more than 2 years since the last time I wrote something here, and in that time a lot of things happened. Among those, one of the main highlights was me moving back to Igalia‘s WebKit team, but this time I moved as part of Igalia’s support infrastructure to help with other types of tasks such as general coordination, team facilitation and project management, among other things. On top of those things, I’ve been also presenting our work around WebKit in different venues, such as in the Embedded Open Source Summit or in the Embedded Recipes conference, for instance. Of course, that included presenting our work in the WebKit community as part of the WebKit Contributors Meeting, a small and technically focused event that happens every year, normally around the Bay Area (California). That’s often a pretty dense presentation where, over the course of 30-40 minutes, we go through all the main areas that we at Igalia contribute to in WebKit, trying to summarize our main contributions in the previous 12 months. This includes work not just from the WebKit team, but also from other ones such as our Web Platform, Compilers or Multimedia teams. So far I did that a couple of times only, both last year on October 24th as well as this year, just a couple of weeks ago in the latest instance of the WebKit Contributors meeting. I believe the session was interesting and informative, but unfortunately it does not get recorded so this time I thought I’d write a blog post to make it more widely accessible to people not attending that event. This is a long read, so maybe grab a cup of your favorite beverage first… Igalia and WebKit So first of all, what is the relationship between Igalia and the WebKit project? In a nutshell, we are the lead developers and the maintainers of the two Linux-based WebKit ports, known as WebKitGTK and WPE. These ports share a common baseline (e.g. GLib, GStreamer, libsoup) and also some goals (e.g. 
performance, security), but other than that their purpose is different, with WebKitGTK being aimed at the Linux desktop, while WPE is mainly focused on embedded devices. This means that, while WebKitGTK is the go-to solution to embed Web content in GTK applications (e.g. GNOME Web/Epiphany, Evolution), and therefore integrates well with that graphical toolkit, WPE does not even provide a graphical toolkit since its main goal is to be able to run well on embedded devices that often don’t even have a lot of memory or processing power, or not even the usual mechanisms for I/O that we are used to in desktop computers. This is why WPE’s architecture is designed with flexibility in mind with a backends-based architecture, why it aims for using as few resources as possible, and why it tries to depend on as few libraries as possible, so you can integrate it virtually in any kind of embedded Linux platform. Besides that port-specific work, which is what our WebKit and Multimedia teams focus a lot of their effort on, we also contribute at a different level in the port-agnostic parts of WebKit, mostly around the area of Web standards (e.g. contributing to Web specifications and to implement them) and the Javascript engine. This work is carried out by our Web Platform and Compilers team, which tirelessly contribute to the different parts of WebCore and JavaScriptCore that affect not just the WebKitGTK and WPE ports, but also the rest of them to a bigger or smaller degree. Last but not least, we also devote a considerable amount of our time to other topics such as accessibility, performance, bug fixing, QA... and also to make sure WebKit works well on 32-bit devices, which is an important thing for a lot of WPE users out there. Who are our users? 
At Igalia we distinguish 4 main types of users of the WebKitGTK and WPE ports of WebKit: Port users: this category would include anyone that writes a product directly against the port’s API, that is, apps such as a desktop Web browser or embedded systems that rely on a fullscreen Web view to render its Web-based content (e.g. digital signage systems). Platform providers: in this category we would have developers that build frameworks with one of the Linux ports at its core, so that people relying on such frameworks can leverage the power of the Web without having to directly interface with the port’s API. RDK could be a good example of this use case, with WPE at the core of the so-called Thunder plugin (previously known as WPEFramework). Web developers: of course, Web developers willing to develop and test their applications against our ports need to be considered here too, as they come with a different set of needs that need to be fulfilled, beyond rendering their Web content (e.g. using the Web Inspector). End users: And finally, the end user is the last piece of the puzzle we need to pay attention to, as that’s what makes all this effort a task worth undertaking, even if most of them most likely don’t need to know what WebKit is, which is perfectly fine :-) We like to make this distinction of 4 possible types of users explicit because we think it’s important to understand the complexity of the amount of use cases and the diversity of potential users and customers we need to provide service for, which is behind our decisions and the way we prioritize our work. Strategic goals Our main goal is that our product, the WebKit web engine, is useful for more and more people in different situations. 
Because of this, it is important that the platform is homogeneous and that it can be used reliably with all the engines available nowadays, and this is why compatibility and interoperability is a must, and why we work with the standards bodies to help with the design and implementation of several Web specifications. With WPE, it is very important to be able to run the engine in small embedded devices, and that requires good performance and being efficient in multiple hardware architectures, as well as great flexibility for specific hardware, which is why we provided WPE with a backend-based architecture, and reduced dependencies to a minimum. Then, it is also important that the QA infrastructure is good enough to keep the releases working and with good quality, which is why I regularly maintain, evolve and keep an eye on the EWS and post-commit bots that keep WebKitGTK and WPE building, running and passing the tens of thousands of tests that we need to check continuously, to ensure we don’t regress (or that we catch issues soon enough, when there’s a problem). Then of course it’s also important to keep doing security releases, making sure that we release stable versions with fixes to the different CVEs reported as soon as possible. Finally, we also make sure that we keep evolving our tooling as much as possible (see for instance the release of the new SDK earlier this year), as well as improving the documentation for both ports. Last, all this effort would not be possible if we did not also consider it a goal of ours to maintain an efficient collaboration with the rest of the WebKit community in different ways, from making sure we re-use and contribute to other ports as much code as possible, to making sure we communicate well in all the forums available (e.g. Slack, mailing list, annual meeting). 
Contributions to WebKit in numbers

Well, first of all the usual disclaimer: the number of commits is for sure not the best possible metric, and therefore should be taken with a grain of salt. However, the point here is not to focus too much on the actual numbers but on the more general conclusions that can be extracted from them, and from that point of view I believe it's interesting to take a look at this data at least once a year.

With that out of the way, it's interesting to confirm that once again we are still the 2nd biggest contributor to WebKit after Apple, with ~13% of the commits landed in this past 12-month period. More specifically, we landed 2027 patches out of the 15617 that took place during the past year, surpassed only by Apple and their 12456 commits. The remaining 1134 patches were landed mostly by Sony, followed by RedHat and several other contributors. Now, if we remove Apple from the picture, we can observe that this year our contributions represented ~64% of all the non-Apple commits, a figure that grew about ~11% compared to the past year. This confirms once again our commitment to WebKit, a project we started contributing to about 14 years ago, and where we have systematically been the 2nd top contributor for a while now.

Main areas of work

The 10 main areas we have contributed to in WebKit in the past 12 months are the following:

- Web platform
- Graphics
- Multimedia
- JavaScriptCore
- New WPE API
- WebKit on Android
- Quality assurance
- Security
- Tooling
- Documentation

In the next sections I'll talk a bit about what we've done and what we're planning to do next for each of them.

Web Platform

content-visibility:auto

This feature allows skipping the painting and rendering of off-screen sections, which is particularly useful to avoid the browser spending time rendering parts of large pages, as content outside of the view doesn't get rendered until it becomes visible. We completed the implementation and it's now enabled by default.
Navigation API

This is a new API to manage browser navigation actions and examine history, which we started working on in the past cycle. There's been a lot of work happening here and, while it's not finished yet, the current plan is that Apple will continue working on it in the next months.

hasUAVisualTransition

This is an attribute of the NavigateEvent interface, meant to be true if the User Agent has performed a visual transition before a navigation event. We have also finished implementing it, and it is now enabled by default.

Secure Curves in the Web Cryptography API

In this case, we worked on fixing several Web Interop related issues, as well as on increasing test coverage within the Web Platform Tests (WPT) test suites. On top of that we also moved the X25519 feature to the "prepare to ship" stage.

Trusted Types

This work is related to reducing DOM-based XSS attacks. Here we finished the implementation, and it is now pending to be enabled by default.

MathML

We continued working on the MathML specification by adding support for padding, border and margin, as well as by increasing the WPT score by ~5%. The plan for next year is to continue working on core features and to improve the interaction with CSS.

Cross-root ARIA

Web components have accessibility-related issues with native Shadow DOM, as you cannot reference elements with ARIA attributes across boundaries. We haven't worked on this in this period, but the plan is to work in the next months on implementing the Reference Target proposal to solve those issues.

Canvas Formatted Text

Canvas does not have a solution for adding formatted and multi-line text, so we would also like to work on exploring and prototyping the Canvas Place Element proposal in WebKit, which allows better text in canvas and more extended features.
Graphics

Completed migration from Cairo to Skia for the Linux ports

If you have followed the latest developments, you probably already know that the Linux WebKit ports (i.e. WebKitGTK and WPE) have moved from Cairo to Skia for their 2D rendering library, a pretty big and important decision taken after a long time trying different approaches and experiments (including developing our own HW-accelerated 2D rendering library!), as well as running several tests and measuring results in different benchmarks.

The results in the end were pretty overwhelming and we decided to give Skia a go, and we are happy to say that, as of today, the migration has been completed: we covered all the use cases in Cairo, achieving feature parity, and we are now working on implementing new features and improvements built on top of Skia (e.g. GPU-based 2D rendering). On top of that, Skia has been the default backend for WebKitGTK and WPE since 2.46.0, released on September 17th, so if you're building a recent version of those ports you'll already be using Skia as their 2D rendering backend. Note that Skia is using its GPU-based backend only on desktop environments; on embedded devices the situation is trickier and for now the default is the CPU-based Skia backend, but we are actively working to narrow the gap and to enable GPU-based rendering on embedded as well.

Architecture changes with buffer sharing APIs (DMABuf)

We did a lot of work here, such as a big refactoring of the fencing system to control access to the buffers, and continued work towards integrating with Apple's DisplayLink infrastructure. On top of that, we also enabled more efficient composition using damage information, so that we don't need to pass that much information to the compositor, which would otherwise slow the CPU down.
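To illustrate the idea behind composition with damage information, here is a minimal Rust sketch (purely illustrative; WebKit's actual damage tracking is C++ and far more involved) of how two small damaged regions can be folded into one rectangle to hand to the compositor instead of resubmitting the whole buffer:

```rust
/// Axis-aligned damage rectangle, in buffer pixels.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Rect { x: i32, y: i32, w: i32, h: i32 }

/// Union of two damage rects: the smallest rect covering both. Handing the
/// compositor one coarse rect (or a short list of them) is far cheaper than
/// telling it the whole buffer changed. Illustrative sketch only.
fn union(a: Rect, b: Rect) -> Rect {
    let x = a.x.min(b.x);
    let y = a.y.min(b.y);
    let right = (a.x + a.w).max(b.x + b.w);
    let bottom = (a.y + a.h).max(b.y + b.h);
    Rect { x, y, w: right - x, h: bottom - y }
}

fn main() {
    let caret = Rect { x: 100, y: 40, w: 2, h: 16 };
    let icon = Rect { x: 120, y: 40, w: 24, h: 24 };
    // Only this small region needs recomposition, not the full buffer.
    println!("{:?}", union(caret, icon));
}
```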
Enablement of the GPUProcess

On this front, we enabled by default the compilation for WebGL rendering using the GPU process, and we are currently working on performance review and on enabling it for other types of rendering.

New SVG engine (LBSE: Layer-Based SVG Engine)

If you are not familiar with this, the idea is to reuse the graphics pipeline used for HTML and CSS rendering for SVG as well, instead of SVG having its own pipeline. This means, among other things, that SVG layers will be supported as 1st-class citizens in the engine, enabling HW-accelerated animations, as well as support for 3D transformations for individual SVG elements.

On this front, in this cycle we added support for the missing features in the LBSE, namely:

- Implemented support for gradients & patterns (applicable to both fill and stroke)
- Implemented support for clipping & masking (for all shapes/text)
- Implemented support for markers
- Helped review the implementation of SVG filters (done by Apple)

Besides all this, we also improved the performance of the new layer-based engine by reducing repaints and re-layouts as much as possible (further optimizations are still possible), narrowing the performance gap with the current engine for MotionMark. While we are still not at the same level of performance as the current SVG engine, we are confident that there are several key places where, with the right funding, we should be able to improve the performance to at least match the current engine, and therefore be able to push the new engine over the finish line.

General overhaul of the graphics pipeline, touching different areas (WIP)

On top of everything else mentioned above, we also worked on a general refactor and simplification of the graphics pipeline. For instance, we have been working on the removal of the Nicosia layer now that we are not planning to have multiple rendering implementations, among other things.
Multimedia

DMABuf-based sink for HW-accelerated video

We merged the DMABuf-based sink for HW-accelerated video into the GL-based GStreamer sink.

WebCodecs backend

We completed the implementation of audio/video encoding and decoding, and this is now enabled by default in 2.46. As for the next steps, we plan to keep working on the integration of WebCodecs with WebGL and WebAudio.

GStreamer-based WebRTC backends

We continued working on GstWebRTC, bringing it to a point where it can be used in production in some specific use cases, and we will keep working on this in the next months.

Other

Besides the points above, we also added an optional text-to-speech backend based on libspiel to the development branch, and worked on general maintenance around the support for Media Source Extensions (MSE) and Encrypted Media Extensions (EME), which are crucial for the use case of WPE running in set-top boxes, and are a permanent task we will continue to work on in the next months.

JavaScriptCore

ARMv7/32-bit support

A lot of work happened around 32-bit support in JavaScriptCore, especially around WebAssembly (WASM): we ported the WASM BBQJIT and ported/enabled concurrent JIT support, and we also completed 80% of the implementation of the OMG optimization level of WASM, which we plan to finish in the next months. If you are unfamiliar with what the OMG and BBQ optimization tiers in WASM are, I'd recommend you take a look at this article on webkit.org: "Assembling WebAssembly".

We also contributed to JIT-less WASM, which is very useful for embedded systems that can't support JIT due to security or memory related constraints, and we did some work on the In-Place Interpreter (IPInt), a new version of the WASM Low-level Interpreter (LLInt) that uses less memory and executes WASM bytecode directly without translating it to LLInt bytecode (and should therefore be faster to execute).
Last, we also contributed most of the implementation for WASM GC, with the exception of some Kotlin tests. As for the next few months, we plan to investigate and optimize heap/JIT memory usage in 32-bit, as well as to finish several other improvements on ARMv7 (e.g. IPInt).

New WPE API

The new WPE API aims at making it easier to use WPE on embedded devices, by removing the hassle of having to handle several libraries in tandem (WPEWebKit, libWPE and WPEBackend-FDO, for instance, available from WPE's releases page), and by providing a more modern API in general, better aimed at the most common use cases of WPE. A lot of effort went in this direction this year, including the fact that we finally upstreamed and shipped its initial implementation with WPE 2.44, back in the first half of the year. Now, while we recommend users to give it a try and report feedback as much as possible, this new API is not yet set in stone, with regular development still ongoing, so if you have the chance to try it out and share your experience, comments are welcome!

Besides shipping its initial implementation, we also added support for external platforms, so that others can be loaded beyond the Wayland, DRM and "headless" ones, which are the default platforms already included with WPE itself. This means, for instance, that a GTK4 platform, or another one for RDK, could easily be used with WPE. Then, of course, a lot of API additions landed in the new API in the latest months:

- Screens management API: API to handle different screens, asking the display for the list of screens with their device scale factor, refresh rate, geometry…

- Top level management API: This API allows a greater degree of control, for instance by allowing more than one WebView for the same top level, as well as allowing to retrieve properties such as size, scale or state (i.e. full screen, maximized…).
- Maximized and minimized windows API: API to maximize/minimize a top level and monitor its state, mainly used by WebDriver.

- Preferred DMA-BUF formats API: enables asking the platform (compositor or DRM) for the list of preferred formats and their intended use (scanout/rendering).

- Input methods API: allows platforms to provide an implementation to handle input events (e.g. virtual keyboard, autocompletion, autocorrection…).

- Gestures API: API to handle gestures (e.g. tap, drag).

- Buffer damaging: WebKit generates information about the areas of the buffer that actually changed, and we pass that to DRM or the compositor to optimize painting.

- Pointer lock API: allows the WebView to lock the pointer so that the movement of the pointing device (e.g. mouse) can be used for a different purpose (e.g. first-person shooters).

Last, we also added support for testing automation, and the new API now supports WebDriver.

With all this done so far, the plan now is to complete the new WPE API, with a focus on the Settings API and accessibility support, write API tests and documentation, and then also add an external platform to support GTK4. This is done on a best-effort basis, so there's no specific release date.

WebKit on Android

This year was also a good year for WebKit on Android, also known as WPE Android, a project that sits on top of WPE and its public API (instead of being a fully-fledged WebKit port). In case you're not familiar with this, the idea is to provide a WebKit-based alternative to the Chromium-based Web view on Android devices, in a way that leverages HW acceleration when possible and integrates natively (and nicely) with the several Android subsystems, and of course with Android's native main loop. Note that this is an experimental project for now, so don't expect production-ready quality quite yet, but hopefully something that can be used to start experimenting with selected use cases.
If you're adventurous enough, you can already try the APKs yourself from the releases page on GitHub at https://github.com/Igalia/wpe-android/releases. Anyway, as for the changes that happened in the past 12 months, here is a summary:

- Updated WPE Android to WPE 2.46 and NDK 27 LTS
- Added support for WebDriver and included the WPT test suites
- Added support for instrumentation tests, integrated with the GitHub CI
- Added support for the remote Web Inspector, very useful for debugging
- Enabled the Skia backend, bringing HW-accelerated 2D rendering to WebKit on Android
- Implemented prompt delegates, allowing apps to implement things such as alert dialogs
- Implemented WPEView client interfaces, allowing apps to respond to things such as HTTP errors
- Packaged a WPE-based Android WebView in its own library and published it in Maven Central. This is a massive improvement, as apps can now use WPE Android by simply referencing the library from their gradle files, with no need to build everything on their own.
- Other changes: enabled HTTP/2 support (via the migration to libsoup3), added support for the device scale factor, improved the virtual on-screen keyboard, general bug fixing…

On top of that, we published 3 different blog posts covering different topics, from a general intro to a deeper dive into the internals, including some demos. You can check them out on Jani's blog at https://blogs.igalia.com/jani

As for the future, we'll focus on stabilization and regular maintenance for now, and then we'd like to work towards achieving production-ready quality for specific cases if possible.

Quality Assurance

On the QA front, we had a busy year, of which we could highlight the following topics:

- Fixed a lot of API test failures in the bots that were limiting our test coverage.
- Fixed lots of assertion-related crashes in the bots, which were slowing down the bots as well as causing other types of issues, such as bots exiting early due to too many failures.
- Enabled assertions in the release bots, which will help prevent crashes in the future, as well as make our debug bots healthier.
- Moved all the WebKitGTK and WPE bots to build with Skia instead of Cairo. This means that all the bots running tests are now using Skia; there's only one bot still using Cairo to make sure that that compilation is not broken, but that bot does not run tests.
- Moved all the WebKitGTK bots to use GTK4 by default. As with the move to Skia, all the WebKitGTK bots running tests now use GTK4, and the only one still building with GTK3 does not run tests; it only makes sure we don't break the GTK3 compilation for now.
- Working on moving all the bots to use the new SDK. This is still work in progress and will likely be completed during 2025, as it requires implementing several changes in the infrastructure that will take some time.
- General gardening and bot maintenance.

In the next months, our main focus will be a revamp of the QA infrastructure to make sure that we can get all the bots (including the debug ones) to a healthier state, finish the migration of all the bots to the new SDK and, ideally, bring back the ready-to-use WPE images that we used to have available on wpewebkit.org.

Security

The current release cadence has been working well, so we continue issuing major releases every 6 months (March, September), with minor and unstable development releases happening on demand when needed. As usual, we kept aligning releases for WebKitGTK and WPE, with both of them happening at the same time (see https://webkitgtk.org/releases and https://wpewebkit.org/release), and also publishing WebKit Security Advisories (WSA) when necessary, both for WebKitGTK and for WPE. Last, we also shortened the time before including security fixes in stable releases this year, and we removed support for libsoup2 from WPE, as that library is no longer maintained.
Tooling & Documentation

On tooling, the main piece of news is that this year we released the initial version of the new SDK, which is developed on top of OCI-based containers. This new SDK fixes the issues with the previously existing approaches based on JHBuild and flatpak, where one of them was great for development but poor for testing and QA, and the other one was great for testing and QA but not very convenient for development. This new SDK is regularly maintained and currently runs on Ubuntu 24.04 LTS with GCC 14 & Clang 18. It was made public on GitHub and announced in May 2024 on Patrick's blog, and is now the officially recommended way of building WebKitGTK and WPE.

As for documentation, we didn't do as much as we would have liked here, but we still landed a few contributions to docs.webkit.org, mostly related to WebKitGTK (e.g. Releases and Versioning, Security Updates, Multimedia). We plan to do more in this regard in the next months, mostly by writing/publishing more documentation and perhaps also some tutorials.

Final thoughts

This has been a fairly long blog post but, as you can see, it's been quite a year for WebKit here at Igalia, with many exciting changes happening on several fronts, so there was quite a lot of stuff to comment on. That said, you can always check the slides of the presentation from the WebKit Contributors Meeting if you prefer a more concise version of the same content.

In any case, what's clear is that the next months are probably going to be quite interesting as well, with all the work that's already going on in WebKit and its Linux ports, so it's possible that 12 months from now I might be writing an equally long essay. We'll see. Thanks for reading!
  • Christian Hergert: Profiling w/o Frame Pointers (2024/11/03 00:15)
    A couple years ago the Fedora council denied a request by Meta engineers to build the distribution with frame-pointers. Pretty immediately I pushed back by writing a number of articles to inform the council members why frame-pointers were necessary for a good profiling experience. Profiling is used by developers, system administrators, and when we’re lucky by bug reporters! Since then, many people have discussed other options. For example in the not too distant future we’ll probably see SFrame unwinding provide a reasonable way to unwind stacks w/o frame-pointers enabled and more importantly, without copying the contents of the stack. Until then, it can be helpful to have a way to unwind stacks even without the presence of frame-pointers. This past week I implemented that for Sysprof based on a prototype put together by Serhei Makarov in the elfutils project called eu-stacktrace. This prototype works by taking samples of the stack from perf (say 16KB-32KB worth) and resolving enough of the ELF data for DWARF/CFI (Call-frame-information)/etc to unwind the stacks in memory using a copy of the registers. From this you create a callchain (array of instruction pointers) which can be sent to Sysprof for recording. I say “in memory” because the stack and register content doesn’t hit disk. It only lands inside the mmap()-based ring buffer used to communicate with Linux’s perf event subsystem. The (much smaller) array of instruction pointers eventually lands on disk if you’re not recording to a memfd. I expanded upon this prototype with a new sysprof-live-unwinder process which does roughly the same thing as eu-stacktrace while fitting into the Sysprof infrastructure a bit more naturally. It consumes a perf data stream directly (eu-stacktrace consumed Sysprof-formatted data) and then provides that to Sysprof to help reduce overhead. Additionally, eu-stacktrace only unwinds the user-space side of things. 
On x86_64, at least, you can convince perf to give you both callchains (PERF_SAMPLE_CALLCHAIN) as well as sample stack/registers (PERF_SAMPLE_STACK_USER|PERF_SAMPLE_REGS_USER). If you peek for the location of PERF_CONTEXT_USER to find the context switch, blending them is quite simple. So, naturally, Sysprof does that. The additional overhead for frame-pointer unwinding user-space is negligible when you don’t have frame-pointers to begin with. I should start by saying that this still has considerable overhead compared to frame-pointers. Locally on my test machine (a Thinkpad X1 Carbon Gen 3 from around 2015, so not super new) that is about 10% of samples. I imagine I can shave a bit of that off by tracking the VMAs differently than libdwfl, so we’ll see. Here is an example of it working on CentOS Stream 10 which does not have frame-pointers enabled. Additionally, this build is debuginfod-enabled so after recording it will automatically locate enough debug symbols to get appropriate function names for what was captured. This definitely isn’t the long term answer to unwinding. But if you don’t have frame-pointers on your production operating system of choice, it might just get you by until SFrame comes around. The code is at wip/chergert/translate but will likely get cleaned up and merged this next week.
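The blending step described above (keep perf's kernel-side frames, splice in the separately unwound user-space frames at the PERF_CONTEXT_USER marker) can be sketched roughly as follows. This is a hypothetical helper for illustration, not Sysprof's actual code, though the marker value is the real one from the perf UAPI ((u64)-512 in linux/perf_event.h):

```rust
/// Marker perf inserts into a callchain where the user-space frames begin
/// (PERF_CONTEXT_USER from linux/perf_event.h, i.e. (u64)-512).
const PERF_CONTEXT_USER: u64 = -512i64 as u64;

/// Keep the kernel-side frames from the perf callchain up to and including
/// the user-context marker, then splice in the user-space frames produced by
/// our own DWARF/CFI unwinder. Illustrative sketch only.
fn blend_callchain(perf_chain: &[u64], user_frames: &[u64]) -> Vec<u64> {
    let mut blended: Vec<u64> =
        match perf_chain.iter().position(|&ip| ip == PERF_CONTEXT_USER) {
            // Everything up to (and including) the marker is kernel-side;
            // whatever perf guessed for user space afterwards is discarded.
            Some(idx) => perf_chain[..=idx].to_vec(),
            // No user context marker: the whole chain is kernel-side.
            None => perf_chain.to_vec(),
        };
    blended.extend_from_slice(user_frames);
    blended
}

fn main() {
    let perf_chain = [0xffff_8000_0000_1000, PERF_CONTEXT_USER, 0xdead];
    let user_frames = [0x40_1000, 0x40_2000];
    println!("{:x?}", blend_callchain(&perf_chain, &user_frames));
}
```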
  • Bilal Elmoussaoui: A million portals (2024/11/01 00:00)
Approximately four years ago, I published the first release of ASHPD, one of my first Rust libraries, with the simple goal of making it easy to use XDG portals from Rust. Since then, the library has grown to support all available portals and even includes a demo application showcasing some of these features. Let's look at an example: the org.freedesktop.portal.Account portal. From the client side, an API end-user can request user information with the following code:

    use ashpd::desktop::account::UserInformation;

    async fn run() -> ashpd::Result<()> {
        let response = UserInformation::request()
            .reason("App would like to access user information")
            .send()
            .await?
            .response()?;

        println!("Name: {}", response.name());
        println!("ID: {}", response.id());

        Ok(())
    }

This code calls the org.freedesktop.portal.Account.GetUserInformation D-Bus method, which xdg-desktop-portal will "redirect" to any portal frontend implementing the org.freedesktop.impl.portal.Account D-Bus interface. So, how can you provide an implementation of org.freedesktop.impl.portal.Account in Rust? That's exactly what Maximiliano and I have been working on, building on the solid foundations we established earlier. I'm thrilled to announce that we finally shipped this functionality in the 0.10 release! The first step is to implement the D-Bus interface, which we hide from the API's end-user using traits.
    use ashpd::{
        async_trait,
        backend::{
            account::{AccountImpl, UserInformationOptions},
            request::RequestImpl,
            Result,
        },
        desktop::account::UserInformation,
        AppID, WindowIdentifierType,
    };

    pub struct Account;

    #[async_trait]
    impl RequestImpl for Account {
        async fn close(&self) {
            // Close the dialog
        }
    }

    #[async_trait]
    impl AccountImpl for Account {
        async fn get_user_information(
            &self,
            _app_id: Option<AppID>,
            _window_identifier: Option<WindowIdentifierType>,
            _options: UserInformationOptions,
        ) -> Result<UserInformation> {
            Ok(UserInformation::new(
                "user",
                "User",
                url::Url::parse("file://user/icon").unwrap(),
            ))
        }
    }

Pretty straightforward! With the D-Bus interface implemented using ASHPD wrapper types, the next step is to export it on the bus.

    use futures_util::future::pending;

    async fn main() -> ashpd::Result<()> {
        ashpd::backend::Builder::new("org.freedesktop.impl.portal.desktop.mycustomportal")?
            .account(Account)
            .build()
            .await?;

        loop {
            pending::<()>().await;
        }
    }

And that's it—you've implemented your first portal frontend! Currently, the backend feature doesn't yet support session-based portals, but we hope to add that functionality in the near future. With over 1 million downloads, ASHPD has come a long way, and it wouldn't have been possible without the support and contributions from the community. A huge thank you to everyone who has helped make this library what it is today.
  • Ignacy Kuchciński: The Bargain-Finder-inator 5000: One programmer's quest for a new flat (2024/10/31 10:52)
Or how I managed to get a reasonably priced apartment offer despite estate agencies

I think every one of us has had to go through the hell that's searching for a new place to live. The reasons may be of all kinds, starting with moving between jobs or random life events, ending with your landlord wanting to raise your rent for fixing his couch despite your 3 years of begging for him to do so. You can guess my reasoning from that totally not suspiciously specific example; one thing's for certain - many of us, not lucky enough to be on our own yet, have to go through that not very delightful experience.

One major problem when scraping those online market websites is that you're not the only one desperately doing so. And if it was only for the fellow lost souls who are trying to make ends meet, oh no - many real estate agencies say hello there as well. So when a very good offer finally comes up, the kind you've been dreaming of your whole life, you grab that phone and call them not maybe, but may-they-please-oh-lord pick up. Despite you wasting no breath, chances are that when you enthusiastically call them (after correcting the typos in the phone number you made out of excitement), you're already too late. Even though you ended up manually checking the damn website every 20 minutes (yup, I set an alarm), and you called after only a quarter, you were still not fast enough and there are already four people in line before you. Which in case of a good offer means it's as good as the doughnuts at work you heard they were giving out to buy your sympathy for the corporate - gone even faster than they probably arrived.
Yup, that's basically the housing market situation in Poland, yay \o/

But do not abandon all hope ye who enter here - after having only a couple of mental breakdowns, my friend sent me a link to a program on GitHub that was supposed to scrape our local market website and give instant notice about new offers. The web page did have a similar function, but it only worked in theory - the emails about the "latest" offers came only once a day, not to mention the fact that they were from the day before. Oh well, in that case saying goodbye to the 20 minute alarm sounded like a dream come true, so I tried to configure the program olx-scraper to my needs. However, it turned out to be pretty useless as well - it would repeatedly fetch a whole list of offers from only one page of search results, and compare its size between iterations. If the length of such a list increased, it would theoretically mean that there were new offers, and the program would send a mail notification containing the whole list. While this approach kinda worked for searches that return only a few results, the whole idea fell apart when there were more than could fit in one page. In that case the number of offers would seem to remain constant, and new offers would be missed. There was also room for improvement in the lack of ability to ignore certain kinds of offers, such as ads, and in the not-so-helpful emails, which could just give you what you're looking for - the newest offer - instead of the whole list.

Here comes the sun in the form of the Bargain-Finder-inator 5000 to the rescue! I quickly realized that a few patches were not enough to fix the old program for my (or frankly anyone's) use case and re-wrote the whole searching algorithm, eventually leading to a whole new program. The original name was "Wyszukiwator-Mieszkań 5000", inspired by Dr. Doofenshmirtz's various schemes and inventions, and roughly translates to "Searcher-Of-Flats 5000".
However, as the project grew beyond the real estate market, I needed a new name that would reflect that - it also needed to be slightly more accessible for foreigners than our oh-so-beautiful Polish words. So I came up with the current one, with the best fitting abbreviation: bf5000. I think it's kind of neat :)

Totally accurate photograph of me giving birth to Bargain-Finder-inator 5000 circa 2024, colorized

What Bargain-Finder-inator 5000 dutifully does is monitor a link you serve to it, pointing to an online marketplace, be it for the real estate market or any other you can think of. The catch is that the marketplace needs to be supported, but writing a new backend shouldn't be too much of a hassle, and once it is supported you can simply copy-paste the URL of your search with all the necessary filters specified and give it to bf5000. You also need to specify the delay between each check for new offers, which consists of fetching only the latest offer and comparing it with the previous "latest". If they don't match, then we are in for some goodies - an email notification with the link to the latest offer will be sent, so you need to specify the email title, the addresses and the whole provider too. For more information, check out the repository on GitLab.

So, don't wait no more for better days, and be part of the change now! We can take back what's rightfully ours from those money-hungry real estate agencies! When I say Bargain, you say Finder-inator 5000! You get the idea.
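The polling scheme described above (fetch only the newest offer, compare it with the previously seen one, notify on change) can be sketched like this in Rust; `check_for_new_offer` and the offer IDs are illustrative placeholders, not bf5000's actual code:

```rust
/// Compare the newest offer against the last one we saw. Returns true when a
/// notification should be sent. The first poll only establishes a baseline,
/// so it never triggers a notification. Illustrative sketch only.
fn check_for_new_offer(latest: &str, previous: &mut Option<String>) -> bool {
    let had_baseline = previous.is_some();
    let changed = previous.as_deref() != Some(latest);
    if changed {
        *previous = Some(latest.to_string());
    }
    had_baseline && changed
}

fn main() {
    let mut last_seen: Option<String> = None;
    // A real run would fetch from the marketplace backend and sleep for the
    // user-configured delay between polls; here we just simulate three polls.
    for latest in ["offer/123", "offer/123", "offer/456"] {
        if check_for_new_offer(latest, &mut last_seen) {
            println!("new offer: {latest} -> send the email notification");
        }
    }
}
```

Note that, unlike comparing list lengths, comparing the latest offer's identity keeps working no matter how many search results exist beyond the first page.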
  • Jussi Pakkanen: Happenings at work (2024/10/30 09:25)
A few months ago this happened. Which, for those of you not up to date on your 1960s British television, is to say that I've resigned. I'm currently enjoying the unemployed lifestyle. No, that is not me being cheeky or ironic. I'm actually enjoying being able to focus on my own free time projects and sleeping late.

Since I'm not a millionaire, at some point I'll probably have to get a job again. But not for at least six months. Maybe more, maybe less, we'll see what happens.

This should not affect Meson users in any significant way. I plan to spend some time working on some fundamental issues in the code base to make things better all round. But the most important thing for now is to land the option refactor monster.
  • Adrien Plazas: Towards a GNOME Mobile Test Suite (2024/10/27 23:00)
GNOME Mobile

Making GNOME adapt to form factors beyond desktop and laptop computers is an ongoing trend that can be dated as early as the late 2000s, when Maemo provided a GNOME-based UI to phones like the Nokia N810 or the Nokia N900. Later, prototype versions of GNOME Shell had a netbook-friendly design that got course-corrected for its first release in 2011, keeping GNOME competent on larger screens. GNOME 3 was designed with touchscreens in mind, especially touchscreen-equipped netbooks and laptops, with some foray into large touch-only devices like kiosks. With its touch capabilities and minimalist touch-friendly design, GNOME 3 offered a good base to adapt to even smaller touch-only form factors like tablets and smartphones.

In the late 2010s two Linux smartphones got developed concurrently, Purism's Librem 5 and Pine64's PinePhone. Purism chose GNOME as the UI for its phone and invested in the development of the Phosh mobile-first shell for GNOME and in making GNOME apps adapt to smartphones. The GNOME community pretty widely embraced adaptiveness, which led to the creation of GNOME's platform library libadwaita. At the same time, community-driven projects like postmarketOS and Mobian offered support for these smartphones and contributed to the development of this mobile-friendly software stack, including contributions to GNOME.

While these devices' reception was polarizing, the Linux community was motivated enough to pursue what they initiated, leading to the birth of GNOME Shell Mobile and to the broadening of supported devices. While GNOME Shell got forked to make it fit smartphones, that was only to prototype this mobile support freely: ultimately the goal is to merge this support into Shell, making it adapt from desktops to smartphones. This overall still-prototypal support for modern smartphones in GNOME, and the initiative behind it, are colloquially referred to as GNOME Mobile.
Testing GNOME Mobile The GNOME release team defines what constitutes the canonical core GNOME stack, and describes it in the gnome-build-meta repository. GNOME OS is built based on this description and is used to test GNOME, ensuring its components are correctly integrated and interact well together. openQA is a high-level, automated OS testing tool, and in 2021 Codethink brought GNOME an openQA instance that is used to test GNOME OS automatically rather than manually. The tests are run in virtual machines thanks to QEMU. Testing GNOME on smartphones implies testing its mobile-specific stack on smartphone-like devices the same way we test the rest of GNOME. Hardware requirements for GNOME are pretty loosely defined, and the only real requirement for smartphones is that apps designed for them should fit in a 360 × 294px window, so they can fit a 360px wide screen in portrait mode and a 360px tall screen in landscape mode, minus the space reserved for Shell. Beyond that, we can safely assume that a smartphone reports having a handset chassis type, that it has a touchscreen as its main input method, that it should work without a keyboard and a pointing device, that its screen is 9:16 or taller, and that it has a high pixel density and should be used with a matching integer scaling factor. For reference, here is the pixel density of some de facto reference GNOME smartphones.

Device         Diagonal  Resolution     Density  UI Scale
Librem 5       5.7”      720 × 1440px   282 ppi  200%
PinePhone      5.95”     720 × 1440px   270 ppi  200%
PinePhone Pro  6”        720 × 1440px   268 ppi  200%
OnePlus 6      6.28”     1080 × 2280px  401 ppi  300%
OnePlus 6T     6.41”     1080 × 2340px  402 ppi  300%

Building an automated test suite for GNOME Mobile in openQA was already attempted earlier this year by Dorothy Kabarozi and Tanju Acheleke, who built the gnome_mobile test suite. Last month Codethink offered me the opportunity to continue that effort; thanks to them for sponsoring that work.
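The densities listed for these reference devices follow directly from resolution and diagonal; a quick sketch of the arithmetic (plain Python, nothing GNOME-specific — truncation rather than rounding is what matches the quoted figures):

```python
import math

def pixel_density(xres, yres, diagonal_inches):
    """Pixels per inch from resolution and screen diagonal (truncated)."""
    return int(math.hypot(xres, yres) / diagonal_inches)

print(pixel_density(720, 1440, 5.7))    # Librem 5 → 282
print(pixel_density(720, 1440, 5.95))   # PinePhone → 270
print(pixel_density(1080, 2340, 6.41))  # OnePlus 6T → 402
```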
I learned that Dorothy and Tanju had encountered various issues that prevented them from doing proper mobile tests, and that the produced suite tests apps on a regular desktop, but with their windows resized to smartphone-like sizes. The goal of my project was to make the test VM provide a smartphone-like screen size and chassis type.
Pixel Density I first tweaked the VM’s screen to be 360 × 720, but such a small resolution isn’t supported and the tests automatically fail. No big deal: smartphones are high-density devices and we want to test UI scaling, so I decided to switch to 720 × 1440 with 200% scaling… except of course the tests weren’t scaled, why would they be? To set the scaling factor, we first have to complete the system’s initial setup unscaled, and then, once finally logged into GNOME Shell, we discover Settings doesn’t let us change it. This happens because Mutter enables changing the scaling factor only on arbitrarily large-enough resolutions, and 720 × 1440@2 is below the required threshold. At this point, I faced the same issues as Dorothy and Tanju and didn’t go any further, but let’s dig a bit more. Besides Mutter’s arbitrary limitation, we are facing the need to set the display’s physical size or pixel density so the OS can adapt to it from the very beginning. The best way to do this is to have an EDID declaring our display’s resolution and physical size; we just need to find the best way to generate it and to use it. We could use a tool like qemu-edid to generate the EDID we want, inject it into the OS, and override the one from the virtual machine, but that would be a messy and dirty workaround.
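As a back-of-the-envelope sketch of what a pixel density override would have to produce: an EDID declares the panel’s physical size, which for a target density follows directly from the resolution. A minimal sketch in plain Python, using the 720 × 1440 at ~270 ppi values discussed above:

```python
def physical_size_mm(xres, yres, dpi):
    """Physical panel size (width, height) in mm implied by a resolution
    and a target pixel density -- what the EDID would have to declare."""
    mm_per_inch = 25.4
    return round(xres / dpi * mm_per_inch), round(yres / dpi * mm_per_inch)

print(physical_size_mm(720, 1440, 270))  # → (68, 135)
```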
Our test suite uses QEMU with virtio-vga, which offers the following properties:

#define VIRTIO_GPU_BASE_PROPERTIES(_state, _conf) \
    DEFINE_PROP_UINT32("max_outputs", _state, _conf.max_outputs, 1), \
    DEFINE_PROP_BIT("edid", _state, _conf.flags, \
                    VIRTIO_GPU_FLAG_EDID_ENABLED, true), \
    DEFINE_PROP_UINT32("xres", _state, _conf.xres, 1280), \
    DEFINE_PROP_UINT32("yres", _state, _conf.yres, 800)

We already use xres and yres to set the display’s resolution, but there is also the edid property, which openQA toggles on to make QEMU generate an EDID describing the virtual machine’s screen. QEMU has all that’s needed to generate and expose an EDID with the right pixel density, except for a way to let the user override the pixel density, which QEMU defaults to 100 DPI. We could imagine exposing the dpi parameter as a virtio-vga property, making QEMU able to emulate devices with a high density screen, and helping us run mobile tests.
Chassis Type Then I looked at giving the VM a smartphone’s chassis type. The chassis type is defined in the SMBIOS; let’s read about it in the reference specification: 7.4 System Enclosure or Chassis (Type 3) The information in this structure (see Table 16) defines attributes of the system’s mechanical enclosure(s). For example, if a system included a separate enclosure for its peripheral devices, two structures would be returned: one for the main system enclosure and the second for the peripheral device enclosure. The additions to this structure in version 2.1 of this specification support the population of the CIM_Chassis class. Table 16 – System Enclosure or Chassis (Type 3) structure (excerpt):

Offset  Name  Length  Value   Description
05h     Type  BYTE    Varies  Bit 7: Chassis lock is present if 1. Otherwise, either a lock is not present or it is unknown if the enclosure has a lock. Bits 6:0: Enumeration value; see below.

7.4.1 System Enclosure or Chassis Types Table 17 shows the byte values for the System Enclosure or Chassis Types field.
NOTE Refer to 6.3 for the CIM properties associated with this enumerated value. Table 17 – System Enclosure or Chassis Types (excerpt):

Byte Value  Meaning
01h         Other
0Bh         Hand Held

For our QEMU VM to declare being a handheld device, we need to set the SMBIOS structure type 3’s Type field to 0x0B. According to its documentation, QEMU lets us set some of the SMBIOS fields conveniently via the -smbios parameter. For type 3 we are allowed -smbios type=3[,manufacturer=str][,version=str][,serial=str][,asset=str][,sku=str], so unfortunately it doesn’t let us set the chassis type. QEMU also lets us set the whole SMBIOS via -smbios file=binary, so we could write the SMBIOS ourselves and feed it to QEMU, but that would be a dirty workaround to an issue that can be fixed. QEMU has all that’s needed to generate an SMBIOS with the right chassis type, except for a way to let the user override the chassis type, which QEMU defaults to 0x01, meaning Other. We could imagine adding a chassis=… parameter to -smbios type=3, making QEMU able to fake device types, and helping us run mobile tests.
Clearing The Way Adding the dpi and chassis parameters to QEMU’s CLI shouldn’t be too hard; the internals are there, it’s just a matter of exposing these variables. The important part is of course to work with the QEMU project, making sure they are happy with the proposed modifications. If you want to work on that, please let me know! And if you want to contribute to GNOME Mobile’s automated test suite, feel free to do so on the related issue on GNOME’s GitLab instance. Thanks again to Codethink for sponsoring that work.
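To make the Type field encoding concrete, here is a minimal sketch in plain Python. The 0x0B value comes from Table 17 above and the bit layout from Table 16; the function name is mine:

```python
CHASSIS_OTHER = 0x01      # Table 17: "Other" (QEMU's default)
CHASSIS_HAND_HELD = 0x0B  # Table 17: "Hand Held"

def chassis_type_byte(enum_value, lock_present=False):
    """SMBIOS type 3 'Type' field: bit 7 flags a chassis lock,
    bits 6:0 carry the chassis enumeration value."""
    assert 0 <= enum_value <= 0x7F
    return (0x80 if lock_present else 0x00) | enum_value

print(hex(chassis_type_byte(CHASSIS_HAND_HELD)))  # → 0xb
```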
  • Alice Mikhaylenko: Steam Deck, HID, and libmanette adventures (2024/10/23 22:44)
    Recently, I got a Steam Deck OLED. Obviously, one of the main reasons for that is to run a certain yet-to-be-announced-here emulation app on it, so I installed Bazzite instead of SteamOS, cleaned up the preinstalled junk and got a clean desktop along with the Steam session/gaming mode. For the most part, it just works (in desktop mode, at least), but there was one problematic area: input.
Gamepad input Gamepads in general are difficult. While you can write generic evdev code dealing with, say, keyboard input and be reasonably sure it will work with at least the majority of keyboards, that’s not the case for gamepads. Buttons will use random input codes. Gamepads will assign different input types to the same control (for example, a D-pad can be presented as 4 buttons, 2 hat axes or 2 absolute axes). The Linux kernel includes specialized HID drivers for some gamepads which will work reasonably well out of the box, but in general all bets are off. Projects like SDL have gamepad mapping databases – normalizing input for all gamepads into a standardized list of inputs. However, even that doesn’t guarantee they will work. Gamepads will pretend to be other gamepads (for example, it’s very common to emulate an Xbox gamepad) and will use an incorrect mapping as a result. Some gamepads will even use identical IDs and provide physically different sets of buttons, meaning there’s no way to map both at the same time. As such, apps have to expect that a gamepad may or may not work correctly, and that the user may or may not need to remap it.
Steam controllers Both the standalone Steam Controller and the Steam Deck’s internal gamepad pose a unique challenge: in addition to being gamepads with every problem mentioned above, they also emulate keyboard and pointer input. To make things more complicated, Steam has a built-in userspace HID driver for these controllers, with subtly different behavior between it and the Linux kernel driver.
SteamOS and Bazzite both autostart Steam in the background in desktop mode. If one tries to use evdev in a generic way, same as for other gamepads, the results will not be pretty: In desktop mode Steam emulates a virtual XInput (Xbox) gamepad. This gamepad works fine, except it lacks access to the Steam and QAM buttons, as well as the 4 back buttons (L4, L5, R4, R5). This works perfectly fine for most games, but fails for emulators where in addition to the in-game controls you need a button to exit the game/open a menu. It also provides 2 action sets: Desktop and Gamepad. In the desktop action set none of the gamepad buttons will even act like gamepad buttons; instead they emulate keyboard and mouse. The D-pad acts as arrow keys, the A button is Enter, the B button is Esc and so on. This is called “lizard mode” for some reason, and on the Steam Deck it is toggled by holding the Menu (Start) button. Once you switch to the gamepad action set, gamepad buttons act as a gamepad, with the caveat mentioned above. The gamepad action set also makes the left touchpad behave differently: instead of scrolling and performing a middle click on press, it does a right click on press, while moving a finger on it does nothing.
hid-steam The Linux kernel includes a driver for these controllers, called hid-steam, so you don’t have to be running Steam for it to work. While it does most of the same things Steam’s userspace driver does, it’s not identical. Lizard mode is similar; the only difference is that haptic feedback on the right touchpad stops right after lifting the finger instead of after the cursor stops, while the left touchpad scrolls at a different speed and does nothing on press. The gamepad device is different though – it’s now called “Steam Deck” instead of “Microsoft X-Box 360 pad 0” and this time every button is available, in addition to the touchpads – presented as a hat and a button each (though there’s no feedback when pressing). The catch? It disables the touchpads’ pointer input.
The driver was based on Steam Deck HID code from SDL, and in SDL it made sense – it’s made for (usually fullscreen) games; if you’re playing with a gamepad, you don’t need a pointer anyway. It makes less sense in emulators or other desktop apps, though. It would be really nice if we could have gamepad input AND touchpads. Ideally automatically, without needing to toggle modes manually.
libmanette libmanette is the GNOME gamepad library, originally split from gnome-games. It’s very simple and basically acts as a wrapper around evdev and the SDL mappings database, and has API for mapping gamepads from apps. So, I decided to add support for the Steam Deck properly. This essentially means writing our own HID driver.
Steam udev rules First, hidraw access is currently blocked by default and you need a udev rule to allow it. This is what the well-known Steam udev rules do for Valve devices as well as a bunch of other well-known gamepads. There are a few interesting developments in the kernel, logind and xdg-desktop-portal, so we may have easier access to these devices in the future, but for now we need udev rules. That said, it’s pretty safe to assume that if you have a Steam Controller or Steam Deck, you already have those rules installed.
Writing a HID driver Finally, we get to the main part of the article; everything before this was introduction. We need to do a few things: 1. Disable lizard mode on startup 2. Keep disabling it every now and then, so that it doesn’t get reenabled (this is unfortunately necessary and SDL does the same thing) 3. Handle input ourselves 4. Handle rumble Both SDL and hid-steam are excellent references for most of this, and we’ll be referring to them a lot. For the actual HID calls, we’ll be using hidapi. Before that, we need to find the device itself.
Raw HID devices are exposed differently from evdev ones, as /dev/hidraw* instead of /dev/input/event*, so libmanette first needs to search for those (either using gudev, or monitoring /dev when in Flatpak). Since we’re doing this for a very specific gamepad, we don’t need to worry about filtering out other input devices – this is an allowlist, so we just don’t include those; we simply match by vendor ID and product ID. The Steam Deck is 28DE:1205 (at least the OLED, but as far as I can tell the PID is the same for the LCD). However, there are 3 devices like that: the gamepad itself, but also its emulated mouse and keyboard. Well, sort of. Only hid-steam uses those devices; Steam instead sends them via XTEST. Since that obviously doesn’t work on Wayland, there’s instead a uinput device provided by extest. SDL code tells us that only the gamepad device can actually receive HID reports, so the right device is the one we can read from.
Disabling lizard mode Next, we need to disable lizard mode. SDL sends an ID_CLEAR_DIGITAL_MAPPINGS report to disable keyboard/mouse emulation, then changes a few settings: namely, it disables the touchpads. As mentioned above, hid-steam does the same thing – it was based on this code. However, we don’t want to disable the touchpads here. What we want to do instead is to send an ID_LOAD_DEFAULT_SETTINGS feature report to reset the settings changed by hid-steam, and then only disable scrolling for the left touchpad. We’ll make it right click instead, like Steam does. This keeps the right touchpad moving the pointer, but the previous ID_CLEAR_DIGITAL_MAPPINGS report disabled touchpad clicking, so we also need to restore it. For that, we need to use the ID_SET_DIGITAL_MAPPINGS report. SDL does not have an existing struct for its payload (likely because of struct padding issues), so I had to figure it out myself.
The structure is as follows, after the standard zero byte and the header:
- 8 bytes: buttons bitmask
- 1 byte: emulated device type
- 1 byte: a mouse button for DEVICE_MOUSE, a keyboard key for DEVICE_KEYBOARD, etc.
Note that the SDL MouseButtons struct starts from 0 while the IDs the Steam Deck accepts start from 1, so MOUSE_BTN_LEFT should be 1, MOUSE_BTN_RIGHT should be 2 and so on. Then the structure repeats, up to 6 times in the same report. ID_GET_DIGITAL_MAPPINGS returns the same structure. So, setting digital mappings for: STEAM_DECK_LBUTTON_LEFT_PAD, DEVICE_MOUSE, MOUSE_BTN_RIGHT STEAM_DECK_LBUTTON_RIGHT_PAD, DEVICE_MOUSE, MOUSE_BTN_LEFT (with the mouse button enum fixed to start from 1 instead of 0) reenables clicking. Now we have working touchpads even without Steam running, with the rest of the gamepad working as a gamepad, automatically.
Keeping it disabled We also need to periodically do this again to prevent hid-steam from reenabling it. SDL does it every 200 updates, so about every 800 ms (the update rate is 4 ms), and the same rate works fine here. Note that SDL doesn’t reset the same settings as initially, only SETTING_RIGHT_TRACKPAD_MODE. I don’t know why, and doing the same thing did not work for me, so I just use the same code as detailed above instead and it works fine. It does mean that clicks from touchpad presses are ended and immediately restarted every 800 ms, but it doesn’t seem to cause any issues in practice, even with e.g. drag-and-drop.
Handling gamepad input This part was straightforward. Every 4 ms we poll the gamepad and receive the entire state in a single struct: buttons as a bitmask, stick coordinates, trigger values, but also touchpad coordinates, touchpad pressure, accelerometer and gyro. Right now we only expose a subset of buttons, as well as stick coordinates.
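A sketch of how one entry of that payload could be packed, in plain Python. The 8 + 1 + 1 byte layout and the 1-based mouse-button IDs are as described above; the constant values, the little-endian byte order of the bitmask, and the button-mask bits are my assumptions, and the report header bytes are omitted:

```python
import struct

DEVICE_MOUSE = 1     # hypothetical value, named after SDL's enum
MOUSE_BTN_LEFT = 1   # Deck IDs start at 1, unlike SDL's 0-based enum
MOUSE_BTN_RIGHT = 2

def pack_digital_mappings(entries):
    """Pack up to 6 (button_mask, device, button) triples into the
    repeating part of an ID_SET_DIGITAL_MAPPINGS payload:
    8-byte button bitmask, 1-byte device type, 1-byte button/key."""
    assert len(entries) <= 6
    return b"".join(struct.pack("<QBB", *e) for e in entries)

payload = pack_digital_mappings([
    (1 << 0, DEVICE_MOUSE, MOUSE_BTN_RIGHT),  # left pad press → right click
    (1 << 1, DEVICE_MOUSE, MOUSE_BTN_LEFT),   # right pad press → left click
])
print(len(payload))  # → 20 (10 bytes per entry)
```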
There are some very interesting values in the button mask though – for example whether the sticks are currently being touched, and whether the touchpads are currently being touched and/or pressed. We may expose that in the future, e.g. having API to disable the touchpads like SDL does and instead offer the raw coordinates and pressure. Or do things on touch and/or click. Or send haptic feedback. We’ll see. The libmanette event API is pretty clunky, but it wasn’t very difficult to wrap these values and send them out.
Rumble For rumble we’re doing the same thing as SDL: sending an ID_TRIGGER_RUMBLE_CMD report. There are a few magic numbers involved, e.g. for the left and right gain values – they presumably originated in SDL and were copied into hid-steam and now into libmanette as well ^^
Skipping duplicate devices The evdev device for the Steam Deck is still there, as is the virtual gamepad if Steam is running. We want to skip both of them. Thankfully, that’s easily done by checking VID/PID: the Steam virtual gamepad is 28DE:11FF, while the evdev device has the same PID as the hidraw one. So, now we only have the HID device.
Behavior So, how does all of this work now? When Steam is not running, libmanette will automatically switch to gamepad mode and enable the touchpads. Once the app exits, it will revert to how it was before. When Steam is running, libmanette apps will see exactly the same gamepad instead of the emulated one. However, we cannot disable lizard mode automatically in this state, so you’ll have to hold the Menu button, or you’ll get input from both the gamepad and the keyboard. Since Steam doesn’t disable the touchpads in gamepad mode, they will still work as expected, so the only caveat is needing to hold the Menu button. So, it’s not perfect, but it’s a big improvement over how it was before.
Mappings Now that libmanette has bespoke code specifically for the Steam Deck, there are a few more questions.
This gamepad doesn’t use mappings, and apps can safely assume it has all the advertised controls and nothing else. They can also know exactly what it looks like. So, libmanette now has a ManetteDeviceType enum, currently with 2 values: MANETTE_DEVICE_GENERIC for evdev devices and MANETTE_DEVICE_STEAM_DECK for the Steam Deck. In the future we’ll likely have more dedicated HID drivers and as such more device types. For now though, that’s it. The code is here, though it’s not merged yet. Big thanks to the people who wrote SDL and the hid-steam driver – I would definitely not have been able to do this without being able to reference them. ^^
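The duplicate-skipping step described earlier boils down to a VID/PID comparison; a minimal sketch in plain Python, using the IDs quoted in the post (the function name is mine):

```python
STEAM_VID = 0x28DE
STEAM_DECK_PID = 0x1205          # hidraw gamepad (OLED; LCD reportedly the same)
STEAM_VIRTUAL_PAD_PID = 0x11FF   # Steam's emulated XInput gamepad

def skip_evdev_device(vid, pid):
    """Skip evdev nodes that duplicate the HID-driven Steam Deck:
    Steam's virtual pad, and the Deck's own evdev device (which shares
    the VID:PID of the hidraw device we already handle)."""
    return vid == STEAM_VID and pid in (STEAM_DECK_PID, STEAM_VIRTUAL_PAD_PID)

print(skip_evdev_device(0x28DE, 0x11FF))  # → True
```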
  • Bastien Nocera: wireless_status kernel sysfs API (2024/10/23 12:06)
    (I worked on this feature last year, before being moved off desktop-related projects, but I never saw it documented anywhere other than in the original commit messages, so here's the opportunity to shine a little light on a feature that could probably see more use.) The new usb_set_wireless_status() driver API function can be used by drivers of USB devices to export whether the wireless device associated with that USB dongle is turned on or not. To quote the commit message: "This will be used by user-space OS components to determine whether the battery-powered part of the device is wirelessly connected or not, allowing, for example: - upower to hide the battery for devices where the device is turned off but the receiver plugged in, rather than showing 0%, or other values that could be confusing to users - Pipewire to hide a headset from the list of possible inputs or outputs, or route audio appropriately, if the headset is suddenly turned off or turned on - libinput to determine whether a keyboard or mouse is present when its receiver is plugged in. This is not an attribute that is meant to replace protocol specific APIs [...] but solely for wireless devices with an ad-hoc 'lose it and your device is e-waste' receiver dongle." Currently, the only 2 drivers to use this are the ones for the Logitech G935 headset and the Steelseries Arctis 1 headset. Adding support for other Logitech headsets would be possible if they export battery information (the protocols are usually well documented); support for more Steelseries headsets should be feasible if the protocol has already been reverse-engineered. As far as consumers of this sysfs attribute go, I filed a bug against Pipewire (link) to use it to not consider the receiver dongle as good as unplugged if the headset is turned off, which would avoid audio being sent to headsets that won't hear it.
UPower has supported this feature since version 1.90.1 (although it had a bug that makes 1.90.2 the first viable release to include it), and batteries will appear and disappear when the device is turned on/off. [Image: a turned-on headset]
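For a consumer like the ones listed above, reading the attribute amounts to scanning sysfs. A hedged sketch in plain Python: the attribute name comes from the feature itself, but the exact /sys/bus/usb/devices/*/wireless_status layout is my assumption, and the root parameter exists only so the scan can be pointed at any tree:

```python
from pathlib import Path

def wireless_statuses(root="/sys/bus/usb/devices"):
    """Map USB interface names to the contents of their wireless_status
    attribute, for the interfaces that expose one."""
    return {
        attr.parent.name: attr.read_text().strip()
        for attr in Path(root).glob("*/wireless_status")
    }
```

On a machine without such devices (or without the attribute), the function simply returns an empty dict.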
  • GNOME Foundation News: Registration Now Open for GNOME Asia 2024 (2024/10/23 08:12)
    Registration for GNOME Asia 2024 is now open! This year’s summit will be held from December 6-8, 2024, in the dynamic city of Bangalore, India, with both in-person and remote participation options. GNOME Asia 2024 will feature a fantastic lineup of presentations and workshops centered around the latest innovations in the GNOME ecosystem and its community. Whether you’re attending on-site in Bangalore or joining online from anywhere in the world, there’s something for everyone. The full conference schedule, including session and speaker details, will soon be available on the event website. Registration is open to everyone—whether you’re an experienced developer, new to the open-source world, or simply curious about what’s happening in GNOME. We look forward to welcoming you, both in person and online, from December 6-8! Register Now Become a GNOME Asia 2024 Sponsor! We’re still looking for sponsors for this year’s summit. If you or your company are interested in sponsoring GNOME Asia 2024, please find more details and our sponsorship brochure on the event website or reach out to asia@gnome.org.
  • Colin Walters: Why bootc doesn’t require “/usr merge” (2024/10/22 19:59)
    The systemd docs talk about UsrMerge, and while bootc works nicely with this, it does not require it and never will. In this blog we’ll touch on the rationale for that a bit. The first stumbling block is pretty simple: For many people shipping “/usr merge” systems, a lot of backwards compatibility symlinks are required, like /bin → /usr/bin etc. Those symbolic links are pretty load bearing, and we really want them to also not just be sitting there as random mutable state. This problem domain really scope creeps into “how does / (aka the root filesystem) work?” There are multiple valid models; one that is viable for many use cases is where it’s ephemeral (i.e. a tmpfs) as encouraged by things like systemd-volatile-root. One thing I don’t like about that is that / is just sitting there mutable, given how important those symlinks are. It clashes a bit with things like wanting to ensure all read files are only from verity-protected paths and things like that. These things are closer to quibbles though, and I’m sure some folks are successfully shipping systems where they don’t have those compatibility symlinks at all. The bigger problem though is all the things that never did the “/usr move”, such as /opt. And for many things in there we actually really do want it to be read-only at runtime (and more generally, versioned with the operating system content). Finally, /opt is just a symptom of a much larger issue: there’s no “/usr merge” requirement for building application containers (docker/podman/kube style), and a toplevel, explicit goal of bootc is to be compatible with that world. It’s for these reasons that while historically the ostree project encouraged “/usr merge”, it never required it, and in fact the default / is versioned with the operating system – defining /etc and /var as the places to put persistent machine-local state.
The way bootc works by default is to continue that tradition, but as of recently we default to composefs which provides a strong and consistent story for immutability for everything under / (including /usr and /opt and arbitrary toplevels). There’s more about this in our filesystem docs. In conclusion I think what we’re doing in bootc is basically more practical, and I hope it will make it easier for people to adopt image-based systems!
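A trivial way to see whether a given root carries the compatibility symlinks the post calls load-bearing (a sketch in plain Python; the directory list is just the usual /bin, /sbin, /lib trio, and the root parameter exists so it can be pointed at any tree):

```python
import os

def looks_usr_merged(root="/"):
    """True when the classic top-level dirs are symlinks (presumably
    into /usr), as on a "/usr merged" system with compatibility links."""
    return all(os.path.islink(os.path.join(root, d))
               for d in ("bin", "sbin", "lib"))
```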
  • Felix Häcker: Shortwave 4.0 (2024/10/18 15:50)
    It was long overdue, but better late than never! Shortwave 4.0 is now available on Flathub:
General
- New MPRIS media controls implementation with improved CPU usage
- Song notifications are now disabled by default
- No more loading on startup: stations now get retrieved directly from cached data
- Fixed issue which sometimes prevented loading more than 8 stations from the library
- Refreshed user interface making use of new Libadwaita widgets
- Large parts of the app were reworked, providing a solid foundation for upcoming features
Playback
- The last station now gets restored on app launch
- Redesigned player sidebar, allowing volume to be controlled more easily
- New recording indicator showing whether the current playback is being recorded
- Fixed buffering issue which prevented playing new stations, especially after switching stations too fast
- Fixed issues which sometimes prevented a song from being recorded
- Fixed issue where volume remained muted after unmuting
Station Covers
- More supported image file formats for station covers
- Enhanced security by loading station covers using the sandboxed Glycin image library
- Non-square covers automatically get a blurred background
- New generated fallback for stations without any cover image
- Improved disk usage by automatically purging no-longer-needed cached data
Browse / Search
- More useful station suggestions by respecting the configured system language / region
- Suggestions now get updated with every start, no longer always showing the same stations
- More accessible search feature, no longer hidden in a subpage
- Search results are no longer limited to 250 stations
- Faster and more efficient search by using new grid widgets
Chromecast
- Shortwave is now a registered Google Cast app, no longer relying on the generic media player
- New backend which greatly improves communication stability with cast devices
- Improved discovery of cast devices with lower CPU and memory usage
- It is now possible to change the volume of a connected cast device
Enjoy!
  • Andrea Veri: GNOME Infrastructure migration to AWS (2024/10/17 00:25)
    1. Some historical background The GNOME Infrastructure has been hosted as part of one of Red Hat’s datacenters for over 15 years now. The “community cage”, which is how we usually refer to the hosting platform that backs multiple Open Source projects including OSCI, is made of a set of racks living within the RAL3 datacenter (located in Raleigh). Red Hat has not only been contributing to GNOME by keeping its Desktop Team operational and sponsoring events (such as GUADEC), but has also been supporting the project with hosting, internet connectivity, machines, and RHEL (and many other Red Hat product) subscriptions. When the infrastructure was originally stood up it was primarily composed of a set of bare metal machines; workloads were not yet virtualized at the time and many services were running directly on top of the physical nodes. The advent of virtual machines and later containers reshaped how we managed and operated every component. What however remained the same over time was the networking layout of these services: a single L2 domain and a public internet L3 domain shared with other tenants (with both IPv4 and IPv6).
Recent challenges When GNOME’s OpenShift 4 environment was built back in 2020 we had to make specific calls:
- We’d run an OpenShift hyperconverged setup (with storage (Ceph), control plane and workloads running on top of the same subset of nodes)
- The total amount of nodes we received budget for was 3, which meant running with masters.schedulable=true
- We’d keep using our former Ceph cluster (as it had slower disks, a good combination for certain workloads we run); this is however not supported by ODF (OpenShift Data Foundation) and would have required some glue to make it completely functional
- Migrating GNOME’s private L2 network to L3 would have required an effort from Red Hat’s IT Network Team, who generally contribute outside of their working hours, so no changes were planned in this regard
- No changes were planned on the networking equipment side to make links redundant, meaning a code upgrade on switches would have required a full services downtime
Over time, with GNOME’s users and contributors base growing (46k users registered in GitLab, 7.44B requests and 50T of traffic per month on services we host on OpenShift, kindly served by Fastly’s load balancers), we started noticing that some of our original architecture decisions weren’t contributing positively to the platform’s availability. Specifically, every time an OpenShift upgrade was applied, it resulted in a cluster downtime due to the unsupported double ODF cluster layout (one internal and one external to the cluster).
The failure mode was stuck block devices preventing the machines from rebooting, with associated high IO (and general SELinux labeling mismatches); with the same nodes also hosting OCP’s control plane, this resulted in the API and other OCP components becoming unavailable. With no L3 network, we had to create a next-hop of our own to effectively give internet access through NAT to machines without a public internet IP address; this resulted in connectivity outages whenever the target VM went down for a quick maintenance.
Migration to AWS With budget season for FY25 approaching, we struggled to find the necessary funds to finally optimize and fill the gaps of our previous architecture. With this in mind we reached out to the AWS Open Source Program and received a substantial amount for us to be able to fully transition GNOME’s Infrastructure to the public cloud. What we achieved so far:
- Deployed and configured VPC related resources; this step will resolve the need for a next-hop device we have to maintain
- Deployed an OpenShift 4.17 cluster (which uses a combination of network and classic load balancers, an x86 control plane and arm64 workers)
- Deployed IDM nodes that use a WireGuard tunnel between AWS and RAL3 to remain in sync
- Migrated several applications including SSO, Discourse, HedgeDoc
What’s upcoming:
- Migrating away from Splunk to a combination of rsyslog/promtail/loki
- Keep migrating further applications; the idea is to fully decommission the former cluster and GNOME’s presence within Red Hat’s community cage during Q1FY25
- Introduce a replacement for master.gnome.org and GNOME tarballs installation
- Migrate applications to GNOME’s SSO
- Retire services such as GNOME’s wiki (MoinMoin; a static copy will instead be made available), NSD (authoritative DNS servers were outsourced and replaced with ClouDNS and GitHub pipelines for DNS RR updates), Nagios, Prometheus Blackbox (replaced by ClouDNS’ endpoint monitoring service), Ceph (replaced by EBS, EFS, S3)
- Migrate smtp.gnome.org to OSCI in order to maintain the current public IP’s reputation
And the benefits of running GNOME’s services in AWS:
- Scalability: we can easily scale up our worker node pool
- We run our services on top of AWS SDN and can easily create networks and routing tables, and benefit from faster connectivity options and a redundant networking infrastructure
- We use EBS/EFS, don’t have to maintain a self-managed Ceph cluster, and can easily scale volume IOPS
- We use a load balancer local to the VPC: less latency for traffic flowing between the frontend and our VPC
- We have access to AWS services such as AWS Shield for advanced DDoS protection (one DDoS brought down GNOME’s GitLab just a week ago)
I’d like to thank AWS (Tom “spot” Callaway, Mila Zhou) for their sponsorship and the massive opportunity they are giving the GNOME Infrastructure to improve and to provide resilient, stable and highly available workloads to GNOME’s users and contributors. And a big thank you to Red Hat for more than 15 years of continued sponsorship in making the GNOME Infrastructure run smoothly and efficiently; it’s crucial for me to emphasize how critical Red Hat’s long-term support has been.
  • Sam Thursfield: Status update, 16/10/2024 (2024/10/16 11:00)
    I’ve participated in two internships this year, and interns — who are usually busy full-time students — often ask “How do you get time to contribute to open source?”. And the truth is that there’s no secret formula. It’s tricky to get paid to work on something that you give away for free, isn’t it? Mostly I contribute to open source in free time, either after work hours, or occasionally during periods of downtime. To my complete surprise I managed to buy a house this year and so I suddenly don’t have any time after work. During the day most of my time is spent on proprietary customer-specific work, and after work I go to look at the house and try to figure out where to start with the whole thing. (By the way, does anyone around Santiago need a load of 1980s-style furniture made from chipboard?) I’ll still be participating in GNOME around desktop search and the openQA tests, answering questions and triaging bug reports, but I won’t be driving any new stuff forwards. Anyway, why is it interesting to blog about things I’m not doing? I read this quote in LWN the other day: Make it easy to quit – Actively celebrate people who step back from maintainer positions. Celebrate what they accomplished and what they are moving on to. Don’t punish or otherwise shame quitting. This also incentivizes other people to step up, knowing that they don’t necessarily have to do it forever. — Rich Bowen, “Open Source Summit Vienna 2024” At least in GNOME, we often don’t do this. We don’t celebrate what people *have achieved*, with I think one exception (the legendary “Pants of Thanks” ceremony). We should do better at this. It’s not that we don’t appreciate each other’s work. But mostly we require the person doing the work to also be the one shouting loudly about it, before we notice. Is there a better way? Another thing we don’t do, by the way, is celebrate corporate participation. 
The great exception to this is the STF grant, and everyone involved in that did an excellent job of highlighting work which the STF grant enabled. We’re less good at crediting all the work that happens thanks to paid engineers from Red Hat, Endless, Canonical, SUSE, and so on. Another quote from this article: Each generation of a project (ie open source but not only open source) is responsible for mentoring the next generation. When you mentor someone, spend time emphasizing that it’s their job to mentor the next person, otherwise they will assume that it’s your job. A failure to communicate this will result in the eventual attrition and death of the community. — Rich Bowen, “Open Source Summit Vienna 2024” I quite like giving conference talks and I’ve been wondering what I could speak about, if I’m not driving any new development myself. We now have 25 years of history in GNOME and it would be nice to give some talks about “How $thing works.” Desktop search comes to mind here, of course. I also learned (against my will) a lot about initial-setup this year. So I might propose some talks along these lines. It also seems like a nice way to look back at work that’s been done over the years, and give credit to the people who have worked on these things over time, doing stuff that’s often invisible. On that topic, I want to highlight the excellent work done over the summer by our two GSoC interns Divyansh Jain and Rachel Tam, adding a web-based IDE to TinySPARQL that can run queries against the GNOME search database. You can read more about that both on Rachel’s blog and on Demigod’s blog. The idea behind this was making it easier to visualize how the LocalSearch index actually works, what is stored there, and what you can do with it. Hopefully this can lead into some interesting talks about search! If you like this post, please leave a comment! You can use the form below, or reply on the Fediverse to @samthursfield.wordpress.com@samthursfield.wordpress.com. I’m also on LinkedIn.
  • Jiri Eischmann: Fedora at LinuxDays 2024 (2024/10/15 15:44)
    Last weekend I went to Prague to represent Fedora at LinuxDays 2024. It’s the biggest Linux event in the country with more than a thousand attendees and the Fedora booth is busy there every year. Like last year the Fedora booth was colocated with the Red Hat booth. It made sense not only because there is a relationship between the two, but for very practical reasons: I was the only person representing and staffing the Fedora booth and I appreciated help from my colleagues who watched over the Fedora booth when I took a break to have a meal or give a talk. Post by @fedoracz@floss.social View on Mastodon The biggest magnet at our booth was again a MacBook running Fedora Asahi Remix. I gave a talk about it which was only 20 minutes long and was intended as a teaser: here is an overview of the project and if you’d like to know and see more, come to our booth. Fortunately, just two days before the conference, the Asahi Linux project announced support for Steam via the Fex/muvm emulation, so I could utilize the large library of games I have a license for on Steam. During the talk someone asked if it could run the Factorio game, and it could, indeed. Post by @fedoracz@floss.social View on Mastodon We also had a Fedora conference box which includes a Fedora Slimbook laptop. It was a nice contrast to the MacBook because Slimbook focuses on Linux whereas Apple doesn’t care about Linux at all. The booth was so busy that it took me two hours to make a post about our presence because I couldn’t find even a few minutes to finish it. I also did a bit of user support. An older gentleman approached our booth stating that he had traveled 100 km to get help. He had a dual boot of Fedora and Ubuntu, and an Ubuntu update had broken the bootloader. Regenerating the GRUB configuration resolved the issue. Pavel Píša, a doctor from the Czech Technical University, invited me to their booth to check out Fedora Linux running on a Milk-V box with a RISC-V CPU. 
I left a flyer regarding an open Fedora QA position for RISC-V because Red Hat is currently looking for someone to test Fedora Linux on RISC-V. Me with the RISC-V box. Original post. Overall, the conference was a great experience, albeit a tiring one. I hope to attend again next year.
  • Richard Hughes: Making it easy to generate fwupd device emulation data (2024/10/11 15:48)
    We’re trying to increase the fwupd coverage score, so we can mercilessly refactor and improve code upstream without risk of regressions. To do this we run thousands of unit tests for each part of the libfwupd public API and libfwupdplugin private API. This gets us a long way, but what we really want to do is emulate the end-to-end firmware update of every real device we support. It’s not trivial (or quick) connecting hundreds of devices to a specific CI machine, and so for some time we’ve supported recording USB device enumeration, re-plug, firmware write, re-re-plug and re-enumeration. For fwupd 2.0.0 we added support for all sysfs-based devices too, which allows us to emulate a real-world NVMe disk doing actual ioctls() and reads() in every submitted CI job. We’re now going to ask vendors to record emulations of the firmware update for existing plugins so we can run those in CI too. The device emulation docs are complicated and there are lots of things that the user can do wrong. What I really wanted was a “click, click, save-as, click” user experience that doesn’t need to use the command line. The tl;dr is that we’ve now added the needed async API in fwupd 2.0.1 (probably going to be released on Monday) and added the click, click UI to gnome-firmware: There’s a slight niggle when the user starts recording the first “internal” device (e.g. an NVMe disk): we need to ask the user to restart the daemon or the computer. This is because we can’t just hotplug the internal non-removable device, and need to “start recording” then “enumerate device(s)” rather than the other way around. Recording all the device enumeration isn’t free in CPU or RAM (and is possibly a security problem too), and so we don’t turn it on by default. All the emulation is also controlled using polkit now, so you need the root password to do anything remotely interesting. 
Some of the strings are a bit unhelpful, and some a bit clunky, so if you see anything that doesn’t look awesome or is hard to translate please tell us and we can fix it up. Of course, even better would be a merge request with a better string. If you want to try it out there’s a COPR with all the right bits for Fedora 41. It might also work on Fedora 40 if you remove gnome-software. I’ll probably switch the Flathub build to 48.alpha when fwupd 2.0.1 is released too. Feedback welcome.
  • GNOME Foundation News: 2024-2025 budget and economic review (2024/10/10 07:45)
    Dear community members, As promised in the previous communication the Board would like to share some more details on our current financial situation and the budget for our 2024-2025 financial year, which runs from 1st October 2024 to 30th September 2025. Background The Foundation needs an approved budget in place because our spending policies use the budget to authorise what staff and committees are allowed to spend money on. This year we passed the budget on time for the start of the financial year, which was thanks to a lot of detailed and particularly challenging work by Richard, which the board is grateful for. We consider the budget in 2 distinct parts: Budget for our fiscally-sponsored projects. We consider their income, but not their expenses. The reason for that is that the Foundation takes a small part of the income as the fiscal sponsorship fee, supporting our administrative and operating costs. Funds received on behalf of other projects are tracked separately, called “reserved funds”, and the Foundation cannot spend money that belongs to the other projects. General operating budget for the GNOME Foundation, which is what this post is all about! At any later point, when talking about the budget, we’re talking about the general/unrestricted operating funds and it is safe to assume that income for fiscally-sponsored projects is not included. The budget for the previous 2023-2024 fiscal year was presented to the board as a roughly balanced break-even budget, anticipating $1.201M of revenue and $1.195M of expenses. The board considered two fundraising scenarios proposed by our previous ED, with the most ambitious scenario planning to raise an additional $2M for the Foundation, and one more conservative which anticipated an additional $475k of revenue from various sources (donations, grants, event sponsorship). This more conservative scenario was included in the budget, but in practice things did not work out as planned. 
This additional funding was not raised, meaning that in practice the Foundation once again ran at a deficit over the past year and used funds from our reserves. The new 2024-2025 budget considers a total income of $586k, and total expense of $550k. Two things are clearly different from last year: the expenses have been greatly reduced, and we have aimed for a surplus instead of the deficit we ended up with last year. Both things were a consequence of the budget from the previous year not being executed as expected. Since our reserve policy requires us to retain enough money to sustain core operations without income for another year (specifically, 1.1 times core spending), we’ve had to reduce expenses to save money and restore our reserves. So, let’s dig into the details: Income $205,100 in donations. This number is based on previous years’ income: individual contributions ($75,000), Advisory Board fees ($105,800), and other small contributions ($7,800) like matching donations (where companies double what employees donate). It also includes $16,500 currently pending from Wau Holland Stiftung, an organization we had a historic agreement with to collect funds from European donors that are tax deductible. We believe that there is great potential for the GNOME Foundation to increase the amount of individual contributions received, and this has been included in the Strategic Plan and many board discussions. Unfortunately, without a permanent Executive Director, we cannot guarantee that we will be able to establish a program to do so in the short term, so we have decided to budget conservatively to ensure economic sustainability. $64,500 from event sponsorship. Most of that money comes from GUADEC ($61,000), with some from LAS and GNOME Asia. This is one of the main reasons why we are able to maintain our events: because they are sponsored separately, they are mostly self-sustaining. $65,500 in fiscal sponsorship fees. 
This is based on a % fee the GNOME Foundation takes for our operational costs from hosting GIMP and Black Python Devs. This number is uncommonly high as we have been working with the GIMP team on financial and legal arrangements to receive approx. $1M of historical Bitcoin donations. (And sell them immediately – holding Bitcoin assets creates a regulatory/reporting problem for US nonprofits and our accountants have advised us against it.) $1,000 in interest from money in the bank account. This is budgeted higher than previous years, as work is already in progress to change bank accounts to increase this income, as recommended by our auditors. $500 profit from selling T-shirts and other goods ($2,500 income, $2,000 in expenses). $250,000 from the 2nd year of an Endless grant that was approved last year. This grant provides $50,000 in general funds that the Foundation can use at its discretion, and $200,000 that needs to be spent on specific tasks. Currently, those are assigned to Flathub, Parental Controls, GNOME Software maintenance, and internships. Some of those will be detailed in the expense section. Expenditures $10,000 interim ED salary. This is to be able to pay Richard to continue managing the Foundation and staff team until 10th December. $100,000 for development contractors for work associated with the Endless grant. This work includes improvements in Parental Controls and GNOME Software, and is being executed by Philip Withnall (development), Sam Hewitt (design) and potentially one more developer over the coming year. Philip gave an update on the work in his presentation at GUADEC. $110,600 in contractor costs for program staff, including events and infrastructure. This covers Kristi’s work, which is the backbone of events such as GUADEC, LAS and GNOME.Asia, and Bart’s work running GNOME and Flathub infrastructure. The Flathub portion of this work is funded by the Endless grant. $32,000 in Outreachy internships. 
This is a long-term partnership with Conservancy and a commitment by the GNOME Foundation as the original birthplace of the Outreachy initiative. It is supported this year by reallocating some of the Endless grant, with their permission. This will pay for a total of 4 interns between the winter and summer cohorts. $20,000 in contractor support. This is allocated for part-time contracting of Thibault Martin and Dawid Jankowiak to support the STF team and work on a crowdfunding platform for our development fundraising. Some of this is funded by the Endless grant and will be spent on coordinating the next steps of the Flathub payments/donations launch. $158,000 in employment/contractor costs for operations and admin staff, supporting the GNOME Foundation across finances, events and community initiatives. $47,500 in professional services, i.e. legal and accounting. These include a reserve for legal fees ($10,000), an external audit of the accounts for the previous financial year ($17,500), which is required due to our income (mostly due to STF) being over the $2M threshold, and accounting fees ($20,000). Some of the financial and legal costs are driven by work setting up Flathub LLC and are covered by the Endless grant. $3,200 in office expenses, mostly related to postal expenses required for sending material between contractors, staff, and event organisers. $54,000 in conferences and travel. These include the budget for the conferences themselves ($30,000), which includes GUADEC, GNOME Asia, and hackathons around the globe, but also travel for staff ($12,000) and community ($12,000). Travel in particular has been significantly reduced from the previous year, but should still allow for staff/organisers to attend our events, and for the travel committee to support some community travel to GUADEC and GNOME Asia. $15,000 in other fees. These include banking costs for sending money from the US to Europe, PayPal fees, and insurance. 
They might seem high, but are in total less than 1.5% of the cash flow of the Foundation, which is within the expected value for any organization. Balance As of the preparation of this budget, we have approx. $140,000 in GNOME Foundation reserves. There’s a lot more money in the bank, but those are reserved funds held for GIMP and BPD. We need to ensure that we meet our reserve policy of retaining 1.1 times core spending. Unfortunately, core spending is fairly loosely defined. This year, we have considered: events and minimal staff travel, part-time infrastructure support, minimal staff, and some fees and professional services. In total, we calculated that we would need at least $158,000 at the end of the year to be able to meet the policy. The approved budget should put our reserves around $176,000 at the year end, which is slightly above our reserve policy. Considering we used a very limited interpretation of the reserves policy, it’s better to include a small safety margin for any unanticipated costs. Conclusion With limited time from our interim Executive Director (ED), Richard Littauer, who is working part-time, the board is prioritising: recruiting our new ED, delivering our current project/grant commitments (to STF and to Endless), and fundraising for development work. This includes working with the community to launch our development fund crowdfunder/platform and planning a follow-up project for the STF grant, so that the GNOME Foundation can support and grow its direct investment in project development. Keen readers will note that there is nothing in the current budget for the ED’s salary. We are in discussions with a potential donor to see whether we can find support for the salary for the ED for the first year. In any case, transparently sharing our financial situation and fundraising needs is an essential part of any ED recruitment process, so we could still recruit somebody with “raise money for your own salary” being their first priority. 
Hopefully this additional detail helps to show the challenges of our current situation, and why we had to make really tough decisions, like parting ways with some greatly appreciated members of our staff team. We hope this sheds some more light on why those decisions were taken, provides confidence on the work done by the board and the ED, and where we currently stand. We are also very relieved to be able to provide a surplus budget for the first time in many years, and doing so while still being able to support the community: events, infrastructure, internships, travel funding, and meeting our commitment to donors for work done in some parts of the stack, e.g.: Flathub, parental controls and GNOME Software. We welcome any feedback and questions from the GNOME community. Thanks to all of our GNOME members, contributors, donors, sponsors and advisory board members! The GNOME Foundation Board of Directors
  • Hubert Figuière: Dev Log September 2024 (2024/10/09 00:00)
    A long overdue dev log. The last one was for September 2023. That's a year. Stuff in life has happened. Compiano In November I switched Compiano to use pipewire directly for sound. This meant removing some bits of UI too. I should look at doing a release, but I have a few blockers I need to tackle first. One key element is that there is a mechanism to download the soundbanks, and for now it doesn't ask consent to do so. I don't want a release without this. Raw thumbnailer I already posted about it. libopenraw A lot has happened on that front. I want it to be a foundation for Niepce and others. First, adding it to glycin triggered an alpha release on crates.io. There started a long cycle of alpha releases; we are at alpha 8 as of now. Various changes: Added the mp4parse crate directly as a module; a key reason is that I already used a fork, which complicated things. Maybe I should make these few bits upstreamable and use upstream. Added a MIME types API. Saved a lot of RAM when doing colour interpolation: it's done in place instead of allocating a new buffer. This is significant as it is done using a 64-bit float per component. Fixed the rendering in many ways; the only thing it still needs is to apply the colour balance. Fixed unpacking or decompression of Olympus and Fuji files. Got an external contribution: Panasonic decompression. This made me fix the loading of uncompressed Panasonic raw too. The most recent Panasonic cameras are still failing though; there is a subtle variant that needs to be handled. Still missing from rendering: recent Nikon and all their exotic variants and compression schemes, Canon CR3, GoPro, Sony. Niepce Not much work was done directly on Niepce in the last few months, but still. Ongoing features A while ago I started some work towards import and a rework of the catalog (the main data storage). The former is the implementation of a workflow that allows importing images into the catalog. 
The latter involves reworking the catalog to become a self-contained storage as a sqlite3 database. One step I already did was to use it to store the catalog preferences instead of a separate file. This should also include fixing the UI for opening, creating and switching catalogs. These two big things are user visible and are a step toward what I want to happen as an internal milestone. Then I can start plugging in the library import and maybe import my picture vault. A good starting point towards managing the collection, but not really for photo editing yet. Gotta make choices. Images Implemented support for HEIF, which is being adopted by camera manufacturers. I updated the RT engine to 5.11, which came with RawTherapee 5.11. This is still a soft fork of the code base to strip out Gtk3 and use a more recent version of glibmm. The latter patch might no longer be needed as I have since removed gtkmm from Niepce. I also implemented the GEGL pipeline using gegl-rs, which I took over to make it useful. At some point I shall try to figure out how to write a loader in Rust to use libopenraw with GEGL. Cleanups The UI is slowly moving to use blueprint, and I removed all the first-party C++ code outside of bindings; no more Gtkmm, and Glibmm is only here because the RT engine needs it. Other Stuff I contributed to. STF I took part in the STF effort and worked on fixing issues with the desktop portals. The big chunk of the work related to the USB portal, taking over code by Georges that is itself based on code by Ryan. It spreads through multiple components of the stack: flatpak, xdg-desktop-portal, xdg-desktop-portal-gnome, libportal and ashpd. I also did a bunch of fixes for bugs, crashes, memory leaks, etc. in flatpak, flatpak-builder, and the rest of the stack. I also implemented issue #1 for flatpak-builder: easy renaming of MIME files and icons, and also properly fixing the id in the appstream file, a common problem in flatpak. Glycin Glycin is a sandboxed image loader. 
I did implement the raw camera loader using libopenraw. It's written in Rust, and so is libopenraw now. Thank you Sophie for merging it. Poppler Jeff was complaining about a file being super slow, with a sysprof flamegraph. That piqued my curiosity and I looked at it. The peculiarity of the document is that it has 16000 pages and a lot of cross references. This led to two patches: The first one is in 24.06. There was a loop calling for the length of the container at each iteration. Turns out this is protected by a mutex, and that's a lot of time spent for nothing since the value is immutable. Call it once before the loop and voilà. The other one, merged for 24.09, changes that loop to be a hash table lookup. The problem is that it wants to locate the page by object reference, but iterates through the page list. Lots of refs and lots of pages mean even more iterations. The more complex approach is that when building the page cache (it's done on demand), we build a reference-to-page-index map. And the slow code is no longer slow and almost disappears from the flamegraphs. This makes Evince and Okular faster at opening that document and any other presenting similar attributes: a lot of bookmarks.
  • Tobias Bernard: Boiling The Ocean Hackfest (2024/10/05 18:00)
    Last weekend we had another edition of last year’s post-All Systems Go hackfest in Berlin. This year it was even more of a collaborative event with friends from other communities, particularly postmarketOS. Topics included GNOME OS, postmarketOS, systemd, Android app support, hardware enablement, app design, local-first sync, and many other exciting things. This left us with an awkward branding question, since we didn’t want to name the event after one specific community or project. Initially we had a very long and unpronounceable acronym (LMGOSRP), but I couldn’t bring myself to use that on the announcement post so I went with something a bit more digestible :) “Boiling The Ocean” refers to the fact that this is what all the hackfest topics share in common: They’re all very difficult long-term efforts that we expect to still be working on for years before they fully bear fruit. A second, mostly incidental, connotation is that the ocean (and wider biosphere) is currently being boiled thanks to the climate crisis, and that much of our work has a degrowth or resilience angle (e.g. running on older devices or local-first). I’m not going to try to summarize all the work done at the event since there were many different parallel tracks, many of which I didn’t participate in. Here’s a quick summary of a few of the things I was tangentially involved in; hopefully others will do their own write-ups about what they were up to. Mobile Mainline Linux on ex-Android phones was a big topic, since there were many relevant actors from this space present. This includes the postmarketOS crew, Robert with his camera work, and Jonas and Caleb who are still working on Android app support via Alien Dalvik. To me, one of the most exciting things here is that we’re seeing more well-supported Qualcomm devices (in addition to everyone’s favorite, the OnePlus 6) these days thanks to all the work being done by Caleb and others on that stack. 
Between this, the progress on cameras, and the Android app support maybe we can finally do the week-long daily driving challenge we’ve wanted to do for a while at GUADEC 2025 :) Design On Thursday night we already did a bit of pre-event hacking at a cafe, and I had an impromptu design session with Luca about eSIM support. He has an app for this at the moment, though of course ideally this should just be in Settings longer-term. For now we discussed how to clean up the UI a bit and bring it more in line with the HIG, and I’ll push some updates to the cellular settings mockups based on this soon. On Friday I looked into a few Papers things with Pablo, in particular highlights/annotations. I pushed the new mockups, including a new way to edit annotations. It’s very exciting to see how energetic the Papers team is, huge kudos to Pablo, Qiu, Markus, et al for revitalizing this app <3 On Saturday I sat down with fellow GNOME design contributor Philipp, and looked at a few design questions in Decibels and Calendar. One of my main takeaways is that we should take a fresh look at the adaptive Calendar layout now that we have Adwaita breakpoints and multi-layout. 47 Release Party On Saturday night we had the GNOME 47 release party, featuring a GNOME trivia quiz. Thanks to Ondrej for preparing it, and congrats to the winners: Adrian, Marvin, and Stefan :) Local-First Adrian and Andreas from p2panda had some productive discussions about a longer-term plan for a local-first sync system, and immediate next steps in that direction. We have a first collaboration planned in the form of a Hedgedoc-style local-first syncing pad, codenamed “Aardvark” (initial mockups). This will be based on a new, more modular version of p2panda (still WIP, but to be released later this year). 
Longer-term the idea is to have some kind of shared system-level daemon so multiple apps can use the same syncing infrastructure, but for now we want to test this architecture in a self-contained app since it’s much easier to iterate on. There’s no clear timeline for this yet, but we’re aiming to start this work around the end of the year. GNOME OS On Sunday we had a GNOME OS planning meeting with Adrian, Abderrahim, and the rest of the GNOME OS team (remote). The notes are here if you’re interested in the details, but the upshot is that the transition to the next-generation stack using systemd sysupdate and homed is progressing nicely (thanks to the work Adrian and Codethink have been doing for our Sovereign Tech Fund project). If all goes to plan we’ll complete both of these this cycle, making GNOME OS 48 next spring a real game changer in terms of security and reliability. Community Despite the very last-minute announcement and some logistical back and forth the event worked out beautifully, and we had over 20 people joining across the various days. In addition to the usual suspects I was happy to meet some newcomers, including from outside Berlin and outside the typical desktop crowd. Thanks for joining everyone! Thanks also to Caleb and Zeeshan for helping with organization, and to the venues that hosted us across the various days: offline, a community space in Neukölln; JUCR, for hosting us in their very cool Kreuzberg office and even paying for drinks and food; and the x-hain hackerspace in Friedrichshain. See you next time!
  • Peter Hutterer: HIOCREVOKE merged for kernel 6.12 (2024/10/04 00:27)
    TLDR: if you know what EVIOCREVOKE does, the same now works for hidraw devices via HIDIOCREVOKE. The HID standard is the most common hardware protocol for input devices. In the Linux kernel HID is typically translated to the evdev protocol which is what libinput and all Xorg input drivers use. evdev is the kernel's input API and used for all devices, not just HID ones. evdev is mostly compatible with HID but there are quite a few niche cases where they differ a fair bit. And some cases where evdev doesn't work well because of different assumptions, e.g. it's near-impossible to correctly express a device with 40 generic buttons (as opposed to named buttons like "left", "right", ...[0]). In particular for gaming devices it's quite common to access the HID device directly via the /dev/hidraw nodes. And of course for configuration of devices accessing the hidraw node is a must too (see Solaar, openrazer, libratbag, etc.). Alas, /dev/hidraw nodes are only accessible as root - right now applications work around this by either "run as root" or shipping udev rules tagging the device with uaccess. evdev too can only be accessed as root (or the input group) but many many moons ago when dinosaurs still roamed the earth (version 3.12 to be precise), David Rheinsberg merged the EVIOCREVOKE ioctl. When called, the file descriptor immediately becomes invalid; any further reads/writes will fail with ENODEV. This is a cornerstone for systemd-logind: it hands out a file descriptor via DBus to Xorg or the Wayland compositor but keeps a copy. On VT switch it calls the ioctl, thus preventing any events from reaching said X server/compositor. In turn this means that a) X no longer needs to run as root[1] since it can get input devices from logind and b) X loses access to those input devices at logind's leisure so we don't have to worry about leaking passwords. Fast forward to 2024, and kernel 6.12 has now gained the HIDIOCREVOKE for /dev/hidraw nodes. 
The corresponding logind support has also been merged. The principle is the same: logind can hand out an fd to a hidraw node and can revoke it at will, so we don't have to worry about data leakage to processes that should no longer receive events. This is the first of many steps towards more general HID support in userspace. It's not immediately usable since logind will only hand out those fds to the session leader (read: compositor or Xorg), so if you as an application want that fd you need to convince your display server to give it to you. For that we may have something like the inputfd Wayland protocol (or maybe a portal, but right now it seems a Wayland protocol is more likely). But that aside, it's worth a hooray nonetheless. One step down, many more to go. One of the other side-effects of this is that logind now has an fd to any device opened by a user-space process. With HID-BPF this means we can eventually "firewall" these devices from malicious applications: we could e.g. allow libratbag to configure your mouse's buttons but block any attempts to upload a new firmware. This is very much an idea for now, there's a lot of code that needs to be written to get there. But getting there we can now, so full of optimism we go[2].
[0] to illustrate: the button that goes back in your browser is actually evdev's BTN_SIDE and BTN_BACK is ... just another button assigned to nothing particular by default.
[1] and c) I have to care less about X server CVEs.
[2] mind you, optimism is just another word for naïveté
  • Andy Wingo: preliminary notes on a nofl field-logging barrier (2024/10/03 08:54)
    When you have a generational collector, you aim to trace only the part of the object graph that has been allocated recently. To do so, you need to keep a remembered set: a set of old-to-new edges, used as roots when performing a minor collection. A language run-time maintains this set by adding write barriers: little bits of collector code that run when a mutator writes to a field.

Whippet’s nofl space is a block-structured space that is appropriate for use as an old generation or as part of a sticky-mark-bit generational collector. It used to have a card-marking write barrier; see my article diving into V8’s new write barrier, for more background.

Unfortunately, when running whiffle benchmarks, I was seeing no improvement for generational configurations relative to whole-heap collection. Generational collection was doing fine in my tiny microbenchmarks that are part of Whippet itself, but when translated to larger programs (that aren’t yet proper macrobenchmarks), it was a lose.

I had planned on doing some serious tracing and instrumentation to figure out what was happening, and thereby correct the problem. I still plan on doing this, but for this issue I used the old noggin technique instead: just, you know, thinking about the thing, eventually concluding that unconditional card-marking barriers are inappropriate for sticky-mark-bit collectors. As I mentioned in the earlier article:

    An unconditional card-marking barrier applies to stores to slots in all objects, not just those in oldspace; a store to a new object will mark a card, but that card may contain old objects which would then be re-scanned. Or consider a store to an old object in a more dense part of oldspace; scanning the card may incur more work than needed. It could also be that Whippet is being too aggressive at re-using blocks for new allocations, where it should be limiting itself to blocks that are very sparsely populated with old objects.

That’s three problems. The second is well-known. 
But the first and last are specific to sticky-mark-bit collectors, where pages mix old and new objects.

a precise field-logging write barrier

Back in 2019, Steve Blackburn’s paper Design and Analysis of Field-Logging Write Barriers took a look at the state of the art in precise barriers that record not regions of memory that have been updated, but the precise edges (fields) that were written to. He ends up re-using this work later in the 2022 LXR paper (see §3.4), where the write barrier is used for deferred reference counting and a snapshot-at-the-beginning (SATB) barrier for concurrent marking. All in all field-logging seems like an interesting strategy. Relative to card-marking, work during the pause is much less: you have a precise buffer of all fields that were written to, and you just iterate that, instead of iterating objects. Field-logging does impose some mutator cost, but perhaps the payoff is worth it.

To log each old-to-new edge precisely once, you need a bit per field indicating whether the field is logged already. Blackburn’s 2019 write barrier paper used bits in the object header, if the object was small enough, and otherwise bits before the object start. This requires some cooperation between the collector, the compiler, and the run-time that I wasn’t ready to pay for. The 2022 LXR paper was a bit vague on this topic, saying just that it used “a side table”.

In Whippet’s nofl space, we have a side table already, used for a number of purposes:

  - Mark bits.
  - Iterability / interior pointers: is there an object at a given address? If so, it will have a recognizable bit pattern.
  - End of object, to be able to sweep without inspecting the object itself.
  - Pinning, allowing a mutator to prevent an object from being evacuated, for example because a hash code was computed from its address.
  - A hack to allow fully-conservative tracing to identify ephemerons at trace-time; this re-uses the pinning bit, since in practice such configurations never evacuate.
  - Bump-pointer allocation into holes: the mark byte table serves the purpose of Immix’s line mark byte table, but at finer granularity. Because of this though, it is swept lazily rather than eagerly.
  - Generations. Young objects have a bit set that is cleared when they are promoted.

Well. Why not add another thing? The nofl space’s granule size is two words, so we can use two bits of the byte for field logging bits. If there is a write to a field, a barrier would first check that the object being written to is old, and then check the log bit for the field being written. The old check will be to a byte that is nearby or possibly the same as the one to check the field logging bit. If the bit is unset, we call out to a slow path to actually record the field.

preliminary results

I disassembled the fast path as compiled by GCC and got something like this on x86-64, in AT&T syntax, for the young-generation test:

    mov %rax,%rdx
    and $0xffffffffffc00000,%rdx
    shr $0x4,%rax
    and $0x3ffff,%eax
    or %rdx,%rax
    testb $0xe,(%rax)

The first five instructions compute the location of the mark byte, from the address of the object (which is known to be in the nofl space). If it has any of the bits in 0xe set, then it’s in the old generation. Then to test a field logging bit it’s a similar set of instructions. 
In one of my tests the data type looks like this:

    struct Node {
      uintptr_t tag;
      struct Node *left;
      struct Node *right;
      int i, j;
    };

Writing the left field will be in the same granule as the object itself, so we can just test the byte we fetched for the logging bit directly with testb against $0x80. For right, we should be able to know it’s in the same slab (aligned 4 MB region) and just add to the previously computed byte address, but the C compiler doesn’t know that right now and so recomputes. This would work better in a JIT. Anyway I think these bit-swizzling operations are just lost in the flow of memory accesses.

For the general case where you don’t statically know the offset of the field in the object, you have to compute which bit in the byte to test:

    mov %r13,%rcx
    mov $0x40,%eax
    shr $0x3,%rcx
    and $0x1,%ecx
    shl %cl,%eax
    test %al,%dil

Is it good? Well, it improves things for my whiffle benchmarks, relative to the card-marking barrier, seeing a 1.05×-1.5× speedup across a range of benchmarks. I suspect the main advantage is in avoiding the “unconditional” part of card marking, where a write to a new object could cause old objects to be added to the remembered set. There are still quite a few whiffle configurations in which the whole-heap collector outperforms the sticky-mark-bit generational collector, though; I hope to understand this a bit more by building a more classic semi-space nursery, and comparing performance to that.

Implementation links: the barrier fast-path, the slow path, and the sequential store buffers. (At some point I need to make it so that allocating edge buffers in the field set causes the nofl space to page out a corresponding amount of memory, so as to be honest when comparing GC performance at a fixed heap size.)

Until next time, onwards and upwards!
  • Hans de Goede: IPU6 camera support in Fedora 41 (2024/10/02 18:06)
    I'm happy to announce that the last tweaks have landed and that the fully FOSS libcamera software ISP based IPU6 camera support in Fedora 41 now has no known bugs left. See the Changes page for testing instructions.

Supported hardware

Unlike USB UVC cameras, where all cameras work with a single kernel driver, MIPI cameras like the Intel IPU6 cameras require multiple drivers. The IPU6 input-system CSI receiver driver is common to all laptops with an IPU6 camera, but different laptops use different camera sensors and each sensor needs its own driver. Then there are glue ICs like the LJCA USB IO-expander and the iVSC (Intel Visual Sensing Controller), and there is also the ipu-bridge code which translates Windows-oriented ACPI tables with sensor info into the fwnodes which the Linux drivers expect.

This means that even though IPU6 support has landed in Fedora 41, not all laptops with an IPU6 camera will work. Currently the IPU6 integrated in the following CPU models works, if the sensor + glue hw/sw is also supported:

  - Tiger Lake
  - Alder Lake
  - Raptor Lake

Jasper Lake and Meteor Lake also have an IPU6, but there is some more integration work necessary to get things to work there. Getting Meteor Lake IPU6 cameras to work is high on my TODO list.

The mainline kernel IPU6 CSI receiver + libcamera software ISP has been successfully tested on the following models:

  - Various Lenovo ThinkPad models with ov2740 (INT3474) sensor (1)
  - Various Dell models with ov01a10 (OVTI01A0) sensor
  - Dell XPS 13 Plus with ov13b10 (OVTIDB10/OVTI13B1)
  - Some HP laptops with hi556 sensor (INT3537)

To see which sensor your laptop has, run "ls /sys/bus/i2c/devices"; this will show e.g. "i2c-INT3474:00" if you have an ov2740, with INT3474 being the ACPI Hardware ID (HID) for the sensor. See here for a list of currently known HID to sensor mappings. Note that not all of these have upstream drivers yet. 
In that case, chances are that there might be a sensor driver for your sensor here. We could really use help getting drivers from there submitted upstream. So if you have a laptop with a sensor which is not in the mainline kernel but is available there, you know a bit of C programming, and you are willing to help, then please drop me an email so that we can work together to get the driver upstream.

1) On some ThinkPads the ov2740 sensor fails to start streaming most of the time. I plan to look into this next week and hopefully I can come up with a fix.

MIPI camera integration work done for Fedora 41

After landing the kernel IPU6 CSI receiver and libcamera software ISP support upstream early in the Fedora 41 cycle, there still was a lot of work to do with regards to integrating this into the rest of the stack so that the cameras can actually be used outside of the qcam test app. The whole stack looks like this: "kernel → libcamera → pipewire | pipewire-camera-consuming-app", where the 2 currently supported pipewire-camera consuming apps are Firefox and GNOME Snapshot.

Once this was all up and running, testing found quite a few bugs which have all been fixed now:

  - Firefox showing 13 different cameras in its camera selection pulldown for a single IPU6 camera (fix).
  - Installing pipewire-plugin-libcamera leads to UVC cameras being powered on all the time, causing significant battery drain (bug, bug, discussion, fix).
  - Pipewire does not always recognize cameras on login (bug, bug, bug, fix).
  - Pipewire fails to show cameras with relative controls (fix).
  - spa_libcamera_buffer_recycle sometimes fails, causing the stream to freeze on the first frame (bug, fix).
  - Firefox chooses a bad default resolution of 640x480. I worked with Jan Grulich to get this fixed and this is fixed as of firefox-130.0.1-3.fc41. Thank you Jan!
  - Snapshot prefers 4:3 mode, e.g. 1280x1080 on 16:9 camera sensors capable of 1920x1080 (pending fix).
  - Added intel-vsc-firmware, pipewire-plugin-libcamera, libcamera-ipa to the Fedora 41 Workstation default package-set (pull, pull, pull).
  • Tobias Bernard: Berlin Mini GUADEC 2024 (2024/10/01 23:54)
    It’s been over two months but I still haven’t gotten around to writing a blog post about this year’s Berlin Mini GUADEC. I still don’t have time to write a longer post, but instead of putting this off forever I thought I’d at least share a few photos. Overall I think our idea of running this as a self-organized event worked out great. The community (both Berlin locals and other attendees) really came together to make it a success, despite the difficult circumstances. Thanks in particular to Jonas Dreßler for taking care of recording and streaming the talks, Ondřej Kolín and Andrei Zisu for keeping things on track during the event, and Sonny Piers for helping with various logistical things before the event.     Thanks to everyone who helped to make it happen, and see you next year!
  • Carlos Garcia Campos: Graphics improvements in WebKitGTK and WPEWebKit 2.46 (2024/09/27 10:30)
    WebKitGTK and WPEWebKit recently released a new stable version 2.46. This version includes important changes in the graphics implementation.

Skia

The most important change in 2.46 is the introduction of Skia to replace Cairo as the 2D graphics renderer. Skia supports rendering using the GPU, which is now the default, but we also use it for CPU rendering using the same threaded rendering model we had with Cairo. The architecture hasn’t changed much for GPU rendering: we use the same tiled rendering approach, but buffers for dirty regions are rendered in the main thread as textures. The compositor waits for textures to be ready using fences and copies them directly to the compositor texture. This was the simplest approach that already resulted in much better performance, especially on the desktop with more powerful GPUs. In embedded systems, where GPUs are not so powerful, it’s still better to use the CPU with several rendering threads in most cases. It’s still too early to announce anything, but we are already experimenting with different models to improve the performance even more and make better usage of the GPU in embedded devices.

Skia has received several GCC-specific optimizations lately, but it’s always more optimized when built with clang. The optimizations are more noticeable in performance when using the CPU for rendering. For this reason, since version 2.46 we recommend building WebKit with clang for the best performance. GCC is still supported, of course, and performance when built with GCC is quite good too.

HiDPI

Even though there aren’t specific changes about HiDPI in 2.46, users of high resolution screens using a device scale factor bigger than 1 will notice much better performance thanks to scaling being a lot faster on the GPU.

Accelerated canvas

The 2D canvas can be accelerated independently of whether the CPU or the GPU is used for painting layers. 
In 2.46 there’s a new setting WebKitSettings:enable-2d-canvas-acceleration to control the 2D canvas acceleration. In some embedded devices the combination of CPU rendering for layer tiles and GPU for the canvas gives the best performance. The 2D canvas is normally rendered into an image buffer that is then painted in the layer as an image. We changed that for the accelerated case, so that the canvas is now rendered into a texture that is copied to a compositor texture to be directly composited instead of painted into the layer as an image. In 2.46 the offscreen canvas is enabled by default. There are more cases where accelerating the canvas is not desired, for example when the canvas size is not big enough it’s faster to use the GPU. Also when there’s going to be many operations to “download” pixels from GPU. Since this is not always easy to predict, in 2.46 we added support for the willReadFrequently canvas setting, so that when set by the application when creating the canvas it causes the canvas to be always unaccelerated. Filters All the CSS filters are now implemented using Skia APIs, and accelerated when possible. The most noticeable change here is that sites using blur filters are no longer slow. Color spaces Skia brings native support for color spaces, which allows us to greatly simplify the color space handling code in WebKit. WebKit uses color spaces in many scenarios – but especially in case of SVG and filters. In case of some filters, color spaces are necessary as some operations are simpler to perform in linear sRGB. The good example of that is feDiffuseLighting filter – it yielded wrong visual results for a very long time in case of Cairo-based implementation as Cairo doesn’t have a support for color spaces. At some point, however, Cairo-based WebKit implementation has been fixed by converting pixels to linear in-place before applying the filter and converting pixels in-place back to sRGB afterwards. 
Such workarounds are no longer necessary: with Skia, all the pixel-level operations are handled in a color-space-transparent way as long as proper color space information is provided. This not only makes the results of some filters correct, but also improves performance and opens new possibilities for acceleration.

Font rendering

Font rendering is probably the most noticeable visual change after the Skia switch, and feedback has been mixed. Some people reported that several sites look much better, while others reported problems with kerning on other sites. In other cases it’s not really better or worse, it’s just that we were used to the way fonts were rendered before.

Damage tracking

WebKit already tracks the area of the layers that has changed to paint only the dirty regions. This means that we only repaint the areas that changed, but the compositor incorporates them and the whole frame is always composited and passed to the system compositor. In 2.46 there’s experimental code to track the damage regions and pass them to the system compositor in addition to the frame. Since this is experimental it’s disabled by default, but it can be enabled with the runtime feature PropagateDamagingInformation. There’s also a UnifyDamagedRegions feature that can be used in combination with PropagateDamagingInformation to unify the damage regions into one before passing it to the system compositor. We still need to analyze the impact of damage tracking on performance before enabling it by default. We have also started an experiment to use the damage information in the WebKit compositor and avoid compositing the entire frame every time.

GPU info

Working on graphics can be really hard on Linux: there are too many variables that can result in different outputs for different users, such as the driver version, the kernel version, the system compositor, the EGL extensions available, etc. 
When something doesn’t work for some people and work for others, it’s key for us to gather as much information as possible about the graphics stack. In 2.46 we have added more useful information to webkit://gpu, like the DMA-BUF buffer format and modifier used (for GTK port and WPE when using the new API). Very often the symptom is the same, nothing is rendered in the web view, even when the causes could be very different. For those cases, it’s even more difficult to gather the info because webkit://gpu doesn’t render anything either. In 2.46 it’s possible to load webkit://gpu/stdout to get the information as a JSON directly in stdout. Sysprof Another common symptom for people having problems is that a particular website is slow to render, while for others it works fine. In these cases, in addition to the graphics stack information, we need to figure out where we are slower and why. This is very difficult to fix when you can’t reproduce the problem. We added initial support for profiling in 2.46 using sysprof. The code already has some marks so that when run under sysprof we get useful information about timings of several parts of the graphics pipeline. Next This is just the beginning, we are already working on changes that will allow us to make a better use of both the GPU and CPU for the best performance. We have also plans to do other changes in the graphics architecture to improve synchronization, latency and security. Now that we have adopted sysprof for profiling, we are also working on improvements and new tools.