Freedesktop Planet - Latest News

  • André Almeida: Linux 6.14, an almost forgotten release (2025/03/28 00:00)
Linux 6.14 is the second release of 2025 and, as usual, Igalia took part in it. It’s a very normal release, except that it was released on a Monday instead of the usual Sunday, as has been the tradition for years now. The reason behind this? Well, quoting Linus himself: “I’d like to say that some important last-minute thing came up and delayed things. But no. It’s just pure incompetence.” But we did not forget about it, so here’s our Linux 6.14 blog post!

Part of the development cycle for this release happened during late December, when a lot of maintainers and developers were taking their well-deserved breaks. As a result, this release contains fewer changes than usual, with LWN describing it as the “lowest level of merge-window activity seen in years”. Nevertheless, some cool features made it into this release:

NT synchronization primitives: Elizabeth Figura, from CodeWeavers, is known for her work on improving Wine’s synchronization functions, like mutexes and semaphores. She was one of the main collaborators behind the futex_waitv() work, and she has now developed a virtual driver that is more compliant with the precise semantics that the NT kernel exposes. This allows Wine to behave closer to Windows without the need to create new syscalls, since this driver uses ioctl() as the front-end uAPI.

RWF_UNCACHED: Linux has two ways of dealing with storage I/O: buffered I/O (usually the preferred one), which stores data in a temporary buffer and regularly syncs the cached data with the device; and direct I/O, which doesn’t use the cache and always writes/reads synchronously with the storage device. Now a new mixed approach is available: uncached buffered I/O. This method aims to provide a fast way to write or read data that will not be needed again in the short term. For reading, the device writes data into the buffer and, as soon as the user finishes reading it, it’s cleared from the cache. For writing, as soon as userspace fills the cache, the device reads it and removes it from the cache. This way we still get the advantage of a fast cache while reducing cache pressure (a minimal usage sketch follows below).

amdgpu panic support: AMD developers added kernel panic support to the amdgpu driver, “which displays a pretty user friendly message on the screen when a Linux kernel panic occurs”, instead of just a black screen or a partial dmesg log.

As usual, Kernel Newbies provides a very good summary; you should check it for more details: Linux 6.14 changelog. Now let’s jump in and see Igalia’s merged contributions for this release!

DRM

For the DRM common infrastructure, we helped land a standardization of DRM client memory usage reporting. Additionally, we contributed improvements and bug fixes for the AMD, Intel, Broadcom, and Vivante drivers.

AMDGPU

For the AMD driver, we fixed bugs experienced by users of the Cosmic Desktop Environment on several AMD hardware versions. One was uncovered with the introduction of overlay cursor mode: a definition mismatch across the display driver caused a page fault when multiple overlay planes were in use. Another bug was a division by zero in the plane scaling calculations. We also fixed regressions in VRR and MST introduced by the series of changes migrating the AMD display driver from open-coded EDID handling to the drm_edid struct.

Intel

For the Intel drivers, we fixed a bug in the xe GPU driver which prevented certain types of workarounds from being applied, helped with the maintainership of the i915 driver, handled external code contributions, maintained the development branch and sent several pull requests.
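To make the RWF_UNCACHED description above concrete, here is a minimal, hedged sketch of an uncached buffered write. It assumes a 6.14-era uapi that exposes the flag under this name; the fallback value and the file name are placeholders to double-check against your own kernel headers:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #ifndef RWF_UNCACHED
    /* Assumption: not yet in this libc; the value must match the kernel's
     * uapi <linux/fs.h> for your tree, so verify it before relying on it. */
    #define RWF_UNCACHED 0x00000080
    #endif

    int main(void)
    {
        int fd = open("scratch.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[1 << 20];
        memset(buf, 'x', sizeof(buf));
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };

        /* Buffered write whose pages are dropped from the page cache once
         * written back: the speed of the cache without the cache pressure. */
        ssize_t n = pwritev2(fd, &iov, 1, -1, RWF_UNCACHED);
        if (n < 0)
            perror("pwritev2"); /* expect an error on kernels without support */

        close(fd);
        return 0;
    }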
Raspberry Pi (V3D)

We fixed GPU resets on the Raspberry Pi 4, which we found to be broken thanks to a user bug report. Also in the V3D driver, the active performance monitor is now properly stopped before being destroyed, addressing a potential use-after-free issue. Additionally, support for a global performance monitor has been added via a new DRM_IOCTL_V3D_PERFMON_SET_GLOBAL ioctl. This allows all jobs to share a single, globally configured perfmon, enabling more consistent performance tracking and paving the way for integration with user-space tools such as perfetto.

(Video: a small video demo of perfetto integration with V3D.)

etnaviv

On the etnaviv side, fdinfo support has been implemented to expose memory usage statistics per file descriptor, enhancing observability and debugging capabilities for memory-related behavior (a small sketch of reading these stats follows below).

sched_ext

Many BPF schedulers (e.g., scx_lavd) frequently call bpf_ktime_get_ns() to track tasks’ runtime properties. bpf_ktime_get_ns() eventually reads a hardware timestamp counter (TSC). However, reading a hardware TSC is not performant on some hardware platforms, degrading instructions per cycle (IPC). We addressed the performance problem of reading the hardware TSC by leveraging the rq clock in the scheduler core, introducing a scx_bpf_now() function for BPF schedulers. Whenever the rq clock is fresh and valid, scx_bpf_now() provides the rq clock, which is already updated by the scheduler core, so it can avoid reading the hardware TSC. Using scx_bpf_now() reduces the number of hardware TSC reads by 50-80% (e.g., 76% for scx_lavd). A hedged fragment follows below.

Assorted kernel fixes

Continuing our efforts to clean up kernel bugs, we provided a few fixes that address issues reported by syzbot, with the goal of increasing stability and security, leveraging the fuzzing capabilities of syzkaller to surface bugs that are hard to notice otherwise. We’re addressing bug reports from different kernel areas, including drivers and core subsystems such as the memory manager. As part of this effort, several fixes landed in the probe path of the rtlwifi driver.
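Since the etnaviv work above surfaces per-fd memory stats through fdinfo, here is a small sketch of how such stats can be read from userspace; the render node path is an assumption, and the exact drm-* keys depend on the driver:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Path is an assumption; pick the render node of the device you care about. */
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        char path[64];
        snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);

        FILE *f = fopen(path, "r");
        if (!f) { perror("fopen"); return 1; }

        char line[256];
        while (fgets(line, sizeof(line), f)) {
            /* Keep only the DRM client stats (drm-driver, memory keys, ...). */
            if (strncmp(line, "drm-", 4) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        close(fd);
        return 0;
    }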
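And for the sched_ext item, a hedged fragment (not a complete scheduler) of what the substitution looks like in a BPF scheduler, assuming the scx headers shipped under the kernel’s tools/sched_ext; the ops and names here are illustrative:

    #include <scx/common.bpf.h>

    char _license[] SEC("license") = "GPL";

    /* Called when a task starts running: stamp the time with scx_bpf_now()
     * instead of bpf_ktime_get_ns(), avoiding a hardware TSC read whenever
     * the rq clock is fresh and valid. */
    void BPF_STRUCT_OPS(demo_running, struct task_struct *p)
    {
        u64 now = scx_bpf_now();
        /* ... store `now` per task; compute the runtime delta in ops.stopping ... */
    }

    SCX_OPS_DEFINE(demo_ops,
                   .running = (void *)demo_running,
                   .name    = "demo");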
Check the complete list of Igalia’s contributions for the 6.14 release:

Authored (38)

Changwoo Min:
- sched_ext: Relocate scx_enabled() related code
- sched_ext: Implement scx_bpf_now()
- sched_ext: Add scx_bpf_now() for BPF scheduler
- sched_ext: Add time helpers for BPF schedulers
- sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now()
- sched_ext: Use time helpers in BPF schedulers
- sched_ext: Fix incorrect time delta calculation in time_delta()

Christian Gmeiner:
- drm/v3d: Stop active perfmon if it is being destroyed
- drm/etnaviv: Add fdinfo support for memory stats
- drm/v3d: Add DRM_IOCTL_V3D_PERFMON_SET_GLOBAL

Luis Henriques:
- fuse: fix possible deadlock if rings are never initialized

Maíra Canal:
- drm/v3d: Fix performance counter source settings on V3D 7.x
- drm/v3d: Fix miscellaneous documentation errors
- drm/v3d: Assign job pointer to NULL before signaling the fence
- drm/v3d: Don’t run jobs that have errors flagged in its fence
- drm/v3d: Set job pointer to NULL when the job’s fence has an error

Melissa Wen:
- drm/amd/display: fix page fault due to max surface definition mismatch
- drm/amd/display: increase MAX_SURFACES to the value supported by hw
- drm/amd/display: fix divide error in DM plane scale calcs
- drm/amd/display: restore invalid MSA timing check for freesync
- drm/amd/display: restore edid reading from a given i2c adapter

Ricardo Cañuelo Navarro:
- mm,madvise,hugetlb: check for 0-length range after end address adjustment
- mm: shmem: remove unnecessary warning in shmem_writepage()

Rodrigo Siqueira:
- MAINTAINERS: Change my role from Maintainer to Reviewer
- mailmap: Add entry for Rodrigo Siqueira

Thadeu Lima de Souza Cascardo:
- wifi: rtlwifi: do not complete firmware loading needlessly
- wifi: rtlwifi: rtl8192se: rise completion of firmware loading as last step
- wifi: rtlwifi: wait for firmware loading before releasing memory
- wifi: rtlwifi: fix init_sw_vars leak when probe fails
- wifi: rtlwifi: usb: fix workqueue leak when probe fails
- wifi: rtlwifi: remove unused check_buddy_priv
- wifi: rtlwifi: destroy workqueue at rtl_deinit_core
- wifi: rtlwifi: fix memory leaks and invalid access at probe error path
- wifi: rtlwifi: pci: wait for firmware loading before releasing memory
- Revert “media: uvcvideo: Require entities to have a non-zero unique ID”
- char: misc: deallocate static minor in error path

Tvrtko Ursulin:
- drm/amdgpu: Use DRM scheduler API in amdgpu_xcp_release_sched
- drm/xe: Fix GT “for each engine” workarounds

Reviewed (36)

André Almeida:
- ASoC: cs35l41: Fallback to using HID for system_name if no SUB is available
- ASoC: cs35l41: Fix acpi_device_hid() not found

Christian Gmeiner:
- drm/v3d: Fix performance counter source settings on V3D 7.x
- drm/etnaviv: Convert timeouts to secs_to_jiffies()

Iago Toral Quiroga:
- drm/v3d: Fix performance counter source settings on V3D 7.x
- drm/v3d: Assign job pointer to NULL before signaling the fence
- drm/v3d: Don’t run jobs that have errors flagged in its fence
- drm/v3d: Set job pointer to NULL when the job’s fence has an error

Jose Maria Casanova Crespo:
- drm/v3d: Assign job pointer to NULL before signaling the fence

Luis Henriques:
- fuse: rename to fuse_dev_end_requests and make non-static
- fuse: Move fuse_get_dev to header file
- fuse: Move request bits
- fuse: Add fuse-io-uring design documentation
- fuse: make args->in_args[0] to be always the header
- fuse: {io-uring} Handle SQEs - register commands
- fuse: Make fuse_copy non static
- fuse: Add fuse-io-uring handling into fuse_copy
- fuse: {io-uring} Make hash-list req unique finding functions non-static
- fuse: Add io-uring sqe commit and fetch support
- fuse: {io-uring} Handle teardown of ring entries
- fuse: {io-uring} Make fuse_dev_queue_{interrupt,forget} non-static
- fuse: Allow to queue fg requests through io-uring
- fuse: Allow to queue bg requests through io-uring
- fuse: {io-uring} Prevent mount point hang on fuse-server termination
- fuse: block request allocation until io-uring init is complete
- fuse: enable fuse-over-io-uring
- fuse: prevent disabling io-uring on active connections

Maíra Canal:
- drm/vkms: Remove index parameter from init_vkms_output
- drm/vkms: Code formatting
- drm/vkms: Use drm_frame directly
- drm/vkms: Use const for input pointers in pixel_read an pixel_write functions
- drm/v3d: Add DRM_IOCTL_V3D_PERFMON_SET_GLOBAL

Tvrtko Ursulin:
- drm/etnaviv: Add fdinfo support for memory stats
- drm: make drm-active- stats optional
- Documentation/gpu: Clarify drm memory stats definition
- drm/sched: Fix preprocessor guard

Tested (2)

André Almeida:
- ASoC: cs35l41: Fallback to using HID for system_name if no SUB is available

Christian Gmeiner:
- hexagon: fix using plain integer as NULL pointer warning in cmpxchg

Acked (1)

Iago Toral Quiroga:
- drm/v3d: Fix miscellaneous documentation errors

Maintainer SoB (6)

Maíra Canal:
- drm/v3d: Stop active perfmon if it is being destroyed
- drm/v3d: Add DRM_IOCTL_V3D_PERFMON_SET_GLOBAL
- drm/vc4: plane: Remove WARN on state being set in plane_reset

Tvrtko Ursulin:
- drm/i915: Remove deadcode
- drm/i915: Remove unused intel_huc_suspend
- drm/i915: Remove unused intel_ring_cacheline_align
  • Simon Ser: Status update, March 2025 (2025/03/15 22:00)
Hi all! This month I’ve finally finished my initial work on HDR10 support for wlroots! My branch supports playing both SDR and HDR content on either an SDR or HDR output. It’s a pretty basic version: wlroots only performs very basic gamut mapping, and has a simple luminance multiplier instead of proper tone mapping. Additionally, the source content luminance and mastering display metadata aren’t taken into account. Thus the result isn’t as good as it could be, but that can be improved once the initial work is merged! I’ve also been talking with dnkl about blending optical color values rather than electrical values in foot (“gamma-correct blending”). Thanks to the color-management protocol, foot can specify that its buffers contain linearly encoded values (as opposed to the default, sRGB) and can implement this blending method without sacrificing performance. See the foot pull request for more details (a small sketch of the idea follows below). We’ve been working on fixing the last few known blockers remaining for the next wlroots release, in particular related to scene-graph clipping, custom modes, and explicit synchronization. I hope we’ll be able to start the release candidate dance soon. The NPotM is Bakah, a small utility to build Docker Bake configuration files with Buildah (the library powering Podman). I’ve written more about the motivation and design of this tool in a separate article. I’ve released tlstunnel 0.4 with better support for certificate files and some bugfixes. The sogogi WebDAV file server got support for graceful shutdown and Unix socket listeners thanks to Krystian Chachuła. Last, mako 1.10 adds a bunch of useful features such as include directives, more customization for border sizes and icon border radius, and a --no-history flag for makoctl dismiss. See you next month!
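To illustrate the “gamma-correct blending” idea from the foot discussion above: blending has to happen on linear (optical) values rather than on sRGB-encoded (electrical) ones. A small self-contained sketch using the standard sRGB transfer functions:

    #include <math.h>

    /* Electrical (sRGB-encoded) -> optical (linear light). */
    static float srgb_to_linear(float c)
    {
        return c <= 0.04045f ? c / 12.92f : powf((c + 0.055f) / 1.055f, 2.4f);
    }

    /* Optical (linear light) -> electrical (sRGB-encoded). */
    static float linear_to_srgb(float c)
    {
        return c <= 0.0031308f ? c * 12.92f : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }

    /* Blend two sRGB-encoded channel values in linear space. Blending the
     * encoded values directly would skew the result darker, which is exactly
     * the artifact gamma-correct blending avoids. */
    float blend_gamma_correct(float srgb_dst, float srgb_src, float alpha)
    {
        float lin = srgb_to_linear(srgb_dst) * (1.0f - alpha)
                  + srgb_to_linear(srgb_src) * alpha;
        return linear_to_srgb(lin);
    }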
  • Pekka Paalanen: Wayland color-management, SDR vs. HDR, and marketing (2025/03/13 09:40)
This time I have three topics. First, I want to promote the blog post I wrote to celebrate the landing of the Wayland color-management extension into the wayland-protocols staging area. It's a brief history of the journey. Second, I want to discuss SDR and HDR video modes on monitors and TVs. I have seen people expect that the same sRGB content displayed on the SDR video mode and the HDR (BT.2100/PQ) video mode on the same monitor will look the same, and that they can arbitrarily switch between the modes at any time. I have argued that this is a false expectation. Why? Monitors tend to have a slew of settings. I tend to call them monitor "knobs". There are brightness, contrast, color temperature, picture mode, dynamic contrast, sharpness, gamma, and whatever. Many people have noticed that when the video source puts the monitor into BT.2100/PQ video mode, the monitor locks out some settings, often brightness and/or contrast included. So, SDR and HDR video modes do not play by the same rules. Hence, one cannot generally expect a match even if the video source does everything correctly. Third, there is marketing. Have a look at the first third of this video. They discuss video streaming services, TV selling, and HDR from the picture quality point of view. My take from that is that (some? most?) monitors and TVs come with a screaming broken picture out of the box because marketing has to sell them. If all displays displayed a given content as intended, they would all look the same, major technology differences notwithstanding, but marketing wants to make each individual stand out. Have you heard of TV calibration services? If I buy a new TV from a local electronics department store, they offer a calibration service, for a considerable additional fee. Why would anyone need a calibration service, the factory settings should be good, right?
  • Ricardo Garcia: Device-Generated Commands at Vulkanised 2025 (2025/03/11 16:30)
A month ago I attended Vulkanised 2025 in Cambridge, UK, to present a talk about Device-Generated Commands in Vulkan. The event was organized by Khronos and took place in the Arm Cambridge office. The talk I presented was similar to the one from XDC 2024 but, instead of being a 5-minute lightning talk, I had 25-30 minutes to present and could expand the contents to include proper explanations of almost all major DGC concepts that appear in the spec. I attended the event together with my Igalia colleagues Lucas Fryzek and Stéphane Cerveau, who presented about lavapipe and Vulkan Video, respectively. We had a fun time in Cambridge and I can sincerely recommend attending the event to any Vulkan enthusiasts out there. It allows you to meet Khronos members and people working on both the specification and drivers, as well as many other Vulkan users from a wide variety of backgrounds. The recordings for all sessions are now publicly available, and the one for my talk can be found embedded below. For those of you preferring slides and text, I’m also providing a transcription of my presentation together with slide screenshots further down. In addition, at the end of the video there’s a small Q&A section, but I’ve always found it challenging to answer questions properly on the fly and with limited time. For this reason, instead of transcribing the Q&A section literally, I’ve taken the liberty of writing down the questions and providing better answers in written form, and I’ve also included an extra question that I got in the hallways as bonus content. You can find the Q&A section right after the embedded video.

Vulkanised 2025 recording

Questions and answers with longer explanations

Question: can you give an example of when it’s beneficial to use Device-Generated Commands?

There are two main use cases where DGC would improve performance. On the one hand, game engines often use compute pre-passes to analyze the scene they want to draw and prepare some data for that scene. This includes maybe deciding LOD levels, discarding content, etc. After that compute pre-pass, results would need to be analyzed from the CPU in some way. This implies a stall: the output from that compute pre-pass needs to be transferred to the CPU so the CPU can use it to record the right drawing commands; or maybe you do this compute pre-pass during the previous frame, and it contains data that is slightly out of date. With DGC, this compute dispatch (or set of compute dispatches) could generate the drawing commands directly, so you don’t stall or you can use more precise data. You also save some memory bandwidth because you don’t need to copy the compute results to host-visible memory. On the other hand, sometimes scenes contain so much detail and geometry that recording all the draw calls from the CPU takes a nontrivial amount of time, even if you distribute this draw call recording among different threads. With DGC, the GPU itself can generate these draw calls, so potentially it saves you a lot of CPU time.

Question: as the extension makes heavy use of buffer device addresses, what are the challenges for tools like GFXReconstruct when used to record and replay traces that use DGC?

The extension makes use of buffer device addresses for two separate things. First, it uses them to pass some buffer information to different API functions, instead of passing buffer handles, offsets and sizes. This is not different from other APIs that existed before.
The VK_KHR_buffer_device_address extension contains APIs like vkGetBufferOpaqueCaptureAddressKHR and vkGetDeviceMemoryOpaqueCaptureAddressKHR, which are designed to take care of those cases and make it possible to record and replay those traces. Contrary to VK_KHR_ray_tracing_pipeline, which has a feature to indicate if you can capture and replay shader group handles (fundamental for capture and replay when using ray tracing), DGC does not have any specific feature for capture-replay. DGC does not add any new problem from that point of view. Second, the data for some commands that is stored in the DGC buffer sometimes includes device addresses. This is the case for the index buffer bind command, the vertex buffer bind command, indirect draws with count (double indirection here) and the ray tracing command. But, again, the addresses in those commands are buffer device addresses. That does not add new challenges for capture and replay compared to what we already had.

Question: what is the deal with the last token being the one that dispatches work?

One minor detail of DGC that’s important to remember is that, by default, DGC respects the order in which sequences appear in the DGC buffer and the state used for those sequences. If you have a DGC buffer that dispatches multiple draws, you know the state that is used precisely for each draw: it’s the state that was recorded before the execute-generated-commands call, plus the small changes that a particular sequence applies, like push constant values or vertex and index buffer binds, for example. In addition, you know precisely the order of those draws: executing the DGC buffer is equivalent, by default, to recording those commands in a regular command buffer from the CPU, in the same order they appear in the DGC buffer. However, when you create an indirect commands layout you can indicate that the sequences in the buffer may run in an undefined order (this is VK_INDIRECT_COMMANDS_LAYOUT_USAGE_UNORDERED_SEQUENCES_BIT_EXT). If sequences could dispatch work and then change state, we would have a logical problem: what do those state changes affect? The sequence that is executed right after the current one? Which one is that? We would not know the state used for each draw. Forcing the work-dispatching command to be the last one is much easier to reason about and is also logically tight. Naturally, if you have a series of draws on the CPU where, for some of them, you change some small bits of state (e.g. disabling the depth or stencil tests), you cannot do that in a single DGC sequence. For those cases, you need to batch your sequences in groups with the same state (and use multiple DGC buffers), or you could use regular draws for parts of the scene and DGC for the rest.

Question from the hallway: do you know what drivers do exactly at preprocessing time that is so important for performance?

Most GPU drivers these days have a kernel side and a userspace side. The kernel driver does a lot of things like talking to the hardware, managing different types of memory and buffers, talking to the display controller, etc. The kernel driver normally also has facilities to receive a command list from userspace and send it to the GPU. These command lists are particular to each GPU vendor and model. The packets that form them control different aspects of the GPU.
For example (this is completely made up), maybe one GPU has a particular packet to modify depth buffer and test parameters, and another packet for the stencil test and its parameters, while another GPU from another vendor has a single packet that controls both. There may be another packet that dispatches draw work of all kinds and is flexible enough to accommodate the different draw commands that are available in Vulkan. The Vulkan userspace driver translates Vulkan command buffer contents to these GPU-specific command lists. In many drivers, the preprocessing step in DGC takes the command buffer state, combines it with the DGC buffer contents and generates a final command list for the GPU, storing that final command list in the preprocess buffer. Once the preprocess buffer is ready, executing the DGC commands is only a matter of sending that command list to the GPU.

Talk slides and transcription

Hello, everyone! I’m Ricardo from Igalia and I’m going to talk about device-generated commands in Vulkan. First, some bits about me. I have been part of the graphics team at Igalia since 2019. For those that don’t know us, Igalia is a small consultancy company specialized in open source, and my colleagues in the graphics team work on things such as Mesa drivers, Linux kernel drivers, compositors… that kind of thing. In my particular case, the focus of my work is contributing to the Vulkan Conformance Test Suite, and I do that as part of a collaboration between Igalia and Valve that has been going on for a number of years now. Just to highlight a couple of things, I’m the main author of the tests for the mesh shading extension and the device-generated commands extension that we are talking about today.

So what are device-generated commands? Basically, it’s a new extension, a new functionality, that allows a driver to read command sequences from a regular buffer: something like, for example, a storage buffer, instead of the usual regular command buffers that you use. The contents of the DGC buffer could be filled from the GPU itself. This is what saves you the round trip to the CPU and, that way, you can improve the GPU-driven rendering process in your application. It’s like one step ahead of indirect draws and dispatches, and one step behind work graphs. And it’s also interesting because device-generated commands provide a better foundation for translating DX12. If you have a translation layer that implements DX12 on top of Vulkan like, for example, Proton, and you want to implement ExecuteIndirect, you can do that much more easily with device-generated commands. This is important for Proton, which Valve uses to run games on the Steam Deck, i.e. Windows games on top of Linux.

If we set aside Vulkan for a moment, and we stop thinking about GPUs and such, and you want to come up with a naive CPU-based way of running commands from a storage buffer, how do you do that? Well, one immediate solution we can think of is: first of all, I’m going to assign a token, an identifier, to each of the commands I want to run, and I’m going to store that token in the buffer first. Then, depending on what the command is, I want to store more information. For example, if we have a sequence like we see here in the slide, where we have a push constant command followed by a dispatch, I’m going to store the token for the push constants command first, then I’m going to store some information that I need for the push constants command, like the pipeline layout, the stage flags, the offset and the size.
Then, after that, depending on the size that I said I need, I am going to store the data for the command, which is the push constant values themselves. And then, after that, I’m done with it, and I store the token for the dispatch, and then the dispatch size, and that’s it. But this doesn’t really work: this is not how GPUs work. A GPU would have a hard time running commands from a buffer if we store them this way. And this is not how Vulkan works, because in Vulkan you want to provide as much information as possible in advance and you want to make things run in parallel as much as possible, and take advantage of the GPU.

So what do we do in Vulkan? In Vulkan, and in the Vulkan VK_EXT_device_generated_commands extension, we have this central concept, which is called the Indirect Commands Layout. This is the main thing, and if you want to remember just one thing about device-generated commands, you can remember this one. The indirect commands layout is basically like a template for a short sequence of commands. The way you build this template is using the tokens and the command information that we saw colored red and green in the previous slide, and you build that in advance and pass that in advance so that, in the end, in the buffer that you’re filling with commands, you don’t need to store that information. You just store the data for each command. That’s how you make it work.

And the result of this is that, with the commands layout, which I said is a template for a short sequence of commands (and by short I mean a handful of them, like just three, four or five commands, maybe 10), the DGC buffer can be pretty large, but it does not contain a random sequence of commands where you don’t know what comes next. You can think about it as divided into small chunks that the specification calls sequences, and you get a large number of sequences stored in the buffer, but all of them follow this template, this commands layout. In the example we had, push constant followed by dispatch, the contents of the buffer would be push constant values, dispatch size, push constant values, dispatch size, many times repeated.

The second thing that Vulkan does to be able to make this work is that we limit a lot what you can do with device-generated commands. There are a lot of things you cannot do. In fact, the only things you can do are the ones that are present in this slide. You have some things like, for example, update push constants, you can bind index buffers, vertex buffers, and you can draw in different ways, using mesh shading maybe, you can dispatch compute work and you can dispatch ray tracing work, and that’s it. You also need to check which features the driver supports, because maybe the driver only supports device-generated commands for compute or ray tracing or graphics. But you notice you cannot do things like start render passes or insert barriers or bind descriptor sets or that kind of thing. No, you cannot do that. You can only do these things.

This indirect commands layout, which is the backbone of the extension, specifies, as I said, the layout for each sequence in the buffer, and it has additional restrictions. The first one is that it must specify exactly one token that dispatches some kind of work, and it must be the last token in the sequence. You cannot have a sequence that dispatches graphics work twice, or that dispatches compute work twice, or that dispatches compute first and then draws, or something like that.
No, you can only do one thing with each DGC buffer and each commands layout, and it has to be the last one in the sequence. And one interesting thing that Vulkan also allows you to do, that DX12 doesn’t, is that it allows you (on some drivers; you need to check the properties for this) to choose which shaders you want to use for each sequence. This is a restricted version of the bind pipeline command in Vulkan. You cannot choose arbitrary pipelines and you cannot change arbitrary state, but you can switch shaders. For example, if you want to use a different fragment shader for each of the draws in the sequence, you can do that. This is pretty powerful.

How do you create one of those indirect commands layouts? Well, with one of those typical Vulkan calls to create an object, passing one of those CreateInfo structures that are always present in Vulkan. And, as you can see, you have to pass the shader stages that will be used (will be active) while you draw or execute those indirect commands. You have to pass the pipeline layout, and you have to pass an indirect stride. The stride is the number of bytes for each sequence, from the start of one sequence to the next one. And the most important information, of course, is the list of tokens: an array of tokens that you pass as the token count and a pointer to the first element.

Now, each of those tokens contains a bit of information, and the most important one is the type, of course. Then you can also pass an offset that indicates how many bytes into the sequence the data for that command starts. Together with the stride, this means you don’t need to pack the data for those commands together. If you want to include some padding, because it’s convenient or something, you can do that. And then there’s also the token data, which allows you to pass the information that I was painting in green in other slides: information needed to run the command with some extra parameters. Only a few tokens, a few commands, need that. Depending on which command it is, you have to fill one of the pointers in the union, but most commands don’t need this kind of information. Knowing which command it is, you know you are going to find some fixed data in the buffer, and you just read and process that.

One thing that is interesting, like I said, is the ability to switch shaders and to choose which shaders are going to be used for each of those individual sequences: some form of pipeline switching, or restricted pipeline switching. To do that you have to create something called Indirect Execution Sets. Each of these execution sets is like a group, or an array if you want to think about it like that, of pipelines: similar pipelines or shader objects. They have to share something in common, which is that all of the state in the pipeline has to be identical, basically. Only the shaders can change. When you create these execution sets and you start adding pipelines or shaders to them, you assign an index to each pipeline in the set. Then, you pass this execution set beforehand, before executing the commands, so that the driver knows which set of pipelines you are going to use. And then, in the DGC buffer, when you have this pipeline token, you only have to store the index of the pipeline that you want to use. You create the execution set with 20 pipelines, say, and you pass an index for the pipeline that you want to use for each draw, for each dispatch, or whatever.
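As a hedged sketch of the creation call just described, using the running push-constant-plus-dispatch example; device and pipelineLayout are assumed to exist, and the offsets and stride are illustrative:

    /* Token data for the push constant command: 16 bytes at offset 0. */
    VkIndirectCommandsPushConstantTokenEXT pcToken = {
        .updateRange = { VK_SHADER_STAGE_COMPUTE_BIT, 0, 16 },
    };

    VkIndirectCommandsLayoutTokenEXT tokens[2] = {
        {
            .sType = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_TOKEN_EXT,
            .type = VK_INDIRECT_COMMANDS_TOKEN_TYPE_PUSH_CONSTANT_EXT,
            .data = { .pPushConstant = &pcToken },
            .offset = 0,  /* push constant values start each sequence */
        },
        {
            .sType = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_TOKEN_EXT,
            .type = VK_INDIRECT_COMMANDS_TOKEN_TYPE_DISPATCH_EXT,
            .offset = 16, /* a VkDispatchIndirectCommand follows the values */
        },
    };

    VkIndirectCommandsLayoutCreateInfoEXT layoutInfo = {
        .sType = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_CREATE_INFO_EXT,
        .shaderStages = VK_SHADER_STAGE_COMPUTE_BIT,
        .indirectStride = 32, /* bytes from one sequence to the next, padded */
        .pipelineLayout = pipelineLayout,
        .tokenCount = 2,
        .pTokens = tokens,
    };

    VkIndirectCommandsLayoutEXT cmdsLayout;
    vkCreateIndirectCommandsLayoutEXT(device, &layoutInfo, NULL, &cmdsLayout);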
The way to create the execution sets is the one you see here, where we have, again, one of those CreateInfo structures. There, we have to indicate the type, which is pipelines or shader objects. Depending on that, you have to fill one of the pointers from the union on the top right here. If we focus on pipelines, because it’s easier, on the bottom left you have to pass the maximum pipeline count that you’re going to store in the set and an initial pipeline. The initial pipeline is what sets the template that all pipelines in the set have to conform to. They all have to share essentially the same state as the initial pipeline, and then you can change the shaders. With shader objects, it’s basically the same, but you have to pass more information for the shader objects, like the descriptor set layouts used by each stage, push-constant information… but it’s essentially the same. Once you have that execution set created, you can use those two functions (vkUpdateIndirectExecutionSetPipelineEXT and vkUpdateIndirectExecutionSetShaderEXT) to update and add pipelines to that execution set. You need to take into account that you have to pass a couple of special creation flags to the pipelines, or the shader objects, to tell the driver that you may use them inside an execution set, because the driver may need to do something special for them. And one additional restriction we have is that, if you use an execution set token in your sequences, it must appear only once and it must be the first one in the sequence.

The recap so far is that the DGC buffer is divided into small chunks that we call sequences. Each sequence follows a template that we call the Indirect Commands Layout. Each sequence must dispatch work exactly once, and you may be able to switch the set of shaders used with each sequence with an Indirect Execution Set.

How do we go about actually telling Vulkan to execute the contents of a specific buffer? Well, before executing the contents of the DGC buffer, the application needs to have bound all the needed state to run those commands. That includes descriptor sets, initial push constant values, initial shader state, initial pipeline state. Even if you are going to use an Execution Set to switch shaders later, you have to specify some kind of initial shader state. Once you have that, you can call this vkCmdExecuteGeneratedCommandsEXT. You bind all the state into your regular command buffer and then you record this command to tell the driver: at this point, execute the contents of this buffer. As you can see, you typically pass a regular command buffer as the first argument. Then there’s some kind of boolean value called isPreprocessed, which is kind of confusing because it’s the first time it appears and you don’t know what it is about, but we will talk about it in a minute. And then you pass a relatively large structure containing information about what to execute. In that GeneratedCommandsInfo structure, you need to pass again the shader stages that will be used. You have to pass the handle for the Execution Set, if you’re going to use one (if not, you can use the null handle). Of course, the indirect commands layout, which is the central piece here. And then you pass the information about the buffer that you want to execute, which is the indirect address and the indirect address size as the buffer size. We are using buffer device addresses to pass information.
And then we have something again mentioning some kind of preprocessing thing, which is really weird: a preprocess address and a preprocess size, which look like a buffer of some kind (we will talk about it later). You have to pass the maximum number of sequences that you are going to execute. Optionally, you can also pass a buffer address for an actual counter of sequences. And the last thing that you need is the max draw count, but you can forget about that if you are not dispatching work using draw-with-count tokens, as it only applies there. If not, you leave it as zero and it should work.

We have a couple of things here that we haven’t talked about yet, which are the preprocessing things. Starting from the bottom, the preprocess address and size give us a hint that there may be a pre-processing step going on: some kind of thing that the driver may need to do before actually executing the commands, and we need to pass information about a buffer for that. The boolean value that we pass to the execute-generated-commands call tells us that the pre-processing step may have happened before, so it may be possible to explicitly do that pre-processing instead of letting the driver do it at execution time. Let’s take a look at that in more detail.

First of all, what is the pre-process buffer? The pre-process buffer is auxiliary space, a scratch buffer, because some drivers need to take a look at what the command sequence looks like before actually starting to execute things. They need to go over the sequence first, and they need to write a few things down just to be able to properly do the job later when executing those commands. Once you have the commands layout and the maximum number of sequences that you are going to execute, you can call vkGetGeneratedCommandsMemoryRequirementsEXT and the driver is going to tell you how much space it needs. Then, you can create a buffer and allocate the space for it. You need to pass a special new buffer usage flag (VK_BUFFER_USAGE_2_PREPROCESS_BUFFER_BIT_EXT) and, once you have that buffer, you pass its address and size in the previous structure.

Now the second thing is that we have the possibility of doing this preprocessing step explicitly. Explicit pre-processing is optional, but you probably want to use it if you care about performance, because it’s the key to performance with some drivers. When you use explicit pre-processing you don’t want to (1) record the state, (2) call vkCmdPreprocessGeneratedCommandsEXT and (3) call vkCmdExecuteGeneratedCommandsEXT. That is what implicit pre-processing does, so doing it this way gains you nothing. This is designed so that, if you want to do explicit pre-processing, you’re probably going to want to use a separate command buffer for pre-processing. You want to batch pre-processing calls together and submit them all together to keep the GPU busy and to get the performance that you want. While you submit the pre-processing steps, you may still be preparing the rest of the command buffers to enqueue the next batch of work. That’s the key to doing pre-processing optimally. You need to decide beforehand if you are going to use explicit pre-processing or not because, if you are, you need to pass a flag when you create the commands layout, and then you have to call the function to preprocess generated commands. If you don’t pass that flag, you cannot call the preprocessing function, so it’s all or nothing.
You have to decide, and you do what you want. One thing that is important to note is that preprocessing needs to see the same state and the same contents of the input buffers as the execution, so it can run properly. (The video contains a cut here because the presentation laptop ran out of battery.) If the pre-processing step needs to have the same state as the execution, you need to have bound the same pipeline state, the same shaders, the same descriptor sets, the same contents.

I said that explicit pre-processing normally uses a separate command buffer that we submit before actual execution. That leaves you a small problem to solve: you would need to record state twice. Once on the pre-process command buffer, so that the pre-process step knows everything, and once on the regular command buffer used for execution, when you call execute. That would be annoying. Instead of that, the pre-process-generated-commands function takes an argument that is a state command buffer, and the specification tells you: this is a command buffer that needs to be in the recording state, and the pre-process step is going to read the state from it. This is the first time, and I think the only time in the specification, that something like this is done. You may be puzzled about what this is exactly: how do you use this and how do you pass this? I just wanted to get this slide out to tell you: if you’re going to use explicit pre-processing, the ergonomic way of using it, and how we thought about using the preprocessing step, is like you see in this slide. You take your main command buffer and you record all the state first and, just before calling execute-generated-commands, the regular command buffer contains all the state that you want and that preprocess needs. You stop there for a moment, then you prepare your separate preprocessing command buffer, passing the main one as an argument to the preprocess call, and then you continue recording commands in your regular command buffer. That’s the ergonomic way of using it.

You do need some synchronization at some steps. The main one is that, if you generate the contents of the DGC buffer from the GPU itself, you’re going to need some synchronization: writes to that buffer need to be synchronized with something that comes later, which is executing or reading those commands from the buffer. Depending on whether you use explicit preprocessing, you can use the command-preprocess pipeline stage, which is new, with preprocess-read access, or you synchronize with the regular device-generated-commands execution, which is considered part of the regular draw-indirect stage using indirect-command-read access. If you use explicit pre-processing, you also need to make sure that writes to the pre-process buffer happen before you start reading from it. So you use these (VK_PIPELINE_STAGE_COMMAND_PREPROCESS_BIT_EXT, VK_ACCESS_COMMAND_PREPROCESS_WRITE_BIT_EXT) to synchronize preprocessing with execution (VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT, VK_ACCESS_INDIRECT_COMMAND_READ_BIT) when using explicit preprocessing.

The quick how-to: I just wanted to get this slide out for those wanting a reference that says exactly what you need to do. All the steps that I mentioned here about creating the commands layout, the execution set, allocating the preprocess buffer, etc. This is the basic how-to. And that’s it. Thanks for watching! Questions?
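To complement the transcription, a hedged sketch of the execute call, filling VkGeneratedCommandsInfoEXT with the fields enumerated in the talk; every handle, address and size is assumed to have been created as described above:

    VkGeneratedCommandsInfoEXT info = {
        .sType = VK_STRUCTURE_TYPE_GENERATED_COMMANDS_INFO_EXT,
        .shaderStages = VK_SHADER_STAGE_COMPUTE_BIT,
        .indirectExecutionSet = VK_NULL_HANDLE,  /* not switching shaders here */
        .indirectCommandsLayout = cmdsLayout,
        .indirectAddress = dgcBufferAddress,     /* buffer device address */
        .indirectAddressSize = dgcBufferSize,
        .preprocessAddress = preprocessAddress,  /* scratch buffer, queried size */
        .preprocessSize = preprocessSize,
        .maxSequenceCount = maxSequences,
        .sequenceCountAddress = 0,  /* optional GPU-side sequence counter */
        .maxDrawCount = 0,          /* only used with draw-with-count tokens */
    };

    /* All required state (descriptors, push constants, shaders) must already
     * be bound in cmdBuf at this point. */
    vkCmdExecuteGeneratedCommandsEXT(cmdBuf, VK_FALSE /* isPreprocessed */, &info);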
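And a sketch of the preprocess-to-execution barrier mentioned at the end, using exactly the stage and access flags named above (the generation-to-execution case is analogous, with shader writes as the source):

    VkMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_COMMAND_PREPROCESS_WRITE_BIT_EXT,
        .dstAccessMask = VK_ACCESS_INDIRECT_COMMAND_READ_BIT,
    };
    vkCmdPipelineBarrier(cmdBuf,
                         VK_PIPELINE_STAGE_COMMAND_PREPROCESS_BIT_EXT, /* src */
                         VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT,          /* dst */
                         0, 1, &barrier, 0, NULL, 0, NULL);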
  • Mike Blumenkrantz: Znvk (2025/03/11 00:00)
New Frontiers
More info
  • Mike Blumenkrantz: Slow Down (2025/02/27 00:00)
Once Again We Return Home

It’s been a while, but for the first time this year I have to do it. Some of you are shaking your heads, saying you knew it, and you were right. Here we are again. It’s time to vkoverhead.

The Numbers Must Go Up

I realized while working on some P E R F that there was a lot of perf to be gained in places I wasn’t testing. That makes sense, right? If there’s no coverage, the perf can’t go up. So I added a new case for the path I was using, and boy howdy did I start to see some weird stuff. Normally this is where I’d post up some gorgeous flamegraphs, and we would sit back in our expensive leather armchairs debating the finer points of optimization. But you know what? We can’t do that anymore. Why, you’re asking. The reason is simple: perf is totally fucking broken and has been for a while. But only on certain machines. Specifically, mine. So no more flamegraphs for you, and none for me. Despite this massive roadblock, the perf gains must continue. Through the power of guesswork and frustration, I’ve managed some sizable gains:

 #   Draw Test                   1000op/s   % of 'draw'   1000op/s   % of 'draw'
                                 (before)   (before)      (after)    (after)
 0   draw                        46298      100.0%        46426      100.0%
 16  vbo change                  17741      38.3%         22413      48.3%
 17  vbo change dynamic (new!)   4544       9.8%          8686       18.7%
 18  1vattrib change             3021       6.5%          3316       7.1%
 20  16vattrib 16vbo change      5266       11.4%         6398       13.8%
 21  16vattrib change            2352       5.1%          2512       5.4%
 22  16vattrib change dynamic    3976       8.6%          5003       10.8%

Though I was mainly targeting the case of using dynamic vertex input and binding new vertex buffers for every draw (and managed a nearly 100% improvement there), I ended up seeing noteworthy gains across the board for binding vertex buffers, even when using fully static state. This should provide some minor gains to general RADV perf.

Future Improvements

Given the still-massive perf gap between using static and dynamic vertex state when only vertex buffers change, it seems likely there’s still some opportunities to reclaim more perf. Only time will tell what can be achieved here, but for now this is what I’ve got.
  • Mike Blumenkrantz: CLthulhu (2025/02/26 00:00)
Insanity Has A Name

Karol Herbst. At SGC, we know this man. We fear him. His photo is on the wall over a break-in-case-of-emergency glass panel which shields a button activating a subterranean escape route set to implode as soon as I sprint through. Despite this, and despite all past evidence leading me to be wary of any idea he pitched, the madman got me again. cl_khr_image2d_from_buffer. On the surface, an innocuous little extension used to access a buffer like a 2D image. Vulkan already has this support for 1D images in the form of VkBufferView, so why would adding a stride to that be any harder (aside from the fact that the API doesn’t support it)? I was deep into otherworldly optimizations at this point, far beyond the point where I was able to differentiate between improvement and neutral, let alone sane or twisted. His words seemed so reasonable: why couldn’t I just throw a buffer to the GPU as a 2D image? I’d have to be an idiot not to be able to do something as simple as that. Wouldn’t I? Dammit, Karol.

How to 2D a Buffer

You can’t. I mean, I can, but you? Vulkan won’t let you do it. There’s (currently) no extension that enables a 2D bufferview. Rumor has it some madman on a typewriter is preparing to fax over an extension specification to add this, but only time will tell whether Khronos accepts submissions in this format. Here at SGC, we’re all smart HUMANS though, so there’s an obvious solution to this. It’s not memory aliasing. Sure, rebinding buffer memory onto an image might work. But in reading the spec, the synchronization guarantees for buffer-image aliasing didn’t seem that strong. And also it’d be a whole bunch of code to track it, and maybe do weird layout stuff, and add some kind of synchronization on the buffer too, and pray the driver isn’t buggy, and doesn’t this sound a lot like the we-have-this-at-home version of another, better mechanism that zink already has incredible support for? Yeah. What about these things? How do they wORK?

DMA Buffers: Totally Normal

A DMAbuf is basically a pipe. On one end you have memory. And if you yell TRIANGLE into the other end really loud, something unimaginable and ineffable that lurks deep within the void will slither and crawl its way up the pipe until it GAZES UPON YOU IN YOUR FLESHY MORTAL SHELL ATTEMPTING TO USURP THE POWERS OF THE OLD ONES. It’s a fun little experiment with absolutely no unwanted consequences. Try it at home! The nice thing about dmabufs is I know they work. And I know they work in zink. That’s because in order to run an x̸̧̠͓̣̣͎͚̰͎̍̾s̶̡̢͙̞̙͍̬̝̠̩̱̞̮̩̣̑͂͊̎͆̒̓͐͛͊̒͆̄̋ȩ̶̡̨̳̭̲̹̲͎̪̜͒̓̈́̏r̶̩̗͖͙͖̬̟̞̜̠͙̠̎͑̉̌̎̍̑́̏̓̏̒̍͜͝v̶̞̠̰̘̞͖̙̯̩̯̝̂̃̕͜e̴̢̡͎̮͔̤͖̤͙̟̳̹͛̓͌̈̆̈́̽͘̕ŕ̶̫̾͐͘ or a Wayland compositor (e.g., Ŵ̶̢͍̜̙̺͈͉̼̩̯̺̗̰̰͕͍̱͊͊̓̈̀͛̾̒̂̚̕͝ͅḙ̵̛̬̜͔̲͕͖̜̱̻͊̌̾͊͘s̶̢̗̜͈̘͎̠̘̺͉͕̣̯̘̦͓͈̹̻͙̬̘̿͆̏̃̐̍̂̕ͅt̷̨͈̠͕͔̬̙̣͈̪͕̱͕̙̦͕̼̩͙̲͖͉̪̹̼͛̌͋̃̂̂̓̏̂́̔͠͝ͅơ̸̢̛̛̲̟͙͚̰͇̞̖̭̲͍͇̫̘̦̤̩̖͍̄̓́͑̉̿̅̀̉͒͋͒̂́̆̋̚͝ͅͅn̶̢̡̝̥̤̣͔̣͉͖̖̻̬̝̥̦͇͕̘͋͂͛̌̃͠ͅͅ, the reference compositor), dmabufs have to work. Zink can run both of those just fine, so I know there’s absolutely zero bugs. There can’t be any bugs. No. Not bugs again.

NO MORE BUGS

Even better, I know that I can do imports and exports of dmabufs in any dimensionality thanks to that crazy CL-GL sharing extension Karol already suckered me into supporting at the expense of every Vulkan driver’s bug tracker. That KAROL HERBST guy, hah, he’s such a kidder! So obviously–It’s just common sense at this point–Obviously I should just be able to hook up the pipes here.
Export a buffer and then import a 2D image with whatever random CAUSALITY IS A LIE passes for stride. Right? Basically a day at the beach for me. And of course it works perfectly with no problems whatsoever, giving DaVinci Resolve a nice performance boost. Stay sane, readers.
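For the curious, this is roughly what cl_khr_image2d_from_buffer lets an application do on the CL side: view an existing buffer as a 2D image with an explicit row pitch. A hedged sketch; ctx and buf are assumed to exist, and the sizes are made up:

    #include <CL/cl.h>

    cl_image_format fmt = { CL_RGBA, CL_UNORM_INT8 };
    cl_image_desc desc = {
        .image_type = CL_MEM_OBJECT_IMAGE2D,
        .image_width = 500,
        .image_height = 512,
        .image_row_pitch = 2048, /* bytes per row inside the buffer: the stride */
        .buffer = buf,           /* the existing cl_mem buffer backing the image */
    };

    cl_int err;
    cl_mem img = clCreateImage(ctx, CL_MEM_READ_ONLY, &fmt, &desc, NULL, &err);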
  • Hans de Goede: ThinkPad X1 Carbon Gen 12 camera support and other IPU6 camera work (2025/02/24 14:44)
I have been working on getting the camera on the ThinkPad X1 Carbon Gen 12 to work under Fedora. This requires 3 things:
• Some ov08x40 sensor patches; these are available as downstream cherry-picks in Fedora kernels >= 6.12.13
• A small pipewire fix to avoid WirePlumber listing a bunch of bogus extra "ipu6" Video Sources; these fixes are available in Fedora's pipewire packages >= 1.2.7-4
• I2C and GPIO drivers for the new Lattice USB IO-expander; these drivers are not available in the upstream / mainline kernel yet

I have also rebased the out-of-tree IPU6 ISP and proprietary userspace stack in rpmfusion, and I have integrated the USBIO drivers into the intel-ipu6-kmod package. So for now, getting the cameras to work on the X1 Carbon Gen 12 requires installing the out-of-tree drivers through rpmfusion. Follow these instructions to enable rpmfusion; you need both the free and nonfree repos. Then make sure you have a new enough kernel installed and install the rpmfusion akmod for the USBIO drivers:

sudo dnf update 'kernel*'
sudo dnf install akmod-intel-ipu6

The latest version of the out-of-tree IPU6 ISP driver can co-exist with the mainline / upstream IPU6 CSI-receiver kernel driver. So both the libcamera software ISP FOSS stack and Intel's proprietary stack can co-exist now. If you do not want to use the proprietary stack you can disable it by running 'sudo ipu6-driver-select foss'.

After installing the kmod package, reboot and then in Firefox go to Mozilla's webrtc test page and click on the "Camera" button. You should now get a camera permission dialog with 2 cameras: "Built in Front Camera" and "Intel MIPI Camera (V4L2)". The "Built in Front Camera" is the FOSS stack and the "Intel MIPI Camera (V4L2)" is the proprietary stack. Note the FOSS stack will show a strongly zoomed-in (cropped) image; this is caused by the GUM test page, and in e.g. google-meet this will not be the case.

I have also been making progress on some of the other open IPU6 issues:
• Cameras failing on Dell XPS laptops due to iVSC errors (rhbz#2316918, rhbz#2324683): after a long debugging session this is finally fixed. The fix will be available in Fedora kernels >= 6.13.4, which should show up in updates-testing today
• The camera not working on the Microsoft Surface Book with the ov7251 sensor: the fix for this has landed upstream
  • Peter Hutterer: libinput and 3-finger dragging (2025/02/24 05:38)
Ready in time for libinput 1.28 [1] and after a number of attempts over the years, we now finally have 3-finger dragging in libinput. This is a long-requested feature that allows users to drag by using a 3-finger swipe on the touchpad: instead of the normal swipe gesture you simply get a button down, pointer motion, button up sequence, without having to tap or physically click and hold a button. You might be able to see the appeal right there. Now, as with any interaction that relies on the mere handful of fingers that are on our average user's hand, we are starting to have usage overlaps. Since the only difference between a swipe gesture and a 3-finger drag is the intention of the user (and we can't detect that yet, stay tuned), 3-finger swipes are disabled when 3-finger dragging is enabled. Otherwise it fits in quite nicely with the rest of the features we have. There really isn't much more to say about the new feature except: it's configurable to work on 4-finger drag too, so if you mentally substitute all the threes with fours in this article before re-reading it, that would save me having to write another blog post. Thanks. [1] "soonish" at the time of writing
  • Peter Hutterer: GNOME 48 and a changed tap-and-drag drag lock behaviour (2025/02/24 04:17)
This is a heads up as mutter PR!4292 got merged in time for GNOME 48. It (subtly) changes the behaviour of drag lock on touchpads, but (IMO) very much so for the better. Note that this feature is currently not exposed in GNOME Settings so users will have to set it via e.g. the gsettings commandline tool. I don't expect this change to affect many users. This is a feature of a feature of a feature, so let's start at the top. "Tapping" on touchpads refers to the ability to emulate button presses via short touches ("taps") on the touchpad. When enabled, a single-finger tap emulates a left mouse button click, a two-finger tap a right button click, etc. Taps are short interactions and, to be recognised, the finger must be set down and released again within a certain time and not move more than a certain distance. Clicking is useful but it's not everything we do with touchpads. "Tap-and-drag" refers to the ability to keep the pointer down so it's possible to drag something while the mouse button is logically down. The sequence required to do this is a tap immediately followed by the finger down (and held down). This will press the left mouse button so that any finger movement results in a drag. Releasing the finger releases the button. This is convenient, but especially on large monitors or for users with different-than-whatever-we-guessed-is-average dexterity this can make it hard to drag something to its final position - a user may run out of touchpad space before the pointer reaches the destination. For those, the tap-and-drag "drag lock" is useful. "Drag lock" refers to the ability of keeping the mouse button pressed until "unlocked", even if the finger moves off the touchpad. It's the same sequence as before: tap followed by the finger down and held down. But releasing the finger will not release the mouse button; instead another tap is required to unlock and release the mouse button. The whole sequence thus becomes tap, down, move.... tap, with any number of finger releases in between. Sounds (and is) complicated to explain, but it is quite easy to try and, once you're used to it, it will feel quite natural. The above behaviour is the new behaviour, which non-coincidentally also matches the macOS behaviour (if you can find the toggle in the settings, good practice for easter eggs!). The previous behaviour used a timeout instead, so the mouse button was released automatically if the finger was up after a certain timeout. This was less predictable and caused issues with users who weren't fast enough. The new "sticky" behaviour resolves this issue and is (alanis morissette-style ironically) faster to release (a tap can be performed before the previous timeout would've expired). Anyway, TLDR, a feature that very few people use has changed defaults subtly. Bring out the pitchforks! As said above, this is currently only accessible via gsettings and the drag-lock behaviour change only takes effect if tapping, tap-and-drag and drag lock are enabled:

$ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
$ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag true
$ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag-lock true

All features above are actually handled by libinput; this is just about a default change in GNOME.
  • Simon Ser: Using Podman, Compose and BuildKit (2025/02/22 22:00)
For my day job, I need to build and run a Docker Compose project. However, because Docker doesn’t play well with nftables and I prefer a rootless + daemonless approach, I’m using Podman. Podman supports Docker Compose projects with two possible solutions: either by connecting the official Docker Compose CLI to a Podman socket, or by using their own drop-in replacement. They ship a small wrapper to select one of these options. (The wrapper has the same name as the replacement, which makes things confusing.) Unfortunately, both options have downsides. When using the official Docker Compose CLI, the classic builder is used instead of the newer BuildKit builder. As a result, some features such as additional contexts are not supported. When using the podman-compose replacement, some other features are missing, such as !reset, configs and referencing another service in additional contexts. It would be possible to add these features to podman-compose, but that’s an endless stream of work (Docker Compose regularly adds new features) and I don’t really see the value in re-implementing all of this (the fact that it’s Python doesn’t help me get motivated). I started looking for a way to convince the Docker Compose CLI to run under Podman with BuildKit enabled. I tried a few months ago and never got it to work, but it seems like this recently became easier! The podman-compose wrapper force-disables BuildKit, so we need to use the Docker Compose CLI directly, without the wrapper. On Arch Linux, this can be achieved by enabling the Podman socket and creating a new Docker context (same as setting DOCKER_HOST, but more permanent):

pacman -S docker-compose docker-buildx
systemctl --user start podman.socket
docker context create podman --docker host=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker context use podman

With that, docker compose just works! It turns out it automagically creates a buildx_buildkit_default container under the hood to run the BuildKit daemon. Since I don’t like automagical things, I immediately tried to run the BuildKit daemon myself:

pacman -S buildkit
systemctl --user start buildkit.service
docker buildx create --name local unix://$XDG_RUNTIME_DIR/buildkit/rootless
docker buildx use local

Now docker compose uses our systemd-managed BuildKit service. But we’re not done yet! One of the reasons I like Podman is because it’s daemonless, and here we’ve got a daemon running in the background. This isn’t the end of the world, but it’d be nicer to be able to run the build without BuildKit. Fortunately, there’s a way around this: any Compose project can be turned into a JSON description of the build commands called Bake. docker buildx bake --print will print that JSON file (and the Docker Compose CLI will use Bake files if COMPOSE_BAKE=true is set since v2.33). Note, Bake supports way more features (e.g. HCL files) but we don’t really need these for our purposes (and the command above can lower fancy Bake files into dumb JSON ones). The JSON file is pretty similar to the podman build CLI arguments. It’s not that hard to do the translation, so I’ve written Bakah, a small tool which does exactly this. It uses Buildah instead of shelling out to Podman (Buildah is the library used by Podman under-the-hood to build images). A few details required a bit more attention, for instance dependency resolution and parallel builds, but it’s quite simple.
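For illustration, the Bake JSON emitted by docker buildx bake --print for a trivial single-service Compose project looks roughly like this (the target name and tag here are made up):

    {
      "group": {
        "default": { "targets": ["app"] }
      },
      "target": {
        "app": {
          "context": ".",
          "dockerfile": "Dockerfile",
          "tags": ["docker.io/library/app:latest"]
        }
      }
    }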
It can be used like so:

docker buildx bake --print >bake.json
bakah --file bake.json

Bakah is still missing the fancier Bake features (HCL files, inheritance, merging/overriding files, variables, and so on), but it’s enough to build complex Compose projects. I plan to use it for soju-containers in the future, to better split my Dockerfiles (one for the backend, one for the frontend) and remove the CI shell script (which contains a bunch of Podman CLI invocations). I hope it can be useful to you as well!
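For reference, here is roughly what the Bake JSON looks like for a minimal one-service Compose project, as printed by docker buildx bake --print (the target name and tag below are made up for illustration; real output varies with the project):

$ docker buildx bake --print
{
  "group": {
    "default": { "targets": ["app"] }
  },
  "target": {
    "app": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "tags": ["docker.io/library/myproject-app"]
    }
  }
}

Each entry under "target" maps fairly directly onto a buildah/podman build invocation (context, Dockerfile, tags, build args), which is what makes the translation to Buildah calls tractable.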
  • Mike Blumenkrantz: Againicl (2025/02/20 00:00)
Busy. I didn’t forget to blog. I know you don’t believe me, but I’ve been accumulating items to blog about for the past month. Powering up. Preparing. And now, finally, it’s time to begin opening the valves.

Insanity Returns

When I got back from hibernation, I was immediately accosted by a developer I’d forgotten. One with whom I spent an amount of time consuming adult beverages at XDC again. One who walks with a perpetual glint of madness in his eyes, ready at the drop of a hat to tackle the nearest driver developer and begin raving about the benefits of supporting OpenCL. Obviously I’m talking about Karol “HOW IS THE PUB ALREADY CLOSED IT’S ONLY 10:30???” Herbst. I was minding my own business, fixing bugs and addressing perf issues when he assaulted me with a vicious nerdsnipe late one night in January. “Hey, why can’t I run DaVinci Resolve on Zink?” he casually asked me, knowing full well the ramifications of such a question. I tried to put him off, but he persisted. “You know, RadeonSI supports all those features,” he said next, and my entire week was ruined. As everyone knows, Zink can only ever be compared to one driver, and the comparisons can’t be too uneven. So it was that I started looking at the CL CTS for the first time this year to implement cl_khr_gl_sharing. This extension is basically EXT_external_objects for CL. It should “just work”. Right? Right… The thing is, this mechanism (on Linux) uses dmabufs. You know, that thing we all love because they make display servers go vroom. dmabufs allow sharing memory regions between processes through file descriptors. Or just within the same process. Anywhere, really. One side exports the memory object to the FD, and the other side imports it. But that’s how normal people use dmabufs. 2D image import/export for display server usage. Or, occasionally, some crazy multi-process browser engine thing. But still 2D. You know who uses dmabufs with all-the-Ds? OpenCL. You know who doesn’t implement all-the-Ds? Any Vulkan drivers. Probably. Case in point, I had to hack it in for RADV before I could get CTS to pass and VVL to stop screaming at me. From there, it turned out zink mostly supported everything already. A minor bugfix and some conditionals to enable raw buffer import/export, and it just works. Brace yourselves, because this is the foundation for getting Cthulhu-level insane next time.
  • Simon Ser: Status update, February 2025 (2025/02/17 22:00)
Hi! This month has been pretty hectic, with FOSDEM and all. I’ve really enjoyed meeting face-to-face all of these folks I work online with the rest of the year! My talk about modern IRC has been published on the FOSDEM website (unfortunately the audio quality isn’t great). In Wayland news, the color management protocol has finally been merged! I haven’t done much apart from cheering from the sidelines: huge thanks to everyone involved for carrying this over the finish line, especially Pekka Paalanen, Sebastian Wick and Xaver Hugl! I’ve started a wlroots implementation, which, with some hacks, was enough to get MPV to display an HDR video on Sway. I’ve also posted a patch to convert to BT2020 and encode to PQ, but I still need to figure out why red shows up as pink (or rebrand it as lipstick-filter in the Sway config file). I’ve released sway 1.10.1 with a bunch of bugfixes, as well as wlr-randr 0.5.0 which adds relative positioning options (e.g. --left-of) and a man page. I’ve rewritten makoctl in C (the shell script approach has been showing its limitations for a while), and merged support for icon border radius, per-corner radius settings, and a new signal in the mako-specific D-Bus API to notify when the current modes are changed. delthas has contributed support for showing redacted messages as such in gamja. goguma’s compact mode now displays an unread and date delimiter, just like the default mode (thanks Eigil Skjæveland!). I’ve added a basic UI to my WebDAV server, sogogi, to display directory listings and easily upload files from the browser. That’s all, see you next month!
  • Christian Schaller: Looking ahead at 2025 and Fedora Workstation and jobs on offer! (2025/02/03 12:29)
So as we are a little bit into the new year, I hope everybody had a great break and a good start to 2025. Personally I had a blast having gotten the kids an air hockey table as a Yuletide present :). Anyway, I wanted to put this blog post together talking about what we are looking at for the new year and to let you all know that we are hiring.

Artificial Intelligence

One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM’s Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. Also the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We have been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite, and that we come up with other cool integration points.

Wayland

The Wayland community had some challenges last year, with frustrations boiling over a few times due to new protocol development taking a long time. Some of it was simply the challenge of finding enough people across multiple projects having the time to follow up and help review, while other parts were genuine disagreements about what kinds of things should be Wayland protocols or not. That said, I think that problem has been somewhat resolved, with a general understanding now that we have the ‘ext’ namespace for a reason: to allow people to have a space to review and make protocols without an expectation that they will be universally implemented. This allows protocols of interest only to a subset of the community to go into ‘ext’, and thus allows protocols that might not be of interest to GNOME and KDE, for instance, to still have a place to live. The other, more practical problem is that of having people available to help review protocols or provide reference implementations. In a space like Wayland, where you need multiple people from multiple different projects, it can be hard at times to get enough people involved at any given time to move things forward, as different projects have different priorities and of course the developers involved might be busy elsewhere. One thing we have done to try to help out there is to set up a small internal team, led by Jonas Ådahl, to discuss in-progress Wayland protocols and assign people the responsibility to follow up on those protocols we have an interest in. This has been helpful both as a way for us to develop internal consensus on the best way forward, but also, I think, in making our contribution upstream more efficient. All that said, I also believe Wayland protocols will fade a bit into the background going forward. We are currently at the last stage of a community ‘ramp up’ on Wayland and thus there is a lot of focus on it, but once we are over that phase we will probably see what we saw with X.org extensions over time: that most new extensions are so niche that 95% of the community don’t pay attention or care.
There will always be some new technology creating the need for important new protocols, but those are likely to come along at a relatively slow cadence.

High Dynamic Range

HDR support in GNOME Control Center

As for concrete Wayland protocols, the single biggest thing for us for a long while now has of course been HDR support for Linux. And it was great to see the HDR protocol get merged just before the holidays. I also want to give a shout out to Xaver Hugl from the KWin project. As we were working to ramp up HDR support in both GNOME Shell and GTK+ we ended up working with Xaver and using KWin for testing, especially for the GTK+ implementation. Xaver was very friendly and collaborative and I think HDR support in both GNOME and KDE is more solid thanks to that collaboration, so thank you Xaver! Talking about concrete progress on HDR support, Jonas Ådahl submitted merge requests for HDR UI controls for GNOME Control Center. This means you will be able to configure the use of HDR on your system in the next Fedora Workstation release.

PipeWire

I have been sharing a lot of cool PipeWire news here in the last couple of years, but things might slow down a little as we go forward, just because all the major features are basically working well now. The PulseAudio support is working well and we get very few bug reports against it now. The reports we are getting from the pro-audio community are that PipeWire works just as well as or better than JACK for most people, in terms of for instance latency, and when we do see issues with pro-audio they tend to be caused by driver issues triggered by PipeWire trying to use the device in ways that JACK didn’t. We have been resolving those by adding more and more options to hardcode certain behaviours in PipeWire, so that just as with JACK you can force PipeWire to not try things the driver has problems with. Of course fixing the drivers would be the best outcome, but some of these pro-audio cards are so niche that it is hard to find developers who want to work on them or who have hardware to test with. We are still maturing the video support, although even that is getting very solid now. The screen capture support is considered fully mature, but the camera support is still a bit of a work in progress, partially because we are going through a generational change in the camera landscape, with UVC cameras being supplanted by MIPI cameras. Resolving that generational change isn’t just on PipeWire of course, but it does make for a more volatile landscape to mature something in. Of course an advantage here is that applications using PipeWire can easily switch between V4L2 UVC cameras and libcamera MIPI cameras, thus helping users have a smooth experience through this transition period. But even with the challenges posed by this we are moving rapidly forward, with Firefox PipeWire camera support being on by default in Fedora now, Chrome coming along quickly, and OBS Studio having had PipeWire support for some time already. And last but not least, SDL3 is now out with PipeWire camera support.

MIPI camera support

Hans de Goede, Milan Zamazal and Kate Hsuan keep working on making sure MIPI cameras work under Linux. MIPI cameras are a step forward in terms of technical capabilities, but at the moment a bit of a step backward in terms of open source, as a lot of vendors believe they have ‘secret sauce’ in the MIPI camera stacks.
Our work focuses mostly on getting the Intel MIPI stack fully working under Linux, with the Lattice MIPI aggregator being the biggest hurdle currently for some laptops. Luckily Alan Stern, the USB kernel maintainer, is looking at this now, as he has the hardware himself.

Flatpak

Some major improvements to the Flatpak stack have happened recently, with the USB portal merged upstream. The USB portal came out of the Sovereign Tech Fund funding for GNOME and it gives us a more secure way to give sandboxed applications access to your USB devices. In a somewhat related note, we are still working on making system daemons installable through Flatpak, with the use case being applications that have a system daemon to communicate with a specific piece of hardware, for example (usually through USB). Christian Hergert has this on his todo list, but we are at the moment waiting for Lennart Poettering to merge some prerequisite work into systemd that we want to base this on.

Accessibility

We are putting in a lot of effort towards accessibility these days. This includes working on portals and Wayland extensions to help facilitate accessibility, working on the ORCA screen reader and its dependencies to ensure it works great under Wayland, working on GTK4 to ensure we have top notch accessibility support in the toolkit, and more.

GNOME Software

Last year Milan Crha landed the support for signing the NVIDIA driver for use on secure boot. The main feature Milan is looking at now is getting support for DNF5 into GNOME Software. Doing this will resolve one of the longest standing annoyances we had, which is that the dnf command line and GNOME Software would maintain two separate package caches. Once the DNF5 transition is done that should be a thing of the past, and thus less risk of disk space being wasted on an extra set of cached packages.

Firefox

Martin Stransky and Jan Horak have been working hard at making Firefox ready for the future, with a lot of work going into making sure it supports the portals needed to function as a flatpak and by bringing HDR support to Firefox. In fact Martin just got his HDR patches for Firefox merged this week. So with the PipeWire camera support, Flatpak support and HDR support in place, Firefox will be ready for the future.

We are hiring!

We have 2 job openings on the Red Hat desktop team! So if you are interested in joining us in pushing the boundaries of desktop Linux forward, please take a look and apply. For these 2 positions we are open to remote workers across the globe, and while the job ads list specific seniorities we are somewhat flexible on that front too for the right candidate. So be sure to check out the two job listings and get your application in! If you ever wanted to work full-time on GNOME and related technologies, this is your chance.
  • André Almeida: Linux 6.13, I WANT A GUITAR PEDAL (2025/01/20 00:00)
Just as 2025 is starting, we got a new Linux release in mid January, tagged as 6.13. In the spirit of the holidays, Linus Torvalds even announced during 6.13-rc6 that he would be building and raffling a guitar pedal for a random kernel developer! As usual, this release comes with a pack of exciting news from the kernel community: This release has two important improvements for task scheduling: lazy preemption and proxy execution. The goal with lazy preemption is to find a better balance between throughput and response time. A secondary goal is being able to make it the preferred non-realtime scheduling policy for most cases. Tasks that really need a reschedule in a hurry will use the older TIF_NEED_RESCHED flag. A preliminary work for proxy execution was merged, which will let us avoid priority-inversion scenarios when using real time tasks with deadline scheduling, for use cases such as Android. New important Rust abstractions arrived, such as VFS data structures and interfaces, and also abstractions for misc devices. Lightweight guard pages: guard pages are used to raise a fatal signal when accessed. This feature had the drawback of a heavy performance impact, but in this new release the flag MADV_GUARD_INSTALL was added for the madvise() syscall, offering a lightweight way to guard pages. To know more about the community improvements, check out the summary made by Kernel Newbies. Now let’s highlight the contributions made by Igalians for this release.

Case-insensitive support for tmpfs

Case sensitivity has been a traditional difference between Linux distros and MS Windows, with the most popular filesystems being on opposite sides: while ext4 is case sensitive, NTFS is case insensitive. This difference proved to be challenging when Windows apps, mainly games, started to be a common use case for Linux distros (thanks to Wine!). For instance, games running through Steam’s Proton would expect the paths assets/player.png and assets/PLAYER.PNG to point to the same file, but this is not the case in ext4. To avoid doing workarounds in userspace, ext4 has had support for casefolding since Linux 5.2. Now, tmpfs joins the group of filesystems with case-insensitive support. This is particularly useful for running games inside containers, like the combination of Wine + Flatpak. In such scenarios, the container shares a subset of the host filesystem with the application, mounting it using tmpfs. To keep the mounted filesystem consistent with the expectations set by the host filesystem, if the host filesystem is case-insensitive we can do the same thing for the container filesystem too. You can read more about the use case in the patchset cover letter. While container frameworks implement proper support for this feature, you can already play with it and try it yourself:

$ mount -t tmpfs -o casefold fs_name /mytmpfs
$ cd /mytmpfs

# case-sensitive by default, we still need to enable it
$ mkdir a
$ touch a; touch A
$ ls
A  a
$ mkdir B; cd b
cd: The directory 'b' does not exist

# now let's create a case-insensitive dir
$ mkdir case_dir
$ chattr +F case_dir
$ cd case_dir
$ touch a; touch A
$ ls
a
$ mkdir B; cd b
$ pwd
/home/user/mytmpfs/case_dir/B

V3D Super Pages support

As part of Igalia’s effort to enhance the graphics stack for Raspberry Pi, the V3D DRM driver now has support for Super Pages, improving performance and making memory usage more efficient on the Raspberry Pi 4 and 5.
Using Linux 6.13, the driver will enable the MMU to allocate not only the default 4KB pages, but also 64KB “Big Pages” and 1MB “Super Pages”. To measure the difference that Super Pages make to performance, a series of benchmarks were used, and the highlights are:

- +8.36% FPS boost for Warzone 2100 on the RPi4
- +3.62% FPS boost for Quake 2 on the RPi5
- 10% time reduction for the Mesa CI job v3dv-rpi5-vk-full:arm64
- The AetherSX2 emulator is more fluid to play

You can read a detailed post about this, with all benchmark results, in Maíra’s blog post, including a super cool PlayStation 2 emulation showcase!

New transparent_hugepage_shmem= command-line parameter

Igalia contributed new kernel command-line parameters to improve the configuration of multi-size Transparent Huge Pages (mTHP) for shmem. These parameters, transparent_hugepage_shmem= and thp_shmem=, enable more flexible and fine-grained control over the allocation of huge pages when using shmem. The transparent_hugepage_shmem= parameter allows users to set a global default huge page allocation policy for the internal shmem mount. This is particularly valuable for DRM GPU drivers. Just like CPUs, GPUs can also take advantage of huge pages, but this is possible only if DRM GEM objects are backed by huge pages. Since GEM uses shmem to allocate anonymous pageable memory, having control over the default huge page allocation policy allows for the exploration of huge page use on GPUs that rely on GEM objects backed by shmem. In addition, the thp_shmem= parameter provides fine-grained control over the default huge page allocation policy for specific huge page sizes. By configuring the page sizes and policies of huge-page allocations for the internal shmem mount, these changes complement the V3D Super Pages feature, as we can now tailor the size of the huge pages to the needs of our GPUs.
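As a rough illustration of how these would be used, here is a hypothetical boot-line sketch (the policy value follows the standard shmem THP policies; the exact thp_shmem= size-list syntax should be double-checked against the kernel's admin-guide documentation, so treat this as an assumption rather than a reference):

# appended to the kernel command line, e.g. via GRUB_CMDLINE_LINUX:
# set the global default policy for the internal shmem mount, and a
# per-size policy restricted to 64K and 2M huge pages
transparent_hugepage_shmem=within_size thp_shmem=64K,2M:within_size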
DRM and AMDGPU improvements

As usual in Linux releases, this one collects a list of improvements made by our team in DRM and the AMDGPU driver over the last cycle. Cosmic (the desktop environment behind Pop!_OS) users discovered some bugs in the AMD display driver regarding the handling of overlay planes. These issues were pre-existing and came to light with the introduction of cursor overlay mode. They were causing page faults and divide errors. We debugged the issue together with reporters and proposed a set of solutions that were ultimately accepted by AMD developers in time for this release. In addition, we worked with AMD developers to migrate the driver-specific handling of EDID data to the DRM common code, using drm_edid opaque objects to avoid handling raw EDID data. The first phase was incorporated and allowed the inclusion of new functionality to get EDID from ACPI. However, some dependencies between the Linux-dependent and OS-agnostic components of the AMD driver were left to be resolved in the next iterations. This means that the next steps will focus on removing the legacy way of handling this data. Also in the AMD driver, we fixed an out-of-bounds memory write, fixed a warning on a boot regression, and exposed special GPU memory pools via the common DRM fdinfo framework. In the DRM scheduler code, we added some missing locking, removed a couple of re-lock cycles for slightly reduced command submission overheads, and clarified the internal documentation. In the common dma-fence code, we fixed one memory leak on the failure path and one significant runtime memory leak caused by incorrect merging of fences. The latter was found by the community and was manifesting itself as a system out-of-memory condition after a few hours of gameplay.

sched_ext

sched_ext landed in kernel 6.12 to enable the efficient development of BPF-based custom schedulers. During the 6.13 development cycle, the sched_ext community has made efforts to harden the code to make it more reliable and to clean up the BPF APIs and documentation for clarity. Igalia has contributed to hardening the sched_ext core code. We fixed the incorrect use of the scheduler run queue lock, especially during initialization and finalization of the BPF scheduler. Also, we fixed missing RCU lock protections when the sched_ext core selects a proper CPU for a task. Without these fixes, the sched_ext core could, in the worst case, crash or raise a kernel oops message.

Other Contributions & Fixes

syzkaller, a kernel fuzzer, has been an important instrument for finding kernel bugs. With the help of KASAN, a memory error detector, and syzbot, numerous such bugs have been reported and fixed. Igalians have contributed to such fixes in a lot of subsystems (like media, network, etc.), helping reduce the number of open bugs.

Check the complete list of Igalia’s contributions for the 6.13 release:

Authored (70)

André Almeida:
- unicode: Fix utf8_load() error path
- MAINTAINERS: Add Unicode tree
- scripts/kernel-doc: Fix build time warnings
- libfs: Create the helper function generic_ci_validate_strict_name()
- ext4: Use generic_ci_validate_strict_name helper
- unicode: Export latest available UTF-8 version number
- unicode: Recreate utf8_parse_version()
- libfs: Export generic_ci_ dentry functions
- tmpfs: Add casefold lookup support
- tmpfs: Add flag FS_CASEFOLD_FL support for tmpfs dirs
- tmpfs: Expose filesystem features via sysfs
- docs: tmpfs: Add casefold options
- libfs: Fix kernel-doc warning in generic_ci_validate_strict_name
- tmpfs: Fix type for sysfs’ casefold attribute
- tmpfs: Initialize sysfs during tmpfs init

Changwoo Min:
- sched_ext: Replace rq_lock() to raw_spin_rq_lock() in scx_ops_bypass()
- sched_ext: Clarify sched_ext_ops table for userland scheduler
- sched_ext: add a missing rcu_read_lock/unlock pair at scx_select_cpu_dfl()
- MAINTAINERS: add me as reviewer for sched_ext

Christian Gmeiner:
- drm/v3d: Use v3d_perfmon_find()

Guilherme G. Piccoli:
- Documentation: Improve crash_kexec_post_notifiers description
- wifi: rtlwifi: Drastically reduce the attempts to read efuse in case of failures

Maíra Canal:
- drm/v3d: Address race-condition in MMU flush
- drm/v3d: Flush the MMU before we supply more memory to the binner
- drm/v3d: Fix return if scheduler initialization fails
- drm/gem: Create a drm_gem_object_init_with_mnt() function
- drm/v3d: Introduce gemfs
- drm/gem: Create shmem GEM object in a given mountpoint
- drm/v3d: Reduce the alignment of the node allocation
- drm/v3d: Support Big/Super Pages when writing out PTEs
- drm/v3d: Use gemfs/THP in BO creation if available
- drm/v3d: Add modparam for turning off Big/Super Pages
- drm/v3d: Expose Super Pages capability
- drm/vc4: Use vc4_perfmon_find()
- MAINTAINERS: Add Maíra to VC4 reviewers
- mm: shmem: control THP support through the kernel command line
- mm: move get_order_from_str() to internal.h
- mm: shmem: override mTHP shmem default with a kernel parameter
- mm: huge_memory: use strscpy() instead of strcpy()
- drm/v3d: Enable Performance Counters before clearing them
- drm/v3d: Ensure job pointer is set to NULL after job completion

Melissa Wen:
- drm/amd/display: switch amdgpu_dm_connector to use struct drm_edid
- drm/amd/display: switch to setting physical address directly
- drm/amd/display: always call connector_update when parsing freesync_caps
- drm/amd/display: remove redundant freesync parser for DP
- drm/amd/display: add missing tracepoint event in DM atomic_commit_tail
- drm/amd/display: fix page fault due to max surface definition mismatch
- drm/amd/display: increase MAX_SURFACES to the value supported by hw
- drm/amd/display: fix divide error in DM plane scale calcs

Thadeu Lima de Souza Cascardo:
- media: uvcvideo: Require entities to have a non-zero unique ID
- hfsplus: don’t query the device logical block size multiple times
- Bluetooth: btmtk: avoid UAF in btmtk_process_coredump

Tvrtko Ursulin:
- drm/v3d: Appease lockdep while updating GPU stats
- drm/sched: Add locking to drm_sched_entity_modify_sched
- Documentation/gpu: Document the situation with unqualified drm-memory-
- drm/amdgpu: Drop unused fence argument from amdgpu_vmid_grab_used
- drm/amdgpu: Use drm_print_memory_stats helper from fdinfo
- drm/amdgpu: Drop impossible condition from amdgpu_job_prepare_job
- drm/amdgpu: Remove the while loop from amdgpu_job_prepare_job
- drm/sched: Optimise drm_sched_entity_push_job
- drm/sched: Stop setting current entity in FIFO mode
- drm/sched: Re-order struct drm_sched_rq members for clarity
- drm/sched: Re-group and rename the entity run-queue lock
- drm/sched: Further optimise drm_sched_entity_push_job
- drm/amd/pm: Vangogh: Fix kernel memory out of bounds write
- drm/amdgpu: Stop reporting special chip memory pools as CPU memory in fdinfo
- drm/amdgpu: Expose special on chip memory pools in fdinfo
- dma-fence: Fix reference leak on fence merge failure path
- dma-fence: Use kernel’s sort for merging fences
- workqueue: Do not warn when cancelling WQ_MEM_RECLAIM work from !WQ_MEM_RECLAIM worker

Reviewed (41)

André Almeida:
- futex: Use atomic64_inc_return() in get_inode_sequence_number()
- futex: Use atomic64_try_cmpxchg_relaxed() in get_inode_sequence_number()
- mm: shmem: use signed int for version handling in casefold option

Christian Gmeiner:
- drm/vc4: Use vc4_perfmon_find()
- drm/etnaviv: Request pages from DMA32 zone on addressing_limited
- drm/etnaviv: Use unsigned type to count the number of pages
- drm/etnaviv: Use ‘unsigned’ type to count the number of pages
- drm/etnaviv: Drop the <linux/pm_runtime.h> header
- drm/etnaviv: Fix missing mutex_destroy()
- drm/etnaviv: hold GPU lock across perfmon sampling
- drm/etnaviv: assert GPU lock held in perfmon pipe_*_read functions
- drm/etnaviv: unconditionally enable debug registers
- drm/etnaviv: update hardware headers from rnndb
- drm/etnaviv: take current primitive into account when checking for hung GPU
- drm/etnaviv: always allocate 4K for kernel ringbuffers
- drm/etnaviv: flush shader L1 cache after user commandstream

Iago Toral Quiroga:
- drm/v3d: Address race-condition in MMU flush
- drm/v3d: Flush the MMU before we supply more memory to the binner
- drm/v3d: Fix return if scheduler initialization fails
- drm/v3d: Introduce gemfs
- drm/v3d: Reduce the alignment of the node allocation
- drm/v3d: Expose Super Pages capability
- drm/v3d: Enable Performance Counters before clearing them

Jose Maria Casanova Crespo:
- drm/v3d: Ensure job pointer is set to NULL after job completion

Juan A. Suarez:
- drm/vc4: Use vc4_perfmon_find()

Maíra Canal:
- drm/v3d: Use v3d_perfmon_find()
- drm/vc4: Run default client setup for all variants.
- drm/vc4: Match drm_dev_enter and exit calls in vc4_hvs_lut_load
- drm/vc4: Match drm_dev_enter and exit calls in vc4_hvs_atomic_flush
- drm/vc4: Correct generation check in vc4_hvs_lut_load
- drm/vkms: Drop unnecessary call to drm_crtc_cleanup()

Tvrtko Ursulin:
- drm/gem: Create a drm_gem_object_init_with_mnt() function
- drm/gem: Create shmem GEM object in a given mountpoint
- drm/v3d: Support Big/Super Pages when writing out PTEs
- drm/v3d: Use gemfs/THP in BO creation if available
- drm/v3d: Add modparam for turning off Big/Super Pages
- drm: add DRM_SET_CLIENT_NAME ioctl
- drm: use drm_file client_name in fdinfo
- drm/amdgpu: make drm-memory-* report resident memory
- dma-buf: fix dma_fence_array_signaled v4
- dma-buf: Fix __dma_buf_debugfs_list_del argument for !CONFIG_DEBUG_FS

Tested (1)

Christian Gmeiner:
- drm/etnaviv: Replace the ‘&pdev->dev’ with ‘dev’

Acked (5)

Changwoo Min:
- sched_ext: Rename scx_bpf_dispatch[_vtime]() to scx_bpf_dsq_insert[_vtime]()
- sched_ext: Rename scx_bpf_consume() to scx_bpf_dsq_move_to_local()
- sched_ext: Rename scx_bpf_dispatch[_vtime]_from_dsq*() -> scx_bpf_dsq_move[_vtime]*()

Maíra Canal:
- MAINTAINERS: remove myself as a VKMS maintainer
- MAINTAINERS: Add myself as VKMS Maintainer

Maintainer SoB (6)

Maíra Canal:
- MAINTAINERS: remove myself as a VKMS maintainer
- MAINTAINERS: Add myself as VKMS Maintainer
- drm/vkms: Add documentation
- drm/vkms: Suppress context imbalance detected by sparse warning
- drm/vkms: Add missing check for CRTC initialization
- drm/v3d: Drop allocation of object without mountpoint
  • Simon Ser: Status update, January 2025 (2025/01/18 22:00)
    Hi all! FOSDEM is approaching rapidly! I’ll be there and will give a talk about modern IRC. In wlroots land, we’ve finally merged support for the next-generation screen capture protocols, ext-image-capture-source-v1 and ext-image-copy-capture-v1! Compared to the previous wlroots-specific protocol, the new one provides better damage tracking, enables cursor capture (useful for remote desktop apps) and per-window capture (this part is not yet implemented in wlroots). Thanks to Kirill Primak, wlroots now supports the xdg-toplevel-icon-v1 protocol, useful for clients which want to update their window icon without changing their application ID (either by providing an icon name or pixel buffers). Kirill also added safety assertions everywhere in wlroots to ensure that all listeners are properly removed when a struct is destroyed. I’ve revived some old patches to better identify outputs in wlroots and libdisplay-info. Currently, there are two common ways to refer to an output: either by its name (e.g. “DP-2”), or by its make+model+serial (e.g. “Foo Corp C4FE 42424242”). Unfortunately, both of these naming schemes have downsides. The name is ill-suited to configuration files because it’s unstable and might change on reboot or unplug (it depends on driver load order, and DP-MST connectors get a new name each time they are re-plugged). The make+model+serial uses a database to look up the human-readable manufacturer name (so database updates break config files), and is not unique enough (different models might share a duplicate string). A new wlr_output.port field and a libdisplay-info device tag should address these shortcomings. Jacob McNamee has contributed a Sway patch to add security context properties to IPC, criteria and title format. With this patch, scripts can now figure out whether an application is sandboxed, and a special title can be set for sandboxed (or unsandboxed) apps. There are probably more use-cases we didn’t think of! I’ve managed to put aside some time to start reviewing the DRM color pipeline patches. As discussed in the last XDC it’s in a pretty good shape so I’ve started dropping some Reviewed-by tags. While discussing with David Turner about libliftoff, I’ve realized that the DRM_MODE_PAGE_FLIP_EVENT flag was missing some documentation (it’s not obvious how it interacts with the atomic uAPI) so I’ve sent a patch to fix that. I continue pushing small updates to go-imap, bringing it little by little closer to version 2.0. I’ve added helpers to make it easier for servers to implement the FETCH command, implemented FETCH BINARY and header field decoding for SEARCH in the built-in in-memory server, added limits for the IMAP command size to prevent denial-of-service, and fixed a few bugs. While testing with ImapTest, I’ve discovered and fixed a bug in Go’s mime/quotedprintable package. Thanks to pounce, goguma now internally keeps track of message reactions. This is not used just yet, but will be soon once we add a user interface to display and send reactions. Support for deleting messages (called “redact” in the spec) has been merged. I’ve also implemented a small date indicator which shows up when scrolling in a conversation. That’s all for this month, see you at FOSDEM!
  • Christian Gmeiner: Multiple Render Targets for etnaviv (2025/01/16 00:00)
Modern graphics programming revolves around achieving high-performance rendering and visually stunning effects. Among OpenGL’s capabilities, Multiple Render Targets (MRTs) are particularly valuable for enabling advanced rendering techniques with greater efficiency. With the latest release of Mesa 24.3 and the commitment from Igalia, the etnaviv GPU driver now includes support for MRTs. If you’ve ever wondered how MRTs can transform your graphics pipeline or are curious about the challenges of implementing this feature, this blog post is for you.
  • Hans de Goede: IPU6 camera support status update (2025/01/14 14:21)
The initial IPU6 camera support that landed in Fedora 41 only works on a limited set of laptops. The reason for this is that with MIPI cameras every different sensor and glue chip (like IO-expanders) needs to be supported separately. I have been working on making the camera work on more laptop models. After receiving and sending many emails and blog post comments about this, I have started filing Fedora bugzilla issues on a per-sensor and/or per-laptop-model basis to be able to properly keep track of all the work. Currently the following issues are either actively being worked on or are being tracked to be fixed in the future.

Issues which have fixes pending (review) upstream:
- IPU6 camera on TERRA PAD 1262 V2 not working, fix has been accepted upstream
- IPU6 camera on Dell XPS 9x40 models with ov02c10 sensor not working, sensor driver has been submitted upstream

Open issues with various states of progress:
- IPU6 camera on Dell Latitude 7450 laptop not working
- IPU6 camera on HP Spectre x360 14-eu0xxx / Spectre 16 MeteorLake with ov08x40 not working
- IPU6 camera on HP Spectre x360 2-in-1 16-f1xxx/891D with hi556 sensor not working
- IPU6 camera on Lenovo ThinkPad X1 Carbon Gen 12 not working
- Lattice MIPI Aggregator support for IPU6 cameras
- Lunar Lake MIPI camera / IPU7 CSI receiver support
- ov01a10 camera sensor driver lacks 1296x816 mode support
- No driver for ov01a1s camera sensor
- iVSC fails to probe with ETIMEDOUT
- iVSC fails to probe with EINVAL on XPS 9315

See all the individual bugs for more details. I plan to post semi-regular status updates on this on my blog. The above list of issues can also be found on my Fedora 42 change proposal tracking this, and I intend to keep an updated complete list of all x86 MIPI camera issues (including closed ones) there.
  • Mike Blumenkrantz: Rake In Bike (2025/01/09 00:00)
First Perf of the Year

I got a ticket last year about this game Everspace having bad perf on zink. I looked at it a little then, but it was the end of the year and I was busy doing other stuff. More important stuff. I definitely wasn’t just procrastinating. In any case, I didn’t fix it last year, so I dusted it off the other day and got down to business. Unsurprisingly, it was still slow.

Easing Into Speed

The first step is always a flamegraph, and as expected, I got a hit: huge bottlenecking when checking query results, specifically in semaphore waits. What’s going on here? What’s going on is this game is blocking on timestamp queries, and the overhead of doing vkWaitSemaphores(t=0) to check drm syncobj progress for the result is colossal. Who could have guessed that using core Vulkan mechanics in a hotpath would obliterate perf? Fixing this is very stupid: directly checking query results with vkGetQueryPoolResults avoids syncobj access inside drivers by accessing what are effectively userspace fences, which Vulkan doesn’t directly permit. If an app starts polling on query results, zink now uses this rather than its usual internal QBO mechanism. Bottleneck uncorked and performance fixed. Right? Naaaaaa. The perf is still pretty bad. It’s time to check in with the doctor. Looking through some of the renderpasses reveals all kinds of begin/end tomfoolery. Paring this down, renderpasses are being split for layout changes to toggle feedback loops: the game is rendering to one miplevel of a framebuffer attachment while sampling from another miplevel of the same image. This breaks zink’s heuristic for detecting implicit feedback loops. Improvements here tighten up that detection to flatten out the renderpasses.

Gottagofastium

Perf recovered: the game runs roughly 150% faster, putting it on par with RadeonSI. Maybe some other games will be affected? Who can say.
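As an aside for anyone wanting to poke at similar issues themselves, the flamegraph step above can be reproduced with standard tooling; a rough sketch (the game binary name is a placeholder, and this assumes a Mesa build with zink plus Brendan Gregg's FlameGraph scripts on PATH):

$ MESA_LOADER_DRIVER_OVERRIDE=zink perf record -g ./your-game
$ perf script | stackcollapse-perf.pl | flamegraph.pl > flame.svg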
  • Mike Blumenkrantz: Manifested (2025/01/07 00:00)
    I’m not saying we’re doing it Don’t quote me. We’re not doing it. Unless we are, in which case everything I wrote last year may come to pass with the advent of the unified OpenGL/ES ‘25 release. This is not a release announcement, but I’m tentatively planning to provide the date of the announcement once the ray-tracing EXT goes live. Confused? Well, you better figure it out quick cuz this is only the first week of 2025 and we got 51 more to go. In the meanwhile, get in the car: we’re going mesh shading. DISCLAIMER I gotta do this every year because we can’t have fun anymore on the internet. C’mon. Obviously there’s no ray-tracing EXT in the pipe BECAUSE WE’RE GOING MESH SHAAADIIIIIIIIIIIIIIIIIIII
  • Lucas Fryzek: 2024 Graphics Team Contributions at Igalia (2024/12/20 00:00)
2024 has been an exciting year for Igalia’s Graphics Team. We’ve been making a lot of progress on Turnip, the AMD display driver, the Raspberry Pi graphics stack, Vulkan video, and more.

Vulkan Device Generated Commands

Igalia’s Ricardo Garcia has been working hard on adding support for the new VK_EXT_device_generated_commands extension in the Vulkan Conformance Test Suite. He wrote an excellent blog post on the extension and on his work that you can read here. Ricardo also presented the extension at XDC 2024 in Montréal, which he also blogged about. Take a look and see what generating Vulkan commands directly on the GPU looks like!

Raspberry Pi Enhancements & Performance Improvements

Our very own Maíra Canal made a big contribution to improve the graphics performance of Raspberry Pi 4 & 5 devices by introducing support for “Super Pages”. She wrote an excellent and detailed blog post on what Super Pages are, how they improve performance, and comparing performance of different apps and games. You can read all the juicy details here. She also worked on introducing CPU jobs to the Broadcom GPU kernel driver in Linux. These changes allow user space to implement jobs that get executed on the CPU in sync with the work on the GPU. She wrote a great blog post detailing what CPU jobs allow you to do and how they work that you can read here. Christian Gmeiner on the Graphics team has also been working on adding Perfetto support to Broadcom GPUs. Perfetto is a performance tracing tool and support for it in Broadcom drivers will allow developers to gain more insight into bottlenecks of their GPU applications. You can check out his changes to add support in the following MRs:
- MR 31575
- MR 32277
- MR 31751
The Raspberry Pi team here at Igalia presented all of their work at XDC 2024 in Montréal. You can see a video below.

Linux Kernel 6.8

A number of Igalians made several contributions to the Linux 6.8 kernel release back in March of this year. Our colleague Maíra wrote a great blog post outlining these contributions that you can read here. To highlight some of these contributions:

AMD HDR & Color Management
Melissa Wen has been working on improving and implementing HDR support in AMD’s display driver as well as working on color management in the Linux display stack.

Async Flip
André Almeida implemented support for asynchronous page flip in the atomic DRM modesetting API.

V3D 7.1.x Kernel Driver
Iago Toral contributed a number of patches upstream to get the Broadcom DRM driver working with the latest Broadcom hardware used in the Raspberry Pi 5.

GPU stats for the Raspberry Pi 4/5
José María “Chema” Casanova worked on adding GPU stats support to the latest Raspberry Pi hardware.

Turnip Improvements

Dhruv Mark Collins has been very hard at work to try and bring performance parity between Qualcomm’s proprietary driver and the open source Turnip driver. Two of his big contributions to this were improving the 2D buffer to image copies on A7XX devices, and implementing unidirectional Low Resolution Z (LRZ) on A7XX devices. You can see the MRs for these changes here and here. A new member of the Igalia Graphics Team, Karmjit Mahil, has been working on different parts of the Turnip stack, but one notable improvement he made was to improve fmulz handling for Direct3D 9. You can check out his changes here and read more about them. Danylo Piliaiev has been hard at work adding support for the latest generation of Adreno GPUs.
This included getting support for the A750 working, and then implementing performance improvements to bring it up to parity with other Adreno GPUs in Turnip. Altogether the Turnip team implemented a number of Vulkan extensions and performance improvements, such as:
- VK_KHR_shader_atomic_int64 - Amber MR 27776
- VK_KHR_fragment_shading_rate - Danylo Piliaiev MR 30905
- VK_KHR_8bit_storage - Žan Dobersek MR 28254
- shaderInt8 feature - Žan Dobersek MR 29875
- VK_KHR_shader_subgroup_rotate - Job Noorman MR 31358
- VK_EXT_map_memory_placed - Dhruv Mark Collins MR 28928
- VK_EXT_legacy_dithering - Karmjit Mahil MR 30536
- VK_EXT_depth_clamp_zero_one - Danylo Piliaiev MR 29387

Display Next Hackfest & Display/KMS Meet-up

Igalia hosted the 2024 version of the Display Next Hackfest. This community event is a way to get Linux display developers together to work on improving the Linux display stack. Our Melissa Wen wrote a blog post about the event and what it was like to organize it. You can read all about it here. Just in case you thought you couldn’t get enough Linux display stack, Melissa also helped organize a Display/KMS meet-up at XDC 2024. She wrote all about that meet-up and the progress the community made on her blog here.

AMD Display & AMDGPU

Melissa Wen has also been hard at work improving AMDGPU’s display driver. She made a number of changes, including improving the display debug log to include hardware color capabilities, migrating EDID handling to the EDID common code, and various bug fixes such as:
- Fixing null-pointer dereference on edid reading https://lore.kernel.org/amd-gfx/20240216122401.216860-1-mwen@igalia.com/
- Checking dc_link before dereferencing https://lore.kernel.org/amd-gfx/20240227190828.444715-1-mwen@igalia.com/
- Using mpcc_count to log MPC state https://lore.kernel.org/amd-gfx/20240412163928.118203-1-mwen@igalia.com/
- Fixing cursor offset on rotation 180 https://lore.kernel.org/amd-gfx/20240807075546.831208-22-chiahsuan.chung@amd.com/
- Fixes for kernel crashes since cursor overlay mode https://lore.kernel.org/amd-gfx/20241217205029.39850-1-mwen@igalia.com/

Tvrtko Ursulin, a recent addition to our team, has been working on fixing issues in AMDGPU and some of the Linux kernel’s common code. For example, he worked on fixing bugs in the DRM scheduler around missing locks, optimizing the re-lock cycle on the submit path, and cleaning up the code. On AMDGPU he worked on improving memory usage reporting, fixing out of bounds writes, and micro-optimizing ring emissions. For DMA fence he simplified fence merging and resolved a potential memory leak. Lastly, on workqueue he fixed false positive sanity check warnings that AMDGPU & DRM scheduler interactions were triggering. You can see the code for some of these changes below:
- https://lore.kernel.org/amd-gfx/20240906180639.12218-1-tursulin@igalia.com/
- https://lore.kernel.org/amd-gfx/20241008150532.23661-1-tursulin@igalia.com/
- https://lore.kernel.org/amd-gfx/20241227111938.22974-1-tursulin@igalia.com/
- https://lore.kernel.org/amd-gfx/20240813135712.82611-1-tursulin@igalia.com/
- https://lore.kernel.org/amd-gfx/20240712152855.45284-1-tursulin@igalia.com/

Vulkan & OpenGL Extensions

GL_EXT_texture_offset_non_const
Ricardo was busy working on extending OpenGL by adding this extension to GLSL as well as providing an implementation for it in glslang.

VK_KHR_video_encode_av1 & VK_KHR_video_decode_av1
Igalia is listed as a contributor to these extensions and worked very hard to implement CTS support for the extensions.
Etnaviv Improvements

Christian Gmeiner, one of the maintainers of the Etnaviv driver for Vivante GPUs, has been hard at work this year to make a number of big improvements to Etnaviv. This includes using hwdb to detect GPU features, which he wrote about here. Another big improvement was migrating Etnaviv to use isaspec for the GPU ISA description, allowing an assembler and disassembler to be generated from XML. This also allowed Etnaviv to reuse some common features in Mesa for assemblers/disassemblers and take advantage of the Python code generation features others in the community have been working on. He wrote a detailed blog about it, that you can find here. In the same vein of Etnaviv infrastructure improvements, Christian has also been working on a new shader compiler, written in Rust, called “EBC”. Christian presented this new shader compiler at XDC 2024 this year. You can check out his presentation below. On the side of new features, Christian landed a big one in Mesa 24.3 for Etnaviv: Multiple Render Target (MRT) support! This allows games and applications to render to multiple render targets (think framebuffers) in a single graphics operation. This feature is heavily used by deferred rendering techniques, and is a requirement for later versions of desktop OpenGL and OpenGL ES 3. Keep an eye on Christian’s blog to see any of his future announcements.

Lavapipe/LLVMpipe, Android & ChromeOS

I had a busy year working on improving Lavapipe/LLVMpipe platform integration. This started with adding support for DMABUF import/export, so that the display handles from the Android window system could be properly imported and mapped. Next came Android window system integration for the DRI software rendering backend in EGL, and lastly, but most importantly, came updating the documentation in Mesa for building Android support. I wrote all about this effort here. The latter half of the year had me working on improving Lavapipe’s integration with ChromeOS, and having Lavapipe work as a host Vulkan driver for Venus. You can see some of the changes I made in virglrenderer here and crosvm here. This work is still ongoing.

What’s Next?

We’re not planning to stop our 2024 momentum, and we’re hoping for 2025 to be a great year for Igalia and the Linux graphics stack! I’m booked to present about Lavapipe at Vulkanised 2025, where Ricardo will also present about Device-Generated Commands. Maíra & Chema will be presenting together at FOSDEM 2025 about improving performance on Raspberry Pi GPUs, and Melissa will also present about kworkflow there. We’ll also be at XDC 2025, networking and presenting about all the work we are doing on the Linux graphics stack. Thanks for following our work this year, and here’s to making 2025 an even better year for Linux graphics!
  • Peter Hutterer: A new issue policy for libinput - closing and reopening issues for fun and profit (2024/12/18 03:21)
This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it. Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix them (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated, but the latter can be, via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed, and with our bugbot you can automate much of this. Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, well, surprisingly reporting an issue doesn't mean you get immediately added to the project. So removing a "needinfo" label requires a maintainer. And until that happens you have an open bug that has needinfo set and looks like it's still needing info. Not a good look, that is. So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use them (click it!). So for the foreseeable future libinput will follow this pattern:

- Reporter files an issue
- Maintainer looks at it, posts a comment requesting some information, closes the bug
- Reporter attaches information, re-opens the bug
- Maintainer looks at it and either files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label

Obviously the close/reopen stage may happen a few times. For the final closing, where the issue isn't fixed, the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing, the result of this approach is that an open bug is a bug that is actionable for a maintainer. This process should work (in libinput at least), all it requires is for reporters to not get grumpy about their issue being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.

[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present
  • Donnie Berkholz: The lazy technologist’s guide to staying healthy (2024/12/17 21:05)
TL;DR — I’ve lost a ton of weight from mid-2023 to early 2024 and maintained the vast majority of that loss. I’ve also begun exercising and had great results in my fitness and strength. Here, I’m sharing what I’ve learned as well as a bunch of my tips and tricks. Overall on the diet side, it’s about eating a wide variety and healthy ratio of colorful, minimally processed whole foods, with natural flavor and sweetness, only during meals. On the exercise side, I do both cardio and resistance training. For cardio, I focus on post-meal, moderate-intensity cardio (specifically, 1-mile brisk walks). For strength training, I use calisthenics-based compound exercises (complex multi-muscle movements) 2x/wk, performing a single set to near-exhaustion. I’ve optimized this down from 3 sets 3x/wk, based on my experience and academic research in the area. In the past 18 months, I’ve lost 75 pounds and gone from completely sedentary to fit, while minimizing the effort to do so (but needing a whole lot of persistence and grit). On the fitness side, I’ve taken my cardiorespiratory fitness from below average to high, and I’m stronger than I’ve been in my entire life. Again I’ve aimed to do so with maximum efficiency, shooting for 80% of the value with 20% of the effort. Here’s what I wrote in my initial post on weight loss: I have no desire to be a bodybuilder, but I want to be in great shape now and be as healthy and mobile as possible well into my old age. And a year ago, my blood pressure was already at pre-hypertension levels, despite being at a relatively young age. Research shows that 5 factors are key to a long life — extending your life by 12–14 years:

- Never smoking
- BMI of 18.5–24.9
- 30+ min a day of moderate/vigorous exercise
- Moderate alcohol intake (vs none, occasional, or heavy). Unsurprisingly, there is vigorous scientific and philosophical/religious/moral debate about this one. However all studies agree that heavy drinking is bad, so ensure you avoid that.
- Diet quality in the upper 40% (Alternate Healthy Eating Index)

Additionally, people who are in good health have a much shorter end-of-life period. This means they can enjoy a longer healthy part of their lives (the “healthspan”) and squeeze the toughest times into a shorter period right at the end. After seeing many seniors struggle for years as they got older, I wanted my own story to end differently. Although I’m no smoker, I lacked three other factors. My weight was incredibly unhealthy, I was completely sedentary, and my diet was terrible. I do drink moderately, however (nearly all beer). This post accompanies my earlier writeups, “The lazy technologist’s guide to weight loss” and “The lazy technologist’s guide to fitness”. Check them out for in-depth, science-driven reviews of my experience losing weight and getting fit.

Why is this the lazy technologist’s guide, again?

I wanted to lose weight in the “laziest” way possible — in the same sense that lazy programmers work to find the most efficient solutions to problems. I’ll reference an apocryphal quote by Bill Gates and a real one by Larry Wall, creator of Perl. Gates supposedly said, “I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it.” Wall wrote in Programming Perl, “Laziness: The quality that makes you go to great effort to reduce overall energy expenditure.
It makes you write labor-saving programs that other people will find useful and document what you wrote so you don’t have to answer so many questions about it.” What’s the lowest-effort, most research-driven way to lifelong health, whether you’re losing weight, getting in shape, or trying to maintain your current healthy weight or state after putting in a whole lot of time and effort getting there? Discovering and executing upon that was my journey. Read on if you’re considering taking a similar path.

Hitting my goals

Since my posts early this year, I broke through into my target ranges for both maintenance weight and fitness. In mid-April, I hit a low of 164 lbs. Since then, I’ve been gradually transitioning into maintenance mode, hovering within ~10 lbs of my low. As I write this, I’m about 10 pounds above my minimum weight, at a current BMI of 23. At my lowest, I had a BMI around 22. On the fitness side, in late May, I broke into the VO2Max range for high cardiorespiratory fitness. (In my case, that’s 47, based on my age and gender, as measured by my Apple Watch.) In the next few sections, I’ll share how I’ve continued to change what I eat and how I work out to keep improving my overall health.

Evolving what I eat for long-term health

In this section, I’ll share a lot of what I’ve learned regarding how to eat healthier. There’s a lot to it, from focusing on whole foods with enough protein and fiber to eating enough veggies and managing portion sizes, so dig in for all the details!

Keep up the protein

As I wrote in the post on weight loss, high protein is a great way to lose weight and maintain or build muscle. Protein also promotes fullness, so I’ve shifted my diet so that every meal (breakfast included) has a good amount of protein — targeting 25%–30% of daily calories. Previously, I used to get quite hungry in the late morning, before it was time to eat lunch. That’s no longer a concern even when I’m on a caloric deficit, let alone eating at maintenance.

Use Mediterranean plate ratios

Although I’m not officially eating a Mediterranean diet, I’ve found its plate ratios to be incredibly valuable:
- 1/2 vegetable
- 1/4 lean protein (white meat, seafood, lentils/beans)
- 1/4 starchy carb (whole grains or starchy vegetables, avoiding white/processed grains)

Building meals that way makes it very hard for me to overeat, because the vegetables are so high-volume and low-calorie that they take up a lot of space in my stomach. Following this guideline is especially helpful at restaurants, which I’ll detail later. My main exception is breakfast, where I do incorporate veggies but not as half of my meal. Veggies plus fruits are certainly half of it, though.

Count calories for a while, and then set a permanent weight-gain trigger

After overeating for a sizable fraction of my lifetime, and then eating at a large deficit for a year, I need to teach myself what sustainable eating habits look like, because they clearly aren’t intuitive for me. The “intuitive eating” trend may work for people who already have a habit of healthy eating and weight maintenance, but not for the rest of us — our intuition is broken from years or decades of bad habits. As a result, calorie counting at maintenance is a good practice to learn what the correct amount of food per day looks and feels like. My plan is to continue counting calories at maintenance until I’m confident that I’m no longer gaining weight, and then stop.
However, that raises the risk that my weight could then start increasing again, because it’s incredibly common for people to re-gain the weight they’ve lost. Around 80%–90% of people fail to maintain their weight loss — mostly those who don’t exercise and stop tracking their eating/weight. There are great studies based on the US National Weight Control Registry about the habits of people who keep their weight off. As a process control, I’m going to continue weighing myself daily. I’m setting an upper limit of 5 pounds above my target weight that will trigger me to begin calorie counting again. To avoid reacting to the random deviations that accompany daily weigh-ins, I’ve started using a specialized app called Happy Scale that is designed for creating smoothed trends for body weight. You could also do this in a spreadsheet, but I like the ease of use of this app. Dine out at restaurants, safely Eating out at restaurants (or getting takeout/takeaway) is a challenge that a lot of people on diets — or just trying to eat healthy — can’t figure out how to make work. A lot of people just give up and always order a salad. Surprisingly, that can trick you into thinking you’re eating healthy without actually doing so. I’ve created a set of guidelines that I follow when eating out: Aim for lean protein & veggies, prepared simply (e.g. grilled, roasted, sautéed, steamed). Always start with veggies. If your meal doesn’t come with them, order a starter salad or veggies as an appetizer. Minimize high-fat, calorie-dense sauces & toppings. Watch out for anything based on cream (like Alfredo sauce), cheese, mayo (aioli), oil, or butter. A little bit of a high-flavor cheese is great (like finely grated parmesan, or crumbled feta/goat cheese), but avoid the cheese sauce or a big pile of shredded cheese. Get meals served with tomato-based sauces, slices of lime/lemon, or just spices/seasonings, which bring tons of flavor without the calories. If a meal comes with a calorie-dense sauce, ask for it on the side and dip your bites instead of getting your meal drenched in it. You’ll often be shocked by how big of a cup they provide for the sauce, which would’ve been coating your food. In salads, always get dressing on the side, and prefer oil & vinegar or a vinaigrette. Do the same with any other high-fat sauces — get them on the side. That way, you’re in control of the portion, or you can just dip bites. Salad dressings can have hundreds of calories in them. If you add a huge pile of cheese and croutons, and maybe some processed meats like pepperoni or some oil-covered pasta, then you’ve just turned a healthy meal into the opposite. Avoid breaded, deep-fried foods. This includes the protein as well as French fries or chips/crisps. Don’t eat the table bread when it comes out first at a restaurant. Eat veggies, then protein, and only then starchy carbs. Remember, only 1/4 of your meal should be starchy carbs (bread, rice, pasta, potatoes, etc.), according to the Mediterranean plate ratio. Avoid meals that are 1/2 or more starchy carbs. Only eat half of what you order — restaurant portions are big enough for 2 meals, sometimes 3. Split the meal physically on your plate when you get it, and ask for a box as soon as possible. As one example, I love burgers. When I order one, I’ll look for a healthier, simpler option instead of the one with 15 fatty add-ons, I’ll stick with a single patty instead of a double, and I’ll often ask for the aioli on the side.
That way, I can lightly dip each bite if it needs the flavor. I’ll frequently get a turkey or bison patty instead of beef, and I’ll often order it without a bun — either on a bed of lettuce (eaten with a fork & knife), or wrapped in lettuce instead of the bun. For the side, instead of fries, I’ll get a side salad (no croutons, no cheese, vinaigrette on the side), veggies, or fruit. Sometimes I’ll get coleslaw or a lower-calorie soup, when that’s the best option. I allow myself one “extra” from my guidelines, and it’s usually getting cheese on the burger (the other toppings are veggies). As another example, noodle/rice dishes are common at Italian, Indian, or Asian restaurants (Chinese, Japanese, Vietnamese, etc.). Get a stir-fry, add lots of veggies, get the grilled/roasted chicken or seafood, avoid the buttery/creamy sauces, and/or eat less of the rice/noodle part of the dish. If you get sushi, prefer sashimi and rolls over nigiri, which has a lot more rice. When you do order starchy carbs, prefer the whole-grain version when possible (brown rice, whole-wheat pasta, etc). When you can make it work, first eat the veggies, then protein, then grains. Sometimes you’re stuck at a place that doesn’t fit any of those guidelines. Fast-food restaurants like McDonald’s, Burger King, or Dairy Queen have no healthy meal options — no grilled chicken, no salads or wraps without fried food, etc. In those cases, I’ll order smaller portions, like a kid’s meal, or a single cheeseburger and the smallest size of fries (the one that comes in a little bag instead of a fry holder), with a cup of water. Another option is a double fish sandwich, if you order it without tartar sauce and skip the bun. You can probably manage a meal around 500–600 calories, but you’ll be hungry because you hardly got any veggies or fiber, so you’re missing out on fullness signals. You’ll also have eaten all kinds of ultraprocessed ingredients instead of healthy whole foods, which we’ll discuss later. Eat like it’s the 1950s In the US, if you go back to before we had ultraprocessed foods, people ate very differently. Most of those foods emerged in the 1960s and really gained popularity in the 1970s, so let’s return to the 1950s. Eat a savory breakfast Before overwhelmingly sugar-doused cereals, people often ate breakfast differently. It might be leftovers from the night before, or it could be oatmeal, peanut-butter toast, or something like eggs & bacon. In general, breakfasts were much more savory than sweet. I’ve adopted that philosophy, shifting away from breakfasts like sweet cereal or flavored yogurt (both with plenty of added sugar) to a more savory approach, or at least foods with no added sugar. Most often, I’ll have something with eggs and beans, as well as a separate bowl with berries and plain skyr or Greek yogurt. The fruit adds plenty of flavor and sweetness, so there’s no need to add any more from sugar/honey/etc. Eliminate snacks Before ultraprocessed foods, snacks also weren’t really a thing. There weren’t food companies trying to create opportunities for profit through people eating outside of typical meals. You’d eat breakfast, lunch, and dinner, and that was it. Eating random snacks throughout the day just wasn’t common, although some families might have an extra mini-meal of sorts at some point. Decrease portion size by decreasing plate size Additionally, portion sizes have increased dramatically. In part, this is because plateware has increased in size.
For example, the diameter of plates has increased from 9″ in the 1950s to 10.5″–11″ between the 1980s and 2000, and as much as 12″–13″ today. People will subconsciously take larger portions and eat more calories when their plates are larger, as academic studies have shown. This brings us to another easy thing I did to eat healthier — reducing the size of my plates, bowls, and glasses. Even without buying new plates, I started only adding food to the “inner ring” instead of all the way to the edge, and stopped piling anything on top of other food. I did buy new, smaller bowls and glasses, because portions in those were harder to manage. And when I eat out or get takeout, I have a mental baseline to compare their plate sizes against. I also watch out for the use of multiple courses to keep me from noticing how much I’m eating. To sum up, I switched to a savory breakfast, eliminated snacks outside of meals, and reduced the size of my plateware. Even if you only do 1 or 2 of those 3 things, it’ll make a meaningful difference. Ultraprocessed foods trick your body I’ve read quite a bit about ultraprocessed foods. The summary is that they are effectively ways to trick your body into thinking it’s getting something that’s not really there. Artificial sweeteners, things with the taste & consistency of fat that contain no fat, and artificial/natural flavors that make your body expect something else are just a few examples. Sugar that’s not sugar When your body tastes something sweet, it expects that it will soon get an influx of calories from sugar to digest. Artificial sweeteners mess with this, tricking your body. A number of studies have shown that people tend to make up for these “lost” calories by subconsciously eating more later that day. It’s possible to prevent this with strict calorie counting, but it’s a bias you want to be aware of. It’s also unclear what these mixed signals will do to your body over the long term, when it can’t tell what calories to expect based on what you taste. As a result, I’ve begun avoiding alternative sweeteners, and just getting something with sugar if that’s really what I want. Fat that’s not fat This one is especially sneaky, because you can’t always spot it in the ingredient labels. Using seemingly normal ingredients, companies have created fat substitutes with unique structures that provide the same sort of mouthfeel as fat, without containing the expected levels of fat. These can come in easily identifiable varieties, such as all sorts of “gums” — this results in ice cream that basically doesn’t melt, for example. “Whey protein concentrate” is another common one, as is anything with “dextrin” in the name and a variety of emulsifiers such as “polyesters.” You need to work to find (and typically pay a premium for) things like ice cream or chocolate with a simple ingredient list, because natural ingredients cost more and often don’t transport as well. Flavor that’s not flavor Flavors in the wrong foods are another example of tricking your body into expecting a different set of nutrients than it gets. This can cause you to develop cravings for unhealthy foods, based on your desire for a particular flavor profile that comes from added flavorings. For example, you might want an orange-, apple-, or grape-flavored drink instead of actual oranges, apples, or grapes. Your body causes you to crave certain things based upon their nutrient profile and what it needs.
This is most obvious in pregnancy and in studies done on babies/toddlers given free choice of what to eat. Micronutrients that don’t belong “Enriched” foods are stealing your health, again based on artificially induced cravings for unnaturally added ingredients. A good case study here is flour in bread. In the early 1940s, the United States passed a law requiring enrichment of bread flour to prevent deficiency diseases from missing micronutrients (e.g. folic acid, niacin, thiamin, riboflavin, iron). Italy, however, did no such thing — instead, it focused on educating its citizens on the components of a healthy diet. As a result, Italians eat far more beans than Americans, for example, which contain many of the same missing micronutrients. Americans instead eat far more white bread than they should — an ultraprocessed food that our body desires because of the added micronutrients that don’t belong. Salt that’s over the top Overly salty foods are another danger area. In the US, the recommended limit is 2,300 mg of sodium per day, which is quite easy to hit even while trying to avoid extra-salty foods. For that reason, I mostly don’t eat ramen, other soups, preserved meats like smoked salmon or beef jerky, or frozen meals. Another surprising one is sugar-sweetened beverages like soda, which have so much sugar that they also add salt to trick you into feeling like they aren’t that sweet. Optimize for gut health Another area that’s become increasingly visible in the past couple of decades is the importance of gut microbiota in health. Keeping them healthy is critical to being healthy overall. For me, that’s come down to a few key factors: fiber, fermented foods, and reduced alcohol. Eat enough fiber — and that’s a lot! The average American only eats 10–15 grams of fiber per day, while the recommended daily allowance for adults up to age 50 is 25 grams for women and 38 grams for men. I see this as generally correlated with our consumption of ultraprocessed foods, because fiber is primarily present in whole foods. Whole fruits, whole vegetables, whole grains, legumes (beans/lentils), and seeds are among the best sources of fiber. As soon as you stop eating the whole, unprocessed food and replace it with something more processed, you lose the benefits. Make sure to eat the whole fruit, including the edible portion of the skin. Even something as simple as making a fruit smoothie or fruit juice will chop up or remove the fiber and other long-chain complex molecules, reducing its nutritional value. Personally, I found it surprisingly hard to modify my diet enough to get enough fiber while I was losing weight, because I’d been eating ultraprocessed foods for so long. In the end, the main things I added were raspberries & blackberries, chia seeds, broccoli & cauliflower, and beans/lentils. Among fruit, raspberries and blackberries are particularly high in fiber (you can tell from all the seeds as you’re eating them). Other great options include apples, oranges, pears, grapefruit, and kiwifruit, as long as you eat the edible portion of the skin & rind. Passion fruit is an all-star with many times more fiber, but it’s quite expensive. Dried fruit can be a great complement to fresh fruit in moderation — especially golden berries (another all-star), plums, and apricots. It’s easy to eat too much dried fruit, though, because all the water’s been removed, so it doesn’t fill you up as quickly. For example, you can eat 5 dried apricots in a few minutes, but imagine eating 5 fresh apricots in a row.
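To put rough numbers on that: drying removes the water but not the calories, so the energy density jumps several-fold while the volume (and fullness) collapses. A quick back-of-the-envelope comparison, using ballpark nutrition-database values that I am assuming here purely for illustration:

# Ballpark energy density of fresh vs. dried apricots (all values approximate).
fresh_kcal_per_100g = 48    # fresh apricot, mostly water
dried_kcal_per_100g = 240   # dried apricot, water removed
fresh_piece_g, dried_piece_g = 35, 8  # rough per-piece weights

for name, kcal_100g, grams in [("fresh", fresh_kcal_per_100g, fresh_piece_g),
                               ("dried", dried_kcal_per_100g, dried_piece_g)]:
    per_piece = kcal_100g * grams / 100
    print(f"5 {name} apricots = {5 * per_piece:.0f} kcal")
# Roughly the same calories (~84 vs ~96 kcal), but the dried ones are a couple of
# bites, with none of the water and volume that make the fresh ones filling.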
Vegetables are another great source of fiber, but again you need to focus on the right ones. Among non-starchy options (basically anything but root vegetables), broccoli and cauliflower are great choices, as is kale. I like to begin my meals with one of those, whenever I can. Among starchy options, sweet potatoes, carrots, and corn are great choices. Whole grains (such as whole-wheat bread, the denser the better, and brown rice) are also high in fiber, but they tend to have lots more carbs — whereas I optimized more for protein. When I’m eating at maintenance, I occasionally have some dense whole-grain breads such as a Danish pumpernickel or a German roggenbrot/vollkornbrot. They’re nothing like your typical American pumpernickel or rye, so try to find a bakery near you that offers them. Otherwise, any 100% whole-grain bread (they often have a stamp) with low sugar and a decent amount of protein & fiber is a good option. Any bread with no sugar is even better, but it’s hard to find. I’d recommend checking out local bakeries first, then the bakery within your favorite grocery store, followed by national brands such as Dave’s Killer Bread or Ezekiel from Food for Life. Legumes & seeds are a great source — I’ve saved perhaps the best for last. Beans and lentils are fiber superstars — a single serving around 100 calories can have 5–10 grams. They also offer a complete protein (all the essential amino acids) when combined with a whole-grain rice, such as brown, red, or purple. I have a serving of black beans with eggs almost every day. Fermented foods improve gut health Another great way to improve the types of gut microbiota is eating more fermented foods. These include things common in the US, like yogurt and sauerkraut, as well as cultural foods like Korean kimchi, increasingly popular drinks like kombucha, and less common drinks like kefir (basically drinkable yogurt). The benefits seem to fade away after just a few weeks though, so it’s important to maintain consumption instead of thinking you can transform your microbiota once and then be done. I’m regularly eating skyr, a thick Icelandic yogurt with as much protein as Greek yogurt but without the tangy, bitter flavor. It’s a great protein-dense option, even when you eat the version based on whole milk (which I do). I’m also occasionally putting kimchi on my eggs or drinking a small half-glass of kefir. Sauerkraut is reserved for summer barbecues, and I haven’t gotten into kombucha at this point. Moderate your alcohol intake Another thing that made a big difference was reducing the amount of alcohol I drink: from a beer every day to more like once a week. Overall, I’m feeling more energetic, and my gut’s much healthier too. Appreciate the sweeter (natural) things in life As I learned more about eating healthy, I came across increasing amounts of material about how added sugar causes major problems, contributing to obesity and diabetes. Interestingly, many parts of the world eat far less sweet food, and there tends to be a general correlation between consumption of ultraprocessed sugary food and obesity. In my own life, I’ve noticed this difference in practice when traveling to Europe and Asia, where many of the desserts are far less sweet (and the obesity rate is much lower). Two great examples are Polish cheesecake (sernik) — which is far less sweet than American cheesecake — and the frequent use of less-sweet ingredients in Asia such as red bean, sesame, or glutinous rice.
Based on this, I’ve cut down on foods with added sugar. Natural levels of sugar are generally fine, such as those in many fruits, but even then I try to bias toward less-sweet options. For example, I’ll typically have an apple or pear instead of mango. Among dried fruit, I avoid dates and figs, tending toward lower-sugar options instead. Once you start looking, it’s shocking how seemingly every processed food has added sugar. This goes all the way down to even basic staples such as bread, unless you specifically look for the rare breads without it. In America, we’ve trained ourselves from birth (with sweetened baby food) to eat sweeter and sweeter foods with more and more unnatural levels of sugar, to the point where it tastes too sweet or even sickening to people from other cultures. As a pleasant side effect of cutting down, I find myself enjoying moderately sweet foods almost in the same way that I used to think of desserts. Fruit like strawberries or mango, chia pudding or overnight oats w/ fruit and no other sweetener, frozen Greek yogurt bars, skyr with cinnamon and just a little honey, dried fruit, trail mix, or 85%+ dark chocolate now taste great. Try the “No S Diet” While on my journey, I came across a simple approach called the “No S Diet” that I quite appreciated. It boils down healthy eating into just three rules and one exception: no snacks, no sweets, no seconds, except (sometimes) on days that start with “S”. Even this alone would get you a long, long way. Combining it with a Mediterranean diet (plate ratios, whole foods, lean protein) is almost all you need. I have stopped snacking entirely, as mentioned earlier. I’m a bit more flexible on sweets, if they fit into my calories for the day, but I do try to save more of that for the weekend. For example, I might have a little 85%+ dark chocolate on a weekday after lunch, or some strawberries w/ whipped cream after dinner, but I’ll only eat a full dessert serving on the weekend. Eat in the right order Interestingly, I also learned that even the order in which you eat can make a difference. Specifically, you can flatten blood-sugar spikes by eating in a specific order: fiber, then protein, then starchy carbs. For example, start with a salad, then eat the main portion of your entree (e.g. chicken or fish), followed by the sides (rice, potatoes, or whatever). This has served me well at home, but it’s been especially helpful at restaurants. Every time I go out, I make a point of ordering either a salad or a veggie-based appetizer to enjoy before the main course. The order isn’t the only reason this helps — it also uses up a bunch of the room in my stomach on veggies instead of more calorie-dense foods, so I’m often full enough before I finish my starchy carbs. Add antioxidants Seeking out antioxidants is another great way to eat healthier. They protect your body at a subcellular level from oxidizing reactions, which can damage parts of your cells (especially the mitochondria, basically your cell’s energy factories) over time and contribute to aging. An easy way to identify foods with higher levels of antioxidants is to look for more color. Instead of the bland-looking food, pick one with a stronger color. It could be dark green, red, orange, blue, purple, or something else — just avoid white and beige options within a food family. Although there are many exceptions, this is a good guideline. Remember: eat the rainbow. Go for whole grains and prefer resistant starches Whole grains are hugely more valuable than the more processed options.
You get the germ and bran, which hold a lot of the nutrients. With your typical American white bread, those healthy bits are removed, leaving you with only the endosperm. Whole-grain bread keeps the germ and bran, which means more of the fiber and micronutrients. It also reduces the blood-sugar spike after meals, which is another great benefit. Another thing I learned is that there are different types of starches — rapidly digested, slowly digested, and resistant. Resistant starch takes longer to digest, flattening some of the glucose spikes that can create hunger cravings a couple of hours after meals. Two of the best examples are whole grains (type 2 resistant starch, or RS2) and pasta, potatoes, or rice that’s cooked and then cooled (type 3, or RS3). One way to prefer resistant starch is to aim for foods that are higher in amylose and lower in amylopectin. Amylose is a single straight-chain polymer, so it takes longer to break down and digest, whereas amylopectin is branched with many ends (so it’s faster to break down in parallel). That parallel breakdown means you get a sugar spike rather than spreading the sugar out over time. In general, this means whole grains over processed grains, and the more colorful versions of foods. Here are some examples:
Bread: whole-wheat/pumpernickel/rye > sourdough/multigrain/50% wheat > white
Rice: purple/black/red/wild > brown > long-grain white > short-grain white
Pasta: bean-based > whole-wheat > standard (durum)
Potatoes: Stokes/Okinawan (purple inside) > sweet > white
Oats: steel-cut > rolled > instant
Another way to get more resistant starch is to eat more grains that were cooked and then cooled. Pasta salad, potato salad, grain bowls, and reheated leftover rice are a few common examples. Yes, that reheated Chinese stir-fry w/ rice can be healthier than it was when you ordered it! Get nutrients from whole foods, not pills & powders A lot of people try to add missing nutrients to their diet in the form of a multivitamin or a large variety of supplements. Unfortunately, research has shown that despite containing the same chemical compounds, this is frequently not a substitute. The bioavailability (the amount that actually makes it into your bloodstream) is often much higher when you eat these micronutrients as part of whole foods, rather than taking them as a pill or powder. Protein powder is another issue. A lot of people will make protein shakes or add protein powder to foods like yogurt to get enough protein. Unfortunately, protein powders are missing a lot of the nutrients that protein-based whole foods contain. For protein shakes specifically, the point below about drinking your calories (and its poor effect on satiety) applies. If whole foods aren’t an option, I’d recommend looking into protein bars with high fiber rather than a liquid option. RXBar is my favorite protein bar because of its simple ingredient list, high protein & fiber content (12g protein, 5g fiber), and good flavor, and it’s well-priced at Costco at ~$1.25/bar vs $2 elsewhere. When I need a packable meal replacement that doesn’t require refrigeration, I’ll usually grab an RXBar, a Wholesome Medley trail mix (from Whole Foods), and an apple or pear. Avoid drinking your calories — focus on low- or no-calorie beverages Overall, drinking calories can confuse your body into consuming too many calories in a day. Your primary beverage should be water.
That should be complemented primarily by low-calorie, unsweetened options like coffee or tea (potentially with milk and minimal sugar). Drop the sugar-sweetened beverages, like soda Soda and other sugar-sweetened beverages are not recognized by the body as consumed calories. When you drink 500 calories of soda, you’re likely to increase your total daily consumption by 500 calories (gaining weight) instead of eating less food later. Not to mention, if you drink sugar-sweetened beverages frequently throughout the day, you’re also destroying your teeth and potentially giving yourself diabetes. Eat your meals instead of blending them into smoothies Smoothies destroy much of the nutritional value in whole fruit, such as fiber and other complex molecules, because the fruit is ground up into tiny bits by the blender. They also make it much easier to consume far more than you normally would. How much fruit goes into a single smoothie, compared to how many whole fruits you would eat in a single sitting? Drop the sugary alcoholic drinks Alcohol is another place to be careful. Cocktails are full of sugar from the simple syrup. Trying to save calories by getting a basic mixed drink with Diet Coke? Then you’ve got artificial sweeteners. Your best liquor-based option is probably a mix with soda water and lime — things like a vodka soda, ranch water, gin Rickey, or whiskey highball. High-alcohol beers have incredibly high calorie counts as well. There are some good options for low-calorie or non-alcoholic beer, which I covered in an earlier post. For coffee, stick to the classics in the smallest size (4–8 oz) Coffee-based drinks can be incredibly high-calorie, especially in the US. Mochas and blended/frozen drinks can be 500–1000 calories or more, for a single drink. This is especially harmful because of the American tendency to order the largest size instead of the smallest — it’s a better deal, right? A Starbucks Java Chip Frappuccino is 560 calories for a venti (large). But this pales in comparison to Caribou Coffee, which offers drinks like the Turtle Mocha at 960 cal (L) / 1140 (XL) or the Caramel Caribou Cooler at 830 cal (L) / 1050 (XL). At Dunkin’, you can get the Triple Mocha Frozen Coffee at 1100 cal (L) and the Caramel Creme Frozen Coffee at 1120 cal (L). So keep your eyes open around any specialty coffees. When drinking coffee, go for the classics. If you don’t like black coffee or espresso, then get a latte, cappuccino, flat white, cortado, or espresso macchiato. Of those, lattes have the most milk (so the most calories), while espresso macchiatos have the least. Also, order the smallest possible size — this is also the most authentic size, with a better ratio of espresso to milk. Starbucks carries a short size (8 oz) that isn’t on their printed menu, but unfortunately many other chains only offer 12 oz as their smallest size. Third-wave coffee shops often have 8 oz or smaller sizes as well, especially for classics like a cappuccino or flat white. One trick if you want a seasonal or flavored latte: most coffee shops have a “1/2 sweet” option that uses half the syrup, which is usually more than sufficient to add flavor. I’ll often order the smaller-sized cappuccino plus 1/2 the seasonal syrup instead of a latte, which gives me a similar experience in a smaller portion size and at a lower price. Non-dairy milks at coffee shops are often full of unnecessary additives and over-sweetened, so try skim milk instead of almond/coconut milk if your goal is lower calories.
Non-dairy milks are also full of empty carbs, whereas dairy milk has much more protein. For a richer drink, upgrade to whole milk and add a bit of sugar yourself if needed, instead of letting the barista pour in a huge amount of sugar-packed flavor syrup. Give tea a try Another great zero-calorie option is tea. Experiment with different teas, whether it’s black, green, white, masala chai, or an herbal non-caffeinated tea. The only calories come from any milk or sugar you add, so try appreciating the flavor of the tea alone. If you don’t like it, maybe you want to upgrade to higher-quality teas. I particularly like the herbal options from Celestial Seasonings and Twinings. Tazo, Rishi, and Stash come well-recommended as tea brands you can find in many places in the US. If you really get serious, you’ll probably upgrade to loose-leaf tea from a local shop. Overall, minimize the calories in your drinks. Water, coffee (but not mochas / frozen drinks), and tea are great options, while you should minimize smoothies, soda, and alcohol. But what do meals actually look like? That seems like a ton of restrictions and rules, right? How can you, or I, keep track of them all? Overall, it’s about eating a wide variety and healthy ratio of colorful, minimally processed whole foods, with natural flavor and sweetness, only during meals. Here are some examples for a day at 1500 calories (a 1000-calorie deficit): Breakfast I eat the same thing almost every day, aiming for a savory breakfast rather than a sweet one. The only things I change are the additions to the eggs. The veggies vary, and sometimes I substitute kimchi or sriracha for the salsa.
2 scrambled pasture-raised eggs, with 2 diced mushrooms, 1/3 diced heirloom tomato, low-sodium lentils / black beans, and sriracha
250g (~9 oz) Costco three-berry blend of blackberries/raspberries/blueberries (microwaved), combined with 90g (~3.5 oz) whole-milk skyr and 15g (~1 tbsp) chia seeds
110g (~4 oz) kefir (fermented milk)
Lunch Every day for lunch, I’ll have a side salad, a veggie plate, or an entree salad with lean protein in it. I aim for flavorful veggies that don’t require any sort of dip — try your local farmer’s market for better-tasting veggies than the grocery store carries.
Big salad with Costco power greens (kale, spinach, baby chard), dressed with 1 tsp extra-virgin olive oil, vinegar, salt, and pepper
115g (~4 oz) pulled chicken with mustard-based BBQ sauce
200g (~7 oz) Stokes/Okinawan sweet potato with 1 tsp grass-fed butter
Usually my protein is chicken, canned tuna, canned salmon, or frozen, pre-cooked shrimp. On other days, I might make a chicken-salad open-faced sandwich, or the same with tuna salad. Sometimes I’ll put smoked salmon on Wasa crackers, or I’ll add salmon or shrimp to my salad. I’ll also regularly have tacos with chicken/shrimp, corn tortillas, veggies, salsa, and skyr (instead of sour cream). Dinner On this day, I ate out at a burger restaurant. Here’s the healthier option I constructed, using Mediterranean ratios and my other guidelines:
Crispy Brussels sprouts for a starter
Bison burger (6 oz / 180g patty), no bun, on a bed of lettuce and tomato, topped with ~1 tbsp fig jam and ~15g blue cheese (I scraped off half of the blue cheese and got the fig jam on the side, so I could control the portion)
Side of steamed broccoli with butter
Maintaining weight is just as hard as losing it One of my biggest challenges has been making the transition into a sustainable diet, after depriving myself of many foods I enjoy for the past year.
In particular, it’s extremely hard to avoid eating too many desserts or snack foods with added sugar, especially when I’m toward the lower end of my target weight range. I speculate that this is partially related to “set point” theory. My body’s used to being much heavier, and it will take time for it to realize that I’m healthy at this new level, rather than trying to survive a famine where I should try hard to eat high-calorie foods whenever I come across them. Exercise also helps in maintaining weight loss (there’s a study on police officers who continued exercising post-weight-loss vs those who didn’t, and a variety of examples from the National Weight Control Registry). Fitness is a lifelong journey On the fitness side, I’ve taken an even more efficiency-optimized approach than before, with continued success. I found my energy levels getting extremely low as I approached my target weight while maintaining a large calorie deficit. This prompted me to experiment with whether I could decrease the frequency and intensity of my exercise while still getting most of the results. Dropping HIIT with no hit in results I kept my daily walks for low-intensity steady state (LISS) cardio exercise, although I’ve adapted them slightly into 3 per day — a 15-minute walk after each of my 3 meals. However, I experimented with dropping the high-intensity interval training (HIIT). Surprisingly, my VO2Max (a measure of cardiorespiratory health) continued to increase at almost the same rate as before. My plan is to watch for a plateau in VO2Max, and consider re-introducing HIIT at that point. Alternatively, if I ever get too short on time to continue with enough LISS, I could replace it entirely with my extremely low-volume HIIT program. I would like to re-add HIIT at some point, because a mixture of different intensities is overall better than just one. However, I frankly don’t enjoy HIIT, so I’m not in a big rush until I have a clear need (as mentioned above). Simplifying and reducing strength training I was also doing strength training 3x/week, with 3 paired sets per workout. I’ve replaced that with a 2x/week pattern, also dropping from 3 paired sets to only 1 — importantly, performed to near-failure. Again, I’ve seen nearly equivalent results. Upon reviewing the academic research and expert recommendations in this area, I found that many experts suggest sets 2 and 3 essentially serve as “insurance” that you’ve maximized your potential growth in strength & size during a workout. At worst, doing a single set might offer more than 50% of the total benefit of any number of sets. That means a single set — if done well — could provide a majority of the benefits in just 1/3 of the time. This fits nicely into my 80/20 philosophy. If you’d like to look into this in more detail, go on Google Scholar and look up “resistance training single-set OR one-set review OR meta-analysis.” In general, the research shows a dose-dependent response (more sets produce better results), but with diminishing returns from each additional set. You need to look carefully at the effect sizes, comparing one set against multiple sets: there will often be statistically significant differences, but what matters is how big they are, not merely whether they’re real. Overall, if you’re optimizing for efficiency of time spent working out, rather than maximizing muscle growth in a certain period of time (e.g. a year), single sets can be a great approach.
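One way to picture that dose-response relationship is a saturating curve. The following is purely an illustrative model, not something taken from a specific paper: assume each workout's benefit approaches a ceiling exponentially as set count grows, with a made-up rate constant k.

import math

# Purely illustrative diminishing-returns model: benefit(n) = 1 - exp(-k * n),
# normalized to a ceiling of 1.0. k = 0.8 is a made-up rate constant.
def benefit(sets, k=0.8):
    return 1 - math.exp(-k * sets)

for n in (1, 2, 3):
    print(f"{n} set(s): {benefit(n) / benefit(3):.0%} of the 3-set benefit")
# With k = 0.8 this prints ~61% for 1 set and ~88% for 2 sets. The exact numbers
# depend entirely on k, but the shape matches the research: each added set buys less.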
My perspective is that I’ll be doing this for the rest of my life, moving ever more slowly toward a plateau at my biological maximum strength, so I don’t really care how many years it takes. I may find that I need to increase my set count as I get more experience with strength training and my “newbie gains” gradually fade away; we’ll see how things develop over time. My current strength training continues to follow a similar routine to the one described in my last writeup. I use the 8×3 app to track my progressions & progressive overload, and I alternate between two routines, both of which are full-body workouts with compound movements:
Day 1: Vertical push/pull (+core & legs). L-sit pull-ups, dips / handstand push-ups, squats, Nordic curls.
Day 2: Horizontal push/pull (+core & legs). Horizontal rows, push-ups, squats, Nordic curls, hanging leg raises.
Each exercise is part of a progression toward more advanced, lower-leverage movements that will continue to build strength without the need for any weights. For example, I’m specifically working on pistol & shrimp squats, handstand push-up negatives, pseudo planche push-ups, L-sit pull-ups, and tucked front levers. I’ve added two more low-cost, small, and portable pieces of equipment to make this easy, bringing my total to three pieces. I’d already purchased a doorway pull-up bar ($26). Since then, I’ve added gymnastics rings ($32) hanging from the pull-up bar. Rings are extremely flexible — I use them for horizontal rows and dips, but they can be used for ab roll-outs, pull-ups (instead of the bar), and so much more. I’m also using a Nordstick ($27, or a bit more for the Pro) that slides under a closet door, because Nordic curls are tricky without some sort of specialized device. An alternative, equipment-free exercise is the reverse hyperextension, but the unweighted version will plateau pretty quickly. Overall, I’ve further reduced the time commitment of exercise without significant impact. I’ve removed HIIT, maintained LISS (daily, 15 min x 3), and reduced strength training (2x/wk, 10 min x 1), and I still see nearly equivalent outcomes. I’m not just maintaining my fitness and strength — they’re continuing to grow, even without any caloric surplus. I do expect that recomposition to plateau within a year or two on a maintenance diet. At that point, I may need to do mini bulks and cuts (gaining/losing weight in cycles to grow my muscle mass). Learn more Want to learn more? Here are some books that I’ve found helpful, roughly in order. I’ve also shared my Kindle highlights for each one, in case you want to see my perspective on the key points before reading the full book.
Ultra-Processed People (my Kindle highlights)
Metabolical: The Lure and the Lies of Processed Food, Nutrition, and Modern Medicine (my Kindle highlights)
Glucose Revolution: The Life-Changing Power of Balancing Your Blood Sugar (my Kindle highlights)
Salt Sugar Fat: How the Food Giants Hooked Us (my Kindle highlights)
Sugarless: A 7-Step Plan to Uncover Hidden Sugars, Curb Your Cravings, and Conquer Your Addiction (my Kindle highlights)
The No S Diet: The Strikingly Simple Weight-Loss Strategy That Has Dieters Raving–and Dropping Pounds (my Kindle highlights)
Ravenous: How to get ourselves and our planet into shape (my Kindle highlights)
The Way We Eat Now: How the Food Revolution Has Transformed Our Lives, Our Bodies, and Our World (my Kindle highlights)
Spoon-Fed: Why Almost Everything We’ve Been Told About Food is Wrong (my Kindle highlights)
Food for Life: The New Science of Eating Well (my Kindle highlights)
The Dorito Effect (my Kindle highlights)
The End of Craving: Recovering the Lost Wisdom of Eating Well (my Kindle highlights)
Lose It Forever: The 6 Habits of Successful Weight Losers from the National Weight Control Registry (my Kindle highlights)
  • Lennart Poettering: Announcing systemd v257 (2024/12/16 23:00)
Last week we released systemd v257 into the wild. In the weeks leading up to this release (and the week after) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd257 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 37 posts:
Post #1: Fully Locked Accounts with systemd-sysusers
Post #2: Combined Signed PCR and Locally Managed PCR Policies for Disk Encryption
Post #3: Progress Indication via Terminal ANSI Sequence
Post #4: Multi-Profile UKIs
Post #5: The New sd-varlink & sd-json APIs in libsystemd
Post #6: Querying for Passwords in User Scope
Post #7: Secure Attention Key Logic in systemd-logind
Post #8: systemd-nspawn --bind-user= Now Copies User's SSH Key
Post #9: The New DeferReactivation= Switch in .timer Units
Post #10: Support for the New IPE LSM
Post #11: Environment Variables for Shell Prompt Prefix/Suffix
Post #12: sysctl Conflict Detection via eBPF
Post #13: initrd and µcode UKI Add-Ons
Post #14: SecureBoot Signing with the New systemd-sbsign Tool
Post #15: Managed Access to hidraw devices in systemd-logind
Post #16: Fuzzy Filtering in userdbctl
Post #17: MAC Address Based Alternative Network Interface Names
Post #18: Conditional Copying/Symlinking in tmpfiles.d/
Post #19: Automatic Service Restarts in Debug Mode
Post #20: Filtering by Invocation ID in journalctl
Post #21: Supplement Partitions in repart.d/
Post #22: DeviceTree Matching in UKIs
Post #23: The New ssh-exec: Protocol in varlinkctl
Post #24: SecureBoot Key Enrollment Preparation with bootctl
Post #25: Automatically Installing confext/sysext/portable/VMs/container Images at Boot
Post #26: Designated Maintenance Time in systemd-logind
Post #27: PID Namespacing in Service Management
Post #28: Marking Experimental OS Releases in /etc/os-release
Post #29: Decoding Capability Masks with systemd-analyze
Post #30: Investigating Passed SMBIOS Type #11 Data
Post #31: Initializing Partitions from Character Devices in repart.d/
Post #32: Entering Namespaces to Generate Stacktraces
Post #33: ID Mapped Mounts for Per-Service Directories
Post #34: A Daemon for systemd-sysupdate
Post #35: User Record Modifications without Administrator Consent in systemd-homed
Post #36: DNR DHCP Support
Post #37: Name Based AF_VSOCK ssh Access
I intend to do a similar series of serieses of posts for the next systemd release (v258), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.
  • Simon Ser: Status update, December 2024 (2024/12/14 22:00)
Hi! For once let’s open things up with the NPotM. I’ve started working on sajin, an Android app which synchronizes camera pictures in the background. I’ve grown tired of manually copying files around, and I don’t want to use proprietary services to back up my pictures, so I’ve been meaning to write a tiny app to upload pictures to my server. It’s super simple: enter the WebDAV server URL and credentials, then just forget about the app. It plays well with sogogi (my WebDAV file server) and Photoview (a Web picture gallery). I’d like to implement feedback on synchronization status and manual synchronization of older pictures. I really need to find an icon for it too. Once again, this month I’ve spent a fair bit of time on Sway and wlroots bug fixes, in particular wlroots DRM backend issues affecting old GPUs (those not supporting the atomic KMS API) and multi-GPU setups (I’ve had to bite the bullet and bring my super shaky setup out of the closet). wlroots 0.18.2 has been released; among other things it also fixes some X11 drag-and-drop bugs (thanks Consolatis!). In IRC land, delthas has added soju support for the metadata extension, enabling clients to mark conversations as pinned or muted. Once senpai and Goguma add support for this extension, they will be able to synchronize this bit of state. In other words, marking a conversation as pinned on a mobile phone will also affect all other connected clients. Thanks to John Regan, PostgreSQL message queries have been optimized by several orders of magnitude: on large message stores, they now take a few milliseconds instead of multiple seconds. I’ve turned on WAL mode for SQLite, which should help with message insertion performance. I’ve worked on making Goguma play better with direct connections to old IRC servers such as Libera Chat and OFTC. These servers support only a few IRCv3 extensions, and they aggressively rate-limit TCP connections and commands (including the CAP REQ commands sent to initialize the connection). Goguma should now reconnect less often on first setup and should connect more quickly (by reducing the number of CAP REQ commands). Last, I’ve added proper support for GitLab Pages to dalligi, a small bridge to use builds.sr.ht as a GitLab CI runner. GitLab Pages requires defining a special job with the exact name “pages”, which is cumbersome with builds.sr.ht. dalligi can now copy over artifacts of a previous job to this special “pages” job. I hope this can be used to automatically publish the wlroots docs. See you next year!
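As an editor's aside: the upload half of such an app boils down to one authenticated WebDAV PUT per file, which is easy to script if you want the same workflow without an app. A minimal sketch in Python follows; the server URL, credentials, and directory are hypothetical placeholders, and it assumes the third-party requests library:

import os
import requests  # third-party HTTP client: pip install requests

# Hypothetical WebDAV endpoint and credentials; replace with your own.
BASE_URL = "https://dav.example.org/photos"
AUTH = ("alice", "secret")

def upload(path):
    """PUT a single file to the WebDAV server under its base name."""
    url = f"{BASE_URL}/{os.path.basename(path)}"
    with open(path, "rb") as f:
        resp = requests.put(url, data=f, auth=AUTH, timeout=30)
    resp.raise_for_status()  # WebDAV returns 201 (created) or 204 (overwritten)

# Upload every picture from a local directory (the path is also a placeholder).
for name in os.listdir("DCIM"):
    if name.lower().endswith((".jpg", ".jpeg", ".png")):
        upload(os.path.join("DCIM", name))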
  • Hans de Goede: IPU6 camera support is broken in kernel 6.11.11 / 6.12.2-6.12.4 (2024/12/12 13:52)
Unfortunately an incomplete backport of IPU6 DMA handling changes has landed in kernel 6.11.11. This not only causes IPU6 cameras to not work, it causes the kernel to (often?) crash on boot on systems where the IPU6 is in use and thus enabled by the BIOS. Kernels 6.12.2-6.12.4 are also affected by this. A fix for this is pending for the upcoming 6.12.5 release. 6.11.11 is the last stable release in the 6.11.y series, so there will be no new stable 6.11.y release with a fix. As a workaround, users affected by this can stay with 6.11.10 or 6.12.1 until 6.12.5 is available in their distribution's updates(-testing) repository.
  • Alyssa Rosenzweig: Vulkan 1.4 sur Asahi Linux (2024/12/02 05:00)
Today, the Khronos Group released the 1.4 specification of Vulkan, the standard graphics API. The Asahi Linux project is proud to announce the first Vulkan 1.4 driver for Apple hardware. Our Honeykrisp driver is Khronos-recognized as conformant to the new version since day one. That driver is already available in our official repositories. After installing Fedora Asahi Remix, run dnf upgrade --refresh to get the latest drivers. Vulkan 1.4 standardizes several important features, including timestamps and dynamic rendering local read. The industry expects that these features will become more common, and we are prepared. Releasing a conformant driver reflects our commitment to graphics standards and software freedom. Asahi Linux is also compatible with OpenGL 4.6, OpenGL ES 3.2, and OpenCL 3.0, all conformant to the relevant specifications. For that matter, ours are the only conformant drivers on Apple hardware for any graphics standard. Although the driver is released, you still need to build an experimental version of Vulkan-Loader to access the new Vulkan version. Nevertheless, you can immediately use all the new features as extensions in our Vulkan 1.3 driver. For more information, see the Khronos blog post.
  • Simon Ser: Status update, November 2024 (2024/11/21 22:00)
    Hi all! This month I’ve spent a lot of time triaging Sway and wlroots issues following the Sway 1.10 release. There are a few regressions, some of which are already fixed (thanks to all contributors for sending patches!). Kenny has added support for software-only secondary KMS devices such as GUD and DisplayLink. David Turner from Raspberry Pi has contributed crop and scale support for output buffers, that way video players are more likely to hit direct scan-out. I’ve added support for explicit sync in the Wayland backend for nested compositors. I’ve worked a bit on the Goguma mobile IRC client. The auto-complete dropdown now shows user display names, channel topics and command descriptions. Additionally, commands which don’t make sense given the current context are hidden (for instance, /part is not displayed in a conversation with a single user). The gamja Web IRC client should now reconnect more quickly after regaining connectivity. For instance, after resume from suspend, gamja now reconnects immediately instead of waiting 10 seconds. Thanks to Matteo, soju-containers now ships arm64 images. The NPotM is sogogi, a simple WebDAV file server. It’s quite minimal for now: a list of directories to serve is defined in the configuration file, as well as users and access lists. In the future, I’d like to add external authentication (e.g. via PAM or via another HTTP server), HTML directory listings and configuration file reload. That’s all for now! Once again, that’s a pretty short status update. A lot of my time goes into more boring maintenance tasks and reviews. See you next month!
  • Melissa Wen: Display/KMS Meeting at XDC 2024: Detailed Report (2024/11/19 13:00)
XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse myself in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers. Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition. Short on time? Catch the lightning talk summarizing the meeting here (you can even speed it up 2x): For a quick written summary, scroll down to the TL;DR section. TL;DR This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (Linux/DRM Kernel Modesetting): Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines. Real-Time Scheduling: Addressing issues with non-blocking page flips encountering sigkills under real-time scheduling. HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland’s (AMD) changes. Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations. Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis. Bringing together Linux display developers in the XDC 2024 While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants. Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD) Link: https://indico.freedesktop.org/event/6/contributions/383/ Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed talk follow-ups with new ideas and ongoing community efforts. The final agenda covered five topics in the scheduled order: How to share drivers between V4L2 and DRM for bridge-like components (new topic); Real-time Scheduling (problems encountered after the Display Next hackfest); HDR/Color Management (ofc); Display Mux (from the Display hackfest and an XDC 2024 talk, bringing AMD and NVIDIA together); (Better) Commit Failure Feedback (continuing the last-minute topic of the Display Next hackfest). Unpacking the Topics During the 3 hours of the meeting, I coordinated the room and discussion rounds, and Rodrigo Siqueira took notes and also contacted key developers so we could provide a detailed report of the many topics discussed. From his notes, let’s dive into the key discussions! How to share drivers between V4L2 and KMS for bridge-like components. Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines. Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both the V4L2 and DRM subsystems? Potential Solutions: Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline.
However, this approach might raise concerns from device tree maintainers, as it could be seen as a layer violation. Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device. Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware. Real-Time Scheduling Challenges We discussed real-time scheduling during this year’s Linux Display Next hackfest and, during XDC 2024, Jonas Adahl brought up issues uncovered while progressing on this front. Context: Non-blocking page flips can, on rare occasions, take a long time and, for that reason, get a sigkill if the thread doing the atomic commit is scheduled as real-time. Action items: Explore alternative backtraces during the busy wait (e.g., ftrace). Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup). HDR/Color Management This is a well-known topic with ongoing effort across all layers of the Linux Display stack, and it has been discussed online and in person in conferences and meetings over the last years. Here’s a breakdown of the key points raised at this meeting: Talk: Color operations for Linux color pipeline on AMD devices: On the previous day, Alex Hung (AMD) presented the implementation of this API in the AMD display driver. NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland’s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA’s specific needs. Other vendors: There is a version of the KMS API applied to the Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lack the bandwidth to implement it right now. Upstream Patches: The relevant upstream patches can be found here. [As was humorously noted, this series is eagerly awaiting your “Acked-by” (approval).] Compositor Side: The compositor developers have also made significant progress. KDE has already implemented and validated the API through an experimental implementation in KWin. Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. AP: work on porting gamescope to the generic KMS API. Weston has also begun exploring implementation, and we might see something from them by the end of the year. Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland’s effort, we already have the API attached to two hardware vendors and IGT tests, and, thanks to Xaver Hugl, a compositor implementation in place. Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged.
In simpler terms, everything seems to be working well on the technical side: all signs point to merging and “shipping” the DRM/KMS plane color management API! Display Mux During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver. Context: During this year’s Linux Display Next hackfest, Mario Limonciello (AMD) introduced the topic and led a discussion on Display Mux. Daniel Dadap (NVIDIA) picked this discussion back up with the XDC 2024 talk: Dynamic Switching of Display Muxes on Hybrid GPU Systems. Key Considerations: HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don’t need to focus on it here. Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the “drm-master” to coordinate the entire sequence, but how can this be ensured? Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible. Action points: Sharing DP AUX: Explore the idea of sharing DP AUX and its implications. Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that. Towards Better Commit Failure Feedback In the last part of the meeting, Xaver Hugl asked for better commit failure feedback. Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures. To address this issue, we discussed several potential improvements: Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging. Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action. Enhanced Logging: Currently, the dmesg log doesn’t provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution. By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system. A Big Thank You! Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Thanks also to Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li. This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack. Stay tuned for future updates!
  • Peter Hutterer: hidreport and hut: two crates for handling HID Report Descriptors and HID Reports (2024/11/19 01:54)
A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two. Both crates were intentionally written with minimal dependencies, they currently only depend on thiserror and arguably even that dependency can be removed. HID Usage Tables (HUT) As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:

let gd_x = GenericDesktop::X;
let usage_page = gd_x.usage_page();
assert!(matches!(usage_page, UsagePage::GenericDesktop));

Or the more likely need: convert from a numeric page/id tuple to a named usage.

let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
println!("Usage is {}", usage.name());

90% of this crate are the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values but the actual functionality is relatively simple. hidreport - Report Descriptor parsing The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents. Or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So let's assume we have a bunch of bytes that are a HID report descriptor read from the device (or sysfs); we can do this:

let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();

I'm not going to copy/paste the code to run through this report descriptor but suffice to say it will give us access to the input, output and feature reports on the device together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):

let input_report_bytes = read_from_device();
let report = rdesc.find_input_report(&input_report_bytes).unwrap();
let field = report.fields().first().unwrap();
match field {
    Field::Variable(var) => {
        let val: u32 = var.extract(&input_report_bytes).unwrap().into();
        println!("Field {:?} is of value {}", field, val);
    },
    _ => {}
}

The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present. hid-recorder The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already. Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools and now we have the third version written in Rust.
Which has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output of that is below and it shows that you can get all the information out of the device via the hidreport and hut crates.

$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01, // Usage Page (Generic Desktop) 0
# 0x09, 0x02, // Usage (Mouse) 2
# 0xa1, 0x01, // Collection (Application) 4
# 0x05, 0x01, // Usage Page (Generic Desktop) 6
# 0x09, 0x02, // Usage (Mouse) 8
# 0xa1, 0x02, // Collection (Logical) 10
# 0x85, 0x1a, // Report ID (26) 12
# 0x09, 0x01, // Usage (Pointer) 14
# 0xa1, 0x00, // Collection (Physical) 16
# 0x05, 0x09, // Usage Page (Button) 18
# 0x19, 0x01, // UsageMinimum (1) 20
# 0x29, 0x05, // UsageMaximum (5) 22
# 0x95, 0x05, // Report Count (5) 24
# 0x75, 0x01, // Report Size (1) 26
... omitted for brevity
# 0x75, 0x01, // Report Size (1) 213
# 0xb1, 0x02, // Feature (Data,Var,Abs) 215
# 0x75, 0x03, // Report Size (3) 217
# 0xb1, 0x01, // Feature (Cnst,Arr,Abs) 219
# 0xc0, // End Collection 221
# 0xc0, // End Collection 222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
# Report size: 80 bits
# | Bit: 8 | Usage: 0009/0001: Button / Button 1 | Logical Range: 0..=1 |
# | Bit: 9 | Usage: 0009/0002: Button / Button 2 | Logical Range: 0..=1 |
# | Bit: 10 | Usage: 0009/0003: Button / Button 3 | Logical Range: 0..=1 |
# | Bit: 11 | Usage: 0009/0004: Button / Button 4 | Logical Range: 0..=1 |
# | Bit: 12 | Usage: 0009/0005: Button / Button 5 | Logical Range: 0..=1 |
# | Bits: 13..=15 | ######### Padding |
# | Bits: 16..=31 | Usage: 0001/0030: Generic Desktop / X | Logical Range: -32767..=32767 |
# | Bits: 32..=47 | Usage: 0001/0031: Generic Desktop / Y | Logical Range: -32767..=32767 |
# | Bits: 48..=63 | Usage: 0001/0038: Generic Desktop / Wheel | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# | Bits: 64..=79 | Usage: 000c/0238: Consumer / AC Pan | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# ------- Input Report -------
# Report ID: 31
# Report size: 24 bits
# | Bits: 8..=23 | Usage: 000c/0238: Consumer / AC Pan | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# ------- Feature Report -------
# Report ID: 18
# Report size: 16 bits
# | Bits: 8..=9 | Usage: 0001/0048: Generic Desktop / Resolution Multiplier | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 10..=11 | Usage: 0001/0048: Generic Desktop / Resolution Multiplier | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 12..=15 | ######### Padding |
# ------- Feature Report -------
# Report ID: 23
# Report size: 16 bits
# | Bits: 8..=9 | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 10..=11 | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bit: 12 | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range: 0..=1 | Physical Range: 0..=0 |
# | Bits: 13..=15 | ######### Padding |
##############################################################################
# Recorded events below in format:
# E: <seconds>.<microseconds> [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
# Button 1: 0 | Button 2: 0 | Button 3: 0 | Button 4: 0 | Button 5: 0 | X: 5 | Y: 0 |
# Wheel: 0 |
# AC Pan: 0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
  • Ricardo Garcia: My XDC 2024 talk about VK_EXT_device_generated_commands (2024/11/18 15:55)
Some days ago I wrote about the new VK_EXT_device_generated_commands Vulkan extension that had just been made public. Soon after that, I presented a talk at XDC 2024 with a brief introduction to it. It’s a lightning talk that lasts just about 7 minutes and you can find the embedded video below, as well as the slides and the talk transcription if you prefer written formats. Truth be told, the topic deserves a longer presentation, for sure. However, when I submitted my talk proposal for XDC I wasn’t sure if the extension was going to be public by the time XDC would take place. This meant I had two options: if I submitted a half-slot talk and the extension was not public, I would need to talk for 15 minutes about some general concepts and a couple of NVIDIA vendor-specific extensions: VK_NV_device_generated_commands and VK_NV_device_generated_commands_compute. That would be awkward, so I went with a lightning talk where I could cover those general concepts and, maybe, some VK_EXT_device_generated_commands specifics if the extension was public, which is exactly what happened. Fortunately, I will talk again about the extension at Vulkanised 2025. It will be a longer talk and I will cover the topic in more depth. See you in Cambridge in February and, for those not attending, stay tuned because Vulkanised talks are recorded and later uploaded to YouTube. I’ll post the link here and on social media once it’s available. XDC 2024 recording Talk slides and transcription Hello, I’m Ricardo from Igalia and I’m going to talk about Device-Generated Commands in Vulkan. This is a new extension that was released a couple of weeks ago. I wrote CTS tests for it, I helped with the spec and I worked with some actual heroes, some of them present in this room, who managed to get this implemented in a driver. Device-Generated Commands is an extension that allows apps to go one step further in GPU-driven rendering because it makes it possible to write commands to a storage buffer from the GPU and later execute the contents of the buffer without needing to go through the CPU to record those commands, like you typically do by calling vkCmd functions working with regular command buffers. It’s one step ahead of indirect draws and dispatches, and one step behind work graphs. Getting away from Vulkan momentarily, if you want to store commands in a storage buffer there are many possible ways to do it. A naïve approach we can think of is creating the buffer as you see in the slide. We assign a number to each Vulkan command and store it in the buffer. Then, depending on the command, more or less data follows. For example, let’s take the sequence of commands in the slide: (1) push constants followed by (2) dispatch. We can store a token number or command id or whatever you want to call it to indicate push constants, then we follow with metadata about the command (which is the section in green) containing the layout, stage flags, offset and size of the push constants. Finally, depending on the size, we store the push constant values, which is the first chunk of data in blue. For the dispatch it’s similar, only that it doesn’t need metadata because we only want the dispatch dimensions. But this is not how GPUs work. A GPU would have a very hard time processing this. Also, Vulkan doesn’t work like this either. We want to make it possible to process things in parallel and provide as much information in advance as possible to the driver. So in Vulkan things are different.
The buffer will not contain an arbitrary sequence of commands where you don’t know which one comes next. What we do is to create an Indirect Commands Layout. This is the main concept. The layout is like a template for a short sequence of commands. We create this layout using the tokens and metadata that we saw colored red and green in the previous slide. We specify the layout we will use in advance and, in the buffer, we only store the actual data for each command. The result is that the buffer containing commands (let’s call it the DGC buffer) is divided into small chunks, called sequences in the spec, and the buffer can contain many such sequences, but all of them follow the layout we specified in advance. In the example, we have push constant values of a known size followed by the dispatch dimensions. Push constant values, dispatch. Push constant values, dispatch. Etc. The second thing Vulkan does is to severely limit the selection of available commands. You can’t just start render passes or bind descriptor sets or do anything you can do in a regular command buffer. You can only do a few things, and they’re all in this slide. There’s general stuff like push constants, stuff related to graphics like draw commands and binding vertex and index buffers, and stuff to dispatch compute or ray tracing work. That’s it. Moreover, each layout must have one token that dispatches work (draw, compute, trace rays) but you can only have one and it must be the last one in the layout. Something that’s optional (not every implementation is going to support this) is being able to switch pipelines or shaders on the fly for each sequence. Summing up, in implementations that allow you to do it, you have to create something new called Indirect Execution Sets, which are groups or arrays of pipelines that are more or less identical in state and, basically, only differ in the shaders they include. Inside each set, each pipeline gets an index and you can change the pipeline used for each sequence by (1) specifying the Execution Set in advance, (2) using an execution set token in the layout, and (3) storing a pipeline index in the DGC buffer as the token data. The summary of how to use it would be: First, create the commands layout and, optionally, create the indirect execution set if you’ll switch pipelines and the driver supports that. Then, get a rough idea of the maximum number of sequences that you’ll run in a single batch. With that, create the DGC buffer, query the required preprocess buffer size, which is an auxiliary buffer used by some implementations, and allocate both. Then, you record the regular command buffer normally and specify the state you’ll use for DGC. This also includes some commands that dispatch work that fills the DGC buffer somehow. Finally, you dispatch indirect work by calling vkCmdExecuteGeneratedCommandsEXT. Note you need a barrier to synchronize previous writes to the DGC buffer with reads from it. You can also do explicit preprocessing but I won’t go into detail here. That’s it. Thanks for watching, thanks Valve for funding a big chunk of the work involved in shipping this, and thanks to everyone who contributed!
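To tie the talk's summary back to actual API names, here is a rough C sketch of the push-constants-plus-dispatch layout described above. It follows my reading of the published VK_EXT_device_generated_commands spec, but the exact struct and field names here are unverified against headers, so check the Vulkan registry before relying on them:

/* A layout template: 16 bytes of push constant values followed by
 * dispatch dimensions, repeated once per sequence in the DGC buffer.
 * pipeline_layout and device are assumed to exist already. */
VkPushConstantRange pc_range = {
    .stageFlags = VK_SHADER_STAGE_COMPUTE_BIT, .offset = 0, .size = 16,
};
VkIndirectCommandsPushConstantTokenEXT pc_token = { .updateRange = pc_range };

VkIndirectCommandsLayoutTokenEXT tokens[2] = {
    {
        .sType = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_TOKEN_EXT,
        .type = VK_INDIRECT_COMMANDS_TOKEN_TYPE_PUSH_CONSTANT_EXT,
        .data.pPushConstant = &pc_token,
        .offset = 0,  /* push constant values at the start of each sequence */
    },
    {
        .sType = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_TOKEN_EXT,
        .type = VK_INDIRECT_COMMANDS_TOKEN_TYPE_DISPATCH_EXT,
        .offset = 16, /* dispatch dimensions follow them */
    },
};

VkIndirectCommandsLayoutCreateInfoEXT layout_info = {
    .sType = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_CREATE_INFO_EXT,
    .shaderStages = VK_SHADER_STAGE_COMPUTE_BIT,
    .indirectStride = 16 + sizeof(VkDispatchIndirectCommand), /* one sequence */
    .pipelineLayout = pipeline_layout,
    .tokenCount = 2,
    .pTokens = tokens,
};
VkIndirectCommandsLayoutEXT dgc_layout;
vkCreateIndirectCommandsLayoutEXT(device, &layout_info, NULL, &dgc_layout);

/* After a shader fills the DGC buffer and a barrier makes those writes
 * visible, execution is a single call on the recorded command buffer:
 * vkCmdExecuteGeneratedCommandsEXT(cmd, VK_FALSE, &generated_info); */

The fixed indirectStride is what tells the driver where each sequence starts, which is exactly the property that lets implementations process sequences in parallel.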
  • Tomeu Vizoso: Etnaviv NPU update 21: Support for the NPU in the NXP i.MX 8M Plus SoC is upstream! (2024/11/16 09:27)
Several months have passed since the last update. This has been in part due to the summer holidays and a gig doing some non-upstream work, but I have also had the opportunity to continue my work on the NPU driver for the VeriSilicon NPU in the NXP i.MX 8M Plus SoC, thanks to my friends at Ideas on Board. [Photo: CC BY-NC 4.0 Henrik Boye] I'm very happy with what has been accomplished so far, with the first concrete result being the merge in Mesa of the support for NXP's SoC. Thanks to Philipp Zabel and Christian Gmeiner for helping with their ideas and code reviews. With this, as of yesterday, one can accelerate models such as SSDLite MobileDet on that SoC with only open source software, with the support being provided directly from projects that are already ubiquitous in today's products, such as the Linux kernel and Mesa3D. We can expect this functionality to reach distributions such as Debian in due time, for seamless installation and integration in products. With this milestone reached, I will be working on expanding support for more models, with a first goal of enabling YOLO-like models, starting with YOLOX. I will be working as well on performance, as currently we are not fully using the capabilities of this hardware.
  • Christian Gmeiner: CI-Tron: A Long Road to a Better Board Farm (2024/10/30 00:00)
    I’m a big supporter of finding problems before they get into the code base. The earlier you catch issues, the easier they are to fix. One of the main tools that helps with this is a Continuous Integration (CI) farm. A CI farm allows you to run extensive tests like deqp or piglit on a merge request or even on a private git branch before any code is merged, which significantly helps catch problems early.
  • Maira Canal: Unleashing Power: Enabling Super Pages on the RPi (2024/10/28 12:00)
    Unleashing the power of 3D graphics in the Raspberry Pi is a key commitment for Igalia through its collaboration with Raspberry Pi. The introduction of Super Pages for the Raspberry Pi 4 and 5 marks another step in this journey, offering some performance enhancements and more efficient memory usage. In this post, we’ll dive deep into the technical details of Super Pages, discuss the challenges we faced during implementation, and illustrate the benefits this feature brings to the Raspberry Pi ecosystem. What are Super Pages? A Memory Management Unit (MMU) is a hardware component responsible for handling memory access at the system level. It translates virtual addresses used by programs into physical addresses in main memory, enabling efficient memory management and protection. The MMU allows the operating system to allocate memory dynamically, isolating processes from one another to prevent them from interfering with each other’s memory. Recommendation: 📚 Structured computer organization by Andrew Tanenbaum The V3D MMU, which is part of the Broadcom GPU found in the Raspberry Pi 4 and 5, is responsible for translating 32-bit virtual addresses (VA) used by V3D into 40-bit physical addresses used externally to V3D. The MMU relies on a page table, stored in physical memory, which maps virtual addresses to their corresponding physical addresses. The operating system manages this page table, and the MMU uses it to perform address translation during memory access. A fundamental principle of modern operating systems is that memory is not stored contiguously. Instead, a contiguous block of memory is divided into smaller blocks, called “pages”, which are scattered across the entire address space. These pages are typically 4KB in size. This approach enables more efficient memory management and allows for features like virtual memory and memory protection. Over the years, the amount of available memory in computers has increased dramatically. An early IBM PC had up to 640 KiB of RAM, whereas the ThinkPad I’m typing on right now has 32 GB of RAM. Naturally, memory demands have grown alongside this increase. Today, it’s common for web browsers to consume several gigabytes of RAM, and a single shader can take up multiple megabytes. As memory usage grows, a 4KB page size may become inefficient for managing large memory blocks. Handling a large number of small pages for a single block means the MMU must perform multiple address translations, which increases overhead. This can reduce the effectiveness of the Translation Lookaside Buffer (TLB), as it must store and handle more entries, potentially leading to more cache misses and reduced overall performance. This is why many CPU manufacturers have introduced support for larger page sizes. For instance, x86 CPUs typically support 4KB and 2MB pages, with 1GB pages available if supported by the hardware. Similarly, ARM64 CPUs can support 4KB, 16KB, and 64KB page sizes. These larger page sizes help reduce the number of pages the MMU needs to manage, improving performance by reducing the overhead of address translation and making more efficient use of the TLB. So, if CPUs are using bigger sizes, why shouldn’t GPUs do the same? By default, V3D supports 4KB pages. However, by setting specific bits in the page table entry, it is possible to create 64KB “Big Pages” and 1MB “Super Pages.” The issue is that the current V3D driver available in Linux does not enable the use of Big or Super Pages, meaning this hardware feature is currently unused. 
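To make "setting specific bits in the page table entry" concrete, here is an illustrative C sketch of how such flags could be applied when building a PTE. The bit positions and names below are assumptions for illustration, not values copied from the V3D driver:

#include <stdbool.h>
#include <stdint.h>

/* Assumed layout: low bits hold the page frame number, high bits hold
 * flags. The real V3D bit positions may differ. */
#define V3D_PTE_VALID     (1u << 28) /* assumed */
#define V3D_PTE_BIGPAGE   (1u << 30) /* assumed: entry covers 64KB */
#define V3D_PTE_SUPERPAGE (1u << 31) /* assumed: entry covers 1MB */

static uint32_t make_pte(uint32_t pfn, bool big, bool super)
{
    uint32_t pte = V3D_PTE_VALID | pfn;

    if (super)
        pte |= V3D_PTE_SUPERPAGE; /* one cached MMU entry maps 1MB */
    else if (big)
        pte |= V3D_PTE_BIGPAGE;   /* one cached MMU entry maps 64KB */
    return pte;
}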
The advantage of enabling Big and Super Pages is that once an entry for any page within a Big or Super Page is cached in the MMU, it can be used to translate all virtual addresses within that page’s range without needing to fetch additional entries. In theory, this should result in improved performance, especially for applications with high memory demands, such as those using multiple large buffer objects (BOs). As Igalia continually strives to enhance the experience for Raspberry Pi users, we decided to implement this feature in the upstream kernel. But before diving into the implementation details, let’s take a look at the real-world results and see if the theoretical benefits of Super Pages have translated into measurable improvements for Raspberry Pi users. What Does This Feature Mean for RPi Users? With Super Pages implemented, let’s now explore the actual performance improvements observed on the Raspberry Pi and see how impactful this feature is for users. Benchmarking Super Pages: Traces and FPS Improvements To measure the impact of Super Pages, we tested a variety of game and demo traces on the Raspberry Pi 4 and 5, covering genres from action to racing. On average, we observed a +1.40% FPS improvement on the Raspberry Pi 4 and a +1.30% improvement on the Raspberry Pi 5. For instance, on the Raspberry Pi 4, Warzone 2100 saw an 8.36% FPS increase, and on the Raspberry Pi 5, Quake II enjoyed a 3.62% boost. These examples demonstrate the benefits of Super Pages in resource-demanding applications, where optimized memory handling becomes critical.

Raspberry Pi 4 FPS Improvements

Trace | Before Super Pages | After Super Pages | Improvement
warzone2100.30secs.1024x768.trace | 56.39 | 61.10 | +8.36%
ue4_shooter_game_shooting_low_quality_640x480.gfxr | 20.71 | 21.47 | +3.65%
quake3e_capture_frames_1800_through_2400_1920x1080.gfxr | 60.88 | 62.50 | +2.67%
supertuxkart-menus_1024x768.trace | 112.62 | 115.61 | +2.65%
ue4_shooter_game_shooting_high_quality_640x480.gfxr | 20.45 | 20.88 | +2.10%
quake2-gles3-1280x720.trace | 59.76 | 60.84 | +1.82%
ue4_sun_temple_640x480.gfxr | 27.60 | 28.03 | +1.54%
vkQuake_capture_frames_1_through_1200_1280x720.gfxr | 54.59 | 55.30 | +1.29%
ue4_shooter_game_low_quality_640x480.gfxr | 32.75 | 33.08 | +1.00%
sponza_demo02_800x600.gfxr | 20.90 | 21.03 | +0.61%
supertuxkart-racing_1024x768.trace | 8.58 | 8.63 | +0.60%
ue4_shooter_game_high_quality_640x480.gfxr | 19.62 | 19.74 | +0.59%
serious_sam_trace02_1280x720.gfxr | 44.00 | 44.21 | +0.50%
ue4_vehicle_game-2_640x480.gfxr | 12.59 | 12.65 | +0.49%
sponza_demo01_800x600.gfxr | 21.42 | 21.46 | +0.19%
quake3e-1280x720.trace | 84.45 | 84.52 | +0.09%

Raspberry Pi 5 FPS Improvements

Trace | Before Super Pages | After Super Pages | Improvement
quake2-gles3-1280x720.trace | 151.77 | 157.26 | +3.62%
supertuxkart-menus_1024x768.trace | 306.79 | 313.88 | +2.31%
warzone2100.30secs.1024x768.trace | 140.92 | 144.03 | +2.21%
vkQuake_capture_frames_1_through_1200_1280x720.gfxr | 131.45 | 134.20 | +2.10%
ue4_vehicle_game-2_640x480.gfxr | 24.42 | 24.88 | +1.89%
ue4_shooter_game_high_quality_640x480.gfxr | 32.12 | 32.53 | +1.29%
ue4_sun_temple_640x480.gfxr | 42.05 | 42.55 | +1.20%
ue4_shooter_game_shooting_high_quality_640x480.gfxr | 52.77 | 53.31 | +1.04%
quake3e-1280x720.trace | 238.31 | 240.53 | +0.93%
warzone2100.70secs.1024x768.trace | 151.09 | 151.81 | +0.48%
sponza_demo02_800x600.gfxr | 50.81 | 51.05 | +0.46%
supertuxkart-racing_1024x768.trace | 20.91 | 20.98 | +0.33%
ue4_shooter_game_low_quality_640x480.gfxr | 59.68 | 59.86 | +0.29%
quake3e_capture_frames_1_through_1800_1920x1080.gfxr | 167.70 | 168.17 | +0.29%
ue4_shooter_game_shooting_low_quality_640x480.gfxr | 53.40 | 53.51 | +0.22%
quake3e_capture_frames_1800_through_2400_1920x1080.gfxr | 163.37 | 163.64 | +0.17%
serious_sam_trace02_1280x720.gfxr | 60.00 | 60.03 | +0.06%
sponza_demo01_800x600.gfxr | 45.04 | 45.04 | <.01%

While an average +1% FPS improvement might seem modest, Super Pages can deliver more noticeable gains in memory-intensive 3D applications and when the GPU is under heavy usage. Let’s see how the Super Pages perform on Mesa CI. Benchmarking Super Pages: Mesa CI Job Duration To avoid introducing regressions in user-space, I usually test my custom kernels with Mesa CI, focusing on the “broadcom-postmerge” stage to verify that all Piglit and CTS tests ran smoothly. For Super Pages, I was pleasantly surprised by the job duration results, as some job durations were reduced by several minutes.

Mesa CI Jobs Duration Improvements

Job | Before Super Pages | After Super Pages
v3d-rpi4-traces:arm64 | ~4m30s | ~3m40s
v3d-rpi5-traces:arm64 | ~3m30s | ~2m45s
v3d-rpi4-gl-full:arm64 */6 | ~24-25 minutes | ~22-23 minutes
v3d-rpi5-gl-full:arm64 | ~48 minutes | ~48 minutes
v3dv-rpi4-vk-full:arm64 */6 | ~44 minutes | ~41 minutes
v3dv-rpi5-vk-full:arm64 | ~102 minutes | ~92 minutes

Seeing these reductions is especially rewarding. For example, the “v3dv-rpi5-vk-full:arm64” job duration decreased by 10 minutes, meaning more FPS for users and shorter wait times for Mesa developers. Benchmarking Super Pages: PS2 Emulation After sharing a couple of tables, I’ll admit that showcasing performance improvements solely through numbers doesn’t always convey the real impact. Personally, I find it more satisfying to see performance gains in action with real-world applications. This led me to explore PlayStation 2 (PS2) emulation on the RPi 5. From watching YouTube videos, I noticed that the PS2 is a popular console for the RPi 5. While the PlayStation (PS1) emulates well even on the RPi 4, and the Nintendo 64 and Sega Saturn struggle across most hardware, the PS2 hits a sweet spot for testing the RPi 5’s limits. Fortunately, I still have my childhood PS2 — my second console after the Nintendo GameCube, and one of the most successful consoles worldwide, including in Brazil. With a library packed with titles like Metal Gear Solid, Resident Evil, Tomb Raider, and Shadow of the Colossus, the PS2 remains a great system for collectors and retro gamers alike. I selected a few games from my collection to benchmark on the RPi 5 using a PS2 emulator. My emulator of choice was AetherSX2 with Vulkan support. Although AetherSX2 is no longer in development, it still performs well on the RPi. Initially, many games were barely playable, especially those with large buffer objects, like Shadow of the Colossus and Gran Turismo 4. However, after enabling Super Pages support, I noticed immediate improvements. For example, Shadow of the Colossus wouldn’t even open before Super Pages, and while it’s not fully playable yet, it does load now. This isn’t a silver bullet, but it’s a step forward in improving the driver one piece at a time. I ended up selecting four games for a video comparison: Burnout 3: Takedown, Metal Gear Solid 3: Snake Eater, Resident Evil 4, and Tekken 4. Disclaimer: The BIOS used in the emulator was extracted from my own PS2, and I played only games I own, with ROMs I personally extracted. Neither I nor Igalia encourage using downloaded BIOS or ROM files from the internet. From the video, we can see noticeable improvements in all four games.
Although they aren’t perfectly playable yet, the performance gains are evident, particularly in Resident Evil 4, where the gameplay saw a solid 5 FPS boost. I realize 18 FPS might not satisfy most players, but I still had a lot of fun playing Resident Evil 4 on the RPi 5. When tracking the FPS for these games, it’s clear that the performance gains go well beyond the average 1% seen in other benchmarks. Super Pages show their true potential in high-memory applications like PS2 emulation. Having seen the performance gains Super Pages can bring to the Raspberry Pi, let’s now dive into the technical aspects of the feature. Implementing Super Pages The first challenge was figuring out how to allocate a contiguous block of memory using shmem. The Shared Memory Virtual Filesystem (shmem) is used as a flexible memory mechanism that allows the GPU and CPU to share access to BOs through the system’s temporary filesystem, tmpfs. tmpfs is a volatile filesystem that stores files in RAM, making it ideal for temporary or high-speed data that doesn’t need to persist on disk. For example, to allocate a 256KB BO across four 64KB pages, we need four contiguous 64KB memory blocks. However, by default, tmpfs only allocates memory in PAGE_SIZE chunks (as seen in shmem_file_setup()), whereas PAGE_SIZE is 4KB on the Raspberry Pi 4 and 16KB on the Raspberry Pi 5. Since the function drm_gem_object_init() — which initializes an allocated shmem-backed GEM object — relies on shmem_file_setup() to back these objects in memory, we had to consider alternatives, as the default PAGE_SIZE would divide memory into increments that are too small to ensure the large, contiguous blocks needed by the GPU. The solution we proposed was to create drm_gem_object_init_with_mnt(), which allows us to specify the tmpfs mountpoint where the GEM object will be created. This enables us to allocate our BOs in a mountpoint that supports larger page sizes. Additionally, to ensure that our BOs are allocated in the correct mountpoint, we introduced drm_gem_shmem_create_with_mnt(), which allows the mountpoint to be specified when creating a new DRM GEM shmem object. [PATCH v6 04/11] drm/gem: Create a drm_gem_object_init_with_mnt() function [PATCH v6 06/11] drm/gem: Create shmem GEM object in a given mountpoint The next challenge was figuring out how to create a new mountpoint that would allow for different page sizes based on the allocation. Simply creating a new tmpfs mountpoint with a fixed bigger page size wouldn’t suffice, as we needed flexibility for various allocations. Inspired by the i915 driver, we decided to use a tmpfs mountpoint with the “huge=within_size” flag. This flag, which requires the kernel to be configured with CONFIG_TRANSPARENT_HUGEPAGE, enables the allocation of huge pages. Transparent Huge Pages (THP) is a kernel feature that automatically manages large memory pages to improve performance without needing changes from applications. THP dynamically combines smaller pages into larger ones, typically 2MB, reducing memory management overhead and improving cache efficiency. To support our new allocation strategy, we created a dedicated tmpfs mountpoint for V3D, called gemfs, which provides an ideal space for managing these larger allocations. [PATCH v6 05/11] drm/v3d: Introduce gemfs With everything in place for contiguous allocations, the next step was configuring V3D to enable Big/Super Page support.
We began by addressing a major source of memory pressure on the Raspberry Pi: the current 128KB alignment for allocations in the virtual memory space. This alignment wastes space when handling small BO allocations, especially since the userspace driver performs a large number of these small allocations. As a result, we can’t fully utilize the 4GB address space available for the GPU on the Raspberry Pi 4 or 5. For example, we can currently allocate up to 32,000 BOs of 4KB (~140MB) and 3,000 BOs of 400KB (~1.3GB). This becomes a limitation for memory-intensive applications. By reducing the page alignment to 4KB, we can significantly increase the number of BOs, allowing up to 1,000,000 BOs of 4KB (~4GB) and 10,000 BOs of 400KB (~4GB). Therefore, the first change I made was reducing the VA alignment of all allocations to 4KB. [PATCH v6 07/11] drm/v3d: Reduce the alignment of the node allocation With the alignment issue resolved, we can now implement the code to properly set the flags on the Page Table Entries (PTE) for Big/Super Pages. Setting these flags is straightforward — a simple bitwise operation. The challenge lies in determining which BOs can be allocated in Super Pages. For a BO to be eligible for a Big Page, its virtual address must be aligned to 64KB, and the same applies to its physical address. Same thing for Super Pages, but now the addresses must be aligned to 1MB. If the BO qualifies for a Big/Super Page, we need to iterate over 16 4KB pages (for Big Pages) or 256 4KB pages (for Super Pages) and insert the appropriate PTE. Additionally, we modified the way we iterate through the BO’s memory. This was necessary because the THP may not always allocate the entire BO contiguously. For example, it might only allocate contiguously 1MB of a 2MB block. To handle this, we now iterate over the blocks of contiguous memory scattered across the scatterlist, ensuring that each segment is properly handled during the allocation process. What is a scatterlist? It is a Linux Kernel data structure that manages non-contiguous memory as if it were contiguous. It organizes separate memory blocks into a single logical buffer, allowing efficient data handling, especially in Direct Memory Access (DMA) operations, without needing a physically contiguous memory allocation. [PATCH v6 08/11] drm/v3d: Support Big/Super Pages when writing out PTEs However, the last few patches alone don’t fully enable the use of Super Pages. While PATCH 08/11 technically allows for Super Pages, we’re still relying on DRM GEM shmem objects, meaning allocations are still happening in PAGE_SIZE chunks. Although Big/Super Pages could potentially be used if the system naturally allocated 1MB or 64KB contiguously, this is quite rare and not our intended outcome. Our goal is to actively use Big/Super Pages as much as possible. To achieve this, we’ll utilize the V3D-specific mountpoint we created earlier for BO allocation whenever possible. By creating BOs through drm_gem_shmem_create_with_mnt(), we can ensure that large pages are allocated contiguously when possible, enabling the consistent use of Big/Super Pages. [PATCH v6 09/11] drm/v3d: Use gemfs/THP in BO creation if available And there you have it — Big/Super Pages are now fully enabled in V3D. The only requirement to activate this feature in any given kernel is ensuring that CONFIG_TRANSPARENT_HUGEPAGE is enabled. Final Words You can learn more about ongoing enhancements to the Raspberry Pi driver stack in this XDC 2024 talk by José María “Chema” Casanova Crespo. 
In the talk, Chema discusses the Super Pages work I developed, along with other advancements in the driver stack. Of course, there are still plenty of improvements on the horizon at Igalia. I’m currently experimenting with 64KB CLE allocations in user-space, and I hope to share more good news soon. Finally, I’d like to express my gratitude to Iago Toral and Tvrtko Ursulin for their invaluable support in developing Super Pages for the V3D kernel driver. Thank you both for sharing your experience with me!
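As a closing illustration of the mapping strategy from PATCH 08/11 above, here is a rough kernel-style sketch of walking a BO's scatterlist and picking the largest page size each contiguous run allows. The helper names (write_superpage_ptes() and friends) are hypothetical stand-ins, not the driver's actual functions:

#include <linux/scatterlist.h>
#include <linux/sizes.h>

/* Hypothetical helpers: each writes the PTEs covering one page of the
 * given size, with the matching big/super page flag set. */
void write_superpage_ptes(u32 va, dma_addr_t pa); /* 256 x 4KB PTEs */
void write_bigpage_ptes(u32 va, dma_addr_t pa);   /* 16 x 4KB PTEs */
void write_pte(u32 va, dma_addr_t pa);            /* 1 x 4KB PTE */

static void map_bo_sketch(struct sg_table *sgt, u32 va)
{
	struct scatterlist *sg;
	int i;

	/* THP may give us 1MB here and 64KB there, so handle each
	 * contiguous DMA segment on its own. */
	for_each_sgtable_dma_sg(sgt, sg, i) {
		dma_addr_t pa = sg_dma_address(sg);
		unsigned int len = sg_dma_len(sg);

		while (len) {
			if (IS_ALIGNED(va, SZ_1M) && IS_ALIGNED(pa, SZ_1M) &&
			    len >= SZ_1M) {
				write_superpage_ptes(va, pa);
				va += SZ_1M; pa += SZ_1M; len -= SZ_1M;
			} else if (IS_ALIGNED(va, SZ_64K) &&
				   IS_ALIGNED(pa, SZ_64K) && len >= SZ_64K) {
				write_bigpage_ptes(va, pa);
				va += SZ_64K; pa += SZ_64K; len -= SZ_64K;
			} else {
				write_pte(va, pa);
				va += SZ_4K; pa += SZ_4K; len -= SZ_4K;
			}
		}
	}
}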
  • Bastien Nocera: wireless_status kernel sysfs API (2024/10/23 12:06)
(I worked on this feature last year, before being moved off desktop related projects, but I never saw it documented anywhere other than in the original commit messages, so here's the opportunity to shine a little light on a feature that could probably see more use) The new usb_set_wireless_status() driver API function can be used by drivers of USB devices to export whether the wireless device associated with that USB dongle is turned on or not. To quote the commit message: This will be used by user-space OS components to determine whether the battery-powered part of the device is wirelessly connected or not, allowing, for example:
- upower to hide the battery for devices where the device is turned off but the receiver plugged in, rather than showing 0%, or other values that could be confusing to users
- Pipewire to hide a headset from the list of possible inputs or outputs or route audio appropriately if the headset is suddenly turned off, or turned on
- libinput to determine whether a keyboard or mouse is present when its receiver is plugged in.
This is not an attribute that is meant to replace protocol specific APIs [...] but solely for wireless devices with an ad-hoc “lose it and your device is e-waste” receiver dongle. Currently, the only 2 drivers to use this are the ones for the Logitech G935 headset and the Steelseries Arctis 1 headset. Adding support for other Logitech headsets would be possible if they export battery information (the protocols are usually well documented); support for more Steelseries headsets should be feasible if the protocol has already been reverse-engineered. As far as consumers of this sysfs attribute go, I filed a bug against Pipewire (link) to use it to not consider the receiver dongle as good as unplugged if the headset is turned off, which would avoid audio being sent to headsets that won't hear it. UPower supports this feature since version 1.90.1 (although it had a bug that makes 1.90.2 the first viable release to include it), and batteries will appear and disappear when the device is turned on/off. [Photo: a turned-on headset]
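For user-space consumers, reading the attribute is a plain sysfs file read, along these lines. The device path below is a made-up example and the exact value strings are an assumption; the attribute sits on the USB interface of the receiver dongle:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hypothetical receiver interface; adjust to your device. */
    const char *path = "/sys/bus/usb/devices/1-3:1.3/wireless_status";
    char buf[32] = "";
    FILE *f = fopen(path, "r");

    if (!f) {
        perror("open wireless_status");
        return 1;
    }
    if (fgets(buf, sizeof(buf), f))
        buf[strcspn(buf, "\n")] = '\0'; /* strip trailing newline */
    fclose(f);
    printf("wireless device status: %s\n", buf);
    return 0;
}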
  • Simon Ser: Status update, October 2024 (2024/10/20 22:00)
Hi! This month XDC 2024 took place in Montreal. I wasn’t there in-person, but thanks to the organizers I could still ask questions and attend workshops remotely (thanks!). As usual, XDC has been a great reminder of many things I wanted to do but which got buried under a pile of emails. We’ve discussed the upcoming KMS color management uAPI again, I’ve taken a bit of time to send more comments and it looks like this one is getting close to completion (famous last words). We’ve also discussed display muxing (switching a connector from one GPU to another one), it’s quite fun how surprisingly tricky this process is. Another topic was better multi-GPU support, in particular how to avoid going through the main GPU when an application is rendered and displayed on a secondary GPU. I’ve sent a proposal to improve the kernel DMA-BUF uAPI. New this year was the Wayland workshop organized by Mike Blumenkrantz, Daniel Stone and Jonas Ådahl. We’ve discussed the governance change proposals sent earlier this month. Various changes are being discussed, all have the goal to lower the barrier to entry when contributing a protocol and preventing patches from getting stuck. I’m excited to see how this turns out! We’ve finally started the release candidate cycle for Sway 1.10. I’ve released Sway 1.10-rc4 this weekend with a bunch more fixes, I’m hoping the final release can go out soon! I’ve also released the long overdue cage 0.2.0, which fast forwards wlroots to version 0.18 and adds primary selection support. I’ve sent a patch to add a udmabuf allocator to wlroots. This is useful for running the wlroots GLES2 and Vulkan renderers with software rendering (e.g. llvmpipe and lavapipe), which is handy for CI and exercises the same codepaths as real hardware instead of the seldom used Pixman renderer. wlroots-rs has been updated to wlroots v0.18, and I’ve revamped the way the compositor state is managed. Previously the library forced the use of Rc<RefCell<T>> to hold the state, which caused issues with double mutable borrows at runtime when compositor callbacks were nested (wlroots invokes compositor callback which borrows state and calls into wlroots which invokes another compositor callback which borrows state). With the new design the compositor must pass its state as an argument to all wlroots functions which may emit signals and call back into the compositor. delthas has contributed a whole bunch of soju patches used by his new hosted bouncer service, IRC Today. Uploaded videos and PDF files can now be viewed inline in Web browsers, a new HTTP basic authentication backend has been added, file uploads can now be delegated to a separate HTTP backend, a new soju.im/SAFERATE specification indicates when clients don’t need to rate-limit their messages, and a bunch of various smaller improvements and fixes. A bunch of exciting new features are in the pipeline as well (but I won’t spoil them just yet)! Matthew Hague has contributed TLS certificate pinning to Goguma. When hitting an invalid certificate, Goguma will now offer the user a choice to trust this specific certificate (trust on first use). gamja now supports drag-and-drop for file uploads thanks to xse. Both gamja and Goguma have moved to Codeberg; I hope this lowers the barrier to entry for contributing. A tiny NPotM is soju-containers, a repository containing Dockerfiles for soju and gamja, for easy deployment and testing. Both hottub and yojo now have support for build secrets.
For hottub, secrets are only enabled when the owner pushes commits (and enables the feature at setup time). For yojo, the owner needs to enable the feature at setup time and can then select specific secrets to expose on specific repositories. All of this is locked down to prevent collaborators from gaining access to arbitrary secrets when pushing to a repository. That’s all for now, see you next month!
  • Mike Blumenkrantz: Recovery (2024/10/15 00:00)
Struggling Last week was XDC. I did too much Wayland, and now I’ve been stricken with a plague for my hubris. I have some updates, but I lack the ability to fully capture the exploits of Mesa’s most sane developer in the ten minutes I’m awake every day. In the meanwhile, let’s take a look at another potential example of great hubris. Hm. Have you ever made a decision that seemed great at the time but then you realized later it was actually maybe not that great? Like, maybe it was actually really, uh, well, not dumb since nobody reading this blog would do something like that, but not…smart. And everyone else was kinda going along with your decision and trusting that you knew what you were talking about because let’s face it, you’re smart. Everyone knows how smart you are. That’s why they trust you to make these decisions. Long-time SGC readers know I’m not one to make decisions of any kind, but we all remember that time Microsoft famously introduced Work Graphs to D3D and also (quietly) deprecated ExecuteIndirect. The argument was compelling: why not just move all the work to the GPU? Haters described Work Graphs as just another attempt by the driver cartel to blame bugs on app developers by making tooling impossible. The rest of us were all in—we jumped on that bandwagon like it was the last triangle in the pipe before a crash. It wasn’t long before the high-powered players were aboard: NVIDIA AMD Details were light at this stage. There were no benchmarks, no performance numbers, no games or applications using Work Graphs, but everyone trusted Microsoft. Everyone knew the idea of this tech was sound, that it had to be faster. Microsoft doubled down: Work Graphs would support mesh nodes for drawing! Other graphics wizards began to get involved. The developerverse was in a tizzy. Everyone wanted in on the action. The hype train had departed the station. Hm? Six months after GDC, the first notable performance figures for Work Graphs were blogged about by AAA graphics rockstar, Kostas Anagnostou. I was at a Khronos F2F when it happened, and the number of laptop screens open to the post when it dropped was nonzero. Very nonzero. At best, the figures were whelming. Still there was no real analysis of Work Graph performance in comparison to alternative solutions. Haters will say I’m biased after recently shipping Vulkan’s device generated commands extension, but this was going to ship regardless since vkd3d-proton requires cross-vendor compatibility for ExecuteIndirect functionality used in games like Halo Infinite and Starfield. I’m all about the numbers. Show me the graphs. The perf graphs, that is. Fortunately, friend of the blog and veteran vertex wrangler, Hans-Kristian Arntzen, always has my back. He’s spent the past few months heroically writing vkd3d-proton emulation for Work Graphs, and he has recently posted his findings to an obscure README in that repository. READ IT. SERIOUSLY. YES, THIS IS A FULL PAGE-WIDTH LINK SO YOU CAN’T POSSIBLY MISS IT.
If you’re just here for the quick summary (which you shouldn’t be considering how much time he has spent making charts and graphs, and taking screenshots, and summing everything up in bite-sized morsels for easy consumption):
- Across the board, Work Graph performance is not very exciting
- Emulation with core Vulkan compute shader features is up to 3x faster
- Comparison test cases against ExecuteIndirect (which show EI being worse) do not effectively leverage that functionality, as noted by Hans-Kristian nearly six months ago
The principle of charity requires taking serious claims in the best possible light. This should have yielded robust, powerful ExecuteIndirect benchmark usage (and even base compute/mesh shader usage) to provide competitive benchmarks against Work Graph functionality. At the time of writing, those benchmarks have yet to materialize, and the only test cases are closer to strawmen that can be held up for an easy victory. I’m not saying that Work Graphs are inherently bad. Yet. At this point, however, I haven’t seen compelling evidence which validates the hype surrounding the tech. I haven’t seen great benchmarks and demos. Maybe it’s a combination of that and still-improving driver support. Maybe it’s as-yet-unavailable functionality awaiting future hardware. In any case, I haven’t seen a strong, fact-based technical argument which proves, beyond a doubt, that this is the future of graphics. Before anyone else tries to jump on the Work Graph hype train, I think we owe it to ourselves to thoroughly interrogate this new paradigm and make sure it provides the value that everyone expects.
  • Alyssa Rosenzweig: AAA gaming on Asahi Linux (2024/10/10 05:00)
    Gaming on Linux on M1 is here! We’re thrilled to release our Asahi game playing toolkit, which integrates our Vulkan 1.3 drivers with x86 emulation and Windows compatibility. Plus a bonus: conformant OpenCL 3.0. Asahi Linux now ships the only conformant OpenGL®, OpenCL™, and Vulkan® drivers for this hardware. As for gaming… while today’s release is an alpha, Control runs well! Installation First, install Fedora Asahi Remix. Once installed, get the latest drivers with dnf upgrade --refresh && reboot. Then just dnf install steam and play. While all M1/M2-series systems work, most games require 16GB of memory due to emulation overhead. The stack Games are typically x86 Windows binaries rendering with DirectX, while our target is Arm Linux with Vulkan. We need to handle each difference: FEX emulates x86 on Arm. Wine translates Windows to Linux. DXVK and vkd3d-proton translate DirectX to Vulkan. There’s one curveball: page size. Operating systems allocate memory in fixed size “pages”. If an application expects smaller pages than the system uses, they will break due to insufficient alignment of allocations. That’s a problem: x86 expects 4K pages but Apple systems use 16K pages. While Linux can’t mix page sizes between processes, it can virtualize another Arm Linux kernel with a different page size. So we run games inside a tiny virtual machine using muvm, passing through devices like the GPU and game controllers. The hardware is happy because the system is 16K, the game is happy because the virtual machine is 4K, and you’re happy because you can play Fallout 4. Vulkan The final piece is an adult-level Vulkan driver, since translating DirectX requires Vulkan 1.3 with many extensions. Back in April, I wrote Honeykrisp, the only Vulkan 1.3 driver for Apple hardware. I’ve since added DXVK support. Let’s look at some new features. Tessellation Tessellation enables games like The Witcher 3 to generate geometry. The M1 has hardware tessellation, but it is too limited for DirectX, Vulkan, or OpenGL. We must instead tessellate with arcane compute shaders, as detailed in today’s talk at XDC2024. Geometry shaders Geometry shaders are an older, cruder method to generate geometry. Like tessellation, the M1 lacks geometry shader hardware so we emulate with compute. Is that fast? No, but geometry shaders are slow even on desktop GPUs. They don’t need to be fast – just fast enough for games like Ghostrunner. Enhanced robustness “Robustness” permits an application’s shaders to access buffers out-of-bounds without crashing the hardware. In OpenGL and Vulkan, out-of-bounds loads may return arbitrary elements, and out-of-bounds stores may corrupt the buffer. Our OpenGL driver exploits this definition for efficient robustness on the M1. Some games require stronger guarantees. In DirectX, out-of-bounds loads return zero, and out-of-bounds stores are ignored. DXVK therefore requires VK_EXT_robustness2, a Vulkan extension strengthening robustness. Like before, we implement robustness with compare-and-select instructions. A naïve implementation would compare a loaded index with the buffer size and select a zero result if out-of-bounds. However, our GPU loads are vector while arithmetic is scalar. Even if we disabled page faults, we would need up to four compare-and-selects per load. 
load R, buffer, index * 16
ulesel R[0], index, size, R[0], 0
ulesel R[1], index, size, R[1], 0
ulesel R[2], index, size, R[2], 0
ulesel R[3], index, size, R[3], 0

There’s a trick: reserve 64 gigabytes of zeroes using virtual memory voodoo. Since every 32-bit index multiplied by 16 fits in 64 gigabytes, any index into this region loads zeroes. For out-of-bounds loads, we simply replace the buffer address with the reserved address while preserving the index. Replacing a 64-bit address costs just two 32-bit compare-and-selects.

ulesel buffer.lo, index, size, buffer.lo, RESERVED.lo
ulesel buffer.hi, index, size, buffer.hi, RESERVED.hi
load R, buffer, index * 16

Two instructions, not four. Next steps Sparse texturing is next for Honeykrisp, which will unlock more DX12 games. The alpha already runs DX12 games that don’t require sparse, like Cyberpunk 2077. While many games are playable, newer AAA titles don’t hit 60fps yet. Correctness comes first. Performance improves next. Indie games like Hollow Knight do run full speed. Beyond gaming, we’re adding general purpose x86 emulation based on this stack. For more information, see the FAQ. Today’s alpha is a taste of what’s to come. Not the final form, but enough to enjoy Portal 2 while we work towards “1.0”. Acknowledgements This work has been years in the making with major contributions from… Alyssa Rosenzweig, Asahi Lina, chaos_princess, Davide Cavalca, Dougall Johnson, Ella Stanforth, Faith Ekstrand, Janne Grunau, Karol Herbst, marcan, Mary Guillemard, Neal Gompa, Sergio López, TellowKrinkle, Teoh Han Hui, Rob Clark, Ryan Houdek… Plus hundreds of developers whose work we build upon, spanning the Linux, Mesa, Wine, and FEX projects. Today’s release is thanks to the magic of open source. We hope you enjoy the magic. Happy gaming.
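The "reserve 64 gigabytes of zeroes" trick above happens in GPU virtual memory, but the idea can be sketched in ordinary userspace C: an anonymous, never-written mapping reads back as zeroes without committing physical memory. This is only an analogue of the driver's approach, not its actual code:

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 64ULL << 30; /* 64 GiB of virtual address space */
    /* MAP_NORESERVE: reserve addresses, commit no memory. Untouched
     * anonymous pages read as zero. */
    uint8_t *zeroes = mmap(NULL, size, PROT_READ,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

    if (zeroes == MAP_FAILED)
        return 1;
    /* Any 32-bit index times 16 lands inside the region and loads zero,
     * so out-of-bounds loads can be redirected here. */
    printf("byte at index 123456 * 16: %u\n",
           zeroes[(uint64_t)123456 * 16]);
    munmap(zeroes, size);
    return 0;
}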
  • Peter Hutterer: HIOCREVOKE merged for kernel 6.12 (2024/10/04 00:27)
TLDR: if you know what EVIOCREVOKE does, the same now works for hidraw devices via HIDIOCREVOKE. The HID standard is the most common hardware protocol for input devices. In the Linux kernel HID is typically translated to the evdev protocol which is what libinput and all Xorg input drivers use. evdev is the kernel's input API and used for all devices, not just HID ones. evdev is mostly compatible with HID but there are quite a few niche cases where they differ a fair bit. And some cases where evdev doesn't work well because of different assumptions, e.g. it's near-impossible to correctly express a device with 40 generic buttons (as opposed to named buttons like "left", "right", ...[0]). In particular for gaming devices it's quite common to access the HID device directly via the /dev/hidraw nodes. And of course for configuration of devices accessing the hidraw node is a must too (see Solaar, openrazer, libratbag, etc.). Alas, /dev/hidraw nodes are only accessible as root - right now applications work around this by either "run as root" or shipping udev rules tagging the device with uaccess. evdev too can only be accessed as root (or the input group) but many many moons ago when dinosaurs still roamed the earth (version 3.12 to be precise), David Rheinsberg merged the EVIOCREVOKE ioctl. When called, the file descriptor immediately becomes invalid, and any further reads/writes will fail with ENODEV. This is a cornerstone for systemd-logind: it hands out a file descriptor via DBus to Xorg or the Wayland compositor but keeps a copy. On VT switch it calls the ioctl, thus preventing any events from reaching said X server/compositor. In turn this means that a) X no longer needs to run as root[1] since it can get input devices from logind and b) X loses access to those input devices at logind's leisure so we don't have to worry about leaking passwords. Real-time forward to 2024 and kernel 6.12, which has now gained the HIDIOCREVOKE ioctl for /dev/hidraw nodes. The corresponding logind support has also been merged. The principle is the same: logind can hand out an fd to a hidraw node and can revoke it at will, so we don't have to worry about data leakage to processes that should no longer receive events. This is the first of many steps towards more general HID support in userspace. It's not immediately usable since logind will only hand out those fds to the session leader (read: compositor or Xorg), so if you as an application want that fd you need to convince your display server to give it to you. For that we may have something like the inputfd Wayland protocol (or maybe a portal, but right now it seems a Wayland protocol is more likely). But that aside, let's hooray nonetheless. One step down, many more to go. One of the other side-effects of this is that logind now has an fd to any device opened by a user-space process. With HID-BPF this means we can eventually "firewall" these devices from malicious applications: we could e.g. allow libratbag to configure your mouse's buttons but block any attempts to upload a new firmware. This is very much an idea for now, there's a lot of code that needs to be written to get there. But getting there we can now, so full of optimism we go[2]. [0] to illustrate: the button that goes back in your browser is actually evdev's BTN_SIDE and BTN_BACK is ... just another button assigned to nothing particular by default. [1] and c) I have to care less about X server CVEs. [2] mind you, optimism is just another word for naïveté
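A minimal sketch of the revocation behavior described above, assuming a kernel of 6.12 or later whose <linux/hidraw.h> exposes HIDIOCREVOKE (the argument semantics mirror EVIOCREVOKE as I understand them, so double-check against the header):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/hidraw.h>

int main(void)
{
    char buf[64];
    int fd = open("/dev/hidraw0", O_RDWR);

    if (fd < 0)
        return 1;
    /* Revoke the fd, as logind would for a copy it handed out. */
    if (ioctl(fd, HIDIOCREVOKE, NULL) < 0)
        perror("HIDIOCREVOKE (needs kernel 6.12+)");
    /* Any further I/O on the revoked fd is expected to fail. */
    if (read(fd, buf, sizeof(buf)) < 0 && errno == ENODEV)
        printf("fd revoked: reads now fail with ENODEV\n");
    close(fd);
    return 0;
}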
  • Hans de Goede: IPU6 camera support in Fedora 41 (2024/10/02 18:09)
I'm happy to announce that the last tweaks have landed and that the fully FOSS libcamera software ISP based IPU6 camera support in Fedora 41 now has no known bugs left. See the Changes page for testing instructions. Supported hardware: Unlike USB UVC cameras where all cameras work with a single kernel driver, MIPI cameras like the Intel IPU6 cameras require multiple drivers. The IPU6 input-system CSI receiver driver is common to all laptops with an IPU6 camera, but different laptops use different camera sensors and each sensor needs its own driver, and then there are glue ICs like the LJCA USB IO-expander and the iVSC (Intel Visual Sensing Controller), and there also is the ipu-bridge code which translates Windows oriented ACPI tables with sensor info into the fwnodes which the Linux drivers expect. This means that even though IPU6 support has landed in Fedora 41, not all laptops with an IPU6 camera will work. Currently the IPU6 integrated in the following CPU models works if the sensor + glue hw/sw is also supported: Tiger Lake, Alder Lake, Raptor Lake. Jasper Lake and Meteor Lake also have an IPU6 but there is some more integration work necessary to get things to work there. Getting Meteor Lake IPU6 cameras to work is high on my TODO list. The mainline kernel IPU6 CSI receiver + libcamera software ISP has been successfully tested on the following models:
- Various Lenovo ThinkPad models with ov2740 (INT3474) sensor (1)
- Various Dell models with ov01a10 (OVTI01A0) sensor
- Dell XPS 13 Plus with ov13b10 (OVTIDB10/OVTI13B1)
- Some HP laptops with hi556 sensor (INT3537)
To see which sensor your laptop has, run: "ls /sys/bus/i2c/devices" - this will show e.g. "i2c-INT3474:00" if you have an ov2740, with INT3474 being the ACPI Hardware ID (HID) for the sensor. See here for a list of currently known HID to sensor mappings. Note that not all of these have upstream drivers yet. In that case, chances are that there might be a sensor driver for your sensor here. We could really use help from people submitting drivers from there upstream. So if you have a laptop with a sensor which is not in the mainline but is available there, you know a bit of C programming and you are willing to help, then please drop me an email so that we can work together to get the driver upstream. 1) on some ThinkPads the ov2740 sensor fails to start streaming most of the time. I plan to look into this next week and hopefully I can come up with a fix. MIPI camera integration work done for Fedora 41: After landing the kernel IPU6 CSI receiver and libcamera software ISP support upstream early in the Fedora 41 cycle, there still was a lot of work to do with regards to integrating this into the rest of the stack so that the cameras can actually be used outside of the qcam test app. The whole stack looks like this: "kernel → libcamera → pipewire | pipewire-camera-consuming-app".
Where the 2 currently supported pipewire-camera consuming apps are Firefox and GNOME Snapshot. Once this was all up and running, testing found quite a few bugs which have all been fixed now:
- Firefox showing 13 different cameras in its camera selection pulldown for a single IPU6 camera (fix).
- Installing pipewire-plugin-libcamera leads to UVC cameras being powered on all the time, causing significant battery drain (bug, bug, discussion, fix).
- Pipewire does not always recognize cameras on login (bug, bug, bug, fix).
- Pipewire fails to show cameras with relative controls (fix).
- spa_libcamera_buffer_recycle sometimes fails, causing the stream to freeze on the first frame (bug, fix).
- Firefox chooses a bad default resolution of 640x480. I worked with Jan Grulich to get this fixed and this is fixed as of firefox-130.0.1-3.fc41. Thank you Jan!
- Snapshot prefers 4:3 mode, e.g. 1280x1080 on 16:9 camera sensors capable of 1920x1080 (pending fix).
Added intel-vsc-firmware, pipewire-plugin-libcamera, libcamera-ipa to the Fedora 41 Workstation default package-set (pull, pull, pull).
  • Ricardo Garcia: Waiter, there's an IES in my DGC! (2024/09/27 09:42)
Finally! Yesterday Khronos published Vulkan 1.3.296 including VK_EXT_device_generated_commands. Thousands of engineering hours seeing the light of day, and awesome news for Linux gaming. Device-Generated Commands, or DGC for short, are Vulkan’s equivalent to ExecuteIndirect in Direct3D 12. Thanks to this extension, originally based on a couple of NVIDIA vendor extensions, it will be possible to prepare sequences of commands to run directly from the GPU, and to execute those sequences without any data going through the CPU. Also, Proton now has a much more official leg to stand on when it has to translate ExecuteIndirect from D3D12 to Vulkan while you run games such as Starfield. The extension not only provides functionality equivalent to ExecuteIndirect: it goes beyond that and offers more fine-grained control, like explicit preprocessing of command sequences, or switching shaders and pipelines with each sequence thanks to something called Indirect Execution Sets, or IES for short, which potentially work with ray tracing, compute and graphics (both regular and mesh shading). As part of my job at Igalia, I’ve implemented CTS tests for this extension and I had the chance to work very closely with an awesome group of developers discussing the specification, APIs and test needs. I hope I don’t forget anybody and apologize in advance if so. Mike Blumenkrantz, of course. Valve contractor, Super Good Coder and current OpenGL Working Group chair who took the initial specification work from Patrick Doane and carried it across the finish line. Be sure to read his blog post about DGC. Also incredibly important for me: he developed, and kept up to date, an implementation of the extension for lavapipe, the software Vulkan driver from Mesa. This was invaluable in allowing me to create tests for the extension much faster and making sure tests were in good shape when GPU driver authors started running them. Spencer Fricke from LunarG. Spencer did something fantastic here. For the first time, the needed changes in the Vulkan Validation Layers for such a large extension were developed in parallel while tests and the spec were evolving. His work will be incredibly useful for app developers using the extension in their games. It also allowed me to detect test bugs and issues much earlier and fix them faster. Samuel Pitoiset (Valve contractor), Connor Abbott (Valve contractor), Lionel Landwerlin (Intel) and Vikram Kushwaha (NVIDIA), providing early implementations of the extension, discussing APIs, reporting test bugs and needs, and making sure the extension works as well as possible for a variety of hardware vendors out there. To a lesser degree, most others mentioned as spec contributors for the extension, such as Hans-Kristian Arntzen (Valve contractor), Baldur Karlsson (Valve contractor), Faith Ekstrand (Collabora), etc., making sure the spec works for them too and makes sense for Proton, RenderDoc, and drivers such as NVK and others. As you may have noticed, a significant part of the people driving this effort work for Valve and, from my side, the work has also been carried out as part of Igalia’s collaboration with them. So my explicit thanks to Valve for sponsoring all this work. If you want to know a bit more about DGC, stay tuned for future talks about this topic. In about a couple of weeks, I’ll present a lightning talk (5 mins) with an overview at XDC 2024 in Montreal. Don’t miss it!
  • Mike Blumenkrantz: Unsticking The Very Sticky (2024/09/27 00:00)
Day 4 of Wayland governance hacking.

I wake at 5 AM. This is the perfect time to wake up in NYC TZ, as it affords me the ability to eat a whole apple in the time it takes my little internet-browsing chromebook to load all the IRC and Discord backlogs from the five hours that I snuck away for a nap when nobody was watching. I slather the apple with a haphazard scoop of peanut butter; getting away from a keyboard for more than twenty six seconds in a given stretch is difficult, and I need protein. While entering into a fraught negotiation over the meaning of 30-day discussion period with my left hand, I carefully scoop protein powder into a shaker with my right. There’s no time to waste. Not even a single second. Another argument could break out, steal a 1973 Pontiac Firebird, and go joyriding on the wrong side of the freeway. I’m writing this blog post with my toes. They know their way around a keyboard, but they’re slow and prone to mistakes. My cat is in charge of hitting an oversized backspace key when I dangle his favorite toy over it. It’ll be hours before we get something together that can be read coherently. This is my life now. This is what it takes to do Open Source.

Final Day: Everything, Everywhere, All At Once

I’ve put up a couple of sizable proposals to resolve longstanding issues and oversights in the governance model. Today is Friday, however, which means it’s the final day. Once we hit the weekend, everyone will collectively fuck off and forget everything that happened this week, which means I have to maintain peak velocity and finish strong. Let’s fucking go. Last proposal.

Problem 1: HOW IS THIS %#$@$#@#$%%$ PROTOCOL STILL STUCK AFTER 4 YEARS?!?!?!?!?

It’s a great question. I asked it myself. The answers are myriad and nebulous, but I’m the guy who explains things, so I’m gonna break it down. Imagine you’re wayland-protocols. You’ve got all these puppies. And you’re walking them–so you tell yourself, but really they’re walking themselves. They’re walking you. And they’re going in whatever direction they want. And out of all these puppies you’ve got two, one’s trying to go left to chase a car, and the other one’s trying to sniff a telephone pole on the right. The other fifty seven puppies just want to keep moving because they love their walkies. But these two puppies are the biggest ones, and they’re pulling the others along with them. So now your leashes are getting all tangled, and you’re being dragged around, and everyone’s pointing at you because you look like you don’t know what you’re doing. That’s where we’re at now. Everyone’s laughing at you. Look at this idiot trying to walk fifty nine puppies at once. This absolute moron. Who would ever do that? Why not just walk one or maybe two puppies at a time like everyone else? That’s the way you’re supposed to walk them. The way people have always walked them. But you know what? Walking fifty nine puppies individually would take all day. Nobody has the time to walk fifty nine puppies individually no matter how cute or eloquent they are. So you need some way to resolve this. Or something. Look, you get where I’m going with this. Wayland protocol discussions get bogged down by people throwing out hypotheticals that can’t truly be resolved, or by people talking past each other, or by people disappearing, or the phase of the moon, or any number of reasons, and there’s no official way to get past these blockages. That’s why I’m proposing tie-breaker votes as a simple way of moving past these problems when they arise. Everyone understands tie-breakers: you vote, and the side with the most votes wins. It’s that simple. In this context, the wayland-protocols member projects vote (with one of them representing the author for non-members) and the majority wins. If there’s another tie, the author gets to break it. Simple. Done.

Problem 2: Perfect Is the Enemy of Good

Sometimes a protocol in staging/ is “good enough”. The author has checked out, people are using the protocol, and everyone is happy with it. But it’s still not a stable/ protocol. In this scenario, after an extended period of time without changes, any staging/ protocol can be nominated by a member project for stable/ promotion. Some discussion happens, and then it becomes stable. Simple. Done.

Problem 3: Start Times

The governance model talks about discussion periods, but it doesn’t specify exactly when they begin. For example, on any of my governance MRs, does the 30-day period start when I open the MR or when the MR is approved? Obviously it starts when I open the MR. We gotta keep things moving. Done.

Problem 4: Project Representation

The governance document specifies that a member project may have up to two official representatives. This can be problematic, as it puts pressure on 1-2 people to be on top of every active protocol discussion. Instead, projects should be represented by as many individuals as they want (pending the usual process for adding points-of-contact). This ensures that protocols don’t get blocked waiting for a given project to take a look when all representatives are busy. It also helps more diverse projects (e.g., wlroots) ensure that opinions from more of their constituents are officially represented. Each project still only gets one vote, but now that vote can be more readily deployed and voiced. I think we’re done here? From what I’ve seen, this should cover all the major issues that have been negatively impacting Wayland development. Sure, there are other, more minor issues, but I’m not aware of anything that can’t be solved through good old person-to-person discussion. Maybe all this works, and maybe it doesn’t. But at least now if we decide to throw away some puppies, nobody can question whether we really tried everything.
  • Mike Blumenkrantz: Device Generated Commands (2024/09/26 00:00)
Big. While other development has been progressing, in the background I’ve been working on something big. Now, finally, I can talk about it. VK_EXT_device_generated_commands is a new extension which, it’s no exaggeration to say, is the biggest thing Vulkan has shipped since ray tracing. I had the privilege of working with people across the industry while driving it, from both desktop and mobile hardware vendors, and despite it being EXT, we’re going to see some truly broad adoption here. Big shoutout to Patrick Doane, formerly of Activision-Blizzard and now (I think) at Deviation Games, for kickstarting this many years ago. Thanks for your work. I hope you’re satisfied with the final product.

What does this do?

DGC enables applications to record commands from shaders, to then be executed directly. This means no more ping-ponging back and forth between CPU and GPU, which can help to eliminate performance bottlenecks. See also the NV extension and D3D12 ExecuteIndirect as prior art. While this functionality is used in big games such as Starfield and Halo Infinite, those examples are ETOOBIG to really comprehend. Also the code is proprietary, so I can’t share it publicly. Also I don’t have the code. Fortunately, I’ve hacked together a small demo program for people to look over to get a feel for the functionality. dgcgears is a rough fork of vkgears from mesa-demos (thanks to zink’s own godfather, Erik Faye-Lund, for the original work!) which utilizes DGC to execute draws rather than record them directly. Now here’s where the crazy stuff starts.

Changing shaders from shaders

EXT DGC adds the ability to change shaders from shaders. By creating an Indirect Execution Set, multiple sets of shaders can be bundled together and indexed into from within shaders. dgcgears uses a different vertex shader to draw each gear. While the NV extension had this functionality, EXT takes it further, enabling it to be supported on all hardware.

Shader Objects: fully supported

Another big feature of EXT DGC is that it is agnostic to pipelines vs shader objects vs whatever new stuff comes out in the future. If you prefer one over the other, you’re free to go ahead and use that.

VKD3D-proton: supported

I’ve already written the code, and it should land at some point.

Drivers: supported

- ANV
- Lavapipe
- NVIDIA
- NVK
- RADV
- Turnip
- other drivers soon

3: Device. Generated. Commands. Count ‘em.
  • Mike Blumenkrantz: Gettin Nacky (2024/09/26 00:00)
Rejection

It’s hard. Nobody likes that feeling, especially after putting in a bunch of work, double-especially when that work is on a Wayland protocol. That’s right, the target of today’s wayland-protocols governance update: NACKs. A NACK is intended to mean something like: this idea does not belong in wayland-protocols for [technical reason]. It’s supposed to be the last resort when all other alternatives and gentler nudges have been exhausted. There’s been a lot of confusion over this concept over the years, specifically along the lines of:
- Who can actually NACK?
- When can NACKs be used?
- What’s stopping my protocol from being NACKed?
I’m glad you asked.

Definition

I’ve put up a comprehensive proposal to reform and define the NACK. The short of it is:
- Only people in this file can NACK a protocol
- NACKs can only be used in extreme circumstances, to block a protocol which does not belong in wayland-protocols
- NACKs now carry consequences if they are used improperly, including the potential removal of anyone using them improperly
This should cover all the basic cases. It’s important to remember that a NACK can always be removed, which is to say that there’s always room for discussion in Open Source. If you’re considering submitting a protocol proposal, don’t worry too much about this! A NACK won’t ever be the first thing you see, and you’ll have ample time and room to discuss your ideas before anyone even considers bringing it up.
  • Melissa Wen: Reflections on 2024 Linux Display Next Hackfest (2024/09/25 13:50)
Hey everyone! The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year’s event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year’s edition, and this blog post summarizes the experience and its fruits. One of the highlights of this year’s hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remote). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here. The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights

The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, leaving everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement.
- Structured Agenda and Timeboxes: Each session had a defined scope, a time limit (1h20 or 2h10), and began with an introductory talk on the topic.
- Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
- Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.
- Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.
- No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the schedule at run time to keep everybody involved in the same topics.
- Real-time Updates: We provided notifications and updates through dedicated emails and the event Matrix room.

Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.
- Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.
- Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrella Galicia Beer (MEGA).

Fruitful Discussions and Follow-up

The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists. With the KMS color management API taking shape, we discussed refinements and the best approaches to cover the variety of color pipelines from different hardware vendors. We are also investigating techniques for performant SDR<->HDR content reproduction and for reducing latency and power consumption when using the color blocks of the hardware.

Color Management/HDR

Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discussing Color and HDR across the Linux Display stack layers.

Color/HDR (Kernel-Level)

Harry Wentland (AMD) led this session. Here, kernel developers shared the color management pipelines of AMD, Intel and NVIDIA.
We had diagrams and explanations from hardware vendors’ developers discussing the differences, constraints, and paths to fit them into the generic KMS color management properties, such as advertising modeset needs, IN_FORMATS, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware. Upstream work related to this session: KMS color management properties (new version - v5); IGT tests; drm_info draft support of v4 DRM/KMS plane color properties; gamescope draft support of v4 DRM/KMS plane color properties; KWin WIP implementation of DRM/KMS plane color properties.

Color/HDR (Compositor-Level)

Sebastian Wick (Red Hat) led this session. It started with Sebastian’s presentation covering Wayland color protocols and compositor implementation, followed by an explanation of the APIs provided by Wayland and how they can be used to achieve better color management for applications, and discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria. Upstream work related to this session: Wayland color management protocol; Wayland color representation protocol; HDR support merged in Mutter; color management protocol in Mutter; color management protocol in GTK.

Color/HDR (Use Cases and Testing)

Christopher Cameron (Google) and Melissa Wen (Igalia) led this session. In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections on real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think of HDR. This presentation and Q&A were important to put participants on the same page about how to transition between SDR and HDR and somehow “emulate” HDR. We also discussed the usage of a kernel background color property. Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency

Mario Limonciello (AMD) led this session. Mario gave an introductory presentation about AMD ABM (adaptive backlight management), which is similar to Intel DPST. After some discussion, we agreed on exposing a kernel property for power saving policy. This work has already been merged in the kernel, and the userspace support is under development. Upstream work related to this session: kernel series: add support for ‘power saving policy’ property (merged); Mutter issue: support for “power saving policy” property; KWin draft MR: backends/drm: add support for the “power saving policy” property.

Strategy for video and gaming use-cases

Leo Li (AMD) led this session. Miguel Casas (Google) started it with a presentation on overlays in ChromeOS video, explaining the main goal of saving power by switching off the GPU for accelerated compositing, and the challenges of different colorspace/HDR handling for video on Linux. Then Leo Li presented different strategies for video and gaming, and we discussed the userspace need for more detailed feedback mechanisms to understand failures when offloading. Also, creating a debugfs interface came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API

Xaver Hugl (KDE/BlueSystems) led this session. Compositor developers have exposed some issues with doing real-time scheduling and async page flips. One is that the kernel limits the lifetime of realtime threads, and if a modeset takes too long, the thread will be killed, and thus the compositor as well.
Also, simple page flips take longer than expected, and drivers should optimize them. Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult to predict from drivers, since it varies greatly with the type of properties; for example, color management updates take much longer. In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete, as well as an API to pre-program the color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more… and beer

We also had sessions to discuss a new KMS API to mitigate headaches around VRR and frame limits, such as different brightness levels at different refresh rates, abrupt changes of refresh rate, low frame rate compensation (LFC), and precise timing in VRR. On Display Control we discussed features missing from the current KMS interface, such as HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need for a place where compositor developers can post TODOs to be developed by KMS people. The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamically mux-switching the display signal between discrete and integrated GPUs. In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a Linux Display “wish list”, which includes: improvements to VTTY and HDR switching, a better dmabuf API for multi-GPU support, definition of tone mapping, blending and scaling semantics, and Wayland protocols for advertising to clients which colorspaces are supported. We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output) / HDR video offloading (dma-heaps) / plane-based scrolling for web pages; color management / HDR / ICC profile support; and addressing issues such as flickering when color primaries don’t match, etc. After three days of intensive discussions, all in-person participants went on a guided tour of the Museum of Estrella Galicia Beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions

Participants provided valuable feedback on the hackfest, including suggestions for future improvements.
- Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and mental refreshment, preventing the fatigue caused by endless discussions.
- Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
- Remote Participation: Remote attendees appreciated the inclusive setup and the opportunities to actively participate in discussions.
- Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest

We can’t help but thank the 40 participants, who engaged in person or virtually in relevant discussions, for a collaborative evolution of the Linux display stack and for building an insightful agenda.
A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limonciello (AMD), Sebastian Wick (Red Hat) and Xaver Hugl (KDE/BlueSystems), for the effort they put into preparing the sessions, explaining the topics and guiding the discussions. My acknowledgment to the other in-person participants who made such an effort to travel to A Coruña: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was an awesome opportunity to meet you and chat face-to-face. Finally, thanks to the virtual participants who couldn’t make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable input even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo, Jiri Koten (Red Hat), Jonas Ådahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel Dänzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD). We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!
  • Simon Ser: Status update, September 2024 (2024/09/19 22:00)
Hi! Once again, this status update will be rather short due to limited time bandwidth. I hope to be able to allocate a bit more time for my open-source projects next month. We’re getting closer to a new Sway release (fingers crossed), with lots of help from Kenny and Alexander to iron out the remaining bugs. We’ve just shipped wlroots 0.18.1 today (thanks to Simon Zeni for leading the backporting efforts!). I’ve been expanding wlroots’ explicit synchronization support by adapting our multi-GPU logic, the Vulkan renderer and the libliftoff backend. I’ve released wayland 1.23.1 with some Clang and wayland-scanner fixes. I’ve ported the cage kiosk compositor to wlroots 0.18. Last but not least, I’ve rewritten makoctl in C, because shell scripts only get you so far. I’ve been giving feedback on and contributing to KDE’s SVG cursor spec. The cursor theme landscape isn’t in a great spot at the moment, because we’re stuck with XCursor images. Now that the cursor-shape protocol is gaining adoption, there is an opportunity to more easily switch the underlying image format. Thanks to the KDE folks for pushing this forward! I’d really like to see the spec standardized under the freedesktop.org umbrella. delthas has been contributing some nifty new features to soju: admins can now configure per-user network count limits, can now impersonate a user via SASL, and the file upload endpoint now sends back an error early when the file is too large. soju 0.8.2 has been released with a bunch of bug fixes. The NPotM is varlinkgen (better name TBD). It’s a Varlink C library and code generator. If you’ve been following my projects for a while, you probably know how much I love code generators producing type-safe APIs from schemas. I must admit, I appreciate Varlink’s simplicity and lack of a central bus. I plan to use varlinkgen in kanshi, and maybe other daemons in need of an IPC. See you next month!
  • Hans de Goede: Fedora plymouth boot splash not showing on systems with AMD GPUs (2024/09/14 13:38)
Recently there have been a number of reports (bug 2183743, bug 2276698, bug 2283839, bug 2312355) about the plymouth boot splash not showing properly on PCs using AMD GPUs. The problem with plymouth and AMD GPUs is that the amdgpu driver is a really, really big driver, which easily takes up to 10 seconds to load on older PCs. The delay this causes may make plymouth time out while waiting for the GPU to be initialized, causing it to fall back to the 3-dot text-mode boot splash.

There are two workarounds for this, depending on the PC's configuration:

1. With older AMD GPUs the radeon driver is actually used to drive the GPU, but even though it is unused, the amdgpu driver still loads, slowing things down. To check if this is the case for your PC, start a terminal in a graphical login session and run "lsmod | grep -E '^radeon|^amdgpu'". This will output something like this:

    amdgpu 17829888 0
    radeon 2371584 37

The second number after each driver is the usage count. As you can see, in this example the amdgpu driver is not used. In this case you can disable the loading of the amdgpu driver by adding "modprobe.blacklist=amdgpu" to your kernel commandline:

    sudo grubby --update-kernel=ALL --args="modprobe.blacklist=amdgpu"

2. If the amdgpu driver is actually used on your PC, then plymouth not showing can be worked around by telling plymouth to use the simpledrm drm/kms device created from the EFI framebuffer early in boot, rather than waiting for the real GPU driver to load. Note that this depends on your PC booting in EFI mode. To do this, run:

    sudo grubby --update-kernel=ALL --args="plymouth.use-simpledrm"

After using one of these workarounds, plymouth should show normally again on boot (and booting should be a bit faster).
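For the curious, the same usage-count check can also be done programmatically: lsmod is essentially a pretty-printer for /proc/modules, whose third field is the usage count. A small illustrative Rust sketch (my own, not part of any of the tools above):

    use std::fs;

    fn main() {
        // /proc/modules lines look like: "amdgpu 17829888 0 - Live 0x0000000000000000"
        // Fields: name, size, usage count, users, state, address.
        let modules = fs::read_to_string("/proc/modules").expect("cannot read /proc/modules");
        for line in modules.lines() {
            let mut fields = line.split_whitespace();
            let (Some(name), _size, Some(refcount)) = (fields.next(), fields.next(), fields.next()) else {
                continue;
            };
            if name == "amdgpu" || name == "radeon" {
                if refcount == "0" {
                    println!("{name} is loaded but unused (usage count 0)");
                } else {
                    println!("{name} is in use (usage count {refcount})");
                }
            }
        }
    }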
  • Tvrtko Ursulin: DRM scheduling cgroup controller (2024/09/04 00:00)
Introduction #

The topic of a Direct Rendering Manager (DRM) cgroup controller is something which has been proposed a few times in the past, but so far it is still missing from the Linux graphics stack. Some of those attempts focused on controlling the GPU memory usage aspect, while some were concerned with scheduling. As I am continuing to explore this area as part of my work at Igalia, in this post we will discuss one possible way of implementing the latter. The general problem statement we are trying to address is the fact that many GPUs (and their respective kernel drivers) can simultaneously schedule workloads from different clients, and that there are use-cases where having external control over scheduling decisions would be beneficial. But first, to clarify what we mean by “external control”: by that term we refer to the scheduling decisions being influenced from outside the actual process doing the rendering. If we were to draw a parallel to CPU scheduling, that would be the difference between a process (or a thread) issuing a system call such as setpriority(2) or nice(2) itself (“internal control”), versus its scheduling priority being modified by an external entity, such as the user issuing the renice(1) shell command, launching the executable via the nice(1) shell command, or even using the CPU scheduling cgroup controller (“external control”); the short sketch after this section illustrates the difference. This has two benefits. Firstly, it is the user who typically knows which tasks are higher priority and which should run in the background, and which should therefore be prevented, as much as possible, from starving the foreground tasks of resources. Secondly, external control can be applied to any process in a unified manner, without the need for applications to individually expose the means to control their scheduling priority. If we now return to the world of GPU scheduling, we find ourselves in a landscape where internal scheduling control is possible with many GPU drivers, but external control is not. Improving on that poses some technical and conceptual challenges, because GPUs are not as nice and uniform in their scheduling needs and capabilities as CPUs are, but if we can come up with something reasonable, even if not perfect, it could bring improvements to the user experience in a variety of scenarios.

Past attempts - Priority based controllers #

The earliest attempt I can remember was from 2018, by Matt Roper[1], who proposed to implement a driver-specific priority based controller. The RFC limited itself to i915 (the kernel driver for Intel GPUs) and, although the priority-based setup is well established in the world of CPU scheduling and it is easy to understand its effects, the proposal did not gain much traction. Because of the aforementioned advantages, when I proposed my version of the controller in 2022[2], it also included a slightly different version of a priority-based controller. In contrast to the earlier one, this proposal was in principle driver-agnostic and the priority levels were also abstracted. The proposal was also accompanied by benchmark results showing that the approach was effective in allowing users on Linux to launch GPU tasks in the background, while leaving more GPU bandwidth to the foreground task than when not using the controller. Similarly on ChromeOS, when wired into the focused versus un-focused window cgroup management, it was able to demonstrate relatively more GPU time given to the foreground window.
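To illustrate the internal/external distinction from the CPU-world analogy above, here is a minimal Rust sketch of the "internal control" path, using the libc crate (nothing DRM-specific):

    // Build with the `libc` crate as a dependency.
    use libc::{setpriority, PRIO_PROCESS};

    fn main() {
        // "Internal control": the process lowers its own CPU scheduling priority,
        // equivalent to having been launched via nice(1).
        let ret = unsafe { setpriority(PRIO_PROCESS as _, 0, 10) }; // who == 0 means the calling process
        assert_eq!(ret, 0, "setpriority(2) failed");
        // "External control" is the same knob turned from outside, e.g. the user
        // running `renice 10 -p <pid>`, or a cgroup manager adjusting cpu.weight;
        // the process itself does not have to cooperate.
        println!("now running at nice level 10");
    }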
Current proposal - Weight based controller #

Anticipating the potential lack of sufficient support for this approach, the same RFC also included a second controller which takes a different route. It abstracts things one step further and implements a weight-based controller based on GPU utilisation[3]. The basic idea is that the GPU time budget is split based on relative group weights across the cgroup hierarchy, and that the controller notifies the individual DRM drivers when their clients are over budget. From there it is left to the individual drivers to decide how best to manage this situation, depending on the specific scheduling capabilities of the driver and the GPU hardware. The user interface completely mimics the existing CPU and IO cgroup controllers with the single drm.weight control file. The weights carry no absolute meaning and are only relative within a single group of siblings. Their only purpose is to split out the time budget between them. (One potential cgroup configuration is shown in a diagram in the original post; a toy calculation below makes the same arithmetic concrete.) The DRM cgroup controller then executes a periodic scanning task which queries each DRM client for its GPU usage and notifies drivers when clients are over their allocated budget. If we expand the concept with runtime adjustment of group weights based on window focus status, with two graphically active clients such as a game and a web browser, we can end up with two scenarios (illustrated in the original post), showing the actual GPU utilisation of each group together with its drm.weight. On the left-hand side the web browser is the focused window, with the weights 100-to-10 in its favour. The compositor is not using its full 200 / (200 + 100), so a portion is passed on to the desktop group, up to the full 80% required. Inside the desktop group the game is currently using 70%, while its actual allocation is 80% * (10 / (100 + 10)) = 7.27%. Therefore it is currently consuming more than its budget, and the corresponding DRM driver will be notified by the controller and will be able to do something about it. After the user has given focus to the game window, the relative weights will be adjusted, and so will the budgets. Now the web browser will be over budget and can therefore be throttled down, limiting the effect of its background activity on the foreground game window.

First driver implementation - i915 #

Back when I started developing this idea, Intel GPUs were my main focus, which is why i915 was the first driver I wired up with the controller. There I implemented a rather simple approach of dynamically adjusting the scheduling priority of the throttled contexts, by an amount proportional to how much the client is over budget in relative terms. The implementation would also cross-check against the physical engine utilisation, since in i915 we have easy access to that metric, and only throttle if the latter is close to being fully utilised. (Why this makes sense could be an interesting digression relating to the fact that a single cgroup can in theory contain multiple GPUs and multiple clients using a mix of those GPUs. But let's leave that for later.) One of the scenarios I used to test how well this works is to run two demanding GPU clients, each in its own cgroup, tweak their relative weights, and see what happens. The results were encouraging and are shown in a table in the original post. We can see that, when a client group's weight was decreased, the GPU bandwidth it was receiving also went down, as a consequence of the lowered context priority after receiving the over-budget notification.
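To make the drm.weight arithmetic above concrete, here is a toy Rust model of how relative weights split a time budget down the hierarchy. Note this is only the static split: as described above, the actual controller also hands budget left unused by one sibling (the compositor) to the others, which is how the desktop group ends up with the full 80%:

    // Toy model of splitting a GPU time budget by relative cgroup weights.
    // Names and numbers follow the example above; this is not kernel code.
    struct Group {
        name: &'static str,
        weight: u64,
        children: Vec<Group>,
    }

    fn print_budgets(group: &Group, budget: f64) {
        println!("{:12} {:6.2}% of GPU time", group.name, budget);
        let total: u64 = group.children.iter().map(|c| c.weight).sum();
        for child in &group.children {
            print_budgets(child, budget * child.weight as f64 / total as f64);
        }
    }

    fn main() {
        let root = Group {
            name: "root",
            weight: 100,
            children: vec![
                Group { name: "compositor", weight: 200, children: vec![] },
                Group {
                    name: "desktop",
                    weight: 100,
                    children: vec![
                        Group { name: "browser", weight: 100, children: vec![] },
                        Group { name: "game", weight: 10, children: vec![] },
                    ],
                },
            ],
        };
        // Static split: desktop gets 100 / (200 + 100) = 33.33%, and the game
        // 33.33% * (10 / 110) = 3.03%. With the compositor's unused budget
        // passed down, desktop's share grows to 80% and the game's to 7.27%.
        print_budgets(&root, 100.0);
    }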
This is a suitable moment to mention that the DRM cgroup controller does not promise perfect control, that is, achieving the actual GPU sharing ratios as expressed by group-relative weights. As mentioned before, GPU scheduling is not nearly at the same level of quality and granularity as in the CPU world, so the goal it sets is simply to improve things: to do something which has a positive impact on user experience. At the same time, the mechanism and control interface proposed do not preclude individual drivers from doing as good a job as they can, or even the future possibility of replacing the inner workings with something smarter, with no need to change the user space control interface. Going back to the initial i915 implementation, the second test I did was attempting to wire it up with the background/foreground window focus handling in ChromeOS. There I experimented with a game (Android VM) running in parallel with a WebGL demo in a browser. At a certain point after both clients were running, I lowered the weight of the background game, and a screenshot in the original post shows how the FPS metric in the browser jumped up. This illustrates how having the controller can indeed improve the user experience. The user's focus will be on the foreground window, and therefore it does make sense to prioritise GPU access for that client, for better interactivity and smoother rendering there. In fact, in this example the actual FPS jumped from around 48-49 to 60fps, meaning that throttling the background client allowed the foreground one to match its rendering to the display's refresh rate.

Second implementation - amdgpu #

AMD's kernel module was the next interesting driver which I wired up with the controller. The fact that its scheduling is built on top of the DRM scheduler, with only three distinct priority levels, mandated a different approach to throttling. We keep a sorted list of "most offending" clients (most out of budget, or having borrowed the most unused budget from the sibling group), with the idea that the top client on that list gets throttled by lowering its scheduling priority. That was relatively straightforward to implement and sounded like it could potentially satisfy the most basic use case of background task isolation. To test the runtime behaviour we set up two sibling cgroups and vary their relative scheduling weights. In one cgroup we run glxgears with vsync turned off and log its frame rate over time, while in the second group we run glmark2. Let us first have a look at how the glxgears frame rate varies during this test, depending on three different scheduling weight ratios between the cgroups. The scheduling weight ratio is expressed as glxgears:glmark2, i.e. 10:1 means the glxgears scheduling weight was ten times that configured for glmark2. We can observe that, as glmark2 progresses through its various sub-benchmarks, the glxgears frame rate changes too, but it was overall higher in the runs where the scheduling weight ratio was in its favour. That is a positive result, showing that even a simple implementation seems to have the desired effect, at least to some extent. For the second test we can look from the perspective of glmark2, checking how the benchmark score changes depending on the ratio of scheduling weights. Again we see that the scores generally improve when the scheduling weight ratio is increased in favour of the benchmark. However, in neither case is the change in the result proportional to the actual ratios.
This is because the primitive implementation is not able to precisely limit the “background” client, but is only able to achieve some throttling. Also, there is an inherent delay in how fast the controller can react, given that the control loop is based on periodic scanning. This period is configurable and was set to two seconds for the above tests.

Conclusion #

Hopefully this write-up has managed to demonstrate two main points. First, that a generic and driver-agnostic approach to a DRM scheduling cgroup controller can improve the user experience and enable new use cases, while at the same time following the established control interface as it exists for CPU and IO control, which makes it future-proof and extendable. Secondly, that even relatively basic driver implementations can be somewhat effective in providing positive control effects. It also probably needs to be reiterated that neither the driver implementations nor the cgroup controller implementation itself is limited by the user interface proposed: both could be independently improved under the hood in the future. What is next? There is more work to be done, such as conducting more detailed testing, polishing the implementation and potentially attempting to wire up more drivers to the controller. Further advocacy work in the DRM community too.

References #

1. https://lore.kernel.org/dri-devel/20180120015141.10118-1-matthew.d.roper@intel.com/
2. https://lore.kernel.org/lkml/20221019173254.3361334-1-tvrtko.ursulin@linux.intel.com/
3. https://lore.kernel.org/lkml/ZVE3shwiRbUQyAqs@mtj.duckdns.org/T/
  • Dave Airlie (blogspot): On Rust, Linux, developers, maintainers (2024/08/30 01:52)
There's been a couple of mentions of Rust4Linux in the past week or two, one from Linus on the speed of engagement and one about Wedson departing the project due to non-technical concerns. This got me thinking about project phases and developer types.

Archetypes:

I will regret making an analogy in an area I have no experience in, but let's give it a go with a road-building analogy. Let's sort developers into 3 rough categories. Let's preface this by saying not all developers fit in a single category throughout their careers, and some developers can do different roles on different projects, or on the same project simultaneously.

1. Wayfinders/Mapmakers

I want to go build a hotel somewhere, but there exists no map or path. I need to travel through a bunch of mountains, valleys, rivers, weather, animals, friendly humans, antagonistic humans and some unknowns. I don't care deeply about them, I want to make a path to where I want to go. I hit a roadblock, I don't focus on it, I get around it by any means necessary and move on to the next one. I document the route by leaving maps and signs. I build a hotel at the end.

2. Road builders

I see the hotel and the path someone has marked out. I foresee that larger volumes will want to traverse this path and build more hotels. The roadblocks the initial finder worked around, I have to engage with. I engage with each roadblock differently. I build a bridge, dig a tunnel, blow up some stuff, work with/against humans, whatever is necessary to get a road built to the place the wayfinder built the hotel. I work on each roadblock until I can open the road to traffic. I can open it in stages, but it needs a completed road.

3. Road maintainers

I've got a road, I may have built the road initially. I may no longer build new roads. I've no real interest in hotels. I deal with intersections with other roads controlled by other people, I interact with builders who want to add new intersections for new roads, and remove old intersections for old roads. I fill in the holes, improve safety standards, handle the odd wayfinder wandering across my 8 lanes.

Interactions:

Wayfinders and maintainers is the most difficult interaction. Wayfinders like to move freely and quickly, maintainers have other priorities that slow them down. I believe there need to be road builders engaged between the wayfinders and maintainers. Road builders have to be willing to expend the extra time to resolve roadblocks in the best way possible for all parties. The time it takes to resolve a single roadblock may be greater than the time expended on the whole wayfinding expedition, and this frustrates wayfinders. The builder has to understand what the maintainers' concerns are and where they come from, and why the wayfinder made certain decisions. They work via education and trust building to get them aligned to move past the block. They then move down the road and repeat this process until the road is open. How this is done might change depending on the type of maintainers.

Maintainer types:

Maintainers can fall into a few different groups on a per-new-road basis, and how road builders deal with existing road maintainers depends on where they are for this particular intersection:

1. Positive and engaged

Aligned with the goal of the road, want to help out, design intersections, help build more roads and more intersections. Will often have helped wayfinders out.

2. Positive with real concerns

Agrees with the road's direction, might not like some of the intersections, willing to be educated and give feedback on newer intersection designs. Moves to group 1, or trusts that others are willing to maintain intersections on their road.

3. Negative with real concerns

Don't agree fully with the road's direction or choice of building material. Might have some resistance to changing intersections, but may believe in a bigger picture, so won't actively block. Hopefully can move to 1 or 2 with education and trust building.

4. Negative and unwilling

Don't agree with the goal, don't want the intersection built, won't trust anyone else to care about their road enough. Education and trust building is a lot more work here, and often it's best to leave these intersections until later, where they may be swayed by other maintainers having built their intersections. It might be possible to build a reduced intersection, but if they are a major enough roadblock on a very busy road, then a higher authority might need to be brought in.

5. Don't care/Disengaged

Doesn't care where your road goes and won't talk about intersections. This category often just needs to be told that someone else will care about it, and they will step out of the way. If they are active blocks or refuse interaction, then again a higher authority needs to be brought in.

Where are we now?

I think the r4l project has had a lot of excellent wayfinding done, has a lot of wayfinding in progress and probably has a bunch of future wayfinding to do. There are some nice hotels built. However, now we need to build the roads to them so others can build hotels. To the higher authority, the road building process can look slow. They may expect cars to be driving on the road already, and they see roadblocks from a different perspective. A roadblock might look smaller to them, but have a lot of fine details, or a large roadblock might be worked through quickly once it's engaged with. For the wayfinders, the process of interacting with maintainers is frustrating and slow, and they don't enjoy it as much as wayfinding, and because they still only care about the hotel at the end, when a maintainer gets into the details of their particular intersection they don't want to do anything but go stay in their hotel. The road will get built, it will get traffic on it. There will be tunnels where we should have intersections, there will be bridges that need to be built from both sides, but I do think it will get built.

I think my request from all this is that contributors should try to identify the archetype they currently resonate with and find the next group over to interact with. For wayfinders, it's fine to just keep wayfinding, just don't be surprised when the road building takes longer, or the road that gets built isn't what you envisaged. For road builders, just keep building, find new techniques for bridging gaps and blowing stuff up when appropriate. Figure out when to use higher authorities. Take the high road, and focus on the big picture. For maintainers, try to keep up with modern road building, don't say 20-year-old roads are the pinnacle of innovation. Be willing to install the rumble strips, widen the lanes, add crash guardrails and truck safety off-ramps. Understand that wayfinders show you opportunities for longer-term success, and that road builders are going to keep building the road, and the result is better if you engage positively with them.
  • Simon Ser: Status update, August 2024 (2024/08/17 22:00)
Hi! After months of bikeshedding finishing touches, we’ve finally merged ext-image-capture-source-v1 and ext-image-copy-capture-v1 in wayland-protocols! These two new protocols supersede the old wlr-screencopy-v1 protocol. They unlock some nice features such as toplevel and cursor capture, as well as improved damage tracking. Thanks a lot to Andri Yngvason! He’s written a blog post about the new protocols with more details. The wlroots MR doesn’t have toplevel capture implemented yet, but that’s next on the TODO list. In other Wayland news, we’ve merged full support for explicit synchronization in wlroots. This generally results in a better system architecture than implicit synchronization, reduces over-synchronization for complicated pipelines, and makes wlroots work correctly with drivers lacking implicit synchronization support (e.g. NVIDIA). Alexander has implemented automatic X11 surface restacking in wlroots’ scene-graph. That way, all scene-graph compositors get proper X11 stack handling for free (Sway’s implementation was buggy). This should fix issues where the X11 server and the compositor don’t have the same idea of the relative ordering of surfaces, resulting in clicks going “through” windows or reaching invisible windows. Ricardo Steijn has contributed Sway support for tearing-control-v1. This allows users to opt in to immediate page-flips which don’t wait for the vertical sync point (VSync) to program new frames into the hardware. For tearing to be enabled, two conditions need to be fulfilled: tearing needs to be enabled per-output via the output allow_tearing command, and tearing needs to be enabled per-application, either via the tearing-control-v1 Wayland protocol or manually via the window allow_tearing command. I’ve also pushed kernel patches from André Almeida and me to fix a few bugs around tearing page-flips with the atomic KMS API, so once these land, forcing the legacy KMS API shouldn’t be necessary anymore. drm_info v2.7.0 has been released with a few new features and cleanups. Support for DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT and DRM_CAP_ATOMIC_ASYNC_PAGE_FLIP has been added, and a new flag has been introduced to display information from a JSON dump. Last, I’ve released a new version of go-maildir with a brand new API. Instead of referring to messages by their Maildir key and fishing back their full filename on each operation, the API exposes a Message type. It should be much nicer to use than the previous one. That’s all for August, see you next month!
  • Matthias Klumpp: Freedesktop Specs Website Update (2024/08/04 18:54)
The Freedesktop.org Specifications directory contains a list of common specifications that have accumulated over the decades and define how common desktop environment functionality works. The specifications are designed to increase interoperability between desktops. Common specifications make the life of both desktop-environment developers and especially application developers (who will almost always want to maximize the number of Linux DEs their app can run on and behave as expected in, to increase their app’s target audience) a lot easier. Unfortunately, building the HTML specifications and maintaining the directory of available specs has become a bit of a difficult chore, as the pipeline for building the site has become fairly old and unmaintained (parts of it still depended on Python 2). In order to make my life of maintaining this part of Freedesktop easier, I aimed to carefully modernize the website. I do have bigger plans to maybe eventually restructure the site to make it easier to navigate and not just a plain alphabetical list of specifications, and to integrate it with the Wiki, but in the interest of backwards compatibility and to get anything done in time (rather than taking on a mega-project that can’t be finished), I decided to just do the minimum modernization first to get a viable website, and do the rest later. So, long story short: Most Freedesktop specs are written in DocBook XML. Some were plain HTML documents, some were DocBook SGML, a few were plaintext files. To make things easier to maintain, almost every specification is written in DocBook now. This also simplifies the review process, and we may be able to switch to something else like AsciiDoc later if we want to. Of course, one could have switched to something other than DocBook, but that would have been a much bigger chore with a lot more broken links, and I did not want this to become an even bigger project than it already was, preferring to keep its scope somewhat narrow. DocBook is a markup language for documentation which has been around for a very long time, and therefore has older tooling around it. But fortunately our friends at openSUSE created DAPS (DocBook Authoring and Publishing Suite) as a modern way to render DocBook documents to HTML and other file formats. DAPS is now used to generate all Freedesktop specifications on our website. The website index and the specification revisions are also now defined in structured TOML files, to make them easier to read and to extend. A bunch of specifications that had been missing from the original website have also been added to the index and are rendered on the website now. Originally, I wanted to put the website live in a temporary location and solicit feedback, especially since some links have changed and not everything may have redirects. However, due to how GitLab Pages worked (and due to me not knowing GitLab CI well enough…) the changes went live before their MR was actually merged. Rather than reverting the change, I decided to keep it (as the old website did not build properly anymore) and to see if anything broke. So far, no dead links or bad side effects have been observed, but: if you notice any broken link to specifications.fd.o or anything else weird, please file a bug so that we can fix it! Thank you, and I hope you enjoy reading the specifications in better rendering and a more coherent look!
  • Tomeu Vizoso: Etnaviv NPU update 20: Fast object detection on the NXP i.MX 8M Plus SoC (2024/07/31 13:09)
I'm happy to announce that my first project regarding support for the NPU in NXP's i.MX 8M Plus SoC has reached the feature-complete stage. (Photo: CC BY-NC 4.0 Henrik Boye.)

For the last several weeks I have been working full-time on adding support for the NPU to the existing Etnaviv driver. Most of the existing code that supports the NPU in the Amlogic A311D was reused, but NXP used a much more recent version of the NPU IP, so some advancements required new code, and this in turn required reverse engineering. This work has been kindly sponsored by the Open Source consultancy Ideas On Board, for which I am very grateful. I hope this will be useful to those companies that need full mainline support in their products, even if it is just the start. This company is unique in working on both NPU and camera drivers in mainline Linux, so they have the best experience for products that require long-term support and vision processing.

Since the last update I have fixed the last bugs in the compression of the weights tensor and implemented support for a new hardware-assisted way of executing depthwise convolutions. Some improvements to how the tensor addition operation is lowered to convolutions were needed as well. Performance is pretty good already, allowing for detecting objects in video streams at 30 frames per second, so at a similar performance level to the NPU in the Amlogic A311D. Some performance features are left to be implemented, so I think there is still substantial room for improvement.

Currently the code is very much in a proof-of-concept state. The next step is cleaning it all up and submitting it for review to Mesa3D. In the meantime, you can find the draft code at https://gitlab.freedesktop.org/tomeu/mesa/-/tree/etnaviv-imx8mp. A big thanks to Philipp Zabel, who reverse engineered the bitstream format of the weight encoding and added some patches to the kernel that were required for the NPU to work reliably.
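As an aside, "lowering addition to convolutions" sounds more exotic than it is: conceptually, an elementwise add of two feature maps equals a 1x1 convolution over their channel-wise concatenation with fixed weights [1, 1]. A toy Rust sketch of that identity (purely illustrative, not the Etnaviv code):

    // out[i] = 1*a[i] + 1*b[i] = a[i] + b[i]: addition expressed as a
    // 1x1 convolution with one output channel and two input channels.
    fn add_via_1x1_conv(a: &[f32], b: &[f32]) -> Vec<f32> {
        assert_eq!(a.len(), b.len());
        let kernel = [1.0f32, 1.0]; // fixed convolution weights
        a.iter()
            .zip(b)
            .map(|(&x, &y)| kernel[0] * x + kernel[1] * y)
            .collect()
    }

    fn main() {
        let a = [1.0, 2.0, 3.0];
        let b = [10.0, 20.0, 30.0];
        assert_eq!(add_via_1x1_conv(&a, &b), vec![11.0, 22.0, 33.0]);
        println!("addition via 1x1 convolution checks out");
    }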
  • Alberto Ruiz: Booting with Rust: Chapter 2 (2024/07/26 15:06)
In a previous post I gave the context for my pet project ieee1275-rs, a framework to build bootable ELF payloads on Open Firmware (IEEE 1275). OF is a standard developed by Sun for SPARC, aimed at providing a standardized firmware interface that was rich and nice to work with; it was later adopted by IBM and Apple for POWER, and even by the OLPC XO. The crate is intended to provide a similar set of facilities as uefi-rs, that is, an abstraction over the entry point and the interfaces. I started the ieee1275-rs crate specifically for IBM's POWER platforms, although if people want to provide support for SPARC, G3/4/5s and the OLPC XO I would welcome contributions.

There are several ways the firmware takes a payload to boot. In Fedora we use a PReP partition type, which is a ~4MB partition labeled with the 41h type in MBR, or 9E1A2D38-C612-4316-AA26-8B49521E5A8B as the GUID in the GPT table; the ELF is written as raw data into the partition. Another alternative is a so-called CHRP script in “ppc/bootinfo.txt”; this script can load an ELF located in the same filesystem, and this is what the bootable CD/DVD installer uses. I have yet to test whether this is something that can be used across Open Firmware implementations. To avoid compatibility issues, the ELF payload has to be compiled as a 32-bit big-endian binary, as the firmware interface will often assume that endianness and address size.

The entry point

As I entered this problem I had some experience writing UEFI binaries, where the entry point looks like this:

    #![no_main]
    #![no_std]
    use uefi::prelude::*;

    #[entry]
    fn main(_image_handle: Handle, mut system_table: SystemTable<Boot>) -> Status {
        uefi::helpers::init(&mut system_table).unwrap();
        system_table.boot_services().stall(10_000_000);
        Status::SUCCESS
    }

Basically you get a pointer to a table of functions, and that's how you ask the firmware to perform system functions for you. I thought that maybe Open Firmware did something similar, so I had a look at how GRUB does this: it uses a PPC assembler snippet that jumps to grub_ieee1275_entry_fn(), and yaboot does a similar thing. I was already grumbling about having to look into how to embed an asm binary into my Rust project. But it turns out this snippet conforms to the PPC function calling convention, and since those snippets mostly take care of zeroing the BSS segment, and the ELF that Rust outputs does not generate one (although I am not sure whether this means there isn't a runtime one; I need to investigate this further), I decided to just create a small ppc32be ELF binary with the start function at the top of the .text section at address 0x10000.

I have created a repository with the most basic setup that you can run. With some cargo configuration to get the right linking options, and a script to create the disk image with the ELF payload on the PReP partition and run qemu, we can get this source code being run by Open Firmware:

    #![no_std]
    #![no_main]
    use core::{panic::PanicInfo, ffi::c_void};

    #[panic_handler]
    fn _handler(_info: &PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    #[link_section = ".text"]
    extern "C" fn _start(_r3: usize, _r4: usize, _entry: extern "C" fn(*mut c_void) -> usize) -> isize {
        loop {}
    }

Provided we have already created the disk image (check the run_qemu.sh script for more details), we can run our code by executing the following commands:

    $ cargo +nightly build --release --target powerpc-unknown-linux-gnu
    $ dd if=target/powerpc-unknown-linux-gnu/release/openfirmware-basic-entry of=disk.img bs=512 seek=2048 conv=notrunc
    $ qemu-system-ppc64 -M pseries -m 512 --drive file=disk.img
    [...]
    Welcome to Open Firmware
    Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
    This program and the accompanying materials are made available
    under the terms of the BSD License available at
    http://www.opensource.org/licenses/bsd-license.php

    Trying to load: from: /vdevice/v-scsi@71000003/disk@8000000000000000 ...
    Successfully loaded

Ta da! The wonders of getting your firmware to run an infinite loop. Here's where the fun begins.

Doing something actually useful

Now, to complete the hello world, we need to do something useful. Remember our _entry argument in the _start() function? That's our gateway to the firmware functionality. Let's look at how the IEEE 1275 spec tells us to work with it. This function is a universal entry point that takes a structure as an argument telling the firmware what to run; depending on the service, it expects some extra arguments attached. Let's look at how we can at least print “Hello World!” on the firmware console. The basic structure looks like this:

    #[repr(C)]
    pub struct Args {
        pub service: *const u8, // null-terminated ASCII string naming the service call
        pub nargs: usize,       // number of arguments
        pub nret: usize,        // number of return values
    }

This is just the header of every possible call; nargs and nret determine the size of the memory of the entire argument payload (see the sketch at the end of this post). Let's look at an example that just exits the program:

    #[no_mangle]
    #[link_section = ".text"]
    extern "C" fn _start(_r3: usize, _r4: usize, entry: extern "C" fn(*mut Args) -> usize) -> isize {
        let mut args = Args {
            service: "exit\0".as_ptr(),
            nargs: 0,
            nret: 0,
        };
        entry(&mut args as *mut Args);
        0 // the program exits in the call above; we return 0 to satisfy the compiler
    }

When we run it in qemu we get the following output:

    Trying to load: from: /vdevice/v-scsi@71000003/disk@8000000000000000 ...
    Successfully loaded
    W3411: Client application returned.

Aha! We successfully called firmware code!

To be continued…

To summarize, we've learned that we don't really need assembly code to produce an entry point for our OF bootloader (though we do need to zero our BSS segment if we have one), we've learned how to build a valid OF ELF for the PPC architecture, and how to call a basic firmware service. In a follow-up post I intend to show hello-world text output and how the ieee1275 crate helps to abstract away most of the grunt work of accessing common firmware services. Stay tuned!
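To make the nargs/nret layout above concrete, here is a minimal sketch of a call that returns a value, using the IEEE 1275 “milliseconds” client interface service (no input arguments, one returned cell). The MillisecondsArgs struct and the helper are illustrative names, not part of the crate:

    #[repr(C)]
    pub struct MillisecondsArgs {
        pub service: *const u8, // "milliseconds\0"
        pub nargs: usize,       // 0: no input arguments
        pub nret: usize,        // 1: one return cell follows the header
        pub ms: usize,          // return slot, filled in by the firmware
    }

    fn milliseconds(entry: extern "C" fn(*mut MillisecondsArgs) -> usize) -> usize {
        let mut args = MillisecondsArgs {
            service: "milliseconds\0".as_ptr(),
            nargs: 0,
            nret: 1,
            ms: 0,
        };
        entry(&mut args as *mut MillisecondsArgs);
        args.ms // milliseconds since some firmware-defined epoch
    }

A call with inputs works the same way: the input cells come right after the header, followed by the return cells, and nargs/nret tell the firmware how many of each to expect.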
  • Alberto Ruiz: Booting with Rust: Chapter 1 (2024/07/25 14:10)
I have been doing random coding experiments in my spare time that I never got to publicize much outside of my inner circles. I thought I would dust off my blog a bit to talk about what I did, in case it is useful for others. For some background, I used to manage the bootloader team at Red Hat a few years ago alongside Peter Jones and Javier Martinez. I learned a great deal from them, I fell in love with this particular problem space, and I have come to enjoy tinkering with experiments in it. There are many open challenges in this space that we could tackle to get a more robust boot path across Linux distros: boot attestation for the initramfs and cmdline, A/B rollbacks, TPM LUKS decryption (à la BitLocker)… One that particularly interests me is unifying the firmware-kernel boot interface across implementations in the hypothetical absence of GRUB.

Context: the issue with GRUB

The priority of the team was to support the RHEL boot path on all the architectures we supported, namely x86_64 (legacy BIOS & UEFI), aarch64 (UEFI), s390x and ppc64le (Open Power and PowerVM). These are extremely heterogeneous firmware interfaces; some are on their way to extinction (legacy PC BIOS) and some will remain weird for a while. GRUB (GRand Unified Bootloader), as its name states, intends to be a unified bootloader for all platforms. GRUB has to support a superset of firmware interfaces, and some of those, like legacy BIOS, do not offer much beyond rudimentary disk or network access and basic graphics handling. To load a kernel and its initramfs, this means that GRUB has to implement basic drivers for storage, networking, TCP/IP, filesystems, volume management… every time there is a new device storage technology, we need to implement a driver twice: once in the kernel and once in GRUB itself. GRUB is, for all intents and purposes, an entire operating system that has to be maintained. The maintenance burden is actually quite big, and recently it has been a target for the InfoSec community after the BootHole vulnerability. GRUB is implemented in C, it is an extremely complex code base, and it is not as well staffed as it should be. It implements its own scripting language (parser et al.) and it is clear there are quite a few CVEs lurking in there. So we are basically maintaining, in a different OS, code we already have to write, test and maintain in the Linux kernel, and in the context of RHEL, CentOS and Fedora, that other OS's whole job is to boot a Linux kernel. This realization led to the initiative that is these days taking shape in the discussions around nmbl (no more boot loader). You can read more about that in that blog post; I am not actively participating in that effort, but I encourage you to read about it. I do want to focus on something else and very specific, which is what you do before you load the nmbl kernel.

Booting from disk

I want to focus on the code that goes from the firmware interface to loading the kernel (nmbl or otherwise) from disk. We want some sort of A/B boot protocol that is somewhat normalized across the platforms we support, and we need to pick the kernel from the disk. The systemd community has led some of the boot modernization initiatives, vocally supporting the adoption of UKI and signed pre-built initramfs images, developing the Boot Loader Spec, and other efforts. At some point I heard Lennart making the point that we should standardize on using the EFI System Partition as /boot to place the kernel, as most firmware implementations know how to talk to a FAT partition. This proposal caught my attention, and I have been pondering whether we could have a relatively small codebase, written in a safe language (you know which), that could support a well-defined protocol for A/B booting a kernel on legacy BIOS, s390 and Open Firmware (UEFI and Open Power already support BLS snippets, so we are covered there). My modest inroad into testing this hypothesis so far has been the development of ieee1275-rs, a Rust crate to write programs for the Open Firmware interface; so far I have not been able to load a kernel by myself, but I still think the lessons learned and some of the code could be useful to others. Please note this is a personal experiment and nothing Red Hat is officially working on. I will be writing more about the technical details of this crate in a follow-up post, where I get into some of the details of writing Rust code for a firmware interface; this post is long enough already. Stay tuned.
  • Simon Ser: Status update, July 2024 (2024/07/15 22:00)
Hi! This month wlroots 0.18.0 has been released! This new version includes a fair share of niceties: ICC profiles, GPU reset recovery, fewer black screens when plugging in a monitor on Intel, a whole bunch of new protocol implementations, and much more. Thanks a lot to all contributors! Two recent merge requests made it into the release: Kenny's Vulkan renderer optimizations, and support for the SIZE_HINTS KMS property to use a smaller cursor plane on Intel to save power. For the next release we'll be trying out release candidates to formally focus on bugfixing and leave time for compositors and language bindings to update and report issues. I've continued working on various graphics-related topics; for instance, the wlroots implementation of the upcoming ext-screencopy-v1 protocol is now complete, and the protocol itself is almost ready (still figuring out the most difficult part: how to name it). I also sent out a kernel patch to fix tearing page-flips when cursor/overlay planes don't change (and are included in the atomic commit). I reviewed patches by Enrico Weigelt to improve libdrm's portability to OpenBSD and Solaris. Last, I've released libdisplay-info 0.2.0 with a new high-level API for colorimetry and support for more EDID/CTA/DisplayID blocks. To get the releases over with, let's briefly mention Goguma 0.7.0. This one unlocks file uploads, a new look based on Material You with an adaptive color scheme, many improvements to the iOS port, and text/media can now be shared to Goguma from other apps. slingamn has played with a gamja/Ergo setup configured with Forgejo as an OAuth server, and it worked nicely after fixing a gamja SASL-related bug and implementing a missing feature in Forgejo's OAuth token introspection endpoint! I also added a new libscfg API to write files; this can be useful to auto-generate configuration files, for instance. And I performed some more boring X.Org Foundation sysadmin duties, such as dealing with domain-related issues, recovering a server running out of disk space again, and convincing Postfix to start up. See you next month!
  • Madeeha Javed: Igalia's Latest Contributions to Graphics (2024/07/12 00:00)
The Igalia Graphics team has been expanding and making significant contributions in the space of open-source graphics. An earlier blog post by our team member Lucas provides excellent insight into the team's evolution over the past years. The following series of posts will attempt to summarize the team's recent engagements. This post covers our updates on GPU color management, Turnip, V3DV, DRM/KMS, Etnaviv and the community events we have been participating in. The next post will cover news from our CTS, Vulkan Video, Mesa CI and GPU reset work, and will talk about some new initiatives that we recently got involved in. Before delving into details, it is worth mentioning the recent highlights: Igalia hosted the 2024 Linux Display Next Hackfest in May this year and the X.Org Developers Conference 2023 in October last year, both in the beautiful city of A Coruña. These events were a huge success in creating a hub for graphics experts to foster open innovation. Continue reading for more details on these events.

A Vibrant Linux

Last year brought great news for AMD GPU color management: the AMD driver-specific color management properties reached upstream linux-next! My Igalia colleague Melissa Wen has been spearheading this effort for some time now and has journalled every detail in a series of blog posts. AMD has been improving its display color management pipeline with each new hardware generation. The new color capabilities, before and after plane composition, can be used by compositors and userspace applications to provide a vibrant experience to the end user. Exposing AMD driver-specific color properties is a step towards advanced color management on Linux, allowing gamut mapping, HDR rendering, HDR on SDR, and SDR on HDR. At a very high level, there are two parts to this support:
  • Upgrading the DRM/KMS Linux interface to expose the new features to userspace. One major challenge was the limited DRM/KMS interface, which only exposed a small set of post-blending color properties. The latest AMD Display Core Next hardware has many more post-blending and pre-blending capabilities. Melissa's work involved mapping these capabilities to the AMD driver's display core interface and then to the DRM interface. Her blog post provides a brief overview of this extensive mapping effort.
  • Updating AMD's Linux display driver to expose the new hardware features. AMD DCN 3.0 comes with cutting-edge color capabilities described by Melissa here, and this blog post also talks about AMD's Linux display subsystem components and the new properties.
I quote here some of Melissa's write-ups that helped me gain some understanding of this vast subject:
  • Navigating the Linux display subsystem
  • Melissa's XDC2023 talk

Turnip Upgrades

Turnip, the open-source Vulkan driver for Qualcomm Adreno GPUs, has been receiving major upgrades this year, focused on Qualcomm's Adreno 7XX GPUs. Judging from my colleague Danylo Piliaiev's Turnip update at FOSDEM 2024, Turnip is in a great state: major Vulkan extensions and better debug support have landed, AAA desktop games can now run via FEX + Turnip on Linux, and some in the Termux community are even running desktop games on Android with Box64/FEX + Turnip. The highlight of Danylo's talk is the A7XX support.
The team started the year with A7XX bring-up and is now ramping up on adding support for the new features introduced in A7XX: Mark Collins, who also represents Igalia at the Khronos Vulkan WG, implemented GMEM rendering for A7XX, which can be considerably faster and more power-efficient than sysmem rendering, depending on what's being rendered. He followed that up with support for unidirectional LRZ, bringing A7XX to parity with A6XX's GMEM rendering feature set and further boosting performance, with more performance improvements for A7XX on the horizon. Our colleague Amber Harmonia added support for shaders containing 64-bit atomic operations on signed and unsigned integers, and for rasterizing wide lines, while Fixed Stride Draw Table support is work in progress. In addition to new feature support, we are committed to providing a robust and performant driver. Recently, Job Noorman joined our Turnip team to improve the IR3 compiler. He improved the handling of predicate registers and added support for predication. Adreno GPUs have special registers, called predicate registers, that store the result of a condition; utilizing these registers can eliminate branches in the generated code, thereby improving performance. Similarly, more than 10% code-size reduction was observed in shader-db with his patch for using rptN instructions. Turnip has come far and has recently been giving the proprietary Adreno driver real competition. Here is Assassin's Creed running on Adreno + Turnip. Check the FPS on that screen!

Turnip Development Resources

Danylo usually analyzes some of the major Turnip issues in his series of blog posts “Turnips in the wild”, with part 3 being the latest addition. This is exactly what you need to jump-start Turnip development. As always, the team also discovered many new techniques for debugging GPU issues. GPU driver developers want to modify the GPU command stream at run time to see the outcome of editing it in different ways. Danylo implemented this highly sought-after feature as a tool for Adreno and describes how this tool can be used.

DRM/KMS Improvements

The management of display, graphics and composition in Linux lies in the kernel's DRM/KMS framework. Igalian Maíra Canal gives a full account of our notable contributions authoring, reviewing and testing kernel DRM patches, while I provide a few highlights here: My Igalia colleague André Almeida and Simon Ser have been working on Asynchronous Page Flips, an optimization that allows applications to flip a plane for immediate presentation. Support for this feature is now available in the atomic API, and with André's patch it is enabled for all planes, including the primary plane, if the hardware supports it. Maíra has been working on features crucial to graphics development on the RPi. She supplied per-client GPU usage statistics as well as global GPU utilization (see the sketch at the end of this post). To ensure continuous job submission to the GPU, stalls on CPU jobs submitted from userspace must be avoided; with a series of patches, Maíra moved the CPU job mechanisms from the V3DV driver to the V3D kernel driver.

We want more Pi!

After achieving Vulkan 1.2 conformance on V3DV, the Igalia team working on V3DV has been focusing on instrumental enhancements of the driver. V3DV is the Vulkan driver for the Broadcom VideoCore GPUs used in Raspberry Pi devices; the RPi 5, launched in October last year, came with a new BCM GPU.
Alejandro provided an overview of the team's journey through V3DV development since the RPi 4, and then talked about the challenges of RPi 5 support in V3DV. More improvements and new Vulkan extensions were supported last year, and this year Iago landed support for the Vulkan dynamic rendering extension. VK_KHR_dynamic_rendering is a popular Vulkan extension that has added flexibility to the Vulkan API by allowing users to skip render pass and framebuffer objects and start rendering immediately. And now it's available on the Pi. As mentioned in the DRM/KMS improvements above, Maíra, together with José María Casanova (Chema) and Melissa, supported GPU utilization stats and CPU job optimization. Here is a snapshot of GPU stats collected on the Pi 5: The RPi 5 continues to use the OpenGL/Wayland-based Wayfire compositor. Christopher was therefore tasked with enabling Wayfire to run on the RPi 3 and 4 as well, which he achieved by implementing software rendering through a Pixman back-end. Check out the demo: Iago also made some interesting observations while experimenting with SuperTuxKart on the Pi. You will be pleasantly surprised by how Vulkan outperformed OpenGL. The team has been working towards Vulkan 1.3 and we will hopefully be able to share more news on that front very soon.

Etnaviv

Christian Gmeiner, one of the maintainers of Etnaviv (the open-source graphics driver for Vivante GPUs), joined our team last year. We are very excited to have him on board; it is a testament to Igalia's dedication to open-source graphics software development. Christian is also enjoying being at Igalia, as he discusses in his blog post, where he also reveals his plans for Etnaviv:
  • Improving Etnaviv's Gallium driver.
  • Exposing GLES3.
  • Moving towards a new back-end compiler.
One of his latest updates is the user-space hardware database. He explains that a user-space driver HW database has been introduced to obtain GPU-specific information like features and limits, corresponding to the introduction of an in-kernel hardware database. I am sure this will be super helpful for the reverse engineers out there!

News & Community Events

Igalians are always eager to share their knowledge and expertise with the open-source community by participating in key organizations and events.

Goodbye ‘Xorg’ and Hello ‘Linux Foundation’

There is quite a trend of Igalians serving on the X.Org Foundation's Board of Directors. Samuel Iglesias took on this responsibility for a number of terms, but this year he is stepping down; he reminisced about his role in this blog post. Ricardo was elected to the board of directors in 2022 and stayed on until Q1 2024, leaving Christopher Michael as the only Igalian currently on the board. In his blog post, Ricardo introduces the X.Org Foundation and also tackles some questions about its future. Samuel was invited to join the Linux Foundation (Europe) advisory board, and he has accepted the invitation. This is a huge milestone for the whole graphics team. Congratulations, Sam!

2024 Linux Display Hackfest

This is a rather new event that has materialized in the Linux community to enhance the Linux display stack. Melissa's work on HDR and AMD color management, together with interesting discussions during the XDC 2023 Color Management workshop, paved the way for the event this year, and Igalia therefore graciously offered to host it.
The event attracted key participants from the Linux community, AMD, Nvidia, Google, Fedora, and GNOME, focusing on topics like HDR/color management, variable refresh rate, tearing, multiplane/hardware overlay for video and gaming, real-time scheduling, async KMS API, power saving vs. color/latency, content-adaptive scaling and sharpening, and display control. The success of this event has highlighted the need for future editions.

Embedded Open Source Summit 2024

At EOSS this year, we presented the following talk:
  • Alejandro Piñeiro, “Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driver for a New GPU”

FOSDEM 2024

At FOSDEM this year, we presented the following talks:
  • Danylo Piliaiev, “turnip: Update on Open Source Vulkan Driver for Adreno GPUs”
  • José María Casanova Crespo, Juan A. Suarez, “Graphics stack updates for Raspberry Pi devices”

Vulkanised 2024

At Vulkanised this year, we presented the following talks:
  • Stéphane Cerveau & Hyunjun Ko, “Implementing a Vulkan Video Encoder From Mesa to GStreamer”
  • Iago Toral, Faith Ekstrand, “8 Years of Open Drivers, including the State of Vulkan in Mesa”
Igalians who attended the event found it quite informative on the subject.

XDC 2023

Igalia hosted XDC 2023 in the city of its headquarters, A Coruña. We also presented many talks and demos:
  • Melissa Wen, “The rainbow treasure map: advanced color management on Linux with AMD/Steam Deck”
  • Danylo Piliaiev, “Debugging GPU faults: QoL tools for your driver”
  • Eric Engestrom with Martin Roukala and David Heidelberg, “Hosting a CI system at home - Slaying the regression dragon to bring stability to driver kingdom”
  • Iago Toral, Juan A. Suarez, Maíra Canal, “On-going challenges in the Raspberry Pi driver stack: OpenGL 3, Vulkan and more”
  • Maíra Canal, Melissa Wen, “Status Update of the VKMS DRM driver”
  • André Almeida, “Having fun with GPU resets in Linux”
  • Lucas Fryzek, “Freedreno on Android”
  • Christian Gmeiner, “etnaviv: status update”
The lightning talks and demos had equally active participation from Igalia:
  • Christopher Michael, “Wayfire - Making an OpenGL Wayland compositor render using Pixman”
  • Guilherme G. Piccoli, “To crash or not to crash: if you do, at least recover fast!”
  • Charles Turner, “Status of the Vulkan Video ecosystem”
  • Alejandro Piñeiro, “v3dv: experience using gfxreconstruct/apitrace traces for performance evaluation”
  • Eric Engestrom, “Being a Mesa release maintainer”
Workshops were organized for discussion of larger subjects like advanced color management (discussion summary) and continuous integration (discussion summary).

The Future

The Igalia graphics team has deep expertise in Mesa, Vulkan, OpenGL and the Linux kernel. We have also embraced new and really interesting graphics technologies, which I talk about in my next post.
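As a side note on the per-client GPU usage statistics mentioned in the DRM/KMS section above, here is a minimal sketch of how the standardized DRM fdinfo stats can be read from userspace; the render node path is an assumption that varies per system, and tools like gputop build on the same interface:

    // Sketch: open a DRM render node (which registers us as a DRM client)
    // and dump the drm-* keys the kernel exposes through fdinfo, such as
    // drm-driver, drm-engine-<name> (GPU time in ns) and drm-memory-*.
    use std::fs::{read_to_string, File};
    use std::os::unix::io::AsRawFd;

    fn main() -> std::io::Result<()> {
        let device = File::open("/dev/dri/renderD128")?; // path varies per system
        let fdinfo = read_to_string(format!("/proc/self/fdinfo/{}", device.as_raw_fd()))?;

        // Keep only the DRM-specific lines; other fdinfo keys are generic.
        for line in fdinfo.lines().filter(|l| l.starts_with("drm-")) {
            println!("{line}");
        }
        Ok(())
    }

A usage monitor samples these counters for every DRM fd of every process and computes deltas over time to turn the raw nanosecond counters into a utilization percentage.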
  • Christian Gmeiner: It All Started With a Nop - Part I (2024/07/11 00:00)
Note: this blog post is part 1 of a series about isaspec and its usage in the etnaviv GPU stack. I will add links to the other posts here once they are published. The first time I heard about isaspec, I was blown away by the possibilities it opens up. I am really thankful that Igalia made it possible to complete this crucial piece of core infrastructure for the etnaviv GPU stack.
  • Tomeu Vizoso: Etnaviv NPU update 19: Ideas On Board sponsors support for the NXP i.MX 8M Plus SoC (2024/06/28 07:08)
Last week I started work on adding support to the Etnaviv driver for the NPU inside the NXP i.MX 8M Plus SoC (VeriSilicon's VIPNano-SI+). This work is sponsored by the open source consultancy Ideas On Board, and will include the same level of support as for the Amlogic A311D SoC, which means full acceleration for the SSDLite MobileDet object detection model. Right now all kinds of basic convolutions are supported, and work is well under way for strided convolutions. For basic convolutions, most of the work was switching to a totally different way of encoding weights. At the low level, the weights are encoded with Huffman, with zero run-length encoding on top (see the sketch at the end of this post). This low-level encoding has already been reverse engineered and implemented by Philipp Zabel of Pengutronix, as mentioned in my previous update on the variant of this NPU shipped inside the Amlogic S905D3. How weights are laid out on top of that encoding is also different, so I had to reverse engineer that and implement it in the Mesa driver. That, plus some changes to how tiling is computed, got basic convolutions working, and then I moved on to strided convolutions. Pointwise convolutions got supported at the same time as basic convolutions, as they are not any different on this particular hardware. Strided convolutions are still not natively supported by the hardware, so I reused the code that lowers them to basic convolutions. But the existing jobs that use the tensor manipulation cores to transform the input tensor for strides contained many assumptions that don't hold on this hardware. So I have been reverse engineering these differences, and now I have all kinds of strided convolutions supported up to 32 output channels. I feel that these will be done after addressing a couple of details about how the tensor reshuffle jobs are distributed among the available TP cores. Afterwards I will look at depthwise convolutions, which may be supported natively by this hardware, whereas on the A311D they were lowered to basic convolutions. Then on to tensor addition operations, and that should be all that is needed to get SSDLite MobileDet running, hopefully close to the performance of the closed-source driver. I'm very grateful to Ideas On Board for sponsoring this work, for their trust in me to get it done, and for their vision of a fully featured mainline platform that all companies can base their products on without being held captive by any single vendor. I'm testing all this on a Verdin iMX8M Plus board kindly offered by Daniel Lang at Toradex, thanks!
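To illustrate the “zero run-length encoding” part of the weight compression in general terms, here is a simplified sketch of the idea; the actual bitstream that Philipp Zabel reverse engineered combines this with Huffman coding and differs in its details:

    // Sketch: pruned/quantized convolution weights contain long runs of
    // zeros, so each run is collapsed into a (zero_count, value) pair.
    fn zero_rle(weights: &[i8]) -> Vec<(u8, i8)> {
        let mut out = Vec::new();
        let mut zeros: u8 = 0;
        for &w in weights {
            if w == 0 && zeros < u8::MAX {
                zeros += 1; // extend the current run of zeros
            } else {
                out.push((zeros, w)); // emit run length plus the next symbol
                zeros = 0;
            }
        }
        if zeros > 0 {
            out.push((zeros, 0)); // trailing zeros
        }
        out
    }

    fn main() {
        let weights = [0, 0, 0, 5, 0, -3, 0, 0, 0, 0, 7];
        // prints [(3, 5), (1, -3), (4, 7)]
        println!("{:?}", zero_rle(&weights));
    }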
  • Lucas Fryzek: Software Rendering and Android (2024/06/26 23:00)
My current project at Igalia has had me working on Mesa's software renderers, llvmpipe and lavapipe. I've been working to get them running on Android, and I wanted to document the progress I've made, the challenges I've faced, and talk a little bit about the development process for a project like this. My work is not totally merged into upstream Mesa yet, but you can see the MRs I made here:
  • llvmpipe: Add android platform integration
  • u_gralloc/fallback: Set fd from handle directly
  • llvmpipe & lavapipe: Implement sync fd import/export extensions
  • lavapipe: Implement VK_EXT_external_memory_dma_buf

Setting up an Android development environment

Getting system-level software to build and run on Android is unfortunately not straightforward. Since we are doing software rendering we don't need a physical device and can instead make use of the Android emulator. If you didn't know, Android has two emulators: the common one most people use is “goldfish”, and the lesser-known one is “cuttlefish”. For this project I did my work on the cuttlefish emulator, as it's meant for testing the Android OS itself instead of just Android apps, and is more reflective of real hardware. The cuttlefish emulator takes a little more work to set up, and I've found that it only works properly in Debian-based Linux distros. I run Fedora, so I had to run the emulator in a Debian VM. Thankfully Google has good instructions for building and running cuttlefish, which you can find here. The instructions show you how to set up the emulator using nightly build images from Google. We'll also need to set up our own Android OS images, so after we've confirmed we can run the emulator, we need to start looking at building AOSP. For building our own AOSP image, we can also follow the instructions from Google here. For the target we'll want aosp_cf_x86_64_phone-trunk_staging-eng. At this point it's a good idea to verify that you can build the image, which you can do by following the rest of the instructions on the page. Building AOSP from source does take a while though, so prepare to wait potentially an entire day for the image to build. Also, if you get errors complaining that you're out of memory, you can try reducing the number of parallel builds. Google officially recommends having 64GB of RAM; I only had 32GB, so some packages had to be built with parallel builds set to 1 so I wouldn't run out of RAM. For running this custom-built image on cuttlefish, you can just copy all the *.img files from out/target/product/vsoc_x86_64/ to the root cuttlefish directory, and then launch cuttlefish. If everything worked you should be able to see your custom-built AOSP image running in the cuttlefish web UI.

Building Mesa targeting Android

Working from the changes in MR !29344, building llvmpipe or lavapipe targeting Android should just work™️. Getting to that stage required a few changes. First, llvmpipe actually already had some support on Android, as long as it was running on a device that supports a DRM display driver; in that case it could use the dri window system integration, which already works on Android. I wanted to get llvmpipe (and lavapipe) running without dri, so I had to add support for Android in the drisw window system integration. To support Android in drisw, this mainly meant adding support for importing dmabufs as framebuffers. The Android windowing system will provide us with a “gralloc” buffer which contains a dmabuf fd that represents the framebuffer.
Adding support for importing dmabufs in drisw means we can import and begin drawing to these framebuffers. Most of the changes to support that can be found in drisw_allocate_textures and the underlying changes to llvmpipe to support importing dmabufs in MR !27805. The EGL Android platform code also needed some changes to use the drisw window system code. Previously this code would only work with true dri drivers, but with some small tweaks it was possible to have it initialize the drisw window system and then use it for rendering if no hardware devices are available. For lavapipe the changes were a lot simpler. The Android Vulkan loader requires your driver to have the HAL_MODULE_INFO_SYM symbol in the binary, so that got created and populated correctly, following other Vulkan drivers in Mesa like turnip. Then the image creation code had to be modified to support the VK_ANDROID_native_buffer extension, which allows the Android Vulkan loader to create images using Android native buffer handles. Under the hood this means getting the dmabuf fd from the native buffer handle. Thankfully Mesa already has some common code to handle this, so I could just use that. Some other small changes were also necessary to address crashes and other failures that came up during testing. With the changes out of the way, we can now start building Mesa on Android. For this project I had to update the Android documentation for Mesa to include steps for building LLVM for Android, since the version Google ships with the NDK is missing libraries that llvmpipe/lavapipe need to function. You can see the updated documentation here and here. After sorting out LLVM, building llvmpipe/lavapipe is the same as building any other Mesa driver for Android: we set up a cross file to tell Meson how to cross-compile, and then we run Meson (see the sketch at the end of this post). At this point you could manually modify the Android image and copy these files to the VM, but I also wanted to support building a new AOSP image directly including the driver. In order to do that you also have to rename the driver binaries to match Android's naming convention, and make sure the SO_NAME matches as well. If you check out this section of the documentation I wrote, it covers how to do that. If you followed all of that, you should have built a version of llvmpipe and lavapipe that you can run on Android's cuttlefish emulator.

Android running lavapipe

References:
  • https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29344 (main MR with Android changes)
  • https://source.android.com/docs/devices/cuttlefish/get-started (Google's official guide for getting started with the cuttlefish emulator)
  • https://source.android.com/docs/setup/build/building (Google's official guide for building AOSP images)
  • https://gitlab.freedesktop.org/mesa/mesa/-/blob/9705df53408777d493eab19e5a58c432c1e75acb/docs/drivers/llvmpipe.rst (my updated documentation in the MR for llvmpipe)
  • https://gitlab.freedesktop.org/mesa/mesa/-/blob/9705df53408777d493eab19e5a58c432c1e75acb/docs/android.rst (my updated documentation in the MR for Android integration in Mesa)
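The post doesn't reproduce a cross file itself, so as a rough sketch, an Android cross file for Meson can look like the following, loosely modeled on Mesa's Android documentation; the NDK install path and the Android API level here are placeholders you would adjust for your setup:

    # android-cross.ini -- illustrative only; paths and API level are placeholders
    [binaries]
    ar = '/opt/ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar'
    c = '/opt/ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android34-clang'
    cpp = '/opt/ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android34-clang++'
    strip = '/opt/ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip'

    [host_machine]
    system = 'android'
    cpu_family = 'x86_64'
    cpu = 'x86_64'
    endian = 'little'

With a file like that in place, the build is configured with something along the lines of meson setup build-android --cross-file android-cross.ini, plus whichever Mesa options select the desired drivers and platform.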
  • Peter Hutterer: GNOME tablet support papercut fixes (2024/06/26 04:59)
Over the last few months I've started looking into some of the papercuts that affect graphics tablet users in GNOME. Now that most of those fixes have gone in, let's see what has happened:

Calibration fixes and improvements (GNOME 47)

The calibration code, a descendant of the old xinput_calibrator tool, was in pretty rough shape and didn't work particularly well. That's now fixed, and I've made the calibrator a little easier to use too. Previously the timeout was quite short, which made calibration quite stressful; that timeout is now per target rather than for completing the whole calibration process. Likewise, the calibration targets now accept larger variations - something probably not needed for real use-cases (you want the calibration to be exact) but it certainly makes testing easier, since clicking near the target is good enough. The other feature added was to allow calibration even when the tablet is manually mapped to a monitor. Previously this only worked in the "auto" configuration, but some tablets don't correctly map to the right screen and thus lost calibration abilities. That's fixed now too. A picture says a thousand words, except in this case where the screenshot provides no value whatsoever. But here you have it anyway.

Generic tablet fallback (GNOME 47)

Traditionally, GNOME would rely on libwacom to get some information about tablets so it could present users with the right configuration options. The drawback was that a tablet not recognised by libwacom didn't exist in GNOME Settings, and there was no immediately obvious way of fixing this: the panel either didn't show up, or (with multiple tablets) the unrecognised one was missing. The tablet worked (because the kernel and libinput didn't require libwacom) but it just couldn't be configured. libwacom 2.11 changed the default fallback tablet to be a built-in one, since this is now the most common unsupported tablet we see. Together with the new fallback handling in GNOME Settings, this means that any unsupported tablet is treated as a generic built-in tablet and provides the basic configuration options for those (Map to Monitor, Calibrate, assigning stylus buttons). The tablet should still be added to libwacom, but at least it's no longer a requirement for configuration. Plus there's now a link to the GNOME Help to explain things. Below is a screenshot of how this looks (after modifying my libwacom to no longer recognise the tablet, poor Intuos).

Monitor mapping names (GNOME 47)

For historical reasons, the names of the displays in the GNOME Settings Display configuration differed from the ones used by the Wacom panel. Not ideal, and that bit is now fixed: the Wacom panel lists the name of the monitor, and the connector name if multiple monitors share the same name. You get the best value out of this if you have a monitor vendor with short names. (This is not a purchase recommendation.)

Highlighted SVGs (GNOME 46)

If you're an avid tablet user, you may have multiple stylus tools - but it's also likely that you have multiple tools of the same type, which makes differentiating them in the GUI hard. Which is why they're highlighted now: if you bring the tool into proximity, the matching image is highlighted to make it easier to know which stylus you're about to configure. Oh, and in the process we added a new SVG for AES styli too, to make the picture look more like the actual physical tool. The <blink> tag may no longer be cool, but at least we can disco our way through the stylus configuration now.
More Pressure Curves (GNOME 46)

GNOME Settings historically presents a slider from "Soft" to "Firm" to adjust the feel of the tablet tip (which influences the pressure values sent to the application). Behind the scenes this was converted into a set of 7 fixed curves, but thanks to an old mutter bug those curves only covered a small amount of the possible range. This is now fixed, so you can really go from pencil-hard to jelly-soft, and the slider now controls an almost-continuous range instead of just 7 curves. Behold, a picture of slidery goodness:

Miscellaneous fixes

And of course a bunch of miscellaneous fixes. Things that I quickly found were support for Alt in the tablet pad keymappings, fixing erroneous backwards movement when wrapping around on the ring, a long-standing stylus button mismatch, better stylus naming, and a fix for a rather odd bug that caused configuration issues if the eraser was the first tool ever to be brought into proximity. There are a few more things in the pipe, but I figured this is enough to write a blog post so I no longer have to remember to write a blog post about all this.