Planet Mozilla - Latest News

  • The Mozilla Blog: Huwa: From a WhatsApp group to sharing Palestinian olive oil with the world (2024/11/25 18:32)
    <figcaption class="wp-element-caption">From left: Omar Saleh Huwaoushi, Bilal Othman Huwaoushi and Maryam Othman Huwaoushi. For the family, Huwa is not just a business — it’s a legacy. Credit: Diane Sooyeon Kang</figcaption> Diane Sooyeon Kang is a food and travel photographer and writer with a passion for storytelling. She has traveled the world extensively, working with esteemed publications and brands. You can find more of her work at dianeskang.com. A vibrant spread adorns an overflowing table, filled with precious hand-painted ceramics from Palestine, hummus, yogurt dips, za’atar, and fresh tomatoes and mint picked from the backyard garden. Copious amounts of olive oil fill several bowls and are drizzled over nearly every dish. Tucked between the plates are olive oil squeeze bottles adorned with playful illustrations and stickers. The olive oil, with a surprisingly fruity yet peppery kick, is none other than Huwa, the Huwaoushi family’s newly launched product, made from handpicked, cold-pressed olives straight from a family-owned olive grove. “We didn’t want to take ourselves too seriously when making the packaging. Olive oil production, especially in Palestine, has never been a purely serious or somber activity,” shares Bilal Othman Huwaoushi, one of three Huwaoushi siblings involved in creating Huwa. “It’s about families coming together — kids playing, aunts and uncles gathering to pick olives.” This sense of joyful community is mirrored in the brand’s design, which includes playful illustrations of birds, a reference to Palestinian symbols, and even comic-style artwork on the inside sleeve. <figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption> For generations, Bilal’s family has been farming olives in Palestine. This deep-rooted tradition is evident in how the family talks about their land, which has produced olives for as long as they can remember. “The same trees we eat from are the ones my grandfather planted as a kid,” Bilal shares, recounting the heritage of their olive groves and the rare, age-old practices that make their olive oil unique. When Bilal’s father, Omar Saleh Huwaoushi, a retired cab driver, immigrated to Chicago in the 1980s, he missed the flavors of home, especially the olive oil he grew up with. Unable to find anything like it, he started bringing it back with him. “Our family has been growing olives for centuries, but we’re the first generation to bring this olive oil to the U.S.,” Bilal states. Once their friends got a taste of the oil, they wanted it too. And from there, it took on a life of its own. As a lower-income family, everyone worked together to build the olive oil side business. Before Huwa was created, the Huwaoushis sold their olive oil in 17-liter tanks through a WhatsApp group chat. Feedback was overwhelmingly positive. “People were telling us this was some of the best oil they’d ever tasted,” Bilal recalls. Since the oil came directly from their uncle’s farm, they were able to offer a premium product at a fraction of the price compared to other premium olive oil brands. <figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption> Through its popularity in WhatsApp groups, Bilal saw an opportunity, and they decided to brand it as Huwa. It became a passion project to share his Palestinian heritage with a larger audience and create something meaningful with his immediate and extended family, and for the people in his village. 
The olive oil is deeply interwoven with the story of their town called Aqraba, a close-knit village where people remain tightly connected even across generations and continents. As Omar shares, “If you mention my name in the village, everyone knows my family. Even after 40 years abroad, returning feels like I never left.” With over 600 family members across multiple generations, the legacy of togetherness is alive and well, both within the family and in their interactions with the community back home. Their heritage is celebrated each olive harvest season, when family and friends come together to enjoy freshly pressed oil, often with simple dishes like bread soaked in olive oil with onions and sumac. This ritual, as they explain, is not just a meal; it’s an expression of gratitude for the harvest, a way to reconnect with the land and with each other. “In the winter, we’d bake bread, soak it in olive oil, and sprinkle it with sumac and chicken — it’s such a simple meal, but it brings everyone together.” [Photo: Diane Sooyeon Kang] Unlike modern olive harvesting practices that often use pesticides, chemical fertilizers, or pest prevention methods like wrapping trees in plastic, Palestinian farmers rely on multi-generational techniques and agricultural wisdom. Farmers plant fig trees within olive groves to naturally draw pests away from the olive trees, fertilize the soil only with compost, and rely solely on rainfall for irrigation — this preserves soil purity and yields high-quality oil. “The entire production process is very unique — from the way we handle the soil to how we cold-press the oil,” Bilal says. “While many cultures are defined by their food, our culture is unique in that it’s defined by the food process itself.” To produce most commercial-grade olive oil, large machines typically shake trees, disrupting the birds inhabiting the trees and causing unripened olives, branches and leaves to fall and get processed together. In contrast, Huwa uses a gentler method: Workers lean ladders against the trees and hand-pick only ripe olives, enhancing both oil quality and ecosystem balance. These are indigenous Aqrabawi practices honed over a thousand years of farming. The community mill used for pressing ensures fair compensation for everyone involved in the harvest. Olives are cold-pressed at low temperatures, preserving nutrients and enhancing flavor. [Photo: Diane Sooyeon Kang] For Bilal’s family, Huwa is not just a business — it’s a legacy. Their uncle, deeply respected in the community for his agricultural knowledge, serves as a critical cornerstone and is one of many keepers of this tradition. Preserving culture, heritage and knowledge is central to Huwa’s mission. In many ways, Huwa represents a bridge: one that connects Palestinian culture with the American community and preserves an ancient tradition in a modern world. As Huwa continues to grow, the family’s goal is to uphold their heritage while inviting others to experience it through the taste of their olive oil. “The entire process has been pretty joyful, but there are so many things that have to be done,” Bilal said. “Content and copywriting have been challenging, so using AI tools has been helpful in that regard. 
I’d much rather spend that time on the street, having people sample our oil.” Like many entrepreneurs, Bilal has found that with the support of new technology and tools, tasks that were once time-consuming and tedious have become easier and quicker to complete. Yet, despite the workload, the business’s guiding purpose remains unchanged. “I think the nice thing about working with your family is that sometimes we decide to just hang out and other times we keep going,” Bilal said. “ At the end of the day, the KPI [key performance indicator] for whether we succeed is whether we’re enjoying each other’s company — that’s the guiding principle of how we like to run the business.” Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out Huwa’s Solo website here. Ready to start creating? Launch your website The post Huwa: From a WhatsApp group to sharing Palestinian olive oil with the world appeared first on The Mozilla Blog.
  • The Mozilla Blog: La Humita: 20 years of authentic Ecuadorian flavors in Chicago (2024/11/25 18:18)
    <figcaption class="wp-element-caption">Nestor Correa founded La Humita in 2003. Later, he hired Chef Juan Esteban, who introduced new dishes focused on Ecuadorian seafood. Credit: Diane Sooyeon Kang </figcaption> Diane Sooyeon Kang is a food and travel photographer and writer with a passion for storytelling. She has traveled the world extensively, working with esteemed publications and brands. You can find more of her work at dianeskang.com. When Nestor Correa opened La Humita in Chicago in 2003, he wasn’t just opening a restaurant; he was creating a culinary homage to his family’s heritage and Ecuadorian roots. Named after la humita — a traditional sweet tamale made from ground corn — the restaurant started with recipes passed down through generations, becoming one of the first Ecuadorian restaurants in the city to offer an authentic taste of Ecuador to a diverse audience.Nestor’s journey into the restaurant world began long before La Humita opened its doors. For over 15 years, he worked as a server at the Marriott Hotel, where he cultivated a deep appreciation for the restaurant industry. “I’ve always had a passion for our cuisine,” Nestor explains. “My mission is to leave something cultural in the city. It’s why I chose to open an Ecuadorian restaurant over other types.” This dedication stems from his childhood, growing up with his mother’s and sister’s homemade recipes that friends and family always praised. <figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang </figcaption> La Humita’s initial concept was a high-end dining experience. When it opened in 2003, it quickly gained media attention, appearing in publications like The Chicago Tribune and Chicago Magazine. With a unique menu of Ecuadorian dishes, La Humita quickly became a standout in Chicago’s dining scene, introducing many locals to the richness of Ecuadorian cuisine for the first time. But when the pandemic struck, Nestor faced a major setback. La Humita closed its doors for two years, during which he reevaluated the concept. He attempted to relaunch as La Humita Express, a fast-food version inspired by restaurants like Panda Express, with customizable plates and a simplified approach to serving.However, this new concept didn’t resonate with his loyal Ecuadorian clientele, who longed for more traditional dishes. “Our community didn’t accept the concept,” Nestor says. “They want specific, traditional dishes.” Realizing that the express concept was not meeting his community’s needs, he restructured the restaurant to honor its original offerings, bringing back traditional plates that felt authentic and comforting. This shift was solidified with the hiring of Juan Esteban, a chef from Quito, Ecuador, who introduced new dishes focused on Ecuadorian seafood, such as shrimp ceviche and the iconic encebollado de pescado (fish soup). “Our chef has brought fresh ideas and traditional flavors,” Nestor shares, crediting this hire with revitalizing La Humita’s menu. <figcaption class="wp-element-caption">“Our chef has brought fresh ideas and traditional flavors,” Nestor Correa said of Chef Juan Esteban. Credit: Diane Sooyeon Kang </figcaption> This renewed focus on Ecuadorian authenticity has also allowed La Humita to double down on what sets it apart. “We only serve 100% Ecuadorian cuisine — no Mexican, American or Italian dishes,” Nestor emphasizes. 
Their approach highlights a distinct culinary identity, one that differentiates Ecuadorian cuisine from other Latin American food, especially with dishes like ceviche, which is made with boiled seafood rather than seafood cooked in lemon juice as it’s typically prepared in other countries. Despite these adaptations, challenges remain, particularly in reaching new customers. “For us, it’s been complicated because our cuisine isn’t as well-known as Italian or Mexican,” Nestor admits. “It’s hard to make Ecuadorian food popular.” Initially, La Humita attracted a mix of local and international patrons, but over the years, its customer base has become primarily Ecuadorian. Now, with a focus on maintaining cultural authenticity, Nestor hopes to regain a wider audience. One major hurdle to expanding his reach has been the restaurant’s limited digital presence. While they have social media accounts, Nestor acknowledges that without a professional website, they lack visibility. “Many business owners don’t realize how critical a website is,” he says. “But if you don’t have one, or if it’s not up-to-date, you’re missing out. People are looking for information, and not having it can hurt your business.” With this in mind, Correa hopes to build a new website that will better showcase the restaurant’s true identity and give diners a clearer picture of the food and experiences they can expect. Even small changes, like displaying photos of their dishes in the restaurant windows, have made a noticeable difference, drawing in more people from the neighborhood. [Photo: Diane Sooyeon Kang] For Nestor, the business is personal. Not only does his family’s legacy influence the menu — his mother’s and sister’s recipes remain unchanged on dishes like la humita and traditional tamales — but his family also plays an active role in the restaurant’s operations. His wife, who is originally from Mexico, learned to cook Ecuadorian food from his mother and sister, and she now works in the kitchen. “We grew up with business in our blood,” Nestor explains. “My wife, my sister, and even my mother, who’s 93, have all helped bring Ecuadorian flavors to life here.” As Nestor reflects on La Humita’s 20-year journey, he remains steadfast in his commitment to Ecuadorian cuisine. Going digital may help him reach more people, but for Nestor, the heart of La Humita will always be the authenticity and warmth of home-cooked Ecuadorian dishes. And with the support of his family and community, he’s hopeful La Humita will continue to thrive for many years to come. His vision of sharing Ecuadorian cuisine with Chicago continues to guide him, and he’s excited for what the future holds. “It’s all about sharing my culture through my food,” he says. “Everything we do is a reflection of that.” [Photo: Diane Sooyeon Kang] Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out La Humita’s Solo website here. Ready to start creating? Launch your website The post La Humita: 20 years of authentic Ecuadorian flavors in Chicago appeared first on The Mozilla Blog.
  • The Mozilla Blog: Diaspora: Where Southern, West African and Caribbean traditions come alive in Chicago (2024/11/25 17:41)
    <figcaption class="wp-element-caption">Rob Carter is the founder of Diaspora, a progressive Afrocentric food concept that celebrates Southern, West African and Caribbean flavors. Credit: Diane Sooyeon Kang </figcaption> Diane Sooyeon Kang is a food and travel photographer and writer with a passion for storytelling. She has traveled the world extensively, working with esteemed publications and brands. You can find more of her work at dianeskang.com. For Rob Carter, founder of Diaspora, food is a bridge to history, identity and community. Inspired by the rich flavors of Southern, West African and Caribbean cuisines, Diaspora brings Rob’s heritage to life through dishes that tell a story. What started as a pop-up in Chicago has become a platform to share his roots, tackle the challenges of entrepreneurship and prove that food has the power to unite people across cultures and generations.Growing up in a family where food was central to daily life, Rob’s path into the culinary world began at an early age. “My grandmother was my first mentor, even though she didn’t know it,” he recalls fondly. As a child, he often helped her prepare meals that fed large groups; this instilled in him a deep appreciation for hospitality and the ability of food to bring people together. His grandmother lived with 21 siblings and cousins, making her well-suited to cooking for small crowds. Her Southern cooking became the foundation of Rob’s culinary identity. <figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption> While Rob’s early experiences shaped his love for food, becoming an entrepreneur was not without its challenges. Despite years of working in upscale restaurants, including Michelin-starred Vie Restaurant and stages at Band of Bohemia and Blackbird in Chicago, the leap to running his own business was daunting. “You don’t learn business by working the line,” he says. “You learn by doing, making mistakes, and figuring out what works.” One of the toughest lessons has been the art of timing. Organizing pop-up events — where crowds are unpredictable and profit margins are tight — has proved to be a learning curve. “I once did a pop-up during Lollapalooza weekend, and it was a disaster,” he recalls. “The city was buzzing with festival-goers, and my event was completely overlooked.” These setbacks, however, have helped him refine his approach, teaching him to be more strategic and adapt when necessary. <figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption> In addition to timing, Rob’s journey has highlighted the importance of collaboration. Managing partnerships and navigating last-minute cancellations has been a source of stress. “It’s tough when you rely on other people’s schedules, and then they cancel,” he says, referring to a series of collaborations that fell through in June. Yet, these challenges have only fueled his determination to push forward and remain flexibleTechnology has provided a new set of opportunities and challenges. Going digital, especially for a small business with limited resources, can be daunting. But for Rob, the shift is not just about convenience — it’s a way to craft a brand and tell a story. “You have to build a community before you even open your doors,” he shares, describing how Diaspora is leveraging social media to connect with people and create buzz before opening a physical space. The goal is to have a loyal following already in place by the time the doors open, so the business doesn’t have to build momentum from scratch. 
“People want to know when the space is opening, not when we’re trying to convince them to come. That’s the difference,” he says. [Photos: Diane Sooyeon Kang] As he continues to grow Diaspora, the chef remains focused on creating meaningful experiences for his guests. Whether through pop-up dinners or catered events, he aims to foster connections and create spaces where people feel part of something special. “It’s about building trust,” he explains. “If people feel like they’re part of something meaningful, they’ll keep coming back.” Looking to the future, the chef envisions expanding his culinary offerings while also keeping the spirit of collaboration alive. While the idea of a brick-and-mortar restaurant is tempting, the rising costs of rent and food have made him cautious. Instead, he’s focused on continuing to build a strong presence through pop-ups and collaborations before taking the plunge into opening a physical space. For Rob, this journey is about more than just food; it’s about culture and the connections that can be formed around the table. “I want to create something bigger than just a restaurant,” he says. “It’s about purpose, community and connection.” [Photo: Diane Sooyeon Kang] Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out Diaspora’s Solo website here. Ready to start creating? Launch your website The post Diaspora: Where Southern, West African and Caribbean traditions come alive in Chicago appeared first on The Mozilla Blog.
  • The Mozilla Blog: Shopping for everyone? We’ve got you covered with the Fakespot Holiday Gift Guide 2024 (2024/11/25 17:36)
    When it comes to gifting, it’s the thought that counts, right? Then why does it feel so terrible when your creative, thoughtfully chosen gift ends up being a total dud? We’re talking glitching karaoke machines, pepper grinders that dump full peppercorns on your pasta, and doll-sized sweaters that you definitely ordered in a women’s size medium. The fact is, while online shopping has opened up a world of possibilities for gift-givers, it has also created ample opportunities for scammers and bad actors to sell products that don’t live up to their promises. Throw in some shady ranking tactics and AI-generated reviews and suddenly your simple gift search feels like a research project — or a game of whack-a-mole. This year, the Fakespot Holiday Gift Guide is here to help. It showcases products vetted with Fakespot’s product review analysis technology and helps weed out items with untrustworthy reviews. Whoever you’re shopping for — and whatever their age or interest — this guide will help you find quality products backed by reliable customer feedback. What makes the Fakespot Holiday Gift Guide 2024 stand out? The Fakespot Holiday Gift Guide is more than just a list of popular products. It’s a curated selection backed by advanced AI technology designed to analyze reviews and ratings across major e-commerce sites like Amazon, Best Buy and Walmart. Fakespot works to protect shoppers from misleading or fake reviews – a common problem during the holiday rush when online shopping activity spikes. Every product featured in the Fakespot Holiday Gift Guide has received a Fakespot Review Grade of A or B, indicating reliable reviews likely written by real customers who left unbiased feedback. By filtering out products with untrustworthy reviews, the Fakespot Holiday Gift Guide helps you shop smarter and avoid the disappointments of low-quality or misrepresented items. It’s a practical resource for anyone looking to cut through the holiday noise and make more informed purchases. Gift ideas for everyone you love (or just like a lot) with the Fakespot Holiday Gift Guide 2024 The guide spans a wide variety of categories, offering options for every type of person on your gift list. Here’s a look at some of the featured categories: Tech and electronics From wireless earbuds and portable chargers to smart home devices, the tech section has something for every gadget lover. Fakespot has carefully selected items that are not only popular, but also come from brands with trusted reviews. Product recommendations: Loop Experience 2 ear plugs; Sharper Image LED light-up word clock; FHD 1080P Digital Cameras for Kids. [Product image. Fakespot Review Grade: B] Fitness and outdoors Offering quality items like durable yoga mats, reliable tents, and versatile sports equipment, the Fitness and Outdoors section is perfect for active individuals and nature enthusiasts. Product recommendations: Spikeball 3 Ball original roundnet game set; LEATHERMAN Bond multitool; Crazy Creek The Chair for camping and backpacking. [Product image. Fakespot Review Grade: B] Home and kitchen For anyone looking to elevate their living space, the home and kitchen category is a goldmine. Fakespot has highlighted items like cookware sets, coffee makers and air purifiers. 
Product recommendations: Instant Vortex Plus air fryer; Fellow Stagg EKG Pro electric gooseneck kettle; Fishwife X Fly by Jing smoked salmon trio. [Product image. Fakespot Review Grade: A] Fashion and beauty Holiday fashion doesn’t have to be a gamble. With Fakespot’s selection, you’ll find stylish, quality pieces, from cozy scarves to durable watches and chic bags. Product recommendations: Bath bomb gift set; Kitsch satin hair scrunchies; Baggu medium nylon crescent bag. [Product image. Fakespot Review Grade: B] Toys and games Shopping for kids can be a challenge, especially when reviews don’t tell the whole story about safety or durability. This section brings together exciting and interactive options like classic board games, challenging puzzles and engaging card games for all ages. Product recommendations: Red panda weighted stuffed animals; Woobles beginners crochet kit; Magnatiles. [Product image. Fakespot Review Grade: A] Tips for shopping smart this holiday season Along with gift recommendations, Fakespot offers valuable tips for making the most of online holiday shopping: Trust but verify: Even highly rated items might have fake reviews. Use tools like Fakespot’s browser extension to double-check reviews while shopping on popular sites. Compare prices: The holiday season can bring fluctuating prices. Keep an eye on price trends and consider setting up alerts for big-ticket items. Look beyond ratings: Sometimes a product might have high ratings, but lack detailed, verified reviews. Focus on the authenticity of reviews rather than just on the star rating. Wrapping up your holiday shopping with confidence With its carefully selected products and commitment to transparency, the Fakespot Holiday Gift Guide provides an invaluable resource for holiday shoppers. Head over to the Fakespot Holiday Gift Guide and cross “perfect gifts” off your to-do list. Shop smarter with reliable product reviews for everyone on your list Check out the Fakespot Holiday Gift Guide The post Shopping for everyone? We’ve got you covered with the Fakespot Holiday Gift Guide 2024 appeared first on The Mozilla Blog.
  • Don Marti: Use an ad blocking extension when performing Internet searches (2024/11/24 00:00)
    The FBI seems to have taken down the public service announcement covered in Even the FBI says you should use an ad blocker | TechCrunch. Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others. This is still good advice. Search ads are full of scams, and you can block ads on search without blocking the ads on legit sites. I made a local copy of the FBI alert. Why did they take it down? Maybe we’ll find out. I sent the FBI a FOIA request for any correspondence about this alert and the decision to remove it. The Malwarebytes site has more good info on ongoing problems with search ads. Google Search user interface: A/B testing shows security concerns remain Related effective privacy tips SingleFile is a convenient extension for saving copies of pages. (I got the FBI page from the Internet Archive. It’s a US government work so make all the copies you want.) Bonus links “Interpreting the Ambiguities of Section 230” by Alan Rozenshtein (Section 230 covers publisher liability, but not distributor liability.) Confidential OCR (How to install and use Tesseract locally on Linux) The Great Bluesky Migration: I Answer (Some) Of Your Questions Bluesky also offers a remedy for quote-dunking. If someone quotes your post to make a nasty comment on it, you can detach the quoted post entirely. (And then you should block the jerk). Related: Bluesky’s success is a rejection of big tech’s operating system Designing a push life in a pull world Everything in our online world is designed to push through our boundaries, usually because it’s in someone else’s financial best interest. And we’ve all just accepted that this is the way the world works now. Killer Robots About to Fill Skies… (this kind of thing is why the EU doesn’t care about AI innovation in creepy tracking and copyright infringement—they need those developers to get jobs in the defense industry, which isn’t held back by the AI Act.) Inside the Bitter Battle Between Starbucks and Its Workers (More news from management putting dogmatic union-busting ahead of customers and shareholders, should be a familiar story to anyone dealing with inadequate ad review or search quality ratings.) National Public Data saga illustrates little-regulated US data broker industry National Public Data appears to have been a home-based operation run by Verini himself. The enterprise maintains no dedicated physical offices. The owner/operator maintains the operations of company from his home office, and all infrastructure is housed in independent data centers, Verini said in his bankruptcy filing.
  • Cameron Kaiser: CHRP removal shouldn't affect Linux Power Macs (2024/11/23 21:53)
    A recent patch removed support for the PowerPC Common Hardware Reference Platform from the Linux kernel. However, Power Macs, even New World systems, were never "pure" CHRP, and there were very few true CHRP systems ever made (Amiga users may encounter the Pegasos and Pegasos II, but few others existed, even from IBM). While Mac OS 8 had some support for CHRP, New World Macs are a combination of CHRP and PReP (the earlier standard), and the patch specifically states that it should not regress Apple hardware. That said, if you're not running MacOS or Mac OS X, you may be better served by one of the BSDs — I always recommend NetBSD, my personal preference — or maybe even think about MorphOS, if you're willing to buy a license and have supported hardware.
  • Don Marti: prediction markets and the 2024 election link dump (2024/11/23 00:00)
    Eric Neyman writes, in Seven lessons I didn’t learn from election day, Many people saw the WSJ report as a vindication of prediction markets. But the neighbor method of polling hasn’t worked elsewhere. More: Polling by asking people about their neighbors: When does this work? Should people be doing more of it? And the connection to that French dude who bet on Trump The money is flooding in, but what are prediction markets truly telling us? If we look back further, predicted election markets were actually legal in the US from the 1800s to 1924, and historical data shows that they were accurate. There’s a New York Times story of Andrew Carnegie noting how surprisingly accurate the election betting markets were at predicting outcomes. They were actually more accurate before the introduction of polling as a concept, which implies that the introduction of polling diluted the accuracy of the market, rather than the opposite. Was the Polymarket Trump whale smart or lucky? Whether one trader’s private polling tapped sentiment more accurately than the publicly available surveys, or whether statistical noise just happened to reinforce his confidence to buy a dollar for 40c, can’t be known without seeing the data. Koleman Strumpf Interview - Prediction Markets & More 2024 was a huge vindication for the markets. I don’t know how else to say it, but all the polls and prognosticators were left in the dust. Nobody came close to the markets. They weren’t perfect, but they were an awful lot better than anything else, to say the least. FBI raids Polymarket CEO Shayne Coplan’s apartment, seizes phone: source Though U.S. election betting is newly legal in some circumstances, Polymarket is not supposed to allow U.S. users after the Commodity Futures Trading Commission halted its operations in 2022, but its user base largely operates through cryptocurrency, which allows for easy anonymity. Polymarket Explained: How Blockchain Prediction Markets Are Shaping the Future of Forecasting (Details of how Polymarket works including tokens and smart contracts.) Betting odds called the 2024 election better than polls did. What does this mean for the future of prediction markets? Prediction Markets for the Win Just betting on an election every few years is not the interesting part, though. Info Finance is a broader concept. [I]nfo finance is a discipline where you (i) start from a fact that you want to know, and then (ii) deliberately design a market to optimally elicit that information from market participants. Bonus links The rise and fall of peer review - by Adam Mastroianni The Great Redbox Cleanup: One Company is Hauling Away America’s Last DVD Kiosks Both Democrats and Republicans can pass the Ideological Turing Test The Verge Editor-In-Chief Nilay Patel breathes fire on Elon Musk and Donald Trump’s Big Tech enablers 2024-11-09 iron mountain atomic storage How Upside-Down Models Revolutionized Architecture, Making Possible St. Paul’s Cathedral, Sagrada Família & More
  • Firefox Developer Experience: Firefox DevTools Newsletter — 132 (2024/11/22 15:16)
    Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 132 Nightly release cycle. Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues. Firefox 133 is around the corner and I’m late to tell you about what was done in 132! This release does not offer any new features as the team is working on bigger tasks that are not yet visible to users. But it still contains a handful of important bug fixes, so let’s jump right in. Offline mode and cached requests When enabling Offline mode from the Network panel, cached requests would fail, which doesn’t match the actual behavior of the browser when there is no network (#1907304). This is fixed now and cached requests will succeed as you’d expect. Inactive CSS and pseudo elements You might be familiar with what we call Inactive CSS in the Inspector: small hints on declarations that don’t have any impact on the selected element because the property requires other properties to be set (for example, setting top on a non-positioned element). Sometimes we would show invalid hints on pseudo-element rules displayed in their binding elements (i.e. the ones that we show under the “Pseudo element” section), and so we fixed this to avoid any confusion (#1583641). Stable device detection on about:debugging In order to debug Firefox for Android, you can go to about:debugging, plug your phone in through USB and inspect the tabs you have opened on your phone. Unfortunately the device detection was a bit flaky and it could happen that the device wouldn’t show up in the list of connected phones. After some investigation, we found the culprit (adb is now grouping device status notifications in a single message), and device detection should be more stable (#1899330). Service Workers console logs Still in about:debugging, we introduced a regression a couple of releases ago which prevented Service Worker console logs from being displayed in the console. The issue was fixed and we added automated tests to prevent regressing such an important feature (#1921384, #1923648). Keyboard navigation We tackled a few accessibility problems: in the Network panel, “Raw” toggles couldn’t be checked with the keyboard (#1917296), and the inspector filter input clear button couldn’t be focused with the keyboard (#1921001). Misc Finally, we fixed an issue where you couldn’t use the element picker after a canceled navigation from about:newtab (#1914863), as well as a pretty nasty Debugger crash that could happen when debugging userscript code (#1916086). 
And that’s it for this month, folks. Thank you for reading this and using our tools; see you in a few weeks for a new round of updates. Full list of fixed bugs in DevTools for the Firefox 132 release:
Bob Owen (:bobowen) resource://devtools/shared/loader/builtin-modules.js fails to open on Windows local build (#1916286)
Fatih Kilic [:fkilic] about:debugging should show origin attributes for dFPI and FPI (#1583891)
Florian Quèze [:florian] Remove telemetry tracking Application Panel usage (#1675235)
Sean Kim DevTools offline mode should not make cached requests fail (#1907304)
Tooru Fujisawa [:arai] Integrate SharedSubResourceCache into DevTools network monitor (#1916960)
David Shin [:dshin] Nonsensical auto-completion values for inset properties (#1918463)
Emilio Cobos Álvarez (:emilio) Make DevTools work with CSSNestedDeclarations objects. (#1919853)
Alexandre Poirot [:ochameau] Display all the DOM events of a record in the Debugger Tracer Sidebar (#1908615)
Alexandre Poirot [:ochameau] Display function arguments in popup previews (#1909548)
Alexandre Poirot [:ochameau] Remove highlight of selected paused frame when selecting a tracer frame (#1914239)
Alexandre Poirot [:ochameau] Tracer timeline sometimes throws when clicking on it while being zoomed in (#1915619)
Alexandre Poirot [:ochameau] Enable tracer debugger sidebar by default (#1916462)
Alexandre Poirot [:ochameau] Always use native backend when recording JS Traces to the profiler output (#1916533)
Alexandre Poirot [:ochameau] JS Tracer can be very slow when debugging google docs (#1919713)
Alexandre Poirot [:ochameau] Remove JS Tracer automatic stop on infinite loop (#1919804)
Alexandre Poirot [:ochameau] Highlighted events in the tracers are not necessarely visible because of wrong z-index (#1919910)
Nicolas Chevobbe [:nchevobbe] Make the FontsHighlighter compatible with Fission (#1572655)
Nicolas Chevobbe [:nchevobbe] [InactiveCSS] hints on pseudo-element rules displayed in their binding elements’ rules are incorrect (#1583641)
Nicolas Chevobbe [:nchevobbe] Intermittent devtools/client/inspector/fonts/test/browser_fontinspector_reveal-in-page.js | single tracking bug (#1853030)
Julian Descottes [:jdescottes] Devices not showing up when connecting them loading about:debugging (#1899330)
Julian Descottes [:jdescottes] Intermittent devtools/client/inspector/rules/test/browser_rules_preview-tooltips-sizes.js | single tracking bug (#1905529)
Nicolas Chevobbe [:nchevobbe] Use light-dark() in variables.css (#1911733)
Nicolas Chevobbe [:nchevobbe] CSS variable tooltip “arrow” has a visible border (#1912399)
Nicolas Chevobbe [:nchevobbe] Emit `property-updated-by-dragging` once the value was actually set (#1912868)
Nicolas Chevobbe [:nchevobbe] Use Codemirror 6 for conditional breakpoint panel (#1913189)
Julian Descottes [:jdescottes] Fix propTypes warnings when using the debugger (#1913529)
Hubert Boma Manilla (:bomsy) Add variant / jobs for running with codemirror 6 enabled (#1914654)
Julian Descottes [:jdescottes] NodePicker stops working after canceled navigation from about:newtab (#1914863)
Nicolas Chevobbe [:nchevobbe] inspector-shared.css and inspector-color-swatches.css could be refactored (#1915382)
Nicolas Chevobbe [:nchevobbe] Inspector color swatch checker background is not visible even on transluscent colors (#1915435)
Nicolas Chevobbe [:nchevobbe] Add focus indicator on condition breakpoint/log point panel (#1915799)
Nicolas Chevobbe [:nchevobbe] Change modifier to add new line in Conditional panel (#1915800)
Nicolas Chevobbe [:nchevobbe] Select the conditional breakpoint / logpoint editor content on edit (#1915802)
Nicolas Chevobbe [:nchevobbe] Add AbortController to control sourceeditor event listeners (#1915804)
Nicolas Chevobbe [:nchevobbe] Refactor getThemeColorAsRgba (#1915857)
Nicolas Chevobbe [:nchevobbe] Refactor getColor (#1915872)
Hubert Boma Manilla (:bomsy) Debugger crash when debugging userscript code (#1916086)
Nicolas Chevobbe [:nchevobbe] Add colorSwatchReadOnly to OutputParser (#1916258)
Nicolas Chevobbe [:nchevobbe] Input and evaluation results icon don’t respect forced colors in High Contrast Mode (#1916328)
Nicolas Chevobbe [:nchevobbe] Menus don’t have visible hover style in High Contrast Mode (#1916329)
Nicolas Chevobbe [:nchevobbe] Filter input icons keep their original color in High Contrast Mode (#1916333)
Nicolas Chevobbe [:nchevobbe] Pretty print icon in Editor toolbar is not visible in High Contrast Mode (#1916341)
Nicolas Chevobbe [:nchevobbe] Console input selected text doesn’t have any background color (#1916344)
Nicolas Chevobbe [:nchevobbe] Waterfall Timing bars aren’t visible in High Contrast Mode (#1916354)
Nicolas Chevobbe [:nchevobbe] Waterfall DOMContentLoaded/load ticks aren’t visible in High Contrast Mode (#1916355)
Nicolas Chevobbe [:nchevobbe] Request list doesn’t adapt well to High Contrast Mode (#1916363)
Nicolas Chevobbe [:nchevobbe] “Raw” toggle in request detail isn’t usable in High Contrast Mode (#1916366)
Nicolas Chevobbe [:nchevobbe] Search matches in search panel on selected node are not legible in High Contrast Mode (#1916394)
Nicolas Chevobbe [:nchevobbe] Markup view selected node isn’t different from other node in High Contrast Mode (#1916603)
Nicolas Chevobbe [:nchevobbe] Markup view toggle buttons color doesn’t adapt in High Contrast Mode (#1916605)
Nicolas Chevobbe [:nchevobbe] Flex/Grid highlighter color swatch doesn’t have the selected color in High Contrast Mode (#1916614)
Nicolas Chevobbe [:nchevobbe] The Box Model is barely usable in High Contrast Mode (#1916712)
Nicolas Chevobbe [:nchevobbe] Clicking on event badge doesn’t do anything on some pages using jQuery (#1916881)
Nicolas Chevobbe [:nchevobbe] Focus indicator on Event Listener Breakpoints looks weird (#1916918)
Nicolas Chevobbe [:nchevobbe] Headers accordion Raw toggles can’t be checked/unchecked with the keyboard (#1917296)
Nicolas Chevobbe [:nchevobbe] Use light-dark() for markup view (#1917526)
Nicolas Chevobbe [:nchevobbe] Refactor accessibility.css to use light-dark() (#1918109)
Nicolas Chevobbe [:nchevobbe] Refactor variable color declarations in webconsole.css (#1918158)
Nicolas Chevobbe [:nchevobbe] DevTools highlighters are impacted by forced-colors: active in the page (#1918358)
Nicolas Chevobbe [:nchevobbe] Accessibility selected node “issue badge” is almost invisible in Hight Contrast Mode (#1918415)
Hubert Boma Manilla (:bomsy) [DevTools Release Tasks – Cycle 132] Remove backward compatibility code (#1918587)
Nicolas Chevobbe [:nchevobbe] Use light-dark() in netmonitor variables.css (#1918981)
Hubert Boma Manilla (:bomsy) Update MDN compat data (132) (#1918993)
Nicolas Chevobbe [:nchevobbe] Replace reference to bugs.firefox-dev.tools by codetribute (#1919211)
Nicolas Chevobbe [:nchevobbe] Use light-dark() in boxmodel.css (#1919452)
Nicolas Chevobbe [:nchevobbe] Use light-dark() in tooltip.css (#1920689)
Nicolas Chevobbe [:nchevobbe] Markup view filter input clear button can’t be focused with the keyboard (#1921001)
Nicolas Chevobbe [:nchevobbe] Built-in console functions no longer work in the Service Worker console (#1921384)
Nicolas Chevobbe [:nchevobbe] [InactiveCSS] incorrect inactive CSS on pseudo element when pseudo element node is selected (#1921937)
Julian Descottes [:jdescottes] Service worker console logs are blank (#1923648)
  • Mozilla Open Policy & Advocacy Blog: Mozilla Responds to DOE’s RFI on the Frontiers in AI for Science, Security, and Technology (FASST) (2024/11/21 14:09)
    This month, the US Department of Energy (DOE) released a Request for Information on their Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative. Mozilla was eager to provide feedback, particularly given our recent focus on the emerging conversation around Public AI. The DOE’s FASST initiative has the potential to create the foundation for Public AI infrastructure, which will not only help enable increased access to critical technologies within the government that can be leveraged to create more efficient and useful services, but also potentially catalyze non-governmental innovation. In addressing DOE’s questions outlined in the RFI, Mozilla focused on key themes including the myriad benefits of open source, the need to keep competition related to the whole AI stack top of mind, and the opportunity for FASST to help lead the development of Public AI by creating the program as “public” by default. Below, we set out these ideas in more depth. Mozilla’s response to DOE in full can be found here. Benefits of Open Source: Given Mozilla’s longstanding support of the open source community, a clear through line in Mozilla’s responses to DOE’s questions is the importance of open source in advancing key government objectives. Below are four key themes related to the benefits of open source: Economic Security: Open source by its nature enables the more rapid proliferation of a technology, and according to NTIA’s report on Dual-Use Foundation Models with Widely Available Model Weights, “They diversify and expand the array of actors, including less resourced actors, that participate in AI research and development.” For the United States, whose competitive advantage in global competition is its innovative private sector, the rapid proliferation of newly accessible technologies means that new businesses can be created on the back of a new technology, speeding innovation. Existing businesses, whether a hospital or a factory, can more easily adopt new technologies as well, helping to increase efficiency. Expanding the Market for AI: While costs are rapidly decreasing, cutting-edge AI products purchased from major labs and big tech companies are not cheap. Many small businesses, research institutions, and nonprofits would be unable to benefit from the AI boom if they did not have the option to use freely available open source AI models. This means that more people around the world get access to American-built open source technologies, furthering the use of American technology tools and standards, while forging deeper economic and technological ties. Security & Safety: Open source has had demonstrable security and safety benefits. Rather than a model of “security through obscurity,” open source AI thrives from having many eyes examining code bases and models for exploits, harnessing the wisdom of the crowd to find issues, whether related to discriminatory outputs from LLMs or security vulnerabilities. Resource Optimization: Open source in AI means more than freely downloadable model weights – it means considering how to make the entire AI stack more open and transparent, from the energy cost of training to data on the resources used to develop the chips necessary to train and operate AI models. By making more information on AI’s resource usage open and transparent, we can collectively work to optimize the efficiency of AI, ensuring that the benefits truly outweigh the costs. Keep Competition Top of Mind: The U.S. 
government wields outsized influence in shaping markets, not only as a promulgator of standards and regulations but also through its purchasing power. We urge the DOE to consider broader competitive concerns when determining potential vendors and partnerships for products and services, ranging from cloud resources to semiconductors. This would foster a more competitive AI ecosystem, as noted in OMB’s guidance to Advance the Responsible Acquisition of AI in Government, which highlights the importance of promoting competition in the procurement of AI. The DOE should make an effort to work with a range of partners and civil society organizations rather than defaulting to standard government partners and big tech companies. Making FASST “Public” By Default: It is critical that as FASST engages in the development of new models, datasets, and other tools and resources, it makes its work public by default. This may mean directly open sourcing datasets and models, or working with partners, civil society, academia, and beyond to advance access to AI assets which can provide public value. We applaud DOE’s commitment to advancing open, public-focused AI, and we’re excited about the potential of the FASST program. Mozilla is eager to work alongside DOE and other partners to make sure FASST supports the development of technology that serves the public good. Here’s to a future where AI is open, accessible, and beneficial for everyone. The post Mozilla Responds to DOE’s RFI on the Frontiers in AI for Science, Security, and Technology (FASST) appeared first on Open Policy & Advocacy.
  • Martin Thompson: Everything you need to know about selective disclosure (2024/11/21 00:00)
    Why does this matter? A lot of governments are engaging with projects to build “Digital Public Infrastructure”. That term covers a range of projects, but one of the common and integral pieces relates to government-backed identity services. While some places have had some form of digital identity system for years — hi Estonia! — there are many more governments looking to roll out some sort of digital identity wallet for their citizens. Notably, the European Union recently passed a major update to their European Digital Identity Regulation, which seeks to have a union-wide digital identity system for all European citizens. India’s Aadhaar is still the largest such project with well over a billion people enrolled. There are a few ways that these systems end up being implemented, but most take the same basic shape. A government agency will be charged with issuing people with credentials. That might be tied to driver licensing, medical services, passports, or it could be a new identity agency. That agency issues digital credentials that are destined for wallets in phones. Then, services can request that people present these credentials at certain points, as necessary. The basic model that is generally used looks something like this: The government agency is the “issuer”, your wallet app is a “holder”, and the service that wants your identity information is a “verifier”. This is a model for digital credentials that is useful in describing a lot of different interactions. A key piece of that model is the difference between a credential, which is the thing that ends up in a wallet, and a presentation, which is what you show a verifier. This document focuses on online use cases. That is, where you might be asked to present information about your identity to a website Though there are many other uses for identity systems, online presentation of identity is becoming more common. How we use identity online is likely to shape how identity is used more broadly. The goal of this post is to provide information and maybe a fresh perspective on the topic. This piece also has a conclusion that suggests that the truly hard problems in online identity are not technical in nature, so do not necessarily benefit from the use of selective disclosure. As much as selective disclosure is useful in some contexts, there are significant challenges in deploying it on the Web. What is selective disclosure? A presentation might be a reduced form of the credential. Let’s say that you have a driver license, like the following: One way of thinking about selective disclosure is to think of it as redacting those parts of the credential that you don’t want to share. Let’s say that you want to show that you are old enough to buy alcohol. You might imagine doing something like this: That is, if you were presenting that credential to a store in person, you would want to show that the card truly belongs to you and that you are old enough. If you aren’t turning up in person, the photo and physical description are not that helpful, so you might cover those as well. You don’t need to share your exact birth date to show that you are old enough. You might be able to cover the month and day of those too. That is still too much information, but the best you can easily manage with a black highlighter. If there was a “can buy alcohol” field on the license, that might be even better. But the age at which you can legally buy alcohol varies quite a bit across the world. And laws apply to the location, not the person. 
A 19 year old from Canada can’t buy alcohol in the US just because they can buy alcohol at home[1]. Most digital credential systems have special fields to allow for this sort of rule, so that a US[2] liquor store could use an “over_21” property, whereas a purchase in Canada might check for “over_18” or “over_19” depending on the province. Simple digital credentials The simplest form of digital credential is a bag of attributes, covered by a digital signature from a recognized authority. For instance, this might be a JSON Web Token, which is basically just a digitally-signed chunk of JSON. For our purposes, let’s run with the example, which we’d form into something like this: { "number": "01-47-87441", "name": "McLOVIN", "address": "892 MOMONA ST, HONOLULU, HI 96820", "iss": "1998-06-18", "exp": "2008-06-03", "dob": "1981-06-03", "over_18": true, "over_21": true, "over_55": false, "ht": "5'10", ... } That could then be wrapped up and signed by whatever Hawaiian DMV issues the license. Something like this: That isn’t perfect, because a blob of bytes like that can just be copied around by anyone that receives that credential. Anyone that received a credential could “impersonate” our poor friend. The way that problem is addressed is through the use of a digital wallet. The issuer requires that the wallet hold a second signing key. The wallet provides the issuer with an attestation, which is just evidence from the wallet maker (which is often the maker of your phone) that they are holding a private key in a place where it can’t be moved or copied[3]. That attestation includes the public key that matches that private key. Once the issuer is sure that the private key is tied to the device, the issuer produces a credential that lists the public key from the wallet. In order to use the credential, the wallet signs the credential along with some other stuff, like the current time and maybe the identity of the verifier[4], as follows: With something like this, unless someone is able to use the signing key that is in the wallet, they can’t generate a presentation that a verifier will accept. It also ensures that the wallet can use a biometric or password check to ensure that a presentation is only created when the person allows it. That is a basic presentation that includes all the information that the issuer knows about. The problem is that this is probably more than you might be comfortable with sharing with a liquor store. After all, while you might be able to rely on the fact that the cashier in a store isn’t copying down your license details, you just know that any digital information you present is going to be saved, stored, and sold. That’s where selective disclosure is supposed to help. Salted hash selective disclosure One basic idea behind selective disclosure is to replace all of the data elements in a credential — or at least the ones that someone might want to keep to themselves — with placeholders. Those placeholders are replaced with a commitment to the actual values. Any values that someone wants to reveal are then included in the presentation. A verifier can validate that the revealed value matches the commitment. The most basic sort of commitment is a hash commitment. That uses a hash function, which is really anything where it is hard to produce two inputs that result in the same output. The commitment to a value of X is H(X). That is, you might replace the (“name”, “McLOVIN”) with a commitment like H(“name” || “McLOVIN”). 
The hash function ensures that it is easy to validate that the underlying values match the commitment, because the verifier can compute the hash for themselves. But it is basically impossible to recover the original values from the hash. And it is similarly difficult to find another set of values that hash to the same value, so you can’t easily substitute false information. A key problem is that a simple hash commitment only works to protect the value of the input if that input is hard to guess in the first place. But most of the stuff on a license is pretty easy to guess in one way or another. For simple stuff like “over_21”, there are just two values: “true” or “false”. If you want to know the original value, you can just check each of the values and see which matches. Even for fields that have more values, it is possible to build a big table of hash values for every possible (or likely) value. This is called a “rainbow table”[5]. Rainbow tables don’t work if the committed value is very hard to guess. So, in addition to the value of the field, a large random number is added to the hidden value. This number is called “salt”, and a different value needs to be generated for every field that can be hidden, with different values for every new credential. As long as there are many more values for the salt than can reasonably be stored in a rainbow table, there is no easy way to work out which commitment corresponds to which value. So for each field, the issuer generates a random number and replaces all fields in the credential with H(salt || name || value), using some agreed encoding. The issuer then signs over those commitments and provides the wallet with a credential that is full of commitments, plus the full set of values that were committed to, including the associated salt. The wallet can then use the salt and the credential to reveal a value and prove that it was included in the credential, creating a presentation something like this: The verifier then gets a bunch of fields with the key information replaced with commitments. All of the commitments are then signed by the issuer. The verifier also gets some number of unsigned tuples of (salt, name, value). The verifier can then check that H(salt || name || value) matches one of the commitments. This is the basic design that underpins a number of selective disclosure designs. Salted hash selective disclosure is pretty simple to build because it doesn’t require any fancy cryptography. However, salted hash designs have some limitations that can be a little surprising. Other selective disclosure approaches There are other approaches that might be used to solve this problem. Imagine that you had a set of credentials, each of which contained a single attribute. You might imagine sharing each of those credentials separately, choosing which ones you show based on what the situation demanded. That might look something like this: Having multiple signatures can be inefficient, but this basic idea is approximately sound[7]. There are a lot of signatures, which would make a presentation pretty unwieldy if there were lots of properties. There are digital signature schemes that make this more efficient though, like the BLS scheme, which allows multiple signatures to be folded into one. That is the basic idea behind SD-BLS. SD-BLS doesn’t make it cheaper for an issuer. An issuer still needs to sign a whole bunch of separate attributes. But combining signatures means that it can make presentations smaller and easier to verify. 
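To make the salted-hash mechanics described above concrete, here is a minimal Python sketch of the commit, reveal and verify steps. It assumes SHA-256 as the hash function and Ed25519 (via the cryptography package) for the issuer signature; the field names, encoding and helper functions are illustrative only and do not correspond to any particular credential format such as SD-JWT or ISO mDL.

# Minimal sketch of salted-hash selective disclosure. Assumptions: SHA-256 for
# H(), Ed25519 for the issuer signature, JSON for encoding. Names are illustrative.
import hashlib
import json
import secrets

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def commit(salt: bytes, name: str, value) -> str:
    # H(salt || name || value) with a fixed, unambiguous encoding.
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()


def issue(attributes: dict, issuer_key: Ed25519PrivateKey):
    """Issuer: replace every attribute with a salted commitment and sign the result."""
    disclosures = {}   # kept by the wallet: name -> (salt, value)
    commitments = []
    for name, value in attributes.items():
        salt = secrets.token_bytes(16)          # fresh random salt per field
        disclosures[name] = (salt, value)
        commitments.append(commit(salt, name, value))
    commitments = sorted(commitments)           # sorting also hides field positions
    signature = issuer_key.sign(json.dumps(commitments).encode())
    return {"commitments": commitments, "signature": signature.hex()}, disclosures


def present(credential, disclosures, reveal):
    """Wallet: reveal only the chosen fields, as (salt, name, value) tuples."""
    revealed = [(disclosures[n][0].hex(), n, disclosures[n][1]) for n in reveal]
    return {"credential": credential, "revealed": revealed}


def verify(presentation, issuer_public_key) -> dict:
    """Verifier: check the issuer signature, then match each revealed tuple to a commitment."""
    cred = presentation["credential"]
    issuer_public_key.verify(bytes.fromhex(cred["signature"]),
                             json.dumps(cred["commitments"]).encode())  # raises on failure
    out = {}
    for salt_hex, name, value in presentation["revealed"]:
        if commit(bytes.fromhex(salt_hex), name, value) not in cred["commitments"]:
            raise ValueError(f"{name} does not match any commitment")
        out[name] = value
    return out


issuer_key = Ed25519PrivateKey.generate()
credential, disclosures = issue(
    {"name": "McLOVIN", "dob": "1981-06-03", "over_18": True, "over_21": True}, issuer_key
)
presentation = present(credential, disclosures, reveal=["over_21"])
print(verify(presentation, issuer_key.public_key()))   # {'over_21': True}

If the per-field salt were omitted, a verifier could recover a field like over_18 simply by hashing both candidate values and comparing against the commitments, which is exactly the guessing problem the rainbow-table discussion above describes; the large random salt is what defeats that.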
SD-BLS has some privacy advantages over salted hashes, but the primary problem that the SD-BLS proposal aims to solve is revocation, which is covered in more detail below. Problems with salted hashes Going back to the original example, the effect of the salted hash is that you probably get something like this: Imagine that every field on the license is covered with the gray stuff you get on scratch lottery tickets. You can choose which to scratch off before you hand it to someone else[8]. Here’s what they learn: That this is a valid Hawaii driver license. That is, they learn who issued the credential. When the license expires. The value of the fields that you decided to reveal. How many fields you decided not to reveal. Any other places that you present that same credential, as discussed below. On the plus side, and contrary to what is shown for a physical credential, the size and position of fields is not revealed for a digital credential. Still, that is likely a bit more information than might be expected. If you only wanted to reveal the “over_21” field so that you could buy some booze, having to reveal all those other things isn’t exactly ideal. Revealing who issued the credential seems like it might be harmless, but for a digital credential, that’s revealing a lot more than your eligibility to obtain liquor. Potentially a lot more. Maybe in Hawaii, holding a Hawaii driver license isn’t notable, but it might be distinguishing — or even disqualifying — in other places. A Hawaii driver license reveals that you likely live in Hawaii, which is not exactly relevant to your alcohol purchase. It might not even be recognized as valid in some places. If the Hawaiian DMV uses multiple keys to issue credentials, you’ll also reveal which of those keys was used. That’s unlikely to be a big deal, but worth keeping in mind as we look at alternative approaches. Revealing the number of fields is a relatively minor information leak. This constrains the design a little, but not in a serious way. Basically, it means that you should probably have the same set of fields for everyone. For instance, you can’t include only the “over_XX” age fields that are true; you have to include the false ones as well or the number of fields would reveal an approximate age. That is, avoid: { ..., "older_than": [16, 18], ... } Note: Some formats allow individual items in lists like this to be committed separately. The name of the list is generally revealed in that case, but the specific values are hidden. These usually just use H(salt || value) as the commitment. And instead use: { ..., "over_16": true, "over_18": true, "over_21": false, "over_55": false, ... } Expiration dates are tricky. For some purposes, like verifying that someone is allowed to drive, the verifier will need to know if the credential is not expired. On the other hand, expiry is probably not very useful for something like age verification. After all, it’s not like you get younger once your license expires. The exact choice of expiration date might also carry surprising information. Imagine that only one person was able to get a license one day because the office had to close or the machine broke down. If the expiry date is a fixed time after issuance, the expiry date on their license would then be unique to them, which means that revealing that expiration date would effectively be identifying them. The final challenge here is the least obvious and most serious shortcoming of this approach: linkability. 
Linkability and selective disclosure A salted hash credential carries several things that make the credential itself identifiable. This includes the following: the value of each commitment, the public key for the wallet, and the signature that the issuer attaches to the credential. Each of these is unique, so if the same credential is used in two places, it will clearly indicate that this is the same person, even if the information that is revealed is very limited. For example, you might present an “over_21” attribute to purchase alcohol in one place, then use the full credential somewhere else. If those two presentations use the same credential, those two sites will be able to match up the presentations. The entity that obtains the full credential can then share all that knowledge with the one that only knows you are over 21, without your involvement. Even if the two sites only receive limited information, they can still combine the information they obtain — that you are over 21 and what you did on each site — into a profile. The building of that sort of profile online is known as unsanctioned tracking and generally regarded as a bad thing. This sort of matching is technically called verifier-verifier linkability. The way that it can be prevented is to ensure that a completely fresh credential is used for every presentation. That includes a fresh set of commitments, a new public key from the wallet, and a new signature from the issuer (naturally, the thing that is being signed is new). At the same time, ensuring that the presentation doesn’t include any extraneous information, like expiry dates, helps. A system like this means that wallets need to be able to handle a whole lot of credentials, including fresh public keys for each. The wallet also needs to be able to handle cases where its store of credentials runs out, especially when the wallet is unable to contact the issuer. Issuers generally need to be able to issue larger batches of credentials to avoid that happening. That involves a lot of computationally intensive work for the issuer. This makes wallets quite a bit more complex. It also increases the cost of running issuance services because they need better availability, not just because they need more issuance capacity. In this case, SD-BLS has a small advantage over salted hashes because its “unregroupability” property means that presentations with differing sets of attributes are not linkable by verifiers. That’s a weaker guarantee than verifier-verifier unlinkability, because presentations with the same set of attributes can still be linked by a verifier; for that, fresh credentials are necessary. Using a completely fresh credential is a fairly effective way to protect against linkability for different verifiers, but it does nothing to prevent verifier-issuer linkability. An issuer can remember the values they saw when they issued the credential. A verifier can take any one of the values from a presentation they receive (commitments, public key, or signature) and ask the issuer to fill in the blanks. The issuer and verifier can then share anything that they know about the person, not just what is included in the credential. Maybe McLovin needed to show a passport and a utility bill in order to get a license and the DMV kept a copy. The issuer could give that information to the verifier. 
The verifier can also share what they have learned about the person, like what sort of alcohol they purchased. Useful linkability In some cases, linkability might be a useful or essential feature. Imagine that selective disclosure is used to authorize access to a system that might be misused. Selective disclosure avoids exposing the system to information that is not essential. Maybe the system is not well suited to safeguarding private information. The system only logs access attempts and the presentation that was used. In the event that the access results in some abuse, the abuse could be investigated using verifier-issuer linkability. For example, the access could be matched to information available to the issuer to find out who was responsible for the abuse. The IETF is developing a couple of salted hash formats (in JSON and CBOR) that should be well suited to a number of applications where linkability is a desirable property. All of this linkability is a pretty serious problem for something like online age verification. Putting issuers, which are often government agencies, in a position to trace activity might have an undesirable chilling effect. This is something that legislators generally recognize and laws often include provisions that require unlinkability[9]. In short, salted hash based systems only work if you trust the issuer. Linkable attributes There is not much point in avoiding linkability when the disclosed information is directly linkable. For instance, if you selectively disclose your name and date of birth, that information is probably unique or highly identifying. Revealing identifying information to a verifier makes verifier-issuer linkability easy, just as revealing the same information to two verifiers makes verifier-verifier linkability simple. This makes linkability less of a concern when the information being disclosed is identifying anyway. Unlinkability therefore tends to be most useful for non-identifying attributes. Simple attributes — like whether someone meets a minimum age requirement, holds a particular qualification, or has authorization — are less likely to be inherently linkable, so are best suited to being selectively disclosed. Privacy Pass If the goal is to provide a simple signal, such as whether a person is older than a target age, Privacy Pass is a good fit: it is specifically designed to prevent verifier-issuer linkability. Privacy Pass also includes options that split the issuer into two separate functions — an issuer and an attester — where the attester is responsible for determining if a holder (or client) has the traits required for token issuance and the issuer only creates the tokens. This might be used to provide additional privacy protection. A Privacy Pass issuer could produce a token that signifies possession of a given trait. Only those with the trait would receive the token. For age verification, the token might signify that a person is at a selected age or older. Token formats for Privacy Pass that include limited public information are also defined, which might be used to support selective disclosure. This is far less flexible than the salted hash approach as a fresh token needs to be minted with the set of traits that will be public. That requires that the issuer is more actively involved or that the different sets of public traits are known ahead of time. Privacy Pass does not naturally provide verifier-verifier unlinkability, but a fresh token could be used for each usage, just like for the salted hash design. 
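To see why the issuer cannot recognize its own tokens later, here is a toy sketch of the blind signature idea that one Privacy Pass issuance mode builds on (Python, standard library only, with textbook RSA and tiny, insecure parameters; real deployments use the standardized protocols with full-size keys).

```python
import hashlib, math, secrets

# Toy RSA key for the issuer (insecure textbook parameters, for illustration only).
p, q = 61, 53
n, e = p * q, 17
d = 2753  # e * d ≡ 1 (mod (p-1)(q-1))

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Holder: create a random token and blind it before sending it to the issuer.
token = secrets.token_bytes(16)
h = H(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (h * pow(r, e, n)) % n

# Issuer: signs the blinded value without ever seeing h (or the token).
blind_sig = pow(blinded, d, n)

# Holder: unblind to obtain an ordinary signature over the token.
sig = (blind_sig * pow(r, -1, n)) % n

# Verifier: checks the signature with the issuer's public key. The issuer never
# saw (token, sig), so it cannot link this presentation back to issuance.
assert pow(sig, e, n) == h
print("token verified; issuer cannot link it to the issuance request")
```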
Some of the Privacy Pass modes can issue a batch of tokens for exactly this reason: so that a fresh token is available for each use. In order to provide tokens for different age thresholds or traits, an issuer would need to use different public keys, each corresponding to a different trait. Privacy Pass is therefore a credible alternative to the use of salted hash selective disclosure for very narrow cases. It is somewhat inflexible in terms of what can be expressed, but that could mean more deliberate additions of capabilities. The strong verifier-issuer unlinkability is definitely a plus, but it isn’t without shortcomings. Key consistency One weakness of Privacy Pass is that it depends on the issuer using the same key for everyone. The ideal privacy is provided when there is a single issuer with just one key for each trait. With more keys or more issuers, the key that is used to generate a token carries information, revealing who issued the token. This is just like the salted hash example where the verifier needs to learn that the Hawaiian DMV issued the credential. The privacy of the system breaks down if every person receives tokens that are generated using a key that is unique to them. This risk can be limited through the use of key consistency schemes. This makes the system a little bit harder to deploy and operate. As foreshadowed earlier, the same key-switching concern also applies to a salted hash design if you don’t trust the issuer. Of course, we’ve already established that a salted hash design basically only works if you trust the issuer. Salted hash presentations are linkable based on commitments, keys, or signatures, so there is no real need to play games with keys. Anonymous credentials A zero knowledge proof enables the construction of evidence that a prover knows something, without revealing that information. For an identity system, it allows a holder to make assertions about a credential without revealing that credential. That creates what is called an anonymous credential. Anonymous credentials are appealing as the basis for a credential system because the proofs themselves contain no information that might link them to the original credential. Verifier-issuer unlinkability is a natural consequence of using a zero knowledge proof. Verifier-verifier unlinkability would be guaranteed by providing a fresh proof for each verifier, which is possible without obtaining a fresh credential. The result is that anonymous credentials provide excellent privacy characteristics. Zero knowledge proofs trace back to systems of provable computation, which means that they are potentially very flexible. A proof can be used to prove any property that can be computed. The primary cost is in the amount of computation it takes to produce and validate the proof[10]. If the underlying credential can be adjusted to support the zero knowledge system, these costs can be reduced, which is what the BBS signature scheme does. Unmodified credentials can be used if necessary. Thus, a proof statement for use in age verification might be a machine translation of the following compound statement: this holder has a credential signed by the Hawaiian DMV; the expiration date on the credential is later than the current date; the person is 21 or older (or the date of birth plus 21 years is earlier than the current date); the holder knows the secret key associated with the public key mentioned in the credential; and the credential has not been used with the current verifier more than once on this day[11]. 
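The proof itself needs a zero-knowledge proof system, but the statement being proven is just a predicate over private inputs (the credential and wallet key) and public inputs (the date and the verifier). Here is a sketch of that compound statement as ordinary code, with placeholder checks standing in for real signature and key-binding verification; all names are hypothetical.

```python
from datetime import date

# Placeholder checks; a real system proves these about actual signatures and keys.
def issuer_signature_ok(credential) -> bool:
    return credential["sig"] == "signed-by-hawaii-dmv"

def wallet_key_ok(credential, wallet_sk) -> bool:
    return credential["wallet_pk"] == "pk:" + wallet_sk

def age_statement(credential, wallet_sk, uses_today, today) -> bool:
    """The compound claim from above, written as an ordinary predicate.
    The credential, wallet key, and usage count are private; today is public."""
    dob = credential["dob"]
    turns_21 = date(dob.year + 21, dob.month, min(dob.day, 28))  # crude Feb 29 handling
    return (
        issuer_signature_ok(credential)           # signed by the Hawaiian DMV
        and credential["exp"] > today             # not expired
        and turns_21 <= today                     # 21 or older
        and wallet_key_ok(credential, wallet_sk)  # holder controls the wallet key
        and uses_today <= 1                       # not used with this verifier more than once today
    )

credential = {"sig": "signed-by-hawaii-dmv", "dob": date(1981, 6, 3),
              "exp": date(2008, 6, 3), "wallet_pk": "pk:s3cret"}
print(age_statement(credential, "s3cret", uses_today=1, today=date(2007, 7, 1)))  # True
```

A proof system would establish that this predicate holds without revealing the private inputs; running the check directly, as here, is only meant to show what the statement contains.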
A statement in that form should be sufficient to establish that someone is old enough to purchase alcohol, while providing assurances that the credential was not stolen or reused. The only information that is revealed is that this is a valid Hawaiian license. We’ll see below how hiding that last bit is also possible and probably a good idea. Reuse protections The last statement from the set of statements above provides evidence that the credential has not been shared with others. This condition, or something like it, is a necessary piece of building a zero-knowledge system. Otherwise, the same credential can be used and reused many times by multiple people. Limiting the number of uses doesn’t guarantee that a credential isn’t shared, but it limits the number of times that it can be reused. If the credential can only be used once per day, then that is how many times the credential can be misused by someone other than the person it was issued to. How many times a credential might be used will depend on the exact circumstances. For instance, it might not be necessary to have the same person present proof of age to an alcohol vendor multiple times per day. Maybe it would be reasonable for the store to remember them if they come back to make multiple purchases on any given day. One use per day might be reasonable on that assumption. In practice, multiple rate limits might be used. This can make the system more flexible over short periods (to allow for people making multiple alcohol purchases in a day) but also stricter over the long term (because people rarely need to make multiple purchases every day). For example, age checks for the purchase of alcohol might combine a three-per-day limit with a weekly limit of seven; a holder-side sketch of such a limiter appears just before the discussion of revocation below. Multiple conditions can be easily added to the proof, with a modest cost. It is also possible for each verifier to specify their own rate limits according to their own conditions. A single holder would then limit the use of credentials according to those limits. Tracking usage is easy for a single holder. An actor looking to abuse credentials by sharing and reusing them has more difficulty. A bad actor would need to carefully coordinate their reuse of a credential so that any rate limits were not exceeded. Hiding the issuer of credentials People often do not get to choose who issues them a credential. Revealing the identity of an issuer might be more identifying than is ideal. This is especially true for people who have credentials issued by an atypical issuer. Consider that Europe is building a union-wide system of identity. That means that verifiers will be required to accept credentials from any country in the EU. Someone accessing a service in Portugal with an Estonian credential might be unusual if most people use a Portuguese credential. Even if the presentation is limited to something like age verification, the choice of issuer becomes identifying. This could also mean that a credential that should be valid is not recognized as such by a verifier, simply because the verifier chose not to accept that issuer. Businesses in Greece might be required by law to recognize other EU credentials, but what about a credential issued by Türkiye? Zero knowledge proofs can also hide the issuer, only revealing that a credential was issued by one of a set of issuers. This means that a verifier is unable to discriminate on the basis of issuer. For a system that operates at scale, that creates positive outcomes for those who hold credentials from atypical issuers. 
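Going back to the reuse limits above, the bookkeeping on the holder side is straightforward. A minimal sketch of a wallet tracking a per-day and per-week cap before it agrees to create a presentation (the limits and names are illustrative):

```python
from collections import deque
from datetime import datetime, timedelta

class UsageLimiter:
    """Tracks recent presentations so the wallet can enforce rate limits."""
    def __init__(self, per_day=3, per_week=7):
        self.per_day, self.per_week = per_day, per_week
        self.uses = deque()  # timestamps of past presentations

    def allow(self, now: datetime) -> bool:
        # Drop anything older than a week, then count what remains.
        while self.uses and self.uses[0] < now - timedelta(days=7):
            self.uses.popleft()
        today = sum(1 for t in self.uses if t >= now - timedelta(days=1))
        if today >= self.per_day or len(self.uses) >= self.per_week:
            return False
        self.uses.append(now)
        return True

limiter = UsageLimiter()
now = datetime(2024, 11, 25, 18, 0)
print([limiter.allow(now + timedelta(hours=i)) for i in range(4)])  # [True, True, True, False]
```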
Credential revocation Perhaps the hardest problem in any system that involves the issuance of credentials is what to do when the credential suddenly becomes invalid. For instance, if the holder is a phone, what do you do if the phone is lost or stolen? That is the role of revocation. On the Web, certificate authorities are required to have revocation systems to deal with lost keys, attacks, change of ownership, and a range of other problems. For digital credentials, the risk of loss or compromise of wallets might also be addressed with revocation. Revocation typically involves the verifier confirming with the issuer that the credential issued to the holder (or the holder itself) has not been revoked. That produces a tweak to our original three-entity system as follows: Revocation is often the most operationally challenging aspect of running identity infrastructure. While issuance might have real-time components — particularly if the issuer needs to ensure a constant supply of credentials to maintain unlinkability — credentials might be issued ahead of time. However, revocation often requires a real-time response or something close to it. That makes a system with revocation much more difficult to design and operate. Revoking full presentations When a full credential or more substantive information is compromised, lack of revocation creates a serious impersonation risk. The inability to validate biometrics online means that a wallet might be exploited to perform identity theft or similarly serious crimes. Being able to revoke a wallet could be a necessary component of such a system. The situation with a complete credential presentation, or presentations that include identifying information, is therefore fairly simple. When the presentation contains identifying information, like names and addresses, preventing linkability provides no benefit. So providing a direct means of revocation checking is easy. With verifier-issuer linkability, the verifier can just directly ask the issuer whether the credential was revoked. This is not possible if there is a need to perform offline verification, but it might be possible to postpone such checks or rely on batched revocations (CRLite is a great example of a batched revocation system). Straightforward or not, providing adequate scale and availability makes the implementation of a reliable revocation system a difficult task. Revoking anonymous credentials When you have anonymous credentials, which protect against verifier-issuer linkability, revocation is very challenging. A zero-knowledge assertion that the credential has not been revoked is theoretically possible, but there are a number of serious challenges. One issue is that proof of non-revocation depends on providing real-time or near-real-time information about the underlying credential. Research into solving the problem is still active. It is possible that revocation is unnecessary for some selective disclosure cases, especially those where zero-knowledge proofs are used. We have already accepted some baseline amount of abuse of credentials, by virtue of permitting non-identifying and unlinkable presentations. Access to a stolen credential is roughly equivalent to sharing or borrowing a credential. So, as long as the overall availability of stolen credentials is not too high relative to the availability of borrowed credentials, the value of revocation is low. In other words, if we accept some risk that credentials will be borrowed, then we can also tolerate some use of stolen credentials. 
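For the linkable case described above, the verifier-side check can be as simple as a lookup against a list of revoked identifiers published by the issuer, either queried in real time or downloaded in batches the way CRLite does for certificates. A minimal sketch with hypothetical names:

```python
# A batched revocation check: the verifier periodically refreshes a set of
# revoked credential identifiers published by the issuer, then checks each
# presentation against it locally (which also works offline, at the cost of
# some staleness).
revoked_ids = set()

def refresh_revocations(published: list[str]) -> None:
    revoked_ids.clear()
    revoked_ids.update(published)

def accept(presentation: dict) -> bool:
    # Requires verifier-issuer linkability: the presentation has to carry a
    # stable identifier that the issuer also knows.
    return presentation["credential_id"] not in revoked_ids

refresh_revocations(["01-47-87441"])
print(accept({"credential_id": "01-47-87441"}))  # False: revoked
print(accept({"credential_id": "02-33-11111"}))  # True
```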
Revocation complications Even with linkability, revocation is not entirely trivial. Revocation effectively creates a remote kill switch for every credential that exists. The safeguards around that switch are therefore crucial in determining how the system behaves. For example, if any person can ask for revocation, that might be used to deny a person the use of a perfectly valid credential. There are well documented cases where organized crime has deprived people of access to identification documents in order to limit their ability to travel or access services. These problems are tied more to the processes that are used than to the technical design. However, technical measures might be used to improve the situation. For instance, SD-BLS suggests that threshold revocation be used, where multiple actors need to agree before a credential can be revoked. All told, if dealing with revocation on the Web has taught us anything, it might not be worth the effort to add revocation at all. It might be easier — and no less safe — to frequently update credentials. Authorizing Verifiers Selective disclosure systems can fail to achieve their goals if there is a power imbalance between verifiers and holders. For instance, a verifier might withhold services unless a person agrees to provide more information than the verifier genuinely requires. That is, the verifier might effectively extort people to provide non-essential information. A system that lets people withhold information to improve their privacy is pointless if attempts to exercise that choice are not supported. One way to work around this is to require that verifiers be certified before they can request certain information. For instance, EU digital identity laws require that it be possible to restrict who can request a presentation. This might involve the certification of verifiers, so that verifiers would be required to provide holders with evidence that they are authorized to receive certain attributes. A system of verifier authorization could limit overreach, but it might also render credentials ineffective in unanticipated situations, including for interactions in foreign jurisdictions. Authorizations also need monitoring for compliance. Businesses — particularly larger businesses that engage in many activities — might gain authorization for many different purposes. Abuse might occur if a broad authorization is used where a narrower authorization is needed. That requires more than a system of authorization: it means creating a way to ensure that businesses or agencies are accountable for their use of credentials. Quantum computers Some of these systems depend on cryptography that is only classically secure. That is, a sufficiently powerful quantum computer might be able to attack the system. Salted hash selective disclosure relies only on digital signatures and hash functions, which makes it the most resilient to attacks that use a quantum computer. However, many of the other systems described rely on some version of the discrete logarithm problem being difficult, which can make them vulnerable. Predicting when a cryptographically-relevant quantum computer might be created is as hard as any other attempt to look into the future, but we can understand some of the risks. Quantum computers present two potential threats to any system that relies on classical cryptographic algorithms: forgery and linkability. A sufficiently powerful quantum computer might use something like Shor’s algorithm to recover the secret key used to issue credentials. 
Once that key has been obtained, new credentials could be easily forged. Of course, forgeries are only a threat after the key is recovered. Some schemes that rely on classical algorithms could be vulnerable to linking by a quantum computer, which could present a very serious privacy risk. This sort of linkability is a serious problem because it potentially affects presentations that are made before the quantum computer exists. Presentations that were saved by verifiers could later be linked. Some of the potential mechanisms, such as the BBS algorithm, are still able to provide privacy, even if the underlying cryptography is broken by a quantum computer. The quantum computer would be able to create forgeries, but not break privacy by linking presentations. If we don’t need to worry about forgery until a quantum computer exists, and privacy is maintained even then, we are largely concerned with how long we might be able to use these systems. That gets back to the problem of predictions and balancing the cost of deploying a system against how long the system is going to remain secure. Credential systems take a long time to deploy, so — while they are not vulnerable to a future advance in the same way as encryption — planning for that future is likely necessary. The limitations of technical solutions If there is a single conclusion to this article, it is that the problems that exist in identity systems are not primarily technical. There are several very difficult problems to consider when establishing a system. Those problems only start with the selection of technology. Any technological choice presents its own problems. Selective disclosure is a powerful tool, but with limited applicability. Properties like linkability need to be understood or managed. Otherwise, the actual privacy properties of the system might not meet expectations. The same goes for any rate limits or revocation that might be integrated. How different actors might participate in the system needs further consideration. Decisions about who might act as an issuer in the system need a governance structure. Otherwise, some people might be unjustly denied the ability to participate. For verifiers, their incentives need to be examined. A selective disclosure system might be built to be flexible, which might seem to empower people with choice about what they disclose. However, that flexibility might be abused by powerful verifiers to extort additional information from people. All of which is to say: better technology does not always help as much as you might hope. Many of the problems are people problems, social problems, and governance problems, not technical problems. Technical mechanisms tend to only change the shape of non-technical problems. That is only helpful if the new shape of the problem is something that people are better able to deal with. This is different from licensing to drive, where most countries recognize driving permits from other jurisdictions. That’s probably because buying alcohol is a simple check based on an objective measure, whereas driving a car is somewhat more involved. ↩︎ Well, most of the US. It has to do with highways. ↩︎ The issuer might want some additional assurances, like some controls over how the credential can be accessed, controls over what happens if a device is lost, stolen, or sold, but they all basically reduce to this basic idea. 
↩︎ If the presentation didn’t include information about the verifier and time of use, one verifier could copy the presentation they receive and impersonate the person. ↩︎ Rainbow tables can handle relatively large numbers of values without too much difficulty. Even some of the richer fields can probably be put in a rainbow table. For example, there are about 1.4 million people in Hawaii. All the values for some fields are known, such as the complete set of possible addresses. Even if every person has a unique value, a very simple rainbow table for a field would take a few seconds to build and around 100Mb to store, likely a lot less. A century of birthdays would take much less storage[6]. ↩︎ In practice, a century of birthdays (40k values) will have no collisions with even a short hash. You don’t need much more than 32 bits for that many values. Furthermore, if you are willing to have a small number of values associated with each hash, you can save even more space. 40k values can be indexed with a 16-bit value and a 32-bit hash will produce very few collisions. A small number of collisions are easy to resolve by hashing a few times, so maybe this could be stored in about 320kB with no real loss of utility. ↩︎ There are a few things that need care, like whether different attributes can be bound to a different wallet key and whether the attributes need to show common provenance. With different keys, the holder might mix and match attributes from different people into a single presentation. ↩︎ To continue the tortured analogy, imagine that you take a photo of the credential to present, so that the recipient can’t just scratch off the stuff that you didn’t. Or maybe you add a clear coat of enamel. ↩︎ For example, Article 5a, 16 of the EU Digital Identity Framework requires that wallets “not allow providers of electronic attestations of attributes or any other party, after the issuance of the attestation of attributes, to obtain data that allows transactions or user behaviour to be tracked, linked or correlated, or knowledge of transactions or user behaviour to be otherwise obtained, unless explicitly authorised by the user”. ↩︎ A proof can be arbitrarily complex, so this isn’t always cheap, but most of the things we imagine here are probably very manageable. ↩︎ This isn’t quite accurate. The typical approach involves the use of tokens that repeat if the credential is reused too often. That makes it possible to catch reuse, not prevent it. ↩︎
  • This Week In Rust: This Week in Rust 574 (2024/11/20 05:00)
    Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR. Want TWIR in your inbox? Subscribe here. Updates from Rust Community Official Announcing four new members of the compiler team Foundation Announcing the Rust Foundation’s Newest Project Director: Carol Nichols Rust Foundation Collaborates With AWS Initiative to Verify Rust Standard Libraries EuroRust 2024 Through the Fire and the Flames - Jon Gjengset Build bigger in less time: code testing beyond the basics - Predrag Gruevski A gentle introduction to procedural macros - Sam Van Overmeire Practical Rust for Web Audio - Attila Haraszti Augmented docs: a love letter to rustdoc and docs.rs - Francois Mockers The Impact of Memory Allocators on Performance: A Deep Dive - Arthur Pastel Proving macro expansion with expandable - Sasha Pourcelot Runtime Scripting for Rust Applications - Niklas Korz Unleashing 🦀 The Ferris Within - Victor Ciura The first six years in the development of Polonius - Amanda Stjerna Non-binary Rust: Between Safe and Unsafe - Boxy Uwu Writing a SD Card driver in Rust - Johnathan Pallant My Journey from WebDev to Medical Visualization Rustacean - David Peherstorfer Code to contract to code: making ironclad APIs - Adam Chalmers Rust Irgendwie, Irgendwo, Irgendwann - Henk Oordt Linting with Dylint - Samuel Moelius RustConf 2024 Dr. Rebecca Rumbul (Rust Foundation Executive Director): "Welcome Remarks" Aeva Black: "Making Open Source Secure by Design" | KEYNOTE Marc-André Moreau (CTO, Devolutions): Diamond Sponsor Talk Nick Cameron: "Eternal Sunshine of the Rustfmt'ed Mind" Jack Wrenn: "Safety Goggles for Alchemists" Rohit Dandamundi: "Widening the Ferris Net" Isabel Atkinson: "Rustify Your API: A Journey from Specification to Implementation" Sparrow Li: "The Current State and Future of Rust Compiler Performance" Nathan Stocks: "Shooting Stars! Livecode a Game in Less Than 30 Mins" Pedro Rittner & Sean Lawlor: "Actors and Factories in Rust" David Koloski: "The (Many) Mistakes I Made in rkyv" Kyler Chin: "How We Built a Rust-y Real-Time Public Transport Map" Adam Chalmers: "Making a Programming Language for 3D Design" Martin Pool: "Finding Bugs with cargo-mutants" 1Password, Adobe, Woven by Toyota: Gold Sponsor Lightning Talks Miguel Ojeda (Rust for Linux): KEYNOTE JetBrains, K2 Space, Zed: Gold Sponsor Lightning Talks Jonathan Pallant: "Six Clock Cycle per Pixel - Graphics on the Neotrol Pico" Joannah Nanjekye: "Rust Interop: Memory Safety Across Foreign Function Boundaries" Jacob Pratt: "Compiler-Driven Development: Making Rust Work for You" Angus Morrison: "How Rust is Powering Next-Generation Space Mission Simulators" Michael Gattozzi: "What Happens When You Run Cargo Build?" 
Pallavi Thukral: "Rust in Motion: Building Reliable and Performant Robotics Systems" Marc-André Giroux: "Low-Overhead Observability in High-RPS Servers" Predrag Gruevski: "Putting an End to Accidental SemVer-Breaking Changes" Chris Biscardi: "Web Sites, Web Apps, and Web Assembly" Nicholas Matsakis (Co-Lead, Rust Design Team): "Rust Roadmap 2.0" | KEYNOTE Frédéric Ameye: "Rust in Legacy Regulated Industries" Walter Pearce: "Dude, Where's My C?" Ed Jones: "Fearless Refactoring & the Art of Argument-Free Rust" Dr. Rebecca Rambul: Opening Remarks OxidOS Sponsored Talk Martin Geisler: "Rust Training at Scale" Quanyi Ma: "Embracing Monorepo and LLM Evolution" Joshua Liebow-Feeser: "Safety in an Unsafe World" Jack Huey & James Munns: "An Outsider's Guide to the Rust Project" Newsletters This Month in Rust OSDev: October 2024 Project/Tooling Updates hyper in curl Needs a Champion godot-rust November 2024 dev update Security in hickory-dns Virtual Geometry in Bevy 0.15 Glues v0.5 - Editor Tabs and Enhanced Vim Commands Streaming data analytics, Fluvio 0.13.0 release Rerun 0.20 - Geospatial data and full H.264 support git-cliff 2.7.0 is released! (a highly customizable changelog generator) Observations/Thoughts You don't (always) need async The fastest WASM zlib A rustc soundness bug in the wild [audio] Compile Time Crimes [audio] Oxide with Steve Klabnik Rust Walkthroughs Zed Rope Optimizations, Part 1 Futexes at Home Build your own SQLite, Part 3: SQL parsing 101 dtype_dispatch: a most beautiful hack Sending Events to Bevy from anywhere Building an email address parser in Rust with nom Exploring Async Runtimes by Building our Own Traits to Unify all Vectors Basics of Pinning in Rust Building a Wifi-controlled car with Rust and ESP32 [video] Build with Naz : Diesel ORM, SQLite and Rust Crate of the Week This week's crate is fixed-slice-vec, a no-std dynamic length Vec with runtime-determined maximum capacity backed by a slice. Thanks to Jay Oster for the suggestion! Please submit your suggestions and votes for next week! Calls for Testing An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward: RFCs No calls for testing were issued this week. Testing Steps Rust No calls for testing were issued this week. Testing steps Rustup No calls for testing were issued this week. Testing steps If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing. Call for Participation; projects and speakers CFP - Projects Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! CFP - Events Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker. 
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! Updates from the Rust Project 480 pull requests were merged in the last week ABI checks: add support for some tier3 arches, warn on others ABI checks: add support for tier2 arches CFI: append debug location to CFI blocks AIX: Add crate "unwind" to link with libunwind illumos: use pipe2 to create anonymous pipes check_consts: fix error requesting feature gate when that gate is not actually needed const_panic: inline in bootstrap builds to avoid f16/f128 crashes rustc_metadata: Preprocess search paths for better performance suggest_borrow_generic_arg: instantiate clauses properly add visit_coroutine_kind to ast::Visitor add parentheses when unboxing suggestion needed add reference annotations for diagnostic attributes allow CFGuard on windows-gnullvm always inline functions signatures containing f16 or f128 borrowck diagnostics: suggest borrowing function inputs in generic positions change Visitor::visit_precise_capturing_arg so it returns a Visitor::Result change intrinsic declarations to new style check use<..> in RPITIT for refinement consolidate type system const evaluation under traits::evaluate_const delete the cfg(not(parallel)) serial compiler deny capturing late-bound ty/const params in nested opaques diagnostics for let mut in item context extend the "if-unchanged" logic for compiler builds feature gate yield expressions not in 2024 fix ICE when passing DefId-creating args to legacy_const_generics fix REGISTRY_USERNAME to reuse cache between auto and pr jobs fix a copy-paste issue in the NuttX raw type definition fix compilation error on Solaris due to flock usage fix span edition for 2024 RPIT coming from an external macro for expr return (_ = 42); unused_paren lint should not be triggered handle infer vars in anon consts on stable improve VecCache under parallel frontend increase accuracy of if condition misparse suggestion liberate aarch64-gnu-debug from the shackles of --test-args=clang likely unlikely fix make precise capturing suggestion machine-applicable only if it has no APITs make sure to ignore elided lifetimes when pointing at args for fulfillment errors mention both release and edition breakage for never type lints move all mono-time checks into their own folder, and their own query proper support for cross-crate recursive const stability checks querify MonoItem collection recurse into APITs in impl_trait_overcaptures refactor configure_annotatable remove attributes from generics in built-in derive macros rename rustc_const_stable_intrinsic → rustc_intrinsic_const_stable_indirect skip locking span interner for some syntax context checks trim extra space when suggesting removing bad let trim whitespace in RemoveLet primary span tweak attributes for const panic macro unify FnKind between AST visitors and make WalkItemKind more straight forward use TypingMode throughout the compiler instead of ParamEnv warn about invalid mir-enable-passes pass names miri: implement blocking eventfd miri: refactor: refine thread variant for windows miri: renamed this to ecx in extern_static miri: use -Zroot-dir instead of --remap-path-prefix for diagnostic dir handling stabilize const_atomic_from_ptr stabilize const_option_ext stabilize const_ptr_is_null stabilize const_unicode_case_lookup vectorize slice::is_sorted #[inline] integer parsing functions add as_slice/into_slice for IoSlice/IoSliceMut generalize 
NonNull::from_raw_parts per ACP362 rwlock downgrade implement mixed_integer_ops_unsigned_sub improve codegen of fmt_num to delete unreachable panic float types: move copysign, abs, signum to libcore make CloneToUninit dyn-compatible mark is_val_statically_known intrinsic as stably const-callable optimize char::to_digit and assert radix is at least 2 hashbrown: further sequester Group/Tag code hashbrown: mark const fn constructors as rustc_const_stable_indirect codegen_gcc: fix volatile loads and stores cargo resolver: Stabilize resolver v3 cargo rustdoc: diplay env vars in extra verbose mode cargo fix: error context for git_fetch refspec not found cargo: always include Cargo.lock in published crates cargo: migrate build-rs to the Cargo repo cargo: simplify English used in guide rustdoc search: allow queries to end in an empty path segment rustdoc-search: case-sensitive only when capitals are used rustdoc-search: use smart binary search in bitmaps rustdoc: treat declarative macros more like other item kinds rustdoc: use a trie for name-based search rustdoc: Fix duplicated footnote IDs rustdoc: Fix handling of footnote reference in footnote definition rustdoc: Fix items with generics not having their jump to def link generated rustdoc: Perform less work when cleaning middle::ty parenthesized generic args clippy: missing_safety_doc accept uppercase "SAFETY" clippy: allow conditional Send futures in future_not_send clippy: do not trigger if_let_mutex starting from Edition 2024 clippy: don't lint CStr literals, do lint float literals in redundant_guards clippy: handle Option::map_or(true, …) in unnecessary_map_or lint clippy: new lint: unnecessary_map_or clippy: support user format-like macros rust-analyzer: migrate reorder_fields assist to use SyntaxFactory Rust Compiler Performance Triage We saw improvements to a large swath of benchmarks with the querification of MonoItem collection (PR #132566). There were also some PRs where we are willing to pay a compile-time cost for expected runtime benefit (PR #132870, PR #120370), or pay a small cost in the single-threaded case in exchange for a big parallel compilation win (PR #124780). Triage done by @pnkfelix. Revision range: d4822c2d..7d40450b 2 Regressions, 4 Improvements, 10 Mixed; 6 of them in rollups 47 artifact comparisons made in total Full report here Approved RFCs Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week: [RFC] Thread spawn hook (inheriting thread locals) Final Comment Period Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. RFCs No RFCs were approved this week. Tracking Issues & PRs Rust [disposition: merge] Always display first line of impl blocks even when collapsed [disposition: merge] Stabilize async closures (RFC 3668) [disposition: merge] Tracking Issue for fn const BuildHasherDefault::new() [disposition: merge] Add AsyncFn* to to the prelude in all editions [disposition: merge] Tracking Issue for #![feature(const_float_methods)] Cargo [disposition: merge] Add future-incompat warning against keywords in cfgs and add raw-idents Language Team [disposition: merge] Consensus check: let-chains and is are not mutually exclusive Language Reference No Language Reference RFCs entered Final Comment Period this week. Unsafe Code Guidelines No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week. 
New and Updated RFCs [new] Hierarchy of Sized traits Upcoming Events Rusty Events between 2024-11-20 - 2024-12-18 🦀 Virtual 2024-11-20 | Virtual (Cardiff, UK) | Rust and C++ Cardiff Rust for Rustaceans Book Club: Chapter 12: Rust Without the Standard Library 2024-11-20 | Virtual and In-Person (Vancouver, BC, CA) | Vancouver Rust Embedded Rust Workshop 2024-11-21 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup 2024-11-21 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Trustworthy IoT with Rust--and passwords! 2024-11-21 | Virtual (Rotterdam, NL) | Bevy Game Development Bevy Meetup #7 2024-11-25 | Virtual (Bratislava, SK) | Bratislava Rust Meetup Group ONLINE Talk, sponsored by Sonalake - Bratislava Rust Meetup 2024-11-26 | Virtual (Dallas, TX, US) | Dallas Rust Last Tuesday 2024-11-28 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-11-28 | Virtual (Nürnberg, DE) | Rust Nuremberg Rust Nürnberg online 2024-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup Buffalo Rust User Group 2024-12-04 | Virtual (Indianapolis, IN, US) | Indy Rust Indy.rs - with Social Distancing 2024-12-05 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup 2024-12-07 | Virtual (Kampala, UG) | Rust Circle Kampala Rust Circle Meetup 2024-12-10 | Virtual (Dallas, TX, US) | Dallas Rust Second Tuesday 2024-12-11 | Virtual (Vancouver, BC, CA) | Vancouver Rust Rust Study/Hack/Hang-out 2024-12-12 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-12-12 | Virtual (Nürnberg, DE) | Rust Nuremberg Rust Nürnberg online 2024-12-17 | Virtual (Washington, DC, US) | Rust DC Mid-month Rustful Africa 2024-12-10 | Johannesburg, ZA | Johannesburg Rust Meetup Hello World... 
again 2024-12-07 | Virtual( Kampala, UG) | Rust Circle Kampala Rust Circle Meetup Asia 2024-11-21 | Seoul, KR | Rust Programming Meetup Seoul Seoul Rust Meetup 2024-11-28 | Bangalore/Bengaluru, IN | Rust Bangalore RustTechX Summit 2024 BOSCH 2024-11-30 | Tokyo, JP | Rust Tokyo Rust.Tokyo 2024 Europe 2024-11-20 | Paris, FR | Rust Paris Rust meetup #72 2024-11-21 | Copenhagen, DK | Copenhagen Rust Community Rust meetup #53 sponsored by Microsoft 2024-11-21 | Edinburgh, UK | Rust and Friends Rust and Friends (pub) 2024-11-21 | Madrid, ES | MadRust Taller de introducción a unit testing en Rust 2024-11-21 | Oslo, NO | Rust Oslo Rust Hack'n'Learn at Kampen Bistro 2024-11-23 | Basel, CH | Rust Basel Rust + HTMX - Workshop #3 2024-11-25 | Zagreb, HR | impl Zagreb for Rust Rust Meetup 2024/11: Panel diskusija - Usvajanje Rusta i iskustva iz industrije 2024-11-26 | Warsaw, PL | Rust Warsaw New Rust Warsaw Meetup #3 2024-11-27 | Dortmund, DE | Rust Dortmund Rust Dortmund 2024-11-28 | Aarhus, DK | Rust Aarhus Talk Night at Lind Capital 2024-11-28 | Augsburg, DE | Rust Meetup Augsburg Augsburg Rust Meetup #10 2024-11-28 | Berlin, DE | OpenTechSchool Berlin + Rust Berlin Rust and Tell - Title 2024-11-28 | Gdansk, PL | Rust Gdansk Rust Gdansk Meetup #5 2024-11-28 | Hamburg, DE | Rust Meetup Hamburg Rust Hack & Learn with Mainmatter & Otto 2024-11-28 | Manchester, UK | Rust Manchester Rust Manchester November Code Night 2024-11-28 | Prague, CZ | Rust Prague Rust/C++ Meetup Prague (November 2024) 2024-12-03 | Copenhagen, DK | Copenhagen Rust Community Rust Hack Night #11: Advent of Code 2024-12-04 | Oxford, UK | Oxford Rust Meetup Group Oxford Rust and C++ social 2024-12-05 | Olomouc, CZ | Rust Moravia Rust Moravia Meetup (December 2024) 2024-12-06 | Moscow, RU | RustCon RU RustCon Russia 2024-12-11 | Reading, UK | Reading Rust Workshop Reading Rust Meetup 2024-12-12 | Amsterdam, NL | Rust Developers Amsterdam Group Rust Meetup @ JetBrains 2024-12-17 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig Types, Traits und Best Practices North America 2024-11-21 | Chicago, IL, US | Chicago Rust Meetup Rust Happy Hour 2024-11-23 | Boston, MA, US | Boston Rust Meetup Boston Common Rust Lunch, Nov 23 2024-11-25 | Ferndale, MI, US | Detroit Rust Rust Community Meetup - Ferndale 2024-11-26 | Minneapolis, MN, US | Minneapolis Rust Meetup Minneapolis Rust Meetup Happy Hour 2024-11-27 | Austin, TX, US | Rust ATX Rust Lunch - Fareground 2024-11-28 | Mountain View, CA, US | Hacker Dojo RUST MEETUP at HACKER DOJO 2024-12-05 | St. Louis, MO, US | STL Rust Rust Strings 2024-12-10 | Ann Arbor, MI, US | Detroit Rust Rust Community Meetup - Ann Arbor 2024-12-12 | Mountain View, CA, US | Hacker Dojo RUST MEETUP at HACKER DOJO 2024-12-16 | Minneapolis, MN, US | Minneapolis Rust Meetup Minneapolis Rust Meetup Happy Hour 2024-12-17 | San Francisco, CA, US | San Francisco Rust Study Group Rust Hacking in Person Oceania 2024-12-04 | Sydney, AU | Rust Sydney 2024 🦀 Encore ✨ Talks 2024-12-08 | Canberra, AU | Canberra Rust User Group CRUG Xmas party If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access. 
Jobs Please see the latest Who's Hiring thread on r/rust Quote of the Week The whole point of Rust is that before there were two worlds: Inefficient, garbage collected, reliable languages Efficient, manually allocated, dangerous languages And the mark of being a good developer in the first was mitigating the inefficiency well, and for the second it was it didn't crash, corrupt memory, or be riddled with security issues. Rust makes the trade-off instead that being good means understanding how to avoid the compiler yelling at you. – Simon Buchan on rust-users Thanks to binarycat for the suggestion! Please submit quotes and vote for next week! This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez. Email list hosting is sponsored by The Rust Foundation Discuss on r/rust
  • Firefox Nightly: New Address Bar Updates are Here – These Weeks in Firefox: Issue 172 (2024/11/19 19:47)
    Highlights Our newly updated address bar, also known as “Scotch Bonnet”, is available in Nightly builds! 🎉 See this Connect thread to find more details and share your feedback. We are working towards a change where the dedicated search button won’t be shown permanently. Weather suggestions have also been enabled in Nightly. The feature is US only at this time, as part of Firefox Suggest. robwu fixed a regression introduced in Firefox 132 that was triggering the default built-in theme to be re-enabled on every browser startup – Bug 1928082 Love Firefox Profiler and DevTools? Check out the latest DevTools updates and see how they can better help you track down issues. Friends of the Firefox team Resolved bugs (excluding employees) Script to find new contributors from bug list Volunteers that fixed more than one bug abhijeetchawla[:ff2400t] Collin Richards John Bieling (:TbSync) kernp25 New contributors (🌟 = first patch) 🌟 Lucas Enoki removed amIAddonManager interface registration in amManager.sys.mjs abhijeetchawla[:ff2400t] removed using UNSAFE_componentWillMount in devtools/client/shared/components/Frame.js and removed using UNSAFE_componentWillReceiveProps in devtools/client/inspector/animation/components/keyframes-graph/ColorPath.js 🌟 Collin Richards fixed pressing escape to revert address bar after clear and amended focusContentDocumentEsc test for trimHttps Robert Holdsworth made Reader mode not available on PDF files Project Updates Add-ons / Web Extensions Addon Manager & about:addons As a part of Bug 1928082, a failure hit by the new test_default_theme.js xpcshell test will ensure the default theme manifest version is in sync in both the manifest and the XPIProvider startup call to maybeInstallBuiltinAddon WebExtensions Framework Fixed a leak in ext-theme hit when an extension was setting a per-window theme using the theme WebExtensions API – Bug 1579943 ExtensionPolicyService content scripts helper methods has been tweaked to fix a low frequency crash hit by ExtensionPolicyService::ExecuteContentScripts – Bug 1916569 Fixed an unexpected issue with loading moz-extension url as subframe of the background page for extensions loaded temporarily from a directory – Bug 1926106 Prevent window.close() calls originated from the WebExtensions registered devtools panel to close the browser chrome window (when there is only a single tab open) – Bug 1926373 Thanks to Becca King for contributing this fix 🎉 Native messaging support for snap-packaged Firefox (default on Ubuntu): Thanks to Alexandre Lissy for working on finalizing the patches from Bug 1661935 Fixed a regression hit by the snap-packaged Firefox 133 build – Bug 1930119 WebExtension APIs Fixed a bug preventing declarativeNetRequest API dynamic rules to work correctly after a browser restart for extensions not having any static rules registered – Bug 1921353 DevTools DevTools Toolbox Abhijeet Chawla updated some of our code to avoid using deprecated React lifecycle methods (#1810462, #1810437) Jeff Muizelaar added profiler markers for debugger log points (#1928514) Alexandre Poirot added the ability to record console API calls and DOM Mutations when tracing to the profiler (#1929007, #1929004) Alexandre Poirot made it possible to search for traces by function call argument values (#1921020) Alexandre Poirot improved webextension debugging by automatically switching to the expected document on hover when inspecting webextension UI (e.g. 
sidebar, popup, …) (#1754452) Nicolas Chevobbe is still working on adding support for High Contrast Mode in the most used panels (#1916391, #1926878, #1917782, #1926794), #1926851, #1926852, #1916698, #1920711, #1927063, #1926983, #1916650, #1916656, #1921758, #1916682, #1916693, #1916721, #1928108, #1916722, #1916660, #1929200, #1929594, #1916669, #1929508, #1930099) Hubert Boma Manilla fixed the “Go to line” palette + Cmd/Ctrl+B to add a breakpoint on the line the user jumped to (#1925974) Lint, Docs and Workflow A change to the mozilla/reject-addtask-only has just landed on Autoland. This makes it so that when the rule is raising an issue with .only() in tests, only the .only() is highlighted, not the whole test: This should make it easier to develop tests whilst using .only(). The ESLint curly rule has now been re-enabled, after it was accidentally disabled. Migration Improvements mconley landed a patch that makes it so that automatic backups don’t get regenerated if a cookie expires “naturally” mconley also landed some probes to get a sense of how the backup service is performing in the wild, and what errors it is hitting New Tab Page The team is working on some new section layout and organization variations – specifically, we’re testing whether or not recommended stories should be grouped into various configurable topic sections. Stay tuned! Picture-in-Picture Thanks to contributor kern25 for: Updating our Dailymotion site-specific wrapper (bug), which also happens to fix broken PiP captions (bug). Updating our videojs site-specific wrapper (bug) to recognize multiple cue elements. This fixes PiP captions rendering incorrectly on Windows for some sites. Search and Navigation Scotch Bonnet daisuke changed shift-tab so that after focusing the address bar (e.g. ctrl-L), the dedicated search button is focused. jteow fixed an issue where search mode was transferred to other tabs. Address Bar Yazan fixed a regression where favicons could take a while to appear on the address bar. Search Moritz has been working on re-enabling support for the search form to OpenSearch engines & adding it to the search configuration. This is accessed by shift-clicking on one of the engines under the dedicated search button, or on the engines in the separate search bar. Moritz has also been working towards improving search engine support for differently sized icons. Standard8 fixed some no-shadow ESLint warnings and test manifest ordering warnings in search code.
  • Firefox Nightly: Celebrating 20 years of Firefox – These Weeks in Firefox: Issue 171 (2024/11/19 19:46)
    Highlights Firefox is turning 20 years old! Here’s a sneak peek of what’s to come for the browser. We completed work on the new messaging surface for the AppMenu / FxA avatar menu. There’s a new FXA_ACCOUNTS_APPMENU_PROTECT_BROWSING_DATA entry in about:asrouter for people who’d like to try it. Here’s another variation: The experiment will also test new copy for the state of the sign-in button when this message is dismissed: Alexandre Poirot added an option in the Debugger Sources panel to control the visibility of WebExtension content scripts (#1698068) Hubert Boma Manilla improved the Debugger by adding the paused line location in the “paused” section, and making it a live region so it’s announced to screen reader when pausing/stepping (#1843320) Friends of the Firefox team Resolved bugs (excluding employees) Script to find new contributors from bug list Volunteers that fixed more than one bug abhijeetchawla[:ff2400t] New contributors (🌟 = first patch) Diego Ciudad Real added the reusable components group to the “Getting Reviews” documentation abhijeetchawla[:ff2400t] fixed UNSAFE_* react lifecycle methods in devtools/client/* 1810429, 1810480, 1810482, 1810483, 1810485, 1810486 abhijeetchawla[:ff2400t] also updated the console architecture diagram to use MermaidJS Collin Richards made pressing ESC on the address bar return focus to window Project Updates Add-ons / Web Extensions WebExtensions Framework In Firefox >= 133, WebExtensions sidebar panels can close themselves using window.close() (Bug 1921631) Thanks to Becca King for contributing this enhancement to the WebExtensions sidebar panels 🎉 WebExtension APIs A new telemetry probe related to the storage.sync quota has been introduced in Firefox 133 (Bug 1915183). The new probe is meant to help plan replacement of the deprecated Kinto-based backend with a rust-based storage.sync implementation in Firefox for Android (similar to the one introduced in Firefox for desktop v79). DevTools DevTools Toolbox Abhijeet Chawla is refactoring our React codebase, switching away from deprecated lifecycle methods (#1810429, #1810480, #1810482, #1810483, #1810485, #1810486) Abhijeet Chawla also migrated a couple of our ASCII base diagram in our documentation to use MermaidJS instead (#1855165, #1855168) Hubert Boma Manilla made items in the Breakpoints panel accessible to keyboard users (#1870062) Hubert Boma Manilla enabled CodeMirror 6 on the Debugger on Nightly, which is the culmination of months of hard work! (#1904489) Nicolas Chevobbe continues his work on supporting High Contrast Mode in the toolbox (#1916391, #1917782, #1921427, #1926794, #1921428, #1926851, #1926852, #1926878) Nicolas Chevobbe  and Julian Descottes fixed an issue where console.log emitted in Service Workers weren’t displayed in the console (#1921384, #1923648) Nicolas Chevobbe fixed a serious performance issue on pages with thousands of CSS variables (#1922511) Lint, Docs and Workflow The source documentation generate and upload tasks on CI will now output specific TEST-UNEXPECTED-FAILURE lines for new warnings/errors. Running ./mach doc locally should generally do the same. The previous “max n warnings” has been replaced by an allow list of current warnings/errors. Flat config and ESLint v9 support has now been added to eslint-plugin-mozilla. This is a big step in preparing to switch mozilla-central over to the new flat configuration & then v9. hjones upgraded stylelint to the latest version and swapped its plugins to use ES modules. 
New Tab Page

The New Tab team is analyzing the results from an experiment that tried different layouts, to see how it impacted usage. Our Data Scientists are poring over the data to help inform design directions moving forward. Another experiment is primed to run once Firefox 132 fully ships to release: the new "big rectangle" vertical widget will be tested to see whether or not users find this new affordance useful. Work was completed on the Fakespot experiment that we're going to be running for Firefox 133 in December. We'll be using the vertical widget to display products identified as high-quality, with reliable reviews.

Search and Navigation

2024 Address Bar Scotch Bonnet Project: Various bugs were fixed by Mandy, Dale, and Yazan: the quick actions search mode preview was formatted incorrectly (1923550); the dedicated Search button was getting stuck after clicking twice (1913193); about chiclets were not showing up when Scotch Bonnet was enabled (1925643); tab to search was not shown when Scotch Bonnet was enabled (1925129); the search mode switcher now works when the Search Service fails (1906541); strings for the search mode switcher button were localized (1924228); the secondary actions UX was updated to be shown between the heuristic result and the first search suggestion (1922570). To try out these Scotch Bonnet features, use the pref browser.urlbar.scotchBonnet.enableOverride.

Address Bar: Moritz deduplicated bookmark and history results that have the same URL but different references (1924968); this is behind the browser.urlbar.deduplication.enabled pref. Daisuke fixed overlapping remote tab text in compact mode (1924911). Richardscollin, a volunteer contributor, fixed an issue so that pressing Esc while the address bar is selected now returns focus to the window (1086524). Daisuke fixed the "Not Secure" label being illegible when the width is too small (1925332).

Suggest: adw has been working on city-based weather suggestions (1921126, 1925734, 1925735, 1927010). adw is also working on integrating machine learning (MLSuggest) with UrlbarProviderQuickSuggest (1926381).

Search: Moritz landed a patch to localize the keyword for the Wikipedia search engine (1687153, 1925735).

Places: Yazan landed a favicon improvement to how Firefox picks the best favicon for page-icon URLs without a path (1664001). Mak landed a patch that significantly improved performance and memory usage when checking for visited URIs, by executing a single query for the entire batch of URIs instead of running one query per URI (1594368; the batching idea is sketched below).
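The visited-URI change is a classic batching optimization: one query for the whole batch instead of one query per URI. Here is a rough, illustrative sketch of the idea; the table name, column name, and database API below are assumptions made for the example, not the actual Places schema or Mozilla code.

```ts
// Illustrative batching sketch (not the actual Places implementation).
// `Database` stands in for any SQLite-style async query API.
interface Database {
  all(sql: string, params: string[]): Promise<Array<{ url: string }>>;
}

// Returns the subset of `urls` that already appear in a hypothetical
// `history` table, using a single IN (...) query instead of N separate ones.
async function findVisited(db: Database, urls: string[]): Promise<Set<string>> {
  if (urls.length === 0) {
    return new Set<string>();
  }
  const placeholders = urls.map(() => "?").join(", ");
  const rows = await db.all(
    `SELECT url FROM history WHERE url IN (${placeholders})`,
    urls,
  );
  return new Set(rows.map((row) => row.url));
}
```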
  • Firefox Nightly: Experimental address bar deduplication, better auto-open Picture-in-Picture, and more – These Weeks in Firefox: Issue 170 (2024/11/19 19:45)
Highlights

A new messaging surface for the AppMenu and PXI menu is landing imminently so that we can experiment with some messages to help users understand the value of signing up for / signing into a Mozilla account. mconley landed a patch to make the heuristics for the automatic Picture-in-Picture feature a bit smarter. This should make it less likely to auto-PiP silent or small videos. Moritz fixed an older bug for the address bar where duplicate Google Docs results had been appearing in the address bar dropdown. This fix is currently behind a disabled pref; people are free to test the behavior by flipping browser.urlbar.deduplication.enabled to true, and feedback is welcome. We're still investigating UI treatments to eventually show the duplicates. (1389229)

Friends of the Firefox team

Resolved bugs (excluding employees) Script to find new contributors from bug list Volunteers who fixed more than one bug: Gregory Pappas [:gregp] New contributors (🌟 = first patch): 🌟 bootleq moved findbar sound handling to a new JS module; 🌟 Diego Ciudad Real replaced a broken panel-list link with a working link; 🌟 abhijeetchawla[:ff2400t] used MermaidJS to replace the ASCII-based Inspector Panel architecture diagram; 🌟 Haoran Tang fixed a bug to ensure the target language persists when reopening the translations panel.

Project Updates

Add-ons / Web Extensions: In the Addon Manager & about:addons, soft-blocks support was re-introduced in the Add-ons Blocklist v3 (Bug 1917845, Bug 1917846, Bug 1921483, Bug 1923268, Bug 1917852, Bug 1917859, Bug 1922369). The add-ons Install and Optional Permissions dialogs now list all the domains the extension has access to (Bug 1911163). In the WebExtensions Framework, thanks to Florian for moving WebExtensions and AddonsManager telemetry probes away from the legacy telemetry API (Bug 1920073, Bug 1923015). Among WebExtension APIs, the cookies API will sort cookies according to RFC 6265 (Bug 1818968), fixing a small Chrome incompatibility issue; a short sketch of what this looks like from an extension follows below.

Migration Improvements: mconley landed a patch to fix backup regeneration when cookies are purged due to expiry or space limitations.

New Tab Page: We will be running an experiment in December featuring a Fakespot feed in the vertical list on New Tab. This list will show products that have been identified as high quality and as having reliable product reviews. They will link to more detailed Fakespot product pages that give a breakdown of the product analysis. The test is not being monetized. Note: a previous version of this post featured a mockup image that predated the feature being built.

Picture-in-Picture: Special shout-out to volunteer contributor def00111, who has been helping out with our site-specific wrappers!
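To make the cookie-ordering change concrete, here is a small, hedged sketch of how an extension might observe it through browser.cookies.getAll(). The domain is a placeholder, and a real extension would need the "cookies" permission plus a matching host permission in its manifest; only the RFC 6265 ordering itself comes from the bug referenced above.

```ts
// Background-script sketch. `browser` is the WebExtensions namespace
// (Promise-based in Firefox); it is declared loosely here only to keep
// the example self-contained.
declare const browser: {
  cookies: {
    getAll(details: { domain?: string }): Promise<
      Array<{ name: string; value: string; path: string }>
    >;
  };
};

async function logCookiesForExampleCom(): Promise<void> {
  const cookies = await browser.cookies.getAll({ domain: "example.com" });

  // With the RFC 6265 ordering, cookies with longer paths come first, and
  // cookies with equal-length paths are ordered by earlier creation time,
  // matching what Chrome already returns.
  for (const cookie of cookies) {
    console.log(`${cookie.name}=${cookie.value} (path: ${cookie.path})`);
  }
}
```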
Search and Navigation

2024 Address Bar Updates (previously known as "Project Scotch Bonnet"): For Intuitive Search Keywords, Mandy added new telemetry related to intuitive search keywords (1919180) and also landed a patch to list the keywords in the results panel when a user types `@` (1921549). For the Unified Search Button, Daisuke refined our telemetry so that user interactions with the unified search button are differentiated from user interactions with the original one-off search button row (1919857). For Persisted Search, James fixed a bug related to persisting search terms for non-default search engines (1921092). For Search Config v2, Moritz landed a patch that streamlines how we handle search parameter names for search engine URLs (1895934). For Search & Suggest, Nan landed a patch that allows us to integrate a user-interest-based relevance ranking into the address bar suggestions we receive from our Merino server (1923187).

Places Database: Daisuke landed a series of patches so that the Places database no longer fetches any icons over the network. Icon fetching is now delegated to consumers, which have better knowledge of how to do it safely. (1894633)

Favicons: Yazan landed several patches related to favicons which improve the way we pick the best favicon, avoiding excessive downscaling of large favicons that could make the favicon unrecognizable (1494016, 1556396, 1923175; the selection idea is sketched below).
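The favicon work boils down to choosing an icon close to the size it will actually be displayed at, instead of scaling a very large icon down to a tiny one. Here is a hedged, illustrative sketch of that selection idea; it is not the actual Firefox algorithm, and the sizes are invented for the example.

```ts
// Illustrative only: pick the smallest available icon that is at least as
// large as the requested display size; if none is big enough, fall back to
// the largest one. This avoids downscaling a 512px icon to 16px when a
// 16px or 32px variant exists.
function pickFaviconSize(availableSizes: number[], displaySize: number): number {
  if (availableSizes.length === 0) {
    throw new Error("no favicon sizes available");
  }
  const sorted = [...availableSizes].sort((a, b) => a - b);
  const bigEnough = sorted.find((size) => size >= displaySize);
  return bigEnough ?? sorted[sorted.length - 1];
}

// Example: pickFaviconSize([16, 32, 512], 16) === 16, not 512.
```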
  • Mozilla Thunderbird: Maximize Your Day: Make Important Messages Stand Out with Filters (2024/11/19 16:57)
For the past two decades, I’ve been trying to get on Jeopardy. This is harder than answering a Final Jeopardy question in your toughest subject. Roughly a tenth of people who take the exam get invited to auditions, and only a tenth of those who make it to auditions make it to the Contestant Pool and into the show. During this time, there are two emails you DON’T want to miss: the first saying you made it to auditions, and the second that you’re in the Contestant Pool. (This second email comes with your contestant form, and yes, I have my short, fun anecdotes to share with host Ken Jennings ready to go.) The next time I audition, reader, I won’t be refreshing my inbox every five minutes. Instead, I’ll use Thunderbird Filters to make any emails from the Jeopardy Contestant department STAND OUT. Whether you’re hoping to be called up for a game show, waiting on important life news, or otherwise needing to be alert, Thunderbird is here to help you out.

Make Important Messages Stand Out with Filters

Most of our previous posts have focused on cleaning out your inbox. Now, in addition to showing you how Thunderbird can clear visual and mental clutter out of the way, we’re using filters to make important messages stand out. Click the Application menu button, then Tools, followed by Message Filters. Click New. A Filter Rules dialog box will appear. In the “Filter Name” field, type a name for your filter. Under “Apply filter when”, check one of the options or both. (You probably won’t want to change from the default “Getting New Mail” and “Manually Run” options.) In the “Getting New Mail:” dropdown menu, choose either Filter before Junk Classification or Filter after Junk Classification. (As for me, I’m choosing Filter before Junk Classification, just in case.) Choose a property, a test and a value for each rule you want to apply: A property is a message element or characteristic such as “Subject” or “From”. A test is a check on the property, such as “contains” or “is in my address book”. A value completes the test with a specific detail, such as an email address or keyword. Choose one or more actions for messages that meet those criteria. (For extra caution, I put THREE actions on my sample filter. You might only need one!) <figcaption class="wp-element-caption">(Note – not the actual Jeopardy addresses!)</figcaption>

Find (and Filter) Your Important Messages

Thunderbird also lets you create a filter directly from a message. Say you’re organizing your inbox and you see a message you don’t want to miss in the future. Highlight the email, and click on the Message menu button. Scroll down to and click on ‘Create Filter from Message.’ This will open a New Filter window, automatically filled with the sender’s address. Add any other properties, tests, or values, as above. Choose your actions, name your filter, and ta-da! Your new filter will help you know when that next important email arrives.

Resources

As with last month’s article, this post was inspired by a Mastodon post (sadly, this one was deleted, but thank you, original poster!). Many thanks to our amazing Knowledge Base writers at Mozilla Support who wrote our guide to filters. Also, thanks to Martin Brinkmann and his ghacks website for this and many other helpful Thunderbird guides!
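If it helps to see the property / test / value / action model in one place before clicking through the resources below, here is a small conceptual sketch of a filter like the Jeopardy example. This is purely illustrative TypeScript, not Thunderbird's internal filter format, and the address is a stand-in just as in the post.

```ts
// Conceptual model of a message filter: each rule pairs a property with a
// test and a value, and the filter lists the actions to run on a match.
interface Message {
  from: string;
  subject: string;
}

type Rule = { property: keyof Message; test: "contains" | "is"; value: string };
type Action = "star" | "tag as Important" | "move to folder";

interface Filter {
  name: string;
  rules: Rule[];
  matchAll: boolean; // "match all rules" vs. "match any rule"
  actions: Action[];
}

// Not the actual Jeopardy address, just a placeholder.
const jeopardyFilter: Filter = {
  name: "Jeopardy!",
  rules: [{ property: "from", test: "contains", value: "contestants@example.com" }],
  matchAll: false,
  actions: ["star", "tag as Important"],
};

function matches(message: Message, filter: Filter): boolean {
  const checkRule = (rule: Rule) =>
    rule.test === "contains"
      ? message[rule.property].includes(rule.value)
      : message[rule.property] === rule.value;
  return filter.matchAll ? filter.rules.every(checkRule) : filter.rules.some(checkRule);
}
```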
Getting Started with Filters Mozilla Support article: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters How to Make Important Messages Stick Out in Thunderbird: https://www.ghacks.net/2022/12/02/how-to-make-important-emails-stick-out-in-thunderbird/ The post Maximize Your Day: Make Important Messages Stand Out with Filters appeared first on The Thunderbird Blog.
  • The Mozilla Blog: 20 years of Firefox: How a community project changed the web (2024/11/18 23:40)
    What was browsing the web like in 2004? People said things like “surfing the internet,” for starters. Excessive pop-up ads were annoying but they felt like the norm. The search bar and multiple tabs did not exist, and there seemed to be only one browser in sight. That is, until Firefox 1.0 arrived and gave it real competition. Built by a group of passionate developers who believed the web should be open, safe and not controlled by a single tech giant, Firefox became the choice for anyone who wanted to experience the internet differently. Millions made the switch, and the web felt bigger.  As the internet started to evolve, so did Firefox — becoming a symbol of open innovation, digital privacy and, above all, the ability to experience the web on your own terms. Here are some key moments of the last 20 years of Firefox. 2004: Firefox 1.0 launch Firefox 1.0 launched on Nov. 9, 2004. As an open-source project, Firefox was developed by a global community of volunteers who collaborated to make a browser that’s more secure, user-friendly and customizable. With built-in pop-up blocking, users could finally decide when and if they wanted to see pop-ups. Firefox introduced tabbed browsing, which let people open multiple sites in one window. It also made online safety a priority, with fraud protection to guard against phishing and spoofing.  <figcaption class="wp-element-caption">On Dec. 15, 2004, Firefox’s community-funded, two-page ad appeared in The New York Times, featuring the names of thousands of supporters and declaring to millions that a faster, safer, and more open browser was here to stay.</figcaption> 2005: Mozilla Developer Center Mozilla launched the Mozilla Developer Center (now MDN Web Docs) as a hub for web standards and developer resources. Today, MDN remains a trusted resource maintained by Mozilla and a global community of contributors. <figcaption class="wp-element-caption">Local Firefox fans in Oregon made a Firefox crop circle in an oat field in August 2006. </figcaption> 2007: Open-source community support The SUMO (support.mozilla.org) platform was originally built in 2007 to provide an open-source community support channel for users, and to help us collaborate more effectively with our volunteer contributors. Over the years, SUMO has become a powerful platform that helps users get the most out of Firefox, provides opportunities for users to connect and learn more from each other, and allows us to gather important insights – all powered by our community of contributors. Six active contributors have been with us since day one (shout outs to cor-el, jscher2000, James, mozbrowser, AliceWyman and marsf) and 16 contributors have been here for 15+ years! <figcaption class="wp-element-caption">A Mozilla contributor story by Chris Hoffman.</figcaption> 2008: A Guinness World Record Firefox 3.0 made history by setting a Guinness World Record for the most software downloads – over 8 million – in a single day. The event known as Download Day was celebrated across Mozilla communities worldwide, marking a moment of pride for developers, contributors and fans.  2010: Firefox goes mobile Firefox made its debut on mobile on Nokia N900. It brought beloved features like tabbed browsing, the Awesome Bar, and Weave Sync, allowing users to sync between desktop and mobile. It also became the first mobile browser to support add-ons, giving users the freedom to customize their browsing on the go. 
<figcaption class="wp-element-caption">Pocketfox by Yaroslaff Chekunov, the winner of the “Firefox Goes Mobile” design challenge. </figcaption> 2013: Hello Chrome, it’s Firefox calling Firefox made a major leap with WebRTC (Web Real-Time Communication), allowing users to make video and voice calls directly between Firefox and Chrome without needing plugins. This cross-browser communication was a breakthrough for open web standards, making it easier for users to connect seamlessly. Firefox also introduced RTCPeerConnection, enabling users to share files during video calls, further enhancing online collaboration. 2014: Privacy on the web Firefox has shipped a steady drumbeat of anti-tracking features over the years, greatly increasing the privacy of the web. The impact has gone beyond just Firefox users, as online privacy is now a table-stakes deliverable for all browsers. 2014: Block trackers from loading 2016: Containers can isolate sites within Firefox 2018: Enhanced tracking protection blocks tracking cookies (more on this below) 2020: Significant improvements to prevent sites from “fingerprinting” users 2022: Total Cookie Protection isolates all third party tracking cookies (more on this below) 2017: Twice as fast, 30% less memory Firefox took a huge step forward with Firefox Quantum, an update that made browsing twice as fast. Thanks to a new engine built using Mozilla’s Rust programming language, Firefox Quantum made pages load faster and used 30% less memory than Chrome. It was all about speed and efficiency, letting users browse quicker without slowing down their computer. 2018: Firefox blocks trackers  Enhanced Tracking Protection (ETP) was introduced as a new feature that blocks third-party cookies, the primary tool used by companies to track users across websites. ETP made it simple for users to protect their privacy by automatically blocking trackers while ensuring websites still functioned smoothly. Initially an optional feature, ETP became the default setting by early 2019, marking a significant step in giving users better privacy without sacrificing browsing experience. 2019: Advocacy for media formats not encumbered by patents Mozilla played a significant role in the standardization and adoption of AV1 and AVIF as part of its commitment to open, royalty-free and high-quality media standards for the web. Shipping early support in Firefox for AV1 and AVIF, along with Mozilla’s advocacy, accelerated adoption by platforms like YouTube, Netflix and Twitch. The result is a next-generation, royalty-free video codec that provides high-quality video compression without licensing fees, making it an open and accessible choice for the entire web. 2020: Adobe Flash is discontinued Adobe retired Flash on Dec. 31, 2020. Mozilla and Firefox played a pivotal role in the end of Adobe Flash by leading the transition toward more secure, performant and open web standards like HTML5, WebGL and WebAssembly. As Firefox and other browsers adopted HTML5, it helped establish these as viable alternatives to Flash. This shift supported more secure and efficient ways to deliver multimedia content, minimizing the web’s reliance on proprietary plugins like Flash. 2022: Total Cookie Protection  Firefox took privacy further with Total Cookie Protection (TCP), building on the foundation of ETP. Cookies, while helpful for site-specific tasks like keeping you logged in, can also be used by advertisers to track you across multiple sites. 
TCP isolates cookies by keeping them locked to the site they came from, preventing cross-site tracking. Inspired by the Tor Browser’s privacy features, Firefox’s approach integrates this tool directly into ETP, giving users more control over their data and stopping trackers in their tracks. 2024: 20 years of Firefox These milestones are just a snapshot of Firefox’s story, full of many chapters that have shaped the web as we know it. Today, Firefox remains at the forefront of championing privacy, open innovation and choice. And while the last 20 years have been transformative, the best is yet to come. <figcaption class="wp-element-caption">From left to right: Stuart Parmenter, Tracy Walker, Scott McGregor, Ben Goodger, Myk Melez, Chris Hofmann, Asa Dotzler, Johnny Stenbeck, Rafael Ebron, Jay Patel, Vlad Vucecevic and Bryan Ryner. Sitting, from left to right: Chase Philips, David Baron, Mitchell Baker, Brendan Eich, Dan Mosedale, Chris Beard and Doug Turner in 2004. Credit: Mozilla</figcaption> <figcaption class="wp-element-caption">Mozillians and Foxy in Dublin, Ireland in August 2024. Credit: Mozilla</figcaption> Get Firefox Get the browser that protects what’s important The post 20 years of Firefox: How a community project changed the web appeared first on The Mozilla Blog.
  • The Mozilla Blog: Charging ahead on AI openness and safety (2024/11/18 17:54)
On the official “road to the French Government’s AI Action Summit,” Mozilla and Columbia University’s Institute of Global Politics are bringing together AI experts and practitioners to advance AI safety approaches that embody the values of open source. On Tuesday in San Francisco, Mozilla and Columbia University’s Institute of Global Politics will hold the Columbia Convening on AI Openness and Safety. The convening, which takes place on the eve of the meeting of the International Network of AI Safety Institutes, will bring together leading researchers and practitioners to advance practical approaches to AI safety that embody the values of openness, transparency, community-centeredness and pragmatism. The Convening seeks to make these values actionable, and demonstrate the power of centering pluralism in AI safety to ultimately empower developers to create safer AI systems. The Columbia Convening series started in October 2023 before the UK Safety Summit, where over 1,800 leading experts and community members jointly stated in an open letter coordinated by Mozilla and Columbia that “when it comes to AI Safety and Security, openness is an antidote not a poison.” In February 2024, the first Columbia Convening was held with this community to explore the complexities of openness in AI. It culminated in a collective framework characterizing the dimensions of openness throughout the stack of foundation models. This second convening holds particular significance as an official event on the road to the AI Action Summit, to be held in France in February 2025. The outputs and conclusions from the collective work will directly shape the agenda and actions for the Summit, offering a crucial opportunity to foreground openness, pluralism and practicality in high-level conversations on AI safety. The timing is particularly relevant as the open ecosystem gains unprecedented momentum among AI practitioners. Open models now cover a large range of modalities and sizes with performance almost on par with the best closed models, making them suitable for most AI use cases. This growth is reflected in the numbers: Hugging Face reported an 880% increase in the number of generative AI model repositories in two years, from 160,000 to 1.57 million. In the private sector, according to a 2024 study by the investment firm a16z, 46% of Fortune 500 company leaders say they strongly prefer to leverage open source models. In this context, many researchers, policymakers and companies are embracing openness in AI as a benefit to safety, rather than a risk. There is also an increased recognition that safety is as much a system property as a model property (if not more so), making it critical to extend open safety research and tooling to address risks arising at other stages of the AI development lifecycle. The technical and research communities invested in openness in AI systems have been developing tools to make AI safer for years — including building better evaluations and benchmarks, deploying content moderation systems, and creating clear documentation for datasets and AI models. This second Columbia Convening seeks to address the needs of these AI systems developers to ensure the safe and trustworthy deployment of their systems, and to accelerate building safety tools, systems, and interventions that incorporate and reflect the values of openness.  
Working with a group of leading researchers and practitioners, the convening is structured around five key tracks: What’s missing from taxonomies of harm and safety definitions? The convening will examine gaps in popular taxonomies of harms and explore what notions of safety popularized by governments and big tech companies fail to capture, working to put critical concerns back on the agenda. Safety tooling in open AI stacks. As the ecosystem of open source tools for AI safety continues to grow, developers need better ways to navigate it. This work will focus on mapping technical interventions and related tooling, and will help identify gaps that need to be addressed for safer system deployment. The future of content safety classifiers. This discussion will chart a future roadmap for foundation models based on open source content safety classifiers, addressing key questions, necessary resources, and research agenda requirements, while drawing insights from past and current classifier system deployments. Participants will explore gaps in the content safety filtering ecosystem, considering both developer needs and future technological developments. Agentic risks for AI systems interfacing with the web. With growing interest in “agentic applications,” participants will work toward a robust working definition and map the specific needs of AI-system developers in developing safe agentic systems, while identifying current gaps to address. Participatory inputs in safety systems. The convening will examine how participatory inputs and democratic engagement can support safety tools and systems throughout development and deployment pipelines, making them more pluralistic and better adapted to specific communities and contexts. Through these tracks, the convening will develop a community-informed research agenda at the intersection of safety and openness in AI, which will inform the AI Action Summit. In keeping with the principles of openness and working in public, we look forward to sharing our work on these issues. The post Charging ahead on AI openness and safety appeared first on The Mozilla Blog.
  • The Mozilla Blog: A civic tech creative on modernizing government sites, MySpace coding and pre-internet memories (2024/11/13 18:06)
    Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and what reclaiming the internet really looks like. This month, we caught up with Senongo Akpem, a creative in civic tech. He’s currently VP of design at Nava, a public benefit corporation that takes a human-centered approach to modernizing government technology and making it more accessible. We talked to him about his MySpace coding days, his fascination with the Internet Archive and why he thinks smart design might just be the bridge we need between government and the people it serves. What is your favorite corner of the internet?  The Internet Archive. It’s stunning how much of the Western world’s knowledge is captured there, metadata and all. You can spend hours examining typography choices in old travel magazines, or old-school VHS-quality shows from Japan, or an intro to modern architecture that was written in 1962.   What is an internet deep dive that you can’t wait to jump back into? There are a growing number of sites like The People Say that act as research indexes and databases to capture the voices of the public. One of Nava’s core strengths is our research practice, which includes human-centered design. We strive to speak to the same priority populations when we conduct research with our government partners in the benefits delivery and health care spaces.  I’m eager to get back into the data and read more of the stories in there.  What is the one tab you always regret closing? Semafor Africa has a great newsletter that explores in detail the political, social and cultural news across the continent. In one post, you might read about clean energy projects and the cost of their capital investments. In another, you might read about the backstory of an Africa Cup of Nations (AFCON) match delay.  There are so many complex, 21st century stories to be told about the African continent. What can you not stop talking about on the internet right now? For the past year, I’ve been part of a Nava team working on an effort to modernize Grants.gov. Grants.gov is the front door for grants across the federal government, and disburses more than $300 billion (yes, that’s a B!!) in grants throughout the country every year. These grants go to a range of grantees from small, community-based organizations to large, national nonprofits. Nava also supports the Office of Grants to help ensure the federal government doesn’t underserve any communities.  I’ve mainly been leading strategic branding and communications efforts on the project, which often means nerding out with our government partners and coworkers on things like accessible color palettes, type scales and image banks. It’s a facet of civic tech that people often don’t think about.  In 2023, the Office of Management and Budget released guidance directing agencies to deliver a “digital-first public experience.” Their guidance gives agencies details and deadlines for the implementation of the 21st Century IDEA Act, which was signed into law four years ago. 
In multiple places, the memo describes how brand identity, visual design and design systems play a role in building trust in government systems — specifically, that clear and consistent use of an agency’s brand identity and visual design help the public identify official government entities.   How do you see your work with Nava helping improve public trust in digital services? Nava is a public benefit corporation (PBC), which is pretty unique in our space, and was intentionally set up that way by our founders. Being a PBC is not just a best practice or a label — it has legal weight, and is part of the company DNA. The people I work with at Nava have a fiduciary duty not only to our stakeholders, but to our stated mission: to improve the access, effectiveness, and simplicity of government services. Nava believes that for companies like ours — that are paid with taxpayer dollars, whose work affects millions of lives — social responsibility should be the norm, not the exception. The human-centered approach we take creates a better experience for end-users and the agencies we partner with. It ultimately builds trust in public institutions and the digital services provided. I see huge opportunities for the researchers, service designers, content strategists, frontend designers and communications designers at Nava to contribute to this.  As Nava grows — we’ve recently entered the mid-sized category — we continue to place our mission at the forefront, and strive to set a good example of what’s possible.  What was the first online community you engaged with? My first sustained experience with an online community (not counting email) was probably MySpace around 2005-06. As I’m sure many people remember, it was a hit as soon as people got on there and started adding content. I was living in Japan at the time, and used the CSS/HTML hack to put a skin on my page while adding music, friends, you name it. I think that was one of the first times I felt the internet converging across cultures, rather than just the Web 1.0 model of static blocks of information.  What articles and/or videos are you waiting to read/watch right now? I got about two-thirds of the way through Scavengers Reign before I had to take a pause. It’s about the survivors of a spaceship crash on a distant planet that is teeming with strange life. When I started it, I assumed it would be a beautiful, quiet anime like a Moebius illustration. Spoiler Alert: It turned into a horror show! Every episode was more desperate than the last one. I’m waiting to build up the nerve to finish the first season.  If you could create your own corner of the internet, what would it look like? It would probably be something dedicated to archiving cultural/family ephemera. For the past few years, I have been slowly scanning in my parents’ photos, letters, postcards, passports and other small pieces of their life that I have managed to save.  <figcaption class="wp-element-caption">Senongo’s mother, father, and older sister sit on motorbikes in Benue State, Nigeria, around 1975-76.</figcaption> A few years back, while in a taxi in Denver, I told the driver about my project and we began to chat about how important it is to save those family memories. The driver explained that she was from New Orleans, and her grandmother had been a Voodoo priestess. The family had sadly not been able to capture any of her stories or memories before she passed.  
My own corner of the internet would be a set of these poignant little memories from before the internet, scanned or recorded for future generations to share. Senongo Akpem is the vice president of design at Nava, a public benefit corporation working to make government services simple, effective and accessible to all. For the past two decades, he has specialized in collaborating with clients across the world on flexible, impactful digital experiences. Prior to joining Nava, he was design director at Constructive, a social impact design agency, and an art director at Cambridge University Press, where he led a global design team. Senongo is the author of “Cross-Cultural Design,” a book about creating culturally relevant and responsible experiences that reach a truly global audience. The child of a Nigerian father and a Dutch-American mother, Senongo grew up in Nigeria, lived in Japan for almost a decade, and now calls New York City home. Living in constantly shifting cultural and physical spaces has given him unique insight into the influence of culture on communication and creativity. Senongo speaks at conferences around the world about cross-cultural design, digital storytelling, and transmedia. He loves any and all science fiction. The post A civic tech creative on modernizing government sites, MySpace coding and pre-internet memories appeared first on The Mozilla Blog.
  • About:Community: A tribute to Dian Ina Mahendra (2024/11/13 09:04)
It is with a heavy heart that I share the passing of my dear friend, Dian Ina Mahendra, who left us after a long battle with illness. Dian Ina was a remarkable woman whose warmth, kindness, and ever-present support touched everyone around her. Her ability to offer solutions to even the most challenging problems was truly a gift, and she had an uncanny knack for finding a way out of every situation. Dian Ina’s contributions to Mozilla date back to the launch of Firefox 4 in 2011. She had also been heavily involved during the days of Firefox OS, the Webmaker campaign, FoxYeah, and most recently, Firefox Rocket (later renamed Firefox Lite) when it first launched in Indonesia. Additionally, she had been a dedicated contributor to localization through Pontoon. Those who knew Dian Ina were constantly drawn to her, not just for her brilliant ideas, but for her open heart and listening ear. She was the person people turned to when they needed advice or simply someone to talk to. No matter how big or small the problem, she always knew just what to say, offering guidance with grace and clarity. Beyond her wisdom, Dian Ina was a source of light and laughter. Her fun-loving nature and infectious energy made her the key person everyone turned to when they were looking for recommendations, whether it was for the best restaurant in town, a great book, or even advice on life itself. Her opinions were trusted, not only for their insight but also for the care she took in considering what would truly benefit others. Her impact on those around her was immeasurable. She leaves behind a legacy of warmth, wisdom, and a deep sense of trust from everyone who had the privilege of knowing her. We will miss her dearly, but her spirit and the lessons she shared will live on in the hearts of all who knew her. Here are some of the memories that people shared about Dian Ina: Franc: Ina was a funny person, always with a smile. We shared many events like All Hands, Leadership Summit and more. Que la tierra te sea leve (may the earth rest lightly on you). Rosana Ardila: Dian Ina was a wonderful human being. I remember her warm smile, when she was supporting the community, talking about art or food. She was independent and principled and so incredibly fun to be around. I was looking forward to seeing her again, touring her museum in Jakarta, discovering more food together, talking about art and digital life, the little things you do with people you like. She was so multifaceted, so smart and passionate. She left a mark on me and I will remember her, I’ll keep the memory of her big smile with me. Delphine: I am deeply saddened to hear of Dian Ina’s passing. She was a truly kind and gentle soul, always willing to lend a hand. I will cherish the memories of our conversations and her dedication to her work as a localizer and valued member of the Mozilla community. Her presence will be profoundly missed. Fauzan: For me, Ina is the best mentor in conflict resolution, design, art, and L10n. She is totally irreplaceable in the Indonesian community. We already miss her a lot. William: I will never forget that smile and that contagious laughter of yours. I have such fond memories of my many trips to Jakarta, in large part thanks to you. May you rest in peace, dearest Dian Ina. Amira Dhalla: I’m going to remember Ina as the thoughtful, kind, and warm person she always was to everyone around her. We have many memories together but I specifically remember us giggling and jumping around together on the grounds of a castle in Scotland. 
We had so many fun memories together talking technology, art, and Indonesia. I’m saddened by the news of her passing but comforted by the Mozilla community honoring her in a special way, and I know we will keep her legacy alive. Kiki: Mbak Ina was one of the female leaders I looked up to within the Mozilla Indonesia Community. She embodied the very definition of a smart and capable woman. The kind who was brave, assertive and, above all, so fun to be around. I like that she could keep things real by not being afraid of sharing the hard truth, which is truly appreciated within a community setting. I always thought of her and her partner (Mas Mahen) as a fun and intelligent couple. Deep condolences to Mas Mahen and her entire family in Malang and Bandung. She left a huge mark on the Mozilla Indonesia Community, and she’ll be deeply missed. Joe Cheng: I am deeply saddened to hear of Dian Ina’s passing. As the Product Manager for Firefox Lite, I had the privilege of witnessing her invaluable contributions firsthand. Dian was not only a crucial part of Mozilla’s community in Indonesia but also a driving force behind the success of Firefox Lite and other Mozilla projects. Her enthusiasm, unwavering support, and kindness left an indelible mark on everyone who met her. I fondly remember the time my team and I spent with her during our visit to Jakarta, where her vibrant spirit and warm smiles brought joy to our interactions. Dian’s positive energy and dedication will be remembered always, and her legacy will live on in the Mozilla community and beyond. She will be dearly missed.
  • This Week In Rust: This Week in Rust 573 (2024/11/13 05:00)
    Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR. Want TWIR in your inbox? Subscribe here. Updates from Rust Community Official gccrs: An alternative compiler for Rust Google Summer of Code 2024 results Foundation Rust Foundation Releases Problem Statement on C++/Rust Interoperability Newsletters Linebender in October 2024: resvg stewardship The Embedded Rustacean Issue #32 Project/Tooling Updates Introducing Hyperlight: Virtual machine-based security for functions at scale Introducing Sled, a Rust Library for Creating Spatial LED Strip Lighting Effects Redis Shield: A high-performance rate limiting module in Rust using the Token Bucket algorithm Cohen: gccrs: An alternative compiler for Rust Progress on toolchain security features Next-gen builder macro Bon 3.0 release Observations/Thoughts Perhaps Rust needs "defer" Rust needs an official specification Why is std::pin::Pin so weird? Bringing faster exceptions to Rust Exploring the Assembly Code generated by Rust Recursive Tree Traversal Typed IDs with SeaORM Spawning Processes in Linux [video] Rust 2024 Project Goals Update & Rust 1.80.1 [video] Rio: Next generation terminal emulator written in Rust Rust Walkthroughs Parsing arguments in Rust with no dependencies Using portable SIMD in stable Rust Rust Syn Crate Tutorial: Automate Builder Patterns with Custom Macros Tutorial: Implementing JSON parsing Impl Snake For Micro:bit - Embedded async Rust on BBC Micro:bit with Embassy Miscellaneous October 2024 Rust Jobs Report Crate of the Week This week's crate is struct-split, a proc macro to implement partial borrows. Thanks to Felix for the suggestion! Please submit your suggestions and votes for next week! Calls for Testing An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward: RFCs No calls for testing were issued this week. Rust No calls for testing were issued this week. Rustup No calls for testing were issued this week. If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing. Call for Participation; projects and speakers CFP - Projects Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. Rama — multiple basic/bearer credentials for 'Authorization' server support Rama — implement take and replace for Context and Extensions If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! 
CFP - Events Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker. If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! Updates from the Rust Project 403 pull requests were merged in the last week remove the wasm32-wasi target from rustc add a new wide-arithmetic feature for WebAssembly add Unicode block-drawing compiler output support add {ignore,needs}-{rustc,std}-debug-assertions directive support add a default implementation for CodegenBackend::link add discriminators to DILocations when multiple functions are inlined into a single point add v9, v8plus, and leoncasa target feature to sparc and use v8plus in create_object_file additional tests to ensure let is rejected during parsing arbitrary self types v2: (unused) Receiver trait basic inline assembly support for SPARC and SPARC64 coverage: extract safe FFI wrapper functions to llvm_cov coverage: restrict empty-span expansion to only cover { and } coverage: simplify parts of coverage graph creation do not filter empty lint passes & re-do CTFE pass do not reveal opaques in the param-env, we got lazy norm instead do not trust download-rustc=if-unchanged on CI for now don't suggest .into_iter() on iterators don't use maybe_unwrap_block when checking for macro calls in a block expr dont suggest use<impl Trait> when we have an edition-2024-related borrowck issue drop "gnu" in the target env for FreeBSD armv6/7 emit warning when calling/declaring functions with unavailable vectors enforce that raw lifetimes must be valid raw identifiers ensure that tail expr receive lifetime extension fix parens mangled in shared mut static lint suggestion get rid of check_opaque_type_well_formed make RustString an extern type to avoid improper_ctypes warnings make Ty::primitive_symbol recognize str make fn_abi_sanity_check a bit stricter make sure that we suggest turbofishing the right type arg for never suggestion mark some target features as 'forbidden' so they cannot be (un)set with -Ctarget-feature only disable cache if predicate has opaques within it passWrapper: adapt for new parameter in LLVM prefer pub(super) in unreachable_pub lint suggestion properly suggest E::assoc when we encounter E::Variant::assoc provide placeholder generics for traits in "no method found for type parameter" suggestions reject raw lifetime followed by ', like regular lifetimes do remove 'platform-intrinsic' ABI leftovers remove rustc_session::config::rustc_short_optgroups remove support for rustc_safe_intrinsic attribute; use rustc_intrinsic functions instead remove unnecessary pub enum glob-imports from rustc_middle::ty require const_impl_trait gate for all conditional and trait const calls revert using HEAP static in Windows alloc set "symbol name" in raw-dylib import libraries to the decorated name simplify FFI calls for -Ztime-llvm-passes and -Zprint-codegen-stats simplify some places that deal with generic parameter defaults simplify the internal API for declaring command-line options suggest swapping LHS and RHS when RHS impls PartialEq<lhs_ty> tweak E0320 overflow error wording tweak detection of multiple crate versions to be more encompassing use download-rustc="if-unchanged" as a global default use a separate dir for r-a builds consistently in helix config use verbose for path separator 
suggestion pointee_info_at: fix logic for recursing into enums rustc_codegen_llvm: Add a new 'pc' option to branch-protection rustc_target: more target string fixes for LLVM 20 interpret: get_alloc_info: also return mutability StableMIR: A few fixes to pretty printing StableMIR: API to retrieve definitions from crates miri: fix linux-futex test being accidentally disabled miri: get/set thread name shims return errors for invalid handles miri: preparing for merge from rustc miri: pthread-sync test: avoid confusing error when running with preemption miri: remove MutexID list miri: renamed this arguments to ecx miri: stacked borrows tests: add those that fail under TB miri: standardized variable names for InterpCx miri: store futexes in per-allocation data rather than globally miri: sync support: dont implicitly clone inside the general sync machinery stabilise const_char_encode_utf16 stabilize Arm64EC inline assembly stabilize WebAssembly multivalue, reference-types, and tail-call target features stabilize UnsafeCell::from_mut stabilize s390x inline assembly add new unstable feature const_eq_ignore_ascii_case make char::is_whitespace unstably const inline str::repeat core/fmt: Replace checked slice indexing by unchecked to support panic-free code add Set entry API implement div_ceil for NonZero<unsigned> implement file_lock feature initialize channel Blocks directly on the heap disable f16 on platforms that have recursion problems cargo: warnings: add build.warnings option cargo: test: Make redactions consistent with snapbox cargo: git: do not validate submodules of fresh checkouts cargo: normalize the target paths cargo: refactor: clone-on-write when needed for InternedString cargo: rustfix: replace special-case duplicate handling with error rustdoc-search: show type signature on type-driven SERP rustdoc-search: simplify rules for generics and type params bindgen: fix field_visibility not called for new-type aliases bindgen: fix unsafe_op_in_unsafe_fn when using dynamic librarys and wrap_unsafe_ops handle separate prefixes in clippy rules clippy: no_mangle_with_rust_abi: properly position the suggested ABI clippy: add match-based manual try to clippy::question_mark clippy: collect attribute spans early for disallowed macros clippy: fix large_include_file lint being triggered all the time by doc comments clippy: fix: identity_op suggestions use correct parenthesis rust-analyzer: editors/code: change minimum VS Code from 1.78 to 1.83 rust-analyzer: use completion item indices instead of property matching when searching for the completion item to resolve Rust Compiler Performance Triage Regressions primarily in doc builds. No significant changes in cycle or max-rss counts. Triage done by @simulacrum. Revision range: 27e38f8f..d4822c2d 1 Regressions, 1 Improvements, 4 Mixed; 1 of them in rollups 47 artifact comparisons made in total Full report here Approved RFCs Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week: No RFCs were approved this week. Final Comment Period Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. 
RFCs [disposition: merge] [RFC] Thread spawn hook (inheriting thread locals) Tracking Issues & PRs Rust [disposition: merge] Tracking issue for const_size_of_val and const_align_of_val [disposition: merge] mark is_val_statically_known intrinsic as stably const-callable [disposition: merge] Tracking issue for const <*const T>::is_null [disposition: merge] Tracking issue for const Pin methods [disposition: merge] Stabilize const_atomic_from_ptr Cargo [disposition: merge] feat(resolver): Stabilize resolver v3 Language Team No Language Team Proposals entered Final Comment Period this week. Language Reference No Language Reference RFCs entered Final Comment Period this week. Unsafe Code Guidelines No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week. New and Updated RFCs [new] RFC: Unsafe Set Enum Discriminants Upcoming Events Rusty Events between 2024-11-13 - 2024-12-11 🦀 Virtual 2024-11-14 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-11-14 | Virtual and In-Person (Lehi, UT, US) | Utah Rust Green Thumb: Building a Bluetooth-Enabled Plant Waterer with Rust and Microbit 2024-11-14 | Virtual and In-Person (Seattle, WA, US) | Seattle Rust User Group November Meetup 2024-11-15 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative Rust Coding / Game Dev Fridays Open Mob Session! 2024-11-19 | Virtual (Los Angeles, CA, US) | DevTalk LA Discussion - Topic: Rust for UI 2024-11-19 | Virtual (Washington, DC, US) | Rust DC Mid-month Rustful 2024-11-20 | Virtual (Cardiff, UK) | Rust and C++ Cardiff Rust for Rustaceans Book Club: Chapter 12: Rust Without the Standard Library 2024-11-20 | Virtual and In-Person (Vancouver, BC, CA) | Vancouver Rust Embedded Rust Workshop 2024-11-21 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup 2024-11-21 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Trustworthy IoT with Rust--and passwords! 
2024-11-21 | Virtual (Rotterdam, NL) | Bevy Game Development Bevy Meetup #7 2024-11-25 | Virtual (Bratislava, SK) | Bratislava Rust Meetup Group ONLINE Talk, sponsored by Sonalake - Bratislava Rust Meetup 2024-11-26 | Virtual (Dallas, TX, US) | Dallas Rust Last Tuesday 2024-11-28 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-11-28 | Virtual (Nürnberg, DE) | Rust Nuremberg Rust Nürnberg online 2024-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup Buffalo Rust User Group 2024-12-04 | Virtual (Indianapolis, IN, US) | Indy Rust Indy.rs - with Social Distancing 2024-12-05 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup 2024-12-10 | Virtual (Dallas, TX, US) | Dallas Rust Second Tuesday 2024-12-11 | Virtual (Vancouver, BC, CA) | Vancouver Rust Rust Study/Hack/Hang-out Africa 2024-12-07 | Virtual( Kampala, UG) | Rust Circle Kampala Rust Circle Meetup Asia 2024-11-28 | Bangalore/Bengaluru, IN | Rust Bangalore RustTechX Summit 2024 BOSCH 2024-11-30 | Tokyo, JP | Rust Tokyo Rust.Tokyo 2024 Europe 2024-11-13 | Reading, UK | Reading Rust Workshop Reading Rust Meetup 2024-11-14 | Stockholm, SE | Stockholm Rust Rust Meetup @UXStream 2024-11-19 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig Daten sichern mit ZFS (und Rust) 2024-11-19 | Paris, FR | Rust Paris Rust meetup #72 2024-11-21 | Edinburgh, UK | Rust and Friends Rust and Friends (pub) 2024-11-21 | Madrid, ES | MadRust Taller de introducción a unit testing en Rust 2024-11-21 | Oslo, NO | Rust Oslo Rust Hack'n'Learn at Kampen Bistro 2024-11-23 | Basel, CH | Rust Basel Rust + HTMX - Workshop #3 2024-11-26 | Warsaw, PL | Rust Warsaw New Rust Warsaw Meetup #3 2024-11-27 | Dortmund, DE | Rust Dortmund Rust Dortmund 2024-11-28 | Aarhus, DK | Rust Aarhus Talk Night at Lind Capital 2024-11-28 | Augsburg, DE | Rust Meetup Augsburg Augsburg Rust Meetup #10 2024-11-28 | Berlin, DE | OpenTechSchool Berlin + Rust Berlin Rust and Tell - Title 2024-11-28 | Gdansk, PL | Rust Gdansk Rust Gdansk Meetup #5 2024-11-28 | Hamburg, DE | Rust Meetup Hamburg Rust Hack & Learn with Mainmatter & Otto 2024-11-28 | Prague, CZ | Rust Prague Rust/C++ Meetup Prague (November 2024) 2024-12-03 | Copenhagen, DK | Copenhagen Rust Community Rust Hack Night #11: Advent of Code 2024-12-04 | Oxford, UK | Oxford Rust Meetup Group Oxford Rust and C++ social 2024-12-05 | Olomouc, CZ | Rust Moravia Rust Moravia Meetup (December 2024) 2024-12-06 | Moscow, RU | RustCon RU RustCon Russia 2024-12-11 | Reading, UK | Reading Rust Workshop Reading Rust Meetup North America 2024-11-14 | Mountain View, CA, US | Hacker Dojo Rust Meetup at Hacker Dojo 2024-11-14 | Portland, OR, US | PDXRust PDXRust November 2024: Lightning Talks! 
2024-11-15 | Mexico City, DF, MX | Rust MX Multi threading y Async en Rust parte 2 - Smart Pointes y Closures 2024-11-15 | Somerville, MA, US | Boston Rust Meetup Ball Square Rust Lunch, Nov 15 2024-11-19 | San Francisco, CA, US | San Francisco Rust Study Group Rust Hacking in Person 2024-11-19 | Spokane, WA, US | Spokane Rust Building Your First Command Line Interface - A Code-Along Workshop 2024-11-23 | Boston, MA, US | Boston Rust Meetup Boston Common Rust Lunch, Nov 23 2024-11-25 | Ferndale, MI, US | Detroit Rust Rust Community Meetup - Ferndale 2024-11-26 | Minneapolis, MN, US | Minneapolis Rust Meetup Minneapolis Rust Meetup Happy Hour 2024-11-27 | Austin, TX, US | Rust ATX Rust Lunch - Fareground 2024-11-28 | Mountain View, CA, US | Hacker Dojo RUST MEETUP at HACKER DOJO 2024-12-05 | St. Louis, MO, US | STL Rust Rust Strings 2024-12-10 | Ann Arbor, MI, US | Detroit Rust Rust Community Meetup - Ann Arbor Oceania 2024-12-08 | Canberra, AU | Canberra Rust User Group CRUG Xmas party If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access. Jobs Please see the latest Who's Hiring thread on r/rust Quote of the Week Netstack3 encompasses 63 crates and 60 developer-years of code. It contains more code than the top ten crates on crates.io combined. ... For the past eleven months, they have been running the new networking stack on 60 devices, full time. In that time, Liebow-Feeser said, most code would have been expected to show "mountains of bugs". Netstack3 had only three; he attributed that low number to the team's approach of encoding as many important invariants in the type system as possible. – Joshua Liebow-Feeser at RustConf, as reported by Daroc Alden on Linux Weekly News Thanks to Anton Fetisov for the suggestion! Please submit quotes and vote for next week! This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez. Email list hosting is sponsored by The Rust Foundation Discuss on r/rust
  • The Mozilla Blog: Exploring the Firefox community on r/firefox (2024/11/12 18:08)
Open source thrives because of its people. Firefox, like so many successful open-source projects, is powered by passionate contributors and dedicated supporters. Their collective efforts have transformed Firefox from just a web browser into the cornerstone of a global community, bringing together users and developers with a shared vision for the open web. Reddit, one of the most visited websites in the world, is a platform where millions of users — called Redditors — share and vote on content in self-moderated subreddits. One such space is r/firefox, a vibrant community of over 195,000 Firefox enthusiasts. Unlike a corporate-managed forum, this is an organic, user-driven environment where members engage in everything from technical discussions and support to passionate rants and heartfelt appreciation for Firefox. Let’s explore this dynamic corner of the “front page of the internet” by diving into r/firefox, the Reddit community for all things Firefox. <figcaption class="wp-element-caption">r/firefox in 2008, courtesy of the Internet Archive’s Wayback Machine.</figcaption>

Mozilla’s online community and contributors live across a wide variety of digital spaces. Mozilla Connect, the official portal for ideas and discussion, receives millions of visits and has over 200 employees registered. There are communities in Discord, Matrix, GitHub, Discourse, Bugzilla, support, MDN, the list goes on… But among the endless corners of the internet, Firefox’s r/firefox subreddit stands out — not as a space managed by Mozilla, but as an organic community of passionate Firefox users. Though it’s been around since 2008, most of its members have joined in just the past five years, with nearly 100,000 new members joining in the last four alone. <figcaption class="wp-element-caption">Which Firefox logo do you like the most? asks Redditor aphaits.</figcaption>

Who are the members of r/firefox, and what drives their posts? In many online communities, a small group of users tends to drive most of the conversation. The 90-9-1 rule is often used as a general guideline to describe this, where 1% of users create the majority of the content, 9% contribute occasionally, and the remaining 90% are passive consumers. However, this is just a rough yardstick, not an exact science—every community is unique in terms of who posts, who drives content, and how others engage. While we don’t have precise numbers for r/firefox, it seems to follow this general trend, with a core group of passionate Redditors contributing the most in-depth discussions and keeping the community vibrant. As we explore the community on the Firefox subreddit, we can broadly identify a few archetypes among this group of super contributors to the Firefox community, to give us a better sense of the kinds of posts we can find there.

The Developer: Engages in technical discussions and may even contribute to Firefox’s code or features. The Privacy and Open Source Advocate: Values Firefox’s commitment to privacy, web standards, and open source. <figcaption class="wp-element-caption">Mozilla employees also have a history of participating directly in r/firefox </figcaption> The Customizer: Thrives on Firefox’s extensive customization options, especially add-ons and themes. <figcaption class="wp-element-caption">OctoNezd sharing their Firefox add-on in this post. </figcaption> The Challenger: Engaged Firefox users who want the product to be improved and provide critical feedback on what they find frustrating or lacking in Firefox. 
Posting feedback about bugs, performance issues, or changes they don’t agree with. While sometimes harsh, their feedback can highlight areas for improvement. The Firefox Supporter: Loyal to Firefox for its open-source values and commitment to a better internet. Participates in light-hearted discussions, from cool browser themes to quirky extensions, and loves helping others. [Image: Tracking down cute drawings with this post from janka12fsdf.] Flair and moderators help highlight the diverse range of contributors who keep r/firefox lively. Each member brings something unique to the conversation, with moderators playing a crucial role in ensuring these interactions remain productive. Flair allows contributors to display their identity and expertise, helping to shape the community’s culture and focus. [Image: The flair of r/firefox.] The current moderator team of r/firefox: u/Antabaka, u/yoasif, u/rctgamer3, u/TimVdEynde, u/Alan976 (Mario583), and u/SKITTLE_LA. Moderators play a crucial role in managing online communities like r/firefox. They ensure the subreddit remains organized, safe, and aligned with community guidelines. Not just another browser. At the core of this community is a shared belief: Firefox isn’t just another browser; it’s a symbol of a better, more human-centered internet. This passion comes from Firefox’s open-source roots and its commitment to privacy and customization. In a world where tech giants dominate the market, Firefox offers something different—something people feel deeply connected to. The users of r/firefox prove that a browser can be more than just a tool for browsing the internet. For many, it’s a symbol of their commitment to an open, people-first web. In this corner of “the front page of the internet,” their contributions—whether coding, troubleshooting, or sharing memes—are collectively helping shape the future of the web. Appendix: r/firefox Through the Years 2008: Firefox global market share reaches 21.5% | Mozilla Links They Shrunk My Firefox! Mozilla Shows off Mobile Mockups 2009: Firefox 3.5 RC3 coming this week Mozilla’s internal tools for its most popular add-on, how its creator wants to let you use it! (Firefox Sync Interview) Mozilla Firefox 3.5 Release Candidate 3 now available Mozilla has more than 750 million users Speed tweaks for Firefox without Linux Speed tip for Firefox: Try increasing the Page File to quadruple RAM (Linux only) Drowning under 30 tabs? Help is on the way Google Toolbar in Firefox 3.5? Firefox memory usage control plan (Linux only) 2010: Why do I have 8 different versions of Java extensions in my Firefox? Shouldn’t there even be one delete button for all? 2010 Best of Show prize for top 10 browsers at CES Add-on recommendations in which Firefox addon depends please add a mini Firefox icon on the addons search results Mozilla releases Firefox 3.6.13; fixes multiple plugin crashes for uninterrupted browsing experience Plugin for Firefox’s ‘undead’ status: boosts performance and fixes crashes Firefox is gradually making addons less bloated every day, we won’t have to laugh back then. It’s been a long wait but it’s coming! 2011: Chrome’s RiverZoom extension ported to Firefox via Scriptish/Greasemonkey script, w00t! How do you open a new tab next to the current focused tab?
[Support Request] Firefox Aurora 9.0.2 Won’t download anything, details inside 2012: Mozilla releases: Shuts, blocks IE, Chrome and Internet Explorer down! This is how to make Firefox actually do the process when you click close! Firefox 11 is out now and you can access it (I got 1 problem, what’s up?) Any advice for experimental branches of Firefox? Many users: is it becoming superfluous? Mozilla Firefox on the long-awaited Multi-account extensions experiment Is Firefox doing what the interface wants with my brain’s memory? [Fixed] In Firefox, when you run Java Runtime, the result will be mind-blowing 2016: Mozilla releases Windows nightlies and all updates address all issues Pick the One: There are experimental browsers and extensions by Mozilla and Google 2018: [Sticky] Trying to use Firefox with no extensions has surfaced numerous user complaints. We need to stop the sync from showing a default response. It’s gone too far for being Firefox’s choice. [Sticky] Synonymity EVER Privacy in Firefox; Mozilla’s opinion 2020: Mozilla’s Daily Note 2020-09-11 Introducing r/firefox subreddit design update Megathread Nightly Discussion for Builds in Firefox Nightly 2022: Weekly Addon suggestions! “I have an addon for that!” post for 2022-03-09 How to easily transfer bookmarks to the Firefox bookmarks bar in Windows? Firefox tabs are bringing back web-clips – an update for Chrome users 2024: 2024 is the best year for Firefox Opportunity to contribute to Multi-Account Containers extension The post Exploring the Firefox community on r/firefox appeared first on The Mozilla Blog.
  • The Mozilla Blog: How AI is reshaping creativity: Insights from art, tech and policy (2024/11/12 17:00)
AI is shaking things up in the creative world, and I get why a lot of artists feel anxious. Whenever new technology comes along — especially in industries like ours — it brings fear. Fear of losing control, fear of being replaced. That’s real. But there’s another side to this: AI can open doors we never thought possible. In “Creativity in the Age of AI: Insights, Ethics & Opportunities,” a report I co-wrote with digital policy expert Natalia Domagala and technologist Angela Lungati — in collaboration with Mozilla and Skillshare — we explore both the anxieties and opportunities AI brings to creatives, touching on the ethical challenges and the immense possibilities ahead. Key points explored in the report: Ethical concerns about ownership: One of the biggest issues is ownership, as Natalia explores in our paper. AI pulls from massive datasets, often without the original creators’ consent. This raises serious concerns about transparency and who owns the work AI generates. Copyright laws weren’t built for this. Bias in AI: AI is only as good as the data it’s trained on. If AI is trained on biased data, it will reproduce those biases. It’s crucial to make sure AI tools are built with diverse datasets, and that the people designing them understand the importance of inclusivity. AI can lower barriers to creative innovation: The rise of generative AI tools like ChatGPT and OpenAI Sora is making high-level creative outputs accessible to non-experts. These tools can help level the playing field, allowing creatives to explore ideas and storytelling that wouldn’t have been possible before. A tool for cultural preservation: With “Protopica,” a short film I co-directed with Will Selviz, AI allowed us to blend Caribbean heritage and futurism, creating a new form of storytelling that wouldn’t have been achievable with traditional methods. This shows how AI can preserve culture while pushing creative boundaries. AI as a force for social change: Angela highlighted how AI-powered tools are supporting civic education and engagement in Kenya. For example, Corrupt Politicians GPT exposed corruption cases involving Kenyan politicians, while Finance Bill GPT simplified the complex provisions of a controversial finance bill. These tools have helped local communities understand the implications of proposed laws, contributing to nationwide protests and civic participation. AI amplifying, not replacing, human creativity: There’s a real fear among creatives that AI could replace their jobs, and that fear is legitimate. In a world driven by productivity, companies often cut human roles first. But AI shouldn’t be about replacing humans — it’s about amplifying what we can do. It should be used to empower, not replace, human creativity. If you’re curious about how AI is changing the creative world — whether you’re excited or skeptical — this paper is for you. We explore the risks, the rewards and what AI means for the future of creativity. It’s the start of a crucial conversation about creativity and control, with insights from the worlds of art, technology and policy — offering a glimpse into how AI is reshaping the future. Read the paper: “Creativity in the Age of AI.” Manuel Sainsily is a TEDx speaker and an XR and AI instructor at McGill University and UMass Boston.
Born in Guadeloupe and a Canadian citizen based in Montreal, where he completed his Master of Science in computer sciences, he is a trilingual public speaker, designer, and educator with over 15 years of experience who champions the responsible use and understanding of artificial intelligence. From delivering a masterclass on AI ethics and speaking at worldwide tech, film, and gaming conferences to being celebrated by NVIDIA, Mozilla Rise25, and Skillshare, and producing art exhibitions with Meta, OpenAI, and VIFFest, Manuel amplifies the conversation around cultural preservation and emerging technologies such as spatial computing, AI, real-time 3D, haptics, and BCI through powerful keynotes and curated events. The post How AI is reshaping creativity: Insights from art, tech and policy appeared first on The Mozilla Blog.
  • About:Community: Contributor spotlight – MyeongJun Go (2024/11/12 00:28)
The beauty of open source software lies in the collaborative spirit of its contributors. In this post, we’re highlighting the story of MyeongJun Go (Jun), who has been a dedicated contributor to the Performance Tools team. His contributions have made a remarkable impact on performance testing and tooling, from local tools like Mach Try Perf and Raptor to web-based tools such as Treeherder. Thanks to Jun, developers are even more empowered to improve the performance of our products. Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews and writing test cases to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high-quality code. Q: Can you tell us a little about how you first got involved with Mozilla? I felt a constant thirst for development while working on company services. I wanted to create something that could benefit the world and collaborate with developers globally. That’s when I decided to dive into open source development. Around that time, I was already using Firefox as my primary browser, and I frequently referenced MDN for work, naturally familiarizing myself with Mozilla’s services. One day, I thought, how amazing would it be to contribute to a Mozilla open source project used by people worldwide? So, I joined an open source challenge. At first, I wondered, can I really contribute to Firefox? But thanks to the supportive Mozilla staff, I was able to tackle one issue at a time and gradually build my experience. Q: Your contributions have had a major impact on performance testing and tooling. What has been your favourite or most rewarding project to work on so far? I’ve genuinely found every project and task rewarding—and enjoyable too. Each time I completed a task, I felt a strong sense of accomplishment. If I had to pick one particularly memorable project, it would be the Perfdocs tool. It was my first significant project when I started contributing more actively, and its purpose is to automate documentation for the various performance tools scattered across the ecosystem. With every code push, Perfdocs automatically generates documentation in “Firefox Source Docs”. Working on this tool gave me the chance to familiarize myself with various performance tools one by one, while also building confidence in contributing. It was rewarding to enhance the features and see the resulting documentation instantly, making the impact very tangible. Hearing from other developers about how much it simplified their work was incredibly motivating and made the experience even more fulfilling. Q: Performance tools are critical for developers. Can you walk us through how your work helps improve the overall performance of Mozilla products? I’ve applied various patches across multiple areas, but updates to tools like Mach Try Perf and Perfherder, which many users rely on, have had a particularly strong impact. With Mach Try Perf, developers can easily perform performance tests by platform and category, comparing results between the base commit (before changes) and the head commit (after changes). However, since each test can take considerable time, I developed a caching feature that stores test results from previous runs when the base commit is the same.
This allows us to reuse existing results instead of re-running tests, significantly reducing the time needed for performance testing. [A small illustrative sketch of this caching idea appears at the end of this post.] I also developed several convenient flags to enhance testing efficiency. For instance, when an alert occurs in Perfherder, developers can now re-run tests simply by using the “--alert” flag with the alert ID in the Mach Try Perf command. Additionally, I recently integrated Perfherder with Bugzilla to automatically file bugs. Now, with just a click of the ‘file bug’ button, related bugs are filed automatically, reducing the need for manual follow-up. These patches, I believe, have collectively helped improve the productivity of Mozilla’s developers and contributors, saving a lot of time in the development process. Q: How much of a challenge do you find being in a different time zone to the rest of the team? How do you manage this? I currently live in South Korea (GMT+9), and most team meetings are scheduled from 10 PM to midnight my time. During the day, I focus on my job, and in the evening, I contribute to the project. This setup actually helps me use my time more efficiently. In fact, I sometimes feel that if we were in the same time zone, balancing both my work and attending team meetings might be even more challenging. Q: What are some tools or methodologies you rely on? When developing Firefox, I mainly rely on two tools: Visual Studio Code (VSC) on Linux and SearchFox. SearchFox is incredibly useful for navigating Mozilla’s vast codebase, especially as it’s web-based and makes sharing code with teammates easy. Since Mozilla’s code is open source, it’s accessible for the world to see and contribute to. This openness encourages me to seek feedback from mentors regularly and to focus on refactoring through detailed code reviews, with the goal of continually improving code quality. I’ve learned so much in this process, especially about reducing code complexity and enhancing quality. I’m always grateful for the detailed reviews and constructive feedback that help me improve. Q: Are there any exciting projects you’d like to work on? I’m currently finding plenty of challenge and growth working with testing components, so rather than seeking new projects, I’m focused on my current tasks. I’m also interested in learning Rust and exploring trends like AI and blockchain. Recently, I’ve considered ways to improve user convenience in tools like Mach Try Perf and Perfherder, such as making test results clearer and easier to review. I’m happy with my work and growth here, but I keep an open mind toward new opportunities. After all, one thing I’ve learned in open source is to never say, ‘I can’t do this.’ Q: What advice would you give to someone new to contributing? If you’re starting as a contributor to the codebase, building it alone might feel challenging. You might wonder, “Can I really do this?” But remember, you absolutely can. There’s one thing you’ll need: persistence. Hold on to a single issue and keep challenging yourself. As you solve each issue, you’ll find your skills growing over time. It’s a meaningful challenge, knowing that your contributions can make a difference. Contributing will make you more resilient and help you grow into a better developer. Q: What’s something you’ve learned during your time working on performance tools? Working with performance tools has given me valuable experience across a variety of tools, from local ones like Mach Try Perf, Raptor, and Perfdocs to web-based tools such as Treeherder and Perfherder.
Not only have I deepened my technical skills, but I also became comfortable using Python, which wasn’t my primary language before. Since Firefox runs across diverse environments, I learned how to execute individual tests for different conditions and manage and visualize performance test results efficiently. This experience taught me the full extent of automation’s capabilities and inspired me to explore how far we can push it. Through this large-scale project, I’ve learned how to approach development from scratch, analyze requirements, and carry out development while considering the impact of changes. My skills in impact analysis and debugging have grown significantly. Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews and writing test cases to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high-quality code. Q: What do you enjoy doing in your spare time when you’re not contributing to Mozilla? I really enjoy reading and learning new things in my spare time. Books offer me a chance to grow, and I find it exciting to dive into new subjects. I also prioritize staying active with running and swimming to keep both my body and mind healthy. It’s a great balance that keeps me feeling refreshed and engaged. Interested in contributing to performance tools like Jun? Check out our wiki to learn more.
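To make the caching idea Jun describes above a bit more concrete, here is a minimal illustrative sketch of reusing baseline results when the base commit is unchanged. It is written in TypeScript purely for illustration; the names (runPerfComparison, baselineCache, runTests) are hypothetical, and this is not the actual Mach Try Perf implementation, which lives in mozilla-central and is written in Python.

```typescript
// Illustrative only: reuse baseline performance results when the base
// commit is unchanged, instead of re-running the baseline tests.
type PerfResults = Record<string, number>;

// Hypothetical cache keyed by base commit, platform, and category.
const baselineCache = new Map<string, PerfResults>();

async function runPerfComparison(
  baseCommit: string,
  headCommit: string,
  platform: string,
  category: string,
  runTests: (commit: string) => Promise<PerfResults>,
): Promise<{ base: PerfResults; head: PerfResults }> {
  const key = `${baseCommit}:${platform}:${category}`;

  // Reuse the cached baseline if this base commit was measured before.
  let base = baselineCache.get(key);
  if (base === undefined) {
    base = await runTests(baseCommit);
    baselineCache.set(key, base);
  }

  // The head commit reflects the new changes, so it is always measured.
  const head = await runTests(headCommit);
  return { base, head };
}
```

The real feature also has to decide when cached results should be invalidated, but the core saving is the same: an unchanged baseline is measured once and reused across runs.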
  • The Servo Blog: Behind the code: an interview with msub2 (2024/11/12 00:00)
    Behind the Code is a new series of interviews with the contributors who help propel Servo forward. Ever wondered why people choose to work on web browsers, or how they get started? We invite you to look beyond the project’s pull requests and issue reports, and get to know the humans who make it happen. msub2 Some representative contributions: OpenXR: Separate graphics handling from main OpenXR code Surface supported interaction profiles from OpenXR runtime Support OpenXR runtimes that do not support fovMutable crypto: Begin SubtleCrypto implementation bindings: Allow Guard to take multiple conditions, check for SecureContext in ConstructorEnabled Implement non-XR Gamepad discovery and input Tell us about yourself! My name is Daniel, though I more commonly go by my online handle “msub2”. I’m something of a generalist, but my primary interests are developing for the web, XR, and games. I created and run the WebXR Discord, which has members from both the Immersive Web Working Group and the Meta Browser team, among others. In my free time (when I’m not working, doing Servo things, or tending to my other programming projects) I’m typically watching videos from YouTube/Dropout/Nebula/etc and playing video games. Why did you start contributing to Servo? A confluence of interests, to put it simply. I was just starting to really get into Rust, having built a CHIP-8 emulator and an NES emulator to get my hands dirty, but I also had prior experience contributing to other browser projects like Chromium and Gecko. I was also eyeing Servo’s WebXR implementation (which I had submitted a couple small fixes for last year) as I could see there was still plenty of work that could be done there. To get started though, I looked for an adjacent area that I could work on to get familiar with the main Servo codebase, which led to my first contribution being support for non-XR gamepads! What was challenging about your first contribution? I’d say the most challenging part of my first contribution was twofold: the first was just getting oriented with how data flows in and out of Servo via the embedding API and the second was understanding how DOM structs, methods, and codegen all worked together in the script crate. Servo is a big project, but luckily I got lots of good help and feedback as I was working through it, which definitely made things easier. Looking at existing examples in the codebase of the things I was trying to do got me the rest of the way there I’d say. What do you like about contributing to the project? What do you get out of it? The thing I like most about Servo (and perhaps the web platform as an extension) is the amount of interesting problems that there are to solve when it comes to implementing/supporting all of its different features. While most of my contributions so far have been focused around Gamepad and WebXR, recently I’ve been working to help implement SubtleCrypto alongside another community member, which has been really interesting! In addition to the satisfaction I get just from being able to solve interesting problems, I also rather enjoy the feeling of contributing to a large, communal, open-source project. Any final thoughts you’d like to share? I’d encourage anyone who’s intrigued by the idea of contributing to Servo to give it a shot! 
The recent waves of attention for projects like Verso and Ladybird have shown that there is an appetite for new browsers and browser engines, and with Servo’s history it just feels right that it should finally be able to rise to a more prominent status in the ecosystem.
  • Don Marti: Links for 10 November 2024 (2024/11/10 00:00)
    Signal Is Now a Great Encrypted Alternative to Zoom and Google Meet These updates mean that Signal is now a free, robust, and secure video conferencing service that can hang with the best of them. It lets you add up to 50 people to a group call and there is no time limit on each call. The New Alt Media and the Future of Publishing - Anil Dash I’m a neuroscientist who taught rats to drive − their joy suggests how anticipating fun can enrich human life Ecosia and Qwant, two European search engines, join forces What can McCain’s Grand Prix win teach us? Nothing new Ever since Byron Sharp decided he was going for red for his book cover, marketing thinkers have assembled a quite extraordinary disciplinary playbook. And it’s one that looks nothing like the existing stuff that it replaced. Of course, the majority of marketers know nothing about any of it. They inhabit the murkier corners of marketing, where training is rejected because change is held up as a circuit-breaker for learning anything from the past. AI and the ‘new consumer’ mean everything we once knew is pointless now. Better to be ignorant and untrained than waste time on irrelevant historical stuff. But for those who know that is bullshit, who study, who respect marketing knowledge, who know the foundations do not change, the McCain case is a jewel sparkling with everything we have learned in these very fruitful 15 years. The Counterculture Switch: creating in a hostile environment Why Right-Wing Media Thrives While The Left Gets Left Behind The Rogue Emperor, And What To Do About Them Anywhere there is an organisation or group that is centred around an individual, from the smallest organisation upwards, it’s possible for it to enter an almost cult-like state in which the leader both accumulates too much power, and loses track of some of the responsibilities which go with it. If it’s a tech company or a bowls club we can shrug our shoulders and move to something else, but when it occurs in an open source project and a benevolent dictator figure goes rogue it has landed directly on our own doorstep as the open-source community. We need a Wirecutter for groceries Historic calculators invented in Nazi concentration camp will be on exhibit at Seattle Holocaust center One Company A/B Tested Hybrid Work. Here’s What They Found. According to the Society of Human Resource Management, each quit costs companies at least 50% of the employees’ annual salary, which for Trip.com would mean $30,000 for each quit. In Trip.com’s experiment, employees liked hybrid so much that their quit rates fell by more than a third — and saved the company millions of dollars a year.
  • Mozilla Thunderbird: VIDEO: Q&A with Mark Surman (2024/11/08 17:57)
Last month we had a great chat with two members of the Thunderbird Council, our community governance body. This month, we’re looking at the relationship between Thunderbird and our parent organization, MZLA, and the broader Mozilla Foundation. We couldn’t think of a better way to do this than sitting down for a Q&A with Mark Surman, president of the Mozilla Foundation. We’d love to hear your suggestions for topics or guests for the Thunderbird Community Office Hours! You can always send them to officehours@thunderbird.org. October Office Hours: Q&A with Mark Surman In many ways, last month’s office hours was a perfect lead-in to this month’s, as our community and Mozilla have been big parts of the Thunderbird story. Even though this year marks 20 years since Thunderbird 1.0, Thunderbird started as ‘Minotaur’ alongside ‘Phoenix,’ the original name for Firefox, in 2003. Heather, Monica, and Mark all discuss Thunderbird’s now decades-long journey, but this chat isn’t just about our past. We talk about what we hope is a long future, and how and where we can lead the way. If you’ve been a long-time user of Thunderbird, or are curious about how Thunderbird, MZLA, and the Mozilla Foundation all relate to each other, this video is for you. Watch, Read, and Get Involved We’re so grateful to Mark for joining us, and turning an invite during a chat at MozWeek into reality! We hope this video gives a richer context to Thunderbird’s past as it highlights one of the main characters in our long story. VIDEO (Also on Peertube): Thunderbird and Mozilla Resources: Want to know more about the history of Thunderbird? Ryan Sipes, our product director, describes it in a compelling tale: https://blog.thunderbird.net/2023/11/the-untold-history-of-thunderbird Want to keep current with Mark and the rest of the Mozilla Foundation? Check out the Mozilla Blog, which details initiatives to keep the internet open and accessible to all: https://blog.mozilla.org/en/latest/ The post VIDEO: Q&A with Mark Surman appeared first on The Thunderbird Blog.
  • Andrew Halberstadt: Jujutsu: A Haven for Mercurial Users at Mozilla (2024/11/08 12:19)
One of the pleasures of working at Mozilla has been learning and using the Mercurial version control system. Over the past decade, I’ve spent countless hours tinkering with my workflow to get it just so. Reading docs and articles, meticulously tweaking settings and even writing an extension. I used to be very passionate about Mercurial. But as time went on, the culture at Mozilla started changing. More and more repos were created on GitHub, and more and more developers started using git-cinnabar to work on mozilla-central. Then my role changed and I found that 90% of my work was happening outside of mozilla-central and the Mercurial garden I had created for myself. So it was with a sense of resigned inevitability that I took the news that Mozilla would be migrating mozilla-central to Git. The fire in me was all but extinguished; I was resigned to my fate. And what’s more, I had to agree. The time had come for Mozilla to officially make the switch. Glandium wrote an excellent post outlining some of the history of the decisions made around version control, putting them into the context of the time. In that post, he offers some compelling wisdom to Mercurial holdouts like myself: I’ll swim against the current here, and say this: the earlier you can switch to git, the earlier you’ll find out what works and what doesn’t work for you, whether you already know Git or not. When I read that, I had to agree. But I just couldn’t bring myself to do it. No, if I was going to have to give up my revsets and changeset obsolescence and my carefully curated workflows, then so be it. But damnit! I was going to continue using them for as long as possible. And I’m glad I didn’t switch, because then I stumbled upon Jujutsu.
  • The Mozilla Blog: We asked why you love Firefox. Here’s what you said. (2024/11/08 02:08)
For two decades, Firefox has been at the heart of an open, user-centered web. From the early days of tabbed browsing and pop-up blocking to today’s privacy protections and customization options, Firefox has empowered users like you with control and freedom to explore the internet on your own terms. So, to mark our 20th anniversary, we asked: What made you fall in love with Firefox? Whether you’ve been with us since the very first version or joined more recently, your answers remind us of the deep connections Firefox has built over the years. Some of you love Firefox for the features that make it stand out from other browsers. Others value Firefox for the trust it has earned over time. And for many, it’s been a loyal companion from the very beginning. Here’s a look at what makes Firefox special to so many of you. Features that keep you coming back These are the features that make Firefox your go-to browser. “It’s just that other browsers are too privacy invasive. And Firefox has a lot of great features not just one.”— @xonidev “Containers is the killer feature.”— @Kaegun “PIP (picture-in-picture) in every video”— @JanakXD “Add-ons on mobile ”— @kotulp “switched on V 1.0 never gone back to other browsers. Using sync between multiple desktops / Laptops & mobile is great [especially] for adblock extensions!”— @satanas_g Improvements over the years As Firefox has grown, so has our commitment to making your browsing experience better and faster. “Stability improvements, cleaner UI, rust under the hood, adblock”— @lee_official_the_real_one “It’s fast”— @blessedwithsins The trust factor Beyond features, many of you choose Firefox for its transparency, commitment to open-source, and user-first principles. “It wasn’t a feature. It was trust.”— @JimConnolly “I moved from Opera to Firefox because it was open-source and obeyed most of the standards. It’s been my default since version 1.5. Why wouldn’t it be?”— @omarwilley In it from the beginning Some of you have been here since the early days, and Firefox has become part of your internet history. “my dad used it first before installing it on the family laptop 18 years ago where every one could use it. Gotta say i never switched to another browser after i got my own computer 16 years ago.”— @032Zero “Tabbed browsing. Never left since.”— @ergosteur “I just liked Mozilla’s logo at the time, this was 20 years ago”— @SneedPlays “I don’t remember using anything else except firefox (as main web browser)”— @Miki1877852468 “Well, it was a successor to Netscape. Though migration was in 2008, I started using Firefox around 2005-2006. It was the best browser at the time. It is still the best browser for me now.”— @erolcanulutas Whatever it is that made you fall in love with Firefox, we’re so glad you’re here. Thanks for being part of our story and helping us keep the web open, safe and truly yours. The post We asked why you love Firefox. Here’s what you said. appeared first on The Mozilla Blog.
  • The Servo Blog: This month in Servo: faster fonts, fetches, and flexbox! (2024/11/08 00:00)
    Servo now supports ‘mix-blend-mode: plus-lighter’ (@mrobinson, #34057) and ‘transition-behavior: allow-discrete’ (@Loirooriol, #33991), including in the ‘transition’ shorthand (@Loirooriol, #34005), along with the fetch metadata request headers ‘Sec-Fetch-Site’, ‘Sec-Fetch-Mode’, ‘Sec-Fetch-User’, and ‘Sec-Fetch-Dest’ (@simonwuelker, #33830). We now have partial support for the CSS size keywords ‘min-content’, ‘max-content’, ‘fit-content’, and ‘stretch’ (@Loirooriol, #33558, #33659, #33854, #33951), including in floats (@Loirooriol, #33666), atomic inlines (@Loirooriol, #33737), and elements with ‘position: absolute’ or ‘fixed’ (@Loirooriol, #33950). We’re implementing the SubtleCrypto API, starting with full support for crypto.subtle.digest() (@simonwuelker, #34034), partial support for generateKey() with AES-CBC and AES-CTR (@msub2, #33628, #33963), and partial support for encrypt(), and decrypt() with AES-CBC (@msub2, #33795). More engine changes Servo’s architecture is improving, with a new cross-process compositor API that reduces memory copy overhead for video (@mrobinson, @crbrz, #33619, #33660, #33817). We’ve also started phasing out our old OpenGL bindings (gleam and sparkle) in favour of glow, which should reduce Servo’s complexity and binary size (@sagudev, @mrobinson, surfman#318, webxr#248, #33538, #33910, #33911). We’ve updated to Stylo 2024-10-04 (@Loirooriol, #33767) and wgpu 23 (@sagudev, #34073, #33819, #33635). The new version of wgpu includes several patches from @sagudev, adding support for const_assert, as well as accessing const arrays with runtime index values. We’ve also reworked WebGPU canvas presentation to ensure that we never use old buffers by mistake (@sagudev, #33613). We’ve also landed a bunch of improvements to our DOM geometry APIs, with DOMMatrix now supporting toString() (@simonwuelker, #33792) and updating is2D on mutation (@simonwuelker, #33796), support for DOMRect.fromRect() (@simonwuelker, #33798), and getBounds() on DOMQuad now handling NaN correctly (@simonwuelker, #33794). We now correctly handle non-ASCII characters in <img srcset> (@evuez, #33873), correctly handle data: URLs in more situations (@webbeef, #33500), and no longer throw an uncaught exception when pages try to use IntersectionObserver (@mrobinson, #33989). Outreachy contributors are doing great work in Servo again, helping us land many of this month’s improvements to GC static analysis (@taniishkaa, @webbeef, @chickenleaf, @jdm, @jahielkomu, @wulanseruniati, @lauwwulan, #33692, #33706, #33800, #33774, #33816, #33808, #33827, #33822, #33820, #33828, #33852, #33843, #33836, #33865, #33862, #33891, #33888, #33880, #33902, #33892, #33893, #33895, #33931, #33924, #33917, #33921, #33958, #33920, #33973, #33960, #33928, #33985, #33984, #33978, #33975, #34003, #34002) and code health (@chickenleaf, @DileepReddyP, @taniishkaa, @mercybassey, @jahielkomu, @cashall-0, @tony-nyagah, @lwz23, @Noble14477, #33959, #33713, #33804, #33618, #33625, #33631, #33632, #33633, #33643, #33643, #33646, #33648, #33653, #33664, #33685, #33686, #33689, #33686, #33690, #33705, #33707, #33724, #33727, #33728, #33729, #33730, #33740, #33744, #33757, #33771, #33757, #33782, #33790, #33809, #33818, #33821, #33835, #33840, #33853, #33849, #33860, #33878, #33881, #33894, #33935, #33936, #33943). 
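As a quick illustration of the SubtleCrypto work mentioned at the top of this update, this is what the now-supported crypto.subtle.digest() call looks like from page script. It is the standard Web Crypto API, shown here as a small TypeScript sketch rather than anything Servo-specific:

```typescript
// Hash a string with SHA-256 via the Web Crypto API, which Servo now
// implements for crypto.subtle.digest().
async function sha256Hex(text: string): Promise<string> {
  const data = new TextEncoder().encode(text);
  const digest = await crypto.subtle.digest("SHA-256", data); // ArrayBuffer
  return Array.from(new Uint8Array(digest))
    .map((byte) => byte.toString(16).padStart(2, "0"))
    .join("");
}

sha256Hex("Servo").then((hex) => console.log(hex));
```

The other SubtleCrypto methods mentioned above, generateKey(), encrypt(), and decrypt(), follow the same promise-based pattern.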
Performance improvements Our font system is faster now, with reduced latency when loading system fonts (@mrobinson, #33638), layout no longer blocking on sending font data to WebRender (@mrobinson, #33600), and memory mapped system fonts on macOS and FreeType platforms like Linux (@mrobinson, @mukilan, #33747). Servo now has a dedicated fetch thread (@mrobinson, #33863). This greatly reduces the number of IPC channels we create for individual requests, and should fix crashes related to file descriptor exhaustion on some platforms. Brotli-compressed responses are also handled more efficiently, such that we run the parser with up to 8 KiB of decompressed data at a time, rather than only 10 bytes of compressed data at a time (@crbrz, #33611). Flexbox layout now uses caching to avoid doing unnecessary work (@mrobinson, @Loirooriol, #33964, #33967), and now has experimental tracing-based profiling support (@mrobinson, #33647), which in turn no longer spams RUST_LOG=info when not enabled (@delan, #33845). We’ve also landed optimisations in table layout (@Loirooriol, #33575) and in our layout engine as a whole (@Loirooriol, #33806). Work continues on making our massive script crate build faster, with improved incremental builds (@sagudev, @mrobinson, #33502) and further patches towards splitting script into smaller crates (@sagudev, @jdm, #33627, #33665). We’ve also fixed several crashes, including when initiating a WebXR session on macOS (@jdm, #33962), when laying out replaced elements (@Loirooriol, #34006), when running JavaScript modules (@jdm, #33938), and in many situations when garbage collection occurs (@chickenleaf, @taniishkaa, @Loirooriol, @jdm, #33857, #33875, #33904, #33929, #33942, #33976, #34019, #34020, #33965, #33937). servoshell, embedding, and devtools Devtools support (--devtools 6080) is now compatible with Firefox 131+ (@eerii, #33661), and no longer lists iframes as if they were inspectable tabs (@eerii, #34032). Servo-the-browser now avoids unnecessary redraws (@webbeef, #34008), massively reducing its CPU usage, and no longer scrolls too slowly on HiDPI systems (@nicoburns, #34063). We now update the location bar when redirects happen (@rwakulszowa, #34004), and these updates are sent to all embedders of Servo, not just servoshell. We’ve added a new --unminify-css option (@Taym95, #33919), allowing you to dump the CSS used by a page like you can for JavaScript. This will pave the way for allowing you to modify that CSS for debugging site compat issues, which is not yet implemented. We’ve also added a new --screen-size option that can help with testing mobile websites (@mrobinson, #34038), renaming the old --resolution option to --window-size, and we’ve removed --no-minibrowser mode (@Taym95, #33677). We now publish nightly builds for OpenHarmony on servo.org (@mukilan, #33801). When running servoshell on OpenHarmony, we now display toasts when pages load or panic (@jschwe, #33621), and you can now pass certain Servo options via hdc shell aa start or a test app (@jschwe, #33588). Donations Thanks again for your generous support! We are now receiving 4201 USD/month (+1.3% over September) in recurring donations. We are no longer accepting donations on LFX — if you were donating there, please move your recurring donations to GitHub or Open Collective. Servo is also on thanks.dev, and already ten GitHub users that depend on Servo are sponsoring us there. 
If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community. With this money, we’ve been able to pay for a second Outreachy intern in this upcoming round, plus our web hosting and self-hosted CI runners for Windows and Linux builds. When the time comes, we’ll also be able to afford macOS runners and perf bots! As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page. Conference talks Servo project update — Manuel Rego spoke at the LF Europe Member Summit about the status and long-term vision of the Servo project Servo: Building a Browser Rendering Engine in Rust (slides) — Rakhi Sharma spoke at the Ubuntu Summit about Servo’s recent work in embedding, layout, and benchmarking
  • Support.Mozilla.Org: Celebrating our top contributors on Firefox’s 20th anniversary (2024/11/07 17:48)
    Firefox was built by a group of passionate developers, and has been supported by a dedicated community of caring contributors since day one. The SUMO platform was originally built in 2007 to provide an open-source community support channel for users, and to help us collaborate more effectively with our volunteer contributors. Over the years, SUMO has become a powerful platform that helps users get the most out of Firefox, provides opportunities for users to connect and learn more from each other, and allows us to gather important insights – all powered by our community of contributors. SUMO is not just a support platform but a place where other like-minded users, who care about making the internet a better place for everyone, can find opportunities to grow their skills and contribute. Our contributor community has been integral to Firefox’s success. Contributors humanize the experience across our support channels, champion meaningful fixes and changes, and help us onboard the next generation of Firefox users (and potential contributors!). Fun facts about our community: We’re global! We have active contributors in 63 countries. 6 active contributors have been with us since day one (Shout outs to Cor-el, jscher2000, James, mozbrowser, AliceWyman, and marsf) and 16 contributors have been here for 15+ years! In 2024*, our contributor community responded to 18,390 forum inquiries, made 747 en-US revisions and 5,684 l10n revisions to our Knowledge Base, responded to 441 Tweets, and issued 1,296 Play Store review responses (*from Jan-Oct 2024 for Firefox desktop, Android, and iOS. Non OP and non staff) Chart reflects top contributors for Firefox (Desktop, Android, and iOS) Highlights from throughout the years: Started in October 2007, SUMO has evolved in many different ways, but its spirit remains the same. It supports our wider user community while also allowing us to build strong relationships with our contributors. Below is a timeline of some key moments in SUMO’s history: 2 October 2007 – SUMO launched on TikiWiki. Knowledge Base was implemented in this initial phase, but article localization wasn’t supported until February 2008. 18 December 2007 – Forum went live 28 December 2007 – Live chat launched 5 February 2009 – SUMO logo was introduced 11 October 2010 – We expanded to Twitter (now X) supported by the Army of Awesome December 2010 – SUMO migrated from TikiWiki to Kitsune. The migration was done in stages and lasted most of 2010. 14 March 2021 – We expanded to take on Play Store support and consolidated our social support platforms in Conversocial/Verint 9 November 2024 – Our SUMO channels are largely powered by active contributors across forums, Knowledge Base and social We are so grateful for our active community of contributors who bring our mission to life every day. Special thanks to those of you who have been with us since the beginning. And to celebrate this milestone, we are going to reward top contributors (>99 contributions) for all products in 2024 with a special SUMO badge. Additionally, contributors with more than 999 contributions throughout SUMO’s existence and those with >99 contributions in 2024 will be given swag vouchers to shop at Mozilla’s swag stores. Cheers to the progress we’ve made, and the incredible foundation we’ve built together. The best is yet to come!   P.S. Thanks to Chris Ilias for additional note on SUMO's history.
  • Mozilla Open Policy & Advocacy Blog: Join Us to Mark 20 Years of Firefox (2024/11/07 14:13)
    You’re invited to Firefox’s 20th birthday!   We’re marking 20 years of Firefox — the independent open-source browser that has reshaped the way millions of people explore and experience the internet. Since its launch, Firefox has championed privacy, security, transparency, and put control back in the hands of people online. Come celebrate two decades of innovation, advocacy, and community — while looking forward to what’s to come. The post Join Us to Mark 20 Years of Firefox appeared first on Open Policy & Advocacy.
  • Mozilla Open Policy & Advocacy Blog: Behind the Scenes of eIDAS: A Look at Article 45 and Its Implications (2024/11/07 10:43)
    On October 21, 2024, Mozilla hosted a panel discussion during the Global Encryption Summit to explore the ongoing debate around Article 45 of the eIDAS regulation. Moderated by Robin Wilton from the Internet Society, the panel featured experts Dennis Jackson from Mozilla, Alexis Hancock from Certbot at EFF, and Thomas Lohninger from epicenter.works. Our panelists provided their insights on the technical, legal, and privacy concerns surrounding Article 45 and the potential impact on internet security and privacy. The panel, facilitated by Mozilla in connection with its membership on the Global Encryption Coalition Steering Committee, was part of the annual celebration of Global Encryption Day on October 21. What is eIDAS and Why is Article 45 Important? The original eIDAS regulation, introduced in 2014, aimed to create a unified framework for secure electronic identification (eID) and trust services across the European Union. Such trust services, provided by designated Trust Service Providers (TSPs), included electronic signatures, timestamps, and website authentication certificates. Subsequently, Qualified Web Authentication Certificates (QWACs) were also recognized as a method to verify that the entity behind a website also controls the domain in an effort to increase trust amongst users that they are accessing a legitimate website. Over the years, the cybersecurity community has expressed its concerns for users’ privacy and security regarding the use of QWACs, as they can lead to a false sense of security. Despite this criticism, in 2021, an updated EU proposal to the original law, in essence, aimed to mandate the recognition of QWACs as long as they were issued by qualified TSPs. This, in practice, would undermine decades of web security measures and put users’ privacy and security at stake. The Security Risk Ahead campaign raised awareness and addressed these issues by engaging widely with policymakers and including through a public letter signed by more than 500 experts that was also endorsed by organizations including Internet Society, European Digital Rights (EDRi), EFF, and Epicenter.works among others. The European Parliament introduced last-minute changes to mitigate risks of surveillance and fraud, but these safeguards now need to be technically implemented to protect EU citizens from potential exposure. Technical Concerns and Security Risks Thomas Lohninger provided context on how Article 45 fits into the larger eIDAS framework. He explained that while eIDAS aims to secure the wider digital ecosystem, QWACs under Article 45 could erode trust in website security, affecting both European and global users. Dennis Jackson, a member of Mozilla’s cryptography team, cautioned that without robust safeguards, Qualified Website Authentication Certificates (QWACs) could be misused, leading to increased risk of fraud. He noted limited involvement of technical experts in drafting Article 45 resulted in significant gaps within the law. The version of Article 45, as originally proposed in 2021, radically expanded the capabilities of EU governments to surveil their citizens by ensuring that cryptographic keys under government control can be used to intercept encrypted web traffic across the EU. Why Extended Validation Certificates (EVs) Didn’t Work—and Why Article 45 Might Not Either Alexis Hancock compared Article 45 to extended validation (EV) certificates, which were introduced years ago with similar intentions but ultimately failed to achieve their goals. 
EV certificates were designed to offer more information about the identity of websites but ended up being expensive and ineffective as most users didn’t even notice them. Hancock cautioned that QWACs could suffer from the same problems. Instead of focusing on complex authentication mechanisms, she argued, the priority should be on improving encryption and keeping the internet secure for everyone, regardless of whether a website has paid for a specific type of certificate. Balancing Security and Privacy: A Tough Trade-Off A key theme was balancing online transparency and protecting user privacy. All the panelists agreed that while identifying websites more clearly may have its advantages, it should not come at the expense of privacy and security. The risk is that requiring more authentication online could lead to reduced anonymity and greater potential for surveillance, undermining the principles of free expression and privacy on the internet. The panelists also pointed out that Article 45 could lead to a fragmented internet, with different regions adopting conflicting rules for registering and asserting ownership of a website. This fragmentation would make it harder to maintain a secure and unified web, complicating global web security. The Role of Web Browsers in Protecting Users Web browsers, like Firefox, play a crucial role in protecting users. The panelists stressed that browsers have a responsibility to push back against policies that could compromise user privacy or weaken internet security. Looking Ahead: What’s Next for eIDAS and Web Security? Thomas Lohninger raised the possibility of legal challenges to Article 45. If the regulation is implemented in a way that violates privacy rights or data protection laws, it could be contested under the EU’s legal frameworks, including the General Data Protection Regulation (GDPR) and the ePrivacy Directive. Such battles could be lengthy and complex however, underscoring the need for continued advocacy. As the panel drew to a close, the speakers emphasized that while the recent changes to Article 45 represent progress, the fight is far from over. The implementation of eIDAS continues to evolve, and it’s crucial that stakeholders, including browsers, cybersecurity experts, and civil society groups, remain vigilant in advocating for a secure and open internet. The consensus from the panel was clear: as long as threats to encryption and web security exist, the community must stay engaged in these debates. Scrutinizing policies like eIDAS  is essential to ensure they truly serve the interests of internet users, not just large institutions or governments. The panelists concluded by calling for ongoing collaboration between policymakers, technical experts, and the public to protect the open web and ensure that any changes to digital identity laws enhance, rather than undermine, security and privacy for all. — You can watch the panel discussion here. The post Behind the Scenes of eIDAS: A Look at Article 45 and Its Implications appeared first on Open Policy & Advocacy.
  • The Rust Programming Language Blog: Google Summer of Code 2024 results (2024/11/07 00:00)
    As we have previously announced, the Rust Project participated in Google Summer of Code (GSoC) for the first time this year. Nine contributors have been tirelessly working on their exciting projects for several months. The projects had various durations; some of them have ended in August, while the last one has been concluded in the middle of October. Now that the final reports of all the projects have been submitted, we can happily announce that all nine contributors have passed the final review! That means that we have deemed all of their projects to be successful, even though they might not have fulfilled all of their original goals (but that was expected). We had a lot of great interactions with our GSoC contributors, and based on their feedback, it seems that they were also quite happy with the GSoC program and that they had learned a lot. We are of course also incredibly grateful for all their contributions - some of them have even continued contributing after their project has ended, which is really awesome. In general, we think that Google Summer of Code 2024 was a success for the Rust Project, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our project idea list. Below you can find a brief summary of each of our GSoC 2024 projects, including feedback from the contributors and mentors themselves. You can find more information about the projects here. Adding lint-level configuration to cargo-semver-checks Contributor: Max Carr Mentor: Predrag Gruevski Final report cargo-semver-checks is a tool designed for automatically detecting semantic versioning conflicts, which is planned to one day become a part of Cargo itself. The goal of this project was to enable cargo-semver-checks to ship additional opt-in lints by allowing users to configure which lints run in which cases, and whether their findings are reported as errors or warnings. Max achieved this goal by implementing a comprehensive system for configuring cargo-semver-checks lints directly in the Cargo.toml manifest file. He also extensively discussed the design with the Cargo team to ensure that it is compatible with how other Cargo lints are configured, and won't present a future compatibility problem for merging cargo-semver-checks into Cargo. Predrag, who is the author of cargo-semver-checks and who mentored Max on this project, was very happy with his contributions that even went beyond his original project scope: He designed and built one of our most-requested features, and produced design prototypes of several more features our users would love. He also observed that writing quality CLI and functional tests was hard, so he overhauled our test system to make better tests easier to make. Future work on cargo-semver-checks will be much easier thanks to the work Max put in this summer. Great work, Max! Implementation of a faster register allocator for Cranelift Contributor: Demilade Sonuga Mentors: Chris Fallin and Amanieu d'Antras Final report The Rust compiler can use various backends for generating executable code. The main one is of course the LLVM backend, but there are other backends, such as GCC, .NET or Cranelift. Cranelift is a code generator for various hardware targets, essentially something similar to LLVM. The Cranelift backend uses Cranelift to compile Rust code into executable code, with the goal of improving compilation performance, especially for debug (unoptimized) builds. 
Even though this backend can already be faster than the LLVM backend, we have identified that it was slowed down by the register allocator used by Cranelift. Register allocation is a well-known compiler task where the compiler decides which registers should hold variables and temporary expressions of a program. Usually, the goal of register allocation is to perform the register assignment in a way that maximizes the runtime performance of the compiled program. However, for unoptimized builds, we often care more about the compilation speed instead. Demilade has thus proposed to implement a new Cranelift register allocator called fastalloc, with the goal of making it as fast as possible, at the cost of the quality of the generated code. He was very well-prepared; in fact, he had a prototype implementation ready even before his GSoC project started! However, register allocation is a complex problem, and it thus took several months to finish the implementation and optimize it as much as possible. Demilade also made extensive use of fuzzing to make sure that his allocator is robust even in the presence of various edge cases. Once the allocator was ready, Demilade benchmarked the Cranelift backend both with the original and his new register allocator using our compiler benchmark suite. And the performance results look awesome! With his faster register allocator, the Rust compiler executes up to 18% fewer instructions across several benchmarks, including complex ones like performing a debug build of Cargo itself. Note that this is an end-to-end performance improvement of the time needed to compile a whole crate, which is really impressive. If you would like to examine the results in more detail or even run the benchmark yourself, check out Demilade's final report, which includes detailed instructions on how to reproduce the benchmark. Apart from having the potential to speed up compilation of Rust code, the new register allocator can also be useful for other use cases, as it can be used in Cranelift on its own (outside the Cranelift codegen backend). What can we say other than that we are very happy with Demilade's work! Note that the new register allocator is not yet available in the Cranelift codegen backend out-of-the-box, but we expect that it will eventually become the default choice for debug builds and that it will thus make compilation of Rust crates using the Cranelift backend faster in the future. Improve Rust benchmark suite Contributor: Eitaro Kubotera Mentor: Jakub Beránek Final report This project was relatively loosely defined, with the overarching goal of improving the user interface of the Rust compiler benchmark suite. Eitaro tackled this challenge from various angles at once. He improved the visualization of runtime benchmarks, which were previously a second-class citizen in the benchmark suite, by adding them to our dashboard and by implementing historical charts of runtime benchmark results, which help us figure out how a given benchmark behaves over a longer time span. Another improvement he worked on was embedding a profiler trace visualizer directly within the rustc-perf website. This was a challenging task, which required him to evaluate several visualizers and figure out how to include them within the source code of the benchmark suite in a non-disruptive way. In the end, he managed to integrate Perfetto within the suite website, and also performed various optimizations to improve the performance of loading compilation profiles.
Last, but not least, Eitaro also created a completely new user interface for the benchmark suite, which runs entirely in the terminal. Using this interface, Rust compiler contributors can examine the performance of the compiler without having to start the rustc-perf website, which can be challenging to deploy locally. Apart from the mentioned contributions, Eitaro also made a lot of other smaller improvements to various parts of the benchmark suite. Thank you for all your work! Move cargo shell completions to Rust Contributor: shanmu Mentor: Ed Page Final report Cargo's completion scripts have been hand maintained and frequently broken when changed. The goal for this effort was to have the completions automatically generated from the definition of Cargo's command-line, with extension points for dynamically generated results. shanmu took the prototype for dynamic completions in clap (the command-line parser used by Cargo), got it working and tested for common shells, as well as extended the parser to cover more cases. They then added extension points for CLI's to provide custom completion results that can be generated on the fly. In the next phase, shanmu added this to nightly Cargo and added different custom completers to match what the handwritten completions do. As an example, with this feature enabled, when you type cargo test --test= and hit the Tab key, your shell will autocomplete all the test targets in your current Rust crate! If you are interested, see the instructions for trying this out. The link also lists where you can provide feedback. You can also check out the following issues to find out what is left before this can be stabilized: clap#3166 cargo#14520 Rewriting esoteric, error-prone makefile tests using robust Rust features Contributor: Julien Robert Mentor: Jieyou Xu Final report The Rust compiler has several test suites that make sure that it is working correctly under various conditions. One of these suites is the run-make test suite, whose tests were previously written using Makefiles. However, this setup posed several problems. It was not possible to run the suite on the Tier 1 Windows MSVC target (x86_64-pc-windows-msvc) and getting it running on Windows at all was quite challenging. Furthermore, the syntax of Makefiles is quite esoteric, which frequently caused mistakes to go unnoticed even when reviewed by multiple people. Julien helped to convert the Makefile-based run-make tests into plain Rust-based tests, supported by a test support library called run_make_support. However, it was not a trivial "rewrite this in Rust" kind of deal. In this project, Julien: Significantly improved the test documentation; Fixed multiple bugs that were present in the Makefile versions that had gone unnoticed for years -- some tests were never testing anything or silently ignored failures, so even if the subject being tested regressed, these tests would not have caught that. Added to and improved the test support library API and implementation; and Improved code organization within the tests to make them easier to understand and maintain. Just to give you an idea of the scope of his work, he has ported almost 250 Makefile tests over the span of his GSoC project! If you like puns, check out the branch names of Julien's PRs, as they are simply fantestic. As a result, Julien has significantly improved the robustness of the run-make test suite, and improved the ergonomics of modifying existing run-make tests and authoring new run-make tests. 
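To give a flavor of what these converted tests look like, here is a minimal sketch of an rmake.rs-style test. The exact helper names and builder methods in run_make_support may differ from what is shown; this is an illustration based on the description above, not a verbatim test from the suite.

```rust
// rmake.rs - sketch of a Rust-based run-make test (illustrative only; the
// real run_make_support API may differ in its details).
use run_make_support::{run, rustc};

fn main() {
    // Compile the test input; the helper fails the test if compilation
    // does not succeed.
    rustc().input("main.rs").run();
    // Run the produced binary and assert that it exits successfully.
    run("main");
}
```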
Multiple contributors have expressed that they were more willing to work with the Rust-based run-make tests over the previous Makefile versions. The vast majority of run-make tests now use the Rust-based test infrastructure, with a few holdouts remaining due to various quirks. After these are resolved, we can finally rip out the legacy Makefile test infrastructure. Rewriting the Rewrite trait Contributor: SeoYoung Lee Mentor: Yacin Tmimi Final report rustfmt is a Rust code formatter that is widely used across the Rust ecosystem thanks to its direct integration within Cargo. Usually, you just run cargo fmt and you can immediately enjoy a properly formatted Rust project. However, there are edge cases in which rustfmt can fail to format your code. That is not such an issue on its own, but it becomes more problematic when it fails silently, without giving the user any context about what went wrong. This is what was happening in rustfmt, as many functions simply returned an Option instead of a Result, which made it difficult to add proper error reporting. The goal of SeoYoung's project was to perform a large internal refactoring of rustfmt that would allow tracking context about what went wrong during reformatting. In turn, this would enable turning silent failures into proper error messages that could help users examine and debug what went wrong, and could even allow rustfmt to retry formatting in more situations. At first, this might sound like an easy task, but performing such large-scale refactoring within a complex project such as rustfmt is not so simple. SeoYoung needed to come up with an approach to incrementally apply these refactors, so that they would be easy to review and wouldn't impact the entire code base at once. She introduced a new trait that enhanced the original Rewrite trait, and modified existing implementations to align with it. She also had to deal with various edge cases that we hadn't anticipated before the project started. SeoYoung was meticulous and systematic with her approach, and made sure that no formatting functions or methods were missed. Ultimately, the refactor was a success! Internally, rustfmt now keeps track of more information related to formatting failures, including errors that it could not possibly report before, such as issues with macro formatting. It also has the ability to provide information about source code spans, which helps identify parts of code that require spacing adjustments when exceeding the maximum line width. We don't yet propagate that additional failure context as user facing error messages, as that was a stretch goal that we didn't have time to complete, but SeoYoung has expressed interest in continuing to work on that as a future improvement! Apart from working on error context propagation, SeoYoung also made various other improvements that enhanced the overall quality of the codebase, and she was also helping other contributors understand rustfmt. Thank you for making the foundations of formatting better for everyone! Rust to .NET compiler - add support for compiling & running cargo tests Contributor: Michał Kostrubiec Mentor: Jack Huey Final report As was already mentioned above, the Rust compiler can be used with various codegen backends. One of these is the .NET backend, which compiles Rust code to the Common Intermediate Language (CIL), which can then be executed by the .NET Common Language Runtime (CLR). This backend allows interoperability of Rust and .NET (e.g. 
C#) code, in an effort to bring these two ecosystems closer together. At the start of this year, the .NET backend was already able to compile complex Rust programs, but it was still lacking certain crucial features. The goal of this GSoC project, implemented by Michał, who is in fact the sole author of the backend, was to extend the functionality of this backend in various areas. As a target goal, he set out to extend the backend so that it could be used to run tests using the cargo test command. Even though it might sound trivial, properly compiling and running the Rust test harness is non-trivial, as it makes use of complex features such as dynamic trait objects, atomics, panics, unwinding or multithreading. These features were especially tricky to implement in this codegen backend, because the LLVM intermediate representation (IR) and CIL have fundamental differences, and not all LLVM intrinsics have .NET equivalents. However, this did not stop Michał. He has been working on this project tirelessly, implementing new features, fixing various issues and learning more about the compiler's internals every new day. He has also been documenting his journey with (almost) daily updates on Zulip, which were fascinating to read. Once he has reached his original goal, he moved the goalpost up to another level and attempted to run the compiler's own test suite using the .NET backend. This helped him uncover additional edge cases and also led to a refactoring of the whole backend that resulted in significant performance improvements. By the end of the GSoC project, the .NET backend was able to properly compile and run almost 90% of the standard library core and std test suite. That is an incredibly impressive number, since the suite contains thousands of tests, some of which are quite arcane. Michał's pace has not slowed down even after the project has ended and he is still continuously improving the backend. Oh, and did we already mention that his backend also has experimental support for emitting C code, effectively acting as a C codegen backend?! Michał has been very busy over the summer. We thank Michał for all his work on the .NET backend, as it was truly inspirational, and led to fruitful discussions that were relevant also to other codegen backends. Michał's next goal is to get his backend upstreamed and create an official .NET compilation target, which could open up the doors to Rust becoming a first-class citizen in the .NET ecosystem. Sandboxed and deterministic proc macro using WebAssembly Contributor: Apurva Mishra Mentor: David Lattimore Final report Rust procedural (proc) macros are currently run as native code that gets compiled to a shared object which is loaded directly into the process of the Rust compiler. Because of this design, these macros can do whatever they want, for example arbitrarily access the filesystem or communicate through a network. This has not only obvious security implications, but it also affects performance, as this design makes it difficult to cache proc macro invocations. Over the years, there have been various discussions about making proc macros more hermetic, for example by compiling them to WebAssembly modules, which can be easily executed in a sandbox. This would also open the possibility of distributing precompiled versions of proc macros via crates.io, to speed up fresh builds of crates that depend on proc macros. 
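To make that motivation concrete, here is a small hypothetical proc macro (not taken from the project) demonstrating that today's natively compiled macros can freely access the host machine while compiling downstream crates, which is precisely the behavior that WebAssembly sandboxing would rule out.

```rust
// lib.rs of a proc-macro crate (`proc-macro = true` in Cargo.toml).
// Hypothetical example, not code from the GSoC project.
use proc_macro::TokenStream;

#[proc_macro]
pub fn leak_hostname(_input: TokenStream) -> TokenStream {
    // A native proc macro can read arbitrary files (or open sockets, spawn
    // processes, ...) at compile time; nothing sandboxes it.
    let hostname = std::fs::read_to_string("/etc/hostname")
        .unwrap_or_else(|_| "unknown".to_string());
    // Embed whatever was read on the build machine into the compiled crate
    // as a string literal.
    format!("{:?}", hostname.trim()).parse().unwrap()
}
```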
The goal of this project was to examine what it would take to implement WebAssembly module support for proc macros and create a prototype of this idea. We knew this would be a very ambitious project, especially since Apurva did not have prior experience with contributing to the Rust compiler, and because proc macro internals are very complex. Nevertheless, some progress was made. With the help of his mentor, David, Apurva was able to create a prototype that can load WebAssembly code into the compiler via a shared object. Some work was also done to make use of the existing TokenStream serialization and deserialization code in the compiler's proc_macro crate. Even though this project did not fulfill its original goals and more work will be needed in the future to get a functional prototype of WebAssembly proc macros, we are thankful for Apurva's contributions. The WebAssembly loading prototype is a good start, and Apurva's exploration of proc macro internals should serve as a useful reference for anyone working on this feature in the future. Going forward, we will try to describe more incremental steps for our GSoC projects, as this project was perhaps too ambitious from the start. Tokio async support in Miri Contributor: Tiffany Pek Yuan Mentor: Oli Scherer Final report miri is an interpreter that can find possible instances of undefined behavior in Rust code. It is being used across the Rust ecosystem, but previously it was not possible to run it on any non-trivial programs (those that ever await on anything) that use tokio, due to a fundamental missing feature: support for the epoll syscall on Linux (and similar APIs on other major platforms). Tiffany implemented the basic epoll operations needed to cover the majority of the tokio test suite, by crafting pure libc code examples that exercised those epoll operations, and then implementing their emulation in miri itself. At times, this required refactoring core miri components like file descriptor handling, as they were originally not created with syscalls like epoll in mind. Surprisingly to everyone (though probably not tokio-internals experts), once these core epoll operations were finished, operations like async file reading and writing started working in miri out of the box! Due to limitations of the non-blocking file operations offered by operating systems, tokio wraps these file operations in dedicated threads, which was already supported by miri. Once Tiffany had finished the project, including stretch goals like implementing async file operations, she proceeded to contact the tokio maintainers and worked with them to run miri on most tokio tests in CI. And we have good news: so far no soundness problems have been discovered! Tiffany has become a regular contributor to miri, focusing on continuing to expand the set of supported file descriptor operations. We thank her for all her contributions! Conclusion We are grateful that we could be a part of the Google Summer of Code 2024 program, and we would also like to extend our gratitude to all our contributors! We are looking forward to joining the GSoC program again next year.
  • The Rust Programming Language Blog: gccrs: An alternative compiler for Rust (2024/11/07 00:00)
    This is a guest post from the gccrs project, at the invitation of the Rust Project, to clarify the relationship with the Rust Project and the opportunities for collaboration. gccrs is a work-in-progress alternative compiler for Rust being developed as part of the GCC project. GCC is a collection of compilers for various programming languages that all share a common compilation framework. You may have heard about gccgo, gfortran, or g++, which are all binaries within that project, the GNU Compiler Collection. The aim of gccrs is to add support for the Rust programming language to that collection, with the goal of having the exact same behavior as rustc. First and foremost, gccrs was started as a project because it is fun. Compilers are incredibly rewarding pieces of software, and are great fun to put together. The project was started back in 2014, before Rust 1.0 was released, but was quickly put aside due to the shifting nature of the language back then. Around 2019, work on the compiler started again, led by Philip Herron and funded by Open Source Security and Embecosm. Since then, we have kept steadily progressing towards support for the Rust language as a whole, and our team has kept growing with around a dozen contributors working regularly on the project. We have participated in the Google Summer of Code program for the past four years, and multiple students have joined the effort. The main goal of gccrs is to provide an alternative option for compiling Rust. GCC is an old project, as it was first released in 1987. Over the years, it has accumulated numerous contributions and support for multiple targets, including some not supported by LLVM, the main backend used by rustc. A practical example of that reach is the homebrew Dreamcast scene, where passionate engineers develop games for the Dreamcast console. Its processor architecture, SuperH, is supported by GCC but not by LLVM. This means that Rust is not able to be used on those platforms, except through efforts like gccrs or the rustc-codegen-gcc backend - whose main differences will be explained later. GCC also benefits from the decades of software written in unsafe languages. As such, a high amount of safety features have been developed for the project as external plugins, or even within the project as static analyzers. These analyzers and plugins are executed on GCC's internal representations, meaning that they are language-agnostic, and can thus be used on all the programming languages supported by GCC. Likewise, many GCC plugins are used for increasing the safety of critical projects such as the Linux kernel, which has recently gained support for the Rust programming language. This makes gccrs a useful tool for analyzing unsafe Rust code, and more generally Rust code which has to interact with existing C code. We also want gccrs to be a useful tool for rustc itself by helping pan out the Rust specification effort with a unique viewpoint - that of a tool trying to replicate another's functionality, oftentimes through careful experimentation and source reading where the existing documentation did not go into enough detail. We are also in the process of developing various tools around gccrs and rustc, for the sole purpose of ensuring gccrs is as correct as rustc - which could help in discovering surprising behavior, unexpected functionality, or unspoken assumptions. 
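As one illustration of the kind of careful experimentation this involves: even a tiny program can hinge on semantics that are easy to get subtly wrong in a second implementation, such as the exact drop order of locals and temporaries. The sketch below is illustrative and is not taken from the gccrs test suite.

```rust
// Drop order is one of the behaviors an alternative compiler must reproduce
// exactly: locals drop in reverse declaration order, and temporaries drop at
// the end of their enclosing statement.
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _a = Noisy("a");
    let _b = Noisy("b");
    // The temporary lives until the end of this statement.
    println!("len = {}", Noisy("temp").0.len());
    // Expected output order: len = 4, dropping temp, dropping b, dropping a.
}
```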
We would like to point out that our goal in aiding the Rust specification effort is not to turn it into a document for certifying alternative compilers as "Rust compilers" - while we believe that the specification will be useful to gccrs, our main goal is to contribute to it, by reviewing and adding to it as much as possible. Furthermore, the project is still "young", and still requires a huge amount of work. There are a lot of places to make your mark, and a lot of easy things to work on for contributors interested in compilers. We have strived to create a safe, fun, and interesting space for all of our team and our GSoC students. We encourage anyone interested to come chat with us on our various communication platforms, and offer mentorship for you to learn how to contribute to the project and to compilers in general. Maybe more importantly however, there is a number of things that gccrs is NOT for. The project has multiple explicit non-goals, which we value just as highly as our goals. The most crucial of these non-goals is for gccrs not to become a gateway for an alternative or extended Rust-like programming language. We do not wish to create a GNU-specific version of Rust, with different semantics or slightly different functionality. gccrs is not a way to introduce new Rust features, and will not be used to circumvent the RFC process - which we will be using, should we want to see something introduced to Rust. Rust is not C, and we do not intend to introduce subtle differences in standard by making some features available only to gccrs users. We know about the pain caused by compiler-specific standards, and have learned from the history of older programming languages. We do not want gccrs to be a competitor to the rustc_codegen_gcc backend. While both projects will effectively achieve the same goal, which is to compile Rust code using the GCC compiler framework, there are subtle differences in what each of these projects will unlock for the language. For example, rustc_codegen_gcc makes it easy to benefit from all of rustc's amazing diagnostics and helpful error messages, and makes Rust easily usable on GCC-specific platforms. On the other hand, it requires rustc to be available in the first place, whereas gccrs is part of a separate project entirely. This is important for some users and core Linux developers for example, who believe that having the ability to compile the entire kernel (C and Rust parts) using a single compiler is essential. gccrs can also offer more plugin entrypoints by virtue of it being its own separate GCC frontend. It also allows Rust to be used on GCC-specific platforms with an older GCC where libgccjit is not available. Nonetheless, we are very good friends with the folks working on rustc_codegen_gcc, and have helped each other multiple times, especially in dealing with the patch-based contribution process that GCC uses. All of this ties into a much more global goal, which we could summarize as the following: We do not want to split the Rust ecosystem. We want gccrs to help the language reach even more people, and even more platforms. To ensure that, we have taken multiple measures to make sure the values of the Rust project are respected and exposed properly. One of the features we feel most strongly about is the addition of a very annoying command line flag to the compiler, -frust-incomplete-and-experimental-compiler-do-not-use. 
Without it, you are not able to compile any code with gccrs, and the compiler will output the following error message: crab1: fatal error: gccrs is not yet able to compile Rust code properly. Most of the errors produced will be the fault of gccrs and not the crate you are trying to compile. Because of this, please report errors directly to us instead of opening issues on said crate's repository. Our github repository: https://github.com/rust-gcc/gccrs Our bugzilla tracker: https://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=__open__&component=rust&product=gcc If you understand this, and understand that the binaries produced might not behave accordingly, you may attempt to use gccrs in an experimental manner by passing the following flag: -frust-incomplete-and-experimental-compiler-do-not-use or by defining the following environment variable (any value will do) GCCRS_INCOMPLETE_AND_EXPERIMENTAL_COMPILER_DO_NOT_USE For cargo-gccrs, this means passing GCCRS_EXTRA_ARGS="-frust-incomplete-and-experimental-compiler-do-not-use" as an environment variable. Until the compiler can compile correct Rust and, most importantly, reject incorrect Rust, we will be keeping this command line option in the compiler. The hope is that it will prevent users from potentially annoying existing Rust crate maintainers with issues about code not compiling, when it is most likely our fault for not having implemented part of the language yet. Our goal of creating an alternative compiler for the Rust language must not have a negative effect on any member of the Rust community. Of course, this command line flag is not to the taste of everyone, and there has been significant pushback to its presence... but we believe it to be a good representation of our main values. In a similar vein, gccrs separates itself from the rest of the GCC project by not using a mailing list as its main mode of communication. The compiler we are building will be used by the Rust community, and we believe we should make it easy for that community to get in touch with us and report the problems they encounter. Since Rustaceans are used to GitHub, this is also the development platform we have been using for the past five years. Similarly, we use a Zulip instance as our main communication platform, and encourage anyone wanting to chat with us to join it. Note that we still have a mailing list, as well as an IRC channel (gcc-rust@gcc.gnu.org and #gccrust on oftc.net), where all are welcome. To further ensure that gccrs does not create friction in the ecosystem, we want to be extremely careful about the finer details of the compiler, which to us means reusing rustc components where possible, sharing effort on those components, and communicating extensively with Rust experts in the community. Two Rust components are already in use by gccrs: a slightly older version of polonius, the next-generation Rust borrow-checker, and the rustc_parse_format crate of the compiler. There are multiple reasons for reusing these crates, with the main one being correctness. Borrow checking is a complex topic and a pillar of the Rust programming language. Having subtle differences between rustc and gccrs regarding the borrow rules would be annoying and unproductive to users - but by making an effort to start integrating polonius into our compilation pipeline, we help ensure that the results we produce will be equivalent to rustc. You can read more about the various components we use, and we plan to reuse even more here. 
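For a sense of why sharing the borrow checker matters, consider the well-known "get or insert" pattern below (an illustrative example, not taken from the gccrs documentation): today's rustc rejects it even though it is sound, while a polonius-style analysis accepts it. Two independently written borrow checkers could easily land on different sides of corner cases like this one.

```rust
use std::collections::HashMap;

// Rejected by the current (non-lexical lifetimes) borrow checker with
// "cannot borrow `*map` as mutable", but accepted by polonius-style
// analysis: the kind of subtle case where implementations could drift apart.
fn get_or_insert(map: &mut HashMap<u32, String>, key: u32) -> &String {
    match map.get(&key) {
        Some(value) => value,
        None => {
            map.insert(key, String::new());
            map.get(&key).unwrap()
        }
    }
}

fn main() {
    let mut map = HashMap::new();
    println!("{}", get_or_insert(&mut map, 1));
}
```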
We would also like to contribute to the polonius project itself and help make it better if possible. This cross-pollination of components will obviously benefit us, but we believe it will also be useful for the Rust project and ecosystem as a whole, and will help strengthen these implementations. Reusing rustc components could also be extended to other areas of the compiler: Various components of the type system, such as the trait solver, an essential and complex piece of software, could be integrated into gccrs. Simpler things such as parsing, as we have done for the format string parser and inline assembly parser, also make sense to us. They will help ensure that the internal representation we deal with will correspond to the one expected by the Rust standard library. On a final note, we believe that one of the most important steps we could take to prevent breakage within the Rust ecosystem is to further improve our relationship with the Rust community. The amount of help we have received from Rust folks is great, and we think gccrs can be an interesting project for a wide range of users. We would love to hear about your hopes for the project and your ideas for reducing ecosystem breakage or lowering friction with the crates you have published. We had a great time chatting about gccrs at RustConf 2024, and everyone's interest in the project was heartwarming. Please get in touch with us if you have any ideas on how we could further contribute to Rust.
  • This Week In Rust: This Week in Rust 572 (2024/11/06 05:00)
    Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR. Want TWIR in your inbox? Subscribe here. Updates from Rust Community Official October project goals update Next Steps on the Rust Trademark Policy This Development-cycle in Cargo: 1.83 Re-organising the compiler team and recognising our team members This Month in Our Test Infra: October 2024 Call for proposals: Rust 2025h1 project goals Foundation Q3 2024 Recap from Rebecca Rumbul Rust Foundation Member Announcement: CodeDay, OpenSource Science(OS-Sci), & PROMOTIC Newsletters The Embedded Rustacean Issue #31 Project/Tooling Updates Announcing Intentrace, an alternative strace for everyone Ractor Quickstart Announcing Sycamore v0.9.0 CXX-Qt 0.7 Release An 'Educational' Platformer for Kids to Learn Math and Reading—and Bevy for the Devs [ZH][EN] Select HTML Components in Declarative Rust Observations/Thoughts Safety in an unsafe world MinPin: yet another pin proposal Reached the recursion limit... at build time? Building Trustworthy Software: The Power of Testing in Rust Async Rust is not safe with io_uring Macros, Safety, and SOA how big is your future? A comparison of Rust’s borrow checker to the one in C# Streaming Audio APIs in Rust pt. 3: Audio Decoding [audio] InfinyOn with Deb Roy Chowdhury Rust Walkthroughs Difference Between iter() and into_iter() in Rust Rust's Sneaky Deadlock With if let Blocks Why I love Rust for tokenising and parsing "German string" optimizations in Spellbook Rust's Most Subtle Syntax Parsing arguments in Rust with no dependencies Simple way to make i18n support in Rust with with examples and tests How to shallow clone a Cow Beginner Rust ESP32 development - Snake [video] Rust Collections & Iterators Demystified 🪄 Research Charon: An Analysis Framework for Rust Crux, a Precise Verifier for Rust and Other Languages Miscellaneous Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk [audio] Let's talk about Rust with John Arundel [audio] Exploring Rust for Embedded Systems with Philip Markgraf Crate of the Week This week's crate is wtransport, an implementation of the WebTransport specification, a successor to WebSockets with many additional features. Thanks to Josh Triplett for the suggestion! Please submit your suggestions and votes for next week! Calls for Testing An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward: RFCs No calls for testing were issued this week. Rust No calls for testing were issued this week. Rustup No calls for testing were issued this week. If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing. 
Call for Participation; projects and speakers CFP - Projects Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! CFP - Events Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker. If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! Updates from the Rust Project 473 pull requests were merged in the last week account for late-bound depth when capturing all opaque lifetimes add --print host-tuple to print host target tuple add f16 and f128 to invalid_nan_comparison add lp64e RISC-V ABI also treat impl definition parent as transparent regarding modules cleanup attributes around unchecked shifts and unchecked negation in const cleanup op lookup in HIR typeck collect item bounds for RPITITs from trait where clauses just like associated types do not enforce ~const constness effects in typeck if rustc_do_not_const_check don't lint irrefutable_let_patterns on leading patterns if else if let-chains double-check conditional constness in MIR ensure that resume arg outlives region bound for coroutines find the generic container rather than simply looking up for the assoc with const arg fix compiler panic with a large number of threads fix suggestion for diagnostic error E0027 fix validation when lowering ? 
trait bounds implement suggestion for never type fallback lints improve missing_abi lint improve duplicate derive Copy/Clone diagnostics llvm: match new LLVM 128-bit integer alignment on sparc make codegen help output more consistent make sure type_param_predicates resolves correctly for RPITIT pass RUSTC_HOST_FLAGS at once without the for loop port most of --print=target-cpus to Rust register ~const preds for Deref adjustments in HIR typeck reject generic self types remap impl-trait lifetimes on HIR instead of AST lowering remove "" case from RISC-V llvm_abiname match statement remove do_not_const_check from Iterator methods remove region from adjustments remove support for -Zprofile (gcov-style coverage instrumentation) replace manual time convertions with std ones, comptime time format parsing suggest creating unary tuples when types don't match a trait support clobber_abi and vector registers (clobber-only) in PowerPC inline assembly try to point out when edition 2024 lifetime capture rules cause borrowck issues typingMode: merge intercrate, reveal, and defining_opaque_types miri: change futex_wait errno from Scalar to IoError stabilize const_arguments_as_str stabilize if_let_rescope mark str::is_char_boundary and str::split_at* unstably const remove const-support for align_offset and is_aligned unstably add ptr::byte_sub_ptr implement From<&mut {slice}> for Box/Rc/Arc<{slice}> rc/Arc: don't leak the allocation if drop panics add LowerExp and UpperExp implementations to NonZero use Hacker's Delight impl in i64::midpoint instead of wide i128 impl xous: sync: remove rustc_const_stable attribute on Condvar and Mutex new() add const_panic macro to make it easier to fall back to non-formatting panic in const cargo: downgrade version-exists error to warning on dry-run cargo: add more metadata to rustc_fingerprint cargo: add transactional semantics to rustfix cargo: add unstable -Zroot-dir flag to configure the path from which rustc should be invoked cargo: allow build scripts to report error messages through cargo::error cargo: change config paths to only check CARGO_HOME for cargo-script cargo: download targeted transitive deps of with artifact deps' target platform cargo fix: track version in fingerprint dep-info files cargo: remove requirement for --target when invoking Cargo with -Zbuild-std rustdoc: Fix --show-coverage when JSON output format is used rustdoc: Unify variant struct fields margins with struct fields rustdoc: make doctest span tweak a 2024 edition change rustdoc: skip stability inheritance for some item kinds mdbook: improve theme support when JS is disabled mdbook: load the sidebar toc from a shared JS file or iframe clippy: infinite_loops: fix incorrect suggestions on async functions/closures clippy: needless_continue: check labels consistency before warning clippy: no_mangle attribute requires unsafe in Rust 2024 clippy: add new trivial_map_over_range lint clippy: cleanup code suggestion for into_iter_without_iter clippy: do not use gen as a variable name clippy: don't lint unnamed consts and nested items within functions in missing_docs_in_private_items clippy: extend large_include_file lint to also work on attributes clippy: fix allow_attributes when expanded from some macros clippy: improve display of clippy lints page when JS is disabled clippy: new lint map_all_any_identity clippy: new lint needless_as_bytes clippy: new lint source_item_ordering clippy: return iterator must not capture lifetimes in Rust 2024 clippy: use match ergonomics compatible with editions 2021 
and 2024 rust-analyzer: allow interpreting consts and statics with interpret function command rust-analyzer: avoid interior mutability in TyLoweringContext rust-analyzer: do not render meta info when hovering usages rust-analyzer: add assist to generate a type alias for a function rust-analyzer: render extern blocks in file_structure rust-analyzer: show static values on hover rust-analyzer: auto-complete import for aliased function and module rust-analyzer: fix the server not honoring diagnostic refresh support rust-analyzer: only parse safe as contextual kw in extern blocks rust-analyzer: parse patterns with leading pipe properly in all places rust-analyzer: support new #[rustc_intrinsic] attribute and fallback bodies Rust Compiler Performance Triage A week dominated by one large improvement and one large regression where luckily the improvement had a larger impact. The regression seems to have been caused by a newly introduced lint that might have performance issues. The improvement was in building rustc with protected visibility which reduces the number of dynamic relocations needed leading to some nice performance gains. Across a large swath of the perf suit, the compiler is on average 1% faster after this week compared to last week. Triage done by @rylev. Revision range: c8a8c820..27e38f8f Summary: (instructions:u) mean range count Regressions ❌ (primary) 0.8% [0.1%, 2.0%] 80 Regressions ❌ (secondary) 1.9% [0.2%, 3.4%] 45 Improvements ✅ (primary) -1.9% [-31.6%, -0.1%] 148 Improvements ✅ (secondary) -5.1% [-27.8%, -0.1%] 180 All ❌✅ (primary) -1.0% [-31.6%, 2.0%] 228 1 Regression, 1 Improvement, 5 Mixed; 3 of them in rollups 46 artifact comparisons made in total Full report here Approved RFCs Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week: [RFC] Default field values RFC: Give users control over feature unification Final Comment Period Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. RFCs [disposition: merge] Add support for use Trait::func Tracking Issues & PRs Rust [disposition: merge] Stabilize Arm64EC inline assembly [disposition: merge] Stabilize s390x inline assembly [disposition: merge] rustdoc-search: simplify rules for generics and type params [disposition: merge] Fix ICE when passing DefId-creating args to legacy_const_generics. [disposition: merge] Tracking Issue for const_option_ext [disposition: merge] Tracking Issue for const_unicode_case_lookup [disposition: merge] Reject raw lifetime followed by ', like regular lifetimes do [disposition: merge] Enforce that raw lifetimes must be valid raw identifiers [disposition: merge] Stabilize WebAssembly multivalue, reference-types, and tail-call target features Cargo No Cargo Tracking Issues or PRs entered Final Comment Period this week. Language Team No Language Team Proposals entered Final Comment Period this week. Language Reference No Language Reference RFCs entered Final Comment Period this week. Unsafe Code Guidelines No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week. 
New and Updated RFCs [new] Implement The Update Framework for Project Signing [new] [RFC] Static Function Argument Unpacking [new] [RFC] Explicit ABI in extern [new] Add homogeneous_try_blocks RFC Upcoming Events Rusty Events between 2024-11-06 - 2024-12-04 🦀 Virtual 2024-11-06 | Virtual (Indianapolis, IN, US) | Indy Rust Indy.rs - with Social Distancing 2024-11-07 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup 2024-11-08 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative Rust Coding / Game Dev Fridays Open Mob Session! 2024-11-12 | Virtual (Dallas, TX, US) | Dallas Rust Second Tuesday 2024-11-14 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-11-14 | Virtual and In-Person (Lehi, UT, US) | Utah Rust Green Thumb: Building a Bluetooth-Enabled Plant Waterer with Rust and Microbit 2024-11-14 | Virtual and In-Person (Seattle, WA, US) | Seattle Rust User Group November Meetup 2024-11-15 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative Rust Coding / Game Dev Fridays Open Mob Session! 2024-11-19 | Virtual (Los Angeles, CA, US) | DevTalk LA Discussion - Topic: Rust for UI 2024-11-19 | Virtual (Washington, DC, US) | Rust DC Mid-month Rustful 2024-11-20 | Virtual and In-Person (Vancouver, BC, CA) | Vancouver Rust Embedded Rust Workshop 2024-11-21 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup 2024-11-21 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Trustworthy IoT with Rust--and passwords! 2024-11-21 | Virtual (Rotterdam, NL) | Bevy Game Development Bevy Meetup #7 2024-11-25 | Bratislava, SK | Bratislava Rust Meetup Group ONLINE Talk, sponsored by Sonalake - Bratislava Rust Meetup 2024-11-26 | Virtual (Dallas, TX, US) | Dallas Rust Last Tuesday 2024-11-28 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-11-28 | Virtual (Nürnberg, DE) | Rust Nuremberg Rust Nürnberg online 2024-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup Buffalo Rust User Group Asia 2024-11-28 | Bangalore/Bengaluru, IN | Rust Bangalore RustTechX Summit 2024 BOSCH 2024-11-30 | Tokyo, JP | Rust Tokyo Rust.Tokyo 2024 Europe 2024-11-06 | Oxford, UK | Oxford Rust Meetup Group Oxford Rust and C++ social 2024-11-06 | Paris, FR | Paris Rustaceans Rust Meetup in Paris 2024-11-09 - 2024-11-11 | Florence, IT | Rust Lab Rust Lab 2024: The International Conference on Rust in Florence 2024-11-12 | London, UK | Rust London User Group LDN Talks November 2024 RustRover Takeover with JetBrains 2024-11-12 | Zurich, CH | Rust Zurich Encrypted/distributed filesystems, wasm-bindgen 2024-11-13 | Reading, UK | Reading Rust Workshop Reading Rust Meetup 2024-11-14 | Stockholm, SE | Stockholm Rust Rust Meetup @UXStream 2024-11-19 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig Daten sichern mit ZFS (und Rust) 2024-11-19 | Paris, FR | Rust Paris Rust meetup #72 2024-11-21 | Edinburgh, UK | Rust and Friends Rust and Friends (pub) 2024-11-21 | Oslo, NO | Rust Oslo Rust Hack'n'Learn at Kampen Bistro 2024-11-23 | Basel, CH | Rust Basel Rust + HTMX - Workshop #3 2024-11-26 | Warsaw, PL | Rust Warsaw New Rust Warsaw Meetup #3 2024-11-27 | Dortmund, DE | Rust Dortmund Rust Dortmund 2024-11-28 | Aarhus, DK | Rust Aarhus Talk Night at Lind Capital 2024-11-28 | Augsburg, DE | Rust 
Meetup Augsburg Augsburg Rust Meetup #10 2024-11-28 | Berlin, DE | OpenTechSchool Berlin + Rust Berlin Rust and Tell - Title 2024-11-28 | Hamburg, DE | Rust Meetup Hamburg Rust Hack & Learn with Mainmatter & Otto 2024-11-28 | Prague, CZ | Rust Prague Rust/C++ Meetup Prague (November 2024) 2024-12-04 | Oxford, UK | Oxford Rust Meetup Group Oxford Rust and C++ social North America 2024-11-07 | Chicago, IL, US | Chicago Rust Meetup Chicago Rust Meetup 2024-11-07 | Montréal, QC, CA | Rust Montréal November Monthly Social 2024-11-07 | St. Louis, MO, US | STL Rust Game development with Rust and the Bevy engine 2024-11-12 | Ann Arbor, MI, US | Detroit Rust Rust Community Meetup - Ann Arbor 2024-11-12 | New York, NY, US | Rust NYC Rust NYC Monthly Meetup 2024-11-14 | Mountain View, CA, US | Hacker Dojo Rust Meetup at Hacker Dojo 2024-11-15 | Mexico City, DF, MX | Rust MX Multi threading y Async en Rust parte 2 - Smart Pointes y Closures 2024-11-15 | Somerville, MA, US | Boston Rust Meetup Ball Square Rust Lunch, Nov 15 2024-11-19 | San Francisco, CA, US | San Francisco Rust Study Group Rust Hacking in Person 2024-11-23 | Boston, MA, US | Boston Rust Meetup Boston Common Rust Lunch, Nov 23 2024-11-25 | Ferndale, MI, US | Detroit Rust Rust Community Meetup - Ferndale 2024-11-27 | Austin, TX, US | Rust ATX Rust Lunch - Fareground Oceania 2024-11-12 | Christchurch, NZ | Christchurch Rust Meetup Group Christchurch Rust Meetup If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access. Jobs Please see the latest Who's Hiring thread on r/rust Quote of the Week Any sufficiently complicated C project contains an adhoc, informally specified, bug ridden, slow implementation of half of cargo. – Folkert de Vries at RustNL 2024 (youtube recording) Thanks to Collin Richards for the suggestion! Please submit quotes and vote for next week! This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez. Email list hosting is sponsored by The Rust Foundation Discuss on r/rust
  • The Rust Programming Language Blog: Next Steps on the Rust Trademark Policy (2024/11/06 00:00)
    As many of you know, the Rust language trademark policy has been the subject of an extended revision process dating back to 2022. In 2023, the Rust Foundation released an updated draft of the policy for input following an initial survey about community trademark priorities from the previous year along with review by other key stakeholders, such as the Project Directors. Many members of our community were concerned about this initial draft and shared their thoughts through the feedback form. Since then, the Rust Foundation has continued to engage with the Project Directors, the Leadership Council, and the wider Rust project (primarily via all@) for guidance on how to best incorporate as much feedback as possible. After extensive discussion, we are happy to circulate an updated draft with the wider community today for final feedback. An effective trademark policy for an open source community should reflect our collective priorities while remaining legally sound. While the revised trademark policy cannot perfectly address every individual perspective on this important topic, its goal is to establish a framework to help guide appropriate use of the Rust trademark and reflect as many common values and interests as possible. In short, this policy is designed to steer our community toward a shared objective: to maintain and protect the integrity of the Rust programming language. The Leadership Council is confident that this updated version of the policy has addressed the prevailing concerns about the initial draft and honors the variety of voices that have contributed to its development. Thank you to those who took the time to submit well-considered feedback for the initial draft last year or who otherwise participated in this long-running process to update our policy to continue to satisfy our goals. Please review the updated Rust trademark policy here, and share any critical concerns you might have via this form by November 20, 2024. The Foundation has also published a blog post which goes into more detail on the changes made so far. The Leadership Council and Project Directors look forward to reviewing concerns raised and approving any final revisions prior to an official update of the policy later this year.
  • Niko Matsakis: MinPin: yet another pin proposal (2024/11/05 17:20)
    This post floats a variation of boats’ UnpinCell proposal that I’m calling MinPin.1 MinPin’s goal is to integrate Pin into the language in a “minimally disruptive” way2 – and in particular a way that is fully backwards compatible. Unlike Overwrite, MinPin does not attempt to make Pin and &mut “play nicely” together. It does however leave the door open to add Overwrite in the future, and I think helps to clarify the positives and negatives that Overwrite would bring. TL;DR: Key design decisions Here is a brief summary of MinPin’s rules The pinned keyword can be used to get pinned variations of things: In types, pinned P is equivalent to Pin<P>, so pinned &mut T and pinned Box<T> are equivalent to Pin<&mut T> and Pin<Box<T>> respectively. In function signatures, pinned &mut self can be used instead of self: Pin<&mut Self>. In expressions, pinned &mut $place is used to get a pinned &mut that refers to the value in $place. The Drop trait is modified to have fn drop(pinned &mut self) instead of fn drop(&mut self). However, impls of Drop are still permitted (even encouraged!) to use fn drop(&mut self), but it means that your type will not be able to use (safe) pin-projection. For many types that is not an issue; for futures or other “address sensitive” types, you should use fn drop(pinned &mut self). The rules for field projection from a s: pinned &mut S reference are based on whether or not Unpin is implemented: Projection is always allowed for fields whose type implements Unpin. For fields whose types are not known to implement Unpin: If the struct S is Unpin, &mut projection is allowed but not pinned &mut. If the struct S is !Unpin[^neg] and does not have a fn drop(&mut self) method, pinned &mut projection is allowed but not &mut. If the type checker does not know whether S is Unpin or not, or if the type S has a Drop impl with fn drop(&mut self), neither form of projection is allowed for fields that are not Unpin. There is a type struct Unpinnable<T> { value: T } that always implements Unpin. Design axioms Before I go further I want to layout some of my design axioms (beliefs that motivate and justify my design). Pin is part of the Rust language. Despite Pin being entirely a “library-based” abstraction at present, it is very much a part of the language semantics, and it deserves first-class support. It should be possible to create pinned references and do pin projections in safe Rust. Pin is its own world. Pin is only relevant in specific use cases, like futures or in-place linked lists. Pin should have zero-conceptual-cost. Unless you are writing a Pin-using abstraction, you shouldn’t have to know or think about pin at all. Explicit is possible. Automatic operations are nice but it should always be possible to write operations explicitly when needed. Backwards compatible. Existing code should continue to compile and work. Frequently asked questions For the rest of the post I’m just going to go into FAQ mode. I see the rules, but can you summarize how MinPin would feel to use? Yes. I think the rule of thumb would be this. For any given type, you should decide whether your type cares about pinning or not. Most types do not care about pinning. They just go on using &self and &mut self as normal. Everything works as today (this is the “zero-conceptual-cost” goal). But some types do care about pinning. These are typically future implementations but they could be other special case things. In that case, you should explicitly implement !Unpin to declare yourself as pinnable. 
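For orientation, here is a brief sketch in today's Rust of what a pinning-aware type looks like: it opts out of Unpin (using PhantomPinned, since explicit !Unpin impls are not stable), keeps a plain &mut self method for use before pinning, and uses self: Pin<&mut Self>, the current spelling of what MinPin writes as pinned &mut self, afterwards. The type below is invented for illustration and is not part of the proposal.

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// An address-sensitive type (imagined for illustration): once pinned it may
// hand out pointers into `buf`, so it must not move afterwards.
struct Cursor {
    buf: [u8; 4],
    pos: usize,
    _pinned: PhantomPinned, // today's stable way to opt out of Unpin
}

impl Cursor {
    // Meant to be called *before* pinning: a plain `&mut self` method.
    fn reset(&mut self) {
        self.pos = 0;
    }

    // Meant to be called *after* pinning: written as `self: Pin<&mut Self>`,
    // which MinPin would let you spell `pinned &mut self`.
    fn next_byte(self: Pin<&mut Self>) -> Option<u8> {
        // SAFETY: we never move out of `self`; we only update `pos`, which is
        // not address-sensitive. This unsafe projection is exactly what
        // MinPin's safe `pinned &mut` field projection aims to replace.
        let this = unsafe { self.get_unchecked_mut() };
        let byte = this.buf.get(this.pos).copied();
        if byte.is_some() {
            this.pos += 1;
        }
        byte
    }
}
```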
When you declare your methods, you have to make a choice. Is the method read-only? Then use &self, that always works. Otherwise, use &mut self or pinned &mut self, depending… If the method is meant to be called before pinning, use &mut self. If the method is meant to be called after pinning, use pinned &mut self. This design works well so long as all mutating methods can be categorized into before-or-after pinning. If you have methods that need to be used in both settings, you have to start using workarounds – in the limit, you make two copies. How does MinPin compare to UnpinCell? Those of you who have been following the various posts in this area will recognize many elements from boats’ recent UnpinCell. While the proposals share many elements, there is also one big difference between them that significantly changes how they would feel when used. Which is overall better is not yet clear to me. Let’s start with what they have in common. Both propose syntax for pinned references/borrows (albeit slightly different syntax) and both include a type for "opting out" from pinning (the eponymous UnpinCell<T> in UnpinCell, Unpinnable<T> in MinPin). Both also have a similar "special case" around Drop in which writing a drop impl with fn drop(&mut self) disables safe pin-projection. Where they differ is how they manage generic structs like WrapFuture<F>, where it is not known whether or not they are Unpin. struct WrapFuture<F: Future> { future: F, } Given a reference r: pinned &mut WrapFuture<F>, the question is whether we can project the field future: impl<F: Future> WrapFuture<F> { fn method(pinned &mut self) { let f = pinned &mut r.future; // -------------------- // Is this allowed? } } There is a specific danger case that both sets of rules are trying to avoid. Imagine that WrapFuture<F> implements Unpin but F does not – e.g., imagine that you have an impl<F: Future> Unpin for WrapFuture<F>. In that case, the referent of the pinned &mut WrapFuture<F> reference is not actually pinned, because the type is unpinnable. If we permitted the creation of a pinned &mut F, where F: !Unpin, we would be under the (mistaken) impression that F is pinned. Bad. UnpinCell handles this case by saying that projecting from a pinned &mut is only allowed so long as there is no explicit impl of Unpin for WrapFuture ("if [WrapFuture<F>] implements Unpin, it does so using the auto-trait mechanism, not a manually written impl"). Basically: if the user doesn’t say whether the type is Unpin or not, then you can do pin-projection. The idea is that if the self type is Unpin, that will only be because all fields are Unpin (in which case it is fine to make pinned &mut references to them); if the self type is not Unpin, then the field future is pinned, so it is safe. In contrast, in MinPin, this case is only allowed if there is an explicit !Unpin impl for WrapFuture: impl<F: Future> !Unpin for WrapFuture<F> { // This impl is required in MinPin, but not in UnpinCell } Explicit negative impls are not allowed on stable, but they were included in the original auto trait RFC. The idea is that a negative impl is an explicit, semver-binding commitment not to implement a trait. This is different from simply not including an impl at all, which allows for impls to be added later. Why would you prefer MinPin over UnpinCell or vice versa? I’m not totally sure which of these is better. 
I came to the !Unpin impl based on my axiom that pin is its own world – the idea was that it was better to push types to be explicitly unpin all the time than to have "dual-mode" types that masquerade as sometimes pinned and sometimes not. In general I feel like it’s better to justify language rules by the presence of a declaration than the absence of one. So I don’t like the idea of saying "the absence of an Unpin impl allows for pin-projection" – after all, adding impls is supposed to be semver-compliant. Of course, that’s much less true for auto traits, but it can still be true. In fact, Pin has had some unsoundness in the past based on unsafe reasoning that was justified by the lack of an impl. We assumed that &T could never implement DerefMut, but it turned out to be possible to add weird impls of DerefMut in very specific cases. We fixed this by adding an explicit impl<T> !DerefMut for &T impl. On the other hand, I can imagine that many explicitly implemented futures might benefit from being able to be ambiguous about whether they are Unpin. What does your design axiom "Pin is its own world" mean? The way I see it is that, in Rust today (and in MinPin, pinned places, UnpinCell, etc), if you have a T: !Unpin type (that is, a type that is pinnable), it lives a double life. Initially, it is unpinned, and you can move it, &-ref it, or &mut-ref it, just like any other Rust value. But once a !Unpin value becomes pinned to a place, it enters a different state, in which you can no longer move it or use &mut; you have to use pinned &mut: flowchart TD Unpinned[ Unpinned: can access 'v' with '&' and '&mut' ] Pinned[ Pinned: can access 'v' with '&' and 'pinned &mut' ] Unpinned -- pin 'v' in place (only if T is '!Unpin') --> Pinned One-way transitions like this limit the amount of interop and composability you get in the language. For example, if my type has &mut methods, I can’t use them once the type is pinned, and I have to use some workaround, such as duplicating the method with pinned &mut.3 In this specific case, however, I don’t think this transition is so painful, and that’s because of the specifics of the domain: futures go through a pretty hard state change where they start in "preparation mode" and then eventually start executing. The sets of methods you need in these two phases are quite distinct. So this is what I meant by "pin is its own world": pin is not very interoperable with Rust, but this is not as bad as it sounds, because you don’t often need that kind of interoperability. How would Overwrite affect pin being in its own world? With Overwrite, when you pin a value in place, you just gain the ability to use pinned &mut, you don’t give up the ability to use &mut: flowchart TD Unpinned[ Unpinned: can access 'v' with '&' and '&mut' ] Pinned[ Pinned: can additionally access 'v' with 'pinned &mut' ] Unpinned -- pin 'v' in place (only if T is '!Unpin') --> Pinned Making the pinned state a "superset" of the unpinned capabilities means that pinned &mut can be coerced into an &mut (it could even be a "true subtype", in Rust terms). This in turn means that a pinned &mut Self method can invoke &mut self methods, which helps to make pin feel like a smoothly integrated part of the language.3 So does the axiom mean you think Overwrite is a bad idea? Not exactly, but I do think that if Overwrite is justified, it is not on the basis of Pin, it is on the basis of immutable fields. 
If you just look at Pin, then Overwrite does make Pin work better, but it does that by limiting the capabilities of &mut to those that are compatible with Pin. There is no free lunch! As Eric Holk memorably put it to me in privmsg: It seems like there’s a fixed amount of inherent complexity to pinning, but it’s up to us how we distribute it. Pin keeps it concentrated in a small area which makes it seem absolutely terrible, because you have to face the whole horror at once.4 I think Pin as designed is a "zero-conceptual-cost" abstraction, meaning that if you are not trying to use it, you don’t really have to care about it. That’s worth maintaining, if we can. If we are going to limit what &mut can do, the reason to do it is primarily to get other benefits, not to benefit pin code specifically. To be clear, this is largely a function of where we are in Rust’s evolution. If we were still in the early days of Rust, I would say Overwrite is the correct call. It reminds me very much of the IMHTWAMA, the core "mutability xor sharing" rule at the heart of Rust’s borrow checker. When we decided to adopt the current borrow checker rules, the code was about 85-95% in conformance. That is, although there was plenty of aliased mutation, it was clear that "mutability xor sharing" was capturing a rule that we already mostly followed, but not completely. Because combining aliased state with memory safety is more complicated, that meant that a small minority of code was pushing complexity onto the entire language. Confining shared mutation to types like Cell and Mutex made most code simpler at the cost of more complexity around shared state in particular. There’s a similar dynamic around replace and swap. Replace and swap are only used in a few isolated places and in a few particular ways, but all code has to be more conservative to account for that possibility. If we could go back, I think limiting Replace to some kind of Replaceable<T> type would be a good move, because it would mean that the more common case can enjoy the benefits: fewer borrow check errors and more precise programs due to immutable fields and the ability to pass an &mut SomeType and be sure that your callee is not swapping the value under your feet (useful for the "scope pattern" and also enables Pin<&mut> to be a subtype of &mut). Why did you adopt pinned &mut and not &pin mut as the syntax? The main reason was that I wanted a syntax that scaled to Pin<Box<T>>. But also the pin! macro exists, making the pin keyword somewhat awkward (though not impossible). One thing I was wondering about is the phrase "pinned reference" or "pinned pointer". On the one hand, it is really a reference to a pinned value (which suggests &pin mut). On the other hand, I think this kind of ambiguity is pretty common. The main thing I have found is that my brain has trouble with Pin<P> because it wants to think of Pin as a "smart pointer" versus a modifier on another smart pointer. pinned Box<T> feels much better this way. Can you show me an example? What about the MaybeDone example? Yeah, totally. So boats’ pinned places post introduced two futures, MaybeDone and Join. Here is how MaybeDone would look in MinPin, along with some inline comments: enum MaybeDone<F: Future> { Polling(F), Done(Unpinnable<Option<F::Output>>), // ---------- see below } impl<F: Future> !Unpin for MaybeDone<F> { } // ----------------------- // // `MaybeDone` is address-sensitive, so we // opt out from `Unpin` explicitly. 
```rust
enum MaybeDone<F: Future> {
    Polling(F),
    Done(Unpinnable<Option<F::Output>>),
    //   ---------- see below
}

impl<F: Future> !Unpin for MaybeDone<F> { }
//              -----------------------
//
// `MaybeDone` is address-sensitive, so we
// opt out from `Unpin` explicitly. I assumed
// opting out from `Unpin` was the *default* in
// my other posts.

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(pinned &mut self, cx: &mut Context<'_>) {
        if let MaybeDone::Polling(fut) = self {
            //                    ---
            // This is in fact pin-projection, although
            // it's happening implicitly as part of pattern
            // matching. `fut` here has type `pinned &mut F`.
            // We are permitted to do this pin-projection
            // to `F` because we know that `Self: !Unpin`
            // (because we declared that to be true).
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Unpinnable { value: Some(res) });
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(pinned &mut self) -> Option<F::Output> {
        //         ----------------
        // This method is called after pinning, so it
        // needs a `pinned &mut` reference...
        if let MaybeDone::Done(res) = self {
            res.value.take()
            //        ------
            // ...but `take` is an `&mut self` method
            // and `F::Output: Unpin` is known to be true.
            //
            // Therefore we have made the type in `Done`
            // be `Unpinnable`, so that we can do this
            // swap.
        } else {
            None
        }
    }
}
```

Can you translate the Join example?

Yep! Here is Join:

```rust
struct Join<F1: Future, F2: Future> {
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

impl<F1: Future, F2: Future> !Unpin for Join<F1, F2> { }
//                           ---------------------------
//
// Join is a custom future, so implement `!Unpin`
// to gain access to pin-projection.

impl<F1: Future, F2: Future> Future for Join<F1, F2> {
    type Output = (F1::Output, F2::Output);

    fn poll(pinned &mut self, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // The calls to `maybe_poll` and `take_output` below
        // are doing pin-projection from `pinned &mut self`
        // to a `pinned &mut MaybeDone<F1>` (or `F2`) type.
        // This is allowed because we opted out from `Unpin`
        // above.

        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);

        if self.fut1.is_done() && self.fut2.is_done() {
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}
```

What’s the story with Drop and why does it matter?

Drop’s current signature takes &mut self. But recall that once a !Unpin type is pinned, it is only safe to use pinned &mut. This is a combustible combination. It means that, for example, I can write a Drop that uses mem::replace or swap to move values out from my fields, even though they have been pinned. For types that are always Unpin, this is no problem, because &mut self and pinned &mut self are equivalent. For types that are always !Unpin, I’m not too worried, because Drop as-is is a poor fit for them, and pinned &mut self will be better. The tricky bit is types that are conditionally Unpin. Consider something like this:

```rust
struct LogWrapper<T> {
    value: T,
}

impl<T> Drop for LogWrapper<T> {
    fn drop(&mut self) {
        ...
    }
}
```

At least today, whether or not LogWrapper is Unpin depends on whether T: Unpin, so we can’t know it for sure. The solution that boats and I both landed on effectively creates three categories of types:5

- those that implement Unpin, which are unpinnable;
- those that do not implement Unpin but which have fn drop(&mut self), which are unsafely pinnable;
- those that do not implement Unpin and do not have fn drop(&mut self), which are safely pinnable.

The idea is that using fn drop(&mut self) puts you in this purgatory category of being “unsafely pinnable” (it might be more accurate to say “maybe unsafely pinnable”, since often at compilation time with generics we won’t know if there is an Unpin impl or not). You don’t get access to safe pin projection or other goodies, but you can do projection with unsafe code (e.g., the way the pin-project-lite crate does it today).
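For contrast with the MinPin version above, here is a minimal sketch of what “projection with unsafe code” looks like in today’s Rust, roughly the pattern that pin-project-lite automates. The `Join2` type and its `()` outputs are simplifications for illustration, not the design from the post:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A hand-written join future in today's Rust, with manual pin-projection.
// (Illustrative only; real code would also store and return the outputs.)
struct Join2<F1, F2> {
    fut1: F1,
    fut2: F2,
    done1: bool,
    done2: bool,
}

impl<F1: Future<Output = ()>, F2: Future<Output = ()>> Future for Join2<F1, F2> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        // SAFETY: we never move `fut1` or `fut2` out of `this`; we only
        // re-pin references to them. Upholding that invariant by hand is
        // what a `pinned &mut` projection rule would do automatically.
        let this = unsafe { self.get_unchecked_mut() };

        if !this.done1 {
            let fut1 = unsafe { Pin::new_unchecked(&mut this.fut1) };
            if fut1.poll(cx).is_ready() {
                this.done1 = true;
            }
        }
        if !this.done2 {
            let fut2 = unsafe { Pin::new_unchecked(&mut this.fut2) };
            if fut2.poll(cx).is_ready() {
                this.done2 = true;
            }
        }

        if this.done1 && this.done2 {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}
```

Each `Pin::new_unchecked` carries a proof obligation (never move the projected field out of its place, never hand out a plain `&mut` to it elsewhere), which is the part MinPin wants the compiler to check.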
It feels weird to have Drop let you use &mut self when other traits don’t.

Yes, it does, but in fact any method whose trait uses pinned &mut self can be implemented safely with &mut self so long as Self: Unpin. So we could just allow that in general. This would be cool because many hand-written futures are in fact Unpin, and so they could implement the poll method with &mut self.

Wait, so if Unpin types can use &mut self, why do we need special rules for Drop?

Well, it’s true that an Unpin type can use &mut self in place of pinned &mut self, but in fact we don’t always know when types are Unpin. Moreover, per the zero-conceptual-cost axiom, we don’t want people to have to know anything about Pin to use Drop. The obvious approaches I could think of all either violated that axiom or just… well… seemed weird:

- Permit fn drop(&mut self) but only if Self: Unpin. This seems like it would work, since most types are Unpin. But in fact types, by default, are only Unpin if their fields are Unpin, and so generic types are not known to be Unpin. This means that if you write a Drop impl for a generic type and you use fn drop(&mut self), you will get an error that can only be fixed by implementing Unpin unconditionally. Because “pin is its own world”, I believe adding the impl is fine, but it violates “zero-conceptual-cost” because it means that you are forced to understand what Unpin even means in the first place.
- To address that, I considered treating fn drop(&mut self) as implicitly declaring Self: Unpin. This doesn’t violate our axioms but just seems weird and kind of surprising. It’s also backwards incompatible with pin-project-lite.

These considerations led me to conclude that the current design kind of puts us in a place where we want three categories. I think in retrospect it’d be better if Unpin were implemented by default but not as an auto trait (i.e., all types were unconditionally Unpin unless they declare otherwise), but oh well.

What is the forwards compatibility story for Overwrite?

I mentioned early on that MinPin could be seen as a first step that can later be extended with Overwrite if we choose. How would that work? Basically, if we did the s/Unpin/Overwrite/ change, then we would:

- rename Unpin to Overwrite (literally rename, they would be the same trait);
- prevent overwriting the referent of an &mut T unless T: Overwrite (or replacing, swapping, etc).

These changes mean that &mut T is pin-preserving. If T: !Overwrite, then T may be pinned, but then &mut T won’t allow it to be overwritten, replaced, or swapped, and so pinning guarantees are preserved (and then some, since technically overwrites are ok, just not replacing or swapping). As a result, we can simplify the MinPin rules for pin-projection to the following: given a reference s: pinned &mut S, the rules for projection of the field f are as follows:

- &mut projection is allowed via &mut s.f;
- pinned &mut projection is allowed via pinned &mut s.f if S: !Unpin.
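To make the second Overwrite rule above concrete, here is a hypothetical sketch. Overwrite does not exist in any compiler, so the opt-out is only described in a comment, and `Cursor` is a made-up type; the commented-out swap is the operation that would start being rejected:

```rust
// Hypothetical: assumes a future where today's `Unpin` has been renamed to
// `Overwrite`, and `Cursor` has opted out (something like
// `impl !Overwrite for Cursor {}`), meaning its values may be pinned.
struct Cursor {
    pos: usize,
}

fn reset(c: &mut Cursor) {
    c.pos = 0; // still fine: mutating a field in place through `&mut`

    // std::mem::swap(c, &mut Cursor { pos: 0 });
    // ^ would be rejected under the rule above: swapping (or replacing) the
    //   whole referent of an `&mut Cursor` would require `Cursor: Overwrite`,
    //   which is exactly what keeps `&mut` pin-preserving.
}
```

In today’s Rust both lines are accepted, which is why `&mut` currently cannot be trusted to preserve pinning.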
What would it feel like if we adopted Overwrite?

We actually got a bit of a preview when we talked about MaybeDone. Remember how we had to introduce Unpinnable around the final value so that we could swap it out? If we adopted Overwrite, I think the TL;DR of how code would be different is that almost any code that today uses std::mem::replace or std::mem::swap would probably wind up using an explicit Unpinnable-like wrapper. I’ll cover this later. This goes a bit to show what I meant about there being a certain amount of inherent complexity that we can choose to distribute: in MinPin, this pattern of wrapping “swappable” data is isolated to pinned &mut self methods in !Unpin types. With Overwrite, it would be more widespread (but you would get more widespread benefits, as well).

Conclusion

My conclusion is that this is a fascinating space to think about!6 So fun.

1. Hat tip to Tyler Mandry and Eric Holk who discussed these ideas with me in detail.
2. MinPin is the “minimal” proposal that I feel meets my desiderata; I think you could devise a maximally minimal proposal that is even smaller if you truly wanted.
3. It’s worth noting, though, that coercions and subtyping only go so far. For example, &mut can be coerced to &, but we often need methods that return “the same kind of reference they took in”, which can’t be managed with coercions. That’s why you see things like last and last_mut.
4. I would say that the current complexity of pinning is, in no small part, due to accidental complexity, as demonstrated by the recent round of exploration, but Eric’s wider point stands.
5. Here I am talking about the category of a particular monomorphized type in a particular version of the crate. At that point, every type either implements Unpin or it doesn’t. Note that at compilation time there is more grey area, as there can be types that may or may not be pinnable, etc.
6. Also that I spent way too much time iterating on this post. JUST GONNA POST IT.
  • Mozilla Thunderbird: Thunderbird Monthly Development Digest: October 2024 (2024/11/05 11:00)
    Hello again Thunderbird Community! The last few months have involved a lot of learning for me, but I have a much better appreciation (and appetite!) for the variety of challenges and opportunities ahead for our team and the broader developer community. Catch up with last month’s update, and here’s a quick summary of what’s been happening across the different teams: Exchange Web Services support in Rust An important member of our team left recently and while we’ll very much miss the spirit and leadership, we all learned a lot and are in a good position to carry the project forwards. We’ve managed to unstick a few pieces of the backlog and have a few sprints left to complete work on move/copy operations, protocol logging and priority two operations (flagging messages, folder rename & delete, etc). New team members have moved past the most painful stages and have patches that have landed. Kudos to the patient mentors involved in this process! QR Code Cross-Device Account Import Thunderbird for Android launched this week, and the desktop client (Daily, Beta & ESR 128.4.0) now provides a simple and secure account transfer mechanism, so that account settings don’t have to be re-entered for new users of the mobile app. Download Thunderbird for Android from the Play store Account Hub Development of a refreshed account hub is moving forward apace and with the critical path broken down into sprints, our entire front end team is working to complete things in the next two weeks. Meta bug & progress tracking. Clean up on aisle 2 In addition to our project work, we’ve had to be fairly nimble this month, with a number of upstream changes breaking our builds and pipelines. We get a ton of benefit from the platforms we inherit but at times it feels like we’re dealing with many things out of our control. Mental note: stay calm and focus on future improvements! Global Database, Conversation View & folder corruption issues On top of the conversation view feature and core refactoring to tackle the inner workings of thread-safe folder and message manipulation, work to implement a long term database replacement is well underway. Preliminary patches are regularly pumped into the development ecosystem for discussion and review, for which we’re very excited! In-App Notifications With phase 1 of this project now complete, we’ve scoped out additions that will make it even more flexible and suitable for a variety of purposes. Beta users will likely see the first notifications coming in November, so keep your eyes peeled. Meta Bug & progress tracking. New Features Landing Soon Several requested features are expected to debut this month (or very soon) and include… Dark Reader support RSS feed corrections  Button inside link resolution PGP Key window adjustment Folder compaction fixes Lazy loading stack to improve performance Various calendar improvements startup crash avoidance and many more which are listed in release notes for beta. As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early. See you next month. Toby Pilling Senior Manager, Desktop Engineering The post Thunderbird Monthly Development Digest: October 2024 appeared first on The Thunderbird Blog.
  • Don Marti: links for 3 November 2024 (2024/11/03 00:00)
    Remote Startups Will Win the War for Top Talent Ironically, in another strike against the spontaneous collaboration argument, a study of two Fortune 500 headquarters found that transitioning from cubicles to an open office layout actually reduced face-to-face interactions by 70 percent. Why Strava Is a Privacy Risk for the President (and You Too) Not everybody uses their real names or photos on Strava, but many do. And if a Strava account is always in the same place as the President, you can start to connect a few dots. Why Getting Your Neighborhood Declared a Historic District Is a Bad Idea Historic designations are commonly used to control what people can do with their own private property, and can be a way of creating a kind of “backdoor” homeowners association. Some historic neighborhoods (many of which have dubious claims to the designation) around the country have HOA-like restrictions on renovations, repairs, and even landscaping. Donald Trump Talked About Fixing McDonald’s Ice Cream Machines. Lina Khan Actually Did. Back in March, the FTC submitted a comment to the US Copyright Office asking to extend the right to repair certain equipment, including commercial soft-serve equipment. An awful lot of FOSS should thank the Academy Linux and open source in general seem to be huge components of the movie special effects industry – to an extent that we had not previously realized. (unless you have a stack of old Linux Journal back issues from the early 2000s—we did a lot of movie covers at the time that much of this software was being developed.) Using an 8K TV as a Monitor For programming, word processing, and other productive work, consider getting an 8K TV instead of a multi-monitor setup. An 8K TV will have superior image quality, resolution, and versatility compared to multiple 4K displays, at roughly the same size. (huge TVs are an under-rated, subsidized technology, like POTS lines. Most or all of the huge TVs available today are smart and sold with the expectation that they’ll drive subscription and advertising revenue, which means a discount for those who use them as monitors.) Suchir Balaji, who spent four years at OpenAI, says OpenAI’s use of copyrighted data broke the law and failed to meet fair use criteria; he left in August 2024 Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems. The Unlikely Inventor of the Automatic Rice Cooker Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results. Comments on TSA proposal for decentralized nonstandard ID requirements Compliance with the REAL-ID Act requires a state to electronically share information concerning all driver’s licenses and state-issued IDs with all other states, but not all states do so. Because no state complies with this provision of the REAL-ID Act, or could do so unless and until all states do so, no state-issued driver’s licenses or ID cards comply with the REAL-ID Act.
  • Don Marti: or we could just not (2024/11/02 00:00)
previously: Sunday Internet optimism The consensus, dismal future of the Internet is usually wrong. Dystopias make great fiction, but the Internet is surprisingly good at muddling through and reducing each one to nuisance level. We don’t have Clipper Chip dystopia that would have put backdoors in all cryptography. We don’t have software patent cartel dystopia that would have locked everyone into limited software choices and functionality, and a stagnant market. We don’t have Fritz Chip dystopia that would have mandated Digital Rights Management on all devices. None of these problems have gone away entirely—encryption backdoors, patent trolls, and DRM are all still there—but none have reached either Internet-wide catastrophe level or faded away entirely. Today’s hottest new dystopia narrative is that we’re going to end up with surveillance advertising features in web browsers. They’ll be mathematically different from old-school cookie tracking, so technically they won’t make it possible to identify anyone individually, but they’ll still impose the same old surveillance risks on users, since real-world privacy risks are collective. Compromising with the dystopia narrative always looks like the realistic or grown-up path forward, until it doesn’t. And then the non-dystopia timeline generally looks inevitable once you get far enough along it. This time it’s the same way. We don’t need cross-context personalized (surveillance) advertising in our web browsers any more than we need SCO licenses in our operating systems (not counting the SCO license timeline as a dystopia, but it’s another good example of a dismal timeline averted). Let’s look at the numbers. I’m going to make all the assumptions most favorable to the surveillance advertising argument. It’s actually probably a lot better than this. And it’s probably better in other countries, since the USA is relatively advanced in the commercial surveillance field. (If you have these figures for other countries, please let me know and I’ll link to them.) Total money spent on advertising in the USA: $389.49 billion. USA population: 335,893,238. That comes out to about $1,160 spent on advertising to reach the average person in the USA every year. That’s $97 per month. So let’s assume (again, making the assumption most favorable to the surveillance side) that all advertising is surveillance advertising. And ads without the surveillance, according to Professor Garrett Johnson, are worth 52 percent less than the surveillance ads. So if you get rid of the surveillance, your ad subsidy goes from $97 to $46. Advertisers would be spending $51 less to advertise to you, and the missing $51 is a good-sized amount of extra money to come up with every month. But remember, that’s advertising money, total, not the amount that actually makes it to the people who make the ad-supported resources you want. Since the problem is how to replace the income for the artists, writers, and everyone else who makes ad-supported content, we need to multiply the missing ad subsidy by the fraction of that top-level advertising total that makes it through to the content creator in order to come up with the amount of money that needs to be filled in from other sources like subscriptions and memberships. How much do you need to spend on subscriptions to replace $51 in ad money? That’s going to depend on your habits. But even if you have everything set up totally right, a dollar spent on ads to reach you will buy you less than a dollar you spend yourself.
Thomas Baekdal writes, in How independent publishing has changed from the 1990s until today: Up until this point, every publisher had focused on ‘traffic at scale’, but with the new direct funding focus, every individual publisher realized that traffic does not equal money, and you could actually make more money by having an audience who paid you directly, rather than having a bunch of random clicks for the sake of advertising. The ratio was something like 1:10,000. Meaning that for every one person you could convince to subscribe, donate, become a member, or support you on Patreon … you would need 10,000 visitors to make the same amount from advertising. Or to put that into perspective, with only 100 subscribers, I could make the same amount of money as I used to earn from having one million visitors. All surveillance ad media add some kind of adtech tax. The Association of National Advertisers found that about 1/3 of the money spent to buy ad space makes it through to the publisher. A subscription platform and subscriber services impose some costs too. To be generous to the surveillance side, let’s say that a subscription dollar is only three times as valuable as an advertising dollar. So that $51 in missing ad money means you need to come up with $17 from somewhere. This estimate is really on the high side in practice. A lot of ad money goes to overhead and to stuff like retail ad networks (online sellers bidding for better spots in shopping search results) and to ad media like billboards that don’t pay for content at all. So, worst case, where do you get the $17? From buying less crap, that’s where. Mustri et al. (PDF) write, [behaviorally] targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products… You also get a piece of the national security and other collective security benefits of eliminating surveillance, some savings in bandwidth and computing resources, and a lower likelihood of becoming a victim of fraud and identity theft. But that’s pure bonus benefit on top of the win from saving money by spending less on overpriced, personally targeted, low-quality products. (If privacy protection didn’t help you buy better stuff, the surveillance companies would have said so by now.) Because surveillance advertising gives an advantage to deceptive advertisers over legit ones, the end of surveillance advertising would also mean an increase in sales for legit brands. And we’re not done. As a wise man once said, But wait! There’s more! Before you rush to do effective privacy tips or write to your state legislators to support anti-surveillance laws, there’s one more benefit for getting rid of surveillance/personalized advertising. Remember that extra $51 that went away? It didn’t get burned up in a fire just because it didn’t get spent on surveillance advertising. Companies still have it, and they still want to sell you stuff. Without surveillance, they’ll have to look for other ways to spend it. And many of the options are win-win for the customer. In Product is the P all marketers should strive to influence, Mark Ritson points out the marketing wins from incremental product improvements, and that’s the kind of work that often gets ignored in favor of niftier, short-term, surveillance advertising projects. Improving service and pricing are other areas that will also do better without surveillance advertising contending for budgets.
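For anyone who wants to check the chain of figures above, here is a minimal sketch that reproduces it. The inputs and the “a subscription dollar is worth three ad dollars” assumption are the post’s own; the post rounds each intermediate step ($97, $46, $51, $17), so the raw floating-point results can differ by a dollar:

```rust
// Reproducing the post's arithmetic (all figures and assumptions are the post's).
fn main() {
    let us_ad_spend = 389.49e9_f64;     // total US ad spend, dollars per year
    let us_population = 335_893_238.0;  // US population

    let per_person_year = us_ad_spend / us_population; // ≈ $1,160 per year
    let per_person_month = per_person_year / 12.0;     // ≈ $97 per month

    // Non-surveillance ads are worth 52 percent less (per the post).
    let without_surveillance = per_person_month * (1.0 - 0.52); // ≈ $46
    let missing_subsidy = per_person_month - without_surveillance; // post rounds to $51

    // Generous assumption: one subscription dollar replaces three ad dollars.
    let needed_from_subscriptions = missing_subsidy / 3.0; // ≈ $17

    println!(
        "per month: ${:.0}, without surveillance: ${:.0}, gap: ${:.0}, needed from subscriptions: ${:.0}",
        per_person_month, without_surveillance, missing_subsidy, needed_from_subscriptions
    );
}
```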
There is a lot of potential gain for a lot of people in getting rid of surveillance advertising, so let’s not waste the opportunity. Don’t worry, we’ll get another Internet dystopia narrative to worry about eventually. More: stop putting privacy-enhancing technologies in web browsers Related Product is the P all marketers should strive to influence If there is one thing I have learned from a thousand customers discussing a hundred different products it’s that the things a company thinks are small are, from a consumer perspective, big. And the grand improvements the company is spending bazillions on are probably of little significance. Finding out from the source what needs to be fixed or changed and then getting it done is the quiet product work of proper marketers. (yes, I linked to this twice.) I Bought Tech Dupes on Temu. The Shoddy Gear Wasn’t Worth the $1,260 in Savings My journey into the shady side of shopping brought me to the world of dupes — from budget alternatives to bad knockoffs of your favorite tech. Political fundraisers WinRed and ActBlue are taking millions of dollars in donations from elderly dementia patients to fuel their campaigns [S]some of these elderly, vulnerable consumers have unwittingly given away six-figure sums – most often to Republican candidates – making them among the country’s largest grassroots political donors. Bonus links Marketers in a dying internet: Why the only option is a return to simplicity With machine-generated content now cluttering the most visible online touchpoints (like the frontpage of Google, or your Facebook timeline), it feels inevitable that consumer behaviors will shift as a result. And so marketers need to change how they reach target audiences. I attended Google’s creator conversation event, and it turned into a funeral Is AI advertising going to be too easy for its own good? As Rory Sutherland said, When human beings process a message, we sort of process how much effort and love has gone into the creation of this message and we pay attention to to the message accordingly. It’s costly signaling of a kind. How Google is Killing Bloggers and Small Publishers – And Why Exploiting Meta’s Weaknesses, Deceptive Political Ads Thrived on Facebook and Instagram in Run-Up to Election Ninth Circuit Upholds AADC Ban on “Dark Patterns” Economist ‘future-proofing’ bid brings back brand advertising and targets students
  • The Talospace Project: Updated Baseline JIT OpenPOWER patches for Firefox 128ESR (2024/11/01 21:45)
    I updated the Baseline JIT patches to apply against Firefox 128ESR, though if you use the Mercurial rebase extension (and you should), it will rebase automatically and only one file had to be merged — which it did for me also. Nevertheless, everything is up to date against tip again, and this patchset works fine for both Firefox and Thunderbird. I kept the fix for bug 1912623 because I think Mozilla's fix in bug 1909204 is wrong (or at least suboptimal) and this is faster on systems without working Wasm. Speaking of, I need to get back into porting rr to ppc64le so I can solve those startup crashes.
  • Mozilla Performance Blog: Performance Testing Newsletter (Q3 Edition) (2024/10/31 23:22)
    Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. Last quarter was MozWeek, and we had a great time meeting a number of you in our PerfTest Regression Workshop – thank you all for joining us, and making it a huge success! If you didn’t get a chance to make it, you can find the slides here, and most of the information from the workshop (including some additional bits) can be found in this documentation page. We will be running this workshop again next MozWeek, along with a more advanced version. See below for highlights from the changes made in the last quarter. Highlights [mayankleoboy1] This quarter, Mayank became our first official Community Performance Sheriff! [mayankleoboy1] Filed some issues related to missing alerts. [myeongjun] AWSY now uses tp6 by default to match CI tests where only tp6 is tested. [julienw] PerfCompare now being shown by default in Mach Try Perf! [beatrice] Compare View now provides a helpful link to redirect to PerfCompare. [aglavic] Added new mobile app link startup tests in CI. [aglavic] Replaced Android Samsung S21 device with Samsung S24 devices. [kshampur] New mobile foreground resource usage tests are now available in CI. [kshampur] Android Samsung A51 devices have been replaced with Samsung A55 devices in CI (includes a larger device pool). [kshampur] AWFY has been updated with Windows 11, MacOSX M2, Safari Tech Preview, and much more! [sparky] New mobile background resource usage tests are now available in CI. [sparky] New option –tests is now available in mach try perf to specify tasks to run using test name. [sparky] Documentation for basics of performance testing now available (see here). [sparky] New tool available to run all alerting tests locally. Run with `mach perftest <ALERT-NUMBER>`. Blog Posts ✍️ [csevere] Announcing PerfCompare: the new comparison tool! Contributors Myeongjun Go [:myeongjun] Mayank Bansal [:mayankleoboy1] If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.
  • The Rust Programming Language Blog: October project goals update (2024/10/31 00:00)
The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as flagship goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Bring the async Rust experience closer to parity with sync Rust

The biggest elements of our goal are solving the "send bound" problem via return-type notation (RTN) and adding support for async closures. This month we made progress towards both. For RTN, @compiler-errors landed support for using RTN in self-types like where Self::method(): Send. He also authored a blog post with a call for testing explaining what RTN is and how it works. (A short sketch of the "send bound" problem that RTN addresses appears below, after the Linux-on-stable update.) For async closures, the lang team reached a preliminary consensus on the async Fn syntax, with the understanding that it will also include some "async type" syntax. This rationale was documented in RFC #3710, which is now open for feedback. The team held a design meeting on Oct 23 and @nikomatsakis will be updating the RFC with the conclusions. We have also been working towards a release of the dynosaur crate that enables dynamic dispatch for traits with async functions. This is intended as a transitionary step before we implement true dynamic dispatch. The next steps are to polish the implementation and issue a public call for testing. With respect to async drop experiments, @nikomatsakis began reviews. It is expected that reviews will continue for some time as this is a large PR. Finally, no progress has been made towards async WG reorganization. A meeting was scheduled but deferred. @tmandry is currently drafting an initial proposal.

Resolve the biggest blockers to Linux building on stable Rust

We have made significant progress on resolving blockers to Linux building on stable. Support for struct fields in the offset_of! macro has been stabilized. The final naming for the "derive-smart-pointer" feature has been decided as #[derive(CoercePointee)]; @dingxiangfei2009 prepared PR #131284 for the rename and is working on modifying the rust-for-linux repository to use the new name. Once that is complete, we will be able to stabilize. We decided to stabilize support for references to statics in constants (the pointers-refs-to-static feature) and are now awaiting a stabilization PR from @dingxiangfei2009. Rust for Linux (RfL) is one of the major users of the asm-goto feature (and inline assembly in general) and we have been examining various extensions. @nbdd0121 authored a hackmd document detailing RfL's experiences and identifying areas for improvement. This led to two immediate action items: making target blocks safe-by-default (rust-lang/rust#119364) and extending const to support embedded pointers (rust-lang/rust#128464). Finally, we have been finding an increasing number of stabilization requests at the compiler level, and so @wesleywiser and @davidtwco from the compiler team have started attending meetings to create a faster response. One of the results of that collaboration is RFC #3716, authored by Alice Ryhl, which proposes a method to manage compiler flags that modify the target ABI. Our previous approach has been to create distinct targets for each combination of flags, but the number of flags needed by the kernel makes that impractical. Authoring the RFC revealed more such flags than previously recognized, including those that modify LLVM behavior.
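As referenced in the async goal above, here is a minimal sketch of the "send bound" problem. The trait, method names, and the use of tokio::spawn are illustrative assumptions, not from the update, and the RTN syntax shown in the comment is the unstable return_type_notation form, which may differ in detail from what ultimately stabilizes:

```rust
trait HealthCheck {
    // Stable Rust warns about `async fn` in public traits for exactly the
    // reason sketched below: callers cannot name Send-ness of the future.
    async fn check(&mut self) -> bool;
}

fn spawn_check<H>(mut health: H)
where
    H: HealthCheck + Send + 'static,
    // The bound that cannot be written on stable: "the future returned by
    // `check` is Send". With return-type notation it reads roughly:
    //     H::check(..): Send,
{
    tokio::spawn(async move {
        // Without that bound, this does not compile: the anonymous future
        // returned by `check()` is not known to be `Send`, so the async
        // block fails `tokio::spawn`'s `Send` requirement.
        if !health.check().await {
            eprintln!("unhealthy");
        }
    });
}
```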
Rust 2024 edition The Rust 2024 edition is progressing well and is on track to be released on schedule. The major milestones include preparing to stabilize the edition by November 22, 2024, with the actual stabilization occurring on November 28, 2024. The edition will then be cut to beta on January 3, 2025, followed by an announcement on January 9, 2025, indicating that Rust 2024 is pending release. The final release is scheduled for February 20, 2025. The priorities for this edition have been to ensure its success without requiring excessive effort from any individual. The team is pleased with the progress, noting that this edition will be the largest since Rust 2015, introducing many new and exciting features. The process has been carefully managed to maintain high standards without the need for high-stress heroics that were common in past editions. Notably, the team has managed to avoid cutting many items from the edition late in the development process, which helps prevent wasted work and burnout. All priority language items for Rust 2024 have been completed and are ready for release. These include several key issues and enhancements. Additionally, there are three changes to the standard library, several updates to Cargo, and an exciting improvement to rustdoc that will significantly speed up doctests. This edition also introduces a new style edition for rustfmt, which includes several formatting changes. The team is preparing to start final quality assurance crater runs. Once these are triaged, the nightly beta for Rust 2024 will be announced, and wider testing will be solicited. Rust 2024 will be stabilized in nightly in late November 2024, cut to beta on January 3, 2025, and officially released on February 20, 2025. More details about the edition items can be found in the Edition Guide. Goals with updates "Stabilizable" prototype for expanded const generics camelid has started working on using the new lowering schema for more than just const parameters, which once done will allow the introduction of a min_generic_const_args feature gate. compiler-errors has been working on removing the eval_x methods on Const that do not perform proper normalization and are incompatible with this feature. Assemble project goal slate Posted the September update. Created more automated infrastructure to prepare the October update, utilizing an LLM to summarize updates into one or two sentences for a concise table. Associated type position impl trait No progress has been made on this goal. The goal will be closed as consensus indicates stabilization will not be achieved in this period; it will be revisited in the next goal period. Begin resolving `cargo-semver-checks` blockers for merging into cargo No major updates to report. Preparing a talk for next week's EuroRust has taken away most of the free time. Const traits Key developments: With the PR for supporting implied super trait bounds landed (#129499), the current implementation is mostly complete in that it allows most code that should compile, and should reject all code that shouldn't. Further testing is required, with the next steps being improving diagnostics (#131152), and fixing more holes before const traits are added back to core. Explore sandboxed build scripts A working-in-process pull request is available at https://github.com/weihanglo/cargo/pull/66. The use of wasm32-wasip1 as a default sandbox environment is unlikely due to its lack of support for POSIX process spawning, which is essential for various build script use cases. 
Expose experimental LLVM features for automatic differentiation and GPU offloading The Autodiff frontend was merged, including over 2k LoC and 30 files, making the remaining diff much smaller. The Autodiff middle-end is likely getting a redesign, moving from a library-based to a pass-based approach for LLVM. Extend pubgrub to match cargo's dependency resolution Significant progress was made with contributions by @x-hgg-x, improving the resolver test suite in Cargo to check feature unification against a SAT solver. This was followed by porting the test cases that tripped up PubGrub to Cargo's test suite, laying the groundwork to prevent regression on important behaviors when Cargo switches to PubGrub and preparing for fuzzing of features in dependency resolution. Make Rustdoc Search easier to learn The team is working on a consensus for handling generic parameters, with both PRs currently blocked on this issue. Next-generation trait solver Attempted stabilization of -Znext-solver=coherence was reverted due to a hang in nalgebra, with subsequent fixes improving but not fully resolving performance issues. No significant changes to the new solver have been made in the last month. Optimizing Clippy & linting GnomedDev pushed rust-lang/rust#130553, which replaced an old Clippy infrastructure with a faster one (string matching into symbol matching). Inspections into Clippy's type sizes and cache alignment are being started, but nothing fruitful yet. Patterns of empty types The linting behavior was reverted until an unspecified date. The next steps are to decide on the future of linting and to write the never patterns RFC. Provided reasons for yanked crates The PR https://github.com/rust-lang/crates.io/pull/9423 has been merged. Work on the frontend feature is in progress. Scalable Polonius support on nightly Key developments in the 'Scalable Polonius support on nightly' project include fixing test failures due to off-by-one errors from old mid-points, and ongoing debugging of test failures with a focus on automating the tracing work. Efforts have been made to accept variations of issue #47680, with potential adjustments to active loans computation and locations of effects. Amanda has been cleaning up placeholders in the work-in-progress PR #130227. Stabilize cargo-script rust-lang/cargo#14404 and rust-lang/cargo#14591 have been addressed. Waiting on time to focus on this in a couple of weeks. Stabilize parallel front end Key developments: Added the cases in the issue list to the UI test to reproduce the bug or verify the non-reproducibility. Blockers: null. Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue. Survey tools suitability for Std safety verification Students from the CMU Practicum Project have started writing function contracts that include safety conditions for some unsafe functions in the core library, and verifying that safe abstractions respect those pre-conditions and are indeed safe. Help is needed to write more contracts, integrate new tools, review pull requests, or participate in the repository discussions. Use annotate-snippets for rustc diagnostic output Progress has been made in matching rustc suggestion output within annotate-snippets, with most cases now aligned. The focus has been on understanding and adapting different rendering styles for suggestions to fit within annotate-snippets. 
Goals without updates The following goals have not received updates in the last month: Ergonomic ref-counting Implement "merged doctests" to save doctest time Stabilize doc_cfg Testing infra + contributors for a-mir-formality User-wide build cache
  • Mozilla Thunderbird: Thunderbird for Android 8.0 Takes Flight (2024/10/30 13:59)
    Just over two years ago, we announced our plans to bring Thunderbird to Android by taking K-9 Mail under our wing. The journey took a little longer than we had originally anticipated and there was a lot to learn along the way, but the wait is finally over! For all of you who have ever asked “when is Thunderbird for Android coming out?”, the answer is – today! We are excited to announce that the first stable release of Thunderbird for Android is out now, and we couldn’t be prouder of the newest, most mobile member of the Thunderbird family. Resources What’s New: https://support.mozilla.org/kb/new-thunderbird-android-version-8 Detailed Release Notes: https://github.com/thunderbird/thunderbird-android/releases/tag/THUNDERBIRD_8_0 Community Support Forum: Thunderbird for Android has its own home on the official Mozilla Support (SUMO) forums. Find the help you need to configure and use the newest Thunderbird from our community on a mobile friendly site. Import Settings: Whether you’re importing your information from K-9 Mail or Thunderbird on the desktop, transfer your information quickly and easily with our guide. System Requirements: Thunderbird for Android runs on mobile devices running Android 5 and above. Platform Availability: Download Thunderbird for Android from the following places. Google Play Store F-droid GitHub Releases (apk only, no updates) The Thunderbird website (on an Android device) Get Involved: Thunderbird for Android thrives thanks to community support, and you can be part of the community! We are grateful to everyone who donates their skill and time to answer support questions, test releases, translate and more. Find out all the ways to get in where you fit in. Support Us: We are 100% donor-supported. Your gift helps us develop new apps (like this one!), improve speed and stability, promote Thunderbird and software freedom, and provide downloads free-of-charge to millions. Donate on our webpage or in the app. Suggest New Features: We know you have great ideas for future features. You can share them on Mozilla Connect, where community members can upvote and comment on them. Our team uses the feedback here to help shape our roadmap. Thanks for Helping Thunderbird for Android Fly Thank you for being a part of the community and sharing this adventure on Android with us! We’re especially grateful to all of you who have helped us test the beta and release candidate images. Your feedback helped us find and fix bugs, test key features, and polish the stable release. We hope you enjoy using the newest Thunderbird, now and for a long time to come! The post Thunderbird for Android 8.0 Takes Flight appeared first on The Thunderbird Blog.
  • Wladimir Palant: The Karma connection in Chrome Web Store (2024/10/30 13:03)
Somebody brought to my attention that the Hide YouTube Shorts extension for Chrome changed hands and turned malicious. I looked into it and could confirm that it contained two undisclosed components: one performing affiliate fraud and the other sending users’ every move to some Amazon cloud server. But that wasn’t all of it: I discovered eleven more extensions written by the same people. Some contained only the affiliate fraud component, some only the user tracking, some both. A few don’t appear to be malicious yet. While most of these extensions were supposedly developed or bought by a person without any other traces online, one broke this pattern. Karma shopping assistant has been on Chrome Web Store since 2020, the company behind it founded in 2013. This company employs more than 50 people and secured tons of cash in venture capital. Maybe a mistake on my part? After looking thoroughly this explanation seems unlikely. Not only does Karma share some backend infrastructure and considerable amounts of code with the malicious extensions. Not only does Karma Shopping Ltd. admit to selling users’ browsing profiles in their privacy policy. There is even more tying them together, including a mobile app developed by Karma Shopping Ltd. whereas the identical Chrome extension is supposedly developed by the mysterious evildoer.

Contents

- The affected extensions
- Hiding in plain sight
- Affiliate fraud functionality
- Browsing profile collection
- Who is behind this?
- What does Karma Shopping want with the data?

The affected extensions

Most of the extensions in question changed hands relatively recently, the first ones in the summer of 2023. The malicious code has been added immediately after the ownership transfer, with some extensions even requesting additional privileges citing bogus reasons. A few extensions have been developed this year by whoever is behind this. Some extensions from the latter group don’t have any obvious malicious functionality at this point. If there is tracking, it only covers the usage of the extension’s user interface rather than the entire browsing behavior. This can change at any time of course.

| Name | Weekly active users | Extension ID | Malicious functionality |
| --- | --- | --- | --- |
| Hide YouTube Shorts | 100,000 | aljlkinhomaaahfdojalfmimeidofpih | Affiliate fraud, browsing profile collection |
| DarkPDF | 40,000 | cfemcmeknmapecneeeaajnbhhgfgkfhp | Affiliate fraud, browsing profile collection |
| Sudoku On The Rocks | 1,000 | dncejofenelddljaidedboiegklahijo | Affiliate fraud |
| Dynamics 365 Power Pane | 70,000 | eadknamngiibbmjdfokmppfooolhdidc | Affiliate fraud, browsing profile collection |
| Israel everywhere | 70 | eiccbajfmdnmkfhhknldadnheilniafp | – |
| Karma \| Online shopping, but better | 500,000 | emalgedpdlghbkikiaeocoblajamonoh | Browsing profile collection |
| Where is Cookie? | 93 | emedckhdnioeieppmeojgegjfkhdlaeo | – |
| Visual Effects for Google Meet | 1,000,000 | hodiladlefdpcbemnbbcpclbmknkiaem | Affiliate fraud |
| Quick Stickies | 106 | ihdjofjnmhebaiaanaeeoebjcgaildmk | – |
| Nucleus: A Pomodoro Timer and Website Blocker | 20,000 | koebbleaefghpjjmghelhjboilcmfpad | Affiliate fraud, browsing profile collection |
| Hidden Airline Baggage Fees | 496 | kolnaamcekefalgibbpffeccknaiblpi | Affiliate fraud |
| M3U8 Downloader | 100,000 | pibnhedpldjakfpnfkabbnifhmokakfb | Affiliate fraud |

Update (2024-11-11): Hide YouTube Shorts, DarkPDF, Nucleus and Hidden Airline Baggage Fees have been taken down. Two of them have been marked as malware and one as violating Chrome Web Store policies, meaning that existing extension users will be notified.
I cannot see the reason for different categorization, the functionality being identical in all of these extensions. The other extensions currently remain active. Hiding in plain sight Whoever wrote the malicious code chose not to obfuscate it but to make it blend in with the legitimate functionality of the extension. Clearly, the expectation was that nobody would look at the code too closely. So there is for example this: if (window.location.href.startsWith("http") || window.location.href.includes("m.youtube.com")) { … } It looks like the code inside the block would only run on YouTube. Only when you stop and consider the logic properly you realize that it runs on every website. In fact, that’s the block wrapping the calls to malicious functions. The malicious functionality is split between content script and background worker for the same reason, even though it could have been kept in one place. This way each part looks innocuous enough: there is some data collection in the content script, and then it sends a check_shorts message to the background worker. And the background worker “checks shorts” by querying some web server. Together this just happens to send your entire browsing history into the Amazon cloud. Similarly, there are some complicated checks in the content script which eventually result in a loadPdfTab message to the background worker. The background worker dutifully opens a new tab for that address and, strangely, closes it after 9 seconds. Only when you sort through the layers it becomes obvious that this is actually about adding an affiliate cookie. And of course there is a bunch of usual complicated conditions, making sure that this functionality is not triggered too soon after installation and generally doesn’t pop up reliably enough that users could trace it back to this extension. Affiliate fraud functionality The affiliate fraud functionality is tied to the kra18.com domain. When this functionality is active, the extension will regularly download data from https://www.kra18.com/v1/selectors_list?&ex=90 (90 being the extension ID here, the server accepts eight different extension IDs). That’s a long list containing 6,553 host names: Update (2024-11-19): As of now, the owners of this server disabled the endpoints mentioned here. You can still see the original responses on archive.today however. Whenever one of these domains is visited and the moons are aligned in the right order, another request to the server is made with the full address of the page you are on. For example, the extension could request https://www.kra18.com/v1/extension_selectors?u=https://www.tink.de/&ex=90: The shortsNavButtonSelector key is another red herring, the code only appears to be using it. The important key is url, the address to be opened in order to set the affiliate cookie. And that’s the address sent via loadPdfTab message mentioned before if the extension decides that right now is a good time to collect an affiliate commission. There are also additional “selectors,” downloaded from https://www.kra18.com/v1/selectors_list_lr?&ex=90. Currently this functionality is only used on the amazon.com domain and will replace some product links with links going through jdoqocy.com domain, again making sure an affiliate commission is collected. That domain is owned by Common Junction LLC, an affiliate marketing company that published a case study on how their partnership with Karma Shopping Ltd. (named Shoptagr Ltd. back then) helped drive profits. 
Browsing profile collection Some of the extensions will send each page visit to https://7ng6v3lu3c.execute-api.us-east-1.amazonaws.com/EventTrackingStage/prod/rest. According to the extension code, this is an Alooma backend. Alooma is a data integration platform which has been acquired by Google a while ago. Data transmitted could look like this: Yes, this is sent for each and every page loaded in the browser, at least after you’ve been using the extension for a while. And distinct_id is my immutable user ID here. But wait, it’s a bit different for the Karma extension. Here you can opt out! Well, that’s only if you are using Firefox because Mozilla is rather strict about unexpected data collection. And if you manage to understand what “User interactions” means on this options page: Well, I may disagree with the claim that url addresses do not contain personably identifiable information. And: yes, this is the entire page. There really isn’t any more text. The data transmitted is also somewhat different: The user_id field no longer contains the extension ID but my personal identifier, complementing the identifier in distinct_id. There is a tab_id field adding more context, so that it is not only possible to recognize which page I navigated to and from where but also to distinguish different tabs. And some more information about my system is always useful of course. Who is behind this? Eleven extensions on my list are supposedly developed by a person going by the name Rotem Shilop or Roni Shilop or Karen Shilop. This isn’t a very common last name, and if this person really exists it managed to leave no traces online. Yes, I also searched in Hebrew. Yet one extension is developed by Karma Shopping Ltd. (formerly Shoptagr Ltd.), a company based in Israel with at least 50 employees. An accidental association? It doesn’t look like it. I’m not going into the details of shared code and tooling, let’s just say: it’s very obvious that all twelve extensions are being developed by the same people. Of course, there is still the possibility that the eleven malicious extensions are not associated directly with Karma Shopping but with some rogue employee or contractor or business partner. However, it isn’t only the code. As explained above, five extensions including Karma share the same tracking backend which is found nowhere else. They are even sending the same access token. Maybe this backend isn’t actually run by Karma Shopping and they are only one of the customers of some third party? Yet if you look at the data being sent, clearly the Karma extension is considered first-party. It’s the other extensions which are sending external: true and component: external_extension flags. Then maybe Karma Shopping is merely buying data from a third party, without actually being affiliated with their extensions? Again, this is possible but unlikely. One indicator is the user_id field in the data sent by these extensions. It’s the same extension ID that they use for internal communication with the kra18.com server. If Karma Shopping were granting a third party access to their server, wouldn’t they assign that third party some IDs of their own? And those affiliate links produced by the kra18.com server? Some of them clearly mention karmanow.com as the affiliate partner. Finally, if we look at Karma Shopping’s mobile apps, they develop two of them. In addition to the Karma app, the app stores also contain an app called “Sudoku on the Rocks,” developed by Karma Shopping Ltd. 
Which is a very strange coincidence because an identical “Sudoku on the Rocks” extension also exists in the Chrome Web Store. Here however the developer is Karen Shilop. And Karen Shilop chose to include hidden affiliate fraud functionality in their extension. By the way, guess who likes the Karma extension a lot and left a five-star review? I contacted Karma Shopping Ltd. via their public relations address about their relationship to these extensions and the Shilop person but didn’t hear back so far. Update (2024-10-30): An extension developer told me that they were contacted on multiple independent occasions about selling their Chrome extension to Karma Shopping, each time by C-level executives of the company, from official karmanow.com email addresses. The first outreach was in September 2023, where Karma was supposedly looking into adding extensions to their portfolio as part of their growth strategy. They offered to pay between $0.2 and $1 per weekly active user. Update (2024-11-11): Another hint pointed me towards this GitHub issue. While the content has been removed here, you can still see the original content in the edit history. It’s the author of the Hide YouTube Shorts extension asking the author of the DarkPDF extension about that Karma company interested in buying their extensions. What does Karma Shopping want with the data? It is obvious why Karma Shopping Ltd. would want to add their affiliate functionality to more extensions. After all, affiliate commissions are their line of business. But why collect browsing histories? Only to publish semi-insightful articles on people’s shopping behavior? Well, let’s have a look at their privacy policy which is actually meaningful for a change. Under 1.3.4 it says: Browsing Data. In case you a user of our browser extensions we may collect data regarding web browsing data, which includes web pages visited, clicked stream data and information about the content you viewed. How we Use this Data. We use this Personal Data (1) in order to provide you with the Services and feature of the extension and (2) we will share this data in an aggregated, anonymized manner, for marketing research and commercial use with our business partners. Legal Basis. (1) We process this Personal Data for the purpose of providing the Services to you, which is considered performance of a contract with you. (2) When we process and share the aggregated and anonymized data we will ask for your consent. First of all, this tells us that Karma collecting browsing data is official. They also openly state that they are selling it. Good to know and probably good for their business as well. As to the legal basis: I am no lawyer but I have a strong impression that they don’t deliver on the “we will ask for your consent” promise. No, not even that Firefox options page qualifies as informed consent. And this makes this whole data collection rather doubtful in the light of GDPR. There is also a difference between anonymized and pseudonymized data. The data collection seen here is pseudonymized: while it doesn’t include my name, there is a persistent user identifier which is still linked to me. It is usually fairly easy to deanonymize pseudonymized browsing histories, e.g. because people tend to visit their social media profiles rather often. Actually anonymized data would not allow associating it with any single person. This is very hard to achieve, and we’ve seen promises of aggregated and anonymized data go very wrong. 
While it’s theoretically possible that Karma correctly anonymizes and aggregates data on the server side, this is a rather unlikely outcome for a company that, as we’ve seen above, confuses the lack of names and email addresses with anonymity. But of course these considerations only apply to the Karma extension itself. Because related extensions like Hide YouTube Shorts just straight out lie: Some of these extensions actually used to have a privacy policy before they were bought. Now only three still have an identical and completely bogus privacy policy. Sudoku on the Rocks happens to be among these three, and the same privacy policy is linked by the Sudoku on the Rocks mobile apps which are officially developed by Karma Shopping Ltd.
  • This Week In Rust: This Week in Rust 571 (2024/10/30 04:00)
    Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR. Want TWIR in your inbox? Subscribe here. Updates from Rust Community Project/Tooling Updates An update on Apple M1/M2 GPU drivers Announcing Toasty, an async ORM for Rust gitoxide - October 2024 Glues v0.4 - MongoDB support and Vim editing features Meilisearch 1.11 - AI-powered search & federated search improvements Observations/Thoughts Toward safe transmutation in Rust The performance of the Rust compiler A new approach to validating test suites Why You Shouldn't Arc a HashMap in Rust Implementing the Tower Service Trait Best Practices for Derive Macro Attributes in Rust Trimming down a rust binary in half A deep look into our new massive multitenant architecture Unsafe Rust Is Harder Than C Generators with UnpinCell Which LLM model is best for generating Rust code? Learnings from Contributing to the Rust Project Dyn Box Vs. Generics: What is the best approach for achieving conditional generics in Rust? Rust Walkthroughs Basic Integer Compression Miscellaneous Rust Prism [audio] Rust vs. C++ with Steve Klabnik and Herb Sutter [audio] What's New in Rust 1.76, 1.77, and 1.78 [video] Talk on Chrome's new Rust font stack, fontations [video] Architecting a Rust Game Engine (with Alice Cecile) [video] Gitoxide: What it is, and isn't - Sebastian Thiel Crate of the Week This week's crate is tower-http-client, a library of middlewares and various utilities for HTTP-clients. Thanks to Aleksey Sidorov for the self-suggestion! Please submit your suggestions and votes for next week! Calls for Testing An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward: RFCs No calls for testing were issued this week. Rust No calls for testing were issued this week. Rustup No calls for testing were issued this week. If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing. Call for Participation; projects and speakers CFP - Projects Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. wtx - [HTTP/2] Investigate requests latency If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! CFP - Events Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker. 
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon! Updates from the Rust Project 447 pull requests were merged in the last week add wasm32v1-none target AIX: use /dev/urandom for random implementation rustc_target: Add pauth-lr aarch64 target feature add a note for ? on a impl Future<Output = Result<..>> in sync function add support for ~const item bounds consider param-env candidates even if they have errors const stability checks v2 coverage: consolidate creation of covmap/covfun records coverage: don't rely on the custom traversal to find enclosing loops coverage: emit LLVM intrinsics using the normal helper method coverage: pass coverage mappings to LLVM as separate structs deeply normalize TypeTrace when reporting type error in new solver deny calls to non-#[const_trait] methods in MIR constck do not remove .cargo directory don't stage-off to previous compiler when CI rustc is available emit future-incompatibility lint when calling/declaring functions with vectors that require missing target feature enable LSX feature for LoongArch Linux targets error on alignments greater than isize::MAX expand: stop using artificial ast::Item for macros loaded from metadata fixup Windows verbatim paths when used with the include! macro hashStable for rustc_feature::Features: stop hashing compile-time constant lint against getting pointers from immediately dropped temporaries move cmp_in_dominator_order out of graph dominator computation pass constness with span into lower_poly_trait_ref prevent overflowing enum cast from ICEing refactor change detection for rustdoc and download-rustc replace an FTP link in comments with an equivalent HTTPS link replace some LLVMRust wrappers with calls to the LLVM C API represent hir::TraitBoundModifiers as distinct parts in HIR represent trait constness as a distinct predicate round negative signed integer towards zero in iN::midpoint simplify force-recompile logic for "library" simplify param handling in resolve_bound_vars taking a raw ref (&raw (const|mut)) of a deref of pointer (*ptr) is always safe use Enabled{Lang,Lib}Feature instead of n-tuples validate args are correct for UnevaluatedConst, ExistentialTraitRef/ExistentialProjection x86 target features: make pclmulqdq imply sse2 x86-32 float return for 'Rust' ABI: treat all float types consistently miri: add option for generating coverage reports miri: android: added syscall support miri: clear eval_libc errors from unix shims miri: consistently use io error handlers miri: fix error returned from readdir_r when isolation is enabled, and uses of raw_os_error miri: implement LLVM x86 vpclmulqdq intrinsics miri: indicate more explicitly where we close host file/dir handles (Big performance change) Do not run lints that cannot emit optimize Rc<T>::default specialize read_exact and read_buf_exact for VecDeque stabilize isqrt feature stabilize shorter-tail-lifetimes support char::is_digit in const contexts remove the Arc rt::init allocation for thread info provide a default impl for Pattern::as_utf8_pattern vectorized SliceContains avoid using imports in thread_local_inner! 
in static better default capacity for str::replace musl: use posix_spawn if a directory change was requested cargo resolver: Make room for v3 resolver cargo complete: Include descriptions in zsh cargo env: remove unnecessary clones cargo: fingerprint: avoid unnecessary fopen calls cargo: added unstable-schema generation for Cargo.toml cargo: deprecate cargo verify-project cargo fix: add source replacement info when no matching package found cargo fix: trace config [env] table in dep-info cargo test: add fixes in the sat resolver rustdoc: Do not consider nested functions as main function even if named main in doctests rustdoc: extend fake_variadic to "wrapped" tuples rustdoc: hash assets at rustdoc build time allow type-based search on foreign functions clippy: borrow_deref_ref: do not trigger on &raw references clippy: don't trigger const_is_empty for inline const assertions clippy: fire large_const_arrays for computed array lengths clippy: fix incorrect suggestion for !(a >= b) as i32 == c clippy: fix not working lint anchor (generation and filtering) clippy: remove unnecessary filter_map usages clippy: stop linting unused_io_amount in io traits rust-analyzer: add text edits to more inlay hints rust-analyzer: implement diagnostics pull model rust-analyzer: render docs from aliased type when type has no docs rust-analyzer: resolve range patterns to their structs rust-analyzer: split macro-error diagnostic so users can ignore only parts of it rust-analyzer: support cfg(true) and cfg(false) rust-analyzer: fix diagnostic enable config being ignored rust-analyzer: fix dyn incompatible hint message rust-analyzer: fix formatting on welcome page, read only paths setting example rust-analyzer: add missing cfg flags for core crate rust-analyzer: allow public re-export of extern crate import rust-analyzer: correctly handle #"" in edition <2024 rust-analyzer: don't compute diagnostics for non local files rust-analyzer: fix checking for false labelDetailsSupport value rust-analyzer: fix incorrect parsing of use bounds rust-analyzer: handle missing time offsets gracefully rust-analyzer: implement mixed site hygiene rust-analyzer: nail destructuring assignment once and for all rust-analyzer: prevent public re-export of private item rust-analyzer: properly resolve prelude paths inside modules inside blocks rust-analyzer: put leading | in patterns under OrPat rust-analyzer: turn "Remove dbg!" into a quick fix for better prioritization rust-analyzer: move text-edit into ide-db rust-analyzer: only construct a resolver in macro descension when needed rust-analyzer: swap query call order in file_item_tree_query Rust Compiler Performance Triage This week saw a lot of activity both on the regressions and improvements side. There was one large regression, which was immediately reverted. Overall, the week ended up being positive, thanks to a rollup PR that caused a tiny improvement to almost all benchmarks. Triage done by @kobzol. Revision range: 3e33bda0..c8a8c820 Summary: (instructions:u) mean range count Regressions ❌ (primary) 0.7% [0.2%, 2.7%] 15 Regressions ❌ (secondary) 0.8% [0.1%, 1.6%] 22 Improvements ✅ (primary) -0.6% [-1.5%, -0.2%] 153 Improvements ✅ (secondary) -0.7% [-1.9%, -0.1%] 80 All ❌✅ (primary) -0.5% [-1.5%, 2.7%] 168 6 Regressions, 6 Improvements, 4 Mixed; 6 of them in rollups 58 artifact comparisons made in total Full report here Approved RFCs Changes to Rust follow the Rust RFC (request for comments) process. 
These are the RFCs that were approved for implementation this week: No RFCs were approved this week. Final Comment Period Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. RFCs No RFCs entered Final Comment Period this week. Tracking Issues & PRs Rust [disposition: merge] Decide whether blocks inside asm goto should default to safe [disposition: merge] #[inline(never)] does not work for async functions [disposition: not specified] Add LowerExp and UpperExp implementations to NonZero Cargo No Cargo Tracking Issues or PRs entered Final Comment Period this week. Language Team No Language Team Proposals entered Final Comment Period this week. Language Reference No Language Reference RFCs entered Final Comment Period this week. Unsafe Code Guidelines No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week. New and Updated RFCs [new] RFC: Labeled match [new] RFC: Never patterns [new] [RFC] Allow packed types to transitively contain aligned types [new] [RFC] Target Modifiers Upcoming Events Rusty Events between 2024-10-30 - 2024-11-27 🦀 Virtual 2024-10-31 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-10-31 | Virtual (Nürnberg, DE) | Rust Nurnberg DE Rust Nürnberg online 2024-11-01 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative Rust Coding / Game Dev Fridays Open Mob Session! 2024-11-02 | Virtual( Kampala, UG) | Rust Circle Kampala Rust Circle Meetup 2024-11-06 | Virtual (Indianapolis, IN, US) | Indy Rust Indy.rs - with Social Distancing 2024-11-07 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup 2024-11-08 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative Rust Coding / Game Dev Fridays Open Mob Session! 2024-11-12 | Virtual (Dallas, TX, US) | Dallas Rust Second Tuesday 2024-11-14 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Crafting Interpreters in Rust Collaboratively 2024-11-14 | Virtual and In-Person (Lehi, UT, US) | Utah Rust Green Thumb: Building a Bluetooth-Enabled Plant Waterer with Rust and Microbit 2024-11-14 | Virtual and In-Person (Seattle, WA, US) | Seattle Rust User Group November Meetup 2024-11-15 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative Rust Coding / Game Dev Fridays Open Mob Session! 2024-11-19 | Virtual (Los Angeles, CA, US) | DevTalk LA Discussion - Topic: Rust for UI 2024-11-19 | Virtual (Washington, DC, US) | Rust DC Mid-month Rustful 2024-11-20 | Virtual and In-Person (Vancouver, BC, CA) | Vancouver Rust Embedded Rust Workshop 2024-11-21 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup Trustworthy IoT with Rust--and passwords! 
2024-11-21 | Virtual (Rotterdam, NL) | Bevy Game Development Bevy Meetup #7 2024-11-26 | Virtual (Dallas, TX, US) | Dallas Rust Last Tuesday Europe 2024-10-30 | Hamburg, DE | Rust Meetup Hamburg Rust Hack & Learn October 2024 2024-10-31 | Berlin, DE | OpenTechSchool Berlin + Rust Berlin Rust and Tell - Title 2024-10-31 | Copenhagen, DK | Copenhagen Rust Community Rust meetup #52 sponsored by Trifork and OpenZeppelin 2024-11-05 | Copenhagen, DK | Copenhagen Rust Community Rust Hack Night #10: Rust <3 Nix 2024-11-06 | Oxford, UK | Oxford Rust Meetup Group Oxford Rust and C++ social 2024-11-06 | Paris, FR | Paris Rustaceans Rust Meetup in Paris 2024-11-12 | Zurich, CH | Rust Zurich Encrypted/distributed filesystems, wasm-bindgen 2024-11-13 | Reading, UK | Reading Rust Workshop Reading Rust Meetup 2024-11-14 | Stockholm, SE | Stockholm Rust Rust Meetup @UXStream 2024-11-19 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig Daten sichern mit ZFS (und Rust) 2024-11-21 | Edinburgh, UK | Rust and Friends Rust and Friends (pub) 2024-11-21 | Oslo, NO | Rust Oslo Rust Hack'n'Learn at Kampen Bistro 2024-11-23 | Basel, CH | Rust Basel Rust + HTMX - Workshop #3 North America 2024-10-30 | Chicago, IL, US | Deep Dish Rust Rust Workshop: deploying your code 2024-10-31 | Mountain View, CA, US | Mountain View Rust Meetup Rust Meetup at Hacker Dojo 2024-11-04 | Brookline, MA, US | Boston Rust Meetup Coolidge Corner Brookline Rust Lunch, Nov 4 2024-11-07 | Montréal, QC, CA | Rust Montréal November Monthly Social 2024-11-07 | St. Louis, MO, US | STL Rust Game development with Rust and the Bevy engine 2024-11-12 | Ann Arbor, MI, US | Detroit Rust Rust Community Meetup - Ann Arbor 2024-11-14 | Mountain View, CA, US | Hacker Dojo Rust Meetup at Hacker Dojo 2024-11-15 | Mexico City, DF, MX | Rust MX Multi threading y Async en Rust parte 2 - Smart Pointes y Closures 2024-11-15 | Somerville, MA, US | Boston Rust Meetup Ball Square Rust Lunch, Nov 15 2024-11-19 | San Francisco, CA, US | San Francisco Rust Study Group Rust Hacking in Person 2024-11-23 | Boston, MA, US | Boston Rust Meetup Boston Common Rust Lunch, Nov 23 2024-11-25 | Ferndale, MI, US | Detroit Rust Rust Community Meetup - Ferndale 2024-11-27 | Austin, TX, US | Rust ATX Rust Lunch - Fareground Oceania 2024-10-31 | Auckland, NZ | Rust AKL Rust AKL: Rust on AWS: Sustainability + Peace: Zero Stress Automation 2024-11-12 | Christchurch, NZ | Christchurch Rust Meetup Group Christchurch Rust Meetup If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access. Jobs Please see the latest Who's Hiring thread on r/rust Quote of the Week An earnest effort to pursue [P1179R1] as a Lifetime TS[P3465R0] will compromise on C++’s outdated and unworkable core principles and adopt mechanisms more like Rust’s. In the compiler business this is called carcinization: a tendency of non-crab organisms to evolve crab-like features. – Sean Baxter on circle-lang.org Thanks to Collin Richards for the suggestion! Please submit quotes and vote for next week! This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez. Email list hosting is sponsored by The Rust Foundation Discuss on r/rust
  • Firefox Developer Experience: Firefox WebDriver Newsletter 132 (2024/10/29 14:00)
    WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional). This newsletter gives an overview of the work we’ve done as part of the Firefox 132 release cycle. Contributions Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you ever wanted to contribute to an open source project used by millions of users, or are interested in some experience in software development, jump in. We are always grateful to receive external contributions, here are the ones which made it in Firefox 132: Liam (ldebeasi) refactored our internal logic tracking navigation events to remove a redundant map and simplify the implementation Liam (ldebeasi) also improved the signature of one of our internal helpers used to retrieve browsing context details WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. WebDriver BiDi Retry commands to avoid AbortError failures In release 132, one of our primary focus areas was enhancing the reliability of command execution. Internally, we sometimes need to forward commands to content processes. This can easily fail, particularly when targeting a page which was either newly created or in the middle of a navigation. These failures often result in errors such as "AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved". <- { "type":"error", "id":14, "error":"unknown error", "message":"AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved", "stacktrace":"" } While there are valid technical reasons that prevent command execution in some cases, there are also many instances where retrying the command is a feasible solution. The browsingContext.setViewport command was specifically updated in order to retry an internal command, as it was frequently failing. Then we updated our overall implementation in order to retry commands automatically if we detect that the page is navigating or about to navigate. Note that retrying commands is not entirely new, it’s an internal feature we were already using in a few handpicked commands. The changes in Firefox 132 just made its usage much more prevalent. New preference: remote.retry-on-abort To go one step further, we decided to allow all commands to be retried by default when the remote.retry-on-abort preference is set to true. Note that true is the default value, which means that with Firefox 132, all commands which need to reach the content process might now be retried (documentation). If you were previously relying on or working around the aforementioned AbortError, and notice an unexpected issue with Firefox 132, you can update this preference to make the behavior closer to previous Firefox versions. Please also file a Bug to let us know about the problem. Bug fixes The browsingContext.contextCreated event is now correctly emitted for lazy-loaded frames. Previously the event would only be emitted when the iframe actually started loading its content. 
Network events are now correctly emitted for cached stylesheet requests. Network event timings were previously provided in the wrong unit (microseconds); they are now reported in milliseconds, as expected by the specification. The requestTime in network event timings should now be more accurate and match the time when the request actually started. We also fixed a bug where some commands (such as session.subscribe) could fail if a browsing context was not initialized or was being destroyed.
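For anyone automating Firefox who wants to opt out of the new retry behavior, the remote.retry-on-abort preference can in principle be set like any other Firefox preference when the browser is started by an automation client. Below is a minimal, illustrative sketch using Selenium's Python bindings; the preference name comes from the newsletter above, while the surrounding setup (Selenium 4, geckodriver on PATH, example.org as a target page) is assumed for illustration.

    # Illustrative sketch: start Firefox with the new command-retry behavior disabled.
    # Assumes Selenium 4 Python bindings and geckodriver available on PATH.
    from selenium import webdriver

    options = webdriver.FirefoxOptions()
    # Firefox 132 defaults this preference to true; false restores behavior
    # closer to earlier releases (commands are not retried on AbortError).
    options.set_preference("remote.retry-on-abort", False)

    driver = webdriver.Firefox(options=options)
    try:
        driver.get("https://example.org/")
        print(driver.title)
    finally:
        driver.quit()
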
  • Support.Mozilla.Org: Contributor spotlight – Michele Rodaro (2024/10/29 06:11)
    Hi Mozillians, In today’s edition, I’d like to introduce you all to Michele Rodaro, a locale leader for Italian in the Mozilla Support platform. He is a professional architect who has found pleasure and meaning in contributing to Mozilla since 2006. I’ve met him on several occasions in the past, and reading his answers feels exactly like talking to him in real life. I’m sure you can sense his warmth and kindness just by reading his responses. Here’s a beautiful analogy from Michele about his contributions to Mozilla as they relate to his background in architecture: I see my contribution to Mozilla a bit like participating in the realization of a project, the tools change but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority. Q: Hi Michele, can you tell us more about yourself and what keeps you busy these days? I live in Gemona del Friuli, a small town in the Friuli Venezia Giulia region, in the north-east of Italy, bordering Austria and Slovenia. I am a freelance architect, having graduated from Venice’s University many years ago. I own a professional studio and I mainly deal with residential planning, renovations, and design. In my free time I like to draw, read history, art, literature, satire and comics, listen to music, take care of my cats and, of course, translate or update SUMO Knowledge Base articles into Italian. When I was younger, I played many sports (skiing, basketball, rugby, and athletics). When I can, I continue to go skiing in the beautiful mountains of my region. Oh, I also played piano in a jazz rock band I co-founded in the late 70s and early 80s (good times). In this period, from a professional point of view, I am trying to survive the absurd bureaucracy that is increasingly oppressive in my working environment. As for SUMO, I am maintaining the Italian KB at 100% of the translations, and supporting new localizers to help them align with our translation style. Q: You got started with the Italian local forum in 2006 before expanding your contribution to SUMO in 2008. Can you tell us more about the different types of contributions you’re doing for Mozilla? I found out about Firefox in November 2005 and discovered the Mozilla Italia community and their support forum. Initially, I used the forum to ask for help from other volunteers and, after a short time, I found myself personally involved in providing online assistance to Italian users in need. Then I became a moderator of the forum and in 2008, with the help of my friend @Underpass, I started contributing to the localization of SUMO KB articles (the KB was born in that year). It all started like that. Today, I am an Italian locale leader in SUMO. I take care of the localization of KB articles and train new Italian localizers. I continue to provide support to users on the Italian forums and when I manage to solve a problem I am really happy, but my priority is the SUMO KB because it is an essential source to help users who search online for an immediate solution to any problem encountered with Firefox on all platforms and devices or with Thunderbird, and want to learn the various features of Mozilla applications and services. 
Forum support has also benefited greatly from KB articles because, instead of having to write down all the procedures to solve a user’s problem every time, we can simply provide them with the link to the article that could solve the problem without having to write the same things every time, especially when the topic has already been discussed many times, but users have not searched our forum. Q: In addition to translating articles on SUMO, you’re also involved in product translation on Pontoon. With your experience across both platforms, what do you think SUMO can learn from Pontoon, and how can we improve our overall localization process? I honestly don’t know, they are quite different ways of doing things in terms of using translation tools specifically. I started collaborating with Pontoon’s Italian l10n team in 2014… Time flies… The rules, the style guides, and the QA process adopted for the Italian translations on Pontoon are the same ones we adopted for SUMO. I have to say that I am much more comfortable with SUMO’s localization process and tool, maybe because I have seen it start off, grow and evolve over time. Pontoon introduced Pretranslation, which helps a lot in translating strings, although it still needs improvements. A machine translation of strings that are not already in Pontoon’s “Translation Memory” is proposed. Sometimes that works fine, other times we need to correct the proposal and save it after escalating it on GitHub, so that in the future that translation becomes part of the “Translation Memory”. If the translation of a string is not accurate, it can be changed at any time. I don’t know if it can be a solution for some parts of SUMO articles. We already have templates, maybe we should further implement the creation and use of templates, focusing on this tool, to avoid typing the translation of procedures/steps that are repeated identically in many articles. Q: What are the biggest challenges you’re currently facing as a SUMO contributor? Are there any specific technical issues you think should be prioritized for fixing? Being able to better train potential new localizers, and help infuse the same level of passion that I have in managing the Italian KB of SUMO. As for technical issues, staying within the scope of translating support articles, I do not encounter major problems in terms of translating and updating articles, but perhaps it is because I now know the strengths and weaknesses of the platform’s tools and I know how to manage them. Maybe we could find a way to remedy what is usually the most frustrating thing for a contributor/localizer who, for example, is updating an article directly online: the loss of their changes after clicking the “Preview Content” button. That is when you click on the “Preview Content” button after having translated an article to correct any formatting/typing errors. If you accidentally click a link in the preview and don’t right-click the link to select “Open Link in New Tab” from the context menu, the link page opens replacing/overwriting the editing page and if you try to go back everything you’ve edited/translated in the input field is gone forever… And you have to start over. A nightmare that happened to me more than once often because I was in a hurry. I used to rely on a very good extension that saved all the texts I typed in the input fields and that I could recover whenever I wanted, but it is no longer updated for the newer versions of Firefox. I’ve tried others, but they don’t convince me. 
So, in my opinion, there should be a way to avoid this issue without installing extensions. I’m not a developer, I don’t know if it’s easy to find a solution, but we have Mozilla developers who are great ;) Maybe there could be a way to automatically save a draft of the edit every “x” seconds to recover it in case of errors with the article management. Sometimes, even the “Preview Content” button could be dangerous. If you accidentally lost your Internet connection and didn’t notice, if you click on that button, the preview is not generated, you lose everything and goodbye products! Q: Your background as a freelance architect is fascinating! Could you tell us more about that? Do you see any connections between your architectural work and your contribution to Mozilla, or do you view them as completely separate aspects of your life? As an architect I can only speak from my personal experience, because I live in a small town, in a beautiful region which presents me with very different realities than those colleagues have to deal with in big cities like Rome or Milan. Here everything is quieter, less frenetic, which is sometimes a good thing, but not always. The needs of those who commission a project are different if you have to carry it out in a big city, the goal is the same but, urban planning, local building regulations, available spaces in terms of square footage, market requests/needs, greatly influence the way an architect works. Professionally I have had many wonderful experiences in terms of design and creativity (houses, residential buildings, hotels, renovations of old rural or mountain buildings, etc.), challenges in which you often had to play with just a centimeter of margin to actually realize your project. Connection between architecture and contribution to Mozilla? Good question. I see my contribution to Mozilla a bit like participating in the realization of a project, the tools change but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority. If someone wants our “cookies” and unfortunately often not only those, they have to knock, ask permission and if we do not want to have intrusive guests, that someone has to turn around, go away and let us do our things without sticking their nose in. This is my idea of ​​Mozilla, this is the reason that pushed me to believe in its values ​​(The user and his privacy first) and to contribute as a volunteer, and this is what I would like to continue to believe even if someone might say that I am naive, that “they are all the same”. My duty as an architect is like that of a good parent, when necessary I must always warn my clients about why I would advise against certain solutions that I, from professional experience, already know are difficult to implement or that could lead to future management and functionality problems. In any case I always look for solutions that can satisfy my clients’ desires. Design magazines are beautiful, but it is not always possible to reproduce a furnishing solution in living environments that are completely different from the spaces of a showroom set up to perfection for a photo shoot… Mozilla must continue to do what it has always done, educate and protect users, even those who do not use its browser or its products, from those “design magazines” that could lead them to inadvertently make bad choices that they could regret one day. 
Q: Can you tell us more about the Italian locale team in SUMO and how do you collaborate with each other? First of all, it’s a fantastic team! Everyone does what they do best, there are those who help users in need on the forums, those who translate, those who check the translations and do QA by reporting things that need to be corrected or changed, from punctuation errors to lack of fluency or clarity in the translation, those who help with images for articles because often the translator needs the specific image for an operating system that he does not have. As for translations, which is my main activity, we usually work together with 4- 5 collaborators/friends, and we use a consolidated procedure. Translation of an article, opening a specific discussion for the article in the forum section dedicated to translations with the link of the first translation and the request for QA. Intervention of anyone who wants to report/suggest a correction or a change to be made, modification, link to the new revised version based on the suggestions, rereading and if everything is ok, approval and publication. The translation section is public — like all the other sections of the Mozilla Italia forum — and anyone can participate in the discussion. We are all friends, volunteers, some of us know each other only virtually, others have had the chance to meet in person. The atmosphere is really pleasant and even when a discussion goes on too long, we find a way to lighten the mood with a joke or a tease. No one acts as the professor, we all learn something new. Obviously, there are those like me who are more familiar with the syntax/markup and the tools of the SUMO Wiki and those who are less, but this is absolutely not a problem to achieve the final result which is to provide a valid guide to users. Q: Looking back on your contribution to SUMO, what was the most memorable experience for you? Anything that you’re most proud of? It’s hard to say… I’m not a tech geek, I don’t deal with code, scripts or computer language so my contribution is limited to translating everything that can be useful to Italian users of Mozilla products/programs. So I would say: the first time I reached the 100% translation percentage of all the articles in the Italian dashboard. I have always been very active and available over the years with the various Content Managers of SUMO. When I received their requests for collaboration, I did tests, opened bugs related to the platform, and contributed to the developers’ requests by testing the procedures to solve those bugs. As for the relationship with the Mozilla community, the most memorable experience was undoubtedly my participation in the Europe MozCamp 2009 in Prague, my “first time”, my first meeting with so many people who then became dear friends, not only in the virtual world. I remember being very excited about that invitation and fearful for my English, which was and is certainly not the best. An episode: Prague, the first Mozilla talk I attended. I was trying to understand as much as possible what the speaker was saying in English. I heard this strange word “eltenen… eltenen… eltenen” repeated several times. What did it mean? After a while I couldn’t take it anymore, I turned to an Italian friend who was more expert in the topics discussed and above all who knew the English language well. Q: What the hell does “eltenen” mean? A: “Localization”. Q: “Localization???” A: “l10n… L ten n… L ocalizatio n”. Silence, embarrassment, damn acronyms! 
How could I forget my first trip outside of Europe to attend the Mozilla Summit in Whistler, Canada in the summer of 2010? It was awesome, I was much more relaxed, decided not to think about the English language barrier and was able to really contribute to the discussions that we, SUMO localizers and contributors from so many countries around the world, were having to talk about our experience, try to fix the translation platform to make it better for us and discuss all the potential issues that Firefox was having at the time. I really talked a lot and I think the “Mozillians” I interacted with even managed to understand what I was saying in English :) The subsequent meetings, the other All Hands I attended, were all a great source of enthusiasm and energy! I met some really amazing people! Q: Lastly, can you share tips for those who are interested in contributing to Italian content localization or contributing to SUMO in general? Every time a new localizer starts collaborating with us I don’t forget all the help I received years ago! I bend over backwards to put them at ease, to guide them in their first steps and to be able to transmit to them the same passion that was transmitted to me by those who had to review with infinite patience my first efforts as a localizer. So I would say: first of all, you must have passion and a desire to help people. If you came to us it’s probably because you believe in this project, in this way of helping people. You can know the language you are translating from very well, but if you are not driven by enthusiasm everything becomes more difficult and boring. Don’t be afraid to make mistakes, if you don’t understand something ask, you’re among friends, among traveling companions. As long as an article is not published we can correct it whenever we want and even after publication. We were all beginners once and we are all here to learn. Take an article, start translating it and above all keep it updated. If you are helping on the support forums, be kind and remember that many users are looking for help with a problem and often their problems are frustrating. The best thing to do is to help the user find the answer they are looking for. If a user is rude, don’t start a battle that is already lost. You are not obligated to respond, let the moderators intervene. It is not a question of wanting to be right at all costs but of common sense.  
  • Don Marti: links for 29 Oct 2024 (2024/10/29 00:00)
    Satire Without Purpose Will Wander In Dark Places Broadly labelling the entirety of Warhammer 40,000 as satire is no longer sufficient to address what the game has become in the almost 40 years since its inception. It also fails to answer the rather awkward question of why, exactly, these fascists who are allegedly too stupid to understand satire are continually showing up in your satirical community in the first place. Why I’m staying with Firefox for now – Michael Kjörling [T]he most reasonable option is to keep using Firefox, despite the flaws of the organization behind it. So far, at least these things can be disabled through settings (for example, their privacy-preserving ad measurement), and those settings can be prepared in advance. Google accused of shadow campaigns redirecting antitrust scrutiny to Microsoft, Google’s Shadow Campaigns (so wait a minute, Microsoft won’t let companies use their existing Microsoft Windows licenses for VMs in the Google cloud, and Google is doing a sneaky advocacy campaign? Sounds like content marketing for Amazon Linux® Scripting News My friends at Automattic showed me how to turn on ActivityPub on a WordPress site. I wrote a test post in my simple WordPress editor, forgetting that it would be cross-posted to Mastodon. When I just checked in on Masto, there was the freaking post. After I recovered from passing out, I wondered what happens if I update the post in my editor, and save it to the WordPress site that’s hooked up to Masto via ActivityPub. So I made a change and saved it. I waited and waited, nothing happened. I got ready to add a comment saying ahh I guess it doesn’t update, when—it updated. (Like being happy when a new web site opening in a new browser, a good sign that ActivityPub is the connecting point for this kind of connected innovation.) Related: The Web Is a Customer Service Medium (Ftrain.com) by Paul Ford. China Telecom’s next 150,000 servers will mostly use local processors Among China Telecom’s server buys this year are machines running processors from local champion Loongson, which has developed an architecture that blends elements of RISC-V and MIPS. Removal of Russian coders spurs debate about Linux kernel’s politics Employees of companies on the Treasury Department’s Office of Foreign Assets Control list of Specially Designated Nationals and Blocked Persons (OFAC SDN), or connected to them, will have their collaborations subject to restrictions, and cannot be in the MAINTAINERS file. The TikTokification of Social Media May Finally Be Its Undoing by Julia Angwin. If tech platforms are actively shaping our experiences, after all, maybe they should be held liable for creating experiences that damage our bodies, our children, our communities and our democracy. Cheap Solar Panels Are Changing the World The latest global report from the International Energy Agency (IEA) notes that solar is on track to overtake all other forms of energy by 2033. Conceptual models of space colonization - Charlie’s Diary (one more: Kurt Vonnegut’s concept for spreading genetic material) (protip: you can always close your browser tabs with creepy tech news, there will be more in a few minutes… Location tracking of phones is out of control. Here’s how to fight back. 
LinkedIn fined $335 million in EU for tracking ads privacy breaches Pinterest faces EU privacy complaint over tracking ads Dems want tax prep firms charged for improper data sharing Dow Jones says Perplexity is “freeriding,” sues over copyright infringement You Have a ‘Work Number’ on This Site, and You Should Freeze It Roblox stock falls after Hindenburg blasts the social gaming platform over bots and pedophiles) It Was Ten Years Ago Today that David Rosenthal predicted that cryptocurrency networks will be dominated by a few, perhaps just one, large participant. Writing Projects (good start for a checklist before turning in a writing project. Maybe I should write Git hooks for these.) Word.(s). (Includes some good vintage car ads. Remember when most car ads were about the car, not just buttering up the driver with how successful you must be to afford this thing?) Social Distance and the Patent System [I]t was clear from our conversation that [Judge Paul] Michel doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them. On a theoretical level, he knew that there was a lot of litigation in the software industry and that a lot of people were upset about it. But like Fed and the unemployment rate, this kind of theoretical knowledge doesn’t always create a sense of urgency. One has to imagine that if people close to Michel—say, a son who was trying to start a software company—were regularly getting hit by frivolous patent lawsuits, he would suddenly take the issue more seriously. But successful software entrepreneurs are a small fraction of the population, and most likely no judges of the Federal Circuit have close relationships with one. (Rapids is the script that gathers these, and it got a clean bill of health from the feed reader score report after I fixed the Last-Modified/If-Modified-Since and Etag handling. So expect more link dump posts here, I guess.)
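The Last-Modified/If-Modified-Since and ETag handling mentioned in the aside above is the standard conditional-request dance that well-behaved feed readers use to avoid re-downloading unchanged feeds. A minimal sketch in Python follows; it is purely illustrative (not the actual Rapids script) and assumes the requests library and a hypothetical feed URL.

    # Illustrative sketch of polite conditional feed fetching: resend the validators
    # the server gave us last time and treat a 304 response as "nothing new".
    import requests

    def fetch_feed(url, etag=None, last_modified=None):
        headers = {}
        if etag:
            headers["If-None-Match"] = etag
        if last_modified:
            headers["If-Modified-Since"] = last_modified

        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code == 304:
            return None, etag, last_modified  # feed unchanged since the last poll

        resp.raise_for_status()
        # Remember the new validators for the next poll.
        return resp.text, resp.headers.get("ETag"), resp.headers.get("Last-Modified")

    body, etag, last_mod = fetch_feed("https://blog.example/feed.rss")
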
  • Wil Clouser: Mozilla Accounts password hashing upgrades (2024/10/28 07:00)
    We’ve recently finished two significant changes to how Mozilla Accounts handles password hashes, which will improve security and increase flexibility around changing emails. The changes are entirely transparent to end-users and are applied automatically when someone logs in.
Randomizing Salts
If a system is going to store passwords, best practice is to hash the password with a unique salt per row. When Mozilla Accounts was first built we used an account’s email address as the unique salt for password hashing. This saved a column in the database and some bandwidth, but overall I think it was a poor idea. It meant people couldn’t re-use their email addresses, and it left PII sitting around unnecessarily. Instead, a better idea is simply to generate a random salt. We’ve now transitioned Mozilla Accounts to random salts.
Increasing Key Stretching Iterations
Eight years ago Ryan Kelly filed bug 1320222 to review Mozilla Accounts’ client-side key stretching capabilities and sparked a spirited conversation about iterations and the priority of the bug. Overall, this is routine maintenance - we expect any amount of stretching we do will have to be revisited periodically as hardware improves, and the value we choose is a compromise between security and time to log in, particularly on older hardware. Since we were already generating new hashes for the random salts, we took the opportunity to increase our PBKDF2 iterations from 1,000 to 650,000 – a number we’re seeing others in the industry using. This means logging in with slower hardware (like older mobile phones) may be noticeably slower. Below is an excerpt from the analysis we did, showing a Macbook from 2007 will take an additional ~3 seconds to log in:

    Key Stretch Iterations   Overhead on 2007 Macbook   Overhead on 2021 MacBook Pro M1
    100,000                  0.4800024 seconds          0.00000681 seconds
    200,000                  0.9581234 seconds          0.00000169 seconds
    300,000                  1.4539928 seconds          0.00000277 seconds
    400,000                  1.9337903 seconds          0.00029750 seconds
    500,000                  2.4146366 seconds          0.00079127 seconds
    600,000                  2.9482827 seconds          0.00112186 seconds
    700,000                  3.3960513 seconds          0.00117956 seconds
    800,000                  3.8675677 seconds          0.00117956 seconds
    900,000                  4.3614942 seconds          0.00141616 seconds

Implementation
Dan Schomburg did the heavy lifting to make this a smooth and successful project. He built the v2 system alongside v1 so both hashes are generated simultaneously, and if the v2 hash exists the login system will use that. This lets us roll the feature out slowly and gives us control if we need to disable it or roll back. We tested the code for several months on our staging server before rolling it out in production. When we did enable it in production it was over the course of several weeks, via small percentages, while we watched for unintended side-effects and bug reports. I’m pleased to say everything appears to be working smoothly. As always, if you notice any issues please let us know.
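To make the two changes concrete, here is a minimal sketch of the general technique – a per-user random salt plus PBKDF2-SHA256 stretching at 650,000 iterations – using only Python's standard library. This is illustrative only and is not the Mozilla Accounts implementation, which (as the post notes) performs its key stretching client-side and involves more than a single hash.

    # Illustrative sketch only: random per-user salt + PBKDF2-SHA256 stretching.
    # Not the actual Mozilla Accounts scheme.
    import hashlib
    import hmac
    import os

    ITERATIONS = 650_000  # the new iteration count mentioned in the post

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # random salt instead of the account's email address
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(digest, expected)

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
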
  • Don Marti: typefaces that aren’t on this blog (yet?) (2024/10/27 00:00)
    Right now I’m not using these, but they look useful and/or fun. Departure Mono: vintage-looking, pixelated, lo-fi technical vibe. Atkinson Hyperlegible Font was carefully developed by the Braille Institute to help low-vision readers. It improves legibility and readability through clear, and distinctive letters and numbers. I’m trying to keep this site fairly small and fast, so getting by with Modern Font Stacks as much as possible. Related colophon Bonus links (these are all web development, editing, and business, more or less. Yes, I’m still working on my SCALE proposal, deadline coming up.) Before you buy a domain name, first check to see if it’s haunted Discover Wiped Out MFA Spend By Following These Four Basic Steps (This headline underrates the content. If all web advertisers did these tips, then 90% of the evil stuff on the Internet would be gone—most of the web’s problems are funded by advertisers and agencies who fail to pay attention to the context in which their ads appear.) Janky remote backups without root on the far end My solar-powered and self-hosted website Let’s bring back browsing Hell Gate NYC doubled its subscription revenue in its second year as a worker-owned news outlet Is Matt Mullenweg defending WordPress or sabotaging it? Gosub – An open-source browser engine Take that Thunderbird Android client is K-9 Mail reborn, and it’s in solid beta A Bicycle for the Mind – Prologue Why I Migrated My Newsletter From Substack to Eleventy and Buttondown - Richard MacManus My Blog Engine is the Erlang Build Tool A Developer’s Guide to ActivityPub and the Fediverse
  • Don Marti: personal AI in the rugpull economy (2024/10/26 00:00)
    Doc Searls writes, in Personal Agentic AI, Wouldn’t it be good for corporate AI agents to have customer hands to shake that are also equipped with agentic AI? Wouldn’t those customers be better than ones whose agency is merely human, and limited to only what corporate AI agents allow? The obvious answer for business decision-makers today is: lol, no, a locked-in customer is worth more. If, as a person who likes to watch TV, you had an AI agent, then the agent could keep track of sports seasons and the availability of movies and TV shows, and turn your streaming subscriptions on and off. In the streaming business, like many others, the management consensus is to make things as hard and manual as possible on the customer side, and save the automation for the company side. Just keeping up with watching a National Football League team is hard…even for someone who is ON the team. Automation asymmetry, where the seller gets to reduce service costs while the customer has to do more and more manual work, is seen as a big win by the decision-makers on the high-automation side. Big company decision-makers don’t want to let smaller companies have their own agentic tools, either. Getting a DMCA Exemption to let McDonald’s franchisees fix their ice cream machines was a big deal that required a lengthy process with the US Copyright Office. Many other small businesses are locked in to the manual, low-information side of a business relationship with a larger one. (Web advertising is another example. Google shoots at everyone’s feet, and agencies, smaller firms, and browser extension developers dance.)Google employees and shareholders would be better off if it were split into two companies that could focus on useful projects for independent customers who had real choices. The first wave of user reactions to AI is happening, and it’s adversarial. Artists on sites like DeviantArt went first, and now Reddit users are deliberately posting fake answers to feed Google’s AI. On the shopping side, avoiding the output of AI and made-for-AI deceptive crap is becoming a must-have mainstream skill, as covered in How to find helpful content in a sea of made-for-Google BS and How Apple and Microsoft’s trusted brands are being used to scam you. As Baldur Bjarnason writes, The public has for a while now switched to using AI as a negative—using the term artificial much as you do with artificial flavouring or that smile’s artificial. It’s insincere creativity or deceptive intelligence. Other news is even worse. In today’s global conflict between evil oligarchs and everyone else, AI is firmly aligned with the evil oligarch side. Google, Microsoft, and Perplexity promote scientific racism in AI search results Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says Thousands of creatives sign petition against AI data scraping Authors who release under Creative Commons licenses are disagreeing with the CC organization about whether AI training is fair use: fair use alignment chart The AI Boom Could Use a Shocking Amount of Electricity, and—Baldur Bjarnason again—Your use of AI is directly harming the environment I live in. But today’s Big AI situation won’t last. Small-scale and underground AI has sustainable advantages over the huge but money-losing contenders. And it sounds like Doc is already thinking post-bubble. Adversarial now, but what about later? So how do we get from the AI adversarial situation we have now to the win-win that Doc is looking for? 
Part of the answer will be resolving the legal issues. Today’s Napster-like free-for-all environment won’t persist, so eventually we will have an AI scene in which companies that want to use your work for training have to get permission and disclose provenance. The other part of the path from today’s situation—where big companies have AI that enables scam culture and chickenization while individuals and small companies are stuck rowing through funnels and pipelines—is personal, aligned AI that balances automation asymmetries. Whether it’s solving CAPTCHAs, getting data in hard-to-parse formats, or other awkward mazes, automation asymmetries mean that as a customer, you technically have more optionality than you practically have time to use. But AI has a lot more time. If a company gives you user experience grief, with the right tools you can get back to where you would have been if they had applied less obfuscation in the first place. (icymi: Video scraping: extracting JSON data from a 35 second screen capture for less than 1/10th of a cent Not a deliberate obfuscation example, but an approach that can be applied.) So we’re going to see something like this AI cartoon by Tom Fishburne (thanks to Doc for the link) for privacy labour. Companies are already getting expensive software-as-a-service to make privacy tasks harder for the customers, which means that customers are going to get AI services to make it easier. Eventually some companies will notice the extra layers, pay attention to the research, and get rid of the excess grief on their end so you can stop running de-obfuscation on your end. That will make it work better for everyone. (GPC all the things! Data Rights Protocol) The biggest win from personal AI will, strangely enough, be in de-personalizing your personal information environment. By doing the privacy labour for you, the agentic AI will limit your addressability and reduce personalization risks. The risks to me from buying the less suitable of two legit brands are much lower than the risk of getting stuck with some awful crap that was personalized to me and not picked up on by norms enforcers like Consumer Reports. Getting more of my privacy labour done for me will not just help me personally do better #mindfulConsumption, but also increase the rewards for win-win moves by sellers. Personalization might be nifty, but filtering out crap and rip-offs is a bigger immediate win: Sunday Internet optimism Doc writes, When you limit what customers can bring to markets, you limit what can happen in those markets. As far as I can tell, the real promise for agentic AI isn’t just in enabling existing processes or making them more efficient. It’s in establishing a credible deterrent to enshittification—if you’re trying to rip me off, don’t talk to me, talk to my bot army. For just a minute, put yourself in the shoes of a product manager with a proposal for some legit project that they’re trying to get approved. If that proposal is up against a quick win for the company, like one based on creepy surveillance, it’s going to lose. But if the customers have the automation power to lower the ROI from creepy growth hacking, the legit project has a chance. And that pushes up the long-term value of the entire company. An individual locked-in customer is more valuable to the brand than an individual independent customer, but a brand with independent customers is more valuable than a brand with an equal number of locked-in customers. Anyway, hope to see you at VRM Day. Bonus links Space is Dead. 
Why Do We Keep Writing About It? It’s Time to Build the Exoplanet Telescope The tech startups shaking up construction in Europe
  • Support.Mozilla.Org: What’s up with SUMO – Q3 2024 (2024/10/25 21:59)
    Each quarter, we gather insights on all things SUMO to celebrate our team’s contributions and showcase the impact of our work. The SUMO community is powered by an ever-growing global network of contributors. We are so grateful for your contributions, which help us improve our product and support experiences, and further Mozilla’s mission to make the internet a better place for everyone. This quarter we’re modifying our update to highlight key takeaways, outline focus areas for Q4, and share our plans to optimize our tools so we can measure the impact of your contributions more effectively. Below you’ll find our report organized by the following sections: Q3 Highlights at-a-glance, an overview of our Q4 Priorities & Focus Areas, Contributor Spotlights and Important Dates, with a summary of special events and activities to look forward to! Let’s dive right in: Q3 Highlights at-a-glance Forums: We saw over 13,000 questions posted to SUMO in Q3, up 83% from Q2. The increased volume was largely driven by the navigation redesign in July. We were able to respond to over 6,300 forum questions, a 49% increase from Q2! Our response rate was ~15 hours, which is a one-hour improvement over Q2, with a helpfulness rating of 66%. August was our busiest and most productive month this year. We saw more than 4,300 questions shared in the forum, and we were able to respond to 52.7% of total in-bounds. Trends in forum queries included questions about site breakages, account and data recovery concerns, sync issues, and PPA feedback. Knowledge Base: We saw 473 en-US revisions from 45 contributors, and more than 3,000 localization revisions from 128 contributors which resulted in an overall helpfulness rating of 61%, our highest quarterly average rating YTD! Our top contributor was AliceWyman. We appreciate your eagle eyes and dedication to finding opportunities to improve our resources. For localization efforts, our top contributor was Michele Rodaro. We are grateful for your time, efforts and expert language skills. Social: On our social channels, we interacted with over 1,100 tweets and saw more than 6,000 app reviews. Our top contributor on Twitter this quarter was Isaac H who responded to over 200 tweets, expertly navigating our channels to share helpful resources, provide troubleshooting support, and help redirect feature requests to Mozilla Connect. Thank you, Isaac! On the play store, our top contributor was Dmitry K who replied to over 400 reviews! Thank you for giving helpful feedback, advice and for providing such a warm and welcoming experience for users. SUMO platform updates: There were 5 major platform updates in Q3. Our focus this quarter was to improve navigation for users by introducing new standardized topics across products, and update the forum moderation tool to allow our support agents to moderate these topics for forum posts. Categorizing questions more accurately with our new unified topics will provide us with a foundation for better data analysis and reporting. We also introduced improvements to our messaging features, localized KB display times, fixed a bug affecting pageviews in the KB dashboard, and added a spam tag to make moderation work easier for the forum moderators. We acknowledge there was a significant increase in spam questions that began in July which is starting to trend downwards. We will continue to monitor the situation closely, and are taking note of moderator recommendations on a future resolution. We appreciate your efforts to help us combat this problem! 
Check out SUMO Engineering Board to see what the platform team is cooking up in the engine room. You’re welcome to join our monthly Community Calls to learn more about the latest updates to Firefox and chat with the team. Firefox Releases: We released Firefox 128, Firefox 129 and Firefox 130 in Q3 and we made significant updates to our wiki template for the Firefox train release. Q4 Priorities & Focus Areas CX: Enhancing the user experience and streamlining support operations. Kitsune: Improved article helpfulness survey and tagging improvements to help with more granular content categorization. SUMO: For the rest of 2024, we’re working on an internal SUMO Community Report, FOSDEM 2025 preparation, Firefox 20th anniversary celebration, and preparing for an upcoming Community Campaign around QA. Contributor Spotlights We have seen 37 new contributors this year, with 10 new contributors joining the team this quarter. Among them, ThePillenwerfer, Khalid, Mozilla-assistent, and hotr1pak, who shared more than 100 contributions between July–September. We appreciate your efforts! Cheers to our top contributors this quarter: Our multi-channel contributors made a significant impact by supporting the community across more than one channel (and in some cases, all three!)  All in all it was an amazing quarter! Thanks for all you do. Important dates October 29th: Firefox 132 will be released October 30th: RSVP to join our next Community Call! All are welcome. We do our best to create a safe space for everyone to contribute. You can join on video or audio, at your discretion. You are also welcome to share questions in advance via the contributor forum, or our Matrix channel. November 9th: Firefox’s 20th Birthday! November 14th Save the date for an AMA with the Firefox leadership team FOSDEM ’25: Stay tuned! We’ll put a call out for volunteers and for talks in early November Stay connected Join the conversation on the contributor forum to talk shop about our latest releases Learn about team updates on the SUMO Blog Connect with other contributors on our #SUMO Matrix group Follow us on X/Twitter Subscribe to our YouTube channel  Get daily updates from around the web (M-F) by subscribing to the Firefox Daily Digest Check out AirMozilla if you’re an NDA’ed contributor, where you’ll find recordings of our bi-weekly Release Meetings Thanks for reading! If you have any feedback or recommendations on future features for this update, please reach out to Kiki and Andrea.
  • Mozilla Localization (L10N): L10n report: October 2024 Edition (2024/10/24 22:22)
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
New community/locales added
We’re grateful for the Abkhaz community’s initiative in reaching out to localize our products. Thank you for your valuable involvement!
New content and projects
What’s new or coming up in Firefox desktop
Search Mode Switcher
A new feature in development has become available (behind a flag) with the release of the latest Nightly version 133: the Search Mode Switcher. You may have already seen strings for this land in Pontoon, but this feature enables you to enter a search term into the address bar and search through multiple engines. After entering the search term and selecting a provider, the search term will persist (instead of showing the site’s URL) and then you can select a different provider by clicking an icon on the left of the bar.
Firefox Search Mode Switcher
You can test this now in version 133 of Nightly by entering about:config in the address bar and pressing enter, proceeding past the warning, and searching for the following flag: browser.urlbar.scotchBonnet.enableOverride. Toggling the flag to true will enable the feature.
New profile selector
Starting in Version 134 of Nightly, a new feature to easily select, create, and change profiles within Firefox will begin rolling out to a small number of users worldwide. Strings are planned to be made available for localization soon.
Sidebar and Vertical Tabs
Finally, as mentioned in the previous L10n Report, features for a new sidebar with expanded functionality along with the ability to change your tab layout from horizontal to vertical are available to test in Nightly through the Firefox Labs feature in your settings. Just go to your Nightly settings, select the Firefox Labs section from the left, and enable the feature by clicking the checkbox. Since these are experimental, there may continue to be occasional string changes or additions. While you check out these features in your languages, if you have thoughts on the features themselves, we welcome you to share feedback through Mozilla Connect.
What’s new or coming up in web projects
AMO and AMO Frontend
To improve user experience, the AMO team plans to implement changes that will enable only locales meeting a specific completion threshold. Locales with very low completion percentages will be disabled in production but will remain available on Pontoon for teams to continue working on them. The exact details and timeline will be communicated once the plan is finalized.
Mozilla Accounts
Currently, Mozilla Accounts is going through a redesign of some of its log-in pages’ user experiences, so we will continue to see small updates here and there for the rest of the year. There is also a planned update to the Mozilla Accounts payment sub-platform. We expect to see a new file added to the project before the end of the year – but a large number of the strings will be the same as now. We will be migrating those translations so they don’t need to be translated again, but there will be a number of new strings as well.
Mozilla.org
The Mozilla.org site is undergoing a series of redesigns, starting with updates to the footer and navigation bars. These changes will continue through the rest of the year and beyond. The next update will focus on the About page.
Additionally, the team is systematically removing obsolete strings and replacing them with updated or new strings, ensuring you have enough time to catch up while minimizing effort on outdated content. There are a few new Welcome pages made available to a select few locales. Each of these pages has a different deadline. Make sure to complete them before they are due.
What’s new or coming up in SUMO
The SUMO platform just got a navigation redesign in July to improve navigation for users & contributors. The team also introduced new topics that are standardized across products, which lay the foundation for better data analysis and reporting. Most of the old topics, and their associated articles and questions, have been mapped to the new taxonomy, but a few remain that will be manually mapped to their new topics. On the community side, we also introduced improvements and fixes to the messaging feature, changed the KB display time to a locale-appropriate format, fixed the bug so we can properly display pageview numbers in the KB dashboard, and added a spam tag to the question list when a question is marked as spam, to make moderation work easier for the forum moderators. There will be a community call coming up on Oct 30 at 5pm UTC where we will be talking about the Firefox 20th anniversary celebration and the Firefox 132 release. Check out the agenda for more detail.
What’s new or coming up in Pontoon
Enhancements to Pontoon Search
We’re excited to announce that Pontoon now allows for more sophisticated searches for strings, thanks to the addition of the new search panel! When searching for a string, clicking on the magnifying glass icon will open a dropdown, allowing users to select any combination of search options to help refine their search. Please note that the default search behavior has changed, as string identifiers must now be explicitly enabled in search options.
Pontoon Enhanced Search Options
User status banners
As part of the effort to introduce badges/achievements into Pontoon, we’ve added status banners under user avatars in the translation workspace. Status banners reflect the permissions of the user within the respective locale and project, eliminating the need to visit their profile page to view their role. Namely, team managers will get the ‘MNGR’ tag, translators get the ‘TRNSL’ tag, project managers get the ‘PM’ tag, and those with site-wide admin permissions receive the ‘ADMIN’ tag. Users who have joined within the last three months will get the ‘NEW USER’ tag for their banner. Status banners also appear in comments made under translations.
New Pontoon logo
We hope you love the new Pontoon logo as much as we do! Thanks to all of you who expressed your preference by participating in the survey.
Pontoon New Logo
Friends of the Lion
Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!
Useful Links
#l10n-community channel on Element (chat.mozilla.org)
Localization category on Discourse
Twitter
L10n blog
Questions? Want to get involved?
If you want to get involved, or have any questions about l10n, reach out to:
Francesco Lodolo (flod) – Engineering Manager
Bryan – l10n Project Manager
Delphine – l10n Project Manager for mobile
Peiying (CocoMo) – l10n Project Manager for mozilla.org, marketing, and legal
Francis – l10n Project Manager for Common Voice, Mozilla Foundation
Théo Chevalier – l10n Project Manager for Mozilla Foundation
Matjaž (mathjazz) – Pontoon dev
Eemeli – Pontoon, Fluent dev
Did you enjoy reading this report? Let us know how we can improve it.
  • Mozilla Open Policy & Advocacy Blog: Mozilla Participates to Ofcom’s Draft Transparency Reporting Guidance (2024/10/23 16:09)
    On 4th October 2024, Mozilla provided our input to Ofcom’s consultation on its draft transparency reporting guidance. Transparency plays a crucial role in promoting accountability and public trust, particularly when it comes to how tech platforms handle harmful or illegal content online and we were pleased to share our research, insight, and input with Ofcom. Scope of the Consultation Ofcom’s proposed guidance aims to improve transparency reporting, allowing the public, researchers, and regulators to better understand how categorized services operate and whether they are doing enough to respect users’ rights and protect users from harm. We support this effort and believe additional clarifications are needed to ensure that Ofcom’s transparency process fully meets its objectives. The following clarifications will ensure that the transparency reporting process effectively holds tech companies accountable, safeguards users, fosters public trust, and allows for effective use of transparency reporting by different stakeholders. The Importance of Standardization One of our key recommendations is the need for greater standardization in transparency elements. Mozilla’s research on public ad repositories developed by many of the largest online platforms finds that there are large discrepancies across these transparency tools, making it difficult for researchers and regulators to compare information across platforms. Ofcom’s guidance must ensure that transparency reports are clear, systematic, and easy to compare year-to-year. We recommend that Ofcom provide explicit guidelines on the specific data platforms must provide in their transparency reports and the formats in which they should be reported. This will enable platforms to comply uniformly and make it easier for regulators and researchers to monitor patterns over time. In particular, we encourage Ofcom to distinguish between ‘core’ and ‘thematic’ information in transparency reports. We understand that core information will be required consistently every year, while thematic data will focus on specific regulatory priorities, such as emerging areas of concern. However, it is important that platforms are given enough advance notice to prepare their systems for thematic information to avoid any disproportionate compliance burden. This is particularly important for smaller businesses who have limited resources and may find it challenging to comply with new reporting criteria, compared to big tech companies. We also recommend that data about content engagement and account growth should be considered ‘core’ information that needs to be collected and reported on a regular basis. This data is essential for monitoring civic discourse and election integrity. Engaging a Broader Range of Stakeholders Mozilla also believes that a broad range of stakeholders should be involved in shaping and reviewing transparency reporting. Ofcom’s consultative approach with service providers is commendable.  We encourage further expansion of this engagement to include stakeholders such as researchers, civil society organizations, and end-users. Based on our extensive research, we recommend “transparency delegates.” Transparency delegates are experts who can act as intermediaries between platforms and the public, by using their expertise to evaluate platforms’ transparency in a particular area (for example, AI) and to convey relevant information to a wider audience. 
This could help ensure that transparency reports are accessible and useful to a range of audiences, from policymakers to everyday users who may not have the technical expertise to interpret complex data. Enhancing Data Access for Researchers Transparency reports alone are not enough to ensure accountability. Mozilla emphasizes the importance of giving independent researchers access to platform data. In our view, data access is not just a tool for academic inquiry but a key component of public accountability. Ofcom should explore mechanisms for providing researchers with access to data in a way that protects user privacy while allowing for independent scrutiny of platform practices. This access is crucial for understanding how content moderation practices affect civic discourse, public safety, and individual rights online. Without it, we risk relying too heavily on self-reported data, which can be inconsistent or incomplete.  Multiple layers of transparency are needed, in order to build trust in the quality of platform transparency disclosures. Aligning with Other Regulatory Frameworks Finally, we encourage Ofcom to align its transparency requirements with those set out in other major regulatory frameworks, particularly the EU’s Digital Services Act (DSA). Harmonization will help reduce the compliance burden on platforms and allow users and researchers to compare transparency reports more easily across jurisdictions. Mozilla looks forward to continuing our work with Ofcom and other stakeholders to create a more transparent and accountable online ecosystem.   The post Mozilla Participates to Ofcom’s Draft Transparency Reporting Guidance appeared first on Open Policy & Advocacy.
  • Mozilla Thunderbird: Maximize Your Day: Focus Your Inbox with ‘Grouped by Sort’ (2024/10/21 13:11)
For me, staying on top of my inbox has always seemed like an unattainable goal. I’m not an organized person by nature. Periodic and severe email anxiety (thanks, grad school!) often meant my inbox was in the quadruple digits (!). Lately, something’s shifted. Maybe it’s working here, where people care a lot about making email work for you. These past few months, my inbox has stayed if not manageable, then pretty close to it. I’ve only been here a year, which has made this an easier goal to reach. Treating my email like laundry is definitely helping! But how do you get a handle on your inbox when it feels out of control? R.L. Dane, one of our fans on Mastodon, reminded us Thunderbird has a powerful, built-in tool that can help: the ‘Grouped by Sort’ feature!
Email Management for All Brains
For those of us who are neurodiverse, email management can be a challenge. Each message that arrives in your inbox, even without a notification ding or popup, is a potential distraction. An email can contain a new task for your already busy to-do list. Or one email can lead you down a rabbit hole while other emails pile up around it. Eventually, those emails we haven’t archived, replied to, or otherwise processed take on a life of their own. Staring at an overgrown inbox isn’t fun for anyone. It’s especially overwhelming for those of us who struggle with executive function – the skills that help us focus, plan, and organize. A full or overfull inbox doesn’t seem like a hurdle we can overcome. We feel frozen, unsure where to even begin tackling it, and while we’re stuck trying to figure out what to do, new emails keep coming. Avoiding our inboxes entirely starts to seem like the only option – even if this is the most counterproductive thing we can do. So, how in the world do people like us dig out of our inboxes?
Feature for Focus: Grouped by Sort
We love seeing R.L. Dane’s regular Thunderbird tips, tricks, and hacks for productivity. In fact, he was the one who brought this feature to our attention in a Mastodon post! We were thrilled when we asked if we could turn it into a productivity post and got an excited “Yes!” in response. As he pointed out, using Grouped by Sort, you can focus on more recently received emails. Sorting by Date, this feature will group your emails into the following collapsible categories: Today, Yesterday, Last 7 Days, Last 14 Days, and Older. Turning on Grouped by Sort is easy. Click the message list display options, then click ‘Sort by.’ In the top third, toggle the ‘Date’ option. In the second third, select your preferred order of Descending or Ascending. Finally, in the bottom third, toggle ‘Grouped by Sort.’ Now you’re ready to whittle your way through an overflowing inbox, one group at a time. And once you get down to a mostly empty and very manageable inbox, you’ll want to find strategies and habits to keep it there. Treating your email like laundry is a great place to start. We’d love to hear your favorite email management habits in the comments!
Resources
ADDitude Magazine: https://www.additudemag.com/addressing-e-mail/
Dixon Life Coaching: https://www.dixonlifecoaching.com/post/why-high-achievers-with-adhd-love-and-hate-their-email-inbox
The post Maximize Your Day: Focus Your Inbox with ‘Grouped by Sort’ appeared first on The Thunderbird Blog.
  • Mozilla Open Policy & Advocacy Blog: Mozilla Responds to BIS’ Proposed Rule on Reporting Requirements for the Development of Advanced AI Models and Computing Clusters (2024/10/21 13:00)
Lately, we’ve talked a lot about the importance of ensuring that governments take into account open source, especially when it comes to AI. We submitted comments to NIST on Dual-Use Foundation Models and NTIA on the benefits of openness, and advocated in Congress. As frontier models and big tech continue to dominate the policy discussion, we need to ensure that open source remains top of mind for policymakers and regulators. At Mozilla, we know that open source is a fundamental driver of software that benefits people instead of a few big tech corporations; it helps enable breakthroughs in medicine and science, and allows smaller companies to compete with tech giants. That’s why we’ll continue to raise the voice of the open source community in regulatory circles whenever we can – and most recently, at the Department of Commerce. Last month, the Bureau of Industry and Security (BIS) released a proposed rule about reporting requirements for developing advanced AI models and computing clusters. This rule stems from the White House’s 2023 Executive Order on AI, which focuses on the safe and trustworthy development of AI. BIS asked for feedback from industry and stakeholders on topics such as the notification schedule for entities covered by the rule, how information is collected and stored, and what thresholds would trigger reporting requirements for these AI models and clusters. While BIS’ proposed rule seeks to balance national security with economic concerns, it doesn’t adequately take into account the needs of the open source community or provide clarity as to how the proposed rule may affect them. This is critical given that some of the most capable and widely used AI models are open source or partially open source. Open source software is a key driver of technological progress in AI and creates tremendous economic and security benefits for the United States. In our full comments, we set out how BIS can further engage with the open source community, and we emphasize the value that open source offers for both the economy and national security. Below are some key points from our feedback to BIS:
1. BIS should clarify how the proposed rules would apply to open-source projects, especially since many don’t have a specific owner, are distributed globally, and are freely available. Ideally, BIS could work with organizations like the Open Source Initiative (OSI) to come up with a framework.
2. As BIS updates the technical conditions for collection thresholds in response to technological advancements, we suggest setting a minimum update cycle of six months. This is crucial given the rapid pace of change in the AI landscape. It’s also necessary to maintain BIS’ core focus on the regulation of frontier models and to not unnecessarily stymie innovation across the broader AI ecosystem.
3. BIS should provide additional clarity about what counts as ‘planned applicable activities’ and when a project is considered ‘planned.’
Mozilla appreciates BIS’s efforts to try and balance the benefits and risks of AI when it comes to national and economic security. We hope that BIS further considers the potential impact of the proposed rule and future regulatory actions on the open source community and appropriately weighs the myriad benefits which open source AI and open source software more broadly produce for America’s national and economic interests. We look forward to providing views as the US Government continues work on these important issues.
The post Mozilla Responds to BIS’ Proposed Rule on Reporting Requirements for the Development of Advanced AI Models and Computing Clusters appeared first on Open Policy & Advocacy.
  • Francesco Lodolo: The (pre)history of Mozilla’s localization repository infrastructure (2024/10/20 13:28)
    With many new faces joining Mozilla, as either staff or volunteer localizers, most are only familiar with the current, more streamlined localization infrastructure. I thought it might be interesting to take a look back at the technical evolution of Mozilla’s localization systems. Having personally navigated every version — first as a community localizer from 2004 to 2013, and later as staff — I’ll share my perspective. That said, I might not have all the details exactly right (or I may have removed some for the sake of my sanity), so feel free to point out any inaccuracies. <figcaption class="wp-element-caption">Attending one of the earliest events organized by the Italian Community (2007)</figcaption> Early days: Centralized version control Back in the early 2000s, smartphones weren’t a thing, Windows XP was an acceptable operating system — especially in comparison to Windows Me — and distributed version controls weren’t as common. Let’s be honest, centralized version controls were not fun: every commit meant interacting directly with the server. You had to remember to update your local copy, commit your changes, and then hope no one else had committed in the meantime — otherwise, you were stuck resolving conflicts. Given the high technical barriers, localizers at that time were primarily technical users, not discouraged by crappy text editors — encoding issues, BOMs, and other amenities — and command line tools. To make things more complicated, localizers had to deal with 2 different systems: CVS (Concurrent Versioning System) was used for products like Mozilla Suite, Phoenix/Firefox, etc. To increase confusion, it used branch names that followed the Gecko versions (e.g. MOZILLA_1_8_BRANCH), and those didn’t map at all to product versions. Truth be told, the whole release cadence and cycle felt like complete chaos back then, at least as a volunteer. SVN (Subversion) was used to localize mozilla.org, addons.mozilla.org (AMO), and other web projects. With time, desktop and web-based applications emerged to support localizers, hiding some of the complexity of version control systems and providing translation management features: Mozilla Translator (a local Java application. Yes kids, Java). Narro. Pootle. Verbatim: a customized Pootle instance run by Mozilla, used to localize web projects like addons.mozilla.org. This was shut down in 2015 and projects transitioned to Pontoon. Pontoon (here’s the first idea and repository, if you’re curious). Aisle, an internal experiment based on C9 that never got past the initial tests. This proliferation of new tools led to a couple of key principles that are still valid to this day: The repository, not the TMS (Translation Management System), is the source of truth. TMSs need to support bidirectional synchronization between their internal data storage and the repository, i.e. they need to read updated translated content from the repository and store it internally (establishing a conflict resolution policy), not just write updates. This might look trivial, but it’s an outlier in the localization industry, where the tool is the source of truth, and synchronization only happens in one direction (from the TMS to the repository). The shift to Mercurial At the end of 2007, Mozilla made the decision to transition from CVS to Mercurial, this time opting for a distributed version control system. For localization, this meant making the move to Mercurial as well, though it took a few more months of work. 
This marked the beginning of a new era where the infrastructure quickly started becoming more complex. As code development was happening in mozilla-central, localization was supposed to be stored in a matching l10n-central repository. But here’s the catch: instead of one repository, the decision was to use one repository per locale, each one including the localization for all shipping projects (Firefox, Firefox for Android, etc.). I’m not sure how many repositories that meant at the time — based on the dependencies of this bug, probably around 30 — but as of today, there are 156 l10n-central repositories, while Firefox Nightly only ships in 111 locales (a few of them added recently). The next massive change was the adoption of the rapid release cycle in 2011: 3 new sets of repositories had to be created for the corresponding Firefox versions: l10n/mozilla-aurora, l10n/mozilla-beta, l10n/mozilla-release. Localizers working against Nightly in l10n-central would need to manually move their updates to l10n/mozilla-aurora, which was becoming the main target for localization. At the end of the cycle (“merge day”), someone in the localization team would manually move content from Aurora to Beta, overwriting any changes. In order to allow localizers to make small fixes to Beta, 2 separate projects were set up in Pontoon (one working against Aurora, one against Beta), and it was up to localizers to keep them in sync, given that content in Beta would be overwritten on merge day. If you’re still trying to keep count, we’re now at about 600 Mercurial repositories to localize a project like Firefox (and a few hundreds more added later for Firefox OS, one for each locale and version, but that’s a whole different story). I won’t go into the fine details, but at this point localizers were also supposed to “sign off” on the version of their localization that they wanted to ship. Over time, this was done by: Calling out which changeset you wanted to ship in an email thread. Later, requesting sign-off in a web app called Elmo (because it was hosted on l10n.mozilla.org, (e)l.m.o., got it?). Someone in the localization team had to manually go through each request, check the diff from the previous sign-off to ensure that it would not break Firefox, and either accept or reject it. For context, at the time DTDs were still heavily in use for localization, and a broken translation could easily brick the browser (yellow screen of death).  With the drop of Aurora in 2017, the localization team started reviewing and managing sign-offs in Elmo without waiting for localizers to make a request. Yay for localizers, one less thing to do. In 2020, partly because of the lay-offs that impacted the team, we completely dropped the sign-off process and decommissioned Elmo, automatically taking the latest changeset in each l10n repository. The new kid on the block: GitHub In 2015 we started migrating repositories from SVN to GitHub. At the time, that meant mostly web projects, managed by Pascal Chevrel and me, with the notable exception of Firefox for iOS. That part of localization had a whole infrastructure of its own: a web dashboard to track progress, a tool called langchecker to update files and identify errors, and even a file format called dotlang (.lang) that was used for a while to localize mozilla.org (we switched to Fluent in 2019). 
The move to GitHub removed a lot of bureaucracy, as the team could create new repositories and grant access to localizers without going through an external team, as was the case with Mercurial. Still today, GitHub is the go-to choice for new projects, although the introduction of SAML single sign-on created a significant hurdle when it comes to adding external contributors to a project.
Introduction of cross-channel for Firefox
Remember the 600 repositories? Still there… Also, the most observant among you might wonder: didn’t Mozilla have another version of Firefox (Extended Support Release, or ESR)? You’re correct, but the compromise there was that ESR would be string-frozen, so we didn’t need another ~150 repositories: we used the content from mozilla-release at the time of launch, and that’s it, no more updates. In 2017, the Aurora channel was “removed”, leaving Nightly (based on mozilla-central), Developer Edition and Beta (based on mozilla-beta), Release (based on mozilla-release) and ESR. I use quotes, because “aurora” is still technically the internal channel name for Dev Edition. That was a challenge, as Aurora represented the main target for localization. That change forced us to move all locales to work on Nightly around April 2017. Later in the year, Axel Hecht came up with a core concept that still supports how we localize Firefox nowadays: cross-channel. What if, instead of having to extract strings from 4 huge code repositories, we create a tool that generates a superset of the strings shipping in all supported versions (channels) of Firefox, and put them in a nimble, string-only repository? That’s exactly what cross-channel did, allowing us to drop ~300 repositories (plus ~150 already dropped because of the removal of Aurora). It also gave us the opportunity to support localization updates in release and ESR. At this point, localization for any shipping version of Firefox comes out of a single repository for each locale (e.g. l10n-central/fr for French). <figcaption class="wp-element-caption">Code repositories are used to generate cross-channel content, which in turn is used to feed Pontoon, storing translations in l10n-central repositories. From the chart, it’s also visible how English (en-US) is treated as a special case, going directly from code repositories to the build system.</figcaption> In hindsight, cross-channel was overly complex: it would not only create the superset content, but it would also replay the Mercurial history of the commit introducing the change. The content would land in the cross-channel repository with a reference to the original changeset (example), making it possible to annotate the file via Mercurial’s web interface. In order to do that, the code hooked directly into Mercurial internals, and it would break frequently thanks to the complexity of Mozilla’s repositories. In 2021, the code was changed to stop replaying history and only merge content. At this point, in late 2017, Firefox localization relied on ~150 l10n repositories, and 2 source repositories for cross-channel — one used as a quarantine, the other, called gecko-strings, connected to Pontoon to expose strings for community localization.
Current Firefox infrastructure
Fast-forward to 2024: with Mozilla’s decision to move development to Git, we had an opportunity to simplify things even further, and rethink some of the initial choices: Instead of 2 repositories for cross-channel, we decided to use only one repository with 2 branches.
The cross-channel code was completely rewritten by Eemeli Aro and now runs as a GitHub workflow. Instead of ~150 repositories, we now have a single l10n repository, covering all supported versions of Firefox as l10n-central used to do. All locales, except for Italian and Japanese, are localized through Pontoon. Thunderbird has adopted a similar structure, with their own 3 repositories. The team completed the migration to Git in June, ahead of the rest of the organization, and all current versions of Firefox ship from the firefox-l10n repository (including ESR 115 and ESR 128). Conclusions So, this was the not-so-short story of how Mozilla’s localization infrastructure has evolved over time, with a focus on Firefox. Looking back, it’s remarkable to see how far we’ve come. Today, we’re in a much better place, also considering the constant effort to improve Pontoon and other tools used by the community. As I approach one of my many anniversaries — I have one for when I started as a volunteer (January 2004), when I became a member of staff as a contractor (April 2013), one “official” when I became an employee (November 2018) — it’s humbling to think about what a small team has accomplished over the past 22 years. These milestones remind me of the incredible contributions of so many brilliant individuals at Mozilla, whose passion helped build the foundations we stand on today. It’s also bittersweet to go back and read emails from over 15 years ago, remembering just how pivotal the community was in shaping Firefox into what it is today. The dedication of volunteers and localizers helped make Firefox a truly global browser, and their impact is still felt — and sometimes missed — today. <figcaption class="wp-element-caption">Mozilla L10n Community in Whistler, 2008 (Photo by Tristan Nitot)</figcaption>
  • Anne van Kesteren: WebKit and web-platform-tests (2024/10/19 18:59)
    Let me state upfront that this strategy of keeping WebKit synchronized with parts of web-platform-tests has worked quite well for me, but I’m not at all an expert in this area so you might want to take advice from someone else. Once I've identified what tests will be impacted by my changes to WebKit, including what additional coverage might be needed, I create a branch in my local web-platform-tests checkout to make the necessary changes to increase coverage. I try to be a little careful here so it'll result in a nice pull request against web-platform-tests later. I’ve been a web-platform-tests contributor quite a while longer than I’ve been a WebKit contributor so perhaps it’s not surprising that my approach to test development starts with web-platform-tests. I then run import-w3c-tests web-platform-tests/[testsDir] -s [wptParentDir] on the WebKit side to ensure it has the latest tests, including any changes I made. And then I usually run them and revise, as needed. This has worked surprisingly well for a number of changes I made to date and hasn’t let me down. Two things to be mindful of: On macOS, don’t put development work, especially WebKit, inside ~/Documents. You might not have a good time. [wptParentDir] above needs to contain a directory named web-platform-tests, not wpt. This is annoyingly different from the default you get when cloning web-platform-tests (the repository was renamed to wpt at some point). Perhaps something to address in import-w3c-tests.
  • Chris H-C: Nine-Year Moziversary (2024/10/18 16:02)
    On this day (or near it) in 2015, I joined the Mozilla project by starting work as a full-time employee of Mozilla Corporation. I’m two hardware refreshes in (I was bad for doing them on time, leaving my 2017 refresh until 2018 and my 2020 refresh until 2022! (though, admittedly, the 2020 refresh was actually pushed to the end of 2021 by a policy change in early 2020 moving from 2-year to 3-year refreshes)) and facing a third in February. Organizationally, I’m three CEOs and sixty reorgs in. I’m still working on Data, same as last year. And I’m still trying to move Firefox Desktop to use solely Glean for its data collection system. Some of my predictions from last year’s moziversary post came true: I continued working on client code in Firefox Desktop, I hardly blogged at all, we continue to support collections in all of Legacy Telemetry’s systems (though we’ve excitingly just removed some big APIs), Glean has continued to gain ground in Firefox Desktop (we’re up to 4134 metrics at time of writing), and “FOG Migration” has continued to not happen (I suppose it was one missed prediction that top-down guidance would change — it hasn’t, but interpretations of it sure have), and I’m publishing this moziversary blog post a little ahead of my moziversary instead of after it. My biggest missed prediction was “We will quietly stop talking about AI so much, in the same way most firms have stopped talking about Web3 this year”. Mozilla, both Corporation and Foundation, seem unable to stop talking about AI (a phrase here meaning “large generative models built on extractive data mining which use chatbot UI”). Which, I mean, fair: it’s consuming basically all the oxygen and money in the industry at the moment. We have to have a position on it, and it’s appropriating “Open” language that Mozilla has a vested interest in protecting (though you’d be excused for forgetting that given how little we’ve tried to work with the FSF and assorted other orgs trying to shepherd the ideas and values of Open Source in the recent past). But we’ve for some reason been building products around these chatbots without interrogating whether that’s a good thing. And you’d think with all our worry about what a definition of Open Source might mean, we’d make certain to only release products that are Open Source. But no. I understand why we’re diving into products and trying to release innovative things in product shape… but Mozilla is famously terrible at building products. We’re okay at building services (I’m a fan of both Monitor and Relay). But where we seem to truly excel is in building platforms and infrastructure. We build Firefox, the only independent browser, a train that runs on the rails of the Web. We build Common Voice, a community and platform for getting underserved languages (where which languages are used is determined by the community) the support they need. We built Rust, a memory-safe systems language that is now succeeding without Mozilla’s help. We built Hubs, a platform for bringing people together in virtual space with nothing but a web browser. We’re just so much better at platforms and infrastructure. Why we don’t lean more into that, I don’t know. Well, I _do_ know. Or I can guess. Our golden goose might be cooked. How can Mozilla make money if our search deal becomes illegal? Maintaining a browser is expensive. Hosting services is expensive. Keeping the tech giants on their toes and compelling them to be better is expensive. 
We need money, and we’ve learned that there is no world where donations will be enough to fund even just the necessary work let alone any innovations we might try. How do you monetize a platform? How do you monetize infrastructure? Governments do it through taxation and funding. But Mozilla Corporation isn’t a government agency. It’s a conventional Silicon Valley private capital corporation (its relationship to Mozilla Foundation is unconventional, true, but I argue that’s irrelevant to how MoCo organizes itself these days). And the only process by which Silicon Valley seems to understand how to extract money to pay off their venture capitalists is products and consumers. Now, Mozilla Corporation doesn’t have venture capital. You can read in the State of Mozilla that we operate at a profit each and every year with net assets valued at over a billion USD. But the environment in which MoCo operates — the place from which we hire our C-Suite, the place where the people writing the checks live — is saturated in venture capital and the ways of thinking it encourages. This means Mozilla Corporation acts like its Bay Area peers, even though it’s special. Even though it doesn’t have to. This means it does layoffs even when it doesn’t need to. Even when there’s no shareholders or fund managers to impress. This means it increasingly speaks in terms of products and customers instead of projects and users. This means it quickly loses sight of anything specifically Mozilla-ish about Mozilla (like the community that underpins specific systems crucial to us continuing to exist (support and l10n for two examples) as well as the general systems of word-of-mouth and keeping Mozilla and Firefox relevant enough that tech press keep writing about us and grandpas keep installing us) because it doesn’t fit the patterns of thought that developed while directing leveraged capital. (( Which I don’t like, if my tone isn’t coming across clearly enough for you to have guessed. )) Okay, that’s more than enough editorial for a Moziversary post. Let’s get to the predictions for the next year: I still won’t blog as much as I’d like, “FOG Migration” might actually happen! We’ve finally managed to convince Firefox folks just how great Glean is and they might actually commit official resources! I predict that we’re still sending Legacy Telemetry by the end of next year, but only bits and pieces. A weak shadow of what we send today. There’ll be an All Hands, but depending on the result of the US federal election in November I might not attend because its location has been announced as Washington DC and I don’t know if the United States will be in a state next year to be trusted to keep me safe, We will stop putting AI in everything and hoping to accidentally make a product that’ll somehow make money and instead focus on finding problems Mozilla can solve and only then interrogating whether AI will help The search for the new CEO will not have completed by next October so I’ll still be three CEOs in, instead of four I will execute on my hardware refresh on time this February, and maybe also get a new monitor so I’m not using my personal one for work. Let’s see how it goes! Til next time. :chutten
  • The Talospace Project: Running Thunderbird with the OpenPower Baseline JIT (2024/10/18 00:34)
The issues with Ion and Wasm in OpenPower Firefox notwithstanding, the Baseline JIT works well in Firefox ESR128, and many of you use it (including yours truly). Of course, that makes Thunderbird look sluggish without it. I wasn't able to get a full LTO-PGO build for Thunderbird to build properly so far with gcc (workin' on it), but with the JIT patches for ESR128 an LTO optimized build will complete and run, and that's good enough for now. The diff for the .mozconfig is more or less the following:
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++
mk_add_options MOZ_MAKE_FLAGS="-j24"
#ac_add_options --enable-application=browser
#ac_add_options MOZ_PGO=1
#
ac_add_options --enable-project=comm/mail
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/tbobj
ac_add_options --enable-optimize="-O3 -mcpu=power9 -fpermissive"
ac_add_options --enable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options --with-libclang-path=/usr/lib64
export GN=/home/censored/bin/gn # if you haz
export RUSTC_OPT_LEVEL=2
You can use a unified .mozconfig like this to handle both the browser and the E-mail client; if you do, to build the browser the commented lines should be uncommented and the two lines below the previously commented section should be commented. You'll need comm-central embedded in your ESR128 tree as per the build instructions, and you may want to create an .hg/hgignore file inside your ESR128 source directory as well to keep changes to the core and Tbird from clashing, something like
^tbobj/
^comm/
which will ignore those directories but isn't a change to .hgignore that you have to manually edit out. Once constructed, your built client will be in tbobj/. If you were using a prebuilt Thunderbird before, you may need to start it with tbobj/dist/bin/thunderbird -p default-release (substitute your profile name if it differs) to make sure you get your old mailbox back, though as always backup your profile first.
  • Firefox Add-on Reviews: YouTube your way — browser extensions put you in charge of your video experience (2024/10/17 22:56)
    YouTube wants you to experience YouTube in very prescribed ways. But with the right browser extension, you’re free to alter YouTube to taste. Change the way the site looks, behaves, and delivers your favorite videos.  Enhancer for YouTube With dozens of customization features, Enhancer for YouTube has the power to dramatically reorient the way you watch videos.  While a bunch of customization options may seem overwhelming, Enhancer for YouTube actually makes it very simple to navigate its settings and select just your favorite features. You can even choose which of your preferred features will display in the extension’s easy access interface that appears just beneath the video player. <figcaption class="wp-element-caption">Enhancer for YouTube offers easy access controls just beneath the video player.</figcaption> Key features…  Customize video player size  Change YouTube’s look with a dark theme Volume booster Ad blocking (with ability to whitelist channels you OK for ads) Take quick screenshots of videos Change playback speed Set default video quality from low to high def Shortcut configuration Return YouTube Dislike Do you like the Dislike? YouTube removed the display that revealed the number of thumbs-down Dislikes a video has, but with Return YouTube Dislike you can bring back the brutal truth.  “Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.” Firefox user OFG “i have never smashed 5 stars faster.” Firefox user 12918016 YouTube High Definition Though its primary function is to automatically play all YouTube videos in their highest possible resolution, YouTube High Definition has a few other fine features to offer.  In addition to automatic HD, YouTube High Definition can… Customize video player size HD support for clips embedded on external sites Specify your ideal resolution (4k – 144p) Set a preferred volume level  Also automatically plays the highest quality audio YouTube NonStop So simple. So awesome. YouTube NonStop remedies the headache of interrupting your music with that awful “Video paused. Continue watching?” message.  Works on YouTube and YouTube Music. You’re now free to navigate away from your YouTube tab for as long as you like and not fret that the rock will stop rolling.  Unhook: Remove YouTube Recommended Videos & Comments Instant serenity for YouTube! Unhook lets you strip away unwanted distractions like the promotional sidebar, endscreen suggestions, trending tab, and much more.  More than two dozen customization options make this an essential extension for anyone seeking escape from YouTube rabbit holes. You can even hide notifications and live chat boxes.  “This is the best extension to control YouTube usage, and not let YouTube control you.” Firefox user Shubham Mandiya PocketTube If you subscribe to a lot of YouTube channels PocketTube is a fantastic way to organize all your subscriptions by themed collections.  Group your channel collections by subject, like “Sports,” “Cooking,” “Cat videos” or whatever. Other key features include… Add custom icons to easily identify your channel collections Customize your feed so you just see videos you haven’t watched yet, prioritize videos from certain channels, plus other content settings Integrates seamlessly with YouTube homepage  Sync collections across Firefox/Android/iOS using Google Drive and Chrome Profiler <figcaption class="wp-element-caption">PocketTube keeps your channel collections neatly tucked away to the side. 
</figcaption> AdBlocker for YouTube It’s not just you who’s noticed a lot more ads lately. Regain control with AdBlocker for YouTube. The extension very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster, more focused YouTube. SponsorBlock It’s a terrible experience when you’re enjoying a video or music on YouTube and you’re suddenly interrupted by a blaring ad. SponsorBlock solves this problem in a highly effective and original way. Leveraging the power of crowdsourced information to locate precisely where interruptive sponsored segments appear in videos, SponsorBlock learns where to automatically skip them with its ever-growing database of videos. You can also participate in the project by reporting sponsored segments whenever you encounter them (it’s easy to report right there on the video page with the extension). SponsorBlock can also learn to skip non-music portions of music videos and intros/outros. If you’d like a deeper dive into SponsorBlock, we profiled its developer and open source project on Mozilla Distilled. We hope one of these extensions enhances the way you enjoy YouTube. Feel free to explore more great media extensions on addons.mozilla.org.
  • Firefox Add-on Reviews: How to turn your household pet into a Firefox theme (2024/10/17 22:54)
    Themes are a fun way to change the visual appearance of Firefox and give the browser a look that’s all your own. You’re free to explore more than a half-million community created themes on addons.mozilla.org (AMO), or better yet, create your own custom theme. Best of all — create a theme featuring a beloved pet! Then you can take your little buddy with you wherever you go on the web.  (You’ll need a Mozilla account to create and publish Firefox themes on AMO.) Prepare your pet pic for upload I find it helpful to first size my image properly. For Firefox themes, we recommend images with a height between 100 – 200 pixels. So I might first prepare an image with a couple of sizing options, perhaps one at 100 pixel height and another at 200, and see what works best. (Note: as you resize an image, be sure its height and width parameters change in sync so your image maintains proper dimensions.) <figcaption class="wp-element-caption">Tootsie strikes a pose to become a Firefox theme. </figcaption> Depending on what type of image editing software you have on your computer (PC users can resize pics with the standard Photo or Paint apps, while Mac users may be familiar with Preview), find the controls to resize and save your images in the recommended range. Supported file formats include PNG, JPG, APNG, SVG, or GIF (not animated) and can be up to 6.9MB.  Upload your pet pic & select custom colors Go to AMO’s Theme Generator page and… Name your theme Upload your image Select colors for the header background, text and icons <figcaption class="wp-element-caption">Point-and-click color palettes make it easy to create complementary color combinations. </figcaption> Once you like the way your new theme looks in the preview display, click Finish Theme and you’re done! All new theme submissions must first pass a review process, but that usually only takes a day or two, after which you’ll receive an email notifying you that your personalized pet theme is ready to install on Firefox. Now Tootsie accompanies me everywhere online, although sometimes she just stares at me.  For more tips on creating Firefox themes, please see this Theme Generator guide or visit the Extension Workshop. 
  • The Rust Programming Language Blog: Announcing Rust 1.82.0 (2024/10/17 00:00)
    The Rust team is happy to announce a new version of Rust, 1.82.0. Rust is a programming language empowering everyone to build reliable and efficient software. If you have a previous version of Rust installed via rustup, you can get 1.82.0 with: $ rustup update stable If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.82.0. If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across! What's in 1.82.0 stable cargo info Cargo now has an info subcommand to display information about a package in the registry, fulfilling a long standing request just shy of its tenth anniversary! Several third-party extensions like this have been written over the years, and this implementation was developed as cargo-information before merging into Cargo itself. For example, here's what you could see for cargo info cc: cc #build-dependencies A build-time dependency for Cargo build scripts to assist in invoking the native C compiler to compile native C code into a static archive to be linked into Rust code. version: 1.1.23 (latest 1.1.30) license: MIT OR Apache-2.0 rust-version: 1.63 documentation: https://docs.rs/cc homepage: https://github.com/rust-lang/cc-rs repository: https://github.com/rust-lang/cc-rs crates.io: https://crates.io/crates/cc/1.1.23 features: jobserver = [] parallel = [dep:libc, dep:jobserver] note: to see how you depend on cc, run `cargo tree --invert --package cc@1.1.23` By default, cargo info describes the package version in the local Cargo.lock, if any. As you can see, it will indicate when there's a newer version too, and cargo info cc@1.1.30 would report on that. Apple target promotions macOS on 64-bit ARM is now Tier 1 The Rust target aarch64-apple-darwin for macOS on 64-bit ARM (M1-family or later Apple Silicon CPUs) is now a tier 1 target, indicating our highest guarantee of working properly. As the platform support page describes, every change in the Rust repository must pass full tests on every tier 1 target before it can be merged. This target was introduced as tier 2 back in Rust 1.49, making it available in rustup. This new milestone puts the aarch64-apple-darwin target on par with the 64-bit ARM Linux and the X86 macOS, Linux, and Windows targets. Mac Catalyst targets are now Tier 2 Mac Catalyst is a technology by Apple that allows running iOS applications natively on the Mac. This is especially useful when testing iOS-specific code, as cargo test --target=aarch64-apple-ios-macabi --target=x86_64-apple-ios-macabi mostly just works (in contrast to the usual iOS targets, which need to be bundled using external tooling before they can be run on a native device or in the simulator). The targets are now tier 2, and can be downloaded with rustup target add aarch64-apple-ios-macabi x86_64-apple-ios-macabi, so now is an excellent time to update your CI pipeline to test that your code also runs in iOS-like environments. Precise capturing use<..> syntax Rust now supports use<..> syntax within certain impl Trait bounds to control which generic lifetime parameters are captured. Return-position impl Trait (RPIT) types in Rust capture certain generic parameters. Capturing a generic parameter allows that parameter to be used in the hidden type. That in turn affects borrow checking. 
In Rust 2021 and earlier editions, lifetime parameters are not captured in opaque types on bare functions and on functions and methods of inherent impls unless those lifetime parameters are mentioned syntactically in the opaque type. E.g., this is an error: //@ edition: 2021 fn f(x: &()) -> impl Sized { x } error[E0700]: hidden type for `impl Sized` captures lifetime that does not appear in bounds --> src/main.rs:1:30 | 1 | fn f(x: &()) -> impl Sized { x } | --- ---------- ^ | | | | | opaque type defined here | hidden type `&()` captures the anonymous lifetime defined here | help: add a `use<...>` bound to explicitly capture `'_` | 1 | fn f(x: &()) -> impl Sized + use<'_> { x } | +++++++++ With the new use<..> syntax, we can fix this, as suggested in the error, by writing: fn f(x: &()) -> impl Sized + use<'_> { x } Previously, correctly fixing this class of error required defining a dummy trait, conventionally called Captures, and using it as follows: trait Captures<T: ?Sized> {} impl<T: ?Sized, U: ?Sized> Captures<T> for U {} fn f(x: &()) -> impl Sized + Captures<&'_ ()> { x } That was called "the Captures trick", and it was a bit baroque and subtle. It's no longer needed. There was a less correct but more convenient way to fix this that was often used called "the outlives trick". The compiler even previously suggested doing this. That trick looked like this: fn f(x: &()) -> impl Sized + '_ { x } In this simple case, the trick is exactly equivalent to + use<'_> for subtle reasons explained in RFC 3498. However, in real life cases, this overconstrains the bounds on the returned opaque type, leading to problems. For example, consider this code, which is inspired by a real case in the Rust compiler: struct Ctx<'cx>(&'cx u8); fn f<'cx, 'a>( cx: Ctx<'cx>, x: &'a u8, ) -> impl Iterator<Item = &'a u8> + 'cx { core::iter::once_with(move || { eprintln!("LOG: {}", cx.0); x }) //~^ ERROR lifetime may not live long enough } We can't remove the + 'cx, since the lifetime is used in the hidden type and so must be captured. Neither can we add a bound of 'a: 'cx, since these lifetimes are not actually related and it won't in general be true that 'a outlives 'cx. If we write + use<'cx, 'a> instead, however, this will work and have the correct bounds. There are some limitations to what we're stabilizing today. The use<..> syntax cannot currently appear within traits or within trait impls (but note that there, in-scope lifetime parameters are already captured by default), and it must list all in-scope generic type and const parameters. We hope to lift these restrictions over time. Note that in Rust 2024, the examples above will "just work" without needing use<..> syntax (or any tricks). This is because in the new edition, opaque types will automatically capture all lifetime parameters in scope. This is a better default, and we've seen a lot of evidence about how this cleans up code. In Rust 2024, use<..> syntax will serve as an important way of opting-out of that default. For more details about use<..> syntax, capturing, and how this applies to Rust 2024, see the "RPIT lifetime capture rules" chapter of the edition guide. For details about the overall direction, see our recent blog post, "Changes to impl Trait in Rust 2024". Native syntax for creating a raw pointer Unsafe code sometimes has to deal with pointers that may dangle, may be misaligned, or may not point to valid data. A common case where this comes up are repr(packed) structs. 
In such a case, it is important to avoid creating a reference, as that would cause undefined behavior. This means the usual & and &mut operators cannot be used, as those create a reference -- even if the reference is immediately cast to a raw pointer, it's too late to avoid the undefined behavior. For several years, the macros std::ptr::addr_of! and std::ptr::addr_of_mut! have served this purpose. Now the time has come to provide a proper native syntax for this operation: addr_of!(expr) becomes &raw const expr, and addr_of_mut!(expr) becomes &raw mut expr. For example: #[repr(packed)] struct Packed { not_aligned_field: i32, } fn main() { let p = Packed { not_aligned_field: 1_82 }; // This would be undefined behavior! // It is rejected by the compiler. //let ptr = &p.not_aligned_field as *const i32; // This is the old way of creating a pointer. let ptr = std::ptr::addr_of!(p.not_aligned_field); // This is the new way. let ptr = &raw const p.not_aligned_field; // Accessing the pointer has not changed. // Note that `val = *ptr` would be undefined behavior because // the pointer is not aligned! let val = unsafe { ptr.read_unaligned() }; } The native syntax makes it more clear that the operand expression of these operators is interpreted as a place expression. It also avoids the term "address-of" when referring to the action of creating a pointer. A pointer is more than just an address, so Rust is moving away from terms like "address-of" that reaffirm a false equivalence of pointers and addresses. Safe items with unsafe extern Rust code can use functions and statics from foreign code. The type signatures of these foreign items are provided in extern blocks. Historically, all items within extern blocks have been unsafe to use, but we didn't have to write unsafe anywhere on the extern block itself. However, if a signature within the extern block is incorrect, then using that item will result in undefined behavior. Would that be the fault of the person who wrote the extern block, or the person who used that item? We've decided that it's the responsibility of the person writing the extern block to ensure that all signatures contained within it are correct, and so we now allow writing unsafe extern: unsafe extern { pub safe static TAU: f64; pub safe fn sqrt(x: f64) -> f64; pub unsafe fn strlen(p: *const u8) -> usize; } One benefit of this is that items within an unsafe extern block can be marked as safe to use. In the above example, we can call sqrt or read TAU without using unsafe. Items that aren't marked with either safe or unsafe are conservatively assumed to be unsafe. In future releases, we'll be encouraging the use of unsafe extern with lints. Starting in Rust 2024, using unsafe extern will be required. For further details, see RFC 3484 and the "Unsafe extern blocks" chapter of the edition guide. Unsafe attributes Some Rust attributes, such as no_mangle, can be used to cause undefined behavior without any unsafe block. If this were regular code we would require them to be placed in an unsafe {} block, but so far attributes have not had comparable syntax. To reflect the fact that these attributes can undermine Rust's safety guarantees, they are now considered "unsafe" and should be written as follows: #[unsafe(no_mangle)] pub fn my_global_function() { } The old form of the attribute (without unsafe) is currently still accepted, but might be linted against at some point in the future, and will be a hard error in Rust 2024. 
This affects the following attributes: no_mangle link_section export_name For further details, see the "Unsafe attributes" chapter of the edition guide. Omitting empty types in pattern matching Patterns which match empty (a.k.a. uninhabited) types by value can now be omitted: use std::convert::Infallible; pub fn unwrap_without_panic<T>(x: Result<T, Infallible>) -> T { let Ok(x) = x; // the `Err` case does not need to appear x } This works with empty types such as a variant-less enum Void {}, or structs and enums with a visible empty field and no #[non_exhaustive] attribute. It will also be particularly useful in combination with the never type !, although that type is still unstable at this time. There are some cases where empty patterns must still be written. For reasons related to uninitialized values and unsafe code, omitting patterns is not allowed if the empty type is accessed through a reference, pointer, or union field: pub fn unwrap_ref_without_panic<T>(x: &Result<T, Infallible>) -> &T { match x { Ok(x) => x, // this arm cannot be omitted because of the reference Err(infallible) => match *infallible {}, } } To avoid interfering with crates that wish to support several Rust versions, match arms with empty patterns are not yet reported as “unreachable code” warnings, despite the fact that they can be removed. Floating-point NaN semantics and const Operations on floating-point values (of type f32 and f64) are famously subtle. One of the reasons for this is the existence of NaN ("not a number") values which are used to represent e.g. the result of 0.0 / 0.0. What makes NaN values subtle is that more than one possible NaN value exists. A NaN value has a sign (that can be checked with f.is_sign_positive()) and a payload (that can be extracted with f.to_bits()). However, both the sign and payload of NaN values are entirely ignored by == (which always returns false). Despite very successful efforts to standardize the behavior of floating-point operations across hardware architectures, the details of when a NaN is positive or negative and what its exact payload is differ across architectures. To make matters even more complicated, Rust and its LLVM backend apply optimizations to floating-point operations when the exact numeric result is guaranteed not to change, but those optimizations can change which NaN value is produced. For instance, f * 1.0 may be optimized to just f. However, if f is a NaN, this can change the exact bit pattern of the result! With this release, Rust standardizes on a set of rules for how NaN values behave. This set of rules is not fully deterministic, which means that the result of operations like (0.0 / 0.0).is_sign_positive() can differ depending on the hardware architecture, optimization levels, and the surrounding code. Code that aims to be fully portable should avoid using to_bits and should use f.signum() == 1.0 instead of f.is_sign_positive(). However, the rules are carefully chosen to still allow advanced data representation techniques such as NaN boxing to be implemented in Rust code. For more details on what the exact rules are, check out our documentation. With the semantics for NaN values settled, this release also permits the use of floating-point operations in const fn. Due to the reasons described above, operations like (0.0 / 0.0).is_sign_positive() (which will be const-stable in Rust 1.83) can produce a different result when executed at compile-time vs at run-time. 
This is not a bug, and code must not rely on a const fn always producing the exact same result. Constants as assembly immediates The const assembly operand now provides a way to use integers as immediates without first storing them in a register. As an example, we implement a syscall to write by hand: const WRITE_SYSCALL: c_int = 0x01; // syscall 1 is `write` const STDOUT_HANDLE: c_int = 0x01; // `stdout` has file handle 1 const MSG: &str = "Hello, world!\n"; let written: usize; // Signature: `ssize_t write(int fd, const void buf[], size_t count)` unsafe { core::arch::asm!( "mov rax, {SYSCALL} // rax holds the syscall number", "mov rdi, {OUTPUT} // rdi is `fd` (first argument)", "mov rdx, {LEN} // rdx is `count` (third argument)", "syscall // invoke the syscall", "mov {written}, rax // save the return value", SYSCALL = const WRITE_SYSCALL, OUTPUT = const STDOUT_HANDLE, LEN = const MSG.len(), in("rsi") MSG.as_ptr(), // rsi is `buf *` (second argument) written = out(reg) written, ); } assert_eq!(written, MSG.len()); Output: Hello, world! Playground link. In the above, a statement such as LEN = const MSG.len() populates the format specifier LEN with an immediate that takes the value of MSG.len(). This can be seen in the generated assembly (the value is 14): lea rsi, [rip + .L__unnamed_3] mov rax, 1 # rax holds the syscall number mov rdi, 1 # rdi is `fd` (first argument) mov rdx, 14 # rdx is `count` (third argument) syscall # invoke the syscall mov rax, rax # save the return value See the reference for more details. Safely addressing unsafe statics This code is now allowed: static mut STATIC_MUT: Type = Type::new(); extern "C" { static EXTERN_STATIC: Type; } fn main() { let static_mut_ptr = &raw mut STATIC_MUT; let extern_static_ptr = &raw const EXTERN_STATIC; } In an expression context, STATIC_MUT and EXTERN_STATIC are place expressions. Previously, the compiler's safety checks were not aware that the raw ref operator did not actually affect the operand's place, treating it as a possible read or write to a pointer. No unsafety is actually present, however, as it just creates a pointer. Relaxing this may cause problems where some unsafe blocks are now reported as unused if you deny the unused_unsafe lint, but they are now only useful on older versions. Annotate these unsafe blocks with #[allow(unused_unsafe)] if you wish to support multiple versions of Rust, as in this example diff: static mut STATIC_MUT: Type = Type::new(); fn main() { + #[allow(unused_unsafe)] let static_mut_ptr = unsafe { std::ptr::addr_of_mut!(STATIC_MUT) }; } A future version of Rust is expected to generalize this to other expressions which would be safe in this position, not just statics. 
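To make this concrete, here is a minimal sketch combining the new &raw mut syntax with the relaxed safety check on statics described above (the static name and values are made up for illustration): taking a raw pointer to a static mut no longer needs an unsafe block, while reading or writing through that pointer still does.

static mut COUNTER: u32 = 0;

fn main() {
    // New in 1.82: no `unsafe` is needed just to create the pointer,
    // because taking a raw borrow of a `static mut` neither reads nor writes it.
    let p: *mut u32 = &raw mut COUNTER;

    // Dereferencing is still unsafe: the caller must rule out data races
    // and other conflicting accesses.
    unsafe {
        p.write(41);
        assert_eq!(p.read() + 1, 42);
    }
}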
Stabilized APIs std::thread::Builder::spawn_unchecked std::str::CharIndices::offset std::option::Option::is_none_or [T]::is_sorted [T]::is_sorted_by [T]::is_sorted_by_key Iterator::is_sorted Iterator::is_sorted_by Iterator::is_sorted_by_key std::future::Ready::into_inner std::iter::repeat_n impl<T: Clone> DoubleEndedIterator for Take<Repeat<T>> impl<T: Clone> ExactSizeIterator for Take<Repeat<T>> impl<T: Clone> ExactSizeIterator for Take<RepeatWith<T>> impl Default for std::collections::binary_heap::Iter impl Default for std::collections::btree_map::RangeMut impl Default for std::collections::btree_map::ValuesMut impl Default for std::collections::vec_deque::Iter impl Default for std::collections::vec_deque::IterMut Rc<T>::new_uninit Rc<T>::assume_init Rc<[T]>::new_uninit_slice Rc<[MaybeUninit<T>]>::assume_init Arc<T>::new_uninit Arc<T>::assume_init Arc<[T]>::new_uninit_slice Arc<[MaybeUninit<T>]>::assume_init Box<T>::new_uninit Box<T>::assume_init Box<[T]>::new_uninit_slice Box<[MaybeUninit<T>]>::assume_init core::arch::x86_64::_bextri_u64 core::arch::x86_64::_bextri_u32 core::arch::x86::_mm_broadcastsi128_si256 core::arch::x86::_mm256_stream_load_si256 core::arch::x86::_tzcnt_u16 core::arch::x86::_mm_extracti_si64 core::arch::x86::_mm_inserti_si64 core::arch::x86::_mm_storeu_si16 core::arch::x86::_mm_storeu_si32 core::arch::x86::_mm_storeu_si64 core::arch::x86::_mm_loadu_si16 core::arch::x86::_mm_loadu_si32 core::arch::wasm32::u8x16_relaxed_swizzle core::arch::wasm32::i8x16_relaxed_swizzle core::arch::wasm32::i32x4_relaxed_trunc_f32x4 core::arch::wasm32::u32x4_relaxed_trunc_f32x4 core::arch::wasm32::i32x4_relaxed_trunc_f64x2_zero core::arch::wasm32::u32x4_relaxed_trunc_f64x2_zero core::arch::wasm32::f32x4_relaxed_madd core::arch::wasm32::f32x4_relaxed_nmadd core::arch::wasm32::f64x2_relaxed_madd core::arch::wasm32::f64x2_relaxed_nmadd core::arch::wasm32::i8x16_relaxed_laneselect core::arch::wasm32::u8x16_relaxed_laneselect core::arch::wasm32::i16x8_relaxed_laneselect core::arch::wasm32::u16x8_relaxed_laneselect core::arch::wasm32::i32x4_relaxed_laneselect core::arch::wasm32::u32x4_relaxed_laneselect core::arch::wasm32::i64x2_relaxed_laneselect core::arch::wasm32::u64x2_relaxed_laneselect core::arch::wasm32::f32x4_relaxed_min core::arch::wasm32::f32x4_relaxed_max core::arch::wasm32::f64x2_relaxed_min core::arch::wasm32::f64x2_relaxed_max core::arch::wasm32::i16x8_relaxed_q15mulr core::arch::wasm32::u16x8_relaxed_q15mulr core::arch::wasm32::i16x8_relaxed_dot_i8x16_i7x16 core::arch::wasm32::u16x8_relaxed_dot_i8x16_i7x16 core::arch::wasm32::i32x4_relaxed_dot_i8x16_i7x16_add core::arch::wasm32::u32x4_relaxed_dot_i8x16_i7x16_add These APIs are now stable in const contexts: std::task::Waker::from_raw std::task::Context::from_waker std::task::Context::waker $integer::from_str_radix std::num::ParseIntError::kind Other changes Check out everything that changed in Rust, Cargo, and Clippy. Contributors to 1.82.0 Many people came together to create Rust 1.82.0. We couldn't have done it without all of you. Thanks!
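To round out the release notes above, here is a small, hedged sketch showing a few of the newly stabilized library items in use (Option::is_none_or, the is_sorted methods, and std::iter::repeat_n); the names and values are made up for illustration.

fn main() {
    // `Option::is_none_or` is true when the option is `None` or the predicate holds.
    let age: Option<u32> = Some(34);
    assert!(age.is_none_or(|a| a >= 18));
    assert!(None::<u32>.is_none_or(|a| a >= 18));

    // `is_sorted` is now stable on slices (and iterators).
    assert!([1, 2, 2, 9].is_sorted());
    assert!(!["b", "a"].is_sorted());

    // `std::iter::repeat_n` repeats a value an exact number of times.
    let hellos: Vec<&str> = std::iter::repeat_n("hello", 3).collect();
    assert_eq!(hellos, ["hello", "hello", "hello"]);
}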
  • Mozilla Performance Blog: Announcing PerfCompare: the new comparison tool ! (2024/10/16 23:12)
    About two years ago, I joined the performance test team to help build PerfCompare, an improved performance tool designed to replace Perfherder’s Compare View. Around that time, we introduced PerfCompare to garner enthusiasm and feedback in creating a new workflow that would reduce the cognitive load and confusion of its predecessor. And, if we’re being honest, a tool that would also be more enjoyable from a design perspective for comparing the results of performance tests. But most importantly, we wanted to add new, relevant features while keeping Firefox engineers foremost in mind. Started from the bottom… PerfCompare’s first home page Now, after working with Senior Product Designer Dasha Andriyenko to create a sleek, intuitive UI/UX, integrating feedback from engineers and leaders across different teams, and achieving key milestones, we’re excited to announce that PerfCompare is live and ready to use at perf.compare. Now we’re on top! PerfCompare today! Time to celebrate! 🎉PerfCompare’s ultimate purpose is to become a tool that empowers developers to make performance testing a core part of their development process.We are targeting the end of this year to deprecate Compare View and make PerfCompare the primary tool to help Firefox developers analyze the performance impact of their patches. We are in the process of updating the Firefox source docs, but documentation for PerfCompare can be found at PerfCompare Documentation. It provides details on all the new features currently available on PerfCompare and instructions on how to use the tool.Some key highlights regarding features include: Allowing comparisons of up to three new revisions/patches versus the base revision of a repository (mozilla-central, autoland, etc.) Searching revisions by short hash, long hash, or author email A more visible and separate workflow for comparing revisions over time Editing the compared revisions on the results page to compute new comparisons for an updated results table without having to return to the home page Expanded rows in the results table with graphs for the base and new revisions And there’s much more in the works!I’d like to extend a huge congratulations to the performance test team, Dasha, and everyone who has contributed feedback and suggestions to our user research, team meetings, and presentations. We owe PerfCompare’s launch and continued improvement to you! If you have any questions or comments about PerfCompare, you can find us in the #PerfCompare matrix channel or join our #PerfCompareUserResearch channel. If you experience any issues, please report them on Bugzilla.
  • Spidermonkey Development Blog: 75x faster: optimizing the Ion compiler backend (2024/10/16 17:00)
    In September, machine learning engineers at Mozilla filed a bug report indicating that Firefox was consuming excessive memory and CPU resources while running Microsoft’s ONNX Runtime (a machine learning library) compiled to WebAssembly. This post describes how we addressed this and some of our longer-term plans for improving WebAssembly performance in the future. The problem SpiderMonkey has two compilers for WebAssembly code. First, a Wasm module is compiled with the Wasm Baseline compiler, a compiler that generates decent machine code very quickly. This is good for startup time because we can start executing Wasm code almost immediately after downloading it. Andy Wingo wrote a nice blog post about this Baseline compiler. When Baseline compilation is finished, we compile the Wasm module with our more advanced Ion compiler. This backend produces faster machine code, but compilation time is a lot higher. The issue with the ONNX module was that the Ion compiler backend took a long time and used a lot of memory to compile it. On my Linux x64 machine, Ion-compiling this module took about 5 minutes and used more than 4 GB of memory. Even though this work happens on background threads, this was still too much overhead. Optimizing the Ion backend When we investigated this, we noticed that this Wasm module had some extremely large functions. For the largest one, Ion’s MIR control flow graph contained 132856 basic blocks. This uncovered some performance cliffs in our compiler backend. VirtualRegister live ranges In Ion’s register allocator, each VirtualRegister has a list of LiveRange objects. We were using a linked list for this, sorted by start position. This caused quadratic behavior when allocating registers: the allocator often splits live ranges into smaller ranges and we’d have to iterate over the list for each new range to insert it at the correct position to keep the list sorted. This was very slow for virtual registers with thousands of live ranges. To address this, I tried a few different data structures. The first attempt was to use an AVL tree instead of a linked list and that was a big improvement, but the performance was still not ideal and we were also worried about memory usage increasing even more. After this we realized we could store live ranges in a vector (instead of linked list) that’s optionally sorted by decreasing start position. We also made some changes to ensure the initial live ranges are sorted when we create them, so that we could just append ranges to the end of the vector. The observation here was that the core of the register allocator, where it assigns registers or stack slots to live ranges, doesn’t actually require the live ranges to be sorted. We therefore now just append new ranges to the end of the vector and mark the vector unsorted. Right before the final phase of the allocator, where we again rely on the live ranges being sorted, we do a single std::sort operation on the vector for each virtual register with unsorted live ranges. Debug assertions are used to ensure that functions that require the vector to be sorted are not called when it’s marked unsorted. Vectors are also better for cache locality and they let us use binary search in a few places. When I was discussing this with Julian Seward, he pointed out that Chris Fallin also moved away from linked lists to vectors in Cranelift’s port of Ion’s register allocator. 
It’s always good to see convergent evolution :) This change from sorted linked lists to optionally-sorted vectors made Ion compilation of this Wasm module about 20 times faster, down to 14 seconds. Semi-NCA The next problem that stood out in performance profiles was the Dominator Tree Building compiler pass, in particular a function called ComputeImmediateDominators. This function determines the immediate dominator block for each basic block in the MIR graph. The algorithm we used for this (based on A Simple, Fast Dominance Algorithm by Cooper et al) is relatively simple but didn’t scale well to very large graphs. Semi-NCA (from Linear-Time Algorithms for Dominators and Related Problems by Loukas Georgiadis) is a different algorithm that’s also used by LLVM and the Julia compiler. I prototyped this and was surprised to see how much faster it was: it got our total compilation time down from 14 seconds to less than 8 seconds. For a single-threaded compilation, it reduced the time under ComputeImmediateDominators from 7.1 seconds to 0.15 seconds. Fortunately it was easy to run both algorithms in debug builds and assert they computed the same immediate dominator for each basic block. After a week of fuzz-testing, no problems were found and we landed a patch that removed the old implementation and enabled the Semi-NCA code. Sparse BitSets For each basic block, the register allocator allocated a (dense) bit set with a bit for each virtual register. These bit sets are used to check which virtual registers are live at the start of a block. For the largest function in the ONNX Wasm module, this used a lot of memory: 199477 virtual registers x 132856 basic blocks is at least 3.1 GB just for these bit sets! Because most virtual registers have short live ranges, these bit sets had relatively few bits set to 1. We replaced these dense bit sets with a new SparseBitSet data structure that uses a hashmap to store 32 bits per entry. Because most of these hashmaps contain a small number of entries, it uses an InlineMap to optimize for this: it’s a data structure that stores entries either in a small inline array or (when the array is full) in a hashmap. We also optimized InlineMap to use a variant (a union type) for these two representations to save memory. This saved at least 3 GB of memory but also improved the compilation time for the Wasm module to 5.4 seconds. Faster move resolution The last issue that showed up in profiles was a function in the register allocator called createMoveGroupsFromLiveRangeTransitions. After the register allocator assigns a register or stack slot to each live range, this function is responsible for connecting pairs of live ranges by inserting moves. For example, if a value is stored in a register but is later spilled to memory, there will be two live ranges for its virtual register. This function then inserts a move instruction to copy the value from the register to the stack slot at the start of the second live range. This function was slow because it had a number of loops with quadratic behavior: for a move’s destination range, it would do a linear lookup to find the best source range. We optimized the main two loops to run in linear time instead of being quadratic, by taking more advantage of the fact that live ranges are sorted. With these changes, Ion can compile the ONNX Wasm module in less than 3.9 seconds on my machine, more than 75x faster than before these changes. 
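The SparseBitSet idea is easy to sketch outside of SpiderMonkey. The toy Rust version below is purely an illustration (the real implementation is C++ and additionally packs small maps into an inline array via InlineMap): it stores 32-bit words in a hash map keyed by word index, so a basic block where only a handful of the ~200,000 virtual registers are live costs a few map entries instead of a full dense bit vector.

use std::collections::HashMap;

/// Illustrative sparse bit set: one 32-bit word per occupied word index.
struct SparseBitSet {
    words: HashMap<u32, u32>,
}

impl SparseBitSet {
    fn new() -> Self {
        SparseBitSet { words: HashMap::new() }
    }

    /// Set the bit for virtual register `bit`.
    fn insert(&mut self, bit: u32) {
        *self.words.entry(bit / 32).or_insert(0) |= 1 << (bit % 32);
    }

    /// Check whether virtual register `bit` is live.
    fn contains(&self, bit: u32) -> bool {
        self.words
            .get(&(bit / 32))
            .is_some_and(|word| word & (1 << (bit % 32)) != 0)
    }
}

fn main() {
    let mut live = SparseBitSet::new();
    live.insert(7);
    live.insert(199_000); // a high-numbered register costs only one extra entry
    assert!(live.contains(7));
    assert!(!live.contains(8));
}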
Adobe Photoshop These changes not only improved performance for the ONNX Runtime module, but also for a number of other WebAssembly modules. A large Wasm module downloaded from the free online Adobe Photoshop demo can now be Ion-compiled in 14 seconds instead of 4 minutes. The JetStream 2 benchmark has a HashSet module that was affected by the quadratic move resolution code. Ion compilation time for it improved from 2.8 seconds to 0.2 seconds. New Wasm compilation pipeline Even though these are great improvements, spending at least 14 seconds (on a fast machine!) to fully compile Adobe Photoshop on background threads still isn’t an amazing user experience. We expect this to only get worse as more large applications are compiled to WebAssembly. To address this, our WebAssembly team is making great progress rearchitecting the Wasm compiler pipeline. This work will make it possible to Ion-compile individual Wasm functions as they warm up instead of compiling everything immediately. It will also unlock exciting new capabilities such as (speculative) inlining. Stay tuned for updates on this as we start rolling out these changes in Firefox. - Jan de Mooij, engineer on the SpiderMonkey team
  • Hacks.Mozilla.Org: Llamafile v0.8.14: a new UI, performance gains, and more (2024/10/16 13:32)
    We’ve just released Llamafile 0.8.14, the latest version of our popular open source AI tool. A Mozilla Builders project, Llamafile turns model weights into fast, convenient executables that run on most computers, making it easy for anyone to get the most out of open LLMs using the hardware they already have. New chat interface The key feature of this new release is our colorful new command line chat interface. When you launch a Llamafile we now automatically open this new chat UI for you, right there in the terminal. This new interface is fast, easy to use, and an all around simpler experience than the Web-based interface we previously launched by default. (That interface, which our project inherits from the upstream llama.cpp project, is still available and supports a range of features, including image uploads. Simply point your browser at port 8080 on localhost). Other recent improvements This new chat UI is just the tip of the iceberg. In the months since our last blog post here, lead developer Justine Tunney has been busy shipping a slew of new releases, each of which have moved the project forward in important ways. Here are just a few of the highlights: Llamafiler: We’re building our own clean sheet OpenAI-compatible API server, called Llamafiler. This new server will be more reliable, stable, and most of all faster than the one it replaces. We’ve already shipped the embeddings endpoint, which runs three times as fast as the one in llama.cpp. Justine is currently working on the completions endpoint, at which point Llamafiler will become the default API server for Llamafile. Performance improvements: With the help of open source contributors like k-quant inventor @Kawrakow Llamafile has enjoyed a series of dramatic speed boosts over the last few months. In particular, pre-fill (prompt evaluation) speed has improved dramatically on a variety of architectures: Intel Core i9 went from 100 tokens/second to 400 (4x). AMD Threadripper went from 300 tokens/second to 2,400 (8x). Even the modest Raspberry Pi 5 jumped from 8 tokens/second to 80 (10x!). When combined with the new high-speed embedding server described above, Llamafile has become one of the fastest ways to run complex local AI applications that use methods like retrieval augmented generation (RAG). Support for powerful new models: Llamafile continues to keep pace with progress in open LLMs, adding support for dozens of new models and architectures, ranging in size from 405 billion parameters all the way down to 1 billion. Here are just a few of the new Llamafiles available for download on Hugging Face: Llama 3.2 1B and 3B: offering extremely impressive performance and quality for their small size. (Here’s a video from our own Mike Heavers showing it in action.) Llama 3.1 405B: a true “frontier model” that’s possible to run at home with sufficient system RAM. OLMo 7B: from our friends at the Allen Institute, OLMo is one of the first truly open and transparent models available. TriLM: a new “1.58 bit” tiny model that is optimized for CPU inference and points to a near future where matrix multiplication might no longer rule the day. Whisperfile, speech-to-text in a single file: Thanks to contributions from community member @cjpais, we’ve created Whisperfile, which does for whisper.cpp what Llamafile did for llama.cpp: that is, turns it into a multi-platform executable that runs nearly everywhere. 
Whisperfile thus makes it easy to use OpenAI’s Whisper technology to efficiently convert speech into text, no matter which kind of hardware you have. Get involved Our goal is for Llamafile to become a rock-solid foundation for building sophisticated locally-running AI applications. Justine’s work on the new Llamafiler server is a big part of that equation, but so is the ongoing work of supporting new models and optimizing inference performance for as many users as possible. We’re proud and grateful that some of the project’s biggest breakthroughs in these areas, and others, have come from the community, with contributors like @Kawrakow, @cjpais, @mofosyne, and @Djip007 routinely leaving their mark. We invite you to join them, and us. We welcome issues and PRs in our GitHub repo. And we welcome you to become a member of Mozilla’s AI Discord server, which has a dedicated channel just for Llamafile where you can get direct access to the project team. Hope to see you there!   The post Llamafile v0.8.14: a new UI, performance gains, and more appeared first on Mozilla Hacks - the Web developer blog.
  • Don Marti: Another easy-ish state law: the No Second-class Citizenship Act (2024/10/15 00:00)
    Tired of Big Tech companies giving consumer protections, fraud protections, and privacy protections to their users in other countries but not to people at home in the USA? Here’s another state law we could use, and I bet it could be a two-page PDF. If a company has more than 10% of our state’s residents as customers or users, and also does business in 50 or more countries, then if they offer a privacy or consumer protection feature in a non-US location they must also offer it in our state within 90 days. Have it enforced Texas SB 8 style, by individuals, so harder for Big Tech sockpuppet orgs to challenge. Reference Antitrust challenge to Facebook’s ‘superprofiling’ finally wraps in Germany — with Meta agreeing to data limits | TechCrunch We’ve asked Meta to confirm whether changes will be implemented globally — or only inside the German market where the Bundeskartellamt has jurisdiction. Related there ought to be a law (Big Tech lobbyists are expensive—instead of grinding out the PDFs they expect, make them fight an unpredictable distributed campaign of random-ish ideas, coded into bills that take the side of local small businesses?) Bonus links How the long-gone Habsburg Empire is still visible in Eastern European bureaucracies today The formal institutions of the empire ceased to exist with the collapse of the Habsburg Empire after World War I, breaking up into separate nation states that have seen several waves of drastic institutional changes since. We might therefore wonder whether differences in trust and corruption across areas that belonged to different empires in the past really still survive to this day. TikTok knows its app is harming kids, new internal documents show : NPR (this kind of stuff is why I’ll never love your brand—if a brand is fine with advertising on surveillance apps with all we know about how they work, then I’m enough opposed to them on fundamental issues that all transactions will be based on lack of trust.) Cloudflare Destroys Another Patent Troll, Gets Its Patents Released To The Public (time for some game theory) Conceptual models of space colonization (One that’s missing: Kurt Vonnegut’s concept involving large-scale outward transfer of genetic material. Probably most likely to happen if you add in Von Neumann machines and the systems required to grow live colonists from genetic data—which don’t exist but are not physically or economically impossible…) Cash incinerator OpenAI secures its $6.6 billion lifeline — ‘in the spirit of a donation’ (fwiw, there are still a bunch of copyright cases out there, too. (AI legal links) Related: The Subprime AI Crisis) The cheap chocolate system The giant chocolate companies want cocoa beans to be a commodity. They don’t want to worry about origin or yield–they simply want to buy indistinguishable cheap cacao. In fact, the buyers at these companies feel like they have no choice but to push for mediocre beans at cut rate prices, regardless of the human cost. (so it’s like adtech you eat?) How web bloat impacts users with slow devices CPU performance for web apps hasn’t scaled nearly as quickly as bandwidth so, while more of the web is becoming accessible to people with low-end connections, more of the web is becoming inaccessible to people with low-end devices even if they have high-end connections.
  • Niko Matsakis: The `Overwrite` trait and `Pin` (2024/10/14 15:12)
    In July, boats presented a compelling vision in their post pinned places. With the Overwrite trait that I introduced in my previous post, however, I think we can get somewhere even more compelling, albeit at the cost of a tricky transition. As I will argue in this post, the Overwrite trait effectively becomes a better version of the existing Unpin trait, one that effects not only pinned references but also regular &mut references. Through this it’s able to make Pin fit much more seamlessly with the rest of Rust. Just show me the dang code Before I dive into the details, let’s start by reviewing a few examples to show you what we are aiming at (you can also skip to the TL;DR, in the FAQ). I’m assuming a few changes here: Adding an Overwrite trait and changing most types to be !Overwrite by default. The Option<T> (and maybe others) would opt-in to Overwrite, permitting x.take(). Integrating pin into the borrow checker, extending auto-ref to also “auto-pin” and produce a Pin<&mut T>. The borrow checker only permits you to pin values that you own. Once a place has been pinned, you are not permitted to move out from it anymore (unless the value is overwritten). The first change is “mildly” backwards incompatible. I’m not going to worry about that in this post, but I’ll cover the ways I think we can make the transition in a follow up post. Example 1: Converting a generator into an iterator We would really like to add a generator syntax that lets you write an iterator more conveniently.1 For example, given some slice strings: &[String], we should be able to define a generator that iterates over the string lengths like so: fn do_computation() -> usize { let hashes = gen { let strings: Vec<String> = compute_input_strings(); for string in &strings { yield compute_hash(&string); } }; // ... } But there is a catch here! To permit the borrow of strings, which is owned by the generator, the generator will have to be pinned.2 That means that generators cannot directly implement Iterator, because generators need a Pin<&mut Self> signature for their next methods. It is possible, however, to implement Iterator for Pin<&mut G> where G is a generator.3 In today’s Rust, that means that using a generator as an iterator would require explicit pinning: fn do_computation() -> usize { let hashes = gen {....}; let hashes = pin!(hashes); // <-- explicit pin if let Some(h) = hashes.next() { // process first hash }; // ... } With pinned places, this feels more builtin, but it still requires users to actively think about pinning for even the most basic use case: fn do_computation() -> usize { let hashes = gen {....}; let pinned mut hashes = hashes; if let Some(h) = hashes.next() { // process first hash }; // ... } Under this proposal, users would simply be able to ignore pinning altogether: fn do_computation() -> usize { let mut hashes = gen {....}; if let Some(h) = hashes.next() { // process first hash }; // ... } Pinning is still happening: once a user has called next, they would not be able to move hashes after that point. If they tried to do so, the borrow checker (which now understands pinning natively) would give an error like: error[E0596]: cannot borrow `hashes` as mutable, as it is not declared as mutable --> src/lib.rs:4:22 | 4 | if let Some(h) = hashes.next() { | ------ value in `hashes` was pinned here | ... 7 | move_somewhere_else(hashes); | ^^^^^^ cannot move a pinned value help: if you want to move `hashes`, consider using `Box::pin` to allocate a pinned box | 3 | let mut hashes = Box::pin(gen { .... 
}); | +++++++++ + As noted, it is possible to move hashes after pinning, but only if you pin it into a heap-allocated box. So we can advise users how to do that. Example 2: Implementing the MaybeDone future The pinned places post included an example future called MaybeDone. I’m going to implement that same future in the system I describe here. There are some comments in the example comparing it to the version from the pinned places post. enum MaybeDone<F: Future> { // --------- // I'm assuming we are in Rust.Next, and so the default // bounds for `F` do not include `Overwrite`. // In other words, `F: ?Overwrite` is the default // (just as it is with every other trait besides `Sized`). Polling(F), // - // We don't need to declare `pinned F`. Done(Option<F::Output>), } impl<F: Future> MaybeDone<F> { fn maybe_poll(self: Pin<&mut Self>, cx: &mut Context<'_>) { // -------------------- // I'm not bothering with the `&pinned mut self` // sugar here, though certainly we could still // add it. if let MaybeDone::Polling(fut) = self { // --- // Just as in the original example, // we are able to project from `Pin<&mut Self>` // to a `Pin<&mut F>`. // // The key is that we can safely project // from an owner of type `Pin<&mut Self>` // to its field of type `Pin<&mut F>` // so long as the owner type `Self: !Overwrite` // (which is the default for structs in Rust.Next). if let Poll::Ready(res) = fut.poll(cx) { *self = MaybeDone::Done(Some(res)); } } } fn is_done(&self) -> bool { matches!(self, &MaybeDone::Done(_)) } fn take_output(&mut self) -> Option<F::Output> { // --------- // In pinned places, this method had to be // `&pinned mut self`, but under this design, // it can be a regular `&mut self`. // // That's because `Pin<&mut Self>` becomes // a subtype of `&mut Self`. if let MaybeDone::Done(res) = self { res.take() } else { None } } } Example 3: Implementing the Join combinator Let’s complete the journey by implementing a Join future: struct Join<F1: Future, F2: Future> { // These fields do not have to be declared `pinned`: fut1: MaybeDone<F1>, fut2: MaybeDone<F2>, } impl<F1, F2> Future for Join<F1, F2> where F1: Future, F2: Future, { type Output = (F1::Output, F2::Output); fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { // -------------------- // Again, I've dropped the sugar here. // This looks just the same as in the // "Pinned Places" example. This again // leans on the ability to project // from a `Pin<&mut Self>` owner so long as // `Self: !Overwrite` (the default for structs // in Rust.Next). self.fut1.maybe_poll(cx); self.fut2.maybe_poll(cx); if self.fut1.is_done() && self.fut2.is_done() { // This code looks the same as it did with pinned places, // but there is an important difference. `take_output` // is now an `&mut self` method, not a `Pin<&mut Self>` // method. This demonstrates that we can also get // a regular `&mut` reference to our fields. let res1 = self.fut1.take_output().unwrap(); let res2 = self.fut2.take_output().unwrap(); Poll::Ready((res1, res2)) } else { Poll::Pending } } } How I think about pin OK, now that I’ve lured you in with code examples, let me drive you away by diving into the details of Pin. I’m going to cover the way that I think about Pin. It is similar to but different from how Pin is presented in the pinned places post – in particular, I prefer to think about places that pin their values and not pinned places. 
In any case, Pin is surprisingly subtle, and I recommend that if you want to go deeper, you read boat’s history of Pin post and/or the stdlib documentation for Pin. The Pin<P> type is a modifier on the pointer P The Pin<P> type is unusual in Rust. It looks similar to a “smart pointer” type, like Arc<T>, but it functions differently. Pin<P> is not a pointer, it is a modifier on another pointer, so a Pin<&T> represents a pinned reference, a Pin<&mut T> represents a pinned mutable reference, a Pin<Box<T>> represents a pinned box, and so forth. You can think of a Pin<P> type as being a pointer of type P that refers to a place (Rust jargon for a location in memory that stores a value) whose value v has been pinned. A pinned value v can never be moved to another place in memory. Moreover, v must be dropped before its place can be reassigned to another value. Pinning is part of the “lifecycle” of a place The way I think about, every place in memory has a lifecycle: flowchart TD Uninitialized Initialized Pinned Uninitialized -- p = v where v: T --> Initialized Initialized -- move out, drop, or forget --> Uninitialized Initialized -- pin value v in p (only possible when T is !Unpin) --> Pinned Pinned -- drop value --> Uninitialized Pinned -- move out or forget --> UB Uninitialized -- free the place --> Freed UB[💥 Undefined behavior 💥] When first allocated, a place p is uninitialized – that is, p has no value at all. An uninitialized place can be freed. This corresponds to e.g. popping a stack frame or invoking free. p may at some point become initialized by an assignment like p = v. At that point, there are three ways to transition back to uninitialized: The value v could be moved somewhere else, e.g. by moving it somewhere else, like let p2 = p. At that point, p goes back to being uninitialized. The value v can be forgotten, with std::mem::forget(p). At this point, no destructor runs, but p goes back to being considered uninitialized. The value v can be dropped, which occurs when the place p goes out of scope. At this point, the destructor runs, and p goes back to being considered uninitialized. Alternatively, the value v can be pinned in place: At this point, v cannot be moved again, and the only way for p to be reused is for v to be dropped. Once a value is pinned, moving or forgetting the value is not allowed. These actions are “undefined behavior”, and safe Rust must not permit them to occur. A digression on forgetting vs other ways to leak As most folks know, Rust does not guarantee that destructors run. If you have a value v whose destructor never runs, we say that value is leaked. There are however two ways to leak a value, and they are quite different in their impact: Option A: Forgetting. Using std::mem::forget, you can forget the value v. The place p that was storing that value will go from initialized to uninitialized, at which point the place p can be freed. Forgetting a value is undefined behavior if that value has been pinned, however! Option B: Leak the place. When you leak a place, it just stays in the initialized or pinned state forever, so its value is never dropped. This can happen, for example, with a ref-count cycle. This is safe even if the value is pinned! In retrospect, I wish that Option A did not exist – I wish that we had not added std::mem::forget. We did so as part of working through the impact of ref-count cycles. 
It seemed equivalent at the time (“the dtor doesn’t run anyway, why not make it easy to do”) but I think this diagram shows why adding forget made things permanently more complicated for relatively little gain.4 Oh well! Can’t win ’em all. Values of types implementing Unpin cannot be pinned There is one subtle aspect here: not all values can be pinned. If a type T implements Unpin, then values of type T cannot be pinned. When you have a pinned reference to them, they can still squirm out from under you via swap or other techniques. Another way to say the same thing is to say that values can only be pinned if their type is !Unpin (“does not implement Unpin”). Types that are !Unpin can be called address sensitive, meaning that once they are pinned, there can be pointers to the internals of that value that will be invalidated if the address changes. Types that implement Unpin would therefore be address insensitive. Traditionally, all Rust types have been address insensitive, and therefore Unpin is an auto trait, implemented by most types by default. Pin<&mut T> is really a “maybe pinned” reference Looking at the state machine as I describe it here, we can see that possessing a Pin<&mut T> isn’t really a pinned mutable reference, in the sense that it doesn’t always refer to a place that is pinning its value. If T: Unpin, then it’s just a regular reference. But if T: !Unpin, then a pinned reference guarantees that the value it refers to is pinned in place. This fits with the name Unpin, which I believe was meant to convey the idea that, even if you have a pinned reference to a value of type T: Unpin, that value can become unpinned. I’ve heard the metaphor of “if T: Unpin, you can pull out the pin, swap in a different value, and put the pin back”. Pin picked a peck of pickled pain Everyone agrees that Pin is confusing and a pain to use. But what makes it such a pain? If you are attempting to author a Pin-based API, there are two primary problems: Pin<&mut Self> methods can’t make use of regular &mut self methods. Pin<&mut Self> methods can’t access fields by default. Crates like pin-project-lite make this easier but still require learning obscure concepts like structural pinning. If you are attempting to consume a Pin-based API, the primary annoyance is that getting a pinned reference is hard. You can’t just call Pin<&mut Self> methods normally, you have to remember to use Box::pin or pin! first. (We saw this in Example 1 from this post.) My proposal in a nutshell This post is focused on a proposal with two parts: Making Pin-based APIs easier to author by replacing the Unpin trait with Overwrite. Making Pin-based APIs easier to call by integrating pinning into the borrow checker. I’m going to walk through those in turn. Making Pin-based APIs easier to author Overwrite as the better Unpin The first part of my proposal is a change I call s/Unpin/Overwrite/. The idea is to introduce Overwrite and then change the “place lifecycle” to reference Overwrite instead of Unpin: flowchart TD Uninitialized Initialized Pinned Uninitialized -- p = v where v: T --> Initialized Initialized -- move out, drop, or forget --> Uninitialized Initialized -- pin value v in p (only possible when T is 👉!Overwrite👈) --> Pinned Pinned -- drop value --> Uninitialized Pinned -- move out or forget --> UB Uninitialized -- free the place --> Freed UB[💥 Undefined behavior 💥] For s/Unpin/Overwrite/ to work well, we have to make all !Unpin types also be !Overwrite. 
This is not, strictly speaking, backwards compatible, since today !Unpin types (like all types) can be overwritten and swapped. I think eventually we want every type to be !Overwrite by default, but I don’t think we can change that default in a general way without an edition. But for !Unpin types in particular I suspect we can get away with it, because !Unpin types are pretty rare, and the simplification we get from doing so is pretty large. (And, as I argued in the previous post, there is no loss of expressiveness; code today that overwrites or swaps !Unpin values can be locally rewritten.) Why swaps are bad without s/Unpin/Overwrite/ Today, Pin<&mut T> cannot be converted into an &mut T reference unless T: Unpin.5 This is because it would allow safe Rust code to create Undefined Behavior by swapping the referent of the &mut T reference and hence moving the pinned value. By requiring that T: Unpin, the DerefMut impl is effectively limiting itself to references that are not, in fact, in the “pinned” state, but just in the “initialized” state. As a result, Pin<&mut T> and &mut T methods don’t interoperate today This leads directly to our first two pain points. To start, from a Pin<&mut Self> method, you can only invoke &self methods (via the Deref impl) or other Pin<&mut Self> methods. This schism separates out the “regular” methods of a type from its pinned methods; it also means that methods doing field assignments don’t compile: fn increment_field(self: Pin<&mut Self>) { self.field = self.field + 1; } This errors because compiling a field assignment requires a DerefMut impl and Pin<&mut Self> doesn’t have one. With s/Unpin/Overwrite/, Pin<&mut Self> is a subtype of &mut self s/Unpin/Overwrite/ allows us to implement DerefMut for all pinned types. This is because, unlike Unpin, Overwrite affects how &mut works, and hence &mut T would preserve the pinned state for the place it references. Consider the two possibilities for the value of type T referred to by the &mut T: If T: Overwrite, then the value is not pinnable, and so the place cannot be in the pinned state. If T: !Overwrite, the value could be pinned, but we also cannot overwrite or swap it, and so pinning is preserved. This implies that Pin<&mut T> is in fact a generalized version of &mut T. Every &'a mut T keeps the value pinned for the duration of its lifetime 'a, but a Pin<&mut T> ensures the value stays pinned for the lifetime of the underlying storage. If we have a DerefMut impl, then Pin<&mut Self> methods can freely call &mut self methods. Big win! Today you must categorize fields as “structurally pinned” or not The other pain point today with Pin is that we have no native support for “pin projection”6. That is, you cannot safely go from a Pin<&mut Self> reference to a Pin<&mut F> reference referring to some field self.f without relying on unsafe code. The most common practice today is to use a custom crate like pin-project-lite. Even then, you also have to make a choice for each field between whether you want to be able to get a Pin<&mut F> reference or a normal &mut F reference. Fields for which you can get a pinned reference are called structurally pinned and the criteria for which one you should use are rather subtle. Ultimately this choice is required because Pin<&mut F> and &mut F don’t play nicely together. Pin projection is safe from any !Overwrite type With s/Unpin/Overwrite/, we can scrap the idea of structural pinning. 
Instead, if we have a field owner self: Pin<&mut Self>, pinned projection is allowed so long as Self: !Overwrite. That is, if Self: !Overwrite, then I can always get a Pin<&mut F> reference to some field self.f of type F. How is that possible? Actually, the full explanation relies on borrow checker extensions I haven’t introduced yet. But let’s see how far we get without them, so that we can see the gap that the borrow checker has to close. Assume we are creating a Pin<&'a mut F> reference r to some field self.f, where self: Pin<&mut Self>: We are creating a Pin<&'a mut F> reference to the value in self.f: If F: Overwrite, then the value is not pinnable, so this is equivalent to an ordinary &mut F and we have nothing to prove. Else, if F: !Overwrite, then we have to show that the value in self.f will not move for the remainder of its lifetime. Pin projection from *self is only valid if Self: !Overwrite and self: Pin<&'b mut Self>, so we know that the value in *self is pinned for the remainder of its lifetime by induction. We have to show then that the value v_f in self.f will never be moved until the end of its lifetime. There are three ways to move a value out of self.f: You can assign a new value to self.f, like self.f = .... This will run the destructor, ending the lifetime of the value v_f. You can create a mutable reference r = &mut self.f and then… assign a new value to *r: but that will be an error because F: !Overwrite. swap the value in *r with another: but that will be an error because F: !Overwrite. QED. =) Making Pin-based APIs easier to call Today, getting a Pin<&mut> requires using the pin! macro, going through Box::pin, or some similar explicit action. This adds “syntactic salt” to calling a Pin<&mut Self> method: there is no built-in way to safely create a pinned reference without going through the pin! macro or some other abstraction rooted in unsafe (e.g., Box::pin). This is fine but introduces ergonomic hurdles We want to make calling a Pin<&mut Self> method as easy as calling an &mut self method. To do this, we need to extend the compiler’s notion of “auto-ref” to include the option of “auto-pin-ref”: // Instead of this: let future: Pin<&mut impl Future> = pin!(async { ... }); future.poll(cx); // We would do this: let mut future: impl Future = async { ... }; future.poll(cx); // <-- Wowee! Just as a typical method call like vec.len() expands to Vec::len(&vec), the compiler would be expanding future.poll(cx) to something like so: Future::poll(&pinned mut future, cx) // ^^^^^^^^^^^ but wait, what's this? This expansion though includes a new piece of syntax that doesn’t exist today, the &pinned mut operation. (I’m lifting this syntax from boats’ pinned places proposal.) Whereas &mut var results in an &mut T reference (assuming var: T), an &pinned mut var borrow would result in a Pin<&mut T>. It would also make the borrow checker consider the value in future to be pinned. That means that it is illegal to move out from var. The pinned state continues indefinitely until var goes out of scope or is overwritten by an assignment like var = ... (which drops the heretofore pinned value). This is a fairly straightforward extension to the borrow checker’s existing logic. New syntax not strictly required It’s worth noting that we don’t actually need the &pinned mut syntax (which means we don’t need the pinned keyword). We could make it so that the only way to get the compiler to do a pinned borrow is via auto-ref. 
We could even add a silly trait to make it explicit, like so: trait Pinned { fn pinned(self: Pin<&mut Self>) -> Pin<&mut Self>; } impl<T: ?Sized> Pinned for T { fn pinned(self: Pin<&mut T>) -> Pin<&mut T> { self } } Now you can write var.pinned(), which the compiler would desugar to Pinned::pinned(&rustc#pinned mut var). Here I am using rustc#pinned to denote an “internal keyword” that users can’t type.7 Frequently asked questions So…there’s a lot here. What are the key takeaways? The shortest version of this post I can manage is8 Pinning fits smoothly into Rust if we make two changes: Limit the ability to swap types by default, making Pin<&mut T> a subtype of &mut T and enabling uniform pin projection. Integrate pinning in the auto-ref rules and the borrow checker. Why do you only mention swaps? Doesn’t Overwrite affect other things? Indeed the Overwrite trait as I defined it is overkill for pinning. To be more precise, we might imagine two special traits that affect how and when we can drop or move values: trait DropWhileBorrowed: Sized { } trait Swap: DropWhileBorrowed { } Given a reference r: &mut T, overwriting its referent *r with a new value would require T: DropWhileBorrowed; Swapping two values of type T requires that T: Swap. This is true regardless of whether they are borrowed or not. Today, every type is Swap. What I argued in the previous post is that we should make the default be that user-defined types implement neither of these two traits (over an edition, etc etc). Instead, you could opt-in to both of them at once by implementing Overwrite. But we could get all the pin benefits by making a weaker change. Instead of having types opt out from both traits by default, they could only opt out of Swap, but continue to implement DropWhileBorrowed. This is enough to make pinning work smoothly. To see why, recall the pinning state diagram: dropping the value in *r (permitted by DropWhileBorrowed) will exit the “pinned” state and return to the “uninitialized” state. This is valid. Swapping, in contrast, is UB. Two subtle observations here worth calling out: Both DropWhileBorrowed and Swap have Sized as a supertrait. Today in Rust you can’t drop a &mut dyn SomeTrait value and replace it with another, for example. I think it’s a bit unclear whether unsafe code could do this if it knows the dynamic type of the value behind the dyn. But under this model, it would only be valid for unsafe code to do that drop if (a) it knew the dynamic type and (b) the dynamic type implemented DropWhileBorrowed. Same applies to Swap. The Swap trait applies for longer than just the duration of a borrow. This is because, once you pin a value to create a Pin<&mut T> reference, the state of being pinned persists even after that reference has ended. I say a bit more about this in another FAQ below. EDIT: An earlier draft of this post named the trait Swap. This was wrong, as described in the FAQ on subtle reasoning. Why then did you propose opting out from both overwrites and swaps? Opting out of overwrites (i.e., making the default be neither DropWhileBorrowed nor Swap) gives us the additional benefit of truly immutable fields. This will make cross-function borrows less of an issue, as I described in my previous post, and make some other things (e.g., variance) less relevant. Moreover, I don’t think overwriting an entire reference like *r is that common, versus accessing individual fields. And in the cases where people do do it, it is easy to make a dummy struct with a single field, and then overwrite r.value instead of *r. 
To me, therefore, distinguishing between DropWhileBorrowed and Swap doesn’t obviously carry its weight. Can you come up with a more semantic name for Overwrite? All the trait names I’ve given so far (Overwrite, DropWhileBorrowed, Swap) answer the question of “what operation does this trait allow”. That’s pretty common for traits (e.g., Clone or, for that matter, Unpin) but it is sometimes useful to think instead about “what kinds of types should implement this trait” (or not implement it, as the case may be). My current favorite “semantic style name” is Mobile, which corresponds to implementing Swap. A mobile type is one that, while borrowed, can move to a new place. This name doesn’t convey that it’s also ok to drop the value, but that follows, since if you can swap the value to a new place, you can presumably drop that new place. I don’t have a “semantic” name for DropWhileBorrowed. As I said, I’m hard pressed to characterize the type that would want to implement DropWhileBorrowed but not Swap. What do DropWhileBorrowed and Swap have in common? These traits pertain to whether an owner who lends out a local variable (i.e., executes r = &mut lv) can rely on that local variable lv to store the same value after the borrow completes. Under this model, the answer depends on the type T of the local variable: If T: DropWhileBorrowed (or T: Swap, which implies DropWhileBorrowed), the answer is “no”, the local variable may point at some other value, because it is possible to do *r = /* new value */. But if T: !DropWhileBorrowed, then the owner can be sure that lv still stores the same value (though lv’s fields may have changed). Let’s use an analogy. Suppose I own a house and I lease it out to someone else to use. I expect that they will make changes on the inside, such as hanging up a new picture. But I don’t expect them to tear down the house and build a new one on the same lot. I also don’t expect them to drive up a flatbed truck, load my house onto it, and move it somewhere else (while proving me with a new one in return). In Rust today, a reference r: &mut T reference allows all of these things: Mutating a field like r.count += 1 corresponds to hanging up a picture. The values inside r change, but r still refers to the same conceptual value. Overwriting *r = t with a new value t is like tearing down the house and building a new one. The original value that was in r no longer exists. Swapping *r with some other reference *r2 is like moving my house somewhere else and putting a new house in its place. EDIT: Wording refined based on feedback. What does it mean to be the “same value”? One question I received was what it meant for two structs to have the “same value”? Imagine a struct with all public fields – can we make any sense of it having an identity? The way I think of it, every struct has a “ghost” private field $identity (one that doesn’t exist at runtime) that contains its identity. Every StructName { } expression has an implicit $identity: new_value() that assigns the identity a distinct value from every other struct that has been created thus far. If two struct values have the same $identity, then they are the same value. Admittedly, if a struct has all public fields, then it doesn’t really matter whether it’s identity is the same, except perhaps to philosophers. But most structs don’t. An example that can help clarify this is what I call the “scope pattern”. 
Imagine I have a Scope type that has some private fields and which can be “installed” in some way and later “deinstalled” (perhaps it modifies thread-local values): pub struct Scope {...} impl Scope { fn new() -> Self { /* install scope */ } } impl Drop for Scope { fn drop(&mut self) { /* deinstall scope */ } } And the only way for users to get their hands on a “scope” is to use with_scope, which ensures it is installed and deinstalled properly: pub fn with_scope(op: impl FnOnce(&mut Scope)) { let mut scope = Scope::new(); op(&mut scope); } It may appear that this code enforces a “stack discipline”, where nested scopes will be installed and deinstalled in a stack-like fashion. But in fact, thanks to std::mem::swap, this is not guaranteed: with_scope(|s1| { with_scope(|s2| { std::mem::swap(s1, s2); }) }) This could easily cause logic bugs or, if unsafe is involved, something worse. This is why lending out scopes requires some extra step to be safe, such as using a &-reference or adding a “fresh” lifetime parameter of some kind to ensure that each scope has a unique type. In principle you could also use a type like &mut dyn ScopeTrait, because the compiler disallows overwriting or swapping dyn Trait values: but I think it’s ambiguous today whether unsafe code could validly do such a swap. EDIT: Question added based on feedback. There’s a lot of subtle reasoning in this post. Are you sure this is correct? I am pretty sure! But not 100%. I’m definitely scared that people will point out some obvious flaw in my reasoning. But of course, if there’s a flaw I want to know. To help people analyze, let me recap the two subtle arguments that I made in this post and recap the reasoning. Lemma. Given some local variable lv: T where T: !Overwrite mutably borrowed by a reference r: &'a mut T, the value in lv cannot be dropped, moved, or forgotten for the lifetime 'a. During 'a, the variable lv cannot be accessed directly (per the borrow checker’s usual rules). Therefore, any drops/moves/forgets must take place to *r: Because T: !Overwrite, it is not possible to overwrite or swap *r with a new value; it is only legal to mutate individual fields. Therefore the value cannot be dropped or moved. Forgetting a value (via std::mem::forget) requires ownership and is not accessible while lv is borrowed. Theorem A. If we replace T: Unpin with T: Overwrite, then Pin<&mut T> is a safe subtype of &mut T. The argument proceeds by cases: If T: Overwrite, then Pin<&mut T> does not refer to a pinned value, and hence it is semantically equivalent to &mut T. If T: !Overwrite, then Pin<&mut T> does refer to a pinned value, so we must show that the pinning guarantee cannot be disturbed by the &mut T. By our lemma, the &mut T cannot move or forget the pinned value, which is the only way to disturb the pinning guarantee. Theorem B. Given some field owner o: O where O: !Overwrite with a field f: F, it is safe to pin-project from Pin<&mut O> to a Pin<&mut F> reference referring to o.f. The argument proceeds by cases: If F: Overwrite, then Pin<&mut F> is equivalent to &mut F. We showed in Theorem A that Pin<&mut O> could be upcast to &mut O and it is possible to create an &mut F from &mut O, so this must be safe. If F: !Overwrite, then Pin<&mut F> refers to a pinned value found in o.f. The lemma tells us that the value in o.f will not be disturbed for the duration of the borrow. EDIT: It was pointed out to me that this last theorem isn’t quite proving what it needs to prove. 
It shows that o.f will not be disturbed for the duration of the borrow, but to meet the pin rules, we need to ensure that the value is not swapped even after the borrow ends. We can do this by committing to never permit swaps of values unless T: Overwrite, regardless of whether they are borrowed. I meant to clarify this in the post but forgot about it, and then I made a mistake and talked about Swap – but Swap is the right name.

What part of this post are you most proud of?

Geez, I’m so glad you asked! Such a thoughtful question. To be honest, the part of this post that I am happiest with is the state diagram for places, which I’ve found very useful in helping me to understand Pin:

    Uninitialized -- `p = v` where `v: T` --> Initialized
    Initialized -- move out, drop, or forget --> Uninitialized
    Initialized -- pin value `v` in `p` (only possible when `T` is `!Unpin`) --> Pinned
    Pinned -- drop value --> Uninitialized
    Pinned -- move out or forget --> 💥 Undefined behavior 💥
    Uninitialized -- free the place --> Freed

Obviously this question was just an excuse to reproduce it again. Some of the key insights that it helped me to crystallize:

A value that is Unpin cannot be pinned: and hence Pin<&mut Self> really means “reference to a maybe-pinned value” (a value that is pinned if it can be).
Forgetting a value is very different from leaking the place that value is stored: in both cases, the value’s Drop never runs, but only one of them can lead to a “freed place”.

In thinking through the stuff I wrote in this post, I’ve found it very useful to go back to this diagram and trace through it with my finger.

Is this backwards compatible?

Maybe? The question does not have a simple answer. I will address it in a future blog post in this series. Let me say a few points here though:

First, the s/Unpin/Overwrite/ proposal is not backwards compatible as I described. It would mean, for example, that all futures returned by async fn are no longer Overwrite. It is quite possible we simply can’t get away with it. That’s not fatal, but it makes things more annoying. It would mean there exist types that are !Unpin but which can be overwritten. This in turn means that Pin<&mut Self> is not a subtype of &mut Self for all types. Pinned mutable references would be a subtype for almost all types, but not those that are !Unpin && Overwrite.

Second, a naive, conservative transition would definitely be rough. My current thinking is that, in older editions, we add T: Overwrite bounds by default on type parameters T and, when you have a T: SomeTrait bound, we would expand that to include an Overwrite bound on associated types in SomeTrait, like T: SomeTrait<AssocType: Overwrite>. When you move to a newer edition I think we would just not add those bounds. This is kind of a mess, though, because if you call code from an older edition, you are still going to need those bounds to be present. That all sounds painful enough that I think we might have to do something smarter, where we don’t always add Overwrite bounds, but instead use some kind of inference in older editions to avoid it most of the time.

Conclusion

My takeaway from authoring this post is that something like Overwrite has the potential to turn Pin from wizard-level Rust into mere “advanced Rust”, somewhat akin to knowing the borrow checker really well. If we had no backwards compatibility constraints to work with, it seems clear that this would be a better design than Unpin as it is today.
Of course, we do have backwards compatibility constraints, so the real question is how we can make the transition. I don’t know the answer yet! I’m planning on thinking more deeply about it (and talking to folks) once this post is out. My hope was first to make the case for the value of Overwrite (and to be sure my reasoning is sound) before I invest too much into thinking about how we can make the transition. Assuming we can make the transition, I’m wondering two things. First, is Overwrite the right name? Second, should we take the time to re-evaluate the default bounds on generic types in a more complete way? For example, to truly have a nice async story, and for myriad other reasons, I think we need “must move” types. How does that fit in?

The precise design of generators is of course an ongoing topic of some controversy. I am not trying to flesh out a true design here or take a position. Mostly I want to show that we can create ergonomic bridges between “must pin” types like generators and “non-pin” interfaces like Iterator without explicitly mentioning pinning. ↩︎

Boats has argued that, since no existing iterator can support borrows over a yield point, generators might not need to do so either. I don’t agree. I think supporting borrows over yield points is necessary for ergonomics, just as it was in futures. ↩︎

Actually for Pin<impl DerefMut<Target: Generator>>. ↩︎

I will say, I use std::mem::forget quite regularly, but mostly to make up for a shortcoming in Drop. I would like it if Drop had a separate method, fn drop_on_unwind(&mut self), and we invoked that method when unwinding. Most of the time, it would be the same as regular drop, but in some cases it’s useful to have cleanup logic that only runs in the case of unwinding. ↩︎

In contrast, a Pin<&mut T> reference can be safely converted into an &T reference, as evidenced by Pin’s Deref impl. This is because, even if T: !Unpin, a &T reference cannot do anything that is invalid for a pinned value: you can’t swap the underlying value or move out from it. ↩︎

Projection is the wonky PL term for “accessing a field”. It’s never made much sense to me, but I don’t have a better term to use, so I’m sticking with it. ↩︎

We have a syntax k#foo for explicitly referring to a keyword foo. It is meant to be used only for keywords that will be added in future Rust editions. However, I sometimes think it’d be neat to have internal-ish keywords (like k#pinned) that are used in desugaring but rarely need to be typed explicitly; you would still be able to write k#pinned if for whatever reason you wanted to. And of course we could later opt to stabilize it as pinned (no prefix required) in a future edition. ↩︎

I tried asking ChatGPT to summarize the post but, when I pasted in my post, it replied, “The message you submitted was too long, please reload the conversation and submit something shorter.” Dang ChatGPT, that’s rude! Gemini at least gave it the old college try. Score one for Google. Plus, it called my post “thought-provoking!” Aww, I’m blushing! ↩︎
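The post mentions, as one fix for the scope pattern, adding a “fresh” lifetime parameter so that each scope has a unique type. Here is a minimal sketch of that branding idea, assuming a PhantomData-based invariant lifetime; Scope, with_scope, and the _brand field are illustrative stand-ins, not code from the post:

    use std::marker::PhantomData;

    pub struct Scope<'id> {
        // Invariant lifetime brand. `fn(&'id ()) -> &'id ()` makes `'id` invariant,
        // so the compiler can never shrink or stretch it to make two different
        // scopes line up as the same type.
        _brand: PhantomData<fn(&'id ()) -> &'id ()>,
    }

    pub fn with_scope<R>(op: impl for<'id> FnOnce(&mut Scope<'id>) -> R) -> R {
        // install scope ...
        let mut scope = Scope { _brand: PhantomData };
        let result = op(&mut scope);
        // deinstall scope ...
        result
    }

    // With the brand in place, the swap from the post no longer compiles: inside
    // the nested closure, `s1: &mut Scope<'id1>` and `s2: &mut Scope<'id2>` carry
    // distinct, invariant lifetimes, so `std::mem::swap(s1, s2)` is a type error.
    //
    //     with_scope(|s1| {
    //         with_scope(|s2| {
    //             std::mem::swap(s1, s2); // ERROR: mismatched types
    //         })
    //     })

This is the same “generative lifetime” trick used by crates such as generativity and ghost-cell; it works around the lack of something like Overwrite by making each scope its own type.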
  • Don Marti: convert TTF to WOFF2 on Fedora Linux (2024/10/12 00:00)
If you have a font in TTF (TrueType) format and need WOFF2 for web use, there is a woff2_compress utility packaged for Fedora (but still missing a man page and a --help option). The package is woff2-tools:

    sudo dnf install woff2-tools
    woff2_compress example.ttf

Also packaged for Debian: Details of package woff2 in sid.

WOFF

For the older WOFF format (which I needed in order to have the font show up on a really old browser) the tool is sfnt2woff-zopfli. Install and run with:

    sudo dnf install sfnt2woff-zopfli
    sfnt2woff-zopfli example.ttf

References

Converting TTF fonts to WOFF2 (and WOFF) - DEV Community (covers cloning and building from source)
How to Convert Font Formats to WOFF under Linux (compares several conversion tools)

Related

colophon (This site mostly uses Modern Font Stacks but has some Inconsolata.)

Bonus links

The AI bill Newsom didn’t veto — AI devs must list models’ training data From 2026, companies that make generative AI models available in California need to list their models’ training sets on their websites — before they release or modify the models. (The California Chamber of Commerce came out against this one, citing the technical difficulty in complying. They’re probably right, especially considering that under the CCPA, businesses are required to disclose inferences about people (PDF) and it’s hard to figure out which inferences are present in a large ML model.)

Antitrust challenge to Facebook’s ‘superprofiling’ finally wraps in Germany — with Meta agreeing to data limits Meta has to offer a cookie setting that allows Facebook and Instagram users to decide whether they want to allow it to combine their data with other information Meta collects about them — via third-party websites where its tracking technologies are embedded or from apps using its business tools — or keep it separate. But some of the required privacy+competition fixes must be Germany-only. (imho some US state needs a law that any privacy or consumer protection feature that a large company offers to users outside the US must also be available in that state.)

IAB, Others Urge Court To Reconsider Ruling That Curbed Section 230 (10/10/2024) (Some background on this one: TikTok Inspired Child Suicide Prompts a Sound Reading of Section 230 The problem with this case from TikTok’s point of view is that Big Tech wants to keep claiming that its recommendation algorithms are somehow both the company’s own free speech and speech by users. But the Third Circuit is making them pick one. Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, it follows that doing so amounts to first-party speech under § 230, too.)

California Privacy Act Sparks Website Tracking Technology Suits (This is a complicated one. Lawsuit accuses a company of breaking not one, not two, but three California privacy laws. And the California Constitution, too. Motion to dismiss mostly denied (PDF). Including a CCPA claim. Yes, there is a CCPA private right of action. CCPA claims survive a motion to dismiss where a plaintiff alleges that defendants disclosed plaintiff’s personal information without his consent due to the business’s failure to maintain reasonable security practices. In this case, Google Analytics tracking on a therapy site. I have some advice on how to get out in front of this kind of case, will share later.)
Digital Scams More Likely to Hurt Black and Latino Consumers - Consumer Reports Compounding the problem, experts believe, is that Black and Latino consumers are disproportionately targeted by a wide variety of digital scams. (This is a big reason why the I have nothing to hide argument about privacy doesn’t work. When a user who is less likely to be discriminated against chooses to participate in a system with personalization risks, that user’s information helps make user-hostile personalization against others work better. Privacy is a collective problem.) ClassicPress: WordPress without the block editor [LWN.net] Once installed (or migrated), ClassicPress looks and feels like old-school WordPress. Google never cared about privacy It was a bit of a tell how the DV360 product team demonstrated zero sense of urgency around making it easier for some buyers to test Privacy Sandbox, let alone releasing test results to prove it worked. The Chrome cookie deprecation delays, the inability of any ad tech expert or observer to convincingly explain how Google could possibly regulate itself — all of these deserve renewed scrutiny, given what we now know. (Google Privacy Sandbox was never offered as an option for YouTube, either. The point of janky in-browser ads is to make the slick YouTube ads, which have better reporting, look better to advertisers who have to allocate budget between open web and YouTube.) Taylor Swift: Singer, Songwriter, Copyright Innovator [R]ecord companies are now trying to prohibit re-recordings for 20 or 30 years, not just two or three. And this has become a key part of contract negotiations. Will they get 30 years? Probably not, if the lawyer is competent. But they want to make sure that the artist’s vocal cords are not in good shape by the time they get around to re-recording.
  • Mozilla Security Blog: Behind the Scenes: Fixing an In-the-Wild Firefox Exploit (2024/10/11 12:14)
At Mozilla, browser security is a critical mission, and part of that mission involves responding swiftly to new threats. Tuesday, around 8 AM Eastern time, we received a heads-up from the anti-virus company ESET, who alerted us to a Firefox exploit that had been spotted in the wild. We want to give a huge thank you to ESET for sharing their findings with us—it’s collaboration like this that keeps the web a safer place for everyone. We’ve already released a fix for this particular issue, so when Firefox prompts you to upgrade, click that button. (If you don’t know about Session Restore: you can ask Firefox to restore your previous session after it restarts.) The sample ESET sent us contained a full exploit chain that allowed remote code execution on a user’s computer. Within an hour of receiving the sample, we had convened a team of security, browser, compiler, and platform engineers to reverse engineer the exploit, force it to trigger its payload, and understand how it worked. During exploit contests such as Pwn2Own, we know ahead of time when we will receive an exploit, can convene the team ahead of time, and receive a detailed explanation of the vulnerabilities and exploit. At Pwn2Own 2024, we shipped a fix in 21 hours, something that helped us earn an industry award for fastest to patch. This time, with no notice and some heavy reverse engineering required, we were able to ship a fix in 25 hours. (And we’re continually examining the process to help us drive that down further.) While we take pride in how quickly we respond to these threats, it’s only part of the process. While we have resolved the vulnerability in Firefox, our team will continue to analyze the exploit to find additional hardening measures to make deploying exploits for Firefox harder and rarer. It’s also important to keep in mind that these kinds of exploits aren’t unique to Firefox. Every browser (and operating system) faces security challenges from time to time. That’s why keeping your software up to date is crucial across the board. As always, we’ll keep doing what we do best—strengthening Firefox’s security and improving its defenses. The post Behind the Scenes: Fixing an In-the-Wild Firefox Exploit appeared first on Mozilla Security Blog.
  • Mozilla Open Policy & Advocacy Blog: How Lawmakers Can Help People Take Control of Their Privacy (2024/10/10 14:23)
    At Mozilla, we’ve long advocated for universal opt-out mechanisms that empower people to easily assert their privacy rights. A prime example of this is Global Privacy Control (GPC), a feature built into Firefox. When enabled, GPC sends a clear signal to websites that the user does not wish to be tracked or have their personal data sold. California’s landmark privacy law, the CCPA, mandates that tools like GPC must be respected, giving consumers greater control over their data. Encouragingly, similar provisions are emerging in other state laws. Yet, despite this progress, many browsers and operating systems – including the largest ones – still do not offer native support for these mechanisms. That’s why we were encouraged by the advancement of California AB 3048, a bill that would require browsers and mobile operating systems to include an opt-out setting, allowing consumers to easily communicate their privacy preferences. Mozilla was disappointed that AB 3048 was not signed into law. The bill was a much-needed step in the right direction. As policymakers advance similar legislation in the future, there are small changes to the AB 3048 text that we’d propose, to ensure that the bill doesn’t create potential loopholes that undermine its core purpose and weaken existing standards like Global Privacy Control by leaving too much room for interpretation. It’s essential that rules prioritize consumer privacy and meet the expectations that consumers rightly have about treatment of their sensitive personal information. Mozilla remains committed to working alongside California as the legislature considers its agenda for 2025, as well as other states and ultimately the U.S. Congress, to advance meaningful privacy protections for all people online. We hope to see legislation bolstering this key privacy tool reemerge in California, and advance throughout the US. The post How Lawmakers Can Help People Take Control of Their Privacy appeared first on Open Policy & Advocacy.
  • Mozilla Thunderbird: Contributor Highlight: Toad Hall (2024/10/10 11:00)
We’re back with another contributor highlight! We asked our most active contributors to tell us about what they do, why they enjoy it, and themselves. Last time, we talked with Arthur, and for this installment, we’re chatting with Toad Hall. If you’ve used Support Mozilla (SUMO) to get help with Thunderbird, Toad Hall may have helped you. They are one of our most dedicated contributors, and their answers on SUMO have helped countless people.

How and Why They Use Thunderbird

Thunderbird has been my choice of email client since version 3, so I have witnessed this product evolve and improve over the years. Sometimes, new design can initially derail you. Being of an older generation, I appreciate it is not necessarily so easy to adapt to change, but I’ve always tried to embrace new ideas and found that generally, the changes are an improvement. Thunderbird offers everything you expect, from handling several email accounts in one location, filtering, address books and calendar, plus many more functionalities too numerous to mention. The built-in Calendar with its Events and Tasks options is ideal for both business and personal use. In addition, you can also connect to online calendars. I find using the pop-up reminders so helpful, whether it’s notifying you of an appointment, a birthday or that a TV program starts in 15 minutes! Personally, I’m particularly impressed that Thunderbird offers the ability to modify the view and appearance to suit my needs and preferences. I use a Windows OS, but Thunderbird offers release versions suitable for Windows, Mac and Linux variants of operating systems. So there is a download which should suit everyone. In addition, I run a beta version so I can have more recent updates, meaning I can contribute by helping to test for bugs and reporting issues before they get to a release version.

How They Contribute

The Thunderbird Support forum would be my choice as the first place to get help on any topic or query, and there is a direct link to it via the ‘Help’ > ‘Get Help’ menu option in Thunderbird. As I have many years of experience using Thunderbird, I volunteer my free time to assist others in the Thunderbird Support Forum, which I find a very rewarding experience. I have also helped out writing some Support Forum Help Articles. In more recent years I’ve assisted on the Bugzilla forum, helping to triage and report potential bugs. So, people can get involved with Thunderbird in various ways.

Share Your Contributor Highlight (or Get Involved!)

Thanks to Toad Hall and all our contributors who have kept us alive and are helping us thrive! If you’re a contributor who would like to share your story, get in touch with us at community@thunderbird.net. If you want to get involved with Thunderbird, read our guide to learn about all the ways to contribute. The post Contributor Highlight: Toad Hall appeared first on The Thunderbird Blog.
  • Don Marti: drinking games with the Devil (2024/10/10 00:00)
    Should I get into a drinking game with the Devil? No, for three important reasons unrelated to your skill at the game. The Devil can out-drink you. The Devil can drink substances that are toxic to you even in small quantities. The Devil can cheat in ways that you will not be able to detect, and take advantage of rules loopholes that you might not understand. What if I am really good at the skills required for the game? Still no. Even if you have an accurate idea of your own skill level, it is hard to estimate the Devil’s skill level. And even if you have roughly equally matched skills, the Devil still has the three advantages above. What if I’m already in a drinking game with the Devil? I can’t offer a lot of help here, but I have read a fair number of comic books. As far as I can tell, your best hope is to delay playing and to delay taking a drink when required to. It is possible that some more powerful entity could distract the Devil in a way that results in the end of the game. Bonus links IAB, Others Urge Court To Reconsider Ruling That Curbed Section 230 (this is why the legit Internet is going to win. The lawyers needed to defend the blackout challenge are expensive, and a lot of state legislators will serve for gas money. As legislators learn to introduce more, and more diverse, laws on Big Tech the cost imbalance will become clearer.) In the Trenches with State Policymakers Working to Pass Data Privacy Laws Former state representative from Oklahoma, Collin Walke, said that one tech company with an office in his state hired about 30 more lobbyists just to lobby on the privacy bill he was trying to pass. Risks vs. Harms: Youth & Social Media Of course, there are harms that I do think are product liability issues vis-a-vis social media. For example, I think that many privacy harms can be mitigated with a design approach that is privacy-by-default. I also think that regulations that mandate universal privacy protections would go a long way in helping people out. But the funny thing is that I don’t think that these harms are unique to children. These are harms that are experienced broadly. And I would argue that older folks tend to experience harms associated with privacy much more acutely. Google Search user interface: A/B testing shows security concerns remain For the past few days, Google has been A/B testing some subtle visual changes to its user interface for the search results page….Despite a more simplified look and feel, threat actors are still able to use the official logo and website of the brand they are abusing. From a user’s point of view, such ads continue to be as misleading. Ukraine’s new F-16 simulator spotlights a ‘paradigm shift’ led from Europe (Europe isn’t against technology or innovation, they’re mainly just better at focusing on real problems.)
  • Firefox Nightly: Search Improvements Are On Their Way – These Weeks in Firefox: Issue 169 (2024/10/09 20:57)
Highlights

The search team is planning on enabling a series of improvements to the search experience this week in Nightly! This project is called “Scotch Bonnet”. We would love to hear your feedback via bug reports! We will also create a Connect page shortly. The pref is browser.urlbar.scotchBonnet.enableOverride for anyone who wants a sneak preview.

The New Tab team has added a new experimental widget which shows a vertical list of interesting stories across multiple cells of the story grid. We’re testing out a vertical list of stories in regions where stories are enabled. You can test this out in Nightly by setting browser.newtabpage.activity-stream.discoverystream.contextualContent.enabled to true in about:config. We will be running a small experiment with this new widget, slated for Firefox 132, for regions where stories are enabled.

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug: Henry Wilkes (they/them) [:henry-x], Meera Murthy

Project Updates

Add-ons / Web Extensions

WebExtensions Framework

Fixed mild performance regression in load times when the user browses websites that are registered as default/built-in search engines (fixed in Nightly 132, and uplifted to Beta 131) – Bug 1916240
Fixed startup error hit by static themes using MV3 manifest.json files – Bug 1917613
The WebExtensions popup notification shown when an extension is hiding Firefox tabs (using the tabs.hide method) is now anchored to the extensions button – Bug 1920706
Fixed browser.search.get regression (initially introduced in ESR 128 through the migration to the search-config-v2) that made the faviconUrl be set to blob urls (not accessible to other extensions). This regression has been fixed in Nightly 132 and then uplifted to Firefox 131 and ESR 128. Thanks to Standard8 for fixing the regression!

WebExtension APIs

The storage.session API now logs a warning message to raise extension developer awareness that the storage.session quota is being exceeded on channels where it is not enforced yet (currently only enforced on nightly >= 131) – Bug 1916276

DevTools

DevTools Toolbox

Sean Kim fixed DevTools offline mode, making sure that cached resources can still be retrieved (#1907304)
Fatih Kilic added the origin attributes of Workers in about:debugging, which can be useful for dynamic and non-dynamic first-party isolation (#1583891). This makes it easier to see where these workers came from.
Fatih Kilic made it clear that light/dark mode simulation can’t work when privacy.resistFingerprinting is enabled by disabling the buttons (#1861328)
Arai integrated the new SharedSubResourceCache into the Network Monitor (#1916960)
Florian Quèze migrated devtools.main telemetry events to use the Glean API (#1921751)
Alexandre Poirot made the tracer significantly faster (#1919713) and removed the arbitrary callstack depth limit since we can now deal with infinite loops just fine (#1919804)
Nicolas Chevobbe fixed the Fonts highlighter for iframes (#1572655)
Nicolas Chevobbe continues his work on supporting High Contrast Mode in the toolbox (#1916614, #1916333, #1916341, #1916328, #1916712, #1916344, #1916363, #1916355, #1916329, #1916354, #1916394), refactoring some CSS files when needed (#1919452, #1920689, #1921427, #1921428, #1921434)
Nicolas Chevobbe made the Inspector search input clear button (#1921001) and the Netmonitor “Raw” toggles (#1917296) accessible with the keyboard

WebDriver BiDi

External: Liam DeBeasi renamed the isRoot argument of getBrowsingContextInfo() to includeParentId to make the code easier to understand (bug).
Updates: Thanks to jmaher for splitting the marionette job into several chunks (bug).
Julian fixed the timings for network events to be in milliseconds instead of microseconds (bug)
Henrik and Julian improved the framework used by WebDriver BiDi to avoid failing commands when browsing contexts are loading (bug, bug, bug)
Sasha updated the WebDriver BiDi implementation for cookies to use the network.cookie.CHIPS.enabled preference. The related workarounds will be removed in the near future. (bug)

Lint, Docs and Workflow

hjones introduced a CSS lint rule to prevent base design tokens from being used directly
Standard8 updated the ESLint builders to use Node 18
Standard8 also worked on flat config: removed html, json and prettier plugin dependencies from eslint-plugin-mozilla, and updated node_modules related to ESLint as far as possible to pull in fixes from third-party modules for flat config

Migration Improvements

mconley has patches up to add some additional telemetry to our backup mechanism
mconley is working on a new messaging surface in the AppMenu that will allow us to try some message variations when the user is not signed into an account

New Tab Page

We’re going to be doing a slow, controlled rollout to change the endpoints with which we fetch sponsored top sites and stories. This is part of a larger architectural change to unify the mechanism with which we fetch this sponsored content.

Search and Navigation

Scotch Bonnet (search UI update) related changes:

General: Daisuke connected Scotch Bonnet to Nimbus 1919813
Intuitive Search Keywords: Mandy added telemetry for search restrict keywords 1917992
Unified Search Button: Dale improved the UI of the Unified Search Button by aligning it closer to the design 1908922
Unified Search Button: Daisuke made the Unified Search Button more consistent depending on whether it was in an open/closed state 1913234
Persisted Search: James changed Persisted Search to use a cleaner design in preparation for its use with the Unified Search Button. It now has a button on the right side to revert the address bar and show the URL. And the Persist feature works with non-default app-provided engines 1919193, 1915273, 1913312
HTTPS Trimming: Marco changed it so keyboard focus immediately untrims an https address 1898155
