Kernel Planet

November 15, 2019

Kees Cook: security things in Linux v5.3

Previously: v5.2.

Linux kernel v5.3 was released! I let this blog post get away from me, but it’s up now! :) Here are some security-related things I found interesting:

heap variable initialization
In the continuing work to remove “uninitialized” variables from the kernel, Alexander Potapenko added new “init_on_alloc” and “init_on_free” boot parameters (with associated Kconfig defaults) to perform zeroing of heap memory either at allocation time (i.e. all kmalloc()s effectively become kzalloc()s), at free time (i.e. all kfree()s effectively become kzfree()s), or both. The performance impact of the former under most workloads appears to be under 1%, if it’s measurable at all. The “init_on_free” option, however, is more costly but adds the benefit of reducing the lifetime of heap contents after they have been freed (contents which might otherwise be useful to some use-after-free or side-channel attacks). Everyone should enable CONFIG_INIT_ON_ALLOC_DEFAULT_ON=1 (or boot with “init_on_alloc=1”), and the more paranoid system builders should add CONFIG_INIT_ON_FREE_DEFAULT_ON=1 (or “init_on_free=1” at boot). As workloads are found that cause performance concerns, tweaks to the initialization coverage can be added.
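
To make the effect concrete, here is a hypothetical kernel-module fragment (a minimal sketch, not code from the patch series) showing what the option changes: with init_on_alloc enabled, a plain kmalloc() already hands back zeroed memory, so reads of "uninitialized" fields see zeroes rather than stale heap contents.

#include <linux/slab.h>
#include <linux/printk.h>

struct report {
        int id;
        char payload[32];
};

static void demo_alloc(void)
{
        /* Plain kmalloc(): no __GFP_ZERO, no kzalloc(). */
        struct report *r = kmalloc(sizeof(*r), GFP_KERNEL);

        if (!r)
                return;

        /*
         * With init_on_alloc=1 the allocator has already zeroed the
         * object, so r->id reads as 0 here instead of whatever the
         * previous owner of this heap slot left behind.
         */
        pr_info("id=%d\n", r->id);

        kfree(r);       /* with init_on_free=1, zeroed again at free time */
}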

pidfd_open() added
Christian Brauner has continued his pidfd work by creating the next needed syscall: pidfd_open(), which takes a pid and returns a pidfd. This is useful for cases where process creation isn’t yet using CLONE_PIDFD, and where /proc may not be mounted.
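
For userspace, a minimal sketch (my own example, assuming a 5.3 kernel and a libc without a pidfd_open() wrapper, hence the raw syscall): open a pidfd for an existing process and poll it, since the pidfd becomes readable when that process exits.

#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434     /* x86-64; check your architecture's syscall table */
#endif

static int pidfd_open(pid_t pid, unsigned int flags)
{
        return syscall(__NR_pidfd_open, pid, flags);
}

int main(int argc, char **argv)
{
        if (argc < 2)
                return 1;

        pid_t pid = (pid_t)atoi(argv[1]);
        int fd = pidfd_open(pid, 0);
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        if (fd < 0) {
                perror("pidfd_open");
                return 1;
        }

        poll(&pfd, 1, -1);      /* readable once the target process exits */
        printf("process %d exited\n", pid);
        close(fd);
        return 0;
}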

-Wimplicit-fallthrough enabled globally
Gustavo A.R. Silva landed the last handful of implicit fallthrough fixes left in the kernel, which allows -Wimplicit-fallthrough to be globally enabled for all kernel builds. This will keep any new instances of this bad code pattern from entering the kernel again. With several hundred implicit fallthroughs identified and fixed, something like 1 in 10 were missing breaks, which is way higher than I was expecting and makes this work all the more justified.
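
For illustration, a generic example (not taken from any particular driver) of the pattern the warning targets: an intentional fall-through now has to be annotated, at the time via the comment form that GCC's -Wimplicit-fallthrough recognizes, so an accidentally missing break no longer slips through silently.

switch (cmd) {
case CMD_RESET:
        reset_device(dev);
        /* fall through */      /* intentional: annotated, so no warning */
case CMD_INIT:
        init_device(dev);
        break;                  /* forgetting this break would now be flagged */
case CMD_STOP:
        stop_device(dev);
        break;
}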

x86 CR4 & CR0 pinning
In recent exploits, one of the steps for making the attacker’s life easier is to disable CPU protections like Supervisor Mode Access (and Execute) Prevention (SMAP and SMEP) by finding a way to write to the CPU control registers that enable them. For example, CR4 controls SMAP and SMEP, and disabling those would let an attacker access and execute userspace memory from kernel code again, opening up the attack to much greater flexibility. CR0 controls Write Protect (WP), which when disabled would allow an attacker to write to read-only memory, including the kernel code itself. Attackers have been using the kernel’s CR4 and CR0 writing functions to make these changes (since it’s easier to gain that level of execute control), but now the kernel will attempt to “pin” sensitive bits in CR4 and CR0 to keep them from being disabled. This forces attackers to do more work to enact such register changes going forward. (I’d like to see KVM enforce this too, which would actually protect guest kernels from all attempts to change protected register bits.)
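
As a rough sketch of the idea (simplified, and not the kernel's actual implementation): capture the security-sensitive bits once they have been set during boot, then force them back on in the CR4 write path, warning if anything tried to clear them.

/* Bits we refuse to let be cleared once set (x86 CR4). */
#define X86_CR4_SMEP_PIN   (1UL << 20)
#define X86_CR4_SMAP_PIN   (1UL << 21)

static unsigned long cr4_pinned_bits;   /* captured once, late in boot */

static void pinned_write_cr4(unsigned long val)
{
        unsigned long wanted = val | cr4_pinned_bits;

        if (wanted != val)
                WARN_ONCE(1, "attempt to clear pinned CR4 bits\n");

        asm volatile("mov %0, %%cr4" : : "r" (wanted) : "memory");
}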

additional kfree() sanity checking
In order to avoid corrupted pointers doing crazy things when they’re freed (as seen in recent exploits), I added additional sanity checks to verify kmem cache membership and to make sure that objects actually belong to the kernel slab heap. As a reminder, everyone should be building with CONFIG_SLAB_FREELIST_HARDENING=1.
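
A sketch of the kind of check being described (a hypothetical helper, not the actual SLAB/SLUB code): before freeing, confirm the pointer is a valid kernel address that lands in a slab page, and that the page's cache matches the cache the caller is freeing to.

static void sanity_check_free(struct kmem_cache *s, void *obj)
{
        struct page *page;

        if (!virt_addr_valid(obj))
                BUG();                  /* not a valid kernel linear-map address */

        page = virt_to_head_page(obj);
        if (!PageSlab(page))
                BUG();                  /* pointer does not point into the slab heap */

        if (page->slab_cache != s)
                BUG();                  /* object belongs to a different kmem cache */
}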

KASLR enabled by default on arm64
Just as Kernel Address Space Layout Randomization (KASLR) was enabled by default on x86, now KASLR has been enabled by default on arm64 too. It’s worth noting, though, that in order to benefit from this setting, the bootloader used for such arm64 systems needs to either support the UEFI RNG function or provide entropy via the “/chosen/kaslr-seed” Device Tree property.

hardware security embargo documentation
As there continues to be a long tail of hardware flaws that need to be reported to the Linux kernel community under embargo, a well-defined process has been documented. This will let vendors unfamiliar with how to handle things follow the established best practices for interacting with the Linux kernel community in a way that lets mitigations get developed before embargoes are lifted. The latest (and HTML rendered) version of this process should always be available here.

Those are the things I had on my radar. Please let me know if there are other things I should add! Linux v5.4 is almost here…

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

November 15, 2019 01:36 AM

November 01, 2019

Pete Zaitcev: Linode Object Storage

Linode made their Object Storage product generally available. They dropped the Swift API that they offered in the beta, and only rolled with S3.

They are careful not to mention what implementation underpins it, but I suspect it's Ceph RGW. They already provide Ceph block storage as a public service. I don't know for sure, though.

November 01, 2019 03:20 PM

October 29, 2019

Pete Zaitcev: Samsung shutting down CPU development in Austin

An acquaintance of mine was laid off from Samsung. He was a rank-and-file ASIC designer and worked on the FPU unit for Samsung's new CPU. Another acquaintance, a project manager in the silicon field, relayed that ARM supposedly developed new CPUs so great that all competitors gave up and folded their own CPU development, resulting in the layoffs. The online sources have details.

Around the same time it gave up on in-house cores, Samsung announced the Exynos 990, a standalone version of the 980, based on the successors to the Cortex family, developed by ARM, of course.

As someone said on Glassdoor, "great place to work until you're laid off".

October 29, 2019 07:58 PM

Paul E. Mc Kenney: The Old Man and His Smartphone

I recently started using my very first smartphone, and it was suggested that I blog about the resulting experiences.  So here you go!

October 29, 2019 01:57 PM

Paul E. Mc Kenney: The Old Man and His Smartphone, Episode VII

The previous episode speculated about the past, so this episode will make some wild guesses about the future.

There has been much hue and cry about the ill effects of people being glued to their smartphones.  I have tended to discount this viewpoint due to having seen a great many people's heads buried in newspapers, magazines, books, and television screens back in the day.  And yes, there was much hue and cry about that as well, so I guess some things never change.

However, a few years back, the usual insanely improbable sequence of events resulted in me eating dinner with the Chief of Police of a mid-sized but prominent city, both of which will go nameless.  He called out increased smartphone use as having required him to revamp his training programs.  You see, back in the day, typical recruits could reasonably be expected to have the social skills required to defuse a tense situation, using what he termed "verbal jiujitsu".  However, present-day recruits need to take actual classes in order to master this lost art.

I hope that we can all agree that it is far better for officers of the law to maintain order through use of vocal means, perhaps augmented with force of personality, especially given that the alternative seems to be the use of violence.  So perhaps the smartphone is responsible for some significant social change after all.  Me, I will leave actual judgment on this topic to psychologists, social scientists, and of course historians.  Not that any of them are likely to reach a conclusion that I would trust.  Based on past experience, far from it!  The benefit of leaving such judgments to them is instead that it avoids me wasting any further time on such judgments.  Or so I hope.

It is of course all too easy to be extremely gloomy about the overall social impact of smartphones.  One could easily argue that people freely choose spreading misinformation over accessing vast stores of information, bad behavior over sweetness and light, and so on and so forth.

But it really is up to each and every one of us.  After all, if life were easy, I just might do a better job of living mine.  So maybe we all need to brush up on our social skills.  And to do a better job of choosing what to post, to say nothing of what posts to pass on.  Perhaps including the blog posts in this series!

Cue vigorous arguments on the appropriateness of these goals, or, failing that, the best ways to accomplish them.  ;-)

October 29, 2019 01:50 PM

October 27, 2019

Pete Zaitcev: ?fbclid

As some of you might have noticed, Facebook started adding a tracking token to all URLs as a query string "?fbclid=XXXXXXXXX". I don't know how it works, exactly. Perhaps it tattles to FB when people re-inject these links into FB after they cycle through other social media. Either way, today I found a website that innocently fails to work when shared on FB: Whomp. If one tries to share a comic strip, such as "It's Training Men", FB appends its token, and makes the URL invalid.

October 27, 2019 03:30 AM

October 25, 2019

Paul E. Mc Kenney: The Old Man and His Smartphone, Episode VI

A common science-fiction conceit is some advanced technology finding its way into a primitive culture, so why not consider what might happen if my smartphone were transported a few centuries back in time?

Of course, my most strongly anticipated smartphone use, location services, would have been completely useless as recently as 30 years ago, let alone during the 1700s.  You see, these services require at least 24 GPS satellites in near earth orbit, which didn't happen until 1993.

My smartphone's plain old telephony functionality would also have been useless surprisingly recently, courtesy of its need for large numbers of cell towers, to say nothing of the extensive communications network interconnecting them.  And I am not convinced that my smartphone would have been able to use the old analog cell towers that were starting to appear in the 1980s, but even if it could, for a significant portion of my life, my smartphone would have been completely useless as a telephone.

Of course, the impressive social-media capabilities of my smartphone absolutely require the huge networking and server infrastructure that has been in place only within the past couple of decades.

And even though my smartphone's battery lifetime is longer than I expected, extended operation relies on the power grid, which did not exist at all until the late 1800s. So any wonderment generated by my transported-back-in-time smartphone would be of quite limited duration.  But let's avoid this problem through use of a solar-array charger.

My smartphone probably does not much like water, large changes in temperature, or corrosive environments.  Its dislike of adverse environmental conditions would have quickly rendered it useless in a couple of my childhood houses, to say nothing of almost all buildings in existence a couple of centuries ago.  This means that long-term use would require confining my smartphone to something like a high-end library located some distance from bodies of salt water and from any uses of high-sulfur coal.  This disqualifies most 1700s Western Hemisphere environments, as well as many of the larger Eastern Hemisphere cities, perhaps most famously London with its pea-soup "fogs".  Environmental considerations also pose interesting questions regarding exactly how to deploy the solar array, especially during times of inclement weather.

So what could my smartphone do back in the 1700s?

I leave off any number of uses that involve ferrying information from 2019 back to the 1700s.  Even an accurate seacoast map could be quite useful and illuminating, but this topic has been examined quite closely by several generations of science-fiction writers, so I would not expect to be able to add anything new.  However, it might be both useful and entertaining to review some of this genre.  Everyone will have their favorites, but I will just list the three that came to mind most quickly:  Heinlein's "The Door into Summer", Zemeckis's and Gale's "Back to the Future" movies, and Rowling's "Harry Potter and the Cursed Child".  So let us proceed, leaving back-in-time transportation of information to today's legion of science-fiction writers.

The upshot of all this is that if my smartphone were transported back to the 1700s, it would be completely unable to provide its most commonly used 2019 functionality.  However, given a solar array charger and development environment, and given a carefully controlled environment, it might nevertheless be quite impressive to our 1700s counterparts.

In fact, it would be quite impressive much more recently.  Just imagine if, at the end of the Mother of all Demos, Douglas Engelbart had whipped out a ca-2019 smartphone.

But the hard cold fact is that in some ways, a 2019 smartphone would actually have been a step backwards from Engelbart's famous demo.  After all, Engelbart's demo allowed shared editing of a document.  Lacking both wifi and a cell-phone network, and given the severe range limitations of NFC, my smartphone would be utterly incapable of shared editing of documents.

In short, although my smartphone might be recognized as a very impressive device back in the day, the hard cold fact is that it is but the tiniest tip of a huge iceberg of large-scale technology, including forests of cell towers, globe-girdling fiber-optic cables, vast warehouses stuffed with servers, and even a constellation of satellites.  Thus, the old adage "No one is an island" was never more true than it is today.  However, the bridges between our respective islands seem to be considerably more obscure than they were in the old days.  Which sooner or later will call into question whether these bridges will be properly maintained, but that question applies all too well to a depressingly broad range of infrastructure on which we all depend.

Which leads to another old adage: "The more things change, the more they stay the same".  :-)

October 25, 2019 08:51 PM

Paul E. Mc Kenney: The Old Man and His Smartphone, Episode V

So after many years of bragging about my wristwatch's exemplary battery lifetime (years, I tell you, years!!!), I find myself a member of that very non-exclusive group that worries about battery lifetime.  But I have been pleasantly surprised to find that my smartphone's battery lasts quite a bit longer than I would have expected, in fact, it is not unusual for the battery to last through two or three days of normal usage.  And while that isn't years, it isn't at all bad.

That is, battery lifetime isn't at all bad as long as I have location services (AKA GPS) turned off.

With GPS on, the smartphone might or might not make it through the day. Which is a bit ironic given that GPS was the main reason that I knew I would eventually be getting a smartphone.  And the smartphone isn't all that happy about my current policy of keeping the GPS off unless I am actively using it.  The camera in particular keeps whining about how it could tag photos with their locations if only I would turn the GPS on.  However, I have grown used to this sort of thing, courtesy of my television constantly begging me to connect it to the Internet.  Besides, if my smartphone were sincerely concerned about tagging my photos with their locations, it could always make use of the good and sufficient location data provided by the cell towers that I happen to know it is in intimate contact with!

GPS aside, I am reasonably happy with my smartphone's current battery lifetime.  Nevertheless, long experience with other battery-powered devices leads me to believe that it will undergo the usual long slow decay over time.  But right now, it is much better than I would have expected, and way better than any of my laptops.

Then again, I am not (yet) in the habit of running rcutorture on my smartphone...

October 25, 2019 12:05 AM

October 23, 2019

Paul E. Mc Kenney: The Old Man and His Smartphone, Episode IV

I took my first GPS-enabled road trip today in order to meet a colleague for lunch.  It was only about an hour drive each way, but that was nevertheless sufficient to highlight one big advantage of GPS as well as to sound a couple of cautionary notes.

The big advantage didn't seem particularly advantageous at first.  The phone had announced that I should turn left at Center Street, but then inexplicably changed its mind, instead asking me to turn left on a road with a multi-syllabic, vaguely Germanic name.  On the return trip, I learned that I had actually missed the left turn onto Center Street courtesy of that road's name changing to Wyers Road at the crossing.  So I saw a sign for Wyers Road and sensibly (or so I thought) elected not to turn left at that point.  The phone seamlessly autocorrected, in fact so seamlessly that I was completely unaware that I had missed the turn.

The first cautionary note involved the phone very quickly changing its mind on which way I should go.  It initially wanted me to go straight ahead for the better part of a mile, but then quickly and abruptly asked me to instead take a hard right-hand turn.  In my youth, this abrupt change might have terminally annoyed me, but one does (occasionally) learn patience with advancing age.

That and the fact that following its initial advice to go straight ahead would have taken me over a ditch, through a fence, and across a pasture.

The second cautionary note was due to the Fall colors here in Upstate New York, which caused me to let the impatient people behind me pass, rather than following my usual practice of taking the presence of tailgaters as a hint to pick up the pace.  I therefore took a right onto a side road, intending to turn around in one of the conveniently located driveways so that I could continue enjoying the Fall colors as I made my leisurely way up the highway.  But my smartphone instead suggested driving ahead for a short way to take advantage of a loop in the road.  I figured it knew more about the local geography than I did, so I naively followed its suggestion.

My first inkling of my naivete appeared when my smartphone asked me to take a right turn onto a one-lane gravel road.  I was a bit skeptical, but the gravel road appeared to have been recently graveled and also appeared to be well-maintained, so why not?  A few hundred yards in, the ruts became a bit deeper than my compact rental car would have preferred, but it is easy to position each pair of wheels on opposite sides of the too-deep rut and continue onwards.

But then I came to the stream crossing the road.

The stream covered perhaps 15 or 20 feet of the road, but on the other hand, it appeared to be fairly shallow in most places, which suggested that crossing it (as my smartphone was suggesting) might be feasible.  Except that there were a few potholes of indeterminate depth filled with swiftly swirling water, with no clear way to avoid them.  Plus the water had eroded the road surface a foot or two below its level elsewhere, which suggested that attempting to drive into the stream might leave my rental car high-centered on the newly crafted bank, leaving my poor car stuck with its nose down in the water and its rear wheels spinning helplessly in the air.

Fortunately, rental cars do have a reverse gear, but unfortunately my body is less happy than it might once have been to maintain the bent-around posture required to look out the rear window while driving backwards several hundred yards down a winding gravel road.  Fortunately, like many late-model cars, this one has a rear-view camera that automatically activates when the car is put into the reverse gear, but unfortunately I was unable to convince myself that driving several hundred yards backwards down a narrow and winding gravel road running through a forest was a particularly good beginner's use of this new-age driving technique.  (So maybe I should take the hint and practice driving backwards using the video in a parking lot?  Or maybe not...)

Which led to the next option, that of turning the car around on a rutted one-lane gravel road.  Fortunately the car is a compact, so this turned out to be just barely possible, and even more fortunately there were no other cars on the road waiting for me to complete my multipoint-star turn-around maneuver.  (Some of my acquaintances will no doubt point out that had I been driving a large pickup, crossing the stream would have been a trivial event unworthy of any notice.  This might be true, but I was in fact driving a compact.)

But all is well that ends well.  After a few strident but easily ignored protests, my phone forgave my inexplicable deviation from its carefully planned and well-crafted route and deigned to guide me the rest of the way to my destination.

And, yes, it even kept me on paved roads.

October 23, 2019 10:02 PM

October 18, 2019

Matthew Garrett: Letting Birds scooters fly free

(Note: These issues were disclosed to Bird, and they tell me that fixes have rolled out. I haven't independently verified)

Bird produce a range of rental scooters that are available in multiple markets. With the exception of the Bird Zero[1], all their scooters share a common control board described in FCC filings. The board contains three primary components - a Nordic NRF52 Bluetooth controller, an STM32 SoC and a Quectel EC21-V modem. The Bluetooth and modem are both attached to the STM32 over serial and have no direct control over the rest of the scooter. The STM32 is tied to the scooter's engine control unit and lights, and also receives input from the throttle (and, on some scooters, the brakes).

The pads labeled TP7-TP11 near the underside of the STM32 and the pads labeled TP1-TP5 near the underside of the NRF52 provide Serial Wire Debug, although confusingly the data and clock pins are the opposite way around between the STM and the NRF. Hooking this up via an STLink and using OpenOCD allows dumping of the firmware from both chips, which is where the fun begins. Running strings over the firmware from the STM32 revealed "Set mode to Free Drive Mode". Challenge accepted.

Working back from the code that printed that, it was clear that commands could be delivered to the STM from the Bluetooth controller. The Nordic NRF52 parts are an interesting design - like the STM, they have an ARM Cortex-M microcontroller core. Their firmware is split into two halves, one the low level Bluetooth code and the other application code. They provide an SDK for writing the application code, and working through Ghidra made it clear that the majority of the application firmware on this chip was just SDK code. That made it easier to find the actual functionality, which was just listening for writes to a specific BLE attribute and then hitting a switch statement depending on what was sent. Most of these commands just got passed over the wire to the STM, so it seemed simple enough to just send the "Free drive mode" command to the Bluetooth controller, have it pass that on to the STM and win. Obviously, though, things weren't so easy.

It turned out that passing most of the interesting commands on to the STM was conditional on a variable being set, and the code path that hit that variable had some impressively complicated looking code. Fortunately, I got lucky - the code referenced a bunch of data, and searching for some of the values in that data revealed that they were the AES S-box values. Enabling the full set of commands required you to send an encrypted command to the scooter, which would then decrypt it and verify that the cleartext contained a specific value. Implementing this would be straightforward as long as I knew the key.

Most AES keys are 128 bits, or 16 bytes. Digging through the code revealed 8 bytes worth of key fairly quickly, but the other 8 bytes were less obvious. I finally figured out that 4 more bytes were the value of another Bluetooth variable which could be simply read out by a client. The final 4 bytes were more confusing, because all the evidence made no sense. It looked like it came from passing the scooter serial number to atoi(), which converts an ASCII representation of a number to an integer. But this seemed wrong, because atoi() stops at the first non-numeric value and the scooter serial numbers all started with a letter[2]. It turned out that I was overthinking it and for the vast majority of scooters in the fleet, this section of the key was always "0".
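
Putting those pieces together looks roughly like this (the field order and byte layout are my assumptions for illustration, not verified against the firmware): 8 bytes recovered from the STM32 firmware, 4 bytes read from a BLE characteristic, and 4 bytes from atoi() of the serial number, which is simply 0 for serials starting with a letter.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Assemble the 16-byte AES key as described; layout and endianness assumed. */
static void build_key(uint8_t key[16],
                      const uint8_t fw_bytes[8],    /* recovered from the firmware */
                      const uint8_t ble_bytes[4],   /* readable Bluetooth attribute */
                      const char *serial)           /* printed on the scooter */
{
        uint32_t tail = (uint32_t)atoi(serial);     /* "A1234..." -> 0 for most of the fleet */

        memcpy(key, fw_bytes, 8);
        memcpy(key + 8, ble_bytes, 4);
        memcpy(key + 12, &tail, 4);                 /* host byte order assumed */
}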

At that point I had everything I needed to write a simple app to unlock the scooters, and it worked! For about 2 minutes, at which point the network would notice that the scooter was unlocked when it should be locked and send a lock command to force disable the scooter again. Ah well.

So, what else could I do? The next thing I tried was just modifying some STM firmware and flashing it onto a board. It still booted, indicating that there was no sort of verified boot process. Remember what I mentioned about the throttle being hooked through the STM32's analogue to digital converters[3]? A bit of hacking later and I had a board that would appear to work normally, but about a minute after starting the ride would cut the throttle. Alternative options are left as an exercise for the reader.

Finally, there was the component I hadn't really looked at yet. The Quectel modem actually contains its own application processor that runs Linux, making it significantly more powerful than any of the chips actually running the scooter application[4]. The STM communicates with the modem over serial, sending it an AT command asking it to make an SSL connection to a remote endpoint. It then uses further AT commands to send data over this SSL connection, allowing it to talk to the internet without having any sort of IP stack. Figuring out just what was going over this connection was made slightly difficult by virtue of all the debug functionality having been ripped out of the STM's firmware, so in the end I took a more brute force approach - I identified the address of the function that sends data to the modem, hooked up OpenOCD to the SWD pins on the STM, ran OpenOCD's gdb stub, attached gdb, set a breakpoint for that function and then dumped the arguments being passed to that function. A couple of minutes later and I had a full transaction between the scooter and the remote.

The scooter authenticates against the remote endpoint by sending its serial number and IMEI. You need to send both, but the IMEI didn't seem to need to be associated with the serial number at all. New connections seemed to take precedence over existing connections, so it would be simple to just pretend to be every scooter and hijack all the connections, resulting in scooter unlock commands being sent to you rather than to the scooter or allowing someone to send fake GPS data and make it impossible for users to find scooters.

In summary: Secrets that are stored on hardware that attackers can run arbitrary code on probably aren't secret, not having verified boot on safety critical components isn't ideal, devices should have meaningful cryptographic identity when authenticating against a remote endpoint.

Bird responded quickly to my reports, accepted my 90 day disclosure period and didn't threaten to sue me at any point in the process, so good work Bird.

(Hey scooter companies I will absolutely accept gifts of interesting hardware in return for a cursory security audit)

[1] And some very early M365 scooters
[2] The M365 scooters that Bird originally deployed did have numeric serial numbers, but they were 6 characters of type code followed by a / followed by the actual serial number - the number of type codes was very constrained and atoi() would terminate at the / so this was still not a large keyspace
[3] Interestingly, Lime made a different design choice here and plumb the controls directly through to the engine control unit without the application processor having any involvement
[4] Lime run their entire software stack on the modem's application processor, but because of [3] they don't have any realtime requirements so this is more straightforward


October 18, 2019 11:44 AM

October 14, 2019

Paul E. Mc Kenney: The Old Man and His Smartphone, Episode III

I still haven't installed many apps, but I have already come across lookalike apps that offer interesting services that are less pertinent to my current mode of existence than the apps I was actually looking for.  So perhaps app spam will become as much an issue as plain old email spam.

I also took my first-ever selfie, thus learning that using the camera on the same side of the smartphone as the screen gets you a mirror-imaged photograph.  I left the selfie that way on my obligatory Facebook post for purposes of historical accuracy and as a cautionary tale, but it turns out to be quite easy to flip photos (both horizontally and vertically) from the Android Gallery app. It is also possible to change brightness and contrast, add captions, add simple graphics, scrawl over the photo in various colors, adjust perspective, and so on.  An application popped up and offered much much much more (QR code scanning! OCR! Other stuff I didn't bother reading!), but only if I would agree to the license.  Which I might do some time in the future.

I have not yet worked out how I will carry the smartphone long term.  For the moment, it rests in the classic nerd position in my shirt pocket (truth in advertising!!!).

My wife was not all that impressed with the smartphone, which is not too surprising given that grade-school students commonly have them.  She did note that if someone broke into it, they could be taking pictures of her without her knowledge.  I quickly overcame that potential threat by turning the smartphone the other side up, so that any unauthorized photography would be confined to the inside of my shirt pocket.  :-)

October 14, 2019 02:46 PM

October 12, 2019

Paul E. Mc Kenney: The Old Man and His Smartphone, Episode II

At some point in the setup process, it was necessary to disable wifi.  And I of course forgot to re-enable it.  A number of apps insisted on downloading new versions.  Eventually I realized my mistake, and re-enabled wifi, but am still wondering just how severe a case of sticker shock I am in for at the end of the month.

Except that when I re-enabled wifi, I did so in my hotel room.  During the last day of my stay at that hotel.  So I just now re-enabled it again on the shuttle.  I can clearly see that I have many re-enablings of wifi in my future, apparently one per hotspot that I visit.  :-)

Some refinement of notifications is still required.  Some applications notify me, but upon opening the corresponding app, there is no indication of any reason for notification.  I have summarily disabled notifications for some of these, and will perhaps learn the hard way why this was a bad idea.  Another issue is that some applications have multiple devices on which they can notify me.  It would be really nice if they stuck to the device I was actively using at the time rather than hitting them all, but perhaps that is too much to ask for in this hyperconnected world.

My new smartphone's virtual keyboard represents a definite improvement over the multipress nature of text messaging on my old flip phone, but it does not hold a candle to a full-sized keyboard.  However, even this old man must confess that it is much faster to respond to the smartphone than to the laptop if both start in their respective sleeping states.  There is probably an optimal strategy in there somewhere!  :-)

October 12, 2019 12:32 AM

October 11, 2019

Michael Kerrisk (manpages): man-pages-5.03 is released

I've released man-pages-5.03. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This release resulted from patches, bug reports, reviews, and comments from 45 contributors. The release includes over 200 commits that change around 80 pages.

The most notable of the changes in man-pages-5.03 are the following:

October 11, 2019 08:59 PM

October 10, 2019

Paul E. Mc Kenney: The Old Man and His Smartphone, Episode I

I have long known that I would sooner or later be getting a smartphone, and this past Tuesday it finally happened.  So, yes, at long last I am GPS-enabled, and much else besides, a little of which I actually know how to use.

It took quite a bit of assistance to get things all wired together, so a big "Thank You" to my fellow bootcamp members!  I quickly learned that simply telling applications that they can't access anything is self-defeating, though one particular application reacted by simply not letting go of the screen.  Thankfully someone was on hand to tell me about the button in the lower right, and how to terminate an offending application by swiping up on it. And I then quickly learned that pressing an app button spawns a new instance of that app, whether or not an instance was already running.  I quickly terminated a surprising number of duplicate app instances that I had spawned over the prior day or so.

Someone also took pity on me and showed me how to silence alerts from attention-seeking apps, with one notable instance being an app that liked to let me know, for each and every instance of spam, that spam had arrived.

But having a portable GPS receiver has proven handy a couple of times already, so I can already see how these things could become quite addictive.  Yes, I resisted for a great many years, but my smartphone-free days are now officially over.  :-)

October 10, 2019 08:47 PM

October 05, 2019

James Bottomley: Why Ethical Open Source Really Isn’t

A lot of virtual ink has been expended debating the practicalities of the new push to adopt so-called ethical open source licences. The two principal arguments are that it’s not legally enforceable and that it’s against the Open Source Definition. Neither of these seems to be hugely controversial and the proponents of ethical licences even acknowledge the latter by starting a push to change the OSD itself. I’m not going to rehash these points but instead I’m going to examine the effects injecting this form of ethics would have on Open Source Communities and society in general. As you can see from the title I already have an opinion but I hope to explain in a reasoned way how that came about.

Ethics is Absolute, Ethical Positions are Mostly Relative

Ethics itself is the actual process by which philosophical questions of human morality are resolved. The job of Ethics is to give moral weight to consequences in terms of good and evil (or ethical and unethical). However, ethics also recognizes that actions have indivisible compound consequences, of which some would often be classified as unethical and some as ethical. There are actually very few actions where all compound consequences are wholly Ethical (or Unethical). Thus the absolute position that all compound consequences must be ethical rarely exists in practice and what people actually mean when they say an action is “ethical” is that in their judgment the unethical consequences are outweighed by the ethical ones. Where and how you draw this line of ethical being outweighed by unethical is inherently political and can vary from person to person.

To give a concrete example tied to the UN Declaration of Human Rights (since that seems to be being held up as the pinnacle of unbiased ethics): The right to bear arms is enshrined in the US constitution as Amendment 2 and thus is protected under UNDHR Article 8. However, the UNDHR also recognizes under Article 3 the right to life, liberty and security of person and it’s arguable that flooding the country with guns precipitating mass shootings violates this article. Thus restricting guns in the US would violate 8 and support 3, and not restricting them does the opposite. Which is more important is essentially a political decision and where you fall depends largely on whether you see yourself as Republican or Democrat. The point being this is a classical ethical conundrum where there is no absolute ethical position because it depends on the relative weights you give to the ethical and unethical consequences. The way out of this is negotiation between both sides to achieve a position not necessarily that each side supports wholeheartedly but which each side can live with.

The above example shows the problem of ethical open source because there are so few wholly ethical actions as to make conditioning a licence on this alone pointlessly ineffective and to condition it on actions with mixed ethical consequences effectively injects politics because the line has to be drawn somewhere, which means that open source under this licence becomes a politicized process.

The Relativity of Protest

Once you’ve made the political determination that a certain mixed consequence thing is unethical there’s still the question of what you do about it. For the majority, expressing their preference through the ballot box every few years is sufficient. For others the gravity is so great that some form of protest is required. However, what forms of protest you choose to adhere to and what you choose not to is also an ethically relative choice. For instance a lot of the people pushing ethical open source would support the #NoTechForICE political movement. However if you look at locations on twitter, most of them are US based and thus pay taxes to the US government that supports and funds the allegedly unethical behaviour of ICE. Obviously they could protest this by withdrawing their support via taxation but they choose not to because the personal consequences would be too devastating. Instead they push ethical licences and present this as a simple binary choice when it isn’t at all: forcing a political position via a licence is a form of protest which may have fewer personally devastating consequences, but one which people who weigh the ethical consequences are still entitled to think might be devastating for open source itself and thus an incorrect protest choice.

Community, Discrimination and Society

One of the great advances Open Source Communities have made over the past few years is the attempts to eliminate all forms of discrimination either by the introduction of codes of conduct or via other means. What this is doing is making Open Source more inclusive even as society at large becomes more polarized. In the early days of open source, we realized that simple forms of inclusion, talking face to face, had huge advantages in social terms (the face on the end of the email) and that has been continued into modern times and enhanced with the idea that conferences should be welcoming to all people and promote unbiased discussion in an atmosphere of safety. If Society itself is ever to overcome the current political polarization it will have to begin with both sides talking to each other presumably in one of the few remaining unpolarized venues for such discussion and thus keeping Open Source Communities one of these unpolarized venues is a huge societal good. That means keeping open source unpoliticized and thus free from discrimination against people, gender, sexual orientation, political belief or field of endeavour; the very things our codes of conduct mostly say anyway.

It is also somewhat ironic that the very people who claim to be champions against discrimination in open source now find it necessary to introduce discrimination to further their own supposedly ethical ends.

Conclusion

I hope I’ve demonstrated that ethical open source is really nothing more than co-opting open source as a platform for protest and as such will lead to the politicization of open source and its allied communities causing huge societal harm by removing more of our much needed unpolarized venues for discussion. It is my ethical judgement that this harm outweighs the benefits of using open source as a platform for protest and is thus ethically wrong. With regard to the attempts to rewrite the OSD to be more reflective of modern society, I contend that instead of increasing our ability to discriminate by removing the fields of endeavour restriction, we should instead be tightening the anti-discrimination clauses by naming more things that shouldn’t be discriminated against which would make Open Source and the communities which are created by it more welcoming to all manner of contributions and keep them as neutral havens where people of different beliefs can nevertheless observe first hand the utility of mutual collaboration, possibly even learning to bridge the political, cultural and economic divides as a consequence.

October 05, 2019 10:19 PM

October 04, 2019

James Morris: Linux Security Summit North America 2019: Videos and Slides

LSS-NA for 2019 was held in August in San Diego.  Slides are available at the Schedule, and videos of the talks may now be found in this playlist.

LWN covered the following presentations:

The new 3-day format (as previously discussed) worked well, and we’re expecting to continue this next year for LSS-NA.

Details on the 2020 event will be announced soon!

Announcements may be found on the event twitter account @LinuxSecSummit, on the linux-security-module mailing list, and via this very blog.

October 04, 2019 08:09 PM

Matthew Garrett: Investigating the security of Lime scooters

(Note: to be clear, this vulnerability does not exist in the current version of the software on these scooters. Also, this is not the topic of my Kawaiicon talk.)

I've been looking at the security of the Lime escooters. These caught my attention because:
(1) There's a whole bunch of them outside my building, and
(2) I can see them via Bluetooth from my sofa
which, given that I'm extremely lazy, made them more attractive targets than something that would actually require me to leave my home. I did some digging. Limes run Linux and have a single running app that's responsible for scooter management. They have an internal debug port that exposes USB and which, until this happened, ran adb (as root!) over this USB. As a result, there's a fair amount of information available in various places, which made it easier to start figuring out how they work.

The obvious attack surface is Bluetooth (Limes have wifi, but only appear to use it to upload lists of nearby wifi networks, presumably for geolocation if they can't get a GPS fix). Each Lime broadcasts its name as Lime-12345678 where 12345678 is 8 digits of hex. They implement Bluetooth Low Energy and expose a custom service with various attributes. One of these attributes (0x35 on at least some of them) sends Bluetooth traffic to the application processor, which then parses it. This is where things get a little more interesting. The app has a core event loop that can take commands from multiple sources and then makes a decision about which component to dispatch them to. Each command is of the following form:

AT+type,password,time,sequence,data$

where type is one of either ATH, QRY, CMD or DBG. The password is a TOTP derived from the IMEI of the scooter, the time is simply the current date and time of day, the sequence is a monotonically increasing counter and the data is a blob of JSON. The command is terminated with a $ sign. The code is fairly agnostic about where the command came from, which means that you can send the same commands over Bluetooth as you can over the cellular network that the Limes are connected to. Since locking and unlocking is triggered by one of these commands being sent over the network, it ought to be possible to do the same by pushing a command over Bluetooth.
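
Building such a command as a string looks roughly like this (a sketch with placeholder values; the real password field is the TOTP derived from the scooter's IMEI):

#include <stdio.h>

/* Format: AT+type,password,time,sequence,data$  (terminated by '$') */
static int build_cmd(char *buf, size_t len, const char *type,
                     const char *totp, const char *timestamp,
                     unsigned int seq, const char *json)
{
        return snprintf(buf, len, "AT+%s,%s,%s,%u,%s$",
                        type, totp, timestamp, seq, json);
}

/*
 * Example (illustrative values only):
 *   build_cmd(buf, sizeof(buf), "CMD", totp, "2019-10-04 12:00:00", 7,
 *             "{\"some_field\":1}");
 */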

Unfortunately for nefarious individuals, all commands sent over Bluetooth are ignored until an authentication step is performed. The code I looked at had two ways of performing authentication - you could send an authentication token that was derived from the scooter's IMEI and the current time and some other stuff, or you could send a token that was just an HMAC of the IMEI and a static secret. Doing the latter was more appealing, both because it's simpler and because doing so flipped the scooter into manufacturing mode at which point all other command validation was also disabled (bye bye having to generate a TOTP). But how do we get the IMEI? There's actually two approaches:

1) Read it off the sticker that's on the side of the scooter (obvious, uninteresting)
2) Take advantage of how the scooter's Bluetooth name is generated

Remember the 8 digits of hex I mentioned earlier? They're generated by taking the IMEI, encrypting it using DES and a static key (0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88), discarding the first 4 bytes of the output and turning the last 4 bytes into 8 digits of hex. Since we're discarding information, there's no way to immediately reverse the process - but IMEIs for a given manufacturer are all allocated from the same range, so we can just take the entire possible IMEI space for the modem chipset Lime use, encrypt all of them and end up with a mapping of name to IMEI (it turns out this doesn't guarantee that the mapping is unique - for around 0.01%, the same name maps to two different IMEIs). So we now have enough information to generate an authentication token that we can send over Bluetooth, which disables all further authentication and enables us to send further commands to disconnect the scooter from the network (so we can't be tracked) and then unlock and enable the scooter.
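
The derivation as described, sketched with OpenSSL's legacy DES API (the post doesn't spell out how the IMEI is packed into the 8-byte DES input block, so that part is left as an assumed input here):

#include <openssl/des.h>
#include <stdio.h>

/* The static key quoted above. */
static const_DES_cblock lime_key =
        { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };

/* imei_block: the IMEI packed into one 8-byte DES block (packing assumed). */
static void lime_name(const_DES_cblock *imei_block, char name[32])
{
        DES_key_schedule ks;
        DES_cblock out;

        DES_set_key_unchecked(&lime_key, &ks);
        DES_ecb_encrypt(imei_block, &out, &ks, DES_ENCRYPT);

        /* Discard the first 4 output bytes, hex-encode the last 4. */
        snprintf(name, 32, "Lime-%02X%02X%02X%02X",
                 out[4], out[5], out[6], out[7]);
}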

(Note: these are actual crimes)

This all seemed very exciting, but then a shock twist occurred - earlier this year, Lime updated their authentication method and now there's actual asymmetric cryptography involved and you'd need to engage in rather more actual crimes to obtain the key material necessary to authenticate over Bluetooth, and all of this research becomes much less interesting other than as an example of how other companies probably shouldn't do it.

In any case, congratulations to Lime on actually implementing security!


October 04, 2019 06:10 AM

October 01, 2019

James Bottomley: Retro Engineering: Updating a Nexus One for the modern world

A few of you who’ve met me know that my current Android phone is an ancient Nexus One. I like it partly because of the small form factor, partly because I’ve re-engineered pieces of the CyanogenMod OS it runs to suit me and can’t be bothered to keep up-porting to newer versions and partly because it annoys a lot of people in the Open Source Community who believe everyone should always be using the latest greatest everything. Actually, the last reason is why, although the Nexus One I currently run is the original Google gave me way back in 2010, various people have donated a stack of them to me just in case I might need a replacement.

However, the principal problem with running one of these ancient beasts is that they cannot, due to various flash sizing problems, run anything later than Android 2.3.7 (or CyanogenMod 7.1.0) and since the OpenSSL in that is ancient, it won’t run any TLS protocol beyond 1.0, so with the rush to move to encryption and secure the web, more and more websites are disallowing the old (and, let’s admit it, buggy) TLS 1.0 protocol, meaning more and more of the web is steadily going dark to my mobile browser. It’s reached the point where simply to get a boarding card, I have to download the web page from my desktop and transfer it manually to the phone. This started as an annoyance, but it’s becoming a major headache as the last of the websites I still use for mobile service go dark to me. So the task I set myself is to fix this by adding the newer protocols to my phone … I’m an open source developer, I have the source code, it should be easy, right …?

First Problem, the source code and Build Environment

Ten years ago, I did build CyanogenMod from scratch and install it on my phone, so what could be so hard about reviving the build environment today? Firstly there was finding it, but github still has a copy and the AOSP project it links to still keeps old versions, so simply doing a

curl https://dl-ssl.google.com/dl/googlesource/git-repo/repo > ~/bin/repo
repo init -u git://github.com/CyanogenMod/android.git -b gingerbread --repo-url=git://github.com/android/tools_repo.git
repo sync

Actually worked (of course it took days of googling to remember these basic commands). However the “brunch passion” command to actually build it crashed and burned somewhat spectacularly. Apparently the build environment has moved on in the last decade.

The first problem is that most of the prebuilt x86 binaries are 32 bit. This means you have to build the host for 32 bit, and that involves quite a quest on an x86_64 system to make sure you have all the 32 bit build precursors. The next problem is that java 1.6.0 is required, but fortunately openSUSE build service still has it. Finally, the big problem is a load of c++ compile issues which turn out to be due to the fact that the c++ standard has moved on over the years and gcc-7 tries the latest one. Fortunately this can be fixed with

export HOST_GLOBAL_CPPFLAGS=-std=gnu++98

And the build works. If you’re only building the OpenSSL support infrastructure, you don’t need to build the entire thing, but figuring out the pieces you do need is hard, so building everything is a good way to finesse the dependency problem.

Figuring Out how to Upgrade OpenSSL

Unfortunately, this is Android, so you can’t simply drop a new OpenSSL library into the system and have it work. Firstly, the version of OpenSSL that Android builds with (at least for 2.3.7) is heavily modified, so even an android build of vanilla OpenSSL won’t work because it doesn’t have the necessary patches. Secondly, OpenSSL is very prone to ABI breaks, so if you start with 0.9.8, for instance, you’re never going to be able to support TLS 1.2. Fortunately, Android 2.3.7 has OpenSSL 1.0.0a so it is in the 1.0.0 ABI and versions of openssl for that ABI do support later versions of TLS (but only in version 1.0.1 and beyond). The solution actually is to look at external/openssl and simply update it to the latest version the project has (for CyanogenMod this is cm-10.1.2 which is openssl 1.0.1c … still rather ancient but at least supporting TLS 1.2).

cd external/openssl
git checkout cm-10.1.2
mm

And it builds and even works when installed on the phone … great. Except that nothing can use the later ciphers because the java provider (JSSE) also needs updating to support them. Updating the JSSE provider is a bit of a pain but you can do it in two patches:

Once this is done and installed you can browse most websites. There are still some exceptions because of websites that have caught the “can’t use sha1 in any form” bug, but these are comparatively minor. The two patches apply to libcore and once you have them, you can rebuild and install it.

Safely Installing the updated files

Installing new files in android can be a bit of a pain. The ideal way would be to build the entire rom and reflash, but that’s a huge pain, so the simple way is simply to open the /system partition and dump the files in. Opening the system partition is easy, just do

adb shell
# mount -o remount,rw /system

Uploading the required files is more difficult primarily because you want to make sure you can recover if there’s a mistake. I do this by transferring the files to <file>.new:

adb push out/target/product/passion/system/lib/libcrypto.so /system/lib/libcrypto.so.new
adb push out/target/product/passion/system/lib/libssl.so /system/lib/libssl.so.new
adb push out/target/product/passion/system/framework/core.jar /system/framework/core.jar.new

Now move everything into place and reboot

adb shell
# mv /system/lib/libcrypto.so /system/lib/libcrypto.so.old && mv /system/lib/libcrypto.so.new /system/lib/libcrypto.so
# mv /system/lib/libssl.so /system/lib/libssl.so.old && mv /system/lib/libssl.so.new /system/lib/libssl.so
# mv /system/framework/core.jar /system/framework/core.jar.old && mv /system/framework/core.jar.new /system/framework/core.jar

If the reboot fails, use adb to recover

adb shell
# mount /system
# mv /system/lib/libcrypto.so.old /system/lib/libcrypto.so
...

Conclusions

That’s it. Following the steps above, my Nexus One can now browse useful internet sites like my airline and the New York Times. The only website I’m still having trouble with is the Wall Street Journal, because they disabled all ciphers depending on SHA-1.

October 01, 2019 11:17 PM

Paul E. Mc Kenney: Announcement: Change of Venue

This week of September 30th marks my last week at IBM, and I couldn't be more excited to be moving on to the next phase of my career by joining a great team at Facebook! Yes, yes, I am bringing with me my maintainership of both Linux-kernel RCU and the Linux-kernel memory model, my editing of "Is Parallel Programming Hard, And, If So, What Can You Do About It?", and other similar items, just in case you were wondering. ;-)

Of course, it is only appropriate for me to express my gratitude and appreciation for the many wonderful colleagues at IBM, before that at Sequent, and more recently at Red Hat. Together with others in the various communities, we in our own modest way have changed the world several times over. It was a great honor and privilege to have worked with you, and I expect and hope that our paths will cross again. For those in the Linux-kernel and C/C++ standards communities, our paths will continue to run quite closely, and I look forward to continued productive and enjoyable collaborations.

I also have every reason to believe that IBM will continue to be a valuable and essential part of the computing industry as it continues through its second century, especially given the recent addition of Red Hat to the IBM family.

But all that aside, I am eagerly looking forward to starting Facebook bootcamp next week. Which is said to involve one of those newfangled smartphones. ;-)

October 01, 2019 10:24 PM

September 27, 2019

Matthew Garrett: Do we need to rethink what free software is?

Licensing has always been a fundamental tool in achieving free software's goals, with copyleft licenses deliberately taking advantage of copyright to ensure that all further recipients of software are in a position to exercise free software's four essential freedoms. Recently we've seen people raising two very different concerns around existing licenses and proposing new types of license as remedies, and while both are (at present) incompatible with our existing concepts of what free software is, they both raise genuine issues that the community should seriously consider.

The first is the rise in licenses that attempt to restrict business models based around providing software as a service. If users can pay Amazon to provide a hosted version of a piece of software, there's little incentive for them to pay the authors of that software. This has led to various projects adopting license terms such as the Commons Clause that effectively make it nonviable to provide such a service, forcing providers to pay for a commercial use license instead.

In general the entities pushing for these licenses are VC backed companies[1] who are themselves benefiting from free software written by volunteers that they give nothing back to, so I have very little sympathy. But it does raise a larger issue - how do we ensure that production of free software isn't just a mechanism for the transformation of unpaid labour into corporate profit? I'm fortunate enough to be paid to write free software, but many projects of immense infrastructural importance are simultaneously fundamental to multiple business models and also chronically underfunded. In an era where people are becoming increasingly vocal about wealth and power disparity, this obvious unfairness will result in people attempting to find mechanisms to impose some degree of balance - and given the degree to which copyleft licenses prevented certain abuses of the commons, it's likely that people will attempt to do so using licenses.

At the same time, people are spending more time considering some of the other ethical outcomes of free software. Copyleft ensures that you can share your code with your neighbour without your neighbour being able to deny the same freedom to others, but it does nothing to prevent your neighbour using your code to deny other fundamental, non-software, freedoms. As governments make more and more use of technology to perform acts of mass surveillance, detention, and even genocide, software authors may feel legitimately appalled at the idea that they are helping enable this by allowing their software to be used for any purpose. The JSON license includes a requirement that "The Software shall be used for Good, not Evil", but the lack of any meaningful clarity around what "Good" and "Evil" actually mean makes it hard to determine whether it achieved its aims.

The definition of free software includes the assertion that it must be possible to use the software for any purpose. But if it is possible to use software in such a way that others lose their freedom to exercise those rights, is this really the standard we should be holding? Again, it's unsurprising that people will attempt to solve this problem through licensing, even if in doing so they no longer meet the current definition of free software.

I don't have solutions for these problems, and I don't know for sure that it's possible to solve them without causing more harm than good in the process. But in the absence of these issues being discussed within the free software community, we risk free software being splintered - on one side, with companies imposing increasingly draconian licensing terms in an attempt to prop up their business models, and on the other side, with people deciding that protecting people's freedom to life, liberty and the pursuit of happiness is more important than protecting their freedom to use software to deny those freedoms to others.

As stewards of the free software definition, the Free Software Foundation should be taking the lead in ensuring that these issues are discussed. The priority of the board right now should be to restructure itself to ensure that it can legitimately claim to represent the community and play the leadership role it's been failing to in recent years, otherwise the opportunity will be lost and much of the activist energy that underpins free software will be spent elsewhere.

If free software is going to maintain relevance, it needs to continue to explain how it interacts with contemporary social issues. If any organisation is going to claim to lead the community, it needs to be doing that.

[1] Plus one VC firm itself - Bain Capital, an investment firm notorious for investing in companies, extracting as much value as possible and then allowing the companies to go bankrupt

September 27, 2019 05:47 PM

September 24, 2019

Pete Zaitcev: Github

A job posting at LinkedIn in my area includes the following instruction:

Don't apply if you can't really write code and don't have a github profile. This is a job for an expert level coder.

I remember the simpler times when LWN authors fretted about the SourceForge monopoly capturing all FLOSS projects.

P.S. For the record, I do have a profile at Github. It is required in order to contribute to RDO, because it is the only way to log in to their Gerrit and submit patches for review. Ironically, Fedora offers single sign-on with FAS and RDO is a Red Hat sponsored project, but nope: it's easier to force contributors into Github.

September 24, 2019 08:06 PM

September 14, 2019

Matthew Garrett: It's time to talk about post-RMS Free Software

Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

[1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

(A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
(B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

[2] This argument is, obviously, bullshit

September 14, 2019 11:57 AM

September 10, 2019

Davidlohr Bueso: Linux v5.2: Performance Goodies

locking/rwsem: optimize trylocking for the uncontended case

This applies the idea that in most cases a rwsem will be uncontended (single threaded). For example, experimentation showed that page fault paths really expect this. The change essentially stops the code from re-reading a cacheline in a tight loop over and over. Note, however, that this can be a double-edged sword, as microbenchmarks have shown performance deterioration with high numbers of tasks, albeit mainly in pathological workloads.
[Commit ddb20d1d3aed a338ecb07a33]

lib/lockref: limit number of cmpxchg loop retries

Unbounded loops are rather frowned upon, especially ones doing CAS operations. As such, Linus suggested adding an arbitrary upper bound to the loop to force the slowpath (spinlock fallback), which was seen to improve performance in an ad-hoc testcase on hardware that ends up playing the loop-retry game.
[Commit 893a7d32e8e0]
 

rcu: avoid unnecessary softirqs when system is idle

On an idle system with no pending callbacks, RCU softirqs to process callbacks were being triggered repeatedly. Specifically, the mismatch between cpu_no_qs and core_needs_qs was addressed.
[Commit 671a63517cf9]

rcu: fix potential cond_resched() slowdowns

When using the jiffies_till_sched_qs kernel boot parameter, a bug left jiffies_to_sched_qs uninitialized as zero, which in turn negatively impacts cond_resched().
[Commit 6973032a602e]
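
For anyone who wants to poke at this knob, a minimal sketch of setting and inspecting it; the sysfs path is an assumption about where the built-in rcutree module parameters are exposed, not something taken from the commit:

# On the kernel command line (value in jiffies; 20 is just an example):
#   rcutree.jiffies_till_sched_qs=20

# Read the current value back at runtime:
cat /sys/module/rcutree/parameters/jiffies_till_sched_qs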

mm: improve vmap allocation

Doing a vmalloc can be quite slow at times, and since it is done with preemption disabled, it can affect workloads that are sensitive to this. The problem lies in the fact that allocating a new VA area means iterating over the list of busy areas until a suitable hole is found between two of them. The changes propose the always-reliable red-black tree to keep blocks sorted by their offsets, along with a list keeping the free space in order of increasing addresses.
[Commit 68ad4a330433 68571be99f32]


mm/gup: safe usage of get_user_pages_fast() with DAX

Users of get_user_pages_fast() see potential performance benefits compared to the non-fast cousin by avoiding mmap_sem. However, drivers such as RDMA can pin these pages for a significant amount of time, which raises a number of issues for the filesystem, as referenced pages block a number of critical operations and are known to mess up DAX. A new FOLL_LONGTERM flag is added and checked accordingly, which also means that other users such as XDP can now be converted to gup_fast.
[Commit 932f4a630a69 b798bec4741b 73b0140bf0fe 7af75561e171 9fdf4aa15673 664b21e717cf f3b4fdb18cb5 ]

lib/sort: faster and smaller

Because CONFIG_RETPOLINE has made indirect calls much more expensive, these changes reduce the number made by the library sort functions, lib/sort and lib/list_sort. A number of optimizations and clever tricks are used such as a more efficient bottom up heapsort and playing nicer with store buffers.
[Commit 37d0ec34d111 22a241ccb2c1 8fb583c4258d 043b3f7b6388 b5c56e0cdd62]

ipc/mqueue: make msg priorities truly O(1)

By keeping the pointer to the tree's rightmost node, the process of consuming a message can be done in constant time, instead of logarithmic.
[Commit a5091fda4e3c]

x86/fpu: load FPU registers on return to userland

This is a large, 27-patch cleanup and optimization to only load FPU registers on return to userspace, instead of upon every context switch. This means that tasks that remain in kernel space do not load the registers. Accessing the FPU registers in the kernel requires disabling preemption and bottom halves, for the scheduler and softirqs respectively.

x86/hyper-v: implement EOI optimization

Avoid a vmexit on EOI. This was seen to slightly improve IOPS when testing nvme disks with raid and ext4.
[Commit ba696429d290]
 

btrfs: improve performance on fsync of files with multiple hardlinks

A fix for a performance regression seen in pgbench: an fsync of a file with multiple hard links could be turned into a full transaction commit, a fallback meant to avoid losing hard links and new ancestors of the fsynced inode.
[Commit b8aa330d2acb]

fsnotify: fix unlink performance regression

This restores an unlink performance optimization that avoids take_dentry_name_snapshot().
[Commit 4d8e7055a405]

block/bfq: do not merge queues on flash storage with queuing

Disable queue merging on non-rotational devices with internal queueing, thus boosting throughput on interleaved IO.
[Commit 8cacc5ab3eac]

September 10, 2019 07:26 PM

September 08, 2019

James Bottomley: The Mythical Economic Model of Open Source

It has become fashionable today to study open source through the lens of economic benefits to developers and sometimes draw rather alarming conclusions. It has also become fashionable to assume a business model tie and then berate the open source community, or their licences, for lack of leadership when the business model fails. The purpose of this article is to explain, in the first part, the fallacy of assuming any economic tie in open source at all and, in the second part, go on to explain how economics in open source is situational and give an overview of some of the more successful models.

Open Source is a Creative Intellectual Endeavour

All the creative endeavours of humanity, like art, science or even writing code, are often viewed as activities that produce societal benefit. Logically, therefore, the people who engage in them are seen as benefactors of society, but assuming people engage in these endeavours purely to benefit society is mostly wrong. People engage in creative endeavours because it satisfies some deep need within themselves to exercise creativity and solve problems often with little regard to the societal benefit. The other problem is that the more directed and regimented a creative endeavour is, the less productive its output becomes. Essentially to be truly creative, the individual has to be free to pursue their own ideas. The conundrum for society therefore is how do you harness this creativity for societal good if you can’t direct it without stifling the very creativity you want to harness? Obviously society has evolved many models that answer this (universities, benefactors, art incubation programmes, museums, galleries and the like) with particular inducements like funding, collaboration, infrastructure and so on.

Why Open Source development is better than Proprietary

Simply put, the Open Source model, involving huge freedoms to developers to decide direction and great opportunities for collaboration stimulates the intellectual creativity of those developers to a far greater extent than when you have a regimented project plan and a specific task within it. The most creatively deadening job for any engineer is to find themselves strictly bound within the confines of a project plan for everything. This, by the way, is why simply allowing a percentage of paid time for participating in Open Source seems to enhance input to proprietary projects: the liberated creativity has a knock on effect even in regimented development. However, obviously, the goal for any Corporation dependent on code development should be to go beyond the knock on effect and actually employ open source methodologies everywhere high creativity is needed.

What is Open Source?

Open Source has its origin in code sharing models, permissive from BSD and reciprocal from GNU. However, one of its great values is that the reasons why people do open source today aren't the same as the reasons the framework was created in the first place. Today Open Source is a framework which stimulates creativity among developers and helps them create communities, provides economic benefits to corporations (provided they understand how to harness them) and produces a great societal good in general in terms of published reusable code.

Economics and Open Source

As I said earlier, the framework of Open Source has no tie to economics, in the same way things like artistic endeavour don't. It is possible for a great artist to make money (as Picasso did), but it's equally possible for a great artist to live their whole life in penury (as van Gogh did). The point of the analogy is that trying to measure the greatness of the art by the income of the artist is completely wrong and shortsighted. Developing the ability to exploit your art for commercial gain is an additional skill an artist can develop (or not, as they choose); it's also an ability they could fail at, and in all cases it bears no relation to the societal good their art produces. In precisely the same way, finding an economic model that allows you to exploit open source (either individually or commercially) is firstly a matter of choice (if you have other reasons for doing Open Source, there's no need to bother) and secondly not a guarantee of success, because not all models succeed. Perhaps the easiest way to appreciate this is through the lens of personal history.

Why I got into Open Source

As a physics PhD student, I’d always been interested in how operating systems functioned, but thanks to the BSD lawsuit and being in the UK I had no access to the actual source code. When Linux came along as a distribution in 1992, it was a revelation: not only could I read the source code but I could have a fully functional UNIX like system at home instead of having to queue for time to write up my thesis in TeX on the limited number of department terminals.

After completing my PhD I was offered a job looking after computer systems in the department, and my first success was shaving a factor of ten off the computing budget by buying cheap Pentium systems running Linux instead of proprietary UNIX workstations. This success was nearly derailed by an NFS bug in Linux, but finding and fixing the bug (and getting it upstream into the 1.0.2 kernel) cemented the budget savings and proved to the department that we could handle this new technology for a fraction of the cost of the old. It also confirmed my desire to poke around in the Operating System, which I continued to do, even as I moved to America to work on Proprietary software.

In 2000 I got my first Open Source break when the product I'd been working on got sold to a silicon valley startup, SteelEye, whose business plan was to bring High Availability to Linux. As the only person on the team with an Open Source track record, I became first the Architect and later CTO of the company, with my first job being to make the somewhat eccentric Linux SCSI subsystem work for the shared SCSI clusters LifeKeeper then used. Getting SCSI working led to fun interactions with the Linux community, an invitation to present on fixing SCSI at the Kernel Summit in 2002, and the maintainership of SCSI in 2003. From that point, working on upstream open source became a fixture of my job requirements, and as I progressed through Novell, Parallels and now IBM, it also became a quality sought by employers.

I have definitely made some money consulting on Open Source, but it’s been dwarfed by my salary which does get a boost from my being an Open Source developer with an external track record.

The Primary Contributor Economic Models

Looking at the active contributors to Open Source, the primary model is that either your job description includes working on designated open source projects, so you're paid to contribute as your day job, or you were hired because of what you've already done in open source and contributing more is a tolerated use of your employer's time. A third, and by far smaller, group is people who work full-time on Open Source but fund themselves either by shared contributions like Patreon or Tidelift or by actively consulting on their projects. However, these models cover existing contributors and they're not really a route to becoming a contributor, because employers like certainty, so they're unlikely to hire someone with no track record to work on open source, and are probably not going to tolerate use of their time for developing random open source projects. This means that the route to becoming a contributor, like the route to becoming an artist, is to begin in your own time.

Users versus Developers

Open Source, by its nature, is built by developers for developers. This means that although the primary consumers of open source are end users, they get pretty much no say in how the project evolves. This lack of user involvement has been lamented over the years, especially in projects like the Linux Desktop, but no real community solution has ever been found. The bottom line is that users often don’t know what they want and even if they do they can’t put it in technical terms, meaning that all user driven product development involves extensive and expensive product research which is far beyond any open source project. However, this type of product research is well within the ability of most corporations, who can also afford to hire developers to provide input and influence into Open Source projects.

Business Model One: Reflecting the Needs of Users

In many ways, this has become the primary business model of open source. The theory is simple: develop a traditional customer focussed business strategy and execute it by connecting the gathered opinions of customers to the open source project in exchange for revenue from subscriptions, support or even early shipped product. The business value to the end user is simple: it's the business value of the product tuned to their needs, plus the fact that they wouldn't be prepared to develop the skills to interact with the open source developer community themselves. This business model starts to break down if the end users acquire developer sophistication, as happens with Red Hat and Enterprise users. However, this can still be combatted by making sure it's economically unfeasible for a single end user to match the breadth of the offering (the entire distribution). In this case, the ability of the end user to become involved in individual open source projects which matter to them is actually a better and cheaper way of doing product research and feeds back into the synergy of this business model.

This business model entirely breaks down when, as in the case of the cloud service provider, the end user becomes big enough and technically sophisticated enough to run their own distributions and sees doing this as a necessary adjunct to their service business. This means that you can no longer escape the technical sophistication of the end user by pursuing a breadth-of-offerings strategy.

Business Model Two: Drive Innovation and Standardization

Although venture capitalists (VCs) pay lip service to the idea of constant innovation, this isn't actually what they do as a business model: they tend to take an innovation and then monetize it. The problem is this model doesn't work for open source: retaining control of an open source project requires a constant stream of innovation within the source tree itself. Single innovations get attention, but unless they're followed up with another innovation, they tend to give the impression your source tree is stagnating, encouraging forks. However, the most useful property of open source is that by sharing a project and encouraging contributions, you can obtain a constant stream of innovation from a well-managed community. Once you have a constant stream of innovation to show, forking the project becomes much harder, even for a cloud service provider with hundreds of developers, because they must show they can match the innovation stream in the public tree. Add to that standardization, which in open source simply means getting your project adopted for use by multiple consumers (say two different clouds, or a range of industries). Further, if the project is largely run by a single entity and properly managed, seeing the incoming innovations allows you to recruit the best innovators, thus giving you direct ownership of most of the innovation stream. In the early days, you make money simply by offering user connection services as in Business Model One, but the ultimate goal is likely acquisition for the talent possessed, which is a standard VC exit strategy.

All of this points to the hypothesis that the current VC model is wrong. Instead of investing in people with the ideas, you should be investing in people who can attract and lead others with ideas.

Other Business Models

Although the models listed above have proven successful over time, they’re by no means the only possible ones. As the space of potential business models gets explored, it could turn out they’re not even the best ones, meaning the potential innovation a savvy business executive might bring to open source is newer and better business models.

Conclusions

Business models are optional extras with open source, and just because you have a successful open source project does not mean you'll have an equally successful business model unless you put sufficient thought into constructing and maintaining it. Thus a successful open source start-up requires three elements: a sound business model (or someone who can evolve one), a solid community leader and manager, and someone with technical ability in the problem space.

If you like working in Open Source as a contributor, you don’t necessarily have to have a business model at all and you can often simply rely on recognition leading to opportunities that provide sufficient remuneration.

Although there are several well known business models for exploiting open source, there’s no reason you can’t create your own different one but remember: a successful open source project in no way guarantees a successful business model.

September 08, 2019 09:35 AM

September 04, 2019

Linux Plumbers Conference: LPC waiting list closed; just a few days until the conference

The waiting list for this year’s Linux Plumbers Conference is now closed. All of the spots available have been allocated, so anyone who is not registered at this point will have to wait for next year. There will be no on-site registration. We regret that we could not accommodate everyone. The good news is that all of the microconferences, refereed talks, Kernel summit track, and Networking track will be recorded on video and made available as soon as possible after the conference. Anyone who could not make it to Lisbon this year will at least be able to catch up with what went on. Hopefully those who wanted to come will make it to a future LPC.

For those who are attending, we are just a few days away; you should have received an email with more details. Beyond that, the detailed schedule is available. There are also some tips on using the metro to get to the venue. As always, please send any questions or comments to “contact@linuxplumbersconf.org”.

September 04, 2019 09:31 PM

August 30, 2019

Pete Zaitcev: Docker Block Storage... say what again?

Found an odd job posting at the website of Rancher:

What you will be doing

Okay. Since they talk about consistency and replication together, this thing probably provides an actual service, in addition to the necessary orchestration. Kind of like the ill-fated Sheepdog. They may underestimate the amount of work necessary, sure. Look no further than Ceph RBD. Remember how much work it took for a genius like Sage? But a certain arrogance is essential in a start-up, and Rancher only employs 150 people.

Also, nobody is dumb enough to write orchestration in Go, right? So this probably is not just a layer on top of Ceph or whatever.

Well, it's still possible that it's merely an in-house equivalent of OpenStack Cinder, and they want it in Go because they are a Go house and if you have a hammer everything looks like a nail.

Either way, here's the main question: what does block storage have to do with Docker?

Docker, as far as I know, is a container runtime. And containers do not consume block storage. They plug into a Linux kernel that presents POSIX to them, only namespaced. Granted, certain applications consume block storage through Linux; that is why we have O_DIRECT. But to roll out a whole service like this just for those rare applications... I don't think so.

Why would anyone want block storage for (Docker) containers? Isn't it absurd? What am I missing and what is Rancher up to?

UPDATE:

The key thing to remember here is that while running containers aren't using block storage, Docker containers are distributed as disk images, and they get their own root filesystem by default. Therefore, any time anyone adds a Docker container, they have to allocate a block device and dump the application image into it. So, yes, it is some kind of Docker Cinder they are trying to build.

See Red Hat docs about managing Docker block storage in Atomic Host (h/t penguin42).
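
As a small aside, it is easy to check which storage backend a given Docker host uses; a devicemapper-backed host is exactly the block-device-per-container case described above (illustrative command only):

docker info | grep -i 'storage driver'
# prints e.g. "Storage Driver: devicemapper" or "Storage Driver: overlay2"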

August 30, 2019 05:52 PM

August 15, 2019

Pete Zaitcev: POST, PUT, and CRUD

Anyone who ever worked with object storage knows that PUT creates, GET reads, POST updates, and DELETE deletes. Naturally, right? POST is such a strange verb with oddball encodings that it's perfect to update, while GET and PUT are matching twins like read(2) and write(2). Imagine my surprise, then, when I found that the official definition of RESTful makes POST create objects and PUT update them. There is even a FAQ, which uses sophistry and appeals to the authority of RFCs in order to justify this.

So, in the world of RESTful solipsism, you would upload an object foo into a bucket buk by issuing "POST /buk?obj=foo" [1], while "PUT /buk/foo" applies to pre-existing resources. Although, they had to admit that RFC-2616 assumes that PUT creates.
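
For illustration, here is roughly how the two conventions map onto concrete requests; the endpoints and names are made up, and the first pair follows a Swift-style object API rather than any particular standard:

# Object-storage convention: PUT creates or overwrites the object, POST updates its metadata.
curl -X PUT  -T photo.jpg https://storage.example.com/v1/AUTH_test/buk/foo
curl -X POST -H 'X-Object-Meta-Color: red' https://storage.example.com/v1/AUTH_test/buk/foo

# "Official" RESTful convention: POST to the collection creates, PUT to the resource updates.
curl -X POST -d @photo.jpg https://api.example.com/buk
curl -X PUT  -d @photo.jpg https://api.example.com/buk/foo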

All this goes to show, too much dogma is not good for you.

[1] It's worse, actually. They want you to do "POST /buk", and receive a resource ID, generated by the server, and use that ID to refer to the resource.

August 15, 2019 07:10 PM

August 14, 2019

Greg Kroah-Hartman: Patch workflow with mutt - 2019

Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used (mutt), but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is (aerc), which was started by (Drew DeVault). It is a terminal-based email client written in Go, and relies on a lot of other Go libraries to handle the “grungy” work of dealing with IMAP, email parsing, and the other fun things that come with the free-flow text parsing that emails require.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn't a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck; it's just that mutt sucks less than everything else (that's literally their motto).

I did a (basic overview of how I apply patches to the stable kernel trees) quite a few years ago, but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

Given that all stable kernel patches need to already be in Linus's kernel tree first, the workflow of how to work with the stable tree is much different from the new-patch workflow.

Basic email reading

All of my email ends up in one of two “inboxes” on my local machine. One is for everything that is sent directly to me (either with To: or Cc:), as well as a number of mailing lists for which I read every message because I am a maintainer of those subsystems (like (USB), or (stable)). The second inbox consists of other mailing lists that I do not read all messages of, but review as needed, and can be referenced when I need to look something up. Those mailing lists include the “big” linux-kernel mailing list, to ensure I have a local copy to search from when I am offline (due to traveling), as well as other “minor” development mailing lists that I like to keep a local copy of, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I get these maildir folders synced with the mail server using (mbsync), which works really well and is much faster than (offlineimap), which I used for many, many years and which ends up being really slow when you do not live on the same continent as the mail server. (Luis's) recent post about switching to mbsync finally pushed me to take the time to configure it all properly, and I am glad that I did.
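
For reference, the day-to-day mbsync invocations are as simple as the following; the channel name is whatever you define in ~/.mbsyncrc, so "inbox" here is just an example, not the configuration from this post:

mbsync -a          # sync every channel defined in ~/.mbsyncrc
mbsync inbox       # sync only one channel, e.g. the main INBOX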

Let’s ignore my “lists” inbox, as that should be able to be read by any email client by just pointing it at it. I do this with a simple alias:

alias muttl='mutt -f ~/mail_linux/'

which allows me to type muttl at any command line to instantly bring it up:

What I spend most of the time in is my “main” mailbox, and that is in a local maildir that gets synced when needed in ~/mail/INBOX/. A simple mutt on the command line brings this up:

Yes, everything just ends up in one place; in handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:

Everything that does not require a response, or I’ve already responded to it, gets deleted from the main INBOX at that point in time, or saved into an archive in case I need to refer back to it again (like mailing list messages).

That last state makes me save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable is something that has to do with patches that need to get applied to the stable kernel tree.

Side note, I have scripts that run frequently that email me any patches that need to be applied to the stable kernel trees, when they hit Linus’s tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.

I sweep my main INBOX every few hours and sort things out, either quickly responding, deleting, archiving, or saving into the todo or stable directory. I don't achieve a constant “inbox zero”, but if I only have 40 or so emails in there, I am doing well.

So, for this main workflow, I need an easy way to:

These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.

A note about filtering. As everything comes into one inbox, it is easier to filter that mbox based on things so I can process everything at once.

As an example, I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit) which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful and I use it in many different ways within mutt.

Here’s an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them:

This isn’t that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here’s me opening my “all lists” mbox and filtering on the linux-api mailing list messages that I have not read yet. It’s really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.
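
To give a flavor of that limit prompt, here are a few typical patterns one might type after pressing l; these particular examples are mine, not taken from the post:

~C linux-usb@vger.kernel.org     # messages sent To: or Cc: the linux-usb list
~C linux-api ~U                  # unread messages sent to linux-api
~f gregkh                        # everything from a particular sender
all                              # clear the limit and show everything again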

All messages that I want to save to the todo directory I can save with a two-keystroke sequence, .t, which saves the message there automatically.

Again, those are bindings I set up years ago: , jumps to the specific mbox, and . copies the message to that location.

Now you see why using mutt is not exactly obvious, those bindings are not part of the default configuration and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It’s a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.
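
For concreteness, here is one plausible way to define such bindings in a ~/.muttrc; this is only a sketch that assumes todo and stable maildirs under the configured folder, not the actual configuration behind this post:

macro index ,t "<change-folder>=todo<enter>"    "jump to the todo mailbox"
macro index ,s "<change-folder>=stable<enter>"  "jump to the stable mailbox"
macro index .t "<copy-message>=todo<enter>"     "copy the current message to todo"
macro index .s "<copy-message>=stable<enter>"   "copy the current message to stable"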

Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.

Now let’s get to the unusual use cases, the fun things:

Development Patch review and apply

When I decide it’s time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of the messages out to a local mbox file that I call s (hey, naming is hard, it gets worse, just wait…)

So, in my linux/work/ local directory, I keep the development trees for different subsystems like usb, char-misc, driver-core, tty, and staging.

Let’s look at how I handle some staging patches.

First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked from somewhere long ago, I don’t remember where…), and then filter all staging messages, and save them to a local mbox with the following keystrokes:

mutt
,t
l staging
T
s ../s

Yes, I could skip the l staging step, and just do T staging instead of T, but it’s nice to see what I’m going to save off first before doing so:

Now all of those messages are in a local mbox file that I can open with a single keystroke, ’s’ on the command line. That is an alias:

alias s='mutt -f ../s'

I then dig around in that mbox, sort patches by driver type to see everything for that driver at once by filtering on the name and then save those messages to another mbox called ‘s1’ (see, I told you the names got worse.)

s
l erofs
T
s ../s1

I have lots of local mbox files all “intuitively” named ‘s1’, ‘s2’, and ‘s3’. Of course I have aliases to open those files quickly:

alias s1='mutt -f ../s1'
alias s2='mutt -f ../s2'
alias s3='mutt -f ../s3'

I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.

So, all the erofs patches are in one mbox, let’s open it up and review them, and save the patches that look good enough to apply to another mbox:

Turns out that not all patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox:

Now I have a single patch that I want to apply, but I need to add some acks that the maintainers of erofs provided. I do this by editing the “raw” message directly from within mutt. I open the individual messages from the maintainers, cut their Reviewed-by lines, and then edit the original patch and add those lines to the patch:

Some kernel maintainers right now are screaming something like “Automate this!”, “Patchwork does this for you!”, “Are you crazy?” Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it’s not common that others actually review patches for subsystems I maintain, unfortunately.

The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, edit the subject line to be correct, fix the mail headers to not do foolish things with text formats, and in some cases edit the patch itself for when it is corrupted or needs to be fixed (I want a LinkedIn skill badge for “can edit diff files by hand and have them still work”).

So one hard requirement I have is “editing a raw message from within the email client.” If an email client can not do this, it’s a non-starter for me, sorry.

So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches and how patches move between them deserve a totally different blog post…)

I can apply this patch in one of two different ways, using git am -s ../s1 on the command line, piping the whole mbox into git and applying the patches directly, or I can apply them within mutt individually by using a macro.

When I have a lot of patches to apply, I just pipe the mbox file to git am -s as I’m comfortable with that, and it goes quick for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this and I can quickly toggle between them.

But we are talking about email clients at the moment, so here’s me applying a single patch to the local git tree:

All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the output of the current message to git am -s.

The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Not having to leave the email client but being able to run something else with that message, is a very powerful functionality, and again, a hard requirement for me.

So that’s it for applying development patches. It’s a bunch of the same tasks over and over:

Doing that all within the email program and being able to quickly get in, and out of the program, as well as do work directly from the email program, is key.

Of course I do a “test build and sometimes test boot and then push git trees and notify author that the patch is applied” set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.

Stable patch review and apply

The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use ‘git am’ for applying anything.

The stable kernel tree, while under development, is kept as a series of patches that need to be applied to the previous release. This series of patches is maintained by using a tool called (quilt). Quilt is very powerful and handles sets of patches that need to be applied on top of a moving base very easily. The tool was based on a crazy set of shell scripts written by Andrew Morton a long time ago, is currently maintained by Jean Delvare, and has been rewritten in perl to make it more maintainable. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.

I highly recommend it as it allows you to reorder, drop, add in the middle of the series, and manipulate patches in all sorts of ways, as well as create new patches directly. I do this for the stable tree as lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and other changes that with git, would require lots of rebasing.

Rebasing a git tree does not work when you have developers working “down” from your tree. We usually have the rule with kernel development that if you have a public tree, it never gets rebased, otherwise no one can use it for development.

Anyway, the stable patches are kept in a quilt series in a repository that is kept under version control in git (complex, yeah, sorry.) That queue can always be found (here).

I do create a linux-stable-rc git tree that is constantly rebased based on the stable queue for those who run test systems that can not handle quilt patches. That tree is found (here) and should not ever be used by anyone for anything other than automated testing. See (this email) for a bit more explanation of how these git trees should, and should not, be used.

With all that background information behind us, let’s look at how I take patches that are in Linus’s tree, and apply them to the current stable kernel queues:

First I open the stable mbox. Then I filter by everything that has upstream in the subject line. Then I filter again by alsa to only look at the alsa patches. I look at the individual patches, looking at the patch to verify that it really is something that should be applied to the stable tree and determine what order to apply the patches in based on the date of the original commit.

I then hit F to pipe the message to a script that looks up the Fixes: tag in the message to determine what stable tree, if any, the commit that this fix was contained in.

In this example, the patch only should go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.

To apply the patch, I hit A which is another macro that I define in my mutt configuration

macro index A |'~/linux/stable/apply_it_from_email'\n
macro pager A |'~/linux/stable/apply_it_from_email'\n

It is defined “twice” because you can have different key bindings when you are looking at a mailbox's index of all messages versus when you are looking at the contents of a single message.

In both cases, I pipe the whole email message to my apply_it_from_email script.

That script digs through the message, finds the git commit id of the patch in Linus's tree, then runs a different script that takes the commit id, exports the patch associated with that id, edits the message to add my signed-off-by to the patch, and drops me into my editor to make any tweaks that might be needed (sometimes files get renamed, so I have to do that by hand, and it gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly as I have better syntax highlighting and can search and review the text better).

If all goes well, I save the file and the script continues and applies the patch to a bunch of stable kernel trees, one after another, adding the patch to the quilt series for that specific kernel version. To do all of this I had to spawn a separate terminal window, as mutt does fun things to standard input/output when piping messages to a script, and I couldn't ever figure out how to do this all without the extra spawned process.
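
The script itself isn't published here, so the following is only a minimal sketch of the idea; every path, file name and detail in it is an assumption, not the real tool:

#!/bin/bash
# Hypothetical sketch, not the real apply_it_from_email script.
# mutt pipes the raw email on stdin; fish the upstream commit id out of it.
msg=$(mktemp)
cat > "$msg"
sha=$(grep -oE '\b[0-9a-f]{40}\b' "$msg" | head -n1)
[ -n "$sha" ] || { echo "no upstream commit id found" >&2; exit 1; }

# export the patch from a local copy of Linus's tree and open it for tweaks
# (the real workflow also adds a Signed-off-by line at this point)
git -C ~/linux/linus format-patch -1 --stdout "$sha" > /tmp/one.patch
${EDITOR:-vim} /tmp/one.patch

# queue the patch into each stable series that still needs it
for q in ~/linux/stable/stable-queue/queue-*; do
    cp /tmp/one.patch "$q/" && basename /tmp/one.patch >> "$q/series"
done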

Here it is in action, as a video as (asciinema) can’t show multiple windows at the same time.

Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.

This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.

I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.

Bonus, sending email from the command line

In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don't normally use mutt for patch reviews, as I use other scripts for that (ones that eventually got turned into git send-email), so it's not a hard requirement, but it is nice to be able to do a simple:

mutt -s "${subject}" "${address}" <  ${msg} >> error.log 2>&1

from within a script when needed.

Thunderbird also can do this, I have used:

thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"

at times in the past when dealing with email servers that mutt can not connect to easily (i.e. gmail when using oauth tokens).

Summary of what I need from an email client

So, to summarize it all for Drew, here’s my list of requirements for me to be able to use an email client for kernel maintainership roles:

That’s what I use for kernel development.

Oh, I forgot:

Bonus things that I have grown to rely on when using mutt are:

If you have made it this far, and you aren’t writing an email client, that’s amazing, it must be a slow news day, which is a good thing. I hope this writeup helps others to show them how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.

Hopefully other email clients can get to a state where they too can do all of this. Competition is good, and maybe aerc can get there someday.

August 14, 2019 12:37 PM

August 10, 2019

Pete Zaitcev: Comment to 'О цифровой экономике и глобальных проблемах человечества' by omega_hyperon

I hear all of this every day. These people take a well-established trend toward the slowdown of scientific and technological progress and say: we know better what humanity needs. Take the money away from the undeserving, give it to smart people like me, and progress will pick up again. And we will protect nature while we are at it! And capitalism is always to blame.

That civilization is treading water is not in dispute. But here are a couple of things this jerk keeps quiet about.

First, if you take the money away from Disney and hand it to a research institute, there will be no money. He appears to speak Russian, yet failed to draw that simple lesson from the collapse of the USSR. American science crushed Soviet science during the era of the scientific-technological revolution above all because the capitalist economy provided the economic base for that science, while the socialist economy was a failure.

In general, if you compare the budgets of Apple and Disney with the budget of Housing and Urban Development and similar agencies, the difference is two orders of magnitude. If someone wants to give science more money, the answer is not to rob Disney but to stop handing out free housing to freeloaders. The slowdown of science and of capitalism go hand in hand and are caused by government policy, not by some bitcoin.

Second, who even believes these charlatans? They went on at us about the children in Africa for decades, yet during the terrible famine in Ethiopia its population grew from 38 million to 75 million. The same thing happened with the polar bears. The forested area of the planet is growing. Granted, some forests in Brazil were cut down for pasture... but who is going to believe it?

This crisis of expertise is no joke. The fear of vaccines was created not by capitalism and bitcoin, but by the decay and breakdown of the system of scientific research as a whole. He did not name the institute whose budget he compared with Disney's, but one wonders how many idlers are on its staff.

The collapse of science shows not only in the public having lost its faith in scientists. The objective indicators have sagged as well. Reproducibility of published results is very poor, and it keeps going down. Is bitcoin to blame for that too?

On the whole, most of this whining strikes me as extremely harmful. If he cannot diagnose the causes of the crisis, the remedies he proposes will give us nothing, and biodiversity will not be any better off.

August 10, 2019 12:46 PM

August 02, 2019

Michael Kerrisk (manpages): man-pages-5.02 is released

I've released man-pages-5.02. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This release resulted from patches, bug reports, reviews, and comments from 28 contributors. The release includes around 120 commits that change more than 50 pages.

The most notable of the changes in man-pages-5.02 is the following:


August 02, 2019 09:04 AM

July 25, 2019

Pete Zaitcev: Swift is 2 to 4 times faster than any competitor

Or so they say, at least for certain workloads.

In January of 2015 I led a project to evaluate and select a next-generation storage platform that would serve as the central storage (sometimes referred to as an active archive or tier 2) for all workflows. We identified the following features as being key to the success of the platform:

In hindsight, I wish we would have included two additional requirements:

We spent the next ~1.5 years evaluating the following systems:

SwiftStack was the only solution that literally checked every box on our list of desired features, but that’s not the only reason we selected it over the competition.

The top three reasons behind our selection of SwiftStack were as follows:

Note: While SwiftStack 1space was not a part of the SwiftStack platform at the time of our evaluation and purchase, it would have been an additional deciding factor in favor of SwiftStack if it had been.

Interesting. It should be noted that the performance of Swift is a great match for some workloads, but not for others. In particular, Swift is weak on small-file workloads, such as Gnocchi, which writes a ton of 16-byte objects again and again. The overhead is a killer there, and not just on the wire: Swift has to update its accounting databases each and every time a write is done, so that "swift stat" shows things like quotas. Swift is also not particularly good at HPC-style workloads, which benefit from a great bisection bandwidth, because we transfer all user data through so-called "proxy" servers. Unlike e.g. Ceph, Swift keeps the cluster topology hidden from the client, while a Ceph client actually tracks the ring changes, placement groups and their leaders, etc. But as we can see, once the object sizes start climbing and the number of clients increases, Swift rapidly approaches the wire speed.
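
To make the accounting point concrete, the python-swiftclient CLI exposes those databases directly; the container and file names below are just examples:

swift upload buk photo.jpg   # every object PUT also updates container/account accounting
swift stat                   # account totals: containers, objects, bytes, quota headers
swift stat buk               # per-container object count and bytes used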

I cannot help noticing that the architecture in question has a front-facing caching pool (tier 1), which is what the ultimate clients see instead of Swift. Most of the time, Swift is selected for its ability to serve tens of thousands of clients simultaneously, but not in this case. Apparently, the end user invented ProxyFS independently.

There's no mention of Red Hat selling Swift in the post. Either it was not part of the evaluation at all, or the author forgot about it with the passage of time. He did list a bunch of rather weird and obscure storage solutions, though.

July 25, 2019 02:38 AM

July 18, 2019

Kees Cook: security things in Linux v5.2

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn’t. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).
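
A quick way to check and enable this, using the knobs named above:

# Is page allocator shuffling active on the running kernel?
cat /sys/module/page_alloc/parameters/shuffle

# If not, add this to the kernel command line and reboot:
#   page_alloc.shuffle=1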

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang’s -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang’s implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang’s implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.
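
A hedged illustration of checking for and exercising the option; the config file location follows the common distro convention and may differ on your system:

# Was the running kernel built with Clang's automatic stack initialization?
grep INIT_STACK_ALL /boot/config-"$(uname -r)"

# The same Clang flag can also be tried on ordinary userspace code:
clang -ftrivial-auto-var-init=pattern -c test.c -o test.o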

Kernel Userspace Access Prevention on powerpc
Like SMAP on x86 and PAN on ARM, Michael Ellerman and Russell Currey have landed support for disallowing access to userspace without explicit markings in the kernel (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). This is the continuation of the execute protection (KUEP) in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one’s threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with “nosmt“). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.
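
On a running system, the kernel reports its MDS status through the usual sysfs vulnerabilities interface, and SMT can be disabled at boot; the sysfs path assumes a kernel new enough to carry the mitigation:

cat /sys/devices/system/cpu/vulnerabilities/mds

# For threat models that include untrusted code under SMT, boot with:
#   nosmt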

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn’t been. To avoid various heap grooming and heap spraying techniques for exploiting Use-after-Free flaws, Peter Xu added the new “vm.unprivileged_userfaultfd” sysctl knob to disallow unprivileged access to the userfaultfd syscall.
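
A small example of flipping the new knob; the sysctl.d file name is just an example:

# Disallow unprivileged use of userfaultfd right now:
sysctl vm.unprivileged_userfaultfd=0

# And persistently across reboots:
echo 'vm.unprivileged_userfaultfd = 0' > /etc/sysctl.d/99-userfaultfd.conf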

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings (“fixmap”) where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.

CLONE_PIDFD added
Christian Brauner added the new CLONE_PIDFD flag to the clone() system call, which complements the pidfd work in v5.1 so that programs can now gain a handle for a process ID right at fork() (actually clone()) time, instead of needing to get the handle from /proc after process creation. With signals and forking now enabled, the next major piece (already in linux-next) will be adding P_PIDFD to the waitid() system call, and common process management can be done entirely with pidfd.

Other things
Alexander Popov pointed out some more v5.2 features that I missed in this blog post. I’m repeating them here, with some minor edits/clarifications. Thank you Alexander!

Edit: added CLONE_PIDFD notes, as reminded by Christian Brauner. :)
Edit: added Alexander Popov’s notes

That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

July 18, 2019 12:07 AM

July 17, 2019

Linux Plumbers Conference: System Boot and Security Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the System Boot and Security Microconference has been accepted into the 2019 Linux Plumbers Conference! Computer-system security is a topic that has received a lot of serious attention over the years, but nowhere near as much attention has been paid to the system firmware, even though firmware is also a target for those looking to wreak havoc on our systems. Firmware is now being developed with security in mind, but the solutions it provides are still incomplete. This microconference will focus on the security of the system, especially from the time the system is powered on.

Expected topics for this year include:

Come and join us in the discussion of keeping your system secure even at boot up.

We hope to see you there!

July 17, 2019 04:57 PM

July 10, 2019

Linux Plumbers Conference: Power Management and Thermal Control Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Power Management and Thermal Control Microconference has been accepted into the 2019 Linux Plumbers Conference! Power management and thermal control are important areas for the Linux ecosystem, and improvements there also help reduce the environmental footprint of our systems. In recent years, computer systems have become more complex and more thermally challenged at the same time, while the energy-efficiency expectations placed on them have kept growing. This trend is likely to continue in the foreseeable future. Progress has been made in the power-management and thermal-control problem space since last year’s Linux Plumbers Conference, including (but not limited to) the merging of the energy-aware scheduling patch series and CPU idle-time management improvements, yet there is still more work to do in those areas. This gathering will focus on continuing to have Linux meet the power-management and thermal-control challenge.

Topics for this year include:

Come and join us in the discussion of how to extend the battery life of your laptop while keeping it cool.

We hope to see you there!

July 10, 2019 10:29 PM

July 09, 2019

Linux Plumbers Conference: Android Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Android Microconference has been accepted into the 2019 Linux Plumbers Conference! Android has a long history at Linux Plumbers and has continually made progress as a direct result of these meetings. This year’s focus will be a fairly ambitious goal to create a Generic Kernel Image (GKI) (or one kernel to rule them all!). Having a GKI will allow silicon vendors to be independent of the Linux kernel running on the device. As such, kernels could be easily upgraded without requiring any rework of the initial hardware porting efforts. This microconference will also address areas that have been discussed in the past.

The proposed topics include:

Come and join us in the discussion of improving what is arguably the most popular operating system in the world!

We hope to see you there!

July 09, 2019 11:29 PM

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.


July 09, 2019 09:15 PM

Linux Plumbers Conference: Update on LPC 2019 registration waiting list

Here is an update regarding the registration situation for LPC2019.

The considerable interest for participation this year meant that the conference sold out earlier than ever before.

Instead of a small release of late-registration spots, the LPC planning committee has decided to run a waiting list, which will be used as the exclusive method for additional registrations. As spots become available, the planning committee will reach out to individuals on the waiting list and invite them to register at the regular rate of $550.

With the majority of the Call for Proposals (CfP) still open, it is not yet possible to release passes. The planning committee and the microconference leads are working together to allocate the passes earmarked for microconferences. The Networking Summit and Kernel Summit speakers are also yet to be confirmed.

The planning committee understands that many of those who added themselves to the waiting list wish to find out soon whether they will be issued a pass. We anticipate the first passes to be released on July 22nd at the earliest.

Please follow us on social media, or here on this blog for further updates.

July 09, 2019 01:36 AM

July 08, 2019

Linux Plumbers Conference: VFIO/IOMMU/PCI Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the VFIO/IOMMU/PCI Microconference has been accepted into the 2019 Linux Plumbers Conference!

The PCI interconnect specification and the devices implementing it are incorporating more and more features aimed at high-performance systems. This requires the kernel to coordinate the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for user-space access and device pass-through) so that users (and virtual machines) can use them effectively. The kernel interfaces that control PCI devices have to be designed in sync across all three subsystems, which means their kernel control paths intersect heavily and require design discussions involving all three subsystems at once.

A successful VFIO/IOMMU/PCI Microconference was held at Linux Plumbers in 2017, where:

This year, the microconference will follow up on the previous microconference agendas and focus on ongoing patch review and design work in the VFIO/IOMMU/PCI subsystems.

Topics for this year include:

Come and join us in the discussion on helping Linux keep up with the new features being added to the PCI interconnect specification.

We hope to see you there!

July 08, 2019 05:56 PM

July 07, 2019

Matthew Garrett: Creating hardware where no hardware exists

The laptop industry was still in its infancy back in 1990, but it already faced a core problem we still face today: power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). This was in the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.
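
As a tiny illustration of how invisible this is to the OS, here is a userspace sketch that pokes the I/O port range an SMC conventionally occupies; whether a physical SMC, a qemu device model, or an SMI handler answers the read is entirely up to the platform. The port numbers below are assumptions for illustration, and it needs root (for ioperm) on an x86 Linux machine.

```c
#include <stdio.h>
#include <sys/io.h>

#define SMC_PORT_BASE 0x300	/* illustrative; real probing is more involved */
#define SMC_PORT_LEN  0x20

int main(void)
{
	/* Ask the kernel for access to this port range. */
	if (ioperm(SMC_PORT_BASE, SMC_PORT_LEN, 1)) {
		perror("ioperm");
		return 1;
	}

	/*
	 * A single inb(); if the chipset is configured to trap this range,
	 * the read detours through SMM before a value comes back, and the
	 * program can't tell the difference.
	 */
	unsigned char status = inb(SMC_PORT_BASE + 0x4);
	printf("read back 0x%02x from port 0x%x\n", status, SMC_PORT_BASE + 0x4);
	return 0;
}
```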

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough


July 07, 2019 08:15 PM

Linux Plumbers Conference: Scheduler Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Scheduler Microconference has been accepted into the 2019 Linux Plumbers Conference! The scheduler determines what runs on the CPU at any given time; the responsiveness of your desktop, for example, depends on it. There are a few different scheduling classes for a user to choose from, such as the default class (SCHED_OTHER), the real-time classes (SCHED_FIFO and SCHED_RR), and the deadline class (SCHED_DEADLINE). The deadline scheduler is the newest and allows the user to control the amount of bandwidth received by a task or group of tasks. With cloud computing becoming popular these days, controlling the bandwidth of containers or virtual machines is becoming more important. The Real-Time patch is also destined to become mainline, which will add more strain on the scheduling of tasks to make sure that real-time tasks make their deadlines (although this Microconference will focus on non-real-time aspects of the scheduler; please defer real-time topics to the Real-time Microconference). This requires verification techniques to ensure the scheduler is properly designed.
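
As a concrete example of the bandwidth control mentioned above, here is a minimal sketch that asks for a SCHED_DEADLINE reservation of 10ms of runtime every 100ms. glibc has no wrapper for sched_setattr(), so the attribute structure and raw syscall are spelled out by hand, and headers that define SYS_sched_setattr are assumed; the program also needs the usual privileges to change scheduling policy.

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* Layout follows the sched_setattr(2) interface. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  =  10 * 1000 * 1000,	/* 10ms of CPU time...   */
		.sched_deadline = 100 * 1000 * 1000,	/* ...due within 100ms... */
		.sched_period   = 100 * 1000 * 1000,	/* ...every 100ms         */
	};

	if (syscall(SYS_sched_setattr, 0, &attr, 0)) {	/* 0 means "this task" */
		perror("sched_setattr");
		return 1;
	}
	printf("now running under a SCHED_DEADLINE reservation\n");
	return 0;
}
```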

Topics for this year include:

Come and join us in the discussion of controlling what tasks get to run on your machine and when.

We hope to see you there!

July 07, 2019 02:51 PM

July 03, 2019

Linux Plumbers Conference: RDMA Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the RDMA Microconference has been accepted into the 2019 Linux Plumbers Conference! RDMA has been a microconference at Plumbers for the last three years and will be continuing its productive work for a fourth year. The RDMA meetings at the previous Plumbers have been critical in getting improvements to the RDMA subsystem merged into mainline. These include a new user API, container support, testability/syzkaller, system bootup, Soft iWarp, and more. There are still difficult open issues that need to be resolved, and this year’s Plumbers RDMA Microconference is sure to come up with answers to these tough problems.

Topics for this year include:

And new developing areas of interest:

Come and join us in the discussion of improving Linux’s ability to access direct memory across high-speed networks.

We hope to see you there!

July 03, 2019 10:54 PM

July 02, 2019

Linux Plumbers Conference: Announcing the LPC 2019 registration waiting list

The current pool of registrations for the 2019 Linux Plumbers Conference has sold out.

Those not yet registered who wish to attend should fill out the form here to get on the waiting list.

As registration spots open up, the Plumbers organizing committee will allocate them to those on the waiting list with priority given to those who will be participating in microconferences and BoFs.

July 02, 2019 07:22 PM

Linux Plumbers Conference: Preliminary schedule for LPC 2019 has been published

The LPC committee is pleased to announce the preliminary schedule for the 2019 Linux Plumbers Conference.

The vast majority of the LPC refereed track talks have been accepted and are listed there. The same is true for microconferences. While there are a few talks and microconferences to be announced, you will find the current overview LPC schedule here. The LPC refereed track talks can be seen here.

The call for proposals (CfP) is still open for the Kernel Summit, Networking Summit, BOFs and topics for accepted microconferences.

As new microconferences, talks, and BOFs are accepted, they will be published to the schedule.

July 02, 2019 04:48 PM

June 30, 2019

Matthew Garrett: Which smart bulbs should you buy (from a security perspective)

People keep asking me which smart bulbs they should buy. It's a great question! As someone who has, for some reason, ended up spending a bunch of time reverse engineering various types of lightbulb, I'm probably a reasonable person to ask. So. There are four primary communications mechanisms for bulbs: wifi, bluetooth, zigbee and zwave. There's basically zero compelling reasons to care about zwave, so I'm not going to.

Wifi


Advantages: Doesn't need an additional hub - you can just put the bulbs wherever. The bulbs can connect out to a cloud service, so you can control them even if you're not on the same network.
Disadvantages: Only works if you have wifi coverage, each bulb has to have wifi hardware and be configured appropriately.
Which should you get: If you search Amazon for "wifi bulb" you'll get a whole bunch of cheap bulbs. Don't buy any of them. They're mostly based on a custom protocol from Zengge and they're shit. Colour reproduction is bad, there's no good way to use the colour LEDs and the white LEDs simultaneously, and if you use any of the vendor apps they'll proxy your device control through a remote server with terrible authentication mechanisms. Just don't. The ones that aren't Zengge are generally based on the Tuya platform, whose security model is to have keys embedded in some incredibly obfuscated code and hope that nobody can find them. TP-Link make some reasonably competent bulbs but also use a weird custom protocol with hand-rolled security. Eufy are fine but again there's weird custom security. Lifx are the best bulbs, but have zero security on the local network - anyone on your wifi can control the bulbs. If that's something you care about then they're a bad choice, but also if that's something you care about maybe just don't let people you don't trust use your wifi.
Conclusion: If you have to use wifi, go with lifx. Their security is not meaningfully worse than anything else on the market (and they're better than many), and they're better bulbs. But you probably shouldn't go with wifi.

Bluetooth


Advantages: Doesn't need an additional hub. Doesn't need wifi coverage. Doesn't connect to the internet, so remote attack is unlikely.
Disadvantages: Only one control device at a time can connect to a bulb, so harder to share. Control device needs to be in Bluetooth range of the bulb. Doesn't connect to the internet, so you can't control your bulbs remotely.
Which should you get: Again, most Bluetooth bulbs you'll find on Amazon are shit. There's a whole bunch of weird custom protocols and the quality of the bulbs is just bad. If you're going to go with anything, go with the C by GE bulbs. Their protocol is still some AES-encrypted custom binary thing, but they use a Bluetooth controller from Telink that supports a mesh network protocol. This means that you can talk to any bulb in your network and still send commands to other bulbs - the dual advantages here are that you can communicate with bulbs that are outside the range of your control device and also that you can have as many control devices as you have bulbs. If you've bought into the Google Home ecosystem, you can associate them directly with a Home and use Google Assistant to control them remotely. GE also sell a wifi bridge - I have one, but haven't had time to review it yet, so make no assertions around its competence. The colour bulbs are also disappointing, with much dimmer colour output than white output.

Zigbee


Advantages: Zigbee is a mesh protocol, so bulbs can forward messages to each other. The bulbs are also pretty cheap. Zigbee is a standard, so you can obtain bulbs from several vendors that will then interoperate - unfortunately there are actually two separate standards for Zigbee bulbs, and you'll sometimes find yourself with incompatibility issues there.
Disadvantages: Your phone doesn't have a Zigbee radio, so you can't communicate with the bulbs directly. You'll need a hub of some sort to bridge between IP and Zigbee. The ecosystem is kind of a mess, and you may have weird incompatibilities.
Which should you get: Pretty much every vendor that produces Zigbee bulbs also produces a hub for them. Don't get the Sengled hub - anyone on the local network can perform arbitrary unauthenticated command execution on it. I've previously recommended the Ikea Tradfri, which at the time only had local control. They've since added remote control support, and I haven't investigated that in detail. But overall, I'd go with the Philips Hue. Their colour bulbs are simply the best on the market, and their security story seems solid - performing a factory reset on the hub generates a new keypair, and adding local control users requires a physical button press on the hub to allow pairing. Using the Philips hub doesn't tie you into only using Philips bulbs, but right now the Philips bulbs tend to be as cheap (or cheaper) than anything else.

But what about


If you're into tying together all kinds of home automation stuff, then either go with Smartthings or roll your own with Home Assistant. Both are definitely more effort if you only want lighting.

My priority is software freedom


Excellent! There are various bulbs that can run the Espurna or AiLight firmwares, but you'll have to deal with flashing them yourself. You can tie that into Home Assistant and have a completely free stack. If you're ok with your bulbs being proprietary, Home Assistant can speak to most types of bulb without an additional hub (you'll need a supported Zigbee USB stick to control Zigbee bulbs), and will support the C by GE ones as soon as I figure out why my Bluetooth transmissions stop working every so often.

Conclusion


Outside niche cases, just buy a Hue. Philips have done a genuinely good job. Don't buy cheap wifi bulbs. Don't buy a Sengled hub.

(Disclaimer: I mentioned a Google product above. I am a Google employee, but do not work on anything related to Home.)


June 30, 2019 08:10 PM

June 26, 2019

Linux Plumbers Conference: Databases Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Databases Microconference has been accepted into the 2019 Linux Plumbers Conference! Linux plumbing is heavily important to those who implement databases and their users who expect fast and durable data handling.

Durability is a promise never to lose data after advising a user of a successful update, even in the face of power loss. It requires a full-stack solution from the application to the database, then to Linux (filesystem, VFS, block interface, driver), and on to the hardware.

Fast means getting a database user a response in less than tens of milliseconds, which requires that Linux filesystems, memory and CPU management, and the networking stack do everything with the utmost effectiveness and efficiency.

For all Linux users, there is a benefit in having database developers interact with system developers; it will ensure that the promises of durability and speed are both kept as newer hardware technologies emerge, existing CPU/RAM resources grow, and the amount of data stored grows even faster.

Topics for this Microconference include:

Come and join us in the discussion about making databases run smoother and faster.

We hope to see you there!

June 26, 2019 03:14 PM

June 25, 2019

James Morris: Linux Security Summit North America 2019: Schedule Published

The schedule for the 2019 Linux Security Summit North America (LSS-NA) is published.

This year, there are some changes to the format of LSS-NA. The summit runs for three days instead of two, which allows us to relax the schedule somewhat while also adding new session types.  In addition to refereed talks, short topics, BoF sessions, and subsystem updates, there are now also tutorials (one each day), unconference sessions, and lightning talks.

The tutorial sessions are:

These tutorials will be 90 minutes in length, and they’ll run in parallel with unconference sessions on the first two days (when the space is available at the venue).

The refereed presentations and short topics cover a range of Linux security topics including platform boot security, integrity, container security, kernel self protection, fuzzing, and eBPF+LSM.

Some of the talks I’m personally excited about include:

The schedule last year was pretty crammed, so with the addition of the third day we’ve been able to avoid starting early, and we’ve also added five minute transitions between talks. We’re hoping to maximize collaboration via the more relaxed schedule and the addition of more types of sessions (unconference, tutorials, lightning talks).  This is not a conference for simply consuming talks, but to also participate and to get things done (or started).

Thank you to all who submitted proposals.  As usual, we had many more submissions than can be accommodated in the available time.

Also thanks to the program committee, who spent considerable time reviewing and discussing proposals, and working out the details of the schedule. The committee for 2019 is:

  • James Morris (Microsoft)
  • Serge Hallyn (Cisco)
  • Paul Moore (Cisco)
  • Stephen Smalley (NSA)
  • Elena Reshetova (Intel)
  • John Johansen (Canonical)
  • Kees Cook (Google)
  • Casey Schaufler (Intel)
  • Mimi Zohar (IBM)
  • David A. Wheeler (Institute for Defense Analyses)

And of course many thanks to the event folk at Linux Foundation, who handle all of the logistics of the event.

LSS-NA will be held in San Diego, CA on August 19-21. To register, click here. Or you can register for the co-located Open Source Summit and add LSS-NA.

June 25, 2019 08:43 PM

June 19, 2019

Linux Plumbers Conference: Real-Time Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Real-Time Microconference has been accepted into the 2019 Linux Plumbers Conference! The PREEMPT_RT patch set (aka “The Real-Time Patch”) was created in 2004 in an effort to turn Linux into a hard real-time operating system. Over the years much of the RT patch has made it into mainline Linux, including mutexes, lockdep, high-resolution timers, Ftrace, RCU_PREEMPT, priority inheritance, threaded interrupts and much more. There’s just a little left to get RT fully into mainline, and the light at the end of the tunnel is finally in view. It is expected that the RT patch will be in mainline within a year, which changes the topics of discussion. Once it is in Linus’s tree, a whole new set of issues must be handled. The focus of this year’s Plumbers event will include:

Come and join us in the discussion of making the LWN prediction of RT coming into mainline “this year” a reality!

We hope to see you there!

June 19, 2019 10:55 PM

Linux Plumbers Conference: Testing and Fuzzing Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Testing and Fuzzing Microconference has been accepted into the 2019 Linux Plumbers Conference! Testing and fuzzing are crucial to the stability that the Linux kernel demands.

Last year’s microconference brought about a number of discussions; for example, syzkaller evolved into syzbot, which keeps track of fuzzing efforts and the resulting fixes. The closing ceremony pointed out all the work that still has to be done: there are a number of overlapping efforts, and those need to be consolidated. The use of KASAN should be increased. Where is fuzzing going next? With real-time moving forward from “if” to “when” in the mainline, how does RT test coverage increase? The unit-testing frameworks may need some unification. Also, KernelCI will be announced as an LF project this time around. Stay around for the KernelCI hackathon after the conference to help further those efforts.

Come and join us for the discussion!

We hope to see you there!

June 19, 2019 02:29 AM

June 17, 2019

Linux Plumbers Conference: Toolchains Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Toolchains Microconference has been accepted into the 2019 Linux Plumbers Conference! The Linux kernel may be one of the most powerful systems around, but it takes a powerful toolchain to make that happen. The kernel takes advantage of any feature that the toolchains provide, and collaboration between the kernel and toolchain developers will make that much more seamless.

Toolchains topics will include:

Come and join us in the discussion of what makes it possible to build the most robust and flexible kernel in the world!

We hope to see you there!

June 17, 2019 06:10 PM

June 15, 2019

Greg Kroah-Hartman: Linux stable tree mirror at github

As everyone seems to like to put kernel trees up on github for random projects (based on the crazy notifications I get all the time), I figured it was time to put up a semi-official mirror of all of the stable kernel releases on github.com

It can be found at: https://github.com/gregkh/linux and I will try to keep it up to date with the real source of all kernel stable releases at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/

It differs from Linus’s tree at: https://github.com/torvalds/linux in that it contains all of the different stable tree branches and stable releases and tags, which many devices end up building on top of.

So, mirror away!

Also note, this is a read-only mirror, any pull requests created on it will be gleefully ignored, just like happens on Linus’s github mirror.

If people think this is needed on any other git hosting site, just let me know and I will be glad to push to other places as well.

This notification was also cross-posted on the new http://people.kernel.org/ site, go follow that for more kernel developer stuff.

June 15, 2019 08:10 PM

June 14, 2019

Linux Plumbers Conference: Open Printing Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Open Printing Microconference has been accepted into the 2019 Linux Plumbers Conference! In today’s world much is done online, but getting a hardcopy is still very much needed, as is the reverse case of having a hardcopy and wanting to scan it to make it digital. All of this needs to work well on Linux to keep Linux-based and open source operating systems relevant. With the progress in technology, using modern printers and scanners is becoming simpler: the driverless concept has made printing and scanning easier, getting the job done with a few simple clicks without requiring the user to install any kind of driver software. The Open Printing organization has been tasked with getting this job done. This Microconference will focus on what needs to be accomplished to keep Linux and open source operating systems a leader in today’s market.

Topics for this Microconference include:

Come and join us in the discussion of keeping your printers working.

We hope to see you there!

June 14, 2019 03:18 PM

June 13, 2019

Linux Plumbers Conference: Live Patching Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Live Patching Microconference has been accepted into the 2019 Linux Plumbers Conference! There are some workloads that require 100% uptime so rebooting for maintenance is not an option. But this can make the system insecure as new security vulnerabilities may have been discovered in the running kernel. Live kernel patching is a technique to update the kernel without taking down the machine. As one can imagine, patching a running kernel is far from trivial. Although it is being used in production today[1][2], there are still many issues that need to be solved.

These include:

Come and join us in the discussion about changing your running kernel without having to take it down!

We hope to see you there!

June 13, 2019 08:19 PM

June 12, 2019

Linux Plumbers Conference: You, Me and IoT Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the You, Me and IoT Microconference has been accepted into the 2019 Linux Plumbers Conference! IoT is becoming an integral part of our daily lives, controlling devices such as on/off switches, temperature controls, door and window sensors and so much more. But the technology itself requires a lot of infrastructure and communication frameworks such as Zigbee, OpenHAB and 6LoWPAN. Open source real-time embedded operating systems such as Zephyr also come into play. One completely open source framework implementation is Greybus, which has already made it into staging. Discussions will be around Greybus:

– Device management
– Abstracted devices
– Management of Unique IDs
– Network management
– Userspace utilities
– Network Authentication
– Encryption
– Firmware updates
– And more

Come join us and participate in the discussion on what keeps the Internet of Things together.

We hope to see you there!

June 12, 2019 02:00 PM

June 07, 2019

Pete Zaitcev: PostgreSQL and upgrades

As mentioned previously, I run a personal Fediverse instance with Pleroma, which uses Postgres. On Fedora, of course. So, a week ago, I went to do the usual "dnf distro-sync --releasever=30". And then, Postgres fails to start, because the database uses the previous format, 10, and the packages in F30 require format 11. Apparently, I was supposed to dump the database with pg_dumpall, upgrade, then restore. But now that I have binaries that refuse to read the old format, dumping is impossible. Wow.

A little web searching found an upgrader that works across formats (dnf install postgresql-upgrade; postgresql-setup --upgrade). But that one also copies the database, like a dump-restore procedure would. What if the database is too large for this? Am I the only one who finds these practices unacceptable?

Postgres was supposed to be a solid big brother to a boisterous but unreliable upstart MySQL, kind of like Postfix and Exim. But this is just such an absurd fault, it makes me think that I'm missing something essential.

UPDATE: Kaz commented that a form of -compat is conventional:

When I've upgraded in the past, Ubuntu has always just installed the new version of postgres alongside the old one, to allow me to manually export and reimport at my leisure, then remove the old version afterward. Because both are installed, you could pipe the output of one dumpall to the psql command on the other database and the size doesn't matter. The apps continue to point at their old version until I redirect them.

Yeah, as much as I can tell, Fedora does not package anything like that.

June 07, 2019 02:04 PM

June 04, 2019

Linux Plumbers Conference: Containers and Checkpoint/Restore MC Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Containers and Checkpoint/Restore Microconference has been accepted into the 2019 Linux Plumbers Conference! Even after the success of last year’s Containers Microconference, there’s still more to work on this year.

Last year had a security focus that featured seccomp support and LSM namespacing and stacking; now the next steps and the remaining blockers for those features need to be discussed.

Since last year’s Linux Plumbers in Vancouver, binderfs has been accepted into mainline, but more work is needed in order to fully support Android containers.

Another improvement since Vancouver is that shiftfs is now functional and included in Ubuntu, however, more work is required (including changes to VFS) before shiftfs can be accepted into mainline.

CGroup V2 is an ongoing task needing more work, with one topic of particular interest being feature parity with V1.

Additional important discussion topics include:

Come join us and participate in the discussion on what holds “The Cloud” together.

June 04, 2019 02:05 PM

June 03, 2019

Pete Zaitcev: Pi-hole

With the recent move by Google to disable the ad-blockers in Chrome (except for Enterprise level customers[1]), the interest is sure to increase for methods of protection against the ad-delivered malware, other than browser plug-ins. I'm sure Barracuda will make some coin if it's still around. And on the free software side, someone is making an all-in-one package for Raspberry Pi, called "Pi-hole". It works by screwing with DNS, which is actually an impressive demonstration of what an attack on DNS can do.

An obvious problem with Pi-hole is what happens to laptops when they are outside of the home site protection. I suppose one could devise a clone of Pi-hole that plugs into the dnsmasq. Every Fedora system runs one, because NM needs it in order to support the correct lookup on VPNs {Update: see below}. The most valuable part of Pi-hole is the blocklist, the rest is just scripting.

[1] "Google’s Enterprise ad-blocking exception doesn’t seem to include G Suite’s low and mid-tier subscribers. G Suite Basic is $6 per user per month and G Suite Business is $12 per user month."

UPDATE: Ouch. A link by Roy Schestovitz made me remember how it actually worked, and I was wrong above: NM does not run dnsmasq by default. It only has a capability to do so, if you want DNS lookup on VPNs work correctly. So, every user of VPN enables "dns=dnsmasq" in NM. But it is not the default.

UPDATE: A reader mentions that he was rooted by ads served by Space.com. Only 1 degree of separation (beyond Windows in my family).

June 03, 2019 02:37 PM

May 28, 2019

Kees Cook: security things in Linux v5.1

Previously: v5.0.

Linux kernel v5.1 has been released! Here are some security-related things that stood out to me:

introduction of pidfd
Christian Brauner landed the first portion of his work to remove pid races from the kernel: using a file descriptor to reference a process (“pidfd”). Now /proc/$pid can be opened and used as an argument for sending signals with the new pidfd_send_signal() syscall. This handle will only refer to the original process at the time the open() happened, and not to any later “reused” pid if the process dies and a new process is assigned the same pid. Using this method, it’s now possible to racelessly send signals to exactly the intended process without having to worry about pid reuse. (BTW, this commit wins the 2019 award for Most Well Documented Commit Log Justification.)
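
A minimal sketch of the v5.1-era flow, assuming headers that define __NR_pidfd_send_signal: the /proc/<pid> directory is opened to pin down the identity of the process, and the signal is then routed through that handle rather than through the raw pid.

```c
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char path[64];
	int pidfd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	/* Opening /proc/<pid> captures this specific process instance. */
	snprintf(path, sizeof(path), "/proc/%s", argv[1]);
	pidfd = open(path, O_RDONLY | O_CLOEXEC);
	if (pidfd < 0) {
		perror("open /proc/<pid>");
		return 1;
	}

	/*
	 * If the original process has already exited, this fails with ESRCH
	 * instead of hitting whatever process now happens to own the pid.
	 */
	if (syscall(__NR_pidfd_send_signal, pidfd, SIGTERM, NULL, 0)) {
		perror("pidfd_send_signal");
		return 1;
	}
	close(pidfd);
	return 0;
}
```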

explicitly test for userspace mappings of heap memory
During Linux Conf AU 2019 Kernel Hardening BoF, Matthew Wilcox noted that there wasn’t anything in the kernel actually sanity-checking when userspace mappings were being applied to kernel heap memory (which would allow attackers to bypass the copy_{to,from}_user() infrastructure). Driver bugs or attackers able to confuse mappings wouldn’t get caught, so he added checks. To quote the commit logs: “It’s never appropriate to map a page allocated by SLAB into userspace” and “Pages which use page_type must never be mapped to userspace as it would destroy their page type”. The latter check almost immediately caught a bad case, which was quickly fixed to avoid page type corruption.

LSM stacking: shared security blobs
Casey Schaufler has landed one of the major pieces of getting multiple Linux Security Modules (LSMs) running at the same time (called “stacking”). It is now possible for LSMs to share the security-specific storage “blobs” associated with various core structures (e.g. inodes, tasks, etc) that LSMs can use for saving their state (e.g. storing which profile a given task is confined under). The kernel originally gave only the single active “major” LSM (e.g. SELinux, AppArmor, etc) full control over the entire blob of storage. With “shared” security blobs, the LSM infrastructure does the allocation and management of the memory, and LSMs use an offset for reading/writing their portion of it. This clears the way for “medium sized” LSMs (like SARA and Landlock) to get stacked with a “major” LSM, as they need to store much more state than the “minor” LSMs (e.g. Yama, LoadPin) which could already stack because they didn’t need blob storage.
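
To sketch what the shared-blob interface looks like from an LSM’s side: the module declares how much space it needs per object, and the infrastructure hands back an offset into the shared blob. This is a kernel-style sketch rather than a complete module, and the “example” LSM and its state structure are made up for illustration; the lsm_blob_sizes / DEFINE_LSM pieces are what the stacking work provides.

```c
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/lsm_hooks.h>
#include <linux/printk.h>

/* Hypothetical per-inode state for this example LSM. */
struct example_inode_state {
	u32 label;
};

/* Tell the LSM core how much of each shared blob this module needs. */
static struct lsm_blob_sizes example_blob_sizes __lsm_ro_after_init = {
	.lbs_inode = sizeof(struct example_inode_state),
};

/* Locate this LSM's slice of the shared inode security blob. */
static inline struct example_inode_state *example_inode(const struct inode *inode)
{
	return inode->i_security + example_blob_sizes.lbs_inode;
}

static int __init example_init(void)
{
	pr_info("example LSM initialized\n");
	return 0;
}

DEFINE_LSM(example) = {
	.name  = "example",
	.blobs = &example_blob_sizes,
	.init  = example_init,
};
```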

SafeSetID LSM
Micah Morton added the new SafeSetID LSM, which provides a way to narrow the power associated with the CAP_SETUID capability. Normally a process with CAP_SETUID can become any user on the system, including root, which makes it a meaningless capability to hand out to non-root users in order for them to “drop privileges” to some less powerful user. There are trees of processes under Chrome OS that need to operate under different user IDs and other methods of accomplishing these transitions safely weren’t sufficient. Instead, this provides a way to create a system-wide policy for user ID transitions via setuid() (and group transitions via setgid()) when a process has the CAP_SETUID capability, making it a much more useful capability to hand out to non-root processes that need to make uid or gid transitions.

ongoing: refcount_t conversions
Elena Reshetova continued landing more refcount_t conversions in core kernel code (e.g. scheduler, futex, perf), with an additional conversion in btrfs from Anand Jain. The existing conversions, mainly when combined with syzkaller, continue to show their utility at finding bugs all over the kernel.
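
For context, here is an illustrative kernel-style sketch of the API those conversions move code onto; unlike a bare atomic_t, a refcount_t refuses to wrap on overflow (it saturates and warns), which downgrades a use-after-free primitive into a leak. The “widget” structure and helpers are made up for illustration.

```c
#include <linux/refcount.h>
#include <linux/slab.h>

struct widget {
	refcount_t refs;
	/* ... payload ... */
};

static struct widget *widget_create(gfp_t gfp)
{
	struct widget *w = kzalloc(sizeof(*w), gfp);

	if (w)
		refcount_set(&w->refs, 1);	/* caller holds the first reference */
	return w;
}

static void widget_get(struct widget *w)
{
	refcount_inc(&w->refs);		/* warns and saturates instead of wrapping */
}

static void widget_put(struct widget *w)
{
	if (refcount_dec_and_test(&w->refs))
		kfree(w);		/* last reference dropped */
}
```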

ongoing: implicit fall-through removal
Gustavo A. R. Silva continued to make progress on marking more implicit fall-through cases. What’s so impressive to me about this work, like refcount_t, is how many bugs it has been finding (see all the “missing break” patches). It really shows how quickly the kernel benefits from adding -Wimplicit-fallthrough to keep this class of bug from ever returning.

stack variable initialization includes scalars
The structleak gcc plugin (originally ported from PaX) had its “by reference” coverage improved to initialize scalar types as well (making “structleak” a bit of a misnomer: it now stops leaks from more than structs). Barring compiler bugs, this means that all stack variables in the kernel can be initialized before use at function entry. For variables not passed to functions by reference, the -Wuninitialized compiler flag (enabled via -Wall) already makes sure the kernel isn’t building with local-only uninitialized stack variables. And now with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL enabled, all variables passed by reference will be initialized as well. This should eliminate most, if not all, uninitialized stack flaws with very minimal performance cost (for most workloads it is lost in the noise), though it does not have the stack data lifetime reduction benefits of GCC_PLUGIN_STACKLEAK, which wipes the stack at syscall exit. Clang has recently gained similar automatic stack initialization support, and I’d love to see this feature land in native gcc. To evaluate the coverage of the various stack auto-initialization features, I also wrote regression tests in lib/test_stackinit.c.
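
A minimal userspace sketch of the bug class this closes in the kernel (struct and function names are made up for illustration): “rep” is passed by reference and one of its fields is never set, so without forced initialization its value is whatever happened to be on the stack beforehand, which is exactly the kind of leak CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL zeroes away at function entry.

```c
#include <stdio.h>
#include <string.h>

struct report {
	int  id;
	int  flags;	/* oops: never set below */
	char name[16];
};

static void fill_report(struct report *r, int id, const char *name)
{
	r->id = id;
	strncpy(r->name, name, sizeof(r->name) - 1);
	r->name[sizeof(r->name) - 1] = '\0';
	/*
	 * r->flags is left untouched: without forced initialization the
	 * caller reads back stale stack contents.
	 */
}

int main(void)
{
	struct report rep;	/* passed by reference below */

	fill_report(&rep, 42, "demo");
	printf("id=%d flags=%d name=%s\n", rep.id, rep.flags, rep.name);
	return 0;
}
```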

That’s it for now; please let me know if I missed anything. The v5.2 kernel development cycle is off and running already. :)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

May 28, 2019 03:49 AM

May 27, 2019

Linux Plumbers Conference: Distribution Kernels Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Distribution Kernels Microconference has been accepted to the 2019 Linux Plumbers Conference. This is the first time Plumbers has offered a microconference focused on kernel distribution collaboration.

Linux distributions come in many forms, ranging from community run distributions like Debian and Gentoo, to commercially supported ones offered by SUSE or Red Hat, to focused embedded distributions like Android or Yocto. Each of these distributions maintains a kernel, making choices related to features and stability. The focus of this track is on the pain points distributions face in maintaining their chosen kernel and common solutions every distribution can benefit from.

Example topics include:

“Distribution kernel” is used in a very broad manner. If you maintain a kernel tree for use by others, we welcome you to come and share your experiences.

Here is a list of proposed topics. For Linux Plumbers 2019, new topics for microconferences can be submitted via the Call for Proposals (CfP) interface. Please visit the CfP page for more information.

May 27, 2019 05:24 PM

May 26, 2019

Linux Plumbers Conference: Linux Plumbers Earlybird Registration Quota Reached, Regular Registration Opens 30 June

A few days ago we added more capacity to the earlybird registration quota, but that too has now filled up, so your next opportunity to register for Plumbers will be regular registration on 30 June. Alternatively, the call for presentations for the refereed track is still open, and accepted talks will get a free pass.

Quotas were added a few years ago to avoid the entire conference selling out months ahead of time and accommodate attendees whose approval process takes a while or whose company simply won’t allow them to register until closer to the date of the conference.

May 26, 2019 04:54 PM

May 22, 2019

Linux Plumbers Conference: Additional early bird slots available for LPC 2019

The Linux Plumbers Conference (LPC) registration web site has been showing “sold out” recently because the cap on early bird registrations was reached. We are happy to report that we have reviewed the registration numbers for this year’s conference and were able to open more early bird registration slots. Beyond that, regular registration will open July 1st. Please note that speakers and microconference runners get free passes to LPC, as do some microconference presenters, so that may be another way to attend the conference. Time is running out for new refereed-track and microconference proposals, so visit the CFP page soon. Topics for accepted microconferences are welcome as well.

LPC will be held in Lisbon, Portugal from Monday, September 9 through Wednesday, September 11.

We hope to see you there!

May 22, 2019 01:03 PM