Kernel Planet
November 26, 2023
Bro, u mad?
Zubrin lies as easily as he breathes, not even bothering to calculate in Excel, let alone take integrals!
But they continue to believe him — because it's Zubrin!
When they discussed his "engine running on a salt of Uranium" [NSWR — zaitcev], I wrote that the engine cannot work in principle, because a reactor can only work with effective moderation of neutrons, but already at a moderator temperature of 3000 degrees (as in KIWI), the fission cross-section drops by a factor of 10, and the critical mass increases proportionally. But nobody paid attention — who am I, and who is Zubrin!
The core has to be hot, and the moderator has to be cool, this is essential.
But they continued to fantasize: is it going to be 100,000 degrees in there, or only 10,000?
No matter how much I pointed out the fundamental contradiction — here is the cold sub-critical solution, and there, only a meter or two away, is the super-critical plasma, so neutrons from that plasma fly into the solution, which will inevitably capture them, moderate them, and react — and therefore the whole concept goes down the toilet.
But they have been discussing this salt engine for decades, without trying to check Zubrin's claims.
All "normal" nuclear reactors work only in the region between "first" (including the delayed neutrons) and "second" (with fast neutrons) criticalities. Only in this region, control of the reactor is possible. By the way, the difference in breeding ratios is only 1.000 and 1.007 for slow neutrons and 1.002 for the fast ones (in case of Plutonium, this much is the case even for slow neutrons).
And by the way, the average delay of the delayed neutrons is 0.1 seconds! The solution would have to remain in the active zone for 100 milliseconds in order to capture the delayed neutrons! Not even the solid-core RD-0410 reached that much.
Therefore, Zubrin's engine must be critical on prompt neutrons. And because the moderator underperforms when it is hot, the prompt neutrons remain effectively as fast as fission neutrons, and therefore the density of the plasma has to be the same as the density of the metal in order to achieve criticality — that is to say, almost 20 g/cm³ for Uranium.
But this persuades nobody, because Zubrin is Zubrin, and who are you?
It all began as a discussion of Mars Direct among geeks, but escalated quickly.
November 26, 2023 03:27 AM
November 25, 2023
Videos and slides from the 2023 Linux Security summits may be found here:
Linux Security Summit North America (LSS-NA), May 10-12 2023, Vancouver, Canada.
Linux Security Summit Europe (LSS-EU), September 20-21 2023, Bilbao, Spain.
Note: if you wish to follow Linux Security Summit announcements and event updates via Mastodon, see https://social.kernel.org/LinuxSecSummit. You can follow this via the Fediverse or the RSS reader of your choice.
November 25, 2023 08:32 PM
November 17, 2023
The current plan is to be co-located in Vienna with OSS-EU. We don’t have exact dates to give (still finding conference space) but it will be three days on the week of 16 September.
November 17, 2023 01:31 PM
November 12, 2023
As a reminder, the live stream of each main track of Linux Plumbers Conference will be available in real time on YouTube. The links are now live in the timetable. To view, go to the Schedule Overview and click on the paperclip on the upper right of the track you want to watch to bring up the live stream URL.
Live stream viewers may interact over chat by joining the Matrix room of that event. To see all our Matrix rooms for Plumbers, go to the space #lpc2023:lpc.events in Matrix. The room names should be pretty intuitive.
November 12, 2023 11:24 PM
November 09, 2023
The URL for the training session we did on Thursday morning is:
https://bbb2.lpc.events/playback/presentation/2.3/62e3456da3c0598910e28d204ee24b669d714c04-1699539923588?t=37m55s
Note that the URL skips to time index 37:55 which is where the training actually begins (the hackroom got started early).
November 09, 2023 08:46 PM
November 08, 2023
We’ll be holding a BBB Training session on Thursday (9 November) at:
7am PST, 10am EST, 3pm UTC, 4pm CET, 8:30pm IST, 12am Friday JST
This will be recorded so that you can watch it later.
What is BBB? It’s open source video conferencing software, similar to Zoom and Google Meet, but much better for interactions between remote attendees and a live audience.
There are several features that BBB provides, and this training session will go over the common ones that you will likely use during your presentation.
This session is highly recommended for those who are presenting remotely, and may also be useful for those who are only attending remotely, to get a feel for the platform. In-person attendees are welcome too, but we’ll have shepherds in the conference rooms on the day to help you out.
To join, you will need to log in to: https://meet.lpc.events
After logging in, to join the meeting, click the Hackroom entry in the left navigation, then select the join button of Hackroom 1.
November 08, 2023 02:06 PM
November 05, 2023
Linus has pulled the initial GSP firmware support for nouveau. This is just the first set of work to use the new GSP firmware and there are likely many challenges and improvements ahead.
To get this working you need to install the firmware which hasn't landed in linux-firmware yet.
For Fedora this copr has the firmware in the necessary places:
https://copr.fedorainfracloud.org/coprs/airlied/nouveau-gsp/build/6593115/
Hopefully we can upstream that in the next week or so.
If you have an Ada-based GPU then it should just try and work out of the box; if you have Turing or Ampere you currently need to pass nouveau.config=NvGspRm=1 on the kernel command line to attempt to use GSP.
Going forward, I've got a few fixes and stabilization bits to land, which we will concentrate on for 6.7; after that we have to work out how to keep the firmware up to date, support new hardware, and add new features.
November 05, 2023 08:23 PM
November 01, 2023
"Why does ACPI exist" - - the greatest thread in the history of forums, locked by a moderator after 12,239 pages of heated debate, wait no let me start again.
Why does ACPI exist? In the beforetimes power management on x86 was done by jumping to an opaque BIOS entry point and hoping it would do the right thing. It frequently didn't. We called this Advanced Power Management (Advanced because before this power management involved custom drivers for every machine and everyone agreed that this was a bad idea), and it involved the firmware having to save and restore the state of every piece of hardware in the system. This meant that assumptions about hardware configuration were baked into the firmware - failed to program your graphics card exactly the way the BIOS expected? Hurrah! It's only saved and restored a subset of the state that you configured and now potential data corruption for you. The developers of ACPI made the reasonable decision that, well, maybe since the OS was the one setting state in the first place, the OS should restore it.
So far so good. But some state is fundamentally device specific, at a level that the OS generally ignores. How should this state be managed? One way to do that would be to have the OS know about the device specific details. Unfortunately that means you can't ship the computer without having OS support for it, which means having OS support for every device (exactly what we'd got away from with APM). This, uh, was not an option the PC industry seriously considered. The alternative is that you ship something that abstracts the details of the specific hardware and makes that abstraction available to the OS. This is what ACPI does, and it's also what things like Device Tree do. Both provide static information about how the platform is configured, which can then be consumed by the OS and avoid needing device-specific drivers or configuration to be built-in.
The main distinction between Device Tree and ACPI is that Device Tree is purely a description of the hardware that exists, and so still requires the OS to know what's possible - if you add a new type of power controller, for instance, you need to add a driver for that to the OS before you can express that via Device Tree. ACPI decided to include an interpreted language to allow vendors to expose functionality to the OS without the OS needing to know about the underlying hardware. So, for instance, ACPI allows you to associate a device with a function to power down that device. That function may, when executed, trigger a bunch of register accesses to a piece of hardware otherwise not exposed to the OS, and that hardware may then cut the power rail to the device to power it down entirely. And that can be done without the OS having to know anything about the control hardware.
How is this better than just calling into the firmware to do it? Because the fact that ACPI declares that it's going to access these registers means the OS can figure out that it shouldn't, because it might otherwise collide with what the firmware is doing. With APM we had no visibility into that - if the OS tried to touch the hardware at the same time APM did, boom, almost impossible to debug failures. (This is why various hardware monitoring drivers refuse to load by default on Linux - the firmware declares that it's going to touch those registers itself, so Linux decides not to in order to avoid race conditions and potential hardware damage. In many cases the firmware offers a collaborative interface to obtain the same data, and a driver can be written to get that. This bug comment discusses this for a specific board.)
Unfortunately ACPI doesn't entirely remove opaque firmware from the equation - ACPI methods can still trigger System Management Mode, which is basically a fancy way to say "Your computer stops running your OS, does something else for a while, and you have no idea what". This has all the same issues that APM did, in that if the hardware isn't in exactly the state the firmware expects, bad things can happen. While historically there were a bunch of ACPI-related issues because the spec didn't define every single possible scenario and also there was no conformance suite (eg, should the interpreter be multi-threaded? Not defined by spec, but influences whether a specific implementation will work or not!), these days overall compatibility is pretty solid and the vast majority of systems work just fine - but we do still have some issues that are largely associated with System Management Mode.
One example is a recent Lenovo one, where the firmware appears to try to poke the NVME drive on resume. There's some indication that this is intended to deal with transparently unlocking self-encrypting drives on resume, but it seems to do so without taking IOMMU configuration into account and so things explode. It's kind of understandable why a vendor would implement something like this, but it's also kind of understandable that doing so without OS cooperation may end badly.
This isn't something that ACPI enabled - in the absence of ACPI firmware vendors would just be doing this unilaterally with even less OS involvement and we'd probably have even more of these issues. Ideally we'd "simply" have hardware that didn't support transitioning back to opaque code, but we don't (ARM has basically the same issue with TrustZone). In the absence of the ideal world, by and large ACPI has been a net improvement in Linux compatibility on x86 systems. It certainly didn't remove the "Everything is Windows" mentality that many vendors have, but it meant we largely only needed to ensure that Linux behaved the same way as Windows in a finite number of ways (ie, the behaviour of the ACPI interpreter) rather than in every single hardware driver, and so the chances that a new machine will work out of the box are much greater than they were in the pre-ACPI period.
There's an alternative universe where we decided to teach the kernel about every piece of hardware it should run on. Fortunately (or, well, unfortunately) we've seen how that works out in the ARM world. Most device-specific code simply never reaches mainline, and most users are stuck running ancient kernels as a result. Imagine every x86 device vendor shipping their own kernel optimised for their hardware, and now imagine how well that works out given the quality of their firmware. Does that really seem better to you?
It's understandable why ACPI has a poor reputation. But it's also hard to figure out what would work better in the real world. We could have built something similar on top of Open Firmware instead but the distinction wouldn't be terribly meaningful - we'd just have Forth instead of the ACPI bytecode language. Longing for a non-ACPI world without presenting something that's better and actually stands a reasonable chance of adoption doesn't make the world a better place.
comments
November 01, 2023 06:30 AM
October 29, 2023
Suppose you want to create a pipeline with the subprocess module and you want to capture the stderr. A colleague of mine upstream wrote this:
p1 = subprocess.Popen(cmd1,
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
p2 = subprocess.Popen(cmd2, stdin=p1.stdout,
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
p1.stdout.close()
p1_stderr = p1.communicate()
p2_stderr = p2.communicate()
return p1.returncode or p2.returncode, p1_stderr, p2_stderr
Unfortunately, the above saves the p1.stdout in memory, which may come back to bite the user once the amount piped becomes large enough.
I think the right answer is this:
with tempfile.TemporaryFile() as errfile:
    # Both children append stderr to the same unnamed temporary file,
    # so nothing accumulates in our memory.
    p1 = subprocess.Popen(cmd1,
                          stdout=subprocess.PIPE, stderr=errfile, close_fds=True)
    p2 = subprocess.Popen(cmd2, stdin=p1.stdout,
                          stdout=subprocess.PIPE, stderr=errfile, close_fds=True)
    # Drop our reference to the pipe so p2 sees EOF when p1 exits.
    p1.stdout.close()
    p2.communicate()
    p1.wait()
    errfile.seek(0)
    px_stderr = errfile.read()
return p1.returncode or p2.returncode, px_stderr
Stackoverflow is overflowing with noise on this topic. Just ignore it.
October 29, 2023 02:02 AM
October 26, 2023
The Pixel 8 hardware (Tensor G3) supports the ARM Memory Tagging Extension (MTE), and software support is available both in Android userspace and the Linux kernel. This feature is a powerful defense against linear buffer overflows and many types of use-after-free flaws. I’m extremely happy to see this hardware finally available in the real world.
Turning it on for userspace is already wired up in the Android UI: Settings / System / Developer options / Memory Tagging Extension / Enable MTE until you turn it off. Once enabled it will internally change an Android “system property” named “arm64.memtag.bootctl” by adding the option “memtag”.
Turning it on for the kernel is slightly more involved, but not difficult at all. This requires manually setting the “arm64.memtag.bootctl” property mentioned above to include “memtag-kernel” as well:
- Plug your phone into a system that can run the adb tool
- If not already installed, install adb. For example on Debian/Ubuntu: sudo apt install adb
- Turn on “USB Debugging” in the phone’s “Developer options” menu, and accept the debugging session confirmation that will pop up when you first run adb
- Verify the current setting:
adb shell getprop | grep memtag.bootctl
[arm64.memtag.bootctl]: [memtag]
- Enable kernel MTE:
adb shell setprop arm64.memtag.bootctl memtag,memtag-kernel
- Check the results:
adb shell getprop | grep memtag.bootctl
[arm64.memtag.bootctl]: [memtag,memtag-kernel]
- Reboot your phone
To check that MTE is enabled for the kernel (which is implemented using Kernel Address Sanitizer’s Hardware Tagging mode), you can check the kernel command line after rebooting:
$ mkdir foo && cd foo
$ adb bugreport
...
$ mkdir unpacked && cd unpacked
$ unzip ../bugreport*.zip
...
$ grep kasan= bugreport*.txt
...: Command line: ... kasan=off ... kasan=on ...
The latter “kasan=on” overrides the earlier “kasan=off“.
Enjoy!
© 2023, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
October 26, 2023 07:19 PM
October 21, 2023
Covenants are a construction to allow introspection: a transaction output can place conditions on the transaction which spends it (beyond the specific “must provide a valid signature of itself and a particular pubkey”).
I previously looked at Examining ScriptPubkeys, but another useful thing covenants want to enforce is amounts. This is easy for equality, but consider the case where you are allowed to merge inputs: perhaps the first output amount must be the sum of the first and second inputs.
The problem is that Bitcoin Script deals in signed ones-complement values, and 31 bits limits us to 21.47483648 bitcoin. However, using OP_MULTISHA256 or OP_CAT, it’s possible to deal with full amounts. I’ve written some (untested!) script code below.
The Vexing Problem of Amounts
Using OP_TXHASH, we can get SHA256(input amount) and SHA256(output amount) on the stack. Since this involves hashing, we can’t evaluate the number for anything but equality, so as in other cases where we don’t have Fully Complete Covenants, we need to have the user supply the actual values on the witness stack; we test those for the conditions we want, and then make sure they match what OP_TXHASH says is in the transaction. I usually object to this backwards form (just give me the value on the stack!), but as you’ll see, we couldn’t natively use 64-bit values from OP_TX anyway (I had proposed pushing two values, which is its own kind of ugly).
21M BTC is just under 2^51 satoshis.
We split these bits into a pair of stack values:
- lower 24 bits
- upper bits (27, but we allow up to 31)
I call this tuple “Script-friendly pair” (SFP) form. Note that all script numbers on the stack are represented in little-endian, with a sign bit (0x80 on the last byte). This is a nasty format to work with, unfortunately.
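To make the representation concrete, here is a hedged Python model of the pair and its relation to the 8-byte little-endian amount format (function names are mine, purely illustrative):

def to_sfp(sats):
    # Split a satoshi amount into (lower 24 bits, upper bits).
    assert 0 <= sats < (1 << 51)
    return sats & 0xFFFFFF, sats >> 24

def sfp_to_le8(lower, upper):
    # Recombine into the 8-byte little-endian value the scripts below build.
    return ((upper << 24) | lower).to_bytes(8, "little")

lower, upper = to_sfp(150_000_000)   # 1.5 BTC
assert sfp_to_le8(lower, upper) == (150_000_000).to_bytes(8, "little")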
Converting A Script-Friendly Pair to an 8-byte Little-Endian Value
Here’s the code that takes a positive CScriptNum and produces two stack values which can be concatenated to make a 4-byte unsigned value:
# !UNTESTED CODE!
# Stack (top to bottom): lower, upper
OP_SWAP
# Generate required prefix to append to stack value to make it 4 bytes long.
OP_SIZE
OP_DUP
OP_NOTIF
# 0 -> 00000000
OP_DROP
4 OP_PUSHDATA1 0x00 0x00 0x00 0x00
OP_ELSE
OP_DUP
1 OP_EQUAL OP_IF
# Single byte: prepend 0x00 0x00 0x00
OP_DROP
3 OP_PUSHDATA1 0x00 0x00 0x00
OP_ELSE
OP_DUP
2 OP_EQUAL OP_IF
# Two bytes: prepend 0x00 0x00
2 OP_PUSHDATA1 0x00 0x00
OP_ELSE
3 OP_EQUAL OP_IF
# Three bytes: prepend 0x00
1 OP_PUSHDATA1 0x00
OP_ELSE
# Prepend nothing.
0
OP_ENDIF
OP_ENDIF
OP_ENDIF
OP_ENDIF
OP_SWAP
# Stack (top to bottom): upper, pad, lower
That 46 bytes handles upper. Now lower is a CScriptNum between 0 and 16777215, and we want to produce two stack values which can be concatenated to make a 3-byte unsigned value. Here we have to remove the zero-padding in the four-byte case:
# !UNTESTED CODE!
# Stack (top to bottom): upper, pad, lower
OP_ROT
# Generate required prefix to append to stack value to make it 3 bytes long.
OP_SIZE
OP_DUP
OP_NOTIF
# 0 -> 000000
OP_DROP
3 OP_PUSHDATA1 0x00 0x00 0x00
OP_ELSE
OP_DUP
1 OP_EQUAL OP_IF
# Single byte: prepend 0x00 0x00
OP_DROP
2 OP_PUSHDATA1 0x00 0x00
OP_ELSE
OP_DUP
2 OP_EQUAL OP_IF
# Two bytes. Now maybe final byte is 0x00 simply so it doesn't
# appear negative, but we don't care.
1 OP_PUSHDATA1 0x00
OP_ELSE
# Three bytes: empty append below
3 OP_EQUAL OP_NOTIF
# Four bytes, e.g. 0xff 0xff 0xff 0x00
# Convert to three byte version: negate and add 2^23
# => 0xff 0xff 0xff
OP_NEG
4 OP_PUSHDATA1 0x00 0x00 0x80 0x00
OP_ADD
OP_ENDIF
# Prepend nothing.
0
OP_ENDIF
OP_ENDIF
OP_ENDIF
OP_SWAP
# Stack (top to bottom): lower, pad, upper, pad
You can optimize these 47 bytes a little, but I’ll leave that as an exercise for the reader!
Now we use OP_MULTISHA256 (or OP_CAT 3 times and OP_SHA256) to concatenate them to form an 8-byte little-endian number, for comparison against the format used by OP_TXHASH.
Basically, 95 bytes to compare our tuple to a hashed value.
Adding Two Script-Friendly Pairs
Let’s write some code to add two well-formed Script-Friendly Pairs!
# !UNTESTED CODE!
# Stack (top to bottom): a_lower, a_upper, b_lower, b_upper
OP_ROT
OP_ADD
OP_DUP
4 OP_PUSHDATA1 0x00 0x00 0x00 0x01
OP_GREATERTHANOREQUAL
OP_IF
# lower overflow, bump upper.
# FIXME: We can OP_TUCK this constant above!
4 OP_PUSHDATA1 0x00 0x00 0x00 0x01
OP_SUB
OP_SWAP
OP_1ADD
OP_ELSE
OP_SWAP
OP_ENDIF
# Stack now: a_upper(w/carry), lower_sum, b_upper.
OP_ROT
OP_ADD
OP_SWAP
# Stack now: lower_sum, upper_sum
Note that these 26 bytes don’t check that upper doesn’t overflow: if we’re dealing with verified amounts, we can add 16 times before it’s even possible (and it’s never possible with distinct amounts of course). Still, we can add OP_DUP 0 OP_GREATERTHANOREQUAL OP_VERIFY before the final OP_SWAP.
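Here is a hedged Python model of what those 26 bytes compute (the function name is mine, purely illustrative):

def sfp_add(a_lower, a_upper, b_lower, b_upper):
    # Mirror the script: add the lower halves, and on 24-bit overflow
    # (the OP_GREATERTHANOREQUAL test) subtract 2^24 and carry into upper.
    lower = a_lower + b_lower
    upper = a_upper + b_upper
    if lower >= (1 << 24):
        lower -= (1 << 24)   # the OP_SUB
        upper += 1           # the OP_1ADD
    return lower, upper

assert sfp_add(0xFFFFFF, 0, 1, 0) == (0, 1)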
Checking Script-Friendly Pairs
The code above assumes well-formed pairs, but since the pairs will come from the witness stack, we need a routine to check that a pair is well-formed:
# !UNTESTED CODE!
# Stack: lower, upper
OP_DUP
# lower must be 0 - 0xFFFFFF inclusive
0
4 OP_PUSHDATA1 0xFF 0xFF 0xFF 0x00
OP_WITHIN
OP_VERIFY
OP_OVER
# upper must be 0 - 0x7FFFFFF inclusive
0
4 OP_PUSHDATA1 0xFF 0xFF 0xFF 0x07
OP_WITHIN
OP_VERIFY
This ensures the ranges are all within spec: no negative numbers, no giant numbers.
Summary
While this shows that OP_CAT/OP_MULTISHA256 is sufficient to deal with bitcoin amounts in Script, the size (about 250 bytes to validate that two inputs equal one output) makes a fairly compelling case for optimization.
It’s worth noting that this is why Liquid chose to add the following 64-bit opcodes to bitcoin script: OP_ADD64, OP_SUB64, OP_MUL64, OP_DIV64, OP_NEG64, OP_LESSTHAN64, OP_LESSTHANOREQUAL64, OP_GREATERTHAN64, OP_GREATERTHANOREQUAL64.
(They also reenabled the bitwise opcodes (OP_XOR etc) to work just fine with these. They also implemented OP_SCRIPTNUMTOLE64, OP_LE64TOSCRIPTNUM and OP_LE32TOLE64 for conversion.)
In my previous post I proposed OP_LESS, which works on arbitrary values, but that doesn’t work here because the endianness is wrong! As a minimum, we’d need to add OP_LESSTHAN64, OP_ADD64 and OP_NEG64 to allow 64-bit comparison, addition and subtraction.
But, with only OP_CAT or OP_MULTISHA256, it’s possible to deal with amounts. It’s just not pretty!
Thanks for reading!
October 21, 2023 01:30 PM
October 19, 2023
Covenants are a construction to allow introspection: a transaction output can place conditions on the transaction which spends it (beyond the specific “must provide a valid signature of itself and a particular pubkey”).
My preferred way of doing introspection is for Bitcoin Script to have a way of asking for various parts of the transaction onto the stack (aka OP_TX) for direct testing (Fully Complete Covenants), as opposed to using some tx hash and forcing the Script to produce a matching hash to pass (Equality Covenants). In the former case, you do something like:
# Is the nLocktime > 100?
OP_TX_BIT_NLOCKTIME OP_TX 100 OP_GREATERTHAN OP_VERIFY
In the latter you do something like:
# They provide nLocktime on the stack.
OP_DUP
# First check it's > 100
100 OP_GREATERTHAN OP_VERIFY
# Now check it's actually the right value, by comparing its hash to the hash of nLocktime
OP_SHA256
OP_TX_BIT_NLOCKTIME OP_TXHASH OP_EQUALVERIFY
However, when we come to examining an output’s ScriptPubkey, we’re forced into the latter mode unless we’re seeking an exact match: the ScriptPubkey is (almost always) a one-way function of the actual spending conditions.
Making a Simple Taproot, in Script
Let’s take a simple taproot case. You want to assert that the scriptPubkey pays to a known key K, or a script given by the covenant spender. This is the simplest interesting form of Taproot, with a single script path.
The steps to make this into a ScriptPubkey (following BIP 341) are:
- Get a tagged tapleaf hash of the script.
- Tweak the key K by this value.
- Prepend two bytes “0x51 0x20”.
- Compare with the ScriptPubkey of this tx.
Step 1: We need OP_CAT, or OP_MULTISHA256
If we spell out the things we need to hash, it looks like:
SHA256(SHA256("TapLeaf") + SHA256("TapLeaf") + 0xC0 + CSCRIPTNUM(LEN(script)) + script)
CSCRIPTNUM(X) is (if X is in canonical form, as it will be from OP_SIZE):
- if X is less than 253: X itself, as a single byte
- otherwise, if the length is less than 256: 0xFD, followed by X as two little-endian bytes (the single-byte value needs a 0x00 pad)
- otherwise, if the length is less than 65536: 0xFD, followed by X as two little-endian bytes
- otherwise, we don’t care, make shorter scripts!
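For reference, here is the target computation in hedged Python (following BIP 341; helper names are mine):

import hashlib

def tagged_hash(tag, msg):
    # SHA256(SHA256(tag) + SHA256(tag) + msg), per BIP 340/341.
    th = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(th + th + msg).digest()

def compact_size(n):
    # Only the short cases we care about here.
    if n < 253:
        return bytes([n])
    assert n < 65536, "make shorter scripts!"
    return b"\xfd" + n.to_bytes(2, "little")

def tapleaf_hash(script, leaf_version=0xC0):
    return tagged_hash("TapLeaf",
                       bytes([leaf_version]) + compact_size(len(script)) + script)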
The obvious way to do this is to enable OP_CAT, but this was removed because it allows construction of giant stack variables. If that is an issue, we can instead use a “concatenate-and-hash” function OP_MULTISHA256, which turns out to be easiest to use if it hashes the stack from top to bottom.
OP_MULTISHA256 definition:
- If the stack is empty, fail.
- Pop N off the stack.
- If N is not a CScriptNum, fail.
- If there are fewer than N entries on the stack, fail.
- Initialize a SHA256 context.
- While N > 0:
  - Pop the top entry off the stack.
  - Hash it into the SHA256 context.
  - Decrement N.
- Finish the SHA256 context, and push the resulting 32 bytes onto the stack.
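A hedged Python model of those semantics (the stack is a list with its top at the end; CScriptNum decoding is simplified to a plain non-negative int):

import hashlib

def op_multisha256(stack):
    if not stack:
        raise ValueError("fail: empty stack")
    n = stack.pop()
    if not isinstance(n, int) or n < 0:
        raise ValueError("fail: not a CScriptNum")
    if len(stack) < n:
        raise ValueError("fail: fewer than N entries on the stack")
    ctx = hashlib.sha256()
    for _ in range(n):
        ctx.update(stack.pop())   # hash entries from top to bottom
    stack.append(ctx.digest())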
The result is either:
# Script is on stack, produce tagged tapleaf hash
# First, encode length
OP_SIZE
OP_DUP
# < 253?
OP_PUSHDATA1 1 253 OP_LESSTHAN
OP_IF
# Empty byte on stack:
0
OP_ELSE
OP_DUP
# > 255?
OP_PUSHDATA1 1 0xFF OP_GREATERTHAN
OP_IF
OP_PUSHDATA1 1 0xFD
OP_ELSE
# Needs padding byte
OP_PUSHDATA1 2 0xFD 0x00
OP_ENDIF
OP_ENDIF
# Push 0xC0 leaf_version on stack
OP_PUSHDATA1 1 0xC0
# Push hashed tag on stack, twice.
OP_PUSHDATA1 7 "TapLeaf"
OP_SHA256
OP_DUP
# Now, hash them together
6 OP_MULTISHA256
Or, using OP_CAT (assuming it also concatenates the top of stack to second on stack):
# Script is on stack, produce tagged tapleaf hash
# First, encode length
OP_SIZE
OP_DUP
# < 253?
OP_PUSHDATA1 1 253 OP_LESSTHAN
OP_NOTIF
OP_DUP
# > 255?
OP_PUSHDATA1 1 0xFF OP_GREATERTHAN
OP_IF
OP_PUSHDATA1 1 0xFD
OP_ELSE
# Needs padding byte
OP_PUSHDATA1 2 0xFD 0x00
OP_ENDIF
OP_CAT
OP_ENDIF
# Prepend length to script
OP_CAT
# Prepend 0xC0 leaf_version
OP_PUSHDATA1 1 0xC0
OP_CAT
# Push hashed tag on stack, twice, and prepend
OP_PUSHDATA1 7 "TapLeaf"
OP_SHA256
OP_DUP
OP_CAT
OP_CAT
# Hash the lot.
OP_SHA256
Step 2: We need to Tweak a Key, OP_KEYADDTWEAK
Now, we need to tweak a public key, as detailed in BIP 341:
def taproot_tweak_pubkey(pubkey, h):
    t = int_from_bytes(tagged_hash("TapTweak", pubkey + h))
    if t >= SECP256K1_ORDER:
        raise ValueError
    P = lift_x(int_from_bytes(pubkey))
    if P is None:
        raise ValueError
    Q = point_add(P, point_mul(G, t))
    return 0 if has_even_y(Q) else 1, bytes_from_int(x(Q))
Let’s assume OP_KEYADDTWEAK works like so:
- If there are less than two items on the stack, fail.
- Pop the tweak t off the stack. If t >= SECP256K1_ORDER, fail.
- Pop the key P off the stack. If it is not a valid compressed pubkey, fail. Convert to Even-Y if necessary (i.e. lift_x()).
- Q = P + t*G.
- Push the X coordinate of Q on the stack.
So now we just need to create the tagged hash, and feed it to OP_KEYADDTWEAK:
# Key, tapscript hash are on stack.
OP_OVER
OP_PUSHDATA1 8 "TapTweak"
OP_SHA256
OP_DUP
# Stack is now: key, tapscript, key, H(TapTweak), H(TapTweak)
4 OP_MULTISHA256
OP_KEYADDTWEAK
Or with OP_CAT instead of OP_MULTISHA256:
# Key, tapscript hash are on stack.
OP_OVER
OP_PUSHDATA1 8 "TapTweak"
OP_SHA256
OP_DUP
# Stack is now: key, tapscript, key, H(TapTweak), H(TapTweak)
OP_CAT
OP_CAT
OP_CAT
OP_SHA256
OP_KEYADDTWEAK
Step 3: We Need To Prepend The Taproot Bytes
This is easy with OP_CAT:
# ScriptPubkey, Taproot key is on stack.
# Prepend "OP_1 32" to make Taproot v1 ScriptPubkey
OP_PUSHDATA1 2 0x51 0x20
OP_CAT
OP_EQUALVERIFY
With OP_MULTISHA256 we need to hash the ScriptPubkey to compare it (or, if we only have OP_TXHASH, it’s already hashed):
# ScriptPubkey, Taproot key is on stack.
OP_SHA256
# Prepend "OP_1 32" to make Taproot v1 ScriptPubkey
OP_PUSHDATA1 2 0x51 0x20
2 OP_MULTISHA256
# SHA256(ScriptPubkey) == SHA256(0x51 0x20 taproot)
OP_EQUALVERIFY
Making a More Complete Taproot, in Script
That covers the “one key, one script” case.
If we have more than one taproot leaf, we need to merkle them together, rather than simply using the taproot leaf directly. Let’s assume for simplicity that we have two scripts:
- Produce the tagged leaf hash for each script; call them H1 and H2.
- If H1 < H2, the merkle is TaggedHash("TapBranch", H1 + H2), otherwise TaggedHash("TapBranch", H2 + H1).
Step 1: Tagged Hash
We’ve done this before, it’s just Step 1 as before.
Step 2: Compare and Hash: We Need OP_LESS or OP_CONDSWAP
Unfortunately, all the arithmetic functions except OP_EQUAL only take CScriptNums, so we need a new opcode to compare 32-byte blobs. Minimally, this would be OP_LESS, though OP_CONDSWAP (put the lesser one on top of the stack) is possible too. In our case we don’t care what happens with unequal lengths, but if we assume big-endian values are most likely, we could zero-prepend the shorter value before comparing.
The result looks like this:
# Hash1, Hash2 are on the stack.
# Put lesser hash top of stack if not already
OP_LESS
OP_NOTIF OP_SWAP OP_ENDIF
OP_PUSHDATA1 9 "TapBranch"
OP_SHA256
OP_DUP
4 OP_MULTISHA256
Or, using OP_CAT and OP_CONDSWAP:
# Hash1, Hash2 are on the stack.
# Put lesser hash top of stack if not already
OP_CONDSWAP
OP_PUSHDATA1 9 "TapBranch"
OP_SHA256
OP_DUP
OP_CAT
OP_CAT
OP_CAT
OP_SHA256
So now we can make arbitrarily complex merkle trees from parts, in Script!
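For comparison, the whole combine step as a hedged Python model (reusing the tagged_hash helper sketched earlier):

def tapbranch_hash(h1, h2):
    # The lesser hash sorts first, as in BIP 341.
    a, b = (h1, h2) if h1 < h2 else (h2, h1)
    return tagged_hash("TapBranch", a + b)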
Making More Useful Templates: Reducing the Power of OP_SUCCESS
Allowing the covenant spender to specify a script branch of their own is OK if we simply want a condition which is “… OR anything you want”. But that’s not generally useful: consider vaults, where you want to enforce a delay, after which they can spend. In this case, we want “… AND anything you want”.
We can, of course, insist that the script they provide starts with 1000 OP_CHECKSEQUENCEVERIFY. But because any unknown opcode causes immediate script success (without actually executing anything), they can override this test by simply inserting an invalid opcode in the remainder of the script!
There are two ways I can see to resolve this: one is delegation, where the remainder of the script is popped off the stack (OP_POPSCRIPT?). You would simply insist that the script they provide be exactly 1000 OP_CHECKSEQUENCEVERIFY OP_POPSCRIPT.
The other way is to weaken OP_SUCCESSx opcodes. This must be done carefully! In particular, we can use a separator, such as OP_SEPARATOR, and change the semantics of OP_SUCCESSx:
- If there is an OP_SEPARATOR before OP_SUCCESSx:
  - Consider the part before the OP_SEPARATOR:
    - if (number of OP_IF) + (number of OP_NOTIF) > (number of OP_ENDIF): fail
    - Otherwise execute it as normal: if it fails, fail.
- Succeed the script
This insulates a prefix from OP_SUCCESSx, but care has to be taken that it is a complete script fragment: a future OP_SUCCESSx definition must not turn an invalid script into a valid one (by revealing an OP_ENDIF which would make the script valid).
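A hedged Python sketch of that counting rule, operating on an already-tokenized script prefix (tokenization itself is elided; the function name is mine):

def successx_prefix_ok(prefix_ops):
    # prefix_ops: opcode names appearing before OP_SEPARATOR.
    # Per the rule above: more OP_IF/OP_NOTIF than OP_ENDIF means fail.
    opens = sum(op in ("OP_IF", "OP_NOTIF") for op in prefix_ops)
    return opens <= prefix_ops.count("OP_ENDIF")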
Summary
I’ve tried to look at what it would take to make generic covenants in Script: ones which can meaningfully interrogate spending conditions, assuming some way (e.g. OP_TXHASH) of accessing an output’s script. There are reasons to believe this is desirable (beyond a completeness argument): vaulting in particular requires this.
We need three new Script opcodes: I’ve proposed OP_MULTISHA256, OP_KEYADDTWEAK and OP_LESS, and a (soft-fork) revision to treatment of OP_SUCCESSx. None of these are grossly complex.
The resulting scripts are quite long (and mine are untested and no doubt buggy!). It’s 41 bytes to hash a tapleaf, 19 to combine two tapleaves, 8 to compare the result to the scriptpubkey. That’s at least 109 witness weight to do a vault, and in addition you need to feed it the script you’re using for the output. That seems expensive, but not unreasonable: if this were to become common then new opcodes could combine several of these steps.
I haven’t thought hard about the general applicability of these opcodes, so there may be variants which are better when other uses are taken into account.
Thanks for reading!
October 19, 2023 01:30 PM
October 13, 2023
The schedule for when the miniconferences and tracks are going to occur is now posted at: https://lpc.events/event/17/timetable/#all
The Linux Plumbers Refereed track schedule is now available at: https://lpc.events/event/17/timetable/#all.detailed
The runners for the miniconferences and kernel summit will be adding more details to each of their schedules over the coming weeks, as will the leads for the networking and toolchain tracks.
For those registered as in-person attendees, you are free to continue to submit Birds of a Feather (BOF) sessions. They will be allocated space in the BOF rooms on a first come, first served basis. Please note that these BOFs will not be recorded.
We’re looking forward to a great 3 days of presentations and discussions. We hope you can join us either in-person or virtually!
October 13, 2023 03:29 AM
October 12, 2023
The Free Software Foundation Europe and the Software Freedom Conservancy recently released a statement that they would no longer work with Eben Moglen, chairman of the Software Freedom Law Center. Eben was the general counsel for the Free Software Foundation for over 20 years, and was centrally involved in the development of version 3 of the GNU General Public License. He's devoted a great deal of his life to furthering free software.
But, as described in the joint statement, he's also acted abusively towards other members of the free software community. He's acted poorly towards his own staff. In a professional context, he's used graphically violent rhetoric to describe people he dislikes. He's screamed abuse at people attempting to do their job.
And, sadly, none of this comes as a surprise to me. As I wrote in 2017, after it became clear that Eben's opinions diverged sufficiently from the FSF's that he could no longer act as general counsel, he responded by threatening an FSF board member at an FSF-run event (various members of the board were willing to tolerate this, which is what led to me quitting the board). There's over a decade's evidence of Eben engaging in abusive behaviour towards members of the free software community, be they staff, colleagues, or just volunteers trying to make the world a better place.
When we build communities that tolerate abuse, we exclude anyone unwilling to tolerate being abused[1]. Nobody in the free software community should be expected to deal with being screamed at or threatened. Nobody should be afraid that they're about to have their sexuality outed by a former boss.
But of course there are some that will defend Eben based on his past contributions. There were people who were willing to defend Hans Reiser on that basis. We need to be clear that what these people are defending is not free software - it's the right for abusers to abuse. And in the long term, that's bad for free software.
[1] "Why don't people just get better at tolerating abuse?" is a terrible response to this. Why don't abusers stop abusing? There's fewer of them, and it should be easier.
comments
October 12, 2023 04:32 PM
September 23, 2023
Now that the MC selection process is finished, we’ve recovered enough passes to reopen general registration. If you still wish to register, please go to our Attend page.
Hopefully we recovered enough passes to keep registration open for a couple of weeks, if not longer, but please don’t wait …
September 23, 2023 06:57 PM
September 14, 2023
At least two people have contacted me concerning the 2 BTC bounty:
2 BTC for a human-readable bolt 12 offer generator feature integrated into a popular iOS or android bitcoin wallet. “Human-readable” means something that can be used on feature phone without QR or copy/paste ability. For example, something that looks like LN address.
This, of course, is asking to solve Zooko’s Triangle, so one of decentralization, human readability, or security needs to be compromised! Fortunately, the reference to LN address gives a hint on how we might proceed.
The scenario, presumably, is Bob wants to pay Alice, where Alice shows Bob a “Human Readable Offer” and Bob types it into his phone. Each one runs Phoenix, Greenlight, or (if their phone is too low-end) uses some hosted service, but any new third party trust should be minimized.
There are three parts we need here:
- Bob finds Alice’s node.
- Bob requests an invoice from Alice’s node.
- If she wants, Alice can easily check Bob’s going to pay the right thing.
The Imagined Scenario
Consider the normal offer case: the offer encodes Alice’s nodeid and description (and maybe other info) about what’s on offer. Bob turns this into an invoice_request, sends an onion message to Alice’s node, which returns the (signed) invoice, which Bob pays. We need to encode that nodeid and extra information as compactly as we can.
Part 1: Finding Alice’s Node from a Human Readable Offer
The issue of “finding Alice’s node” has been drafted already for BOLT12, at https://github.com/rustyrussell/bolt12address (but it needs updating!). This means that if you say “rusty@blockstream.com” you can get a valid generic offer, either by contacting the webserver at “blockstream.com” or having someone else do it for you (important for privacy!), or even downloading a public list of common receivers.
Note that it’s easier to type * than @ on feature phones, so I suggest allowing both rusty@blockstream.com and RUSTY*BLOCKSTREAM.COM.
What’s Needed On The Server
- The BOLT 12 Address Format needs to be updated.
- It needs to be implemented for some Web server.
- Ideally, integrate it into BTC Payserver or the like.
Part 2: Getting the Invoice
Now, presumably, we want a specific invoice: it might be some default “donate to Alice”, but it could be a specific thing (“$2 hot dog”). So you really want some (short!) short-code to indicate which invoice you want. I suggest a hash (the # character), followed by some randomly chosen alphanumeric string (case-insensitive!): an implementation may choose to restrict itself to numbers, however, as that’s faster to enter on a feature phone.
What’s Needed On The Server
- We can put the short-code in the invreq_payer_note field in BOLT 12, or add a new odd field.
- We need to implement (presumably in Core Lightning):
- A way to specify/assign a short-code for each offer.
- A way of serving a particular invoice based on this short-code match.
Part 3: Checking the Invoice
So, did you even get the right node id? That’s the insecure part; you’re trusting blockstream.com! Checking the nodeid is hard: someone can grind out a nodeid with the same first 16 digits in a few weeks. But I think you can provide some assurance by creating a 4-color “flag” using the node id and the latest bitcoin blocks: this will change with every new block, and is comparable between Alice and Bob at a glance:

This was made using this hacky code which turns my node id 024b9a1fa8e006f1e3937f65f66c408e6da8e1ca728ea43222a7381df1cc449605 into an RGB color (by hashing the nodeid+blockhash).
For a moment, when a new block comes in, one image might be displaced (hence the number), but it’ll only be out by one.
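The linked hacky code is the real thing; as a hedged illustration of the idea only, one color of such a flag could be derived like this:

import hashlib

def nodeid_flag_color(node_id_hex, block_hash_hex):
    # Hash the nodeid together with a recent block hash and use the
    # first three bytes as an (R, G, B) color.
    digest = hashlib.sha256(bytes.fromhex(node_id_hex)
                            + bytes.fromhex(block_hash_hex)).digest()
    return digest[0], digest[1], digest[2]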
Putting it All Together
What’s Needed On Alice’s Client
- Alice needs to configure her BOLT12 Address with some provider when she sets up the phone: it should check that it works!
- She should be able to choose an existing offer (may be a “donation” by default), or create a new one on the fly (with a new short code).
- Display the BOLT12-ADDRESS # SHORT-CODE, and the current nodeid flag.
What’s Needed On Bob’s Client
- It needs to be able to convert BOLT12-ADDRESS into a bolt12 address request:
  - Either via some service (to be implemented!), or by direct query (ideally over Tor).
- It needs to be able to produce an offer from the returned bolt12 address response, by putting the SHORT-CODE into the invreq_payer_note.
- It needs to be able to fetch an invoice for this offer.
- It needs to be able to display the current nodeid flag for the invoice’s node id.
- Allow Bob to confirm to send payment.
Is There Anything Else?
There are probably other ways of doing this, but this method has the advantage of driving maturity in several different areas which we want to see in Bitcoin:
- bolt12 address to support vendor field validation for offers.
- Simple name support for bootstrapping.
- Driving Bitcoin to be more accessible to everyone!
Feel free to contact me with questions!
September 14, 2023 02:30 PM
September 13, 2023
TPMs contain a set of registers ("Platform Configuration Registers", or PCRs) that are used to track what a system boots. Each time a new event is measured, a cryptographic hash representing that event is passed to the TPM. The TPM appends that hash to the existing value in the PCR, hashes that, and stores the final result in the PCR. This means that while the PCR's value depends on the precise sequence and value of the hashes presented to it, the PCR value alone doesn't tell you what those individual events were. Different PCRs are used to store different event types, but there are still more events than there are PCRs so we can't avoid this problem by simply storing each event separately.
This is solved using the event log. The event log is simply a record of each event, stored in RAM. The algorithm the TPM uses to calculate the PCR values is known, so we can reproduce that by simply taking the events from the event log and replaying the series of events that were passed to the TPM. If the final calculated value is the same as the value in the PCR, we know that the event log is accurate, which means we now know the value of each individual event and can make an appropriate judgement regarding its security.
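As a minimal sketch of that replay for a SHA-256 PCR bank (assuming a PCR that starts at all zeroes; localities, discussed in the footnote, change that initial value):

import hashlib

def replay_pcr(event_digests, initial=b"\x00" * 32):
    # PCR := SHA256(PCR || event digest), applied in log order.
    pcr = initial
    for digest in event_digests:
        pcr = hashlib.sha256(pcr + digest).digest()
    return pcr

# Trust the log only if replay_pcr(digests) equals the value read from the TPM.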
If any value in the event log is invalid, we'll calculate a different PCR value and it won't match. This isn't terribly helpful - we know that at least one entry in the event log doesn't match what was passed to the TPM, but we don't know which entry. That means we can't trust any of the events associated with that PCR. If you're trying to make a security determination based on this, that's going to be a problem.
PCR 7 is used to track information about the secure boot policy on the system. It contains measurements of whether or not secure boot is enabled, and which keys are trusted and untrusted on the system in question. This is extremely helpful if you want to verify that a system booted with secure boot enabled before allowing it to do something security or safety critical. Unfortunately, if the device gives you an event log that doesn't replay correctly for PCR 7, you now have no idea what the security state of the system is.
We ran into that this week. Examination of the event log revealed an additional event other than the expected ones - a measurement accompanied by the string "Boot Guard Measured S-CRTM". Boot Guard is an Intel feature where the CPU verifies the firmware is signed with a trusted key before executing it, and measures information about the firmware in the process. Previously I'd only encountered this as a measurement into PCR 0, which is the PCR used to track information about the firmware itself. But it turns out that at least some versions of Boot Guard also measure information about the Boot Guard policy into PCR 7. The argument for this is that this is effectively part of the secure boot policy - having a measurement of the Boot Guard state tells you whether Boot Guard was enabled, which tells you whether or not the CPU verified a signature on your firmware before running it (as I wrote before, I think Boot Guard has user-hostile default behaviour, and that enforcing this on consumer devices is a bad idea).
But there's a problem here. The event log is created by the firmware, and the Boot Guard measurements occur before the firmware is executed. So how do we get a log that represents them? That one's fairly simple - the firmware simply re-calculates the same measurements that Boot Guard did and creates a log entry after the fact[1]. All good.
Except. What if the firmware screws up the calculation and comes up with a different answer? The entry in the event log will now not match what was sent to the TPM, and replaying will fail. And without knowing what the actual value should be, there's no way to fix this, which means there's no way to verify the contents of PCR 7 and determine whether or not secure boot was enabled.
But there's still a fundamental source of truth - the measurement that was sent to the TPM in the first place. Inspired by Henri Nurmi's work on sniffing Bitlocker encryption keys, I asked a coworker if we could sniff the TPM traffic during boot. The TPM on the board in question uses SPI, a simple bus that can have multiple devices connected to it. In this case the system flash and the TPM are on the same SPI bus, which made things easier. The board had a flash header for external reprogramming of the firmware in the event of failure, and all SPI traffic was visible through that header. Attaching a logic analyser to this header made it simple to generate a record of that. The only problem was that the chip select line on the header was attached to the firmware flash chip, not the TPM. This was worked around by simply telling the analysis software that it should invert the sense of the chip select line, ignoring all traffic that was bound for the flash and paying attention to all other traffic. This worked in this case since the only other device on the bus was the TPM, but would cause problems in the event of multiple devices on the bus all communicating.
With the aid of this analyser plugin, I was able to dump all the TPM traffic and could then search for writes that included the "0182" sequence that corresponds to the command code for a measurement event. This gave me a couple of accesses to the locality 3 registers, which was a strong indication that they were coming from the CPU rather than from the firmware. One was for PCR 0, and one was for PCR 7. This corresponded to the two Boot Guard events that we expected from the event log. The hash in the PCR 0 measurement was the same as the hash in the event log, but the hash in the PCR 7 measurement differed from the hash in the event log. Replacing the event log value with the value actually sent to the TPM resulted in the event log now replaying correctly, supporting the hypothesis that the firmware was failing to correctly reconstruct the event.
What now? The simple thing to do is for us to simply hard code this fixup, but longer term we'd like to figure out how to reconstruct the event so we can calculate the expected value ourselves. Unfortunately there doesn't seem to be any public documentation on this. Sigh.
[1] What stops firmware on a system with no Boot Guard faking those measurements? TPMs have a concept of "localities", effectively different privilege levels. When Boot Guard performs its initial measurement into PCR 0, it does so at locality 3, a locality that's only available to the CPU. This causes PCR 0 to be initialised to a different initial value, affecting the final PCR value. The firmware can't access locality 3, so can't perform an equivalent measurement, so can't fake the value.
comments
September 13, 2023 09:02 PM
September 03, 2023
Linux Plumbers is now sold out and in-person registration is closed.
This year it didn’t happen as fast as in 2022, but registration still sold out long before the event.
We are setting up a waitlist for in-person registration (virtual attendee places are still available). Please fill in this form and try to be clear about your reasons for wanting to attend. This year we’re giving waitlist priority to new attendees and people expected to contribute content.
September 03, 2023 10:00 AM
The Containers and Checkpoint/Restore micro-conference focuses on both userspace and kernel related work. The micro-conference targets the wider container ecosystem ideally with participants from all major container runtimes as well as init system developers.
The microconference will be discussing recent advancements in container technologies with some of the usual candidates being:
- CGroupV2 feature parity with CGroupV1
- Emulation of various files and system calls through FUSE and/or Seccomp
- Dealing with the eBPF-ification of the world
- Making user namespaces more accessible
- VFS idmap improvements
On the checkpoint/restore front, some of the potential topics include:
- Restoring FUSE services
- Handling GPUs
- Dealing with restartable sequences
And quite likely a variety of other container and checkpoint/restore topics as things evolve between now and the event.
Past editions of this micro-conference have been the source of many developments in the Linux kernel, including:
- PIDfds
- VFS idmap
- FUSE in user namespace
- Unprivileged overlayfs
- Time namespace
- A variety of CRIU features and checkpoint/restore kernel interfaces
Use the LPC abstract submission page to submit your proposals and select the “Containers and Checkpoint/Restart” track.
September 03, 2023 09:26 AM
September 01, 2023
Sriram invited me to the oneAPI meetup, and I felt I hadn't summed up the state of compute and community development in a while. Enjoy 45 minutes of opinions!
https://www.youtube.com/watch?v=HzzLY5TdnZo
September 01, 2023 07:12 PM
August 31, 2023
The Power Management and Thermal Control microconference focuses on power management and thermal control infrastructure, CPU and device power-management mechanisms, and thermal control methods.
In particular, we are interested in improving the thermal control infrastructure in the kernel to cover more use cases and utilizing energy-saving opportunities offered by modern hardware in new ways.
The goal is to facilitate cross-framework and cross-platform discussions that can help improve energy-awareness and thermal control in Linux.
The current list of topics proposed so far includes the following:
August 31, 2023 11:37 AM
August 29, 2023
Work involves supporting Windows (there's a lot of specialised hardware design software that's only supported under Windows, so this isn't really avoidable), but also involves git, so I've been working on extending our support for hardware-backed SSH certificates to Windows and trying to glue that into git. In theory this doesn't sound like a hard problem, but in practice oh good heavens.
Git for Windows is built on top of msys2, which in turn is built on top of Cygwin. This is an astonishing artifact that allows you to build roughly unmodified POSIXish code on top of Windows, despite the terrible impedance mismatches inherent in this. One is that until 2017, Windows had no native support for Unix sockets. That's kind of a big deal for compatibility purposes, so Cygwin worked around it. It's, uh, kind of awful. If you're not a Cygwin/msys app but you want to implement a socket they can communicate with, you need to implement this undocumented protocol yourself. This isn't impossible, but ugh.
But going to all this trouble helps you avoid another problem! The Microsoft version of OpenSSH ships an SSH agent that doesn't use Unix sockets, but uses a named pipe instead. So if you want to communicate between Cygwinish OpenSSH (as is shipped with git for Windows) and the SSH agent shipped with Windows, you need something that bridges between those. The state of the art seems to be to use npiperelay with socat, but if you're already writing something that implements the Cygwin socket protocol you can just use npipe to talk to the shipped ssh-agent and then export your own socket interface.
And, amazingly, this all works? I've managed to hack together an SSH agent (using Go's SSH agent implementation) that can satisfy hardware backed queries itself, but forward things on to the Windows agent for compatibility with other tooling. Now I just need to figure out how to plumb it through to WSL. Sigh.
comments
August 29, 2023 06:57 AM
Confidential Computing continues to be a popular topic in the computing industry. From memory encryption to trusted I/O, hardware has been constantly improving and broadening. In the past years, confidential computing microconferences have brought together developers working on various features in hypervisors, firmware, the Linux kernel, and low-level userspace up to container runtimes. We have discussed a broad range of topics, ranging from hardware enablement to generic attestation workflows.
Just in the last year, we have seen support for Intel TDX and AMD SEV-SNP guests merged into Linux. Support for unaccepted memory has also landed in mainline. We have also had support for running as a CVM under Hyper-V partially merged into the kernel. However, there is still a long way to go before a complete Confidential Computing stack with open source software and Linux as the hypervisor becomes a reality. We invite contributions to this microconference to help make progress to that goal.
Topics of interest include
Please use the LPC CfP process to submit your proposals. Submissions can be made via the LPC abstract submission page. Make sure to select “Confidential Computing MC” as the track.
August 29, 2023 04:48 AM
August 23, 2023
The IoT Microconference is a forum for developers to discuss all things IoT. Topics include tools, telemetry, device drivers, and protocols in not only the Linux kernel but also Real-Time Operating Systems such as Zephyr.
Since last year, there have been a number of new technical topics with significant updates.
- Opportunities in IoT and Edge computing with the Linux /dev/accel API
- Using the Thrift RPC framework between Linux and Zephyr
- Zephyr’s new HTTP Server (a GSoC project)
- Rust in the Zephyr RTOS: Benefits, Challenges and Missing Pieces
- BeagleConnect Freedom Updates, Greybus, and the Linux Interface
- Linux-wpan updates on 6lowpan, 802.15.4 PAN coordinators and UWB
Current Problems that require attention (stakeholders):
- IEEE 802.15.4 SubGHz improvement areas in Zephyr, Linux (Florian Grandel, Stefan Schmidt, BeagleBoard.org)
- WpanUSB upstreaming in the Linux kernel, potentially dropping Zephyr support (Andrei Emeltchenko, BeagleBoard.org)
- IEEE 802.15.4 Linux subsystem device association handling (Miquel Raynal, Alexander Aring, Stefan Schmidt)
- Zephyr potentially dropping Bluetooth IPSP?
On a slightly less technical topic.
- Reflections after Two Years of Zephyr LTSv2
We are pleased to announce that the IoT Microconference is now accepting proposals!
If you are interested in presenting an IoT-related topic involving the Linux kernel, userspace tools, firmware, Zephyr, or frameworks, please upload your submission before September 15th.
Submissions can be made via the LPC Call for Proposals, by selecting Internet of Things MC for your track.
August 23, 2023 02:53 PM
August 21, 2023
On behalf of the PCI sub-system maintainers, we would like to invite everyone to join the VFIO/IOMMU/PCI micro-conference (MC) this year.
We are hoping to bring together, both in person and online, everyone interested in the VFIO, IOMMU, and PCI space to talk about the latest developments and challenges in these areas.
The PCI interconnect specification, the devices that implement it, and the system IOMMUs that provide memory and access control to them are nowadays a de-facto standard for connecting high-speed components, incorporating more and more features such as:
These features are aimed at high-performance systems, server and desktop computing, embedded and SoC platforms, virtualisation, and ubiquitous IoT devices.
The kernel code that enables these new system features focuses on coordination between the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device passthrough) with related kernel interfaces and userspace APIs to be designed in-sync and in a clean way for all three sub-systems.
The VFIO/IOMMU/PCI MC focuses on the kernel code that enables these new system features, often requiring coordination between the VFIO, IOMMU and PCI sub-systems.
Following the success of LPC 2017, 2019, 2020, 2021, and 2022 VFIO/IOMMU/PCI MC, the Linux Plumbers Conference 2023 VFIO/IOMMU/PCI track will focus on promoting discussions on the PCI core but also current kernel patches aimed at VFIO/IOMMU/PCI sub-systems with specific sessions targeting discussions requiring the three sub-systems coordination.
See the following video recordings from 2022: LPC 2022 – VFIO/IOMMU/PCI MC
Older recordings can be accessed through our official YouTube channel at @linux-pci and the archived LPC 2017 VFIO/IOMMU/PCI MC web page at Linux Plumbers Conference 2017, where the audio recordings from the MC track and links to presentation materials are available.
The tentative schedule will provide an update on the current state of VFIO/IOMMU/PCI kernel sub-systems, followed by a discussion of current issues in the proposed topics.
Following last year's successful MC, tentative topics that are under consideration for this year include (but are not limited to):
- PCI
- VFIO
- Write-combine on non-x86 architectures
- I/O Page Fault (IOPF) for passthrough devices
- Shared Virtual Addressing (SVA) interface
- Single-root I/O Virtualization (SRIOV)/Process Address Space ID (PASID) integration
- PASID in SRIOV virtual functions
- Device assignment/sub-assignment
- IOMMU
- /dev/iommufd development
- IOMMU virtualisation
- IOMMU drivers SVA interface
- DMA-API layer interactions and the move towards generic dma-ops for IOMMU drivers
- Possible IOMMU core changes (e.g., better integration with the device-driver core, etc.)
If you are interested in participating in this MC and have topics to propose, please use the Call for Proposals (CfP) process.
Otherwise, join us to discuss helping Linux keep up with the new features added to the PCI interconnect specification. We hope to see you there!
Proposals can be submitted here by selecting the track "VFIO/IOMMU/PCI MC"
August 21, 2023 08:41 AM
August 17, 2023
The Linux kernel has grown in complexity over the years. Complete understanding of how it works via code inspection has become virtually impossible. Today, tracing is used to follow the kernel as it performs its complex tasks. Tracing is used today for much more than simply debugging. Its framework has become the way for other parts of the Linux kernel to enhance and even make possible new features. Live kernel patching is based on the infrastructure of function tracing, as well as BPF function hooks. It is now even possible to model the behavior and correctness of the system via runtime verification which attaches to trace points. There is still much more that is happening in this space, and this microconference will be the forum to explore current and new ideas.
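To make the function-tracing hooks concrete, here is a minimal sketch of registering an ftrace callback from a module - the same infrastructure that live kernel patching builds on. The traced function name is just an example, and this is a sketch rather than production code:

#include <linux/ftrace.h>
#include <linux/module.h>
#include <linux/string.h>

static void notrace my_trace_cb(unsigned long ip, unsigned long parent_ip,
                                struct ftrace_ops *ops,
                                struct ftrace_regs *fregs)
{
    /* Called on entry to every function matched by the filter below. */
    trace_printk("entered %ps from %ps\n", (void *)ip, (void *)parent_ip);
}

static struct ftrace_ops my_ops = {
    .func = my_trace_cb,
};

static int __init my_init(void)
{
    /* Hook a single function; "do_sys_openat2" is just an example. */
    ftrace_set_filter(&my_ops, "do_sys_openat2", strlen("do_sys_openat2"), 0);
    return register_ftrace_function(&my_ops);
}

static void __exit my_exit(void)
{
    unregister_ftrace_function(&my_ops);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");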
Results and accomplishments from the last Tracing microconference (2021):
- User events were introduced, and have finally made it into the kernel.
- The discussion around trace events that handle user faults initiated the event probe work as a way around the problem: probes are added on existing trace events to change their types. This works with synthetic events, which can pass the user space file name from the entry of a system call to its exit, where the file will have been faulted in, making it available to the trace event.
- Dynamically creating the events directory with the eventfs patch set is queued to be accepted. This will save memory as the dentries and inodes will only be allocated when accessed.
- The discussion about function tracing with arguments has helped inspire both fprobes and function graph return value tracing.
- There’s still ongoing effort in unifying the return path tracers of function graph and kretprobes and fprobes.
Possible ideas for topics for this year’s conference:
- Use of sframes. How to get user space stack traces without requiring frame pointers.
- Updating perf and ftrace to extract user space stack frames from a schedulable context, since a request made in NMI context cannot safely fault in user pages.
- Extending user events. Now that they are in the kernel, how to make them more accessible to users and applications.
- Getting more use cases with the runtime verifier. Now that the runtime verifier is in the kernel (uses tracepoints to model against), what else can it be used for.
- Wider use of ftrace_regs in fprobes and in the rethook that fprobes uses, since rethook may not fill in all the registers of pt_regs either. How BPF handles this will also be discussed.
- Removing kretprobes from kprobes so that kprobes can focus on handling software breakpoints.
- Object tracing (following a variable throughout each function call). This has had several patches out, but has stopped due to hard issues to overcome. A live discussion could possibly come up with a proper solution.
- Hardware breakpoints and tracing memory changes. Object tracing follows a variable when it changes between function calls. But if the hardware supports it, tracing a variable when it actually changes would be more useful, albeit more complex. Discussion around this may come up with an easier answer.
- MMIO tracer being used in SMP. Currently the MMIO tracer does not handle race conditions. Instead, it offlines all but one CPU when it is enabled. It would be great if this could be used in normal SMP environments. There’s nothing technically preventing that from happening. It only needs some clever thinking to come up with a design to do so.
- Getting perf counters onto the ftrace ring buffer. Ftrace is designed for fast tracing, and perf is a great profiler. Over the years, people have asked for perf counters alongside ftrace trace events. Perhaps it's time to finally accomplish that; for example, each function could show its perf cache misses.
For more information, feel free to contact the MC Leads. Please follow the suggestions from this BLOG post when submitting a CFP for this track.
August 17, 2023 01:09 PM
August 09, 2023
Once again, the Kernel Testing & Dependability Micro-conference will be taking place at LPC 2023, to discuss testing and dependability related topics.
Please submit proposals for discussion via LPC submission system.
The Linux Plumbers 2023 Kernel Testing & Dependability track focuses on advancing the current state of testing of the Linux Kernel and its related infrastructure. The main purpose is to improve software quality and dependability for applications that require predictability and trust.
The goal of this micro-conference is to make connections between folks working on similar projects and to help individual projects make progress.
This track is intended to promote collaboration between all the communities and people interested in kernel testing & dependability, and to help move the conversation forward from where we left off at the LPC 2022 Kernel Testing & Dependability MC.
We ask that any topic discussion focus on the issues/problems submitters are facing and possible alternatives for resolving them. The Micro-conference is open to all topics related to testing on Linux, not necessarily in the kernel space.
Suggested topics:
- KernelCI: Topics on improvements and enhancements for test coverage
- Growing KCIDB, integrating more sources
- Sanitizers
- Using Clang for better testing coverage
- How to spread KUnit throughout the kernel? (a minimal suite is sketched after this list)
- Building and testing in-kernel Rust code
- Explore ways to improve the testing framework and tests in the kernel, with a specific goal to increase traceability and code coverage
- Explore how SBOMs figure into dependability
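As a point of reference for the KUnit question above, a complete suite is only a handful of lines; this is a minimal sketch with placeholder names:

#include <kunit/test.h>

static void example_add_test(struct kunit *test)
{
    /* EXPECT macros record failures but let the test case continue. */
    KUNIT_EXPECT_EQ(test, 4, 2 + 2);
}

static struct kunit_case example_test_cases[] = {
    KUNIT_CASE(example_add_test),
    {}
};

static struct kunit_suite example_test_suite = {
    .name = "example",
    .test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);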
List of accomplishments this past year after LPC 2022:
- Developed a new, modern API for KernelCI with Pub/Sub interface
- Added Rust coverage in KernelCI
- KCIDB is continuing to gather results from many test systems: KernelCI, Red Hat’s CKI, syzbot, ARM, Gentoo, Linaro’s TuxSuite etc. The current focus is on generating common email reports based on this data and dealing with known issues.
- KFENCE is continuing to aid in detecting out-of-bounds (OOB) accesses, use-after-free (UAF) errors, double frees, invalid frees, and so on.
- Clang: CFI, weeding out issues upstream, etc.
- Kselftest continues to add coverage for new and existing features and subsystems.
- KUnit is continuing to act as the standard for some drivers and a de facto unit testing framework in the kernel
- The Runtime Verification (RV) interface from Daniel Bristot de Oliveira was merged.
Proposals can be submitted here, by August 20th.
MC leads can be reached for questions and further information:
Shuah Khan <shuah@kernel.org>
Sasha Levin <sashal@kernel.org>
Guillaume Tucker <guillaume.tucker@collabora.com>
August 09, 2023 03:40 PM
After a three-year hiatus, the Live Patching Microconference is back for 2023.
Accomplishments post 2019 Microconference:
- API enhancements: system state change tracking was added for livepatch pre/post (un)patch callbacks, enhancing the safety of cumulative livepatch upgrades [v5.5]
- KLP-relocations: To facilitate module_disable_ro() removal, arch-specific livepatch .klp.arg sections were deprecated. Special arch section KLP-relocations (like x86 jump labels) are still supported for vmlinux cases, and are now applied at the same time as normal relocations. [v5.8]
- Documentation: Practical information on how to implement reliable stacktraces needed by the livepatching consistency model was added [v5.12]
- Architecture: Implemented Power32 support [v5.18]
- KLP-relocations: To support target module reloading, clear KLP-relocations in livepatch modules when their target module is unloaded. This satisfies a module loader sanity check when resolving relocations on the next target module load (x86_64 only) [v6.3]
Discussion Topics
The following topics have been proposed:
- Shadow variables are considered a livepatching power-feature that can require careful management, especially across livepatch upgrades and downgrades. Is garbage collection or a refactoring of callbacks a better approach to manage these resources? (A minimal shadow-variable sketch follows this list.)
- klp-relocations were originally introduced to resolve livepatch / kernel and module symbol scoping issues. Recent security features like CET and IBT suggest another use case and renewed interest in having an in-tree klp-relocation build support. Is a simple conversion utility sufficient, or does said tool require greater features?
- The livepatching kselftests consist of test scripts under tools/testing/selftests and associated livepatch module code in lib/. Consolidating these under the former offers better flexibility in templating the livepatch modules as well as the benefits of building them out-of-tree. Are there any outstanding blockers to implement these changes?
- arm64 support is moving forward on several fronts: toolchain, reliable stack unwinding, user space, etc. The Toolchains MC plans to address topics like CFG in ELF and handling of noinstr functions. What issues remain in livepatching and the kernel at large to fully support arm64?
- Rust looks to be a hot topic at this year’s LPC. Its impact on kernel livepatching is relatively open ended as Rust code has only recently been merged in small parts. That said, which features, problems, patchsets should we be paying attention to as we all learn more about this newly supported kernel language?
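For the shadow-variable topic above, here is a minimal sketch of the in-tree klp_shadow API; struct parent, the ID value, and the surrounding functions are hypothetical stand-ins for whatever object a livepatch needs to extend:

#include <linux/livepatch.h>
#include <linux/slab.h>

#define SV_COUNTER 1            /* arbitrary shadow variable ID (hypothetical) */

struct parent;                  /* stand-in for the object being livepatched */

/* Attach new per-object state without changing the object's layout. */
static int patched_counter_init(struct parent *obj)
{
    int *count = klp_shadow_alloc(obj, SV_COUNTER, sizeof(*count),
                                  GFP_KERNEL, NULL, NULL);
    return count ? 0 : -ENOMEM;
}

static int patched_counter_bump(struct parent *obj)
{
    int *count = klp_shadow_get(obj, SV_COUNTER);

    if (!count)
        return -ENOENT;
    return ++(*count);
}

/* This is the management question above: who calls this, and when? */
static void patched_counter_release(struct parent *obj)
{
    klp_shadow_free(obj, SV_COUNTER, NULL);
}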
These potential discussion topics were selected from ongoing livepatching mailing list threads, but additional livepatching-related topics are welcome for consideration as well. For ideas on what makes for an ideal Microconference topic, check out this post.
August 09, 2023 03:38 PM
August 08, 2023
I dug out a computer running Fedora 28, which was released 2018-04-01 - over 5 years ago. Backing up the data and re-installing seemed tedious, but the current version of Fedora is 38, and since Fedora supports updates from N to N+2, that was still going to be 5 separate upgrades. That seemed even more tedious, so I figured I'd just try an update from 28 directly to 38. This is, obviously, extremely unsupported, but what could possibly go wrong?
Running sudo dnf system-upgrade download --releasever=38 didn't successfully resolve dependencies, but sudo dnf system-upgrade download --releasever=38 --allowerasing passed, and dnf started downloading 6GB of packages. And then promptly failed, since I didn't have any of the relevant signing keys. So I downloaded the fedora-gpg-keys package from F38 by hand and tried to install it, and got a "signature hdr data: BAD, no. of bytes(88084) out of range" error. It turns out that rpm doesn't handle cases where the signature header is larger than a few K, and RPMs from modern versions of Fedora exceed that. The obvious fix would be to install a newer version of rpm, but that wouldn't be easy without upgrading the rest of the system as well - or, alternatively, downloading a bunch of build depends and building it. Given that I'm already doing all of this in the worst way possible, let's do something different.
The relevant code in the hdrblobRead function of rpm's lib/header.c is:
int32_t il_max = HEADER_TAGS_MAX;
int32_t dl_max = HEADER_DATA_MAX;
if (regionTag == RPMTAG_HEADERSIGNATURES) {
il_max = 32;
dl_max = 8192;
}
which indicates that if the header in question is RPMTAG_HEADERSIGNATURES, it sets more restrictive limits on the size (no, I don't know why). So I installed rpm-libs-debuginfo, ran gdb against librpm.so.8, loaded the symbol file, and then did disassemble hdrblobRead. The relevant chunk ends up being:
0x000000000001bc81 <+81>: cmp $0x3e,%ebx
0x000000000001bc84 <+84>: mov $0xfffffff,%ecx
0x000000000001bc89 <+89>: mov $0x2000,%eax
0x000000000001bc8e <+94>: mov %r12,%rdi
0x000000000001bc91 <+97>: cmovne %ecx,%eax
which is basically "If ebx is not 0x3e, set eax to 0xfffffff - otherwise, set it to 0x2000". RPMTAG_HEADERSIGNATURES is 62, which is 0x3e, so I just opened librpm.so.8 in hexedit, went to byte 0x1bc81, and replaced 0x3e with 0xfe (an arbitrary invalid value). This has the effect of skipping the if (regionTag == RPMTAG_HEADERSIGNATURES) code and so using the default limits even if the header section in question is the signatures. And with that one byte modification, rpm from F28 would suddenly install the fedora-gpg-keys package from F38. Success!
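For the record, the hexedit step could equally be scripted. Here is a sketch of the same one-byte patch; the offset and the 83 fb 3e instruction bytes are specific to that exact F28 build of librpm.so.8, so this is illustration rather than a recipe:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("librpm.so.8", "r+b");
    unsigned char insn[3];

    if (!f || fseek(f, 0x1bc81, SEEK_SET) || fread(insn, 1, 3, f) != 3)
        return 1;
    /* Expect "cmp $0x3e,%ebx" (83 fb 3e) -- bail out on any other build. */
    if (insn[0] != 0x83 || insn[1] != 0xfb || insn[2] != 0x3e)
        return 1;
    insn[2] = 0xfe;             /* any value != RPMTAG_HEADERSIGNATURES (0x3e) */
    fseek(f, 0x1bc81, SEEK_SET);
    fwrite(insn, 1, 3, f);
    return fclose(f);
}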
But the success was short-lived. dnf now believed packages had valid signatures, but sadly there were still issues. A bunch of packages in F38 had files that conflicted with packages in F28. These were largely Python 3 packages that conflicted with Python 2 packages from F28 - jumping this many releases meant that a bunch of explicit replaces and the like no longer existed. The easiest way to solve this was simply to uninstall Python 2 before upgrading, avoiding the transition entirely. Another issue was that some data files had moved from libxcrypt-common to libxcrypt, and removing libxcrypt-common would remove libxcrypt and a bunch of important things that depended on it (like, for instance, systemd). So I built a fake empty package that provided libxcrypt-common and removed the actual package. Surely everything would work now?
Ha no. The final obstacle was that several packages depended on rpmlib(CaretInVersions), and building another fake package that provided that didn't work. I shouted into the void and Bill Nottingham answered - rpmlib dependencies are synthesised by rpm itself, indicating that it has the ability to handle extensions that specific packages are making use of. This made things harder, since the list is hard-coded in the binary. But since I'm already committing crimes against humanity with a hex editor, why not go further? Back to editing librpm.so.8 and finding the list of rpmlib() dependencies it provides. There were a bunch, but I couldn't really extend the list. What I could do is overwrite existing entries. I tried this a few times but (unsurprisingly) broke other things since packages depended on the feature I'd overwritten. Finally, I rewrote rpmlib(ExplicitPackageProvide) to rpmlib(CaretInVersions) (adding an extra '\0' at the end of it to deal with it being shorter than the original string) and apparently nothing I wanted to install depended on rpmlib(ExplicitPackageProvide) because dnf finished its transaction checks and prompted me to reboot to perform the update. So, I did.
And about an hour later, it rebooted and gave me a whole bunch of errors due to the fact that dbus never got started. A bit of digging revealed that I had no /etc/systemd/system/dbus.service, a symlink that was presumably introduced at some point between F28 and F38 but which didn't get automatically added in my case because well who knows. That was literally the only thing I needed to fix up after the upgrade, and on the next reboot I was presented with a gdm prompt and had a fully functional F38 machine.
You should not do this. I should not do this. This was a terrible idea. Any situation where you're binary patching your package manager to get it to let you do something is obviously a bad situation. And with hindsight performing 5 independent upgrades might have been faster. But that would have just involved me typing the same thing 5 times, while this way I learned something. And what I learned is "Terrible ideas sometimes work and so you should definitely act upon them rather than doing the sensible thing", so like I said, you should not do this in case you learn the same lesson.
August 08, 2023 05:54 AM
August 05, 2023
In the Linux ecosystems, there are many ways to build all the software used to put together a running system. Whether it’s building all the binary packages for a binary Linux distribution, using a source-based distribution, or building an embedded system from scratch, there are a lot of shared challenges which each system solves in its own way.
This microconference is a way to get people who work on disparate build systems to discuss common problems and possible shared solutions across the entire problem space. The kinds of topics we want to discuss are the following:
- Bootstrapping the build system
- Cross building software
- Make, autoconf, and other similar software build tools
- Package build systems, bitbake, emerge/portage, pacman, etc
- Packaging formats
- Managing software with language-specific package managers
- Patch sharing
- Building within a container
- Build systems for building containers
- License gathering and verification
- Security updates
- SBOMs
- Software chain-of-trust
- Repeatable builds
- Documentation and education
- Finding the next generation of maintainers
- Build-system visibility within the wider Plumbers attendees

This is not a definitive list, and you are free to post abstracts for other related topics.
The Build Systems microconference would like to gather representatives (developers and maintainers) from all the various build systems and related technologies. This is not a definitive list of possible attendees.
- Android
- Arch Linux
- Buildroot
- ChromeOS
- Gentoo
- OpenEmbedded
- OpenWRT/LEDE
- Yocto Project
- Other traditional Binary Packaged distributions
For more information, feel free to contact the MC Leads. Please follow the suggestions from this BLOG post when submitting a CFP for this track.
August 05, 2023 09:48 AM
August 04, 2023
The initial NVK (nouveau vulkan) experimental driver has been merged into mesa master[1]. Although there's lots of work to be done before it's application-ready, the main reason it was merged is that the initial kernel work it needs was merged into drm-misc-next[2] and will go to drm-next for the 6.6 merge window. (This work is separate from the GSP firmware enablement required for reclocking; that is a parallel development needed to make nvk usable.) Faith at Collabora will have a blog post about the Mesa side; this is more about the kernel journey.
What was needed in the kernel?
The nouveau kernel API was written 10 or more years ago, and was designed around the OpenGL of that era. Two major restrictions in the current uAPI made it unsuitable for Vulkan:
- buffer objects (physical memory allocations) were allocated 1:1 with virtual memory allocations for a file descriptor. This meant the kernel managed the virtual address space. For proper Vulkan support, the bo allocation and vm allocation have to be separate, and userspace should control the virtual address space.
- Command submission wasn't wired up to the modern sync objects, which are pretty much a requirement for Vulkan fencing and semaphores to work properly.
How to implement these?
When we kicked off the nvk idea, I made a first pass at implementing a new user API to allow the above features. I took a look at how GPU VMA management was done in current drivers and realized there was scope for a common component to manage the GPU VA space. I did a hacky implementation of some common code and a nouveau implementation. Luckily, at the time Danilo Krummrich had joined my team at Red Hat and needed more kernel development experience in GPU drivers. I handed my sketchy implementation to Danilo and let him run with it. He spent a lot of time learning and writing copious code. His GPU VA manager code was merged into drm-misc-next last week, and his nouveau code landed today.
What is the GPU VA manager?
The idea behind the GPU VA manager is that there is no need for every driver to implement something that should essentially not be a hardware specific problem. The manager is designed to track VA allocations from userspace, and keep track of what GEM objects they are currently bound to. The implementation went through a few twists and turns and experiments.
For a long period we considered using maple tree as the core of it, but we hit a number of messy interactions between dma-fence locking and the memory allocations required to add new nodes to the maple tree. The dma-fence critical section is a hard requirement that everything else has to deal with. In the end Danilo used an rbtree to track things. We will revisit whether we can use maple tree again in the future.
We had a long discussion, and a couple of implement-it-both-ways-and-see experiments, on whether we needed to track empty sparse VMA ranges in the manager or not. nouveau wanted these, but generically we weren't sure they were helpful, and they also affected the uAPI, which needed explicit operations to create/drop them. In the end we started tracking these in the driver and left the core VA manager cleaner.
Now the code is in tree we will start to push future drivers to use it instead of spinning their own.
What changes are needed for nouveau?
Now that the VAs are being tracked, the nouveau API needed two new entrypoints. Since BO allocation will no longer create a VM mapping, a new API is needed to bind BO allocations to VM addresses. This is called the VM_BIND API. It has two variants:
- a synchronous version that immediately maps a BO to a VM and is used for the common allocation paths.
- an asynchronous version that is modeled after the Vulkan sparse API, and takes in/out sync objects, which use the drm scheduler to schedule the vm/bo binding.
The VM BIND backend then does all the page table manipulation required.
The second API added was an EXEC call. This takes in/out sync objects and a set of addresses that point to command buffers to execute. This uses the drm scheduler to deal with the synchronization and hands the firmware the command buffer address to execute.
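To visualize the shape of the two entrypoints, here is an illustrative sketch; these are not the actual nouveau uAPI structs, just the information the new ioctls need to carry:

#include <linux/types.h>

/* NOT the real nouveau uAPI -- an illustrative sketch only. */
struct fake_vm_bind_op {
    __u64 addr;       /* GPU virtual address chosen by userspace */
    __u64 bo_handle;  /* GEM handle from the separate BO allocation */
    __u64 bo_offset;  /* offset into the BO */
    __u64 range;      /* size of the mapping */
    __u32 flags;      /* synchronous bind vs. async (sparse-style) bind */
};

struct fake_exec {
    __u64 in_syncs;   /* sync objects to wait on before running */
    __u64 out_syncs;  /* sync objects to signal on completion */
    __u64 push_addrs; /* GPU VAs of the command buffers to execute */
    __u32 push_count;
};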
Internally for nouveau this meant having to add support for the drm scheduler, adding new internal page table manipulation APIs, and wiring up the GPU VA.
Shoutouts:
My input was the sketchy sketch at the start, and doing the userspace changes to the nvk codebase to allow testing.
The biggest shoutout to Danilo, who took a sketchy sketch of what things should look like, created a real implementation, did all the experimental ideas I threw at him, and threw them and others back at me, negotiated with other drivers to use the common code, and built a great foundational piece of drm kernel infrastructure.
Faith at Collabora, who has done the bulk of the work on nvk, did a code review at the end and pointed out some missing pieces of the API and the optimisations it enables.
Karol at Red Hat on the main nvk driver and Ben at Red Hat for nouveau advice on how things worked, while he smashed away at the GSP rock.
(and anyone else who has contributed to nvk, nouveau and even NVIDIA for some bits :-)
[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24326
[2] https://cgit.freedesktop.org/drm-misc/log/
August 04, 2023 10:26 PM
August 02, 2023
August is now upon us, and the deadline for refereed track submissions is August 6, which is right around the corner. We have already received some excellent submissions, for which we gratefully thank our submitters!
For those thinking about submitting, please polish off your ideas, and point your browsers at the call-for-proposals page. Looking forward to your submissions.
Reminder: we've got a tight deadline to prepare the submissions for the LPC program committee to review. So, as communicated last year, we will not be extending the deadline this year; please submit by August 6th, anywhere on earth.
August 02, 2023 04:03 PM
July 30, 2023
LPC 2023 will host the second edition of the Rust MC. This microconference intends to cover talks and discussions on both Rust for Linux as well as other non-kernel Rust topics. Proposals can be submitted via LPC submission system, selecting the Rust MC track.
Rust is a systems programming language that is making great strides in becoming the next big one in the domain. Rust for Linux is the project adding support for the Rust language to the Linux kernel.
Rust has a key property that makes it very interesting as the second language in the kernel: it guarantees no undefined behavior takes place (as long as unsafe code is sound). This includes no use-after-free mistakes, no double frees, no data races, etc. It also provides other important benefits, such as improved error handling, stricter typing, sum types, pattern matching, privacy, closures, generics, etc.
Possible Rust for Linux topics:
- Rust in the kernel (e.g. status update, next steps…).
- Use cases for Rust around the kernel (e.g. subsystems, drivers, other modules…).
- Discussions on how to abstract existing subsystems safely, on API design, on coding guidelines…
- Integration with kernel systems and other infrastructure (e.g. build system, documentation, testing and CIs, maintenance, unstable features, architecture support, stable/LTS releases, Rust versioning, third-party crates…).
- Updates on its subprojects (e.g. klint, pinned-init).
Possible Rust topics:
- Language and standard library (e.g. upcoming features, stabilization of the remaining features the kernel needs, memory model…).
- Compilers and codegen (e.g. rustc improvements, LLVM and Rust, rustc_codegen_gcc, Rust GCC…).
- Other tooling and new ideas (bindgen, Cargo, Miri, Clippy, Compiler Explorer, Coccinelle for Rust…).
- Educational material.
- Any other Rust topic within the Linux ecosystem.
Last year was the first edition of the Rust MC and the focus was on showing the ongoing efforts by different parties (compilers, Rust for Linux, CI, eBPF…). Shortly after the Rust MC, Rust got merged into the Linux kernel. Abstractions are getting upstreamed, with the first major drivers looking to be merged soon: Android Binder, the Asahi GPU driver and the NVMe driver (presented in that MC).
July 30, 2023 07:05 AM
July 28, 2023
EOSS in Prague was great: lots of hallway track, good talks, good food, excellent tea at meetea - the first time I had proper tea in my life, quite an experience. It was also my first talk since covid, a packed room with a standing audience, apparently one of the top ten most attended talks per LF's conference report.

The video recording is now uploaded, and I've uploaded the fixed slides, including the missing slide that I accidentally cut in a last-minute edit. It's the same content as my blog posts from last year, first talking about locking engineering principles and then the hierarchy of locking engineering patterns.
July 28, 2023 12:00 AM
July 27, 2023
The Android Microconference brings the upstream community and Android systems developers together to discuss issues and changes to the Android platform and their dependencies and interactions with the Linux kernel, allowing for collaboration on solutions for upstream.
Since last year's conference, there has been quite a bit of progress.
Currently planned discussion topics for this year include:
- 16k Pages
- RISC-V
- android-mainline on Pixel6
- Updates on Binder
- BPF usage w/ Android
- Kernel and platform integration testing
- Vendor Hook Usage
- Building Modules for Android GKI Kernels
- Resolving Priority Inversion w/ Proxy Execution
- AOSP Devboards
- And likely more…
People are encouraged to submit topics related to new Android functionality as well as issues in getting that functionality upstream.
Please consider that the goal is to discuss open problems, preferably with patch set submissions already in discussion on LKML. The slots are very short (10-15 mins), and the main portion of the time should be given to the debate – thus, the importance of having an open and relevant problem, with people in the community engaged in the solution.
The CFP for the Android Micro-conference closes on Aug 15th, so get your topics in early!
Additionally, we already have a busy tentative schedule, but please submit your topics, and should it not fit, we hope to have additional discussion space in a follow-on BoF.
July 27, 2023 05:12 AM
July 23, 2023
Here is the list of microconferences at the 2023 Linux Plumbers Conference:
Some of the above already have a blog describing them in detail, and blogs for the rest will be coming shortly. If you plan on submitting a topic to one of these microconferences, please read the blog on what an ideal microconference topic submission is. After that, submit your topic and make sure that you select the appropriate track that you are submitting for (they are all listed under LPC Microconference and end with MC).
July 23, 2023 08:15 PM
July 21, 2023
We are pleased to announce that we will have a CXL MC this year at Plumbers, and hereby invite the community in our call for participation.
Compute Express Link is a cache coherent fabric that in recent years has been gaining momentum in the industry. CXL 3.0 launched just before Plumbers 2022 (where very early discussions took place), introducing features such as dynamic capacity devices and large-scale fabrics, both of which pose significant challenges for Linux. There has also been controversy and confusion in the Linux kernel community about the state and future of CXL, regarding its usage and integration into, for example, the core memory management subsystem. Many concerns have been put to rest through proper clarification and setting of expectations.
The Compute Express Link microconference focuses on how to evolve the Linux CXL kernel driver and userspace components for support of the CXL 2.0 spec (and beyond). The microconference provides a place to open the discussion, incorporate more perspectives, and grow the CXL community with a goal that the CXL Linux plumbing serves the needs of the CXL ecosystem while balancing the needs of the Linux project. Specifically, this microconference welcomes submissions detailing industry and academia use cases in order to develop usage model scenarios. Finally, it will be a good opportunity to have existing upstream CXL developers available in a forum to discuss current CXL support and to communicate areas that need additional involvement.
Suggested topics:
- Ecosystem & Architectural review
- Dynamic Capacity Devices
- Fabric Management
- QEMU support
- Security (ie: IDE/SPDM)
- Managing vendor specificity
- Type 2 accelerator support (bias flip management)
- Coherence management of type2/3 memory (back-invalidation)
- Peer2Peer (ie: Unordered IO)
- Reliability, availability and serviceability (ie: Advanced Error Reporting, Isolation, Maintenance).
- Hotplug (QoS throttling, policies, daxctl)
- Hot remove
- Documentation
- Memory tiering topics that can relate to cxl (out of scope of MM/performance MCs)
- Industry and academia use cases
Proposals can be submitted here, by September 1st:
https://lpc.events/event/17/abstracts/
For more information, feel free to contact the Compute Express Link MC Leads:
Davidlohr Bueso <dave@stgolabs.net>
Jonathan Cameron <Jonathan.Cameron@Huawei.com>
Adam Manzanares <a.manzanares@samsung.com>
Dan Williams <dan.j.williams@intel.com>
July 21, 2023 07:33 AM
July 14, 2023
I recently came across tinygrad, a small but powerful neural-network framework that has an OpenCL backend target and can run the LLaMA model.
I've been looking out for rusticl workloads, and this seemed like a good one; I could jump on the AI train and run an LLM in my house! I started it going on my Radeon 6700XT with the latest rusticl using radeonsi with the LLVM backend, and I could slowly interrogate a model with a question and it would respond. I've no idea how performant it is vs ROCm yet, which seems to be where tinygrad is more directed, but I may get to that next week.
While I was there, though, I decided to give the Mesa ACO compiler backend a go; it's been tied into radeonsi recently, and I'd done some hacks before to get compute kernels to run. I reproduced said hacks on the modern code and gave it a run.
tinygrad comes with a benchmark script called benchmark_train_efficientnet, so I started playing with it to see what low-hanging fruit I could find in an LLVM vs ACO shootout.
The bench does 10 runs; the first is where lots of compilation happens, and the last is well primed cache-wise. These are the figures from the first and last runs with a release build of llvm and mesa (and the ACO hacks).
LLVM:
215.78 ms cpy, 12245.04 ms run, 120.33 ms build, 12019.45 ms realize, 105.26 ms CL, -0.12 loss, 421 tensors, 0.04 GB used, 0.94 GFLOPS
10.25 ms cpy, 221.02 ms run, 83.50 ms build, 36.25 ms realize, 101.27 ms CL, -0.01 loss, 421 tensors, 0.04 GB used, 52.11 GFLOPS
ACO:
71.10 ms cpy, 3443.04 ms run, 112.58 ms build, 3214.13 ms realize, 116.34 ms CL, -0.04 loss, 421 tensors, 0.04 GB used, 3.35 GFLOPS
10.36 ms cpy, 234.90 ms run, 84.84 ms build, 36.51 ms realize, 113.54 ms CL, 0.05 loss, 421 tensors, 0.04 GB used, 49.03 GFLOPS
So ACO is about 4 times faster to compile but produces binaries that are less optimised.
The benchmark produces 148 shaders:
LLVM:
126 Max Waves: 16
6 Max Waves: 10
5 Max Waves: 9
6 Max Waves: 8
5 Max Waves: 4
ACO:
96 Max Waves: 16
36 Max Waves: 12
2 Max Waves: 10
10 Max Waves: 8
4 Max Waves: 4
So ACO doesn't quite get the optimal shaders for a bunch of paths, even with some local hackery I've done to make it do better.[1]
I'll investigate ROCm next week maybe, got a bit of a cold/flu, and large GPU stacks usually make me want to wipe the machine after I test them :-P
[1] https://gitlab.freedesktop.org/airlied/mesa/-/commits/radeonsi-rusticl-aco-wip
July 14, 2023 04:30 AM
July 11, 2023
The phrase "Root of Trust" turns up at various points in discussions about verified boot and measured boot, and to a first approximation nobody is able to give you a coherent explanation of what it means[1]. The Trusted Computing Group has a fairly wordy definition, but (a) it's a lot of words and (b) I don't like it, so instead I'm going to start by defining a root of trust as "A thing that has to be trustworthy for anything else on your computer to be trustworthy".
(An aside: when I say "trustworthy", it is very easy to interpret this in a cynical manner and assume that "trust" means "trusted by someone I do not necessarily trust to act in my best interest". I want to be absolutely clear that when I say "trustworthy" I mean "trusted by the owner of the computer", and that as far as I'm concerned selling devices that do not allow the owner to define what's trusted is an extremely bad thing in the general case)
Let's take an example. In verified boot, a cryptographic signature of a component is verified before it's allowed to boot. A straightforward implementation of a verified boot implementation has the firmware verify the signature on the bootloader or kernel before executing it. In this scenario, the firmware is the root of trust - it's the first thing that makes a determination about whether something should be allowed to run or not[2]. As long as the firmware behaves correctly, and as long as there aren't any vulnerabilities in our boot chain, we know that we booted an OS that was signed with a key we trust.
But what guarantees that the firmware behaves correctly? What if someone replaces our firmware with firmware that trusts different keys, or hot-patches the OS as it's booting it? We can't just ask the firmware whether it's trustworthy - trustworthy firmware will say yes, but the thing about malicious firmware is that it can just lie to us (either directly, or by modifying the OS components it boots to lie instead). This is probably not sufficiently trustworthy!
Ok, so let's have the firmware be verified before it's executed. On Intel this is "Boot Guard", on AMD this is "Platform Secure Boot", everywhere else it's just "Secure Boot". Code on the CPU (either in ROM or signed with a key controlled by the CPU vendor) verifies the firmware[3] before executing it. Now the CPU itself is the root of trust, and, well, that seems reasonable - we have to place trust in the CPU, otherwise we can't actually do computing. We can now say with a reasonable degree of confidence (again, in the absence of vulnerabilities) that we booted an OS that we trusted. Hurrah!
Except. How do we know that the CPU actually did that verification? CPUs are generally manufactured without verification being enabled - different system vendors use different signing keys, so those keys can't be installed in the CPU at CPU manufacture time, and vendors need to do code development without signing everything so you can't require that keys be installed before a CPU will work. So, out of the box, a new CPU will boot anything without doing verification[4], and development units will frequently have no verification.
As a device owner, how do you tell whether or not your CPU has this verification enabled? Well, you could ask the CPU, but if you're doing that on a device that booted a compromised OS then maybe it's just hotpatching your OS so when you do that you just get RET_TRUST_ME_BRO even if the CPU is desperately waving its arms around trying to warn you it's a trap. This is, unfortunately, a problem that's basically impossible to solve using verified boot alone - if any component in the chain fails to enforce verification, the trust you're placing in the chain is misplaced and you are going to have a bad day.
So how do we solve it? The answer is that we can't simply ask the OS, we need a mechanism to query the root of trust itself. There's a few ways to do that, but fundamentally they depend on the ability of the root of trust to provide proof of what happened. This requires that the root of trust be able to sign (or cause to be signed) an "attestation" of the system state, a cryptographically verifiable representation of the security-critical configuration and code. The most common form of this is called "measured boot" or "trusted boot", and involves generating a "measurement" of each boot component or configuration (generally a cryptographic hash of it), and storing that measurement somewhere. The important thing is that it must not be possible for the running OS (or any pre-OS component) to arbitrarily modify these measurements, since otherwise a compromised environment could simply go back and rewrite history. One frequently used solution to this is to segregate the storage of the measurements (and the attestation of them) into a separate hardware component that can't be directly manipulated by the OS, such as a Trusted Platform Module. Each part of the boot chain measures relevant security configuration and the next component before executing it and sends that measurement to the TPM, and later the TPM can provide a signed attestation of the measurements it was given. So, an SoC that implements verified boot should create a measurement telling us whether verification is enabled - and, critically, should also create a measurement if it isn't. This is important because failing to measure the disabled state leaves us with the same problem as before; someone can replace the mutable firmware code with code that creates a fake measurement asserting that verified boot was enabled, and if we trust that we're going to have a bad time.
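The reason rewriting history is impossible comes down to how the measurements are stored: a TPM PCR can only be "extended", folding each new measurement into a running hash. This little userspace sketch (using OpenSSL purely to simulate what the TPM does internally) shows why earlier measurements are fixed:

#include <openssl/sha.h>
#include <string.h>

/* Simulate a TPM PCR extend: new = SHA256(old || measurement). Because
 * the old value is an input to the hash, code that runs later can only
 * append measurements -- it cannot rewrite the ones already recorded. */
static void pcr_extend(unsigned char pcr[SHA256_DIGEST_LENGTH],
                       const unsigned char *measurement, size_t len)
{
    unsigned char buf[SHA256_DIGEST_LENGTH + len];  /* C99 VLA */

    memcpy(buf, pcr, SHA256_DIGEST_LENGTH);
    memcpy(buf + SHA256_DIGEST_LENGTH, measurement, len);
    SHA256(buf, sizeof(buf), pcr);
}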
(Of course, simply measuring the fact that verified boot was enabled isn't enough - what if someone replaces the CPU with one that has verified boot enabled, but trusts keys under their control? We also need to measure the keys that were used in order to ensure that the device trusted only the keys we expected, otherwise again we're going to have a bad time)
So, an effective root of trust needs to:
1) Create a measurement of its verified boot policy before running any mutable code
2) Include the trusted signing key in that measurement
3) Actually perform that verification before executing any mutable code
and from then on we're in the hands of the verified code actually being trustworthy, and it's probably written in C so that's almost certainly false, but let's not try to solve every problem today.
Does anything do this today? As far as I can tell, Intel's Boot Guard implementation does. Based on publicly available documentation I can't find any evidence that AMD's Platform Secure Boot does (it does the verification, but it doesn't measure the policy beforehand, so it seems spoofable), but I could be wrong there. I haven't found any general purpose non-x86 parts that do, but this is in the realm of things that SoC vendors seem to believe is some sort of value-add that can only be documented under NDAs, so please do prove me wrong. And then there are add-on solutions like Titan, where we delegate the initial measurement and validation to a separate piece of hardware that measures the firmware as the CPU reads it, rather than requiring that the CPU do it.
But, overall, the situation isn't great. On many platforms there's simply no way to prove that you booted the code you expected to boot. People have designed elaborate security implementations that can be bypassed in a number of ways.
[1] In this respect it is extremely similar to "Zero Trust"
[2] This is a bit of an oversimplification - once we get into dynamic roots of trust like Intel's TXT this story gets more complicated, but let's stick to the simple case today
[3] I'm kind of using "firmware" in an x86ish manner here, so for embedded devices just think of "firmware" as "the first code executed out of flash and signed by someone other than the SoC vendor"
[4] In the Intel case this isn't strictly true, since the keys are stored in the motherboard chipset rather than the CPU, and so taking a board with Boot Guard enabled and swapping out the CPU won't disable Boot Guard because the CPU reads the configuration from the chipset. But many mobile Intel parts have the chipset in the same package as the CPU, so in theory swapping out that entire package would disable Boot Guard. I am not good enough at soldering to demonstrate that.
July 11, 2023 07:58 AM
July 08, 2023
Covenants are a construction to allow introspection: a transaction output can place conditions on the transaction which spends it (beyond the specific “must provide a valid signature of itself and a particular pubkey”).
This power extends script in useful ways, such as allowing the creation of forced spending paths (such as vaults, which force spending delays) and rebindable inputs (such as required for Lightning "state fixup" proposals, aka LN-Symmetry). But when we discuss specific proposals (such as OP_TX, OP_TXHASH or OP_CHECKTEMPLATEVERIFY) it's been difficult to nail down the exact trade-offs made by each one. So I want to describe the landscape (or taxonomy) which we can use to categorize and assess covenants.
The Simplest Covenant (Which Doesn’t Quite Work!)
Firstly, consider the simplest covenant: OP_TXIDVERIFY. This would check that the txid of the spending transaction is equal to the given txid. This is easy to implement both in existing script (replacing OP_NOP3) and in tapscript. It doesn't actually work, since it creates a commitment circle (the txid contains the txid of the input, which contains the txid…; thanks Jeremy Rubin), but it's a useful thought experiment.
Fully Complete Covenants
Now, consider the most complete covenant proposal: OP_TX. The idea is to push some specified field of the spending transaction onto the stack. For efficiency, it would take a bitmap to push multiple fields at once and (because we don't have OP_CAT) have an option to concatenate them all and push them as one element. There are details here which matter, such as how primitive Bitcoin script is when dealing with numbers, and stack space limits for large transactions, but the idea is simple.
This allows you to do things like “output amount must be > 100000 sats”.
Equality Covenants
On the spectrum from simplest to most complete, is Russell O’Connor’s OP_TXHASH (which I generalized into the OP_TX proposal), which takes a bitmap from the stack and hashes those fields together, then pushes the resulting hash onto the stack. Of course, you can have multiple OP_IF branches allowing different equalities, but only a handful: you’ll run out of scripts space quite fast. With OP_CAT you could extend this further to assemble a template to compare against at runtime, but we don’t have that so I’ll ignore that for now.
This allows for simple equality tests, such as “output amount must be 100000 sats”.
OP_CHECKTEMPLATEVERIFY Covenant
OP_CHECKTEMPLATEVERIFY is a further restriction on basic equality covenants: it’s like OP_TXHASH with a fixed bitmap. It’s an opinionated subset though, which makes it more powerful than OP_TXIDVERIFY (and usable!): in particular it doesn’t commit to inputs at all (except the input number), but commits to all the outputs; this means you can theoretically add fees and still match, but you can’t have a change output. It’s also usable outside tapscript, since it’s written in the old “don’t-touch-the-stack” script soft-fork style.
Taproot Allows Us To Design, Then Restrict
Designing in the second half of 2023, I think it’s reasonable to assume covenants are only relevant inside taproot.
This means we have the ability to easily limit it in a way which can be unlimited in stages later via future soft-forks:
- If we only allow certain bits to be set, and otherwise treat it as OP_SUCCESS, we can trim its ability today, and softfork in new bits later without needing a new opcode.
- Similarly, we can also restrict it to a static analyzable set by requiring it to immediately follow a PUSH operation, otherwise degrade to OP_SUCCESS.
As an example, let’s turn OP_TX into OP_TXIDVERIFY. We only define one bit: OP_TX_BIT_TXID. That bit means “push the txid on the stack”, and if anything else is set, OP_TX is interpreted as OP_SUCCESS:
01 OP_TX_BIT_TXID OP_TX <txid> OP_EQUALVERIFY
Similarly, if we want OP_CHECKTEMPLATEVERIFY, we require the following OP_TX bits to be defined:
- OP_TX_BIT_COMBINE (meaning to concatenate onto one stack element)
- OP_TX_BIT_NVERSION
- OP_TX_BIT_NLOCKTIME
- OP_TX_BIT_SCRIPTSIG_HASHES
- OP_TX_BIT_INPUT_INDEX
- OP_TX_BIT_NSEQUENCES
- OP_TX_BIT_NUM_INPUTS
- OP_TX_BIT_OUTPUTS_AMOUNT
- OP_TX_BIT_OUTPUTS_SCRIPT
- OP_TX_BIT_NUM_OUTPUTS
i.e. (assuming they're assigned bits 0 through 9):
02 b1111111111 OP_TX OP_SHA256 <hash> OP_EQUALVERIFY
There are some differences in how fields are hashed, and perhaps their order, but these are cosmetic not functional differences.
Extending this in future simply means defining other fields, and what combinations are allowed.
The Recursion Distraction
There are many ways we can argue about how to clip covenants’ wings. But I want to address (and dismiss) one specifically: the idea of restricting recursive covenants which restrict all future descendants.
Any covenant system listed here can restrict outputs. That means I can require that the spending transaction spend to a spending transaction that spends to a spending transaction that spends to…. 100 million transactions later… an output to me. You could prevent this by requiring that any covenant-spending tx itself is not allowed to use covenants at all, but that adds complexity and reduces usefulness.
Mathematically, there’s a difference between being able to restrict transactions to arbitrary depth and to infinite depth. Nobody else cares: either way, there are far better ways to render your coins useless than placing them in a giant chain or loop.
Covenants by the Back Door
I wrote a previous post on Covenants via Signatures which noted that signatures with BIP-118 can be used to make covenants. This is not a neat design, it’s more like “we have a jackhammer, we can use it to knock in a nail”. On the covenant spectrum, it’s an Equality Covenant between OP_TXHASH and OP_CHECKTEMPLATEVERIFY, in that it can be used with several different field bitmaps, according to the SIGHASH flags used on the signature.
Introspection Is Not All We Want
It’s worth noting that Bitcoin’s OP_CHECKSIG (and family) do three things:
- Assemble parts of the current transaction.
- Hash it.
- Check the hash is signed with a given key.
OP_TX implements the first, and we already have various OP_SHA256 and similar operations for the second. OP_TXHASH and OP_CHECKTEMPLATEVERIFY combine the first two.
It’s logical to want a separate operation for the third one, hence the proposal to be able to check a signature signs a given hash: OP_CHECKSIGFROMSTACK. This would let you simulate any OP_CHECKSIG variation (depending on what OP_TX/OP_TXHASH flags were enabled):
02 <flags> OP_TX OP_SHA256 <pubkey> OP_CHECKSIGFROMSTACK
Summary
We should enable ANYPREVOUT. This will enable LN-symmetry which makes Lightning simpler (and thus more robust!), which has already been implemented. It will also enable covenants, though with a weird requirement for a signature-in-output, which makes them less efficient than they could be, but enables real uses and experimentation to inform future soft forks.
For future covenant soft forks, we should look at complete designs like OP_TX, then clip their wings as desired so we can enable the full functionality later. This may well end up looking like OP_CHECKTEMPLATEVERIFY!
Meanwhile, Greg Sanders, who both refined the OP_VAULT proposal and implemented LN-Symmetry (nee Eltoo), expressed the opinion that we’re fast approaching the edge of Bitcoin Script usability, and he now was firmly of the opinion that a soft fork to introduce Simplicity would be better. Perhaps that will happen instead of OP_TX or the like?
July 08, 2023 02:30 PM
July 07, 2023
Covenants are in Bitcoin already, almost?
[EDIT: An earlier version claimed we do covenants already. Oops! Thanks Ruben Somsen and Jimmy Song.]
In Bitcoin, covenants refers to restricting how an output is spent: the name comes from the legal term for conditions on property that persist beyond sale. This is usually defined to exclude the "obvious" common requirement that the spending transaction be signed by a given key.
But to step back, there are logically three things OP_CHECKSIG (and friends) do:
- Assemble parts of the spending transaction (which parts depend on the SIGHASH flags).
- Hash that.
- Validate that the hash has been signed by the given key.
For covenants, you really just want the first part: the ability to introspect so you can check whatever feature of the transaction you care about (this is the basis for OP_TX, which takes a bitmap telling it what about the spending transaction to push onto the stack so you can test it). But if you only care about equality, you can get away with something that does the first two things (this insight was the basis for Russell O’Connor’s OP_TXHASH, like OP_TX but always hashes before putting on the stack), and just check the hash is what you expected.
But, Burak points out that if you only care about equality, and are happy to specify all the fields that are covered by signatures (with the various SIGHASH variants), you can simply put a pubkey with a pre-made signature and an OP_CHECKSIG into the output script! That constrains the spending transaction's fields to hash to whatever that OP_CHECKSIG expects.

This, of course, is overkill: you don't actually care about the signature operation, you're just using it to test hashes so you can create covenants. But unfortunately (as pointed out when I posted on Twitter, all excited!), it doesn't quite work yet, because your signature has to commit to the script which contains the signature: a circular dependency.
But BIP-118 (a.k.a. ANYPREVOUT) proposes new SIGHASH flags which allow you not to commit to the input script, so this covenant-via-signature is possible, and Rearden Code has a tweak which makes this more powerful. I also suspect that you can probably use a simple 0x1 as the pubkey in some cases, since BIP-118 defines that to mean the taproot internal key, saving 32 bytes.
July 07, 2023 02:30 PM
July 01, 2023
We are pleased to announce the first ever Linux Kernel Debugging Microconference, and we are now accepting proposals and problem statements.
Kernel debugging can be done in many ways with many purpose-built tools, from printk to Crash, Drgn, KDB/KGDB, and more. These tools are built on layers of standards, formats, implicit standards, and undocumented assumptions that make everything tick. When things work well, the tools stay out of your way and help you resolve your bug. But when things don’t work so well, you’re left debugging your debugger.
The Linux Kernel Debugging Microconference aims to bring together the developers and users of these tools to discuss the shared problems we face. We hope to discuss ongoing work that will improve the state of kernel debuggers, as well as new ideas that will require coordinated development across projects. Some possible topics might include:
- Alternative sources of debuginfo beyond DWARF (kallsyms, BTF, etc)
- Problems related to core debugging tools & utilities (`/proc/vmcore`, `/proc/kcore`, kexec, kdump, makedumpfile, libkdumpfile, and many more).
- Strategies to handle the interpretation of core kernel subsystems across versions (e.g. slab & vfs).
- Core dump formats and ways they can break & be repaired
Topics outside this narrow list are welcomed: we welcome any topic that would improve the debugging experience, or merits the attention of the developers of these tools & kernel subsystems. The best submissions will describe active work or open problems, and they will welcome debate, discussion, and community consensus.
Submissions can be made via the LPC Call for Proposals, by selecting Linux Kernel Debugging MC for your track.
July 01, 2023 04:22 PM
June 26, 2023
The Linux Plumbers' microconference is a three-and-a-half-hour session focused on one general subject area. It can be on Android, power management, tracing, real-time or any of the other many subsystems in the Linux ecosystem. These sessions are broken up into smaller topics that are highly focused work meetings, with the goal of accomplishing something during the brief discussions that take place. A topic session ranges from 15 to 30 minutes in length, where no more than half the time is a presentation to bring everyone in the room (or online) up to speed on the issues that need to be discussed, and the rest of the time is spent brainstorming ideas with the audience on how to solve the problems at hand. The problem does not need to be solved in this short time, but when time is up, the audience should understand what is at stake well enough to be productive offline in mailing lists and chat rooms.
Submitting a microconference topic
A microconference topic submission should be considered a problem statement and not an abstract. The submission should explain what the issue is that the submitter is struggling with, what has currently been done to try to solve it, and sometimes that means showing multiple solutions where there are pros and cons to each solution and the submitter wants to discuss which is better with the audience. There is the possible chance that the audience may even come up with a new solution that is better than what is being presented. The topic should be focused on what is currently being worked on and not about what was already done, unless the submitter wants to talk about what new can be done with what was already done.
Presenting the topic
The topic should start off with a presentation. The goal of the session is to come up with answers to the problem at hand. If the audience does not know the details of the issue, they are highly unlikely to come up with any productive input. The more the audience understands the problem, the likelier they will be able to help out. Due to the short time of the microconference topic session, it is imperative that the presentation is extremely focused on a need to know basis. That is, only present what is critical knowledge to understand the problem at hand. The quicker the audience can come up to speed, the more time there will be to have a productive discussion with them. There is no limit to the number of slides, but the focus should be on the time spent on the presentation.
Another difference between a microconference topic session and a normal presentation is that there is no Q&A, only discussion. In a presentation's Q&A, the audience asks the presenter questions and the presenter answers them. In a microconference topic session, the presenter starts by asking the audience questions, and there should then be a back and forth between the audience and the presenter, as well as between different members of the audience.
General information topics
One exception to the above is when the general focus area requires an understanding of a specific topic that all the other topics depend on. One example is RISC-V coming out with a new specification: the first topic in the microconference may be a 30-minute presentation about which details of the new specification will impact further development. The rest of the microconference needs this information to make proper decisions. The Android microconference had a similar case, where presentations were required for the other topics to be discussed. The general rule of thumb is that if a presentation is needed to have productive discussions, then it is allowed. Due to the short time of a microconference, it is encouraged to have few of these presentations, and better yet to have people do their homework before attending the microconference.
Attendee preparation
The focus of a microconference is to solve the problems that exist today and to come up with the innovations of tomorrow. The time constraint requires that everyone involved be well prepared for the discussions that are to take place. The topic descriptions should include links to patch discussions on mailing lists, to wiki pages that describe the general focus area, or to anything that is not common knowledge to those not directly involved in the work. Linux Plumbers is about getting experts outside the field to give input with a different perspective. Attendees should make an effort to read through the topics of all the microconferences, and if there’s a topic of interest, they should read the links and familiarize themselves with the discussions that will take place. This will allow attendees to be more productive than if they came in without an understanding of the general focus area.
By following these general guidelines, Linux Plumbers will remain the most productive technical conference that one can attend.
June 26, 2023 12:02 PM
June 23, 2023
We’re holding another edition of the RISC-V microconference at Plumbers 2023. Broadly speaking, anything related to both Linux and RISC-V is on topic, but discussions tend to involve the following categories:
- How to support new RISC-V ISA features in Linux, both for the standards and for vendor-specific extensions.
- Discussions related to RISC-V based SOCs, which frequently include interactions with other Linux subsystems as well as core arch/riscv code.
- Coordination with distributions and toolchains on userspace-visible behavior.
Accomplishments since the 2022 Microconference
All the talks at the 2022 Plumbers microconference have made at least some progress, with many of them resulting in big chunks of merged code.
Specifically:
- The riscv_hwprobe() syscall has been merged.
- Support for ACPI has been merged.
- Kconfig.socs is in the process of being refactored.
- Preliminary patches for the RISC-V TEE have been posted.
- Some optimized routines have been merged, but there’s still a long way to go.
- Text patching is still up in the air, but we’ve been working through many of the issues pointed out during the discussions.
Likely Topics for Discussion Sections
The actual list of topics tends to be hard to pin down this early, but here’s a few topics that have been floating around the mailing lists and may be easier to resolve in real-time:
- Do we even bother with generic optimized lib routines, or just go vendor-specific?
- When can we start deprecating stuff? Likely-unused bits include: rv32, nommu, xip, old toolchains.
- Is it time to give up on profiles and just set a base ourselves?
- CI: Hosting PW-NIPA (currently hosted by Conor/Microchip), hosting “upstream kernel ci” on Github w/ sponsored runners?
- Hardware assisted control-flow integrity on RISC-V CPUs.
- Handling text patching on RISC-V systems.
- How do we deal with vendor-specific memory management?
Submissions can be made via the LPC Call for Proposals, by selecting the RISC-V MC track.
June 23, 2023 09:34 PM
June 20, 2023
The real-time and scheduling micro-conference joins these two intrinsically connected communities to discuss the next steps together.
Over the past decade, many parts of PREEMPT_RT have been included in the official Linux codebase. Examples include real-time mutexes, high-resolution timers, lockdep, ftrace, RCU_PREEMPT, threaded interrupt handlers, and more. The number of patches that still need integration has been significantly reduced, and the rest are mature enough to make their way into mainline Linux.
The scheduler is at the core of Linux performance. With different topologies and workloads, giving the user the best experience possible is challenging, from low latency to high throughput and from small power-constrained devices to HPC, where CPU isolation is critical.
The following accomplishments have been made as a result of last year’s micro-conference:
Ideas of topics to be discussed include (but are not limited to):
- Improve responsiveness for CFS tasks – e.g., latency-nice patch
- The new EEVDF scheduler proposal
- Impact of new topology on CFS including hybrid or heterogeneous system
- Taking into account task profile with IPCC or uclamp
- Improvements in CPU Isolation
- The status of PREEMPT_RT
- Locking improvements – e.g., proxy execution
- Improvements on SCHED_DEADLINE
- Tooling for debugging scheduling and real-time
It is fine if you have a new topic that is not on the list. People are encouraged to submit any topic related to real-time and scheduling.
Please consider that the goal is to discuss open problems, preferably with patch set submissions already in discussion on LKML. The presentations are very short, and the main portion of the time should be given to the debate – thus, the importance of having an open and relevant problem, with people in the community engaged in the solution.
Submissions can be made via the LPC Call for Proposals, by selecting the Real-time and Scheduling MC track.
June 20, 2023 08:59 PM
June 16, 2023
We’re happy to announce that registration for LPC 2023 is now open. To register please go to our attend page.
To try to prevent the instant sellout we had last year, we’ve updated our cancellation policy: no refunds, only transfers of registrations. You will find more details during the registration process. LPC 2023 follows the Linux Foundation’s health & safety policy.
As usual we expect to sell out rather quickly, so don’t delay your registration for too long!
June 16, 2023 03:00 PM
June 14, 2023
Registration for LPC 2023 will open soon. Past experience tells us that in-person registration sells out very fast. If you plan to join us in Richmond, please follow our blog and social media for announcements about registration!
June 14, 2023 09:04 AM
June 12, 2023
The v2023.06.11a release of Is Parallel Programming Hard, And, If So, What Can You Do About It? is now available! The double-column version is also available from arXiv.org.
This release contains a new section on thermal throttling (along with a new cartoon), improvements to the memory-ordering chapter (including intuitive subsets of the Linux-kernel memory model), fixes to the deferred-processing chapter, additional clocksource-deviation material to the "What Time Is It?" section, and numerous fixes inspired by questions and comments from readers. Discussions with Yariv Aridor were especially fruitful. Akira Yokosawa contributed some quick quizzes and other upgrades of the technical discussions, along with a great many improvements to grammar, glossaries, epigraphs, and the build system. Leonardo Bras also provided some much-appreciated build-system improvements, and also started up continuous integration for some of the code samples.
Elad Lahav, Alan Huang, Zhouyi Zhou, and especially SeongJae Park contributed numerous excellent fixes for grammatical and typographical errors. SeongJae's fixes were from his Korean translation of this book.
Elad Lahav, Alan Huang, and Patrick Pan carried out some much-needed review of the code samples and contributed greatly appreciated fixes and improvements. In some cases, they dragged the code kicking and screaming into the 2020s. :-)
June 12, 2023 04:40 PM
May 22, 2023
Thanks for all the suggestions here, on Twitter, and on Mastodon; anyone who noted I could use a single fd and avoid all the pain was correct!
I hacked up an ever-growing ftruncate/madvise memfd and it seemed to work fine. In order to use it for sparse I have to use it for all device memory allocations in lavapipe, which means if I push forward I probably have to prove to myself that it works and scales a bit better. I suspect layering some of the pb bufmgr code on top of an ever-growing fd might work, or maybe just having multiple 2GB buffers might be enough.
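For the curious, the single-fd scheme is easy to picture. Here is a minimal sketch of the idea (the names and the 64KiB granularity are mine, not lavapipe's):

```c
/* Sketch: one ever-growing memfd used as a device-memory pool.
 * Assumptions (mine, not lavapipe's): 64KiB granularity, and no
 * free-list or chunk reuse shown. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#define POOL_ALIGN (64 * 1024)

static int pool_fd = -1;
static off_t pool_size;

/* Grow the fd and hand back the offset of the new chunk. */
static off_t pool_alloc(size_t size)
{
    if (pool_fd < 0) {
        pool_fd = memfd_create("devmem-pool", MFD_CLOEXEC);
        if (pool_fd < 0)
            return (off_t)-1;
    }
    size = (size + POOL_ALIGN - 1) & ~((size_t)POOL_ALIGN - 1);
    off_t off = pool_size;
    if (ftruncate(pool_fd, pool_size + size) < 0)
        return (off_t)-1;
    pool_size += size;
    return off;
}
```

Releasing a chunk would presumably be a hole punch on its range (e.g. madvise(MADV_REMOVE) on a mapping of it), which I take to be the madvise half of the idea.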
Not sure how best to do shaderResourceResidency; userfaultfd might be somewhat useful, and mapping with PROT_NONE and then using write(2) to get a -EFAULT is also promising, but I'm not sure how best to avoid segfaults for reads/writes to PROT_NONE regions.
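The write(2) trick is cheap to probe; a minimal sketch of the idea (illustrative only, not lavapipe code):

```c
/* Sketch: test whether a range is readable without risking a
 * SIGSEGV, by letting the kernel take the fault for us. write(2)
 * to /dev/null returns -1 (EFAULT) for a PROT_NONE source instead
 * of killing the process. */
#include <fcntl.h>
#include <stdbool.h>
#include <unistd.h>

static bool range_is_resident(const void *addr, size_t len)
{
    int nullfd = open("/dev/null", O_WRONLY);
    if (nullfd < 0)
        return false;
    ssize_t n = write(nullfd, addr, len);  /* -1/EFAULT if unreadable */
    close(nullfd);
    return n == (ssize_t)len;
}
```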
Once I got that going, though, I ran headfirst into something that should have been obvious to me, but that I hadn't thought through.
llvmpipe allocates all its textures linearly; there is no tiling (even for Vulkan optimal). Sparse textures are incompatible with linear implementations: for sparseImage2D you have to be able to give the sparse tile sizes from just the image format. This typically means working out the width and height of the tile that fits into a hardware page. For a linear image, of course, that would depend on the image stride, not just the format, and you just don't have that information.
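To make the constraint concrete: a tiled layout fixes the bytes per tile, so the tile extent falls out of the texel size alone. A sketch along the lines of the Vulkan spec's standard sparse image block shapes for single-sample 2D images (treat the exact shapes as illustrative):

```c
/* Sketch: derive a 64KiB sparse tile extent from the texel size.
 * This only works for tiled layouts, where bytes-per-tile is
 * fixed; a linear image would also need the row stride, which
 * the format alone cannot provide. */
struct tile_extent { unsigned w, h; };

static struct tile_extent sparse_tile_for_texel_bytes(unsigned bytes)
{
    switch (bytes) {
    case 1:  return (struct tile_extent){256, 256}; /* 64KiB each */
    case 2:  return (struct tile_extent){256, 128};
    case 4:  return (struct tile_extent){128, 128};
    case 8:  return (struct tile_extent){128, 64};
    case 16: return (struct tile_extent){64, 64};
    default: return (struct tile_extent){0, 0};     /* unsupported */
    }
}
```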
I guess it means texture tiling in llvmpipe might have to become a thing; we've thought about it over the years, but I don't think there's ever been a solid positive for implementing it.
Might have to put sparse support on the back burner for a little while longer.
May 22, 2023 03:12 AM
May 17, 2023
Mike nerdsniped me into wondering how hard sparse memory support would be in lavapipe.
The answer is unfortunately extremely.
Sparse binding essentially allows creating a vulkan buffer/image of a certain size, then plugging in chunks of memory to back it in page-size multiple chunks.
This works great with GPU APIs, which were designed around it, but it's actually hard to pull off on the CPU.
Currently lavapipe allocates memory with an aligned malloc. It allocates objects with no backing and non-sparse bindings connect objects to the malloced memory.
However with sparse objects, the object creation should allocate a chunk of virtual memory space, then sparse binding should bind allocated device memory into the virtual memory space. Except Linux has no interfaces for doing this without using a file descriptor.
You can't mmap a chunk of anonymous memory that you allocated with malloc to another location. So if I malloc backing memory A at 0x1234000, but the virtual memory I've used for the object is at 0x4321000, there's no nice way to get the memory from the malloc to be available at the new location (unless I missed an API).
However you can do it with file descriptors. You can mmap a PROT_NONE area for the sparse object, then allocate the backing memory into file descriptors, then mmap areas from those file descriptors into the correct places.
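A minimal sketch of that fd dance (illustrative only; error handling trimmed):

```c
/* Sketch: reserve address space with PROT_NONE, then bind backing
 * memory into it with MAP_FIXED, which atomically replaces the
 * PROT_NONE pages in that range. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/types.h>

/* Reserve 'size' bytes of address space with no backing. */
static void *sparse_reserve(size_t size)
{
    return mmap(NULL, size, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}

/* Bind 'len' bytes at 'offset' in the reservation to pages at
 * 'fd_off' inside a memfd holding the backing memory. */
static int sparse_bind(void *base, size_t offset, size_t len,
                       int memfd, off_t fd_off)
{
    void *p = mmap((char *)base + offset, len,
                   PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED, memfd, fd_off);
    return p == MAP_FAILED ? -1 : 0;
}
```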
But there are limits on file descriptors: you get a 1024 soft and 4096 hard limit by default, which is woefully low for this. Also *all* device memory allocations would need to be fd-backed, not just the ones going to be used in sparse allocations.
Vulkan has a maxMemoryAllocationCount limit that could be used for this, but setting it to the fd limit is a problem: some fds are being used by the application and in general by normal operations, so reporting 4096 for it is probably going to explode if you only have 3900 of them left.
Also the sparse CTS tests don't respect the maxMemoryAllocationCount anyways :-)
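For reference, querying the actual limits at runtime is at least cheap (a minimal sketch):

```c
/* Sketch: read the process's fd limits, from which a driver could
 * derive a conservative maxMemoryAllocationCount rather than
 * blindly reporting the hard limit. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("fds: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}
```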
I shall think on this a bit more, please let me know if anyone has any good ideas!
May 17, 2023 07:28 AM
May 11, 2023
(Edit 2023-05-10: This has now launched for a subset of Twitter users. The code that existed to notify users that device identities had changed does not appear to have been enabled - as a result, in its current form, Twitter can absolutely MITM conversations and read your messages)
Elon Musk appeared on an interview with Tucker Carlson last month, with one of the topics being the fact that Twitter could be legally compelled to hand over users' direct messages to government agencies since they're held on Twitter's servers and aren't encrypted. Elon talked about how they were in the process of implementing proper encryption for DMs that would prevent this - "You could put a gun to my head and I couldn't tell you. That's how it should be."
tl;dr - in the current implementation, while Twitter could subvert the end-to-end nature of the encryption, it could not do so without users being notified. If any user involved in a conversation were to ignore that notification, all messages in that conversation (including ones sent in the past) could then be decrypted. This isn't ideal, but it still seems like an improvement over having no encryption at all. More technical discussion follows.
For context: all information about Twitter's implementation here has been derived from reverse engineering version 9.86.0 of the Android client and 9.56.1 of the iOS client (the current versions at time of writing), and the feature hasn't yet launched. While it's certainly possible that there could be major changes in the protocol between now and launch, Elon has asserted that they plan to launch the feature this week, so it's plausible that this reflects what'll ship.
For it to be impossible for Twitter to read DMs, they need to not only be encrypted, they need to be encrypted with a key that's not available to Twitter. This is what's referred to as "end-to-end encryption", or e2ee - it means that the only components in the communication chain that have access to the unencrypted data are the endpoints. Even if the message passes through other systems (and even if it's stored on other systems), those systems do not have access to the keys that would be needed to decrypt the data.
End-to-end encrypted messengers were initially popularised by Signal, but the Signal protocol has since been incorporated into WhatsApp and is probably much more widely used there. Millions of people per day are sending messages to each other that pass through servers controlled by third parties, but those third parties are completely unable to read the contents of those messages. This is the scenario that Elon described, where there's no degree of compulsion that could cause the people relaying the messages to decrypt them afterwards.
But for this to be possible, both ends of the communication need to be able to encrypt messages in a way the other end can decrypt. This is usually performed using AES, a well-studied encryption algorithm with no known significant weaknesses. AES is a form of what's referred to as symmetric encryption, one where encryption and decryption are performed with the same key. This means that both ends need access to that key, which presents us with a bootstrapping problem. Until a shared secret is obtained, there's no way to communicate securely, so how do we generate that shared secret? A common mechanism for this is something called Diffie-Hellman key exchange, which makes use of asymmetric encryption. In asymmetric encryption, an encryption key is split into two components - a public key and a private key. Both devices involved in the communication combine their own private key and the other party's public key to generate a secret that can only be derived with access to one of the private keys. As long as you know the other party's public key, you can now securely generate a shared secret with them. Even a third party with access to all the public keys won't be able to identify this secret. Signal makes use of a variation of Diffie-Hellman called Extended Triple Diffie-Hellman that has some desirable properties, but it's not strictly necessary for the implementation of something that's end-to-end encrypted.
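The arithmetic behind classic finite-field Diffie-Hellman fits in a few lines. A toy sketch follows; the numbers are far too small to be secure, and real systems (including, reportedly, Twitter's) use elliptic-curve variants instead:

```c
/* Toy finite-field Diffie-Hellman: both sides derive the same
 * secret without ever transmitting it. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

/* square-and-multiply modular exponentiation; the 128-bit
 * intermediate (a GCC/Clang extension) avoids overflow */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    base %= mod;
    while (exp) {
        if (exp & 1)
            result = (unsigned __int128)result * base % mod;
        base = (unsigned __int128)base * base % mod;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    const uint64_t p = 0xFFFFFFFFFFFFFFC5ull; /* 2^64 - 59, prime */
    const uint64_t g = 5;
    uint64_t a = 123456789, b = 987654321;    /* private keys */
    uint64_t A = powmod(g, a, p);             /* exchanged publicly */
    uint64_t B = powmod(g, b, p);
    /* each side combines its own private key with the peer's
     * public key; both arrive at g^(ab) mod p */
    printf("alice's secret: %llx\n", (unsigned long long)powmod(B, a, p));
    printf("bob's secret:   %llx\n", (unsigned long long)powmod(A, b, p));
    return 0;
}
```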
Although it was rumoured that Twitter would make use of the Signal protocol, and in fact there are vestiges of code in the Twitter client that still reference Signal, recent versions of the app have shipped with an entirely different approach that appears to have been written from scratch. It seems simple enough. Each device generates an asymmetric keypair using the NIST P-256 elliptic curve, along with a device identifier. The device identifier and the public half of the key are uploaded to Twitter using a new API endpoint called /1.1/keyregistry/register. When you want to send an encrypted DM to someone, the app calls /1.1/keyregistry/extract_public_keys with the IDs of the users you want to communicate with, and gets back a list of their public keys. It then looks up the conversation ID (a numeric identifier that corresponds to a given DM exchange - for a 1:1 conversation between two people it doesn't appear that this ever changes, so if you DMed an account 5 years ago and then DM them again now from the same account, the conversation ID will be the same) in a local database to retrieve a conversation key. If that key doesn't exist yet, the sender generates a random one. The message is then encrypted with the conversation key using AES in GCM mode, and the conversation key is then put through Diffie-Hellman with each of the recipients' public device keys. The encrypted message is then sent to Twitter along with the list of encrypted conversation keys. When each of the recipients' devices receives the message it checks whether it already has a copy of the conversation key, and if not performs its half of the Diffie-Hellman negotiation to decrypt the encrypted conversation key. Once it has the conversation key, it decrypts the message and shows it to the user.
What would happen if Twitter changed the registered public key associated with a device to one where they held the private key, or added an entirely new device to a user's account? If the app were to just happily send a message with the conversation key encrypted with that new key, Twitter would be able to decrypt that and obtain the conversation key. Since the conversation key is tied to the conversation, not any given pair of devices, obtaining the conversation key means you can then decrypt every message in that conversation, including ones sent before the key was obtained.
(An aside: Signal and WhatsApp make use of a protocol called Sesame which involves additional secret material that's shared between every device a user owns, hence why you have to do that QR code dance whenever you add a new device to your account. I'm grossly over-simplifying how clever the Signal approach is here, largely because I don't understand the details of it myself. The Signal protocol uses something called the Double Ratchet Algorithm to implement the actual message encryption keys in such a way that even if someone were able to successfully impersonate a device they'd only be able to decrypt messages sent after that point even if they had encrypted copies of every previous message in the conversation)
How's this avoided? Based on the UI that exists in the iOS version of the app, in a fairly straightforward way - each user can only have a single device that supports encrypted messages. If the user (or, in our hypothetical, a malicious Twitter) replaces the device key, the client will generate a notification. If the user pays attention to that notification and verifies with the recipient through some out of band mechanism that the device has actually been replaced, then everything is fine. But, if any participant in the conversation ignores this warning, the holder of the subverted key can obtain the conversation key and decrypt the entire history of the conversation. That's strictly worse than anything based on Signal, where such impersonation would simply not work, but even in the Twitter case it's not possible for someone to silently subvert the security.
So when Elon says Twitter wouldn't be able to decrypt these messages even if someone held a gun to his head, there's a condition applied to that - it's true as long as nobody fucks up. This is clearly better than the messages just not being encrypted at all in the first place, but overall it's a weaker solution than Signal. If you're currently using Twitter DMs, should you turn on encryption? As long as the limitations aren't too limiting, definitely! Should you use this in preference to Signal or WhatsApp? Almost certainly not. This seems like a genuine incremental improvement, but it'd be easy to interpret what Elon says as providing stronger guarantees than actually exist.
May 11, 2023 12:40 AM
May 08, 2023
After some hiccups with Indico we’ve finally set up a page that lists submitted microconference proposals. Along with seasoned veterans like Containers and Checkpoint/Restore and RISC-V we are glad to see Live Patching microconference returning after a long break and a brand new Linux Kernel Debugging microconference.
The Proposed microconferences page will be updated from time to time until the CFP for microconference proposals closes on June 1.
Be sure not to miss the deadline and submit your microconference!
May 08, 2023 05:20 PM
April 28, 2023
Much angst (and discussion ink) is wasted in open source over whether pulling code from one project with a different licence into another is allowable based on the compatibility of the two licences. I call this problem self-defeating because it creates sequestered islands of incompatibly licensed but otherwise fully open source code that can never meet in combination. Everyone from the most permissive open source person to the most ardent free software one would agree this is a problem that should be solved, but most of the islands would only agree to it being solved on their terms. Practically, we have got around this problem by judicious use of dual licensing, but that requires permission from the copyright holders, which can sometimes be hard to achieve; so dual licensing is more a band-aid than a solution.
In this blog post, I’m going to walk you through the reasons behind one of the most intractable compatibility disputes in open source: Apache-2 vs GPLv2. However, before we get there, I’m first going to walk through several legal issues in general contract and licensing law and then get on to the law and politics of open source licensing.
The Law of Contracts and Licences
Contracts and licences come from very similar branches of the law, and concepts that apply to one often apply to the other. For this legal tour we’ll begin with materiality in contracts, followed by licences, then look at repairable and irreparable legal harms, and finally the conditions necessary to take court action.
Materiality in Contracts
This is actually a well studied and taught bit of the law. The essence is that every contract has a “heart”, or core set of clauses, which really represent what the parties want from each other, and often a set of peripheral clauses which don’t really affect the “heart” of the contract if they’re not fulfilled. Not fulfilling the latter is said to cause a non-material breach of the contract (i.e. a breach which doesn’t terminate the contract if it happens, although a party may still have an additional legal claim for the breach if it caused some sort of harm). A classic illustration, often used in law schools, is a contract for the electrical wiring of a house that specifies yellow insulation. The contractor can’t find yellow, so wires the house with blue insulation. The contract doesn’t suffer a material breach because the wires are in the wall (where no-one can see), there’s no safety issue with the colour, and the heart of the contract was about wiring the house, not about wire colour.
Materiality in Licensing
This is actually much less often discussed, but it’s still believed that licences are subject to the same materiality constraints as contracts, and for this reason licences often contain “materiality clauses” to describe what the licensor considers to be material to it. So for the licensing example, consider a publisher wishing to publish a book written by a famous author known as the “Red Writer”. A licence to publish for per-copy royalties of 25% of the purchase price of the book is agreed, but the author inserts a clause specifying by exact Pantone number the red that must be the predominant colour of the binding (it’s why they’re known as the “Red Writer”) and also throws in a clause terminating the copyright licence for breaches. The publisher does the first batch of 10,000 copies, but only after they’ve been produced discovers that the red is actually one Pantone shade lighter than that specified in the licence. Since the cost of destroying the batch and reprinting is huge, the publisher offers the copies for sale knowing they’re out of spec. Some time later the “Red Writer” comes to know of the problem, decides the licence is breached and therefore terminated, and demands statutory damages (yes, they’ve registered their copyright) per copy on 10,000 books (about $300 million maximum). Would the author win?
The answer of course is that no court is going to award the author $300 million. Most courts would take the view that the heart of the contract was about money and if the author got their royalties per book, there was no material breach and the licence continues in force for the publisher. The “Red Writer” may have a separate tort claim for reputational damage if any was caused by the mis-colouring of the book, but that’s it.
Open Source Enforcement and Harm
Looking at the examples above, you can see that most commercial applications of the law eventually boil down to money: you go to court alleging a harm, the court must agree and then assess the monetary compensation for the harm which becomes damages. Long ago in community open source, we agreed that money could never compensate for a continuing licence violation because if it could we’d have set a price for buying yourself out of the terms of the licence (and some Silicon Valley Rich Companies would actually be willing to pay it, since it became the dual licence business model of companies like MySQL). The principle that mostly applies in open source enforcement actions is that the harm is to the open source ecosystem and is caused by non-compliance with the licence. Since such harm can only be repaired by compliance that’s the essence of the demand. Most enforcement cases have been about egregious breaches: lack of any source code rather than deficiencies in the offer to provide source code, so there’s actually very little in court records with regard to materiality of licence breaches.
One final thing to note about enforcement cases is that there must always be an allegation of material harm to someone or something, because you can’t go into court and argue on abstract legal principles (as we seem to like to do on various community mailing lists); you must show actual consequences as well. In addition to consequences, you must propose a viable remedy for the harm that a court could impose. As I said above, in open source cases it’s often about harms to the open source ecosystem caused by licence breaches, which is often accepted unchallenged by the defence because the case is about something obviously harmful to open source, like failure to provide source code (and the remedy is correspondingly: give us the source code). However, when considering the examples below it’s instructive to think about how an allegation of harm around a combination of incompatible open source licences would play out. Since the source code is available, there would be much more argument over what the actual harm to the ecosystem, if any, was, and even if some theoretical harm could be demonstrated, what would the remedy be?
Applying this to Apache-2 vs GPLv2
The divide between the Apache Software Foundation (ASF) and the Free Software Foundation (FSF) is old and partly rooted in politics. For proof of this notice the FSF says that the two licences (GPLv2 and Apache-2) are legally incompatible and in response the ASF says no-one should use any GPL licences anyway. The purpose of this section is to guide you through the technicalities of the incompatibility and then apply the materiality lessons from above to see if they actually matter.
Why GPLv2 is Incompatible with Apache-2
The argument is that Apache-2 contains two incompatible clauses: the patent termination clause (Section 3), which says that if you launch an action against anyone alleging the licensed code infringes your patent then all your rights to patents in the code under the Apache-2 licence terminate; and the indemnity clause (Section 9), which says that if you want to offer a warranty you must indemnify every contributor against any liability that warranty might incur. By contrast, GPLv2 contains an implied patent licence (Section 7) and a No Warranty clause (Section 11). Licence scholars mostly agree that the patent and indemnity terms in GPLv2 are weaker than those in Apache-2.
The incompatibility now occurs because GPLv2 says in Section 2 that the entire work after the combination must be shipped under GPLv2, which is possible: Apache-2 is mostly permissive except for the stronger patent and indemnity clauses. However, it is arguable that without keeping those stronger clauses on the Apache-2 code, you’ve violated the Apache-2 licence, while the GPLv2 no additional restrictions clause (Section 6) prevents you from keeping the stronger patent and indemnity clauses even on the Apache-2 portions of the code. Thus Apache-2 and GPLv2 are incompatible.
Materiality and Incompatibility
It should be obvious from the above that it’s hard to make a materiality argument for dropping the stronger Apache-2 provisions, because someone, somewhere might one day get into a situation where they would have helped. However, we can look at the materiality of the no additional restrictions clause in GPLv2. The FSF has always taken the absolutist position on this, which is why they think practically every other licence is GPLv2 incompatible: when you dig, at least one clause in every other open source licence can be regarded as an additional restriction. We also can’t take the view that the whole clause is not material: there are obviously some restrictions (like “you must pay me for every additional distribution of the code”) that would destroy the open source nature of the licence. This is the whole point of the no additional restrictions clause: to prevent the downstream addition of clauses incompatible with the free software goal of the licence.
I mentioned in the section on Materiality in Licences that some licences have materiality clauses that try to describe what’s important to the licensor. It turns out that GPLv2 actually does have a materiality clause: the preamble. We all tend to skip the preamble when analysing the licence, but there’s no denying it’s 7 paragraphs of justification for why the licence looks like it does and what its goals are.
So, to take the easiest analysis first: does the additional indemnity Apache-2 requires represent a material additional restriction? The preamble actually says “for each author’s protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors’ reputations.” Even on a plain reading, an additional strengthening of that by providing an indemnity to the original authors has to be consistent with the purpose as described, so the indemnity clause can’t be regarded as a material additional restriction (a restriction which would harm the aims of the licence) when read in combination with the preamble.
Now the patent termination clause. The preamble has this to say about patents “Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone’s free use or not licensed at all.” So giving licensees the ability to terminate the patent rights for patent aggressors would appear to be an additional method of fulfilling the last sentence. And, again, the patent termination clause seems to be consistent with the licence purpose and thus must also not be a material additional restriction.
Thus the final conclusion is that while the patent and indemnity clauses of Apache-2 do represent additional restrictions, they’re not material additional restrictions according to the purpose of the licence as outlined by its materiality clause and thus the combination is permitted. This doesn’t mean the combination is free of consequences: the added code still carries the additional restrictions and you must call that out to the downstream via some mechanism like licensing tags, but it can be done.
Proving It
The only way to prove the above argument is to win in court on it. However, here lies another good reason why combining Apache-2 and GPLv2 is allowed: there’s no real way to demonstrate harm to anything (either the copyright holder who agreed to GPLv2 or the community), and without a theory of actual harm, no-one would have standing to get to court to test the argument. This may look like a catch-22, but it’s another solid reason why, even in the absence of the materiality arguments, this would ultimately be allowed (if you can’t prevent it, it must be allowable, right …).
Community Problems with the Materiality Approach
The biggest worry about the loosening of the “no additional restrictions” clause of the GPL is opening the door to further abuse of the licence by unscrupulous actors. While I agree that this should be a concern, I think it is adequately addressed by rooting the materiality of the licence in the preamble or in provable harm to the open source community. There is also the flip side of this: licences are first and foremost meant to serve the needs of their development community rather than become inflexible implements for a group of enforcers, so even if there were some putative additional abuse in this approach, I suspect it would be outweighed by the licence compatibility benefit to the development communities in general.
Conclusion
The first thing to note is that Open Source incompatible licence combination isn’t as easy as simply combining the code under a single licence: You have to preserve the essential elements of both licences in the code which is combined (although not necessarily the whole project), so for an Apache-2/GPLv2 combination, you’ll need a note on the files saying they follow the stronger Apache patent termination and indemnity even if they’re otherwise GPLv2. However, as long as you’re careful the combination works for either of two reasons: because the Apache-2 restrictions aren’t material additional restrictions under the GPLv2 preamble or because no-one was actually harmed in the making of the combination (or both).
One can see from the above that similar arguments can be applied to various other supposedly incompatible licence combinations (exercise for the reader: try it with BSD-4-Clause and GPLv2). One final point that should be made is that licences and contracts are also all about what was in the minds of the parties, so for open source licences on community code, the norms and practices of the community matter in addition to what the licence actually says and what courts have made of it. In the final analysis, if the community norm of, say, a GPLv2 project is to accept Apache-2 code allowing for the stronger patent and indemnity clauses, then that will become the understood basis for interpreting the GPLv2 licence in that community.
For completeness, I should point out I’ve used the no harm no foul reasoning before when arguing that CDDL and GPLv2 are compatible.
April 28, 2023 01:27 PM
eBPF has many uses in improving computer security, but just taking eBPF observability tools as-is and using them for security monitoring would be like driving your car into the ocean and expecting it to float.
Observability tools are designed to have the lowest overhead possible so that they are safe to run in production while analyzing an active performance issue. Keeping overhead low can require tradeoffs in other areas: tcpdump(8), for example, will drop packets if the system is overloaded, resulting in incomplete visibility. This creates an obvious security risk for tcpdump(8)-based security monitoring: an attacker could overwhelm the system with mostly innocent packets, hoping that a few malicious packets get dropped and are left undetected. Long ago I encountered systems which met strict security auditing requirements with the following behavior: if the kernel could not log an event, it would immediately **halt**! While this was vulnerable to DoS attacks, it met the system's security auditing non-repudiation requirements, and logs were 100% complete.
There are ways to evade detection in other tools as well, like top(1) (since it samples processes and relies on their comm fields) and even ls(1) (by putting escape characters in file names). Rootkits do this. These techniques have been known in the industry for decades and haven't been "fixed" because they aren't "broken." They are cars, not boats. Similar methods can be used to evade detection in the eBPF bcc and bpftrace observability tools as well: overwhelming them with events, doing time-of-check-time-of-use (TOCTOU) attacks, escape characters, etc.
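As a concrete illustration of the comm-field evasion (a sketch; any unprivileged process can do this):

```c
/* Sketch: a process renaming its own comm field, which is what
 * top(1) and many eBPF tools display. Monitoring keyed on comm
 * alone is therefore trivially evadable. */
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(void)
{
    /* masquerade as a kernel worker thread (comm is 15 chars max) */
    prctl(PR_SET_NAME, "kworker/u8:1");
    printf("check /proc/%d/comm\n", getpid());
    pause();
    return 0;
}
```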
When will the eBPF community "fix" these tools? Well, when will Tesla fix my Model 3 so I can drive it under the Oakland bridge instead of over it? (I joke, and I don't drive a Tesla.) What you actually want is a security monitoring tool that meets a different set of requirements. Trying to adapt observability tools into security tools generally increases overhead (e.g., adding extra probes) which negates the main reason I developed these using eBPF in the first place. That would be like taking the wheels off a car to help make it float. There are other issues as well, like decreasing maintainability when moving probes from stable tracepoints to unstable inner workings for TOU tracing. Had I written these as security tools to start with, I would have done them differently: I'd start with LSM hooks, use a plugin model instead of standalone CLI tools, support configurable policies for event drop behavior, optimize event logging (which we still haven't [done](https://github.com/iovisor/bcc/issues/1033)), and lots more.
None of this should be news to experienced security engineers. I'm writing this post because others see the tools and examples I've shared and believe that, with a bit of shell scripting, they could have a good security monitoring product. I get that it looks that way, but in reality there's a bunch of work to do. Ideally I'd link to an example in bcc for security monitoring (we could create a subdirectory for them) but that currently doesn't exist. In the meantime my best advice is: If you are making a security monitoring product, hire a good security engineer (e.g., someone with solid pen-testing experience).
BPF for security monitoring was first explored by myself and a Netflix security engineer, Alex Maestretti, in a [2017 BSides talk] (some slides below). Since then I've worked with other security engineers on the topic (hi Michael, Nabil, Sargun, KP). (I also did security work many years ago, so I'm not completely new to the topic.)


BSidesSF2017 BPF security monitoring: Alex Maestretti, Brendan Gregg
There is potential for an awesome eBPF security product, and it's not just the visibility that's valuable (all those arrows); it's also the low overhead. These slides included our [overhead evaluation] showing bcc/eBPF was far more efficient than auditd or go-audit. (It was pioneering work, but unfortunately the slides are all we have: Alex, I, and others left Netflix before open sourcing it.) There are now other eBPF security products, including open source projects (e.g., [tetragon]), but I don't know enough about them all to have a recommendation.
Note that I'm talking about the observability tools here and not the eBPF kernel runtime itself, which has been designed as a secure sandbox. Nor am I talking about privilege escalation, since to run the tools you already need root access (that car has sailed!).
[2017 BSides talk]: https://www.brendangregg.com/Slides/BSidesSF2017_BPF_security_monitoring
[overhead evaluation]: https://www.brendangregg.com/Slides/BSidesSF2017_BPF_security_monitoring/#17
[tetragon]: https://github.com/cilium/tetragon
April 28, 2023 12:00 AM
April 25, 2023
Season 2 – Episode 4 – 2023/04/24
Summary
The latest stable kernel is Linux 6.3, released by Linus Torvalds on Sunday, April 23rd, 2023.
The latest mainline (development) kernel is 6.3. The Linux 6.4 “merge window” is open.
Linux 6.3
Linus Torvalds announced the release of Linux 6.3, noting, “It’s been a calm release this time around, and the last week was really no different. So here we are, right on schedule”. As usual, the KernelNewbies website has a summary of Linux 6.3, including links to the appropriate LWN (Linux Weekly News) articles with deep dives for each new feature (if you like this podcast and want to support Linux Kernel journalism, please subscribe to Linux Weekly News).
Linux 6.3 includes additional support for the Rust programming language, a new red-black tree data structure for BPF programs, and the removal of a large number of legacy Arm systems.
With the release of Linux 6.3 comes the opening of the “merge window” (period of time during which disruptive changes are allowed to be merged into the kernel source code) for what will be Linux 6.4 in another couple of months. The next podcast release will include a full summary.
Thorsten Leemhuis has been doing his usual excellent work tracking regressions. He posted multiple updates during the Linux 6.3 development cycle as usual, at one point saying that “The list of regressions from the 6.3 cycle I track is still quite short”. Most seemed to relate to build problems that had stalled for fixes. He had been concerned that there “are two regressions from the 6.2 cycle still not fixed”. These included that “Wake-on-lan (WOL) apparently is broken for a huge number of users” and “a huge number of DISCARD request on NVME devices with Btrfs” causing “a performance regression for some users”. With the final release of Linux 6.3, he has “nothing much to report”, with just “two regression from the 6.3 cycle…worth mentioning”.
Sebastian Andrej Siewior announced the PREEMPT_RT (Real-Time) patch v6.3-rc5-rt8.
Shuah Khan posted a summary of complaints addressed by the Linux Kernel Code of Conduct Committee between October 1, 2022 and March 31, 2023. During that time, they received reports of “Unacceptable behavior of comments in email” 6 times. Most were resolved with “Clarification on the Code of Conduct related to maintainer rights and responsibility to reject code”. Overall, “The reports were about the decisions made in rejecting code and these actions are not viewed as violations of the Code of Conduct”.
Russia
It cannot have escaped anyone’s attention that there is an active military conflict ongoing in Europe. I try to keep politics out of this podcast. We are, after all, not lacking for other places in which to debate our opinions. Similarly, for the most part, it can be convenient as Open Source developers to attempt to live in an online world devoid of politics and physical boundaries, but the real world very much continues to exist, and in the real world there are consequences (in the form of sanctions) faced by those who invade other sovereign nations. Those consequences can be imposed by governments, but also by fellow developers. The latter was the case over the past month with a patch posted to the Linux “netdev” networking development list.
An engineer from (sanctioned) Russian company Baikal Electronics attempted to post some network patches. His post was greeted by a terse response from one of the maintainers: “We don’t feel comfortable accepting patches from or relating to hardware produced by your organization. Please withhold networking contributions until further notice”. Baikal is known for its connections to the Russian state. The question of official policy was subsequently raised by James Harkonnen, citing a message allegedly from Linus in which he reportedly said “I will not stop any kernel developer I trust from taking patches from Russian sources that they in turn trust, but at the same time I will also not override anybody who goes “I don’t want to have anything to do with this” and doesn’t want to work with Russian companies”. James wanted a clarification as to any official position. As of this date no follow up discussion appears to have taken place, and there does not appear to be an official kernel-wide policy on Russian patches.
Introducing Bugbot
Konstantin Ryabitsev, who is responsible for running kernel.org on behalf of Linux Foundation, posted “Introducing bugbot”, in which he described a new tool that aims to be “a bridge between bugzilla [as in bugzilla.kernel.org] and public-inbox (the mailing list). The tool is “still a very early release” but it is able to “Create bugs from mailing list discussions, with full history”, and “Start mailing list threads from pre-triaged bugzilla bugs”. He closed (presciently) with “bugbot is very young and probably full of bugs, so it will still see a lot of change and will likely explode a couple of times”. True to the prediction, bugbot saw that it was summoned by the announcement of its existence and it replied to the thread, which Konstantin used as an example of the “may explode” comment he had made. Generally feedback to the new tool was positive.
Ongoing Development
Anjali Kulkarni posted version 3 of “Process connector bug fixes & enhancements”, a patch series to improve the performance of monitoring the exit of dependent threads. According to Anjali, “Oracle DB runs on a large scale with 100000s of short lived processes, starting up and exiting quickly. A process monitoring DB daemon which tracks and cleans up after processes that have died without a proper exit needs notifications only when a process died with a non-zero exit code (which should be rare)”. The patches allow a “client [to] register to listen for only exit or fork or a mix of all events. This greatly enhances performance”.
Vlastimil Babka posted “remove SLOB and allow kfree() with kmem_cache_alloc()”. In the patch posted, Vlastimil notes that “The SLOB allocator was deprecated in 6.2 so I think we can start exposing the complete removal in for-next and aim at 6.4 if there are no complaints”.
Thorsten Leemhuis (“the Linux kernel’s regression tracker”) poked an older thread about a 20% UDP performance degradation that Tariq Toukan (NVIDIA) had reported a few months ago. The report observed that a specific CFS (Completely Fair Scheduler, the current default Linux scheduler) patch was the culprit, but that the team discovering it “couldn’t come up with a good explanation how this patch causes this issue”. Thorsten tagged the mail for followup tracking.
Lukas Bulwahn posted “Updating information on lanana.org”. Lanana was set up to be “The Linux Assigned Names and Numbers Authority”, a play on organizations like IANA, the Internet Assigned Numbers Authority, which assigns e.g. IP addresses on the internet. As the patches note, “As described in Documentation/admin-guide/devices.rst, the device number register (or linux device list) is at Documentation/admin-guide/devices.txt and no longer maintained at lanana.org”. Lanana still technically hosts some of the LSB (Linux Standard Base) IDs.
On the Rust front, Asahi Lina posted “rust: add uapi crate” that “introduce[s] a new ‘uapi’ crate that will contain only these [uapi] publicly usable definitions” for use by userspace APIs.
Marcelo Tosatti posted “fold per-CPU vmstats remotely”, a patch that notes a (Red Hat) customer had encountered a system in which 48 out of 52 CPUs were in a “nohz_full” state (i.e. completely idle with the idle “tick” interrupt stopped), where a process on the system was “trapped in throttle_direct_reclaim” (a low memory “reclaim” codepath) but was not making progress because the counters the reclaim code wanted to use were stale (coming from a completely idle CPU) and not updating. The patch series causes the “vmstat_shepherd” kernel thread to “flush the per-CPU counters to the global counters from remote [other] CPUs”.
Reinette Chatre posted “vfio/pci: Support dynamic allocation of MSI-X interrupts”. MSIs are “Message Signaled Interrupts”, typically used by modern buses, such as PCIe, in which an interrupt is not signaled using a traditional wiggling of a wire, but instead by a memory write to a special magic address that subsequently causes an actual hard-wired interrupt to be asserted. In the patch posting, Reinette noted that “Qemu allocates interrupts incrementally at the time the guest unmasks an interrupt, for example each time a Linux guest runs request_irq(). Dynamic allocation of MSI-X interrupts was not possible until v6.2. This prompted Qemu to, when allocating a new interrupt, first release a previously allocated interrupts (including disable of MSI-X) followed by re-allocation of all interrupts that includes the new interrupt”. This of course may not be possible while a device or accelerator is running. The patches are marked as RFC (Request For Comments) because “vfio support for dynamic MSI-X needs to work with existing user space as well as upcoming user space that takes advantage of this feature”. Reinette adds, “I would appreciate guidance on the expectations and requirements surrounding error handling when considering existing user space”. She provides several scenarios to consider.
Tejun Heo posted version 3 of “sched: Implement BPF extensible scheduler class”, which “proposed a new scheduler class called ‘ext_sched_class’, or sched_ext, which allows scheduling policies to be implemented as BPF programs”. BPF (Berkeley Packet Filter) programs are small specially processed “bytecode” programs that can be loaded into the kernel and run within a special form of sandbox. They are commonly used to implement certain tracing logic and come with restrictions (for obvious reasons) on the nature of the modifications they can make to a running kernel. Due to their complexity, and potential intrusiveness of allowing scheduling algorithms to be implemented in BPF programs, the patches come with a (lengthy) “Motivation” section, describing the “Ease of experimentation and exploration”, among other reasons for allowing BPF extension of the scheduler instead of requiring traditional patches. An example provided includes that of implementing an L1TF (L1 Terminal Fault, a speculation execution security side-channel bug in certain x86 CPUs) aware scheduler that performs co-scheduling of (safe to pair) peer threads using sibling hyperthreads using BPF.
Joel Fernandes sent a patch adding himself as a maintainer for RCU, noting “I have spent years learning / contributing to RCU with several features, talks and presentations, with my most recent work being on Lazy-RCU. Please consider me for M[aintainer], so I can tell my wife why I spend a lot of my weekends and evenings on this complicated and mysterious thing — which is mostly in the hopes of preventing the world from burning down because everything runs on this one way or another”. RCU (Read-Copy-Update) is a notoriously difficult subsystem to understand yet it is a feature of certain modern Operating Systems that allows them to gain significant performance enhancements from the fundamental notion of having different views into the same data, based upon point-in-time producers and consumers that come and go. Joel later followed up with “Core RCU patches for 6.4”, including the shiny new MAINTAINERS change and several other fixes.
Separately, Paul McKenney (the original RCU author, and co-inventor) posted assorted updates to sleepable RCU (SRCU) reducing cache footprint and marking it non-optional in Kconfig (kernel build configuration), “courtesy of new-age printk() requirements”.
Mike Kravetz raised a concern about THP (Transparent Huge Page) “backed thread stacks”. In his mail, he cited a “product team” that had “recently experienced ‘memory bloat’ in their environment” due to the alignment of the allocations they had used for thread-local stacks within the Java Virtual Machine (JVM) runtime. Mike questioned whether stacks should always be THP-backed given that “Stacks by their very nature grow in somewhat unpredictable ways over time”. Most replies were along the lines that the JVM should alter how it does allocations, passing MADV_NOHUGEPAGE to madvise when allocating space for thread stacks.
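The suggested fix amounts to a one-liner at allocation time; roughly (a sketch, not the JVM's actual code):

```c
/* Sketch: opt a thread-stack allocation out of THP so the kernel
 * never backs it with huge pages whose mostly-unused tails bloat
 * memory. Not the JVM's actual code. */
#include <stddef.h>
#include <sys/mman.h>

void *alloc_thread_stack(size_t size)
{
    void *stack = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (stack == MAP_FAILED)
        return NULL;
    madvise(stack, size, MADV_NOHUGEPAGE);  /* avoid THP bloat */
    return stack;
}
```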
Carlos Llamas posted “Using page-fault handler in binder” about “trying to remove the current page handling in [Android’s userspace IPC] binder and switch to using ->fault() and other mm/ infrastructure”. He was seeking pointers and input on the direction from other developers.
Mike Rapoport posted a patch series that “move[s] core MM initialization to mm/mm_init.c”.
Randy Dunlap noted that uclinux.org was dead and requested references to it be removed from the Linux kernel MAINTAINERS file.
Jonathan Corbet (of LWN) posted various cleanups to the kernel documentation (which he maintains), including an “arch reorg” to clean up architecture specific docs.
Architectures
Arm
Lukasz Luba posted “Introduce runtime modifiable Energy Model”, a patch set that “adds a new feature which allows to modify Energy Model (EM) power values at runtime. It will allow to better reflect power model of a recent SoCs and silicon. Different characteristics of the power usage can be leverages and thus better decisions made during task placement”. Thus, the kernel’s (CFS) scheduler can (with this patch) make a decision about where to schedule (place, or migrate) a running process (known as a task within the kernel) according to the power usage that the silicon knows will vary according to nature of the workload, and its use of hardware. For example, heavy GPU use will cause a GPU to heat up and alter a chip’s (SoC’s) thermal properties in a manner that may make it better to migrate other tasks to a different core.
Itanium
Reports of Itanium’s demise may not have been greatly exaggerated, but when it comes to the kernel they may have been a little premature by a month or two. Florian Weimer followed up to “Retire IA64/Itanium support” with a question, “Is this still going ahead? In userspace, ia64 is of course full of special cases, too, so many of us really want to see it gone, but we can’t really start the removal process while there is still kernel support”.
LoongArch
Tianrui Zhao posted version 5 of “Add KVM LoongArch support”.
Huacai Chen posted a patch, “LoongArch: Make WriteCombine configurable for ioremap()” that aims to work around a PCIe protocol violation in the implementation of the LS7A chipset.
Separately, Huacai also posted a patch enabling the kernel itself to use FPU (Floating Point Unit) functions. Quoting the patch, “They can be used by some other kernel components, e.g. the AMDGPU graphic driver for DCN”.
WANG Xuerui posted “LoongArch: Make bounds-checking instructions useful”, referring to “BCE” (Bounds Checking Error) instructions, similar to those of other architectures, such as x86_64.
POWER
Laurent Dufour posted “Online new threads according to the current SMT level”, which aims to balance a hotplugged CPU’s SMT level against the current one used by the overall system. For example, a system capable of SMT8 but booted in SMT4 will currently nonetheless online all 8 SMT threads of a subsequently added CPU, rather than only 4 (to match the system).
RISC-V
Evan Green posted the fourth version of “RISC-V Hardware Probing User Interface”, which aims to handle the number of (potentially incompatible) ISA extensions present in implementations of the RISC-V architecture. The basic idea is to provide a vDSO (virtual Dynamic Shared Object – a kind of library that appears in userspace and is fast to link against, but is owned by the kernel) and a backing syscall (for fallback use by the vDSO in certain cases) that can quickly hand an application key/value pairs representative of potential ISA features present on a system. The previous attempts had experienced pushback, so this time Evan came with performance numbers showing the (many) orders of magnitude difference in performance between using a vDSO/syscall approach vs. the sysfs file interface originally counter-proposed by Greg KH (Greg Kroah-Hartman). Greg had preferred that an application perform many open calls to parse sysfs files in order to determine the capabilities of a system, but this would be expensive for every binary. This patch series was later merged by Palmer Dabbelt (the RISC-V kernel maintainer) and should therefore make its way into the Linux 6.4 kernel series in the next couple of months.
Sia Jee Heng posted version 5 of a patch series implementing hibernation support for RISC-V. According to the posting, “This series adds RISC-V Hibernation/suspend to disk support. Low level Arch functions were created to support hibernation”. The cover letter explains how e.g. swsusp_arch_resume “creates a temporary page table [covering] only the linear map. It copies the restore code to a ‘safe’ page, then [starts to] restore the memory image”.
Heiko Stuebner posted “RISC-V: support some cryptography accelerations”. These rely on version 14 of a previous patch series adding experimental support for the “v” (vector) extension, which has not been ratified (made official) by the RISC-V International organization yet. And speaking of this, a recent discussion of the non-standard implementation of the RISC-V vector extension in the “T-Head C9xx” cores suggests describing those as an “errata” implementation.
The PINE64 project recently began shipping a RISC-V development board known as “Star64”. This board uses the StarFive JH7110 SoC for which Samin Guo recently posted an updated ethernet driver, apparently based on the DesignWare MAC from Synopsys. Separately, Walker Chen posted a DMA driver for the same SoC, and Mason Huo posted cpufreq support (which included enabling “the axp15060 pmic for the cpu power source”). Seems an effort is underway to upstream support for this low-cost “Raspberry Pi”-like alternative in the RISC-V ecosystem.
Greg Ungerer posted “riscv: support ELF format binaries in nommu mode” which does what it says on the tin: “add the ability to run ELF format binaries when running RISC-V in nommu mode. That support is actually part of the ELF-FDPIC loader, so these changes are all about making that work on RISC-V”. Greg notes, “These changes have not been used to run actual ELF-FDPIC binaries. It is used to load and run normal ELF – compiled -pie format. Though the underlying changes are expected to work with full ELF-FDPIC binaries if or when that is supported on RISC-V in gcc”.
Anup Patel posted version 18 of “RISC-V IPI Improvements” which aims to teach RISC-V (on suitable hardware) how to use “normal per-CPU interrupts” to send IPIs (Inter-Processor Interrupts), as well as remote TLB (Translation Lookaside Buffer) flushes and cache maintenance operations without having to resort to calls into “M” mode firmware.
x86 (x86_64)
Rick Edgecombe posted version 8 of “Shadow stacks for userspace”, to which Borislav Petkov replied “Yes, finally! That was loooong in the making. Thanks for the persistence and patience”. He signed off as having reviewed the patches.
Ian Rogers posted “Event updates for GNR, MTL and SKL”. Apparently these perf events are generated automatically using a script on Intel’s github (that’s pretty sweet).
Usama Arif posted version 15 of “Parallel CPU bringup for x86_64”. This is about making parallel calls to INIT/SIPI/SIPI (the initialization sequence used to bring up x86 CPUs) rather than the single-threaded process the Linux kernel previously used.
Tony Luck posted version 2 of “Handle corrected machine check interrupt storms”, which includes additional patches from Smita Koralahalli that “Extend the logic of handling Intel’s corrected machine check interrupt storms to AMD’s threshold interrupts”.
Yi Liu posted “iommu: Add nested domain support”, which “Introduce[s] a new domain type for a user space I/O address, which is nested on top of another address space address represented by a UNMANAGED domain”.
Kirill A. Shutemov posted version 16 of “Linear Address Masking enabling”. As he noted, “(LAM) modifies the checking that is applied to 64-bit linear addresses, allowing software to use of the untranslated address bits for metadata. The capability can be used for efficient address sanitizers (ASAN) implementation and for optimizations in JITs and virtual machines”. A similar capability has been present in architectures such as Arm for many, many years as TBI (Top Byte Ignore).
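As a purely illustrative sketch (the bit layout below is an assumption in the spirit of LAM’s 57-bit mode, not the exact architectural definition), pointer tagging looks like this. With LAM or TBI enabled, the hardware ignores the tag bits on translation, so the manual strip before dereference becomes unnecessary; here we mask by hand to stay portable.

#include <cstdint>
#include <cstdio>

constexpr unsigned TAG_SHIFT = 57;                  // assumed layout
constexpr uintptr_t ADDR_MASK = (uintptr_t{1} << TAG_SHIFT) - 1;

void *tag_pointer(void *p, uint8_t tag) {
    return (void *)(((uintptr_t)p & ADDR_MASK) | ((uintptr_t)tag << TAG_SHIFT));
}

uint8_t pointer_tag(void *p) { return (uintptr_t)p >> TAG_SHIFT; }

void *strip_tag(void *p) { return (void *)((uintptr_t)p & ADDR_MASK); }

int main() {
    int x = 42;
    // Stash metadata (0x2a) in the untranslated upper bits, then strip
    // it before dereferencing -- the step LAM/TBI make unnecessary.
    void *tp = tag_pointer(&x, 0x2a);
    std::printf("tag=%#x value=%d\n", pointer_tag(tp), *(int *)strip_tag(tp));
    return 0;
}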
Kuppuswamy Sathyanarayanan posted “TDX Guest Quote generation support”, which enables “TDX” (Trust Domain Extensions – aka Confidential Compute) guests to attest to their “trustworthiness to other entities before provisioning secrets to the guest”. The patch describes a two-step process: a “TDREPORT generation” that captures measurements, followed by a “Quote generation” in which the report is sent to a “Quoting Enclave” (QE) that generates a “remotely verifiable Quote”. A special conduit is provided for guests to send these quotes.
Shan Kang posted some benchmark results from KVM for Intel’s new “FRED” (Flexible Return and Event Delivery) architecture, an enhanced replacement for the legacy syscall/sysenter-style event delivery and return mechanisms.
Mario Limonciello posted “Add vendor agnostic mechanism to report hardware sleep”, noting that “An import[ant] part of validating that S0ix [an SoC-level idle power state] worked properly is to check how much of a cycle was spent in a hardware sleep state”.
April 25, 2023 12:11 AM
April 24, 2023
Linux Plumbers Conference 2023 is pleased to host the eBPF & Networking Track!
For the fourth year in a row, the eBPF & Networking Track is going to bring together developers, maintainers, and other contributors from all around the globe to discuss improvements to the Linux kernel’s networking stack as well as the BPF subsystem and their surrounding user space ecosystems such as libraries, loaders, compiler backends, and other related system tooling.
The gathering is designed to foster collaboration and face to face discussion of ongoing development topics as well as to encourage bringing new ideas into the development community for the advancement of both subsystems.
Proposals can cover a wide range of topics related to Linux networking and BPF covering improvements in areas such as (but not limited to) core networking, protocols, routing, performance, tunneling, drivers, BPF infrastructure and its use in tracing, security, networking, scheduling and beyond, as well as non-kernel components like libraries, compilers, testing infra and tools.
Please come and join us in the discussion. We hope to see you there!
April 24, 2023 08:49 AM
F38 just released, and I'm seeing a bunch of people complain that TF2 dies on AMD or other platforms when lavapipe is installed. Who's at fault? I've no real idea. How to fix it? I've no real idea.
What's happening?
AMD OpenGL drivers use LLVM as the backend compiler. Fedora 38 updated to LLVM 16. LLVM 16 is built with C++17 by default. C++17 introduces new "operator new/delete" interfaces[1].
TF2 ships with its own libtcmalloc_minimal.so implementation. tcmalloc expects to replace all the new/delete interfaces, but the version in TF2 must not support, or has incorrect support for, the new aligned interfaces.
What happens is that when TF2 probes OpenGL, LLVM gets loaded, and when DenseMap initializes, one "new" path fails to go into tcmalloc but the "delete" path does, which causes tcmalloc to explode with
"src/tcmalloc.cc:278] Attempt to free invalid pointer"
Fixing it?
I'll talk to Valve and see if we can work out something; LLVM 16 doesn't seem to support building with C++14 anymore. I'm not sure whether statically linking libstdc++ into LLVM might avoid the tcmalloc overrides; it might also not be acceptable to the wider Fedora community.
[1] https://www.cppstories.com/2019/08/newnew-align/
April 24, 2023 03:29 AM
April 19, 2023
There are plans for nouveau to support using the NVIDIA-supplied GSP firmware in order to support new hardware going forward.
The nouveau project doesn't have any input or control over the firmware. NVIDIA have made no promises around stable ABI or firmware versioning. The current status quo is that NVIDIA will release versioned signed gsp firmwares as part of their driver distribution packages that are version locked to their proprietary drivers (open source and binary). They are working towards allowing these firmwares to be redistributed in linux-firmware.
The NVIDIA firmwares are quite large. The nouveau project will control the selection of which versions of the released firmwares are to be supported by the driver; it's likely a newer firmware will only be pulled into linux-firmware for:
- New hardware support (new GPU family or GPU support)
- Security fix in the firmware
- New features that are required to be supported
This should at least limit the number of firmwares in the linux-firmware project.
However, a secondary effect of the size of the firmwares is that having the nouveau kernel module add more and more MODULE_FIRMWARE lines with each iteration will mean initramfs sizes get steadily larger on systems, and after a while the initramfs will contain a few GSP firmwares that the driver doesn't even need to run.
To combat this I've looked into adding some sort of module firmware grouping which dracut can pick one entry out of.
It currently looks something like:
MODULE_FIRMWARE_GROUP_ONLY_ONE("ga106-gsp");
MODULE_FIRMWARE("nvidia/ga106/gsp/gsp-5258902.bin");
MODULE_FIRMWARE("nvidia/ga106/gsp/gsp-5303002.bin");
MODULE_FIRMWARE_GROUP_ONLY_ONE("ga106-gsp");
This "group only one" marker ends up in the module info section, and dracut will pick only one firmware from the group to install into the initramfs. Due to how the module info section is constructed, this will end up picking the last firmware in the group first.
The dracut MR is:
https://github.com/dracutdevs/dracut/pull/2309
The kernel one liner is:
https://lore.kernel.org/all/20230419043652.1773413-1-airlied@gmail.com/T/#u
April 19, 2023 05:16 AM
April 18, 2023
Here's an article from a French anarchist describing how his (encrypted) laptop was seized after he was arrested, and material from the encrypted partition has since been entered as evidence against him. His encryption password was supposedly greater than 20 characters and included a mixture of cases, numbers, and punctuation, so in the absence of any sort of opsec failures this implies that even relatively complex passwords can now be brute forced, and we should be transitioning to even more secure passphrases.
Or does it? Let's go into what LUKS is doing in the first place. The actual data is typically encrypted with AES, an extremely popular and well-tested encryption algorithm. AES has no known major weaknesses and is not considered to be practically brute-forceable - at least, assuming you have a random key. Unfortunately it's not really practical to ask a user to type in 128 bits of binary every time they want to unlock their drive, so another approach has to be taken.
This is handled using something called a "key derivation function", or KDF. A KDF is a function that takes some input (in this case the user's password) and generates a key. As an extremely simple example, think of MD5 - it takes an input and generates a 128-bit output, so we could simply MD5 the user's password and use the output as an AES key. While this could technically be considered a KDF, it would be an extremely bad one! MD5s can be calculated extremely quickly, so someone attempting to brute-force a disk encryption key could simply generate the MD5 of every plausible password (probably on a lot of machines in parallel, likely using GPUs) and test each of them to see whether it decrypts the drive.
(things are actually slightly more complicated than this - your password is used to generate a key that is then used to encrypt and decrypt the actual encryption key. This is necessary in order to allow you to change your password without having to re-encrypt the entire drive - instead you simply re-encrypt the encryption key with the new password-derived key. This also allows you to have multiple passwords or unlock mechanisms per drive)
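As a toy illustration of why a fast hash makes a terrible KDF (FNV-1a standing in for MD5 here, and the password and wordlist are invented), the attack loop is one cheap hash per guess and trivially parallelizable:

#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// FNV-1a, a toy stand-in for MD5: the only property that matters here
// is that one guess costs a few dozen cycles.
uint64_t fast_hash(const std::string &s) {
    uint64_t h = 1469598103934665603ULL;
    for (unsigned char c : s) { h ^= c; h *= 1099511628211ULL; }
    return h;
}

int main() {
    // Invented victim password; pretend fast_hash() is the KDF.
    const uint64_t target_key = fast_hash("hunter2");

    // The attacker just walks a wordlist. With a cheap KDF each guess
    // is one hash, and nothing stops running millions of these loops
    // in parallel across GPU execution units.
    const std::vector<std::string> wordlist = {"password", "letmein", "hunter2"};
    for (const auto &guess : wordlist)
        if (fast_hash(guess) == target_key)
            std::printf("cracked: %s\n", guess.c_str());
    return 0;
}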
Good KDFs reduce this risk by being what's technically referred to as "expensive". Rather than performing one simple calculation to turn a password into a key, they perform a lot of calculations. The number of calculations performed is generally configurable, in order to let you trade off between the amount of security (the number of calculations you'll force an attacker to perform when attempting to generate a key from a potential password) and performance (the amount of time you're willing to wait for your laptop to generate the key after you type in your password so it can actually boot). But, obviously, this tradeoff changes over time - defaults that made sense 10 years ago are not necessarily good defaults now. If you set up your encrypted partition some time ago, the number of calculations required may no longer be considered up to scratch.
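Here is a minimal sketch of that configurable expense (cost stretching in the abstract, not real PBKDF2; mix() and derive_key() are invented names): chain the hash N times so every guess costs N steps, where N is the knob you tune against your hardware.

#include <cstdint>
#include <cstdio>
#include <string>

// One mixing step of a toy hash (not a real primitive).
uint64_t mix(uint64_t h, unsigned char c) { return (h ^ c) * 1099511628211ULL; }

// Absorb the password, then iterate N more times. Every guess an
// attacker makes now costs N steps too.
uint64_t derive_key(const std::string &pw, uint32_t iterations) {
    uint64_t h = 1469598103934665603ULL;
    for (unsigned char c : pw) h = mix(h, c);
    for (uint32_t i = 0; i < iterations; ++i)   // the configurable expense
        h = mix(h, static_cast<unsigned char>(i));
    return h;
}

int main() {
    std::printf("key: 0x%llx\n",
                (unsigned long long)derive_key("hunter2", 1000000));
    return 0;
}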
And, well, some of these assumptions are kind of bad in the first place! Just making things computationally expensive doesn't help a lot if your adversary has the ability to test a large number of passwords in parallel. GPUs are extremely good at performing the sort of calculations that KDFs generally use, so an attacker can "just" get a whole pile of GPUs and throw them at the problem. KDFs that are computationally expensive don't do a great deal to protect against this. However, there's another axis of expense that can be considered - memory. If the KDF algorithm requires a significant amount of RAM, the degree to which it can be performed in parallel on a GPU is massively reduced. A Geforce 4090 may have 16,384 execution units, but if each password attempt requires 1GB of RAM and the card only has 24GB on board, the attacker is restricted to running 24 attempts in parallel.
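This is the shape of a memory-hard loop, purely illustrative (it is the idea behind scrypt/argon2, not their actual algorithms): fill a large buffer, then make data-dependent reads across it, so each guess running in parallel needs its own copy of the buffer.

#include <cstdint>
#include <cstdio>
#include <vector>

// A GPU attacker must now pay for mem_words of RAM per parallel guess,
// which is what turns 16,384 execution units into 24 usable attempts.
uint64_t memory_hard(uint64_t seed, size_t mem_words, uint32_t passes) {
    std::vector<uint64_t> buf(mem_words);
    uint64_t h = seed;
    for (auto &w : buf) {                 // sequential fill from the seed
        h = h * 6364136223846793005ULL + 1442695040888963407ULL;
        w = h;
    }
    for (uint32_t p = 0; p < passes; ++p) // unpredictable, data-dependent reads
        for (size_t i = 0; i < mem_words; ++i)
            h ^= buf[h % mem_words];
    return h;
}

int main() {
    // A real parameter choice might be 1GB per guess; scaled down here
    // (2^20 eight-byte words = 8MB) so the example runs quickly.
    std::printf("0x%llx\n", (unsigned long long)memory_hard(42, 1 << 20, 2));
    return 0;
}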
So, in these days of attackers with access to a pile of GPUs, a purely computationally expensive KDF is just not a good choice. And, unfortunately, the subject of this story was almost certainly using one of those. Ubuntu 18.04 used the LUKS1 header format, and the only KDF supported in this format is PBKDF2. This is not a memory expensive KDF, and so is vulnerable to GPU-based attacks. But even so, systems using the LUKS2 header format used to default to argon2i, which is memory strong, but not designed to be resistant to GPU attack (thanks to the comments pointing out my misunderstanding here). New versions default to argon2id, which is. You want to be using argon2id.
What makes this worse is that distributions generally don't update this in any way. If you installed your system and it gave you pbkdf2 as your KDF, you're probably still using pbkdf2 even if you've upgraded to a system that would use argon2id on a fresh install. Thankfully, this can all be fixed-up in place. But note that if anything goes wrong here you could lose access to all your encrypted data, so before doing anything make sure it's all backed up (and figure out how to keep said backup secure so you don't just have your data seized that way).
First, make sure you're running as up-to-date a version of your distribution as possible. Having tools that support the LUKS2 format doesn't mean that your distribution has all of that integrated, and old distribution versions may allow you to update your LUKS setup without actually supporting booting from it. Also, if you're using an encrypted /boot, stop now - very recent versions of grub2 support LUKS2, but they don't support argon2id, and this will render your system unbootable.
Next, figure out which device under /dev corresponds to your encrypted partition. Run
lsblk
and look for entries that have a type of "crypt". The device above that in the tree is the actual encrypted device. Record that name, and run
sudo cryptsetup luksHeaderBackup /dev/whatever --header-backup-file /tmp/luksheader
and copy that to a USB stick or something. If something goes wrong here you'll be able to boot a live image and run
sudo cryptsetup luksHeaderRestore /dev/whatever --header-backup-file luksheader
to restore it.
(Edit to add: Once everything is working, delete this backup! It contains the old weak key, and someone with it can potentially use that to brute force your disk encryption key using the old KDF even if you've updated the on-disk KDF.)
Next, run
sudo cryptsetup luksDump /dev/whatever
and look for the Version: line. If it's version 1, you need to update the header to LUKS2. Run
sudo cryptsetup convert /dev/whatever --type luks2
and follow the prompts. Make sure your system still boots, and if not go back and restore the backup of your header. Assuming everything is ok at this point, run
sudo cryptsetup luksDump /dev/whatever
again and look for the PBKDF: line in each keyslot (pay attention only to the keyslots, ignore any references to pbkdf2 that come after the Digests: line). If the PBKDF is either "pbkdf2" or "argon2i" you should convert to argon2id. Run the following:
sudo cryptsetup luksConvertKey /dev/whatever --pbkdf argon2id
and follow the prompts. If you have multiple passwords associated with your drive you'll have multiple keyslots, and you'll need to repeat this for each password.
Distributions! You should really be handling this sort of thing on upgrade. People who installed their systems with your encryption defaults several years ago are now much less secure than people who perform a fresh install today. Please please please do something about this.
April 18, 2023 06:35 PM
April 17, 2023
CPUs can't do anything without being told what to do, which leaves the obvious problem of how do you tell a CPU to do something in the first place. On many CPUs this is handled in the form of a reset vector - an address the CPU is hardcoded to start reading instructions from when power is applied. The address the reset vector points to will typically be some form of ROM or flash that can be read by the CPU even if no other hardware has been configured yet. This allows the system vendor to ship code that will be executed immediately after poweron, configuring the rest of the hardware and eventually getting the system into a state where it can run user-supplied code.
The specific nature of the reset vector on x86 systems has varied over time, but it's effectively always been 16 bytes below the top of the address space - so, 0xffff0 on the 20-bit 8086, 0xfffff0 on the 24-bit 80286, and 0xfffffff0 on the 32-bit 80386. Convention on x86 systems is to have RAM starting at address 0, so the top of address space could be used to house the reset vector with as low a probability of conflicting with RAM as possible.
The most notable thing about x86 here, though, is that when it starts running code from the reset vector, it's still in real mode. x86 real mode is a holdover from a much earlier era of computing. Rather than addresses being absolute (ie, if you refer to a 32-bit address, you store the entire address in a 32-bit or larger register), they are 16-bit offsets that are added to the value stored in a "segment register". Different segment registers existed for code, data, and stack, so a 16-bit address could refer to different actual addresses depending on how it was being interpreted - jumping to a 16 bit address would result in that address being added to the code segment register, while reading from a 16 bit address would result in that address being added to the data segment register, and so on. This is all in order to retain compatibility with older chips, to the extent that even 64-bit x86 starts in real mode with segments and everything (and, also, still starts executing at 0xfffffff0 rather than 0xfffffffffffffff0 - 64-bit mode doesn't support real mode, so there's no way to express a 64-bit physical address using the segment registers, so we still start just below 4GB even though we have massively more address space available).
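A worked example of the segment arithmetic (ordinary C++; nothing about the code itself is x86-specific): physical address = segment × 16 + offset, which also shows how different segment:offset pairs alias the same physical address.

#include <cstdint>
#include <cstdio>

// 8086-style real mode: a 20-bit physical address formed from two
// 16-bit values. (Modern CPUs cheat at reset: CS gets a hidden base of
// 0xFFFF0000, which is how execution can start at 0xFFFFFFF0 while
// still nominally being in real mode.)
uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
    return (uint32_t{segment} << 4) + offset;
}

int main() {
    // Two different segment:offset pairs, one physical address -- the
    // 8086 reset vector, 16 bytes below the top of the 20-bit space.
    std::printf("0xFFFF:0x0000 -> 0x%05X\n", real_mode_addr(0xFFFF, 0x0000));
    std::printf("0xF000:0xFFF0 -> 0x%05X\n", real_mode_addr(0xF000, 0xFFF0));
    return 0;
}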
Anyway. Everyone knows all this. For modern UEFI systems, the firmware that's launched from the reset vector then reprograms the CPU into a sensible mode (ie, one without all this segmentation bullshit), does things like configure the memory controller so you can actually access RAM (a process which involves using CPU cache as RAM, because programming a memory controller is sufficiently hard that you need to store more state than you can fit in registers alone, which means you need RAM, but you don't have RAM until the memory controller is working, but thankfully the CPU comes with several megabytes of RAM on its own in the form of cache, so phew). It's kind of ugly, but that's a consequence of a bunch of well-understood legacy decisions.
Except. This is not how modern Intel x86 boots. It's far stranger than that. Oh, yes, this is what it looks like is happening, but there's a bunch of stuff going on behind the scenes. Let's talk about boot security. The idea of any form of verified boot (such as UEFI Secure Boot) is that a signature on the next component of the boot chain is validated before that component is executed. But what verifies the first component in the boot chain? You can't simply ask the BIOS to verify itself - if an attacker can replace the BIOS, they can replace it with one that simply lies about having done so. Intel's solution to this is called Boot Guard.
But before we get to Boot Guard, we need to ensure the CPU is running in as bug-free a state as possible. So, when the CPU starts up, it examines the system flash and looks for a header that points at CPU microcode updates. Intel CPUs ship with built-in microcode, but it's frequently old and buggy and it's up to the system firmware to include a copy that's new enough that it's actually expected to work reliably. The microcode image is pulled out of flash, a signature is verified, and the new microcode starts running. This is true in both the Boot Guard and the non-Boot Guard scenarios. But for Boot Guard, before jumping to the reset vector, the microcode on the CPU reads an Authenticated Code Module (ACM) out of flash and verifies its signature against a hardcoded Intel key. If that checks out, it starts executing the ACM. Now, bear in mind that the CPU can't just verify the ACM and then execute it directly from flash - if it did, the flash could detect this, hand over a legitimate ACM for the verification, and then feed the CPU different instructions when it reads them again to execute them (a Time of Check vs Time of Use, or TOCTOU, vulnerability). So the ACM has to be copied onto the CPU before it's verified and executed, which means we need RAM, which means the CPU already needs to know how to configure its cache to be used as RAM.
Anyway. We now have an ACM loaded and verified, and it can safely be executed. The ACM does various things, but the most important from the Boot Guard perspective is that it reads a set of write-once fuses in the motherboard chipset that represent the SHA256 of a public key. It then reads the initial block of the firmware (the Initial Boot Block, or IBB) into RAM (or, well, cache, as previously described) and parses it. There's a block that contains a public key - it hashes that key and verifies that it matches the SHA256 from the fuses. It then uses that key to validate a signature on the IBB. If it all checks out, it executes the IBB and everything starts looking like the nice simple model we had before.
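Reduced to pseudocode-ish C++ (sha256() and sig_verify() are toy stand-ins rather than real crypto, and none of these type or field names are Intel's; this is just the shape of the check described above):

#include <array>
#include <cstdint>
#include <vector>

using Digest = std::array<uint8_t, 32>;

Digest sha256(const std::vector<uint8_t> &data) {
    Digest d{};                        // toy stand-in, NOT real SHA-256
    for (size_t i = 0; i < data.size(); ++i) d[i % 32] ^= data[i];
    return d;
}

bool sig_verify(const std::vector<uint8_t> &key,
                const std::vector<uint8_t> &sig,
                const std::vector<uint8_t> &data) {
    (void)key; (void)sig; (void)data;  // toy stand-in, NOT real signature checking
    return true;
}

bool verify_ibb(const Digest &fused_key_hash,        // write-once chipset fuses
                const std::vector<uint8_t> &oem_key, // public key carried in the IBB
                const std::vector<uint8_t> &ibb,     // already copied to cache-as-RAM
                const std::vector<uint8_t> &ibb_sig) {
    // The fuses are too small to hold a key, so they hold its hash:
    // first prove the embedded key is the one the OEM fused in...
    if (sha256(oem_key) != fused_key_hash)
        return false;
    // ...then let the now-trusted key validate the signature over the
    // IBB copy (verifying in place in flash would reopen the TOCTOU hole).
    return sig_verify(oem_key, ibb_sig, ibb);
}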
Except, well, doesn't this seem like an awfully complicated bunch of code to implement in real mode? And yes, doing all of this modern crypto with only 16-bit registers does sound like a pain. So, it doesn't. All of this is happening in a perfectly sensible 32 bit mode, and the CPU actually switches back to the awful segmented configuration afterwards so it's still compatible with an 80386 from 1986. The "good" news is that at least firmware can detect that the CPU has already configured the cache as RAM and can skip doing that itself.
I'm skipping over some steps here - the ACM actually does other stuff around measuring the firmware into the TPM and doing various bits of TXT setup for people who want DRTM in their lives, but the short version is that the CPU bootstraps itself into a state where it works like a modern CPU and then deliberately turns a bunch of the sensible functionality off again before it starts executing firmware. I'm also missing out the fact that this entire process only kicks off after the Management Engine says it can, which means we're waiting for an entirely independent x86 to boot an entire OS before our CPU even starts pretending to execute the system firmware.
Of course, as mentioned before, on modern systems the firmware will then reprogram the CPU into something actually sensible so OS developers no longer need to care about this[1][2], which means we've bounced between multiple states for no reason other than the possibility that someone wants to run legacy BIOS and then boot DOS on a CPU with like 5 orders of magnitude more transistors than the 8086.
tl;dr why can't my x86 wake up with the gin protected mode already inside it
[1] Ha uh except that on ACPI resume we're going to skip most of the firmware setup code so we still need to handle the CPU being in fucking 16-bit mode because suspend/resume is basically an extremely long reboot cycle
[2] Oh yeah also you probably have multiple cores on your CPU and well bad news about the state most of the cores are in when the OS boots because the firmware never started them up so they're going to come up in 16-bit real mode even if your boot CPU is already in 64-bit protected mode, unless you were using TXT in which case you have a different sort of nightmare that if we're going to try to map it onto real world nightmare concepts is one that involves a lot of teeth. Or, well, that used to be the case, but ACPI 6.4 (released in 2021) provides a mechanism for the OS to ask the firmware to wake the CPU up for it so this is invisible to the OS, but you're still relying on the firmware to actually do the heavy lifting here
April 17, 2023 06:54 AM
Content copyright by their respective authors.