Kernel Planet

February 21, 2017

Pavel Machek: X220 to play with

Nice machine. Slightly bigger than the X60, with a bezel around the display that is way too big, but quite powerful. The biggest problem seems to be that it does not accept 9.5mm-high drives...

I tried 4.10 there, and got two nasty messages during bootup. Am I the last one running 32-bit kernels?

I was hoping to get a three-monitor configuration on my desk, but apparently the X220 cannot do that. xrandr reports 8 outputs (!), but it physically only has 3: LVDS, DisplayPort and VGA. Unfortunately, it seems to have only 2 CRTCs, so only 2 outputs can be active at a time. Is there a way around that?
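To check the same limitation on any machine, xrandr can show both the connected outputs and which CRTCs each output is allowed to use; a rough sketch (output names vary by machine):

# list the physically connected outputs
xrandr --query | grep ' connected'
# show the candidate CRTCs per output; only two distinct CRTC numbers
# overall means only two outputs can be driven at the same time
xrandr --verbose | grep -E 'connected|CRTCs:'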

February 21, 2017 10:21 PM

Gustavo F. Padovan: Collabora Contributions to Linux Kernel 4.10

Linux Kernel v4.10 is out and this time Collabora contributed a total of 39 patches by 10 different developers. You can read more about the v4.10 merge window on LWN.net: part 1, part 2 and part 3.

Now here is a look at the changes made by Collaborans. To begin with, Daniel Stone fixed an issue when waiting for fences in the i915 driver, while Emil Velikov added support for reading the PCI revision via sysfs, improving start-up time in some applications.

Emilio López added a set of selftests for the Sync File Framework and Enric Balletbo i Serra added support for the ChromeOS Embedded Controller Sensor Hub. Fabien Lahoudere added support for the NVD9128 simple panel and enabled the ULPI PHY for USB on i.MX.

Gabriel Krisman fixed spurious CARD_INT interrupts for SD cards that were preventing one of our kernelCI machines from booting. On the graphics side, Gustavo Padovan added Explicit Synchronization support to DRM/KMS.

Martyn Welch added GPIO support for the CP2105 USB serial device, while Nicolas Dufresne fixed Exynos4 FIMC to round up the image size to the row size for tiled formats; otherwise there would not be enough space to fit the last row of the image. Last but not least, Tomeu Vizoso added a debugfs interface to capture frame CRCs, which is quite helpful for debugging and automated graphics testing.

And now the complete list of Collabora contributions:

Daniel Stone (1):

Emil Velikov (1):

Emilio López (7):

Enric Balletbo i Serra (3):

Fabien Lahoudere (4):

Gabriel Krisman Bertazi (1):

Gustavo Padovan (18):

Martyn Welch (1):

Nicolas Dufresne (1):

Tomeu Vizoso (2):

February 21, 2017 04:02 PM

February 20, 2017

Kernel Podcast: Kernel Podcast for Feb 20th, 2017

UPDATE: Thanks to LWN for the mention. This podcast is in “alpha”. It will start to show up in the iTunes and Google Play stores (the latter didn’t exist last time I did this thing!) within the next day or two. You can also subscribe (for the moment) by using this link: kernel podcast audio rss feed. The format/layout will very likely change a bit as I figure out what works, and what does not. Equipment has just started to arrive at home (Zoom H4N Pro, condenser mics, etc.), a new content publishing platform needs to get built (I intend ultimately for listeners to help create summaries by annotating threads as they happen). And yes, my former girlfriend will once again be reprising her role as author of another catchy intro jingle…soon 😉

Audio: Kernel Podcast 20170220

Support for this podcast comes from Jon Masters, who has been trying to bring back the Kernel Podcast since 2012.

In this week’s edition: Linus Torvalds announces Linux 4.10, Alan Tull updates his FPGA manager framework, and Intel’s latest 5-level paging patch series is posted for review. We will have this, and a summary of ongoing development, in this first edition of the newly revived Linux Kernel Podcast.

Linux 4.10

Linus Torvalds announced the release of 4.10 final, noting that “it’s been quiet since rc8, but we did end up fixing several small issues, so the extra week was all good”. Linus added a (relatively rare) additional “RC8” (Release Candidate 8) to this kernel cycle due to the timing – many of us were attending the “Open Source Leadership Summit” (OSLS, formerly “Linux Foundation Collaboration Summit”, or “Collab”) over the past week. The 4.10 kernel contains about 13,000 commits, which used to seem large but somehow now…isn’t. Kernelnewbies.org has the usual summary of new features and fixes: https://kernelnewbies.org/Linux_4.10
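As an aside, that commit count is easy to reproduce from any kernel git checkout; a quick sketch (the exact figure depends on whether merge commits are counted):

# count everything that landed between the v4.9 and v4.10 tags
git log --oneline v4.9..v4.10 | wc -l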

With the announcement of 4.10 comes the opening of the merge window for Linux 4.11 (the period of up to two weeks at the beginning of a development cycle, during which new features and disruptive changes are “pulled” into Linus’s kernel (git) tree). The 4.11 merge window begins today.

FPGA Manager Updates

Alan Tull posted a patch series implementing “FPGA Region enhancements and fixes”, which “intends to enable expanding the use of FPGA regions beyond device tree overlays”. Alan’s FPGA manager framework allows the kernel to manage regions within FPGAs (Field Programmable Gate Arrays) known as “partially reconfigurable” regions – areas of the logic fabric that can be loaded with new bitstream configs. Part of the discussion around the latest patches centered on their providing a new sysfs interface for loading FPGA images, and in particular the need to ensure that this ABI handles FPGA bitstream metadata in a standard and portable fashion across different OSes.

Intel 5-level paging

Kirill A. Shutemov posted version 3 of Intel’s 5-level paging patch series, which expands the supportable VA (Virtual Address) space on Intel Architecture from 256TiB (64TiB physical) to 128PiB (4PiB physical). Channeling his inner Bill Gates, he suggests that this “ought to be enough for anybody”. Key among the TODO items remains “boot-time switch between 4 and 5-level paging” to avoid the need for custom kernels. The latest patches introduce two new prctl calls to manage the maximum virtual address space available to userspace processes during mmap calls (PR_SET_MAX_VADDR and PR_GET_MAX_VADDR). This is intended to aid compatibility by preventing certain legacy programs from breaking when confronted with a 56-bit address space they weren’t expecting. In particular, some JITs use high-order “canonical” bits in existing x86 addresses to encode pointer tags and other information (which they should not do, per a strict interpretation of Intel’s “Canonical Addressing”).

Announcements

Steven Rostedt announced various preempt-rt (“Real Time”) kernel trees (4.4.47-rt59, 4.1.38-rt45, 3.18.47-rt52, 3.12.70-rt94, and 3.10.104-rt118). Sebastian Andrzej Siewior also announced version v4.9.9-rt6 of the preempt-rt “Real Time” Linux patch series. It includes fixes for a spurious softirq wakeup and a GPL symbol issue. A known issue is that CPU hotplug can still deadlock.

Junio C Hamano announced version v2.12.0-rc2 of git.

Bugfixes

Hoeun Ryu posted version 6 of a patch that takes care to properly free up virtually mapped (vmapped) stacks that might be in the kernel’s stack cache when CPUs are offlined (otherwise the kernel was leaking these during offline/online operations).

New Drivers

Mahipal Challa posted version 2 of a patch series implementing a compression driver for the Cavium ThunderX “ZIP” IP on their 64-bit ARM server SoC (System-on-Chip), plumbing it into the kernel crypto API.

Anup Patel posted version 3 of a patch implementing RAID offload support for the Broadcom “SBA” RAID device on their SoCs.

Ongoing Development

Andi Kleen posted various perf vendor events for Intel uncore devices, Kan Liang posted new core events for Intel Goldmont, and Srinivas Pandruvada posted perf events for Intel Kaby Lake.

Velibor Markovski (Broadcom) posted a patch implementing ARM Cache Coherent Network (CCN) 502 support.

Sven Schmidt posted version 7 of a patch series updating the LZ4 compression module to support a mode known as “LZ4 fast”, in particular for the benefit of its use by the lustre filesystem.

Zhou Xianrong posted a patch (for the ARM Architecture) that attempts to save kernel memory by freeing parts of the linear memmap for physical PFNs (page frame numbers) that are marked reserved in a DeviceTree. This had some pushback. The argument is that it saves memory on resource-constrained machines – 6MB of RAM in the example.

Jessica Yu (who took over maintaining the in-kernel module loader infrastructure from Rusty Russell some time back) posted a link to her module-next tree in the kernel MAINTAINERS document.

Bhupesh Sharma posted a patch moving in-kernel handling of the ACPI BGRT (Boot Graphics Resource Table) out of the x86 architecture tree and into drivers/firmware/efi (so that it can be shared with the 64-bit ARM Architecture).

Jarkko Sakkinen posted version 2 of a patch series implementing a new in-kernel resource manager for “TPM spaces” (these are “isolated execution context(s) for transient objects and HMAC and policy sessions”). Various test scripts were also provided.

That’s all for this week. Tune in next time for the latest happenings in the Linux kernel community. Don’t forget to follow us @kernelpodcast

February 20, 2017 07:50 AM

Kernel Podcast: Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

February 20, 2017 06:43 AM

February 12, 2017

James Bottomley: Using letsencrypt certificates with DANE

If, like me, you run your own cloud server, at some point you need TLS certificates to export secure services.  Like a lot of people I object to paying a so-called X.509 authority a yearly fee just to get a certificate, so I’ve been using a free StartCom one for a while.  With web browsers delisting StartCom, I’m unable to get a new usable certificate from them, so I’ve been investigating letsencrypt instead (if you’re into fun ironies, by the way, you can observe that currently the letsencrypt site isn’t using a letsencrypt certificate, perhaps indicating the administrative difficulty of doing so).

The problem with letsencrypt certificates is that they currently have a 90 day expiry, which means you really need to use an automated tool to keep your TLS certificate current.  Fortunately the EFF has developed such a tool: certbot (they use a letsencrypt certificate for their site, indicating you can have trust that they do know what they’re doing).  However, one of the problems with certbot is that, by default, it generates a new key each time the certificate is renewed.  This isn’t a problem for most people, but if you use DANE records, it causes significant issues.

Why use both DANE and letsencrypt?

The beauty of DANE, as I’ve written before, is that it gives you a much more secure way of identifying your TLS certificate (provided you run DNSSEC).  People verifying your certificate may use DANE as the only verification mechanism (perhaps because they also distrust the X.509 authorities) which means the best practice is to publish a DANE TLSA record for each service and also use an X.509 authority rooted certificate.  That way your site just works for everyone.

The problem here is that being DNS based, DANE records can be cached for a while, so it can take a few days for DANE certificate updates to propagate through the DNS infrastructure. DANE records have an answer for this: they have a mode where the record identifies only the hash of the public key used by the website, not the certificate itself, which means you can change your certificate as much as you want provided you keep the same public/private key pair.  And here’s the rub: if certbot is going to try to give you a new key on each renewal, this isn’t going to work.

The internet society also has posts about this.

Making certbot work with DANE

Fortunately, there is a solution: the certbot manual mode (certonly) takes a --csr flag which allows you to construct your own certificate request to send to letsencrypt, meaning you can keep a fixed key … at the cost of not using most of the certbot automation.  So, how do you construct a correct CSR for letsencrypt?  Like most free certificates, letsencrypt will only allow you to specify the certificate commonName, which must be a DNS pointer to the actual website.  If you need a certificate that covers multiple sites, all the other sites must be enumerated in the x509 v3 extensions field subjectAltName.  Let’s look at how openssl can generate such a certificate request.  One of the slight problems is that openssl, being a cranky tool, does not allow you to specify a subjectAltName on the command line, so you have to construct a special configuration file for it.  I called mine letsencrypt.conf

[req]
prompt = no
distinguished_name = req_dn
req_extensions = req_ext

[req_dn]
commonName = bedivere.hansenpartnership.com

[req_ext]
subjectAltName=@alt_names

[alt_names]
DNS.1=bedivere.hansenpartnership.com
DNS.2=www.hansenpartnership.com
DNS.3=hansenpartnership.com
DNS.4=blog.hansenpartnership.com

As you can see, I’ve given my canonical server (bedivere) as the common name and then four other subject alt names.  Once you have this config file tailored to your needs, you merely run

openssl req -new -key <mykey.key> -config letsencrypt.conf -out letsencrypt.csr

Where mykey.key is the path to your private key (you need a private key because even though the CSR only contains the public key, it is also signed).  However, once you’ve produced this letsencrypt.csr, you no longer need the private key and, because it’s undated, it will now work forever, meaning the infrastructure you put into place with certbot doesn’t need to be privileged enough to access your private key.  Once this is done, you make sure you have TLSA 3 1 1 records pointing to the hash of your public key (here’s a handy website to generate them for you) and you never need to alter your DANE record again.  Note, by the way that letsencrypt certificates can be used for non-web purposes (I use mine for encrypted SMTP as well), so you’ll need one DANE record for every service you use them for.
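If you would rather generate the TLSA data locally than trust a website, openssl can produce the “3 1 1” (DANE-EE, SubjectPublicKeyInfo, SHA-256) hash directly from the fixed private key; a sketch, where the hostname and the SMTP port in the example record are purely illustrative:

# hash of the DER-encoded public key (SPKI) belonging to mykey.key
openssl pkey -in mykey.key -pubout -outform DER | openssl dgst -sha256
# the resulting hex digest is the data portion of a record like:
#   _25._tcp.bedivere.hansenpartnership.com. IN TLSA 3 1 1 <digest>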

Putting it all together

Now that you have your certificate request, depending on what version of certbot you have, you may need it in DER format

openssl req -in letsencrypt.csr -out letsencrypt.der -outform DER

And you’re ready to run the following script from cron

#!/bin/bash
date=$(date +%Y-%m-%d)
dir=/etc/ssl/certs
cert="${dir}/letsencrypt-${date}.crt"
fullchain="${dir}/letsencrypt-${date}.pem"
chain="${dir}/letsencrypt-chain-${date}.pem"
csr=/etc/ssl/letsencrypt.der
out=/tmp/certbot.out

##
# certbot handling
#
# first, it cannot replace certs, so use new locations (date suffix)
# each time so that the certificate files are unique each run.  Next, it's
# really chatty, so the only way to tell if there was a failure is to
# check whether the certificates got updated and then get cron to
# email the log
##

certbot certonly --webroot --csr ${csr} --preferred-challenges http-01 -w /var/www --fullchain-path ${fullchain} --chain-path ${chain} --cert-path ${cert} > ${out} 2>&1

if [ ! -f ${fullchain} -o ! -f ${chain} -o ! -f ${cert} ]; then
    cat ${out}
    exit 1;
fi

# link into place

# cert only (apache needs)
ln -sf ${cert} ${dir}/letsencrypt.crt
# cert with chain (stunnel needs)
ln -sf ${fullchain} ${dir}/letsencrypt.pem
# chain only (apache needs)
ln -sf ${chain} ${dir}/letsencrypt-chain.pem

# reload the services
sudo systemctl reload apache2
sudo systemctl restart stunnel4
sudo systemctl reload postfix

Note that this script needs the ability to write files and create links in /etc/ssl/certs (can be done by group permission) and the systemctl reloads need the following in /etc/sudoers

%LimitedAdmins ALL=NOPASSWD: /bin/systemctl reload apache2
%LimitedAdmins ALL=NOPASSWD: /bin/systemctl reload postfix
%LimitedAdmins ALL=NOPASSWD: /bin/systemctl restart stunnel4

And finally you can run this as a cron script under whichever user you’ve chosen to have sufficient privilege to write the certificates.  I run this every month, so I know that if anything goes wrong I have at least two months to fix it.
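For completeness, the crontab entry itself is then just an ordinary monthly job; a sketch, assuming the script above is saved as /usr/local/bin/letsencrypt-renew (the path and time are illustrative):

# 03:00 on the 1st of each month; the script is silent on success,
# so cron only sends mail (with the certbot log) when something fails
0 3 1 * * /usr/local/bin/letsencrypt-renew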

Oh, and just in case you need proof that I got this all working, here you are!

February 12, 2017 06:55 PM

February 03, 2017

Daniel Vetter: LCA Hobart: Maintainers Don't Scale

Seems that there was a rift in the spacetime that sucked away the video of my LCA talk, but the awesome NextDayVideo team managed to pull it back out. And there’s still the writeup and slides available.

February 03, 2017 12:00 AM

February 02, 2017

Pete Zaitcev: Richard Feynman on Gerrit reviews

Here's a somewhat romanticized vision what a code review is:

What am I going to do? I get an idea. Maybe it's a valve. I take my finger and put it down on one of the mysterious little crosses in the middle of one of the blueprints on page three, and I say, ``What happens if this valve gets stuck?'' --figuring they're going to say, ``That's not a valve, sir, that's a window.''

So one looks at the other and says, ``Well, if that valve gets stuck--'' and he goes up and down on the blueprint, up and down, the other guy goes up and down, back and forth, back and forth, and then both look at each other. They turn around to me and they open their mouths like astonished fish and say, ``You're absolutely right, sir.''

Quoted from "Surely You're Joking, Mr. Feynman!".

February 02, 2017 04:37 PM

January 27, 2017

Michael Kerrisk (manpages): Next Linux/UNIX System Programming course in Munich: 15-19 May, 2017

I've scheduled another 5-day Linux/UNIX System Programming course to take place in Munich, Germany, for the week of 15-19 May 2017.

The course is intended for programmers developing system-level, embedded, or network applications for Linux and UNIX systems, or programmers porting such applications from other operating systems (e.g., Windows) to Linux or UNIX. The course is based on my book, The Linux Programming Interface (TLPI), and covers topics such as low-level file I/O; signals and timers; creating processes and executing programs; POSIX threads programming; interprocess communication (pipes, FIFOs, message queues, semaphores, shared memory); and network programming (sockets).
     
The course has a lecture+lab format, and devotes substantial time to working on some carefully chosen programming exercises that put the "theory" into practice. Students receive printed and electronic copies of TLPI, along with a 600-page course book that includes all slides and exercises presented in the course. A reading knowledge of C is assumed; no previous system programming experience is needed.

Some useful links for anyone interested in the course:

Questions about the course? Email me via training@man7.org.

January 27, 2017 01:11 AM

January 26, 2017

Gustavo F. Padovan: Mainline Explicit Fencing – part 3

In the last two articles we talked about how Explicit Fencing can help the graphics pipeline in general and what happened in the effort to upstream the Android Sync Framework. Now, in this third post of the series, we will go through the Explicit Fencing implementation in DRM and other elements of the graphics stack.

The DRM implementation is built on top of two kernel infrastructures: struct dma_fence, which represents the fence, and struct sync_file, which provides the file descriptors to be shared with userspace (as discussed in the previous articles). With fencing, the display infrastructure needs to wait for a signal on that fence before displaying the buffer on the screen. In an Explicit Fencing implementation that fence is sent from userspace to the kernel. The display infrastructure also sends back to userspace a fence, encapsulated in a struct sync_file, that will be signalled when the buffer is scanned out on the screen. The same process happens on the rendering side.

Use of Atomic Modesetting is mandatory and there is no plan to support the legacy APIs. The fence that DRM will wait on needs to be passed via the IN_FENCE_FD property for each DRM plane, which means it will receive one sync_file fd containing one or more dma_fences per plane. Remember that in DRM a plane directly relates to a framebuffer, so one can also say that there is one sync_file per framebuffer.

On the other hand, for the fences created by the kernel and sent back to userspace, the OUT_FENCE_PTR property is used. It is a DRM CRTC property because we only create one dma_fence per CRTC, as all the buffers on it will be scanned out at the same time. The kernel sends this fence back to userspace by writing the fd number to the pointer provided in the OUT_FENCE_PTR property. Note that, unlike what Android did, when the fence signals it means the previous buffer – the buffer removed from the screen – is free for reuse. On Android, when the signal was raised it meant the current buffer was freed. However, the Android folks have already patched SurfaceFlinger to support the Mainline semantics when using Explicit Fencing!

Nonetheless, that is only one side of the equation; to have the full graphics pipeline running with Explicit Fencing we need to support it on the rendering side as well. As every rendering driver has its own userspace API, we need to add Explicit Fencing support to every single driver. The freedreno driver already has its Explicit Fencing support in mainline, and there is work in progress to add support to i915 and virtio_gpu.

On the userspace side, Mesa already has support for the EGL_ANDROID_native_fence_sync extension needed to use Explicit Fencing on Android. libdrm incorporated the headers to access the sync file ioctl wrappers. On Android, libsync now has support for both the old Android Sync and the Mainline Sync File APIs. And finally, for drm_hwcomposer, patches to use Atomic Modesetting and Explicit Fencing are available but not upstream yet.

Validation tests for both Sync Files and fences on the Atomic API were written and added to IGT.

January 26, 2017 03:23 PM

January 23, 2017

Matthew Garrett: Android permissions and hypocrisy

I wrote a piece a few days ago about how the Meitu app asked for a bunch of permissions in ways that might concern people, but which were not actually any worse than many other apps. The fact that Android makes it so easy for apps to obtain data that's personally identifiable is of concern, but in the absence of another stable device identifier this is the sort of thing that capitalism is inherently going to end up making use of. Fundamentally, this is Google's problem to fix.

Around the same time, Kaspersky, the Russian anti-virus company, wrote a blog post that warned people about this specific app. It was framed somewhat misleadingly - "reading, deleting and modifying the data in your phone's memory" would probably be interpreted by most people as something other than "the ability to modify data on your phone's external storage", although it ends with some reasonable advice that users should ask why an app requires some permissions.

So, to that end, here are the permissions that Kaspersky request on Android:


Every single permission that Kaspersky mention Meitu having? They require it as well. And a lot more. Why does Kaspersky want the ability to record audio? Why does it want to be able to send SMSes? Why does it want to read my contacts? Why does it need my fine-grained location? Why is it able to modify my settings?

There's no reason to assume that they're being malicious here. The reason these permissions exist at all is that there are legitimate uses for them, and Kaspersky may well have good reason to request them. But they don't explain that, and they do literally everything that their blog post criticises (including explicitly requesting the phone's IMEI). Why should we trust a Russian company more than a Chinese one?
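If you'd rather check what an app requests for yourself instead of relying on a vendor's blog post, the Android SDK build tools can dump the requested permissions straight out of an APK; a minimal sketch (the APK filename is illustrative):

# requires the aapt tool from the Android SDK build-tools
aapt dump permissions some-app.apk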

The moral here isn't that Kaspersky are evil or that Meitu are virtuous. It's that talking about application permissions is difficult and we don't have the language to explain to users what our apps are doing and why they're doing it, and Google are still falling far short of where they should be in terms of making this transparent to users. But the other moral is that you shouldn't complain about the permissions an app requires when you're asking for even more of them because it just makes you look stupid and bad at your job.


January 23, 2017 07:58 AM

January 20, 2017

Daniel Vetter: Maintainers Don't Scale

This is the write-up of my talk at LCA 2017 in Hobart. It’s not exactly the same, because this is a blog and not a talk, but the same contents. The slides for the talk are here, and I will link to the video as soon as it is available. Update: Video is now uploaded.

Linux Kernel Maintainers

First let’s look at how the kernel community works, and how a change gets merged into Linus Torvalds’ repository. Changes are submitted as patches to a mailing list, then get some review and eventually get applied by a maintainer to that maintainer’s git tree. Each maintainer then sends pull requests, often directly to Linus. With a few big subsystems (networking, graphics and ARM-SoC are the major ones) there’s a second or third level of sub-maintainers in between. 80% of the patches get merged this way; only 20% are committed by a maintainer directly.

Most maintainers are just that, a single person, and often responsible for a bunch of different areas in the kernel with corresponding different git branches and repositories. To my knowledge there are only three subsystems that have embraced group maintainership models of different kinds: TIP (x86 and core kernel), ARM-SoC and the graphics subsystem (DRM).

The radical change, at least for the kernel community, that we implemented over a year ago for the Intel graphics driver is to hand out commit rights to all regular contributors. Currently there are 19 people with commit rights to the drm-intel repository. After the first year of ramp-up, 70% of all patches are now committed directly by their authors, a big change compared to how things worked before, and how they still work everywhere else outside of the graphics subsystem. More recently we also started to manage the drm-misc tree for subsystem-wide refactorings and core changes in the same way.

I’ve covered the details of the new process in my Kernel Recipes talk “Maintainers Don’t Scale”, and LWN has covered that, and a few other talks, in their article on linux kernel maintainer scalability. I also covered this topic at the kernel summit, again LWN covered the group maintainership discussion. I don’t want to go into more detail here, mostly because we’re still learning, too, and not really experts on commit rights for everyone and what it takes to make this work well. If you want to enjoy what a community does who really has this all figured out, watch Emily Dunham’s talk “Life is better with Rust’s community automation” from last year’s LCA.

What we are experts on is the Linux Kernel’s maintainer model - we’ve run things for years with the traditional model, both as single maintainers and small groups, and now gained the outside perspective by switching to something completely different. Personally, I’ve come to believe that the maintainer model as implemented by the kernel community just doesn’t scale. Not in the technical sense of big-O scalability, because obviously the kernel community scales to a rather big size. Much larger organizations, entire states are organized in a hierarchical way, the kernel maintainer hierarchy is not anything special. Besides that, git was developed specifically to support the Linux maintainer hierarchy, and git won. Clearly, the linux maintainer model scales to big numbers of contributors. Where I think it falls short is the constant factor of how efficiently contributions are reviewed and merged, especially for non-maintainer contributors. Which do 80% of all patches.

Cult of Busy

The first issue that routinely comes up when talking about maintainer topics is that everyone is overloaded. There’s a pervasive spirit in our industry (especially in the US) hailing overworked engineers as heroes, with an entire “cult of busy” around it. If you have time, you’re a slacker and probably not worth it. Of course this doesn’t help when being a maintainer, but I don’t believe it’s a cause of why the Linux maintainer model doesn’t work. This cult of busy leads to burnout, which is in my opinion a prime risk when you’re an open source person. Personally I’ve gone through a few difficult phases until I understood my limits and respected them. When you start as a maintainer for 2-3 people, and it increases to a few dozen within a couple of years, then getting a bit overloaded is rather natural - it’s a new job, with a different set of responsibilities, and I had no clue about a lot of things. That’s no different from suddenly being a leader of a much bigger team anywhere else. A great talk on this topic is “What part of “… for life” don’t you understand?” from Jacob Kaplan-Moss, since it’s by a former maintainer. It also contains a bunch of links to talks on burnout specifically. Ignoring burnout, or not knowing about its early warning signs, is not healthy; it is rampant in our communities, but for now I’ll leave it at that.

Boutique Trees and Bus Factors

The first issue I see is how maintainers usually are made: You scratch an itch somewhere, write a bit of code, suddenly a few more people find it useful, and “tag” you’re the maintainer. On top, you often end up being stuck in that position “for life”. If the community keeps growing, or your maintainer becomes otherwise busy with work&life, you have your standard-issue overloaded bottleneck.

That’s the point where I think the kernel community goes wrong. When other projects reach this point they start to build up a more formal community structure, with specialized roles, boards for review and other bits and pieces. One of the oldest, and probably most notorious, is Debian with its constitution. Of course a small project doesn’t need such elaborate structures. But if the goal is world domination, or at least creating something lasting, it helps when there’s solid institutions that cope with people turnover. At first just documenting processes and roles properly goes a long way, long before bylaws and codified decision processes are needed.

The kernel community, at least on the maintainer side, entirely lacks this.

What instead most often happens is that a new set of ad-hoc, chosen-by-default maintainers start to crop up in a new level of the hierarchy, below your overload bottleneck. Because becoming your own maintainer is the only way to help out and to get your own features merged. That only perpetuates the problem, since the new maintainers are as likely to be otherwise busy, or occupied with plenty of other kernel parts already. If things go well that area becomes big, and you have another git tree with another overloaded maintainer. More often than not people move around, and accumulate small bits all over under their maintainership. And then the cycle repeats.

The end result is a forest of boutique trees, each covering a tiny part of the project, maintained by a bunch of notoriously overloaded people. The resulting cross-tree coordination issues are pretty impressive - in the graphics subsystem we fairly often end up with simple drivers that somehow need prep patches in 5 different trees before you can even land that simple driver in the graphics tree.

Unfortunately that’s not the bad part. Because these maintainers are all busy with other trees, or their work, or life in general, you’re guaranteed that one of them is not available at any given time. Worse, because their tree has relatively little activity, since it covers a small area, many only pick up patches once per kernel release, which means a built-in 3-month delay. That’s all because each tree and area has just one maintainer. In the end you don’t even need the proverbial bus to hit anyone to feel the pain of having a single point of failure in your organization - there are so many maintainer trees around that such an absence always happens, and constantly.

Of course people get fed up trying to get features merged, and often the fix is trying to become a maintainer yourself. That takes a while and isn’t easy - only 20% of all patches are authored by maintainers - and after the new code landed it makes it all worse: Now there’s one more semi-absent maintainer with one more boutique tree, adding to all the existing troubles.

Checks and Balances

All patches merged into the Linux kernel are supposed to be reviewed, and rather often that review is only done by the maintainer who merges the patch. When maintainers send out pull requests the next level of maintainers then reviews those patch piles, until they land in Linus’ tree. That’s an organization where control flows entirely top-down, with no checks and balances to rein in maintainers who are not serving their contributors well. The history of dictatorships tells us that despite best intentions, the end result tends to heavily favour the few over the many. As a crude measure for how much maintainers subject themselves to some checks & balances by their peers and contributors I looked at how many patches authored and committed by the same person (probably a maintainer) do not also carry a reviewed or acked tag. For the Intel driver that’s less than 3%. But even within the core graphics code it’s only 5%, and that covers the time before we started to experiment with commit rights for that area. And for the graphics subsystem overall the ratio is still only about 25%, including a lot of drivers with essentially just one contributor, who is always volunteered as the maintainer, and hence it’s somewhat natural that those maintainers lack reviewers.

Outside of graphics only roughly 25% of all patches written by maintainers are reviewed by their peers - 75% of all maintainer patches lack any kind of recorded peer review, compared to just 25% for graphics alone. And even looking at core areas like kernel/ or mm/ the ratio is only marginally better at about 30%. In short, in the kernel at large, peer review of maintainers isn’t the norm.

And there’s nothing outside of the maintainer hierarchy that could provide some checks and balance either. The only way to escalate disagreement is by starting a revolution, and revolutions tend to be long, drawn-out struggles and generally not worth it. Even Debian only recently learned that they lack a way to depose maintainers, and that maybe going maintainerless would be easier (again, LWN has you covered).

Of course the kernel is not the only hierarchy where there are no meaningful checks and balances. Professors at universities and managers at work are in a fairly similar position, with minimal options for students or employees to meaningfully appeal decisions. But that’s a recognized problem, and at least somewhat countered by providing ways to give anonymous feedback, often through regular surveys. The results tend to not be all that significant, but at least provide some control and accountability to the wider masses of first-level dwellers in the hierarchy. In the kernel that amounts to about 80% of all contributions, but there’s no such survey. On the contrary, feedback sessions about maintainer happiness only reinforce the control structure, with e.g. the kernel summit featuring an “Is Linus happy?” session each year.

Another closely related aspect to all this is how a project handles personal conflicts between contributors. For a very long time Linux didn’t have any formal structures in this area either, with the only options available to unhappy people being to either take it or leave it. Well, or usurping a maintainer with a small revolution, but that’s not really an option. For two years we’ve now had the “Code of Conflict”, which de facto just throws up its hands and declares that conflicts are the normal outcome, essentially just encoding the status quo. Refusing to handle conflicts in a project with thousands of contributors just doesn’t work, except that it results in lots of frustration and ultimately people trying to get away. Again, the lack of a board poised to enforce a strong code of conduct, independent of the maintainer hierarchy, is in line with the kernel community’s unwillingness to accept checks and balances.

Mesh vs. Hierarchy

The last big issue I see with the Linux kernel model, featuring lots of boutique trees and overloaded maintainers, is that it seems to harm collaboration and integration of new contributors. In the Intel graphics driver, maintainers only ever reviewed a small minority of all patches over the last few years, with the goal of fostering direct collaboration between contributors. Still, when a patch was stuck, maintainers were the first point of contact, especially, but not only, for newer contributors. No amount of explaining that only the lack of agreement with the reviewer was the gating factor could persuade people to fully collaborate on code reviews and rework the code, tests and documentation as needed. Especially when they’re coming from previous experience where code review is more of a rubber-stamp step compared to the distributed and asynchronous pair-programming it often resembles in open source. Instead, new contributors often just ended up falling back to pinging maintainers to make a decision or just merge the patches as-is.

Giving all regular contributors commit rights and fully trusting them to do the right thing entirely fixed that: If the reviewer or author have commit rights there’s no easy excuse anymore to involve maintainers when the author and reviewer can’t reach agreement. Of course that requires a lot of work in mentoring people, making sure requirements for merging are understood and documented, and automating as much as possible to avoid screw ups. I think maintainers who lament their lack of review bandwidth, but also state they can’t trust anyone else aren’t really doing their jobs.

At least for me, review isn’t just about ensuring good code quality, but also about diffusing knowledge and improving understanding. At first there’s maybe one person, the author (and that’s not a given), understanding the code. After good review there should be at least two people who fully understand it, including corner cases. And that’s also why I think that group maintainership is the only way to run any project with more than one regular contributor.

On the topic of patch review and maintainers, there’s also the habit of wholesale rewrites of patches written by others. If you want others to contribute to your project, then that means you need to accept other styles and can’t enforce your own all the time. Merging first and polishing later recognizes new contributions, and if you engage newcomers for the polish work they tend to stick around more often. And even when a patch really needs to be reworked before merging it’s better to ask the author to do it: Worst case they don’t have time, best case you’ve improved your documentation and training procedure and maybe gained a new regular contributor on top.

A great take on the consequences of having fixed roles instead of trying to spread responsibilities more evenly is Alice Goldfuss’ talk “Rock Stars, Builders, and Janitors: You’re doing it wrong”. I also think that rigid roles present a bigger bar for people with different backgrounds, hampering diversity efforts, and in the spirit of Sarah Sharp’s post on what makes a good community, need to be fixed first.

Towards a Maintainer’s Manifest

I think what’s needed in the end is some guidelines and discussions about what a maintainer is, and what a maintainer does. We have ready-made licenses to avoid havoc, there are codes of conduct to copy-paste and implement, handbooks for building communities, and, for all of these things, lots of conferences. A maintainer, on the other hand, is something you become by accident, as a default. And then everyone gets to learn how to do it on their own, while hopefully not burning too many bridges - at least I myself was rather lost on that journey at times. I’d like to conclude with a draft of a maintainer’s manifest.

It’s About the People

If you’re maintainer of a project or code area with a bunch of full time contributors (or even a lot of drive-by contributions) then primarily you deal with people. Insisting that you’re only a technical leader just means you don’t acknowledge what your true role really is.

And then, trust them to do a good job, and recognize them for the work they’re doing. The important part is to trust people just a bit more than what they’re ready for, as the occasional challenge, but not too much that they’re bound to fail. In short, give them the keys and hope they don’t wreck the car too badly, but in all cases have insurance ready. And insurance for software is dirt cheap, generally a git revert and the maintainer profusely apologizing to everyone and taking the blame is all it takes.

Recognize Your Power

You’re a maintainer, and you have essentially absolute power over what happens to your code. For successful projects that means you can unleash a lot of harm on people who for better or worse are employed to deal with you. One of the things that annoys me the most is when maintainers engage in petty status fights against subordinates, thinly veiled as technical discussions - you end up looking silly, and it just pisses everyone off. Instead, recognize your power, try to stay on the good side of the force, and make sure you share it sufficiently with the contributors of your project.

Accept Your Limits

At the beginning you’re responsible for everything, and for a one-person project that’s all fine. But eventually the project grows too much and you’ll just become a dictator, and then failure is all but assured because we’re all human. Recognize what you don’t do well, build institutions to replace you. Recognize that the responsibility you initially took on might not be the same as that which you’ll end up with and either accept it, or move on. And do all that before you start burning out.

Be a Steward, Not a Lord

I think one of the key advantages of open source is that people stick around for a very long time. Even when they switch jobs or move around. Maybe the usual “for life” qualifier isn’t really a great choice, since it sounds more like a mandatory sentence than something done by choice. What I object to is the “dictator” part, since if your goal is to grow a great community and maybe reach world domination, then you as the maintainer need to serve that community. And not have the community serve you.

Thanks a lot to Ben Widawsky, Daniel Stone, Eric Anholt, Jani Nikula, Karen Sandler, Kimmo Nikkanen and Laurent Pinchart for reading and commenting on drafts of this text.

January 20, 2017 12:00 AM

January 19, 2017

Matthew Garrett: Android apps, IMEIs and privacy

There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to Phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask you to grant the permission again and refuse to work if you don't.
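If you have adb access to the device, the same revocation can also be scripted; a sketch that assumes the app in question targets the Android 6 runtime-permission model (the package name is illustrative):

# revoke the permission that gates IMEI/phone-state access for one app
adb shell pm revoke com.example.someapp android.permission.READ_PHONE_STATE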

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, but, again, there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do something to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.


January 19, 2017 11:36 PM

January 12, 2017

Pete Zaitcev: git-codereview

Not content with a legacy git-review, Google developed another Gerrit front-end, git-codereview. They use it for contributions to Go. I have to admit, that was a bit less of a special move than Facebook's git-review, which uses the same name but does something entirely different.

P.S. There used to be a post about creating a truly distributed github, which used blockchain in order to vote on globally unique names. Can't find a link though.

January 12, 2017 07:56 PM

January 08, 2017

Pete Zaitcev: Mirantis and the business of OpenStack

It seems that only in November we heard about massive layoffs at Mirantis, "The #1 Pure Play OpenStack Company" (per <title>). Now they are teaching us thus:

And what about companies like Mirantis adding Kubernetes and other container technologies to their slate? Is that a sign of the OpenStack Apocalypse?

In a word, “no”.

Gee, thanks. I'm sure they know what it's like.

January 08, 2017 06:07 PM

January 03, 2017

James Bottomley: TPM2 and Linux

Recently Microsoft started mandating TPM2 as a hardware requirement for all platforms running recent versions of Windows.  This means that eventually all shipping systems (starting with laptops first) will have a TPM2 chip.  The reason this impacts Linux is that TPM2 is radically different from its predecessor TPM1.2; so different, in fact, that none of the existing TPM1.2 software on Linux (trousers, the libtpm.so plug-in for openssl, even my gnome keyring enhancements) will work with TPM2.  The purpose of this blog is to explore the differences and how we can make ready for the transition.

What are the Main 1.2 vs 2.0 Differences?

The big one is termed Algorithm Agility.  TPM1.2 had SHA1 and RSA2048 only.  TPM2 is designed to have many possible algorithms, including support for elliptic curve and a host of government mandated (Russian and Chinese) crypto systems.  There’s no requirement for any shipping TPM2 to support any particular algorithms, so you actually have to ask your TPM what it supports.  The bedrock for TPM2 in the West seems to be RSA1024-2048, ECC and AES for crypto and SHA1 and SHA256 for hashes.

What algorithm agility means is that you can no longer have root keys (EK and SRK see here for details) like TPM1.2 did, because a key requires a specific crypto algorithm.  Instead TPM2 has primary “seeds” and a Key Derivation Function (KDF).  The way this works is that a seed is simply a long string of random numbers, but it is used as input to the KDF along with the key parameters and the algorithm and out pops a real key based on the seed.  The KDF is deterministic, so if you input the same algorithm and the same parameters you get the same key again.  There are four primary seeds in the TPM2: Three permanent ones which only change when the TPM2 is cleared: endorsement (EPS), Platform (PPS) and Storage (SPS).  There’s also a Null seed, which is used for ephemeral keys and changes every reboot.  A key derived from the SPS can be regarded as the SRK and a key derived from the EPS can be regarded as the EK. Objects descending from these keys are called members of hierarchies. One of the interesting aspects of the TPM is that the root of a hierarchy is a key not a seed (because you need to exchange secret information with the TPM), and that there can be multiple of these roots with different key algorithms and parameters.

Additionally, the mechanism for making use of keys has changed slightly.  In TPM 1.2 to import a secret key you wrapped it asymmetrically to the SRK and then called LoadKeyByBlob to get a use handle.  In TPM2 this is a two stage operation, firstly you import a wrapped (or otherwise protected) private key with TPM2_Import, but that returns a private key structure encrypted with the parent key’s internal symmetric key.  This symmetrically encrypted key is then loaded (using TPM2_Load) to obtain a use handle whenever needed.  The philosophical change is from online keys in TPM 1.2 (keys which were resident inside the TPM) to offline keys in TPM2 (keys which now can be loaded when needed).  This philosophy has been reinforced by reducing the space available to keep keys loaded in TPM2 (see later).

Playing with TPM2

If you have a recent laptop, chances are you either have or can software-upgrade to a TPM2.  I have a Dell XPS 13 (the Skylake version), which comes with a software-upgradeable Nuvoton TPM.  Dell kindly provides a 1.2->2 switching program here, which seems to work under FreeDOS (odin boot), so I have a physical TPM2-based system.  For those of you who aren’t so lucky, you can still play along, but you need a TPM2 emulator.  The best one is here; simply download and untar it, then type make in the src directory and run it as ./tpm_server.  It listens on two TCP ports, 2321 and 2322, for TPM commands so there’s no need to install it anywhere; it can be run directly from the source directory.
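In case it helps, the simulator build boils down to a few commands; a sketch, with the tarball name being illustrative:

tar xf ibmtpm.tar.gz        # unpack wherever you like
cd src && make              # build the simulator
./tpm_server &              # listens on TCP ports 2321 and 2322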

After that, you need the interface software called tss2.  The source is here, but Fedora 25 and recent Ubuntu already package it.  I’ve also built openSUSE packages here.  The configuration of tss2 is controlled by environment variables.  The most important one is TPM_INTERFACE_TYPE, which tells it how to connect to the TPM2.  If you’re using a simulator, you set this to “socsim” and if you have a real TPM2 device you set it to “dev”.  One final thing about direct device connection: in tss2 there’s no daemon (as trousers had) to broker the connection; all your users connect directly to the TPM2 device /dev/tpm0.  To do this, the device has to support read and write by arbitrary users, so its permissions need to be 0666.  I’ve got a udev script to achieve this

# tpm 2 devices need to be world readable
SUBSYSTEM=="tpm", ACTION=="add", MODE="0666"

Which goes in /etc/udev/rules.d/80-tpm-2.rules on openSUSE.  The next thing you need to do, if you’re running the simulator, is power it on and start it up (for a real device, this is done by the bios):

tsspowerup
tssstartup
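Since all the tss commands pick up their configuration from the environment, it saves typing to export the interface type once in the shell first; a minimal sketch:

export TPM_INTERFACE_TYPE=socsim    # talking to the simulator
# export TPM_INTERFACE_TYPE=dev     # or a real TPM2 at /dev/tpm0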

The simulator will now create a NVChip file wherever you started it to store NV ram based objects, which it will read on next start up.  The first thing you need to do is create an SRK and store it in NV memory.  Microsoft uses the well known key handle 81000001 for this, so we’ll do the same.  The reason for doing this is that a real TPM takes ages to run the KDF for RSA keys because it has to look for prime numbers:

jejb@jarvis:~> TPM_INTERFACE_TYPE=socsim time tsscreateprimary -hi o -st -rsa
Handle 80000000
0.03 user 0.00 system 0:00.06 elapsed

jejb@jarvis:~> TPM_INTERFACE_TYPE=dev time tsscreateprimary -hi o -st -rsa
Handle 80000000
0.04 user 0.00 system 0:20.51 elapsed

As you can see: the simulator created a primary storage key (the SRK) in a few milliseconds, but it took my real TPM2 20 seconds to do it … not something you want to wait for, hence the need to store this permanently under a well known key handle and get rid of the temporary copy

tssevictcontrol -hi o -ho 80000000 -hp 81000001
tssflushcontext -ha 80000000

tssevictcontrol tells the TPM to copy the key at transient handle 80000000 to permanent NV handle 81000001 and tssflushcontext erases the transient key.  Flushing transient objects is very important, because TPM2 has a lot less transient storage space than TPM1.2 did; usually only about three handles worth.  You can tell how much you have by doing

tssgetcapability -cap 6|grep -i transient
TPM_PT 0000010e value 00000003 TPM_PT_HR_TRANSIENT_MIN - the minimum number of transient objects that can be held in TPM RAM

Where the value (00000003) tells me that the TPM can store at least 3 transient objects.  After that you’ll start getting out of space errors from it.

The final step in taking ownership of a TPM2 is to set the authorization passwords.  Each of the four hierarchies (Null, Owner, Endorsement, Platform) and the Lockout has a possible authority password.  The Platform authority is cleared on startup, so there’s not much point setting it (it’s used by the BIOS or Firmware to perform TPM functions).  Of the other four, you really only need to set Owner, Endorsement and Lockout (I use the same password for all of them).

tsshierarchychangeauth -hi l -pwdn <your password>
tsshierarchychangeauth -hi e -pwdn <your password>
tsshierarchychangeauth -hi o -pwdn <your password>

After this is done, you’re all set.  Note that as well as these authorizations, each object can have its own authorization (or even policy), so the SRK you created earlier still has no password, allowing it to be used by anyone.  Note also that the owner authorization controls access to the NV memory, so you’ll need to supply it now to make other objects persistent.

An Aside about Real TPM2 devices and the Resource Manager

Although I’m using the code below to store my keys in the TPM2, there are a couple of practical limitations which mean it won’t work for you, without a kernel update, if you have multiple TPM2-using applications.  The two problems are

  1. The Linux Kernel TPM2 device /dev/tpm0 only allows one user at once.  If a second application tries to open the device it will get an EBUSY which causes TSS_Create() to fail.
  2. Because most applications make use of transient key slots and most TPM2s have only a couple of these, simultaneous users can end up running out of these and getting unexpected out of space errors.

The solution to both of these is something called a Resource Manager (RM).  What the RM does is effectively swap transient objects in and out of the TPM as needed to prevent it from running out of space.  Linux has an issue in that both the kernel and userspace are potential users of TPM keys so the resource manager has to live inside the kernel.  Jarkko Sakkinen has preliminary resource manager patches here, and they will likely make it into kernel 4.11 or 4.12.  I’m currently running my laptop with the RM patches applied, so multiple applications work for me, but since these are preliminary patches, I wouldn’t currently advise others to do this.  The way these patches work is that once you declare to the kernel via an ioctl that you want to use the RM, every time you send a command to the TPM, your context gets swapped in, the command is executed, the context is swapped out and the response sent meaning that no other user of the TPM sees your transient objects.  The moment you send the ioctl, the TPM device allows another user to open it as well.

Using TPM2 as a keystore

Once the implementation is sorted out, openssl and gnome-keyring patches can be produced for TPM2.  The only slight wrinkle is that for create_tpm2_key you require a parent key to exist in the NV storage (at the 81000001 handle we talked about previously).  So to convert from a password protected openssh RSA key to a TPM2 based one, you do

create_tpm2_key -a -p 81000001 -w id_rsa id_rsa.tpm
mv id_rsa.tpm id_rsa

And then gnome keyring manager will work nicely (make sure you keep a copy of your original private key in case you want to move to a new laptop or reseed the TPM2).  If you use the same TPM2 password as your original key password, you won’t even need to update the gnome login keyring for the new password.
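
If you have several keys to convert, a small loop (much like the one from my earlier TPM 1.2 post) does the job; this is only a sketch, so make sure your backups exist before overwriting anything:

cd ~/.ssh/
for pub in *rsa*.pub; do
    priv=$(basename $pub .pub)
    # wrap the PEM key to the parent at 81000001 and replace the original in place
    create_tpm2_key -a -p 81000001 -w $priv ${priv}.tpm && mv ${priv}.tpm $priv
done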

Conclusions

Because of the lack of an in-kernel Resource Manager, TPM2 is ready for experimentation in Linux but definitely not ready for prime time yet (unless you’re willing to patch your kernel).  Hopefully this will change in the 4.11 or 4.12 kernel when the Resource Manager finally goes upstream.

Looking forward to the new stack, the lack of a central daemon is really a nice feature: tcsd crashing used to kill all of my TPM key based applications, but with tss2 having no central daemon, everything has just worked(tm) so far.  A kernel-based RM also means that the kernel can happily use the TPM (for its trusted keys and disk encryption) without interfering with whatever userspace is doing.

January 03, 2017 12:55 AM

January 02, 2017

Paul E. Mc Kenney: Parallel Programming: January 2017 Update

Another year, another release of Is Parallel Programming Hard, And, If So, What Can You Do About It?!

Updates include:



  1. More formatting and build-system improvements, along with many bibliography updates, courtesy of Akira Yokosawa.
  2. A great many grammar and typo fixes from Akira and SeongJae Park.
  3. Numerous changes and fixes from Balbir Singh, Boqun Feng, Mike Rapoport, Praveen Kumar, and Tobias Klauser.
  4. Added code for concurrent skiplists, with the hope for added text in a later release.
  5. Added a running example to the deferred-processing chapter.
  6. Merged “Synchronization Primitives” into “Tools of the Trade” section.
  7. Updated control-dependency discussion in memory-barriers section.
As always, git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git will be updated in real time.
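
For anyone who would rather build the book than wait for a release tarball, a clone and make should be roughly all that is needed, assuming a reasonably complete LaTeX installation (the exact prerequisites are listed in the repository, so treat this as a sketch):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git
cd perfbook
make   # should produce perfbook.pdf if the required TeX packages and tools are present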

January 02, 2017 05:23 PM

December 27, 2016

Pete Zaitcev: The idea of ARM has gone mainstream

We still don't have any usable servers on which I could install Fedora and have it supported for more than 3 releases, but gamers already debate the merits of ARM. The idea of SPEC-per-Watt has completely gone mainstream, like Marxism.

<sage> http://www.bitsandchips.it/english/52-english-news/7854-rumor-even-intel-is-studying-a-new-x86-uarch new uarch? it's about time
<sage> they just can't make x86 as power efficient as arm
<JTFish> What is the point
<JTFish> it's not like ARM will replace x86 in real servers any time soon
<sage> what is "real" servers?
<JTFish> anything that does REAL WORLD shit
<sage> what is "real world"?
<JTFish> serving internet content etc
<JTFish> database servers
<JTFish> I dunno
<JTFish> mass encoding of files
<sage> lots of startups and established companies are already betting on ARM for their cloud server offerings
<sage> database and mass encoding, ok
<sage> what else
<JTFish> are you saying
<JTFish> i'm 2 to 1
<JTFish> for x86
<JTFish> also I should just go full retard and say minecraft servers
<sage> the power savings are big, if they can run part of their operation on ARM and make it financially viable, they will do it

QUICK UPDATE: In the linked article:

The next Intel uArch will be very similar to the approach used by AMD with Zen – perfect balance of power consumption/performance/price – but with a huge news: in order to save physical space (Smaller Die) and to improve the power consumption/performance ratio, Intel will throw away some old SIMD and old hardware remainders.

The 100% backward hardware x86 compatibility will not guaranteed anymore, but could be not a handicap (Some SIMD today are useless, and also we can use emulators or cloud systems). Nowadays a lot of software house have to develop code for ARM and for x86, but ARM is lacking useful SIMD. So, frequently, these software are a watered-down compromise.

Intel will be able to develop a thin and fast x86 uArch, and ICC will be able to optimize the code both for ARM and for x86 as well.

This new uArch will be ready in 2019-2020.

Curious. Well, as long as they don't go full Transmeta on us, it may be fine.

December 27, 2016 06:51 PM

December 23, 2016

Dave Airlie: radv and doom - kinda

Yesterday Valve gave me a copy of DOOM for Christmas (not really for Christmas), and I got the wine bits in place from Fedora, then I spent today trying to get DOOM to render on radv.



Thanks to ParkerR on #radeon for taking the picture from his machine, I'm too lazy.

So it runs kinda, it hangs the GPU a fair bit, it misrenders some colors in some scenes, but you can see most of it. I'm not sure if I'll get back to this before next year (I'll try), but I'm pretty happy to have gotten it this far in a day, though I'm sure the next few things will be much more difficult to debug.

The branch is here:
https://github.com/airlied/mesa/commits/radv-wip-doom-wine

December 23, 2016 07:26 AM

December 22, 2016

Daniel Vetter: How do you do docs?

The fancy new Sphinx-based documentation landed upstream a while ago. Jani Nikula has written a nice overview on LWN (part 2). And it is getting used a lot. But judging by how often I type it in replies on the mailing list, what’s missing is a super-short howto. To build the documentation, run:

$ make DOCBOOKS="" htmldocs

The output can then be found in Documentation/output/. When typing documentation please always check that your new text does get rendered. The output also contains documentation about kernel-doc and the toolchain itself. Since the build is incremental it is recommended that you first run it before touching anything. That way you’ll only see warnings in areas you’ve touched, not all of them - the build is unfortunately somewhat noisy.
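
Put together, the edit-and-check loop looks roughly like this (xdg-open is just one way to view the result; pointing any browser at the output directory works equally well):

$ make DOCBOOKS="" htmldocs
$ xdg-open Documentation/output/index.html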

December 22, 2016 06:00 AM

December 20, 2016

Gustavo F. Padovan: Collabora Contributions to Linux Kernel 4.9

Linux Kernel 4.9 was released this week and once more Collabora developers took part in the kernel development cycle. This time we contributed 37 patches by 11 different developers, our highest number of individual contributors in a kernel release ever. Remember that in the previous release we had our highest total number of contributions. The numbers show how Collabora has been increasing its commitment to contributing to the upstream kernel community.

For those who want to see an overall report of what happened in the 4.9 kernel, take a look at the always good LWN articles: part 1, 2 and 3.

As for Collabora contributions, most of our work was in the DRM and DMABUF subsystems. Andrew Shadura and Daniel Stone added fixes to the AMD and i915 drivers respectively. Emilio López added the missing install of the sync_file.h uapi header.

Gustavo Padovan advanced a few more steps on the goal to add explicit fencing to the DRM subsystem; besides a few improvements to Sync File and the virtio_gpu driver, he also de-staged the SW_SYNC validation framework that helps with Sync File testing.

Peter Senna added drm_bridge support to imx-ldb device while Tomeu Vizoso improved drm_bridge support on RockChip’s analogic-dp and added documentation about validation of the DRM subsystem.

Outside of the Graphics world we had Enric Balletbo i Serra adding support to upload firmware on the ziirave watchdog device. Fabien Lahoudere and Martyn Welch enabled and improved DMA support for i.MX53 UARTs, allowing the device tree to decide whether DMA is used or not. Martyn also added a fake VMEbus (Versa Module Europa bus) to help with VME driver development.

On the Bluetooth subsystem, Frédéric Dalleau fixed an error code for SCO connections that was causing long timeouts and failures on SCO connection requests. Finally, Robert Foss worked to clear the pipeline on errors for cdc-wdm USB devices.

Andrew Shadura (1):

Daniel Stone (1):

Emilio López (2):

Enric Balletbo i Serra (1):

Fabien Lahoudere (3):

Frédéric Dalleau (1):

Gustavo Padovan (14):

Martyn Welch (4):

Peter Senna Tschudin (1):

Robert Foss (2):

Tomeu Vizoso (7):

December 20, 2016 04:46 PM

December 14, 2016

Daniel Vetter: Midlayers, Once More With Feelings!

The collective internet troll fest had its fun recently discussing AMD’s DAL. Hacker news discussed the rejection and some of the reactions, reddit had some fun and of course everyone on phoronix forums was going totally nuts. Luckily reason seems to finally prevail with LWN covering things too. I don’t want to spill more bits over the topic itself (read the LWN coverage and mailing list threads for that), but I think it’s worth looking at the fundamental underlying problem a bit more.

Discussing midlayers seems to be one of the recurring topics in the linux kernel. There’s the original midlayer-mistake article from Neil Brown that seems to have started it all. But LWN gained more articles in the years since, covering the iscsi driver as a study in avoiding OS abstraction layers, or a similar case in wireless with the Broadcom driver. The dismissal of midlayers and hailing of helper libraries has become so prevalent that calling your shiny new subsystem libfoo (viz. libnvdimm) seems to be a powerful trick to speed up its acceptance.

It seems common knowledge and accepted fact, but still there’s a constant stream of drivers that come packaged with huge abstraction layers - mostly to abstract OS internals, but very often also just a few midlayers between different components within the driver. A major reason for this is certainly that submissions by former proprietary teams, or just generally code developed internally behind closed doors, suffer from the platform problem - again LWN has you covered. If your driver is not open and part of upstream (or even for an open source OS), then the provided services and interfaces are fixed. And there’s no way to improve things; worse, when change does happen the driver team generally doesn’t have any influence at all over what changes and how. Hence it makes sense to insert a big abstraction layer to isolate the driver from the outside madness.

But that does not explain why big drivers (and more so, subsystems) come with some nice abstraction layers wedged in-between different parts. I believe, without any real proof, that this is because company planners are extremely risk averse: Sure, working together and sharing code across the entire project has long-term benefits. But the more people are involved the bigger the risk for a bikeshed fest or some other delay, and you never want to be the one team that delayed a release by a few months. Adding isolation in the form of lots of fixed abstraction layers helps with that. But long term, and for really big projects, sharing code and working together has clear benefits, even if there’s routinely a hiccup - the breakneck speed of Linux kernel development overall is testament enough for that I think.

All that is just technicalities really, because in the end upstream and open source is about collaboratively developing software. That requires shared control, and interfaces between different components need to be a lot more permeable. In my experience that core idea of handing control over development to outsiders is the really scary part of joining upstream, since followed through to its conclusions it means you need to completely rethink how products are planned and developed: The entire organisation, from individual developers, to teams and including the management chain have to change. And that freaks people out.

In summary I think code submissions with lots of midlayers are bound to stay with us, and that’s good: Because new midlayers mean that new teams and companies are starting to take upstream seriously. And new midlayers getting cleaned up means new teams and new companies are going through the painful changes necessary to adjust to an upstream-first model. The code itself is just the canary for the real shifts happening.

In other news: World domination is still progressing according to plan.

December 14, 2016 10:00 AM

December 12, 2016

Kees Cook: security things in Linux v4.9

Previously: v4.8.

Here are a bunch of security things I’m excited about in the newly released Linux v4.9:

Latent Entropy GCC plugin

Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX’s Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc, this provides some additional uncertainty to the kernel’s entropy pool. Since the entropy actually gathered is hard to measure, no entropy is “credited”, but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.

vmapped kernel stack and thread_info relocation on x86

Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process’s stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stack via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.

Related to this, the kernel was storing thread_info (which contained sensitive values like addr_limit) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86.

CONFIG_DEBUG_RODATA mandatory on arm64

As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there’s no reason to make the protection optional.
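
If you want to check whether a given kernel build has these options enabled, grepping its config is enough.  The option names below are the ones discussed above; the latent-entropy one is quoted from memory, and the config path assumes your distro installs it under /boot (otherwise /proc/config.gz works too):

# check the running kernel's config for the features discussed here
grep -E 'CONFIG_VMAP_STACK|CONFIG_THREAD_INFO_IN_TASK|CONFIG_DEBUG_RODATA|CONFIG_GCC_PLUGIN_LATENT_ENTROPY' /boot/config-$(uname -r)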

random_page() cleanup

Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd() for ET_DYN and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap (and therefore shared library) and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner random_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand.

That’s it for now! Let me know if there are other fun things to call attention to in v4.9.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

December 12, 2016 07:05 PM

Michael Kerrisk (manpages): man-pages-4.09 is released

I've released man-pages-4.09. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This release resulted from patches, bug reports, reviews, and comments from 44 contributors. This is one of the more substantial releases in recent times, with more than 500 commits changing around 190 pages. The changes include the addition of eight new pages and significant enhancements or rewrites to many existing pages.

Among the more significant changes in man-pages-4.09 are the following:

In addition to the above, substantial changes were also made to the close(2), getpriority(2), nice(2), timer_create(2), timerfd_create(2), random(4), and proc(5) pages.

December 12, 2016 02:22 PM

December 10, 2016

James Bottomley: TPM enabling gnome-keyring

One of the questions about the previous post on using your TPM as a secure key store was “could the TPM be used to protect ssh keys?”  The answer is yes, because openssh uses openssl (so you can simply convert an openssh private key to a TPM private key) but the ssh-agent wouldn’t work because ssh-add passes in the private keys by their component primes, which is not possible when the private key is guarded by the TPM.  However,  that made me actually look at gnome-keyring to figure out how it worked and whether it could be used with the TPM.  The answer is yes, but there are also some interesting side effects of TPM enabling gnome-keyring.

Gnome-keyring Architecture

Gnome-keyring consists of essentially three components: a pluggable store backend, a secure passphrase holder (which is implemented as a backend store) and an agent frontend.  The frontend and backend talk to each other using the pkcs11 protocol.  This also means that the backend can serve anything that also speaks pkcs11, which most encryption systems do.  The stores consist of a variety of file or directory backed keys (for instance the ssh-store simply loads all the ssh keys in your $HOME/.ssh; the secret-store uses the gnome default keyring store, $HOME/.local/share/keyrings/, to store collections of passwords).  The frontends act as bridges between a variety of external protocols and the keyring daemon.  They take whatever external protocol they’re speaking in, convert the request to pkcs11 and query the backends for the information.  The most important frontend is the login one, which is called by gnome-keyring-daemon at start of day to unlock the secret-store (which contains all your other key passwords) using your login password as the key.  The ssh-agent frontend speaks the ssh agent protocol and converts key and signing requests to pkcs11, which is mostly served by the ssh-store.  The gpg-agent frontend speaks a very cut down version of the gpg agent protocol: basically all it does is allow you to store gpg key passwords in the secret-store; it doesn’t do any cryptographic operations.

Pkcs11 Essentials

Pkcs11 is a highly complex protocol designed for opening sessions which query or operate on tokens, which roughly speaking represent a bundle of objects (If you have a USB crypto key, that’s usually represented as a token and your stored keys as objects).  There is some intermediate stuff about slots, but that mostly applies to tokens which may be offline and need insertion, which isn’t relevant to the gnome keyring.  The objects may have several levels of visibility, but the most common is public (always visible) and private (must be logged in to the token to see them).  Objects themselves have a variety of attributes, some of which depend on what type of object they are and some of which are universal (like id and label).  For instance, an object representing an RSA public key would have the public exponent and the public modulus as queryable attributes.

The pkcs11 protocol also has a reasonably comprehensive object finding protocol.  An arbitrary list of attributes and values can be passed to the query and it will return all objects that fully match.  The token identifier is a query attribute which may be present but doesn’t have to be, so if you omit it, you end up searching over every token the pkcs11 subsystem knows about.

The other operation that pkcs11 understands is logging into a token with a pin (which is pkcs11 speak for a passphrase).  The pin doesn’t have to be supplied by the entity performing the login, it may be supplied to the token via an out of band mechanism (for instance the little button on the yubikey, or even a real keypad).  The important thing for gnome keyring here is that logging into a token may be as simple as sending the login command and letting the token sort out the authorization, which is the way gnome keyring operates.

You also need to understand the conventions about searching for keys.  The pkcs11 standard recommends (but does not require) that public and private key pairs, which exist as two separate objects, should have the same id attribute.  This means that if you want to find an rsa private key, the way you do this is by searching the public objects for the exponent and modulus.  Once this is returned, you log into the token and retrieve the private key object by the public key id.

The final thing to understand is that once you have the private key object, it merely acts as an entitlement to have the token perform certain private key operations for you (like encryption, decryption or signing); it doesn’t mean you have access to the private key itself.
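
If you want to poke at these objects yourself, OpenSC's pkcs11-tool speaks the same protocol and can be pointed at the gnome-keyring module.  The module path below is a guess for a typical distro layout and will differ on yours, so treat it as an illustration:

# list the public objects the gnome-keyring pkcs11 module exposes
# (add -l to log in to the token and see the private objects too)
pkcs11-tool --module /usr/lib64/pkcs11/gnome-keyring-pkcs11.so --list-objects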

Gnome Keyring handling of ssh keys

Once the ssh-agent side of gnome keyring receives a challenge, it must respond by returning the private key signature of the challenge.  To do this, it searches pkcs11 for the key used for the challenge (for RSA keys, it searches by modulus and exponent, for DSA keys it searches by signature primes, etc).  One interesting point to note is the search isn’t limited to the gnome keyring ssh token store, so if the key is found anywhere, in any pkcs11 token, it will be used.  The expectation is that the key id attribute will be the ssh fingerprint and the key label attribute will be the ssh comment, but these aren’t used in the actual search.  Once the public key is found, the agent logs into the token, retrieves the private key by label and id and proceeds to get the private key to sign the challenge.

Adding TPM key handling to the ssh store

Gnome keyring is based on GNU libgcrypt which handles all cryptographic objects via s-expressions (which are basically lisp like strings).  Gcrypt itself doesn’t seem to have an s-expression for a token, so the actual signing will be done inside the keyring code.  The s-expression I chose to represent a TPM based private key is

(private-key
  (rsa
    (tpm
      (blob <binary key blob>)
      (auth <authorization>))))

The rsa is necessary for it to be recognised for RSA signatures.  Now the ssh-store is modified to recognise the PEM guards for TPM keys, load the key blob up into the s-expression and ask the secret-store for the authorization passphrase which is also loaded.  In theory, it might be safer to ask the secret store for the authorization at key use time, but this method mirrors what happens to private keys, which are decrypted at load time and stored as s-expressions containing the component primes.

Now the RSA signing code is hooked to check for TPM s-expressions and divert the signature to the TPM if they’re found.  Once this is done, gnome keyring is fully enabled for TPM based ssh keys.  The initial email thread about this is here, and an openSUSE build repository of the modified code is here.

One important design constraint for this is that people (well, OK me) have a lot of ssh keys, sometimes more than could be effectively stored by the TPM in its internal shielded memory (plus if you’re like me, you’re using the TPM for other things like VPN), so the design of the keyring TPM additions is not to burden that memory further, thus TPM keys are only loaded into the TPM whenever they’re needed for an operation and are unloaded otherwise.  This means the TPM can scale to hundreds of keys, but at the expense of taking longer: Instead of simply asking the TPM to sign something, you have to first ask the TPM to load and unwrap (which is an RSA operation) the key, then use it to sign.  Effectively it’s two expensive TPM operations per real cryptographic function.

Using TPM based ssh keys

Once you’ve installed the modified gnome-keyring package, you’re ready to actually make use of it.  Since we’re going to replace all your ssh private keys with TPM equivalents, which are keyed to a given storage root key (SRK) which changes every time the TPM is cleared, it is prudent to take a backup of all your ssh keys on some offline encrypted USB stick, just in case you ever need to restore them (or transfer them to a new laptop).

cd ~/.ssh/
for pub in *rsa*.pub; do
    priv=$(basename $pub .pub)
    echo $priv
    create_tpm_key -m -a -w $priv ${priv}.tpm
    mv ${priv}.tpm $priv
done

You’ll be prompted first to create a TPM authorization password for the key, then to verify it, then to give the PEM password for the ssh key (for each key).  Note, this only transfers RSA keys (the only algorithm the TPM can handle) and also note the script above is overwriting the PEM private key, so make sure you have a backup.  The create_tpm_key command comes from the openssl_tpm_engine package, which I’ve patched here to support random migration authority and well known SRK authority.
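
A quick way to confirm a key really was converted is to look at its guards, which should now be the TSS ones rather than the usual private key markers:

head -n 1 ~/.ssh/id_rsa
# expected output: -----BEGIN TSS KEY BLOB-----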

All you have to do now is log out and back in (to restart the gnome-keyring daemon with the new ssh keystore) and you’re using TPM based keys for all your ssh operations.  I’ve noticed that this adds a couple of hundred milliseconds per login, so if you batch stuff over ssh, this is why your scripts are slower.

December 10, 2016 07:48 PM

December 06, 2016

LPC 2016: Linux Plumbers Conference 2017

It’s our pleasure to announce that Linux Plumbers Conference 2017 will take place on September 13-15 2017, in Los Angeles, California, USA. The conference will be co-located with the Linux Foundation Open Source Summit North America.

Stay tuned for more information as the Linux Plumbers Conference committee is starting to plan for the 2017 edition.

We hope you’ll join us in 2017.

The LPC Planning Team.

 

December 06, 2016 07:51 PM

December 05, 2016

James Bottomley: Using Your TPM as a Secure Key Store

One of the new features of Linux Plumbers Conference this year was the TPM Microconference, which facilitated great discussions both in the session itself and in the hallways.  Quite a bit of discussion was generated by the Beginner’s Guide to the TPM talk I gave, mostly because I blamed the Trusted Computing Group for the abject failure to adopt TPMs for anything citing the incredible complexity of their stack.

The main thing that came out of this discussion was that a lot of this stack complexity can be hidden from users and we should concentrate on making the TPM “just work” for all cryptographic functions where we have parallels in the existing security layers (like the keystore).  One of the great advantages of the TPM, instead of messing about with USB pkcs11 tokens, is that it has a file format for TPM keys (I’ll explain this later) which can be used directly in place of standard private key files.  However, before we get there, let’s discuss some of the basics of how your TPM works and how to make use of it.

TPM Basics

Note that all of what I’m saying below applies to a 1.2 TPM (the type most people have in their laptops).  2.0 TPMs are now appearing on the market, but chances are you have a 1.2.

A TPM is traditionally delivered in your laptop in an uninitialised state.  In older laptops, the TPM is traditionally disabled and you usually have to find an entry in the BIOS menu to enable it.  In more modern laptops (thanks to Windows 10) the TPM is enabled in the BIOS and ready for the OS install to make use of it.  All TPMs are delivered with one manufacturer set key called the Endorsement Key (EK).  This key is unique to your TPM (like an identifying label) and is used as part of the attestation protocol.  Because the EK is a unique label, the attestation protocol is rather complex, involving a so called privacy CA to protect your identity, but because it isn’t needed in order to use the TPM as a secure keystore, I won’t cover it further.

The other important key, which you have to generate, is called the Storage Root Key.  This key is generated internally within the TPM once somebody takes ownership of it.  The package you need to begin using the tpm is tpm-tools, which is packaged by most distros. You must also have the Linux TSS stack trousers installed (just installing tpm-tools will often pull this in) and have the tcsd part of trousers running (usually systemctl start tcsd; systemctl enable tcsd). I tend to configure my TPM with an owner password (for things like resetting dictionary attacks) but a well known storage root key authority.  To do this from a fully cleared and enabled TPM, execute

tpm_takeownership -z

And give your chosen owner password when prompted.  If you get an error, chances are you need to go back to the BIOS menu and actively clear and reset the TPM (usually under the security options).
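
If the command fails in more confusing ways, it is worth first checking that tcsd can talk to the chip at all; tpm_version from the same tpm-tools package is a harmless way to do that:

# prints the chip vendor and 1.2 spec version if tcsd and the TPM are both working
tpm_version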

Aside about Authority and the Trusted Security Stack

To the TPM, an “authority” is a 20 byte number you use to prove you’re allowed to manipulate whatever object you’re trying to use.  The TPM typically has a well known way of converting typed passwords into these 20 byte codes.  The way you prove you know the authority is to add a Hashed Message Authentication Code (HMAC) to your TPM command.  This means that the hash can only be generated by someone who knows the authority for the object, but anyone seeing the hash cannot derive the authority from it.  The utility of this is that the trousers library (tspi) generates the HMAC before the TPM command is passed to the central daemon (tcsd), meaning that nothing except you and the TPM know the authority.

The final thing about authority you need to know is that the TPM has a concept of “well known authority” which simply means supply 20 bytes of zeros.  It’s kind of paradoxical to have a secret everyone knows; however, there are reasons for this: for most objects in the TPM, whether you require authority to use them is optional, but for some it is mandatory.  For objects (like the SRK) where authority is mandatory, using the well known authority is equivalent to saying actually I don’t need authorization for this object.

The Storage Root Key (SRK)

Once you’ve generated the SRK as above, the TPM keeps the secret part permanently hidden, but can be persuaded to give anyone the public part.  In TPM 1.2, the SRK is an RSA 2048 key.  On most modern TPMs, you have to tell the TPM you want anyone to be able to read the public part of the storage root key, which you do with this command

tpm_restrictsrk -a

You’ll get prompted for the owner password.  Once you execute this command, anyone who knows the SRK authority (which you’ve set to be well known) is allowed to read the public part.

Why all this fuss about SRK authorization?  Well, traditionally, the TPM is designed for use in a hostile multi-user environment.  In the relaxed, no authorization, environment I’ve advised you to set up, anyone who knows the SRK can upload any storage object (like a key or protected blob) into the TPM.  This means, since the TPM has very limited storage, that they could in theory do a DoS attack against the TPM simply by filling it with objects.  On a laptop where there’s only one user (you) this is not usually a concern, hence the advice to use a well known authority, which makes the TPM much easier to use.

The way external objects (like keys or data blobs) are uploaded into the TPM is that they all have a parent (which must be a storage key) and they are encrypted to the public part of this key (in TPM parlance, this is called wrapping).  The TPM can have deep key hierarchies (all eventually parented to the SRK), but for a laptop, it makes sense simply to use the SRK as the only storage key and wrap everything for it as the parent.  Now here’s the reason for the well known authority: to upload an object into the TPM, it not only needs to be wrapped to the parent key, you also need to use the parent key authority to perform the upload.  The object you’re using also has a separate authority.  This means that when you upload and use a key, if you’ve set a SRK password, you’ll end up having to type both the SRK password and the key password pretty much every time you use it, which is a bit of a pain.

The tools used to create wrapped keys are found in the openssl_tpm_engine package.  I’ve done a few patches to make it easier to use (mostly by trying well known authority first before asking for the SRK password), so you can see my patched version here.  The first thing you can do is take any PEM key file you have and wrap it for your tpm

create_tpm_key -m -w test.key test.tpm.key

This creates a TPM key file test.tpm.key containing a wrapped key for your TPM with no authority (to add an authority password, use the -a option).  If you cat the test.tpm.key file, you’ll see it looks like a standard PEM file, except the guards are now

-----BEGIN TSS KEY BLOB-----
-----END TSS KEY BLOB-----

This key is now wrapped for your TPM’s SRK and would only be usable on your laptop.  If you’re fortunate enough to be using an application linked with gnutls, you can simply use this key with the URI  tpmkey:file=<path to test.tpm.key>.  If you’re using openssl, you need to patch it to get it to use TPM keys easily (see below).

The ideal, however, would be that since these are PEM files with unique guards, any ssl provider should simply recognise the guards and load the key into the TPM.  This means that in order to use a TPM key, you take a standard PEM private key file, transform it into a TPM key file and then simply copy it back to where the original key file was being used from and voila! you’re using a TPM based key.  This is what the openssl patches below do.

Getting TPM keys to “just work” with openssl

In openssl, external encryption processors, like the TPM or USB keys are used by things called engines.  The engine you need for the TPM is also in the openssl_tpm_engine package, so once you’ve installed that package, the engine is available.  Unfortunately, openssl doesn’t naturally use a particular engine unless told to do so (most of the openssl tools have a -engine option for this).  However, having to specify the engine in every application somewhat spoils the “just works” aspect we’re looking for, so the openssl patches here allow an engine to specify that it knows how to parse a PEM file and can load a key from it.  This allows you simply to replace the original key file with a TPM protected key file and have your application continue working with it.
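
For comparison, this is roughly what explicit engine use looks like without the patches; the engine id tpm comes from openssl_tpm_engine, and the host here is obviously just a stand-in:

# tell openssl explicitly to load the key through the tpm engine
openssl s_client -engine tpm -keyform engine -key test.tpm.key -connect example.com:443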

As a demo of the usefulness, I’m using it on my current laptop with all my VPN keys.  It is also possible to use it with openssh keys, since they’re standard PEM files.  However, the way openssh works with agents means that the agent cannot handle the keys and you have to type the password (if you set one on the key) each time you use it.

It should be noted that the idea of having PEM based TPM keys just work in openssl is encountering resistance.  However, it does just work in gnutls (provided you change the file name to be a tpmkey:file= URL).

Conclusions (or How Well is it Working?)

As I said above, I’m currently using this scheme for my openvpn and ssh keys.  I have to confess, since I use openssh a lot, I got very tired of having to type the password on every ssh operation, so I’ve gone back to using non-TPM based keys which can be handled by the agent.  Fixing this is on my list of things to look at.  However, I still am using TPM based keys for my openvpn.

Even for openvpn, though, there are hiccoughs: the trousers daemon, tcsd, crashes periodically on my platform.  When it does, the vpn goes down (because the VPN needs a key based authentication transaction every hour to rotate the symmetric encryption keys).  Unfortunately, just restarting tcsd isn’t enough because the design of trousers doesn’t seem to be robust to this failure (even though the tspi part linked with the application could recreate all the keys), so the VPN itself must be restarted when this happens, which makes it rather user unfriendly.  Fixing trousers to cope with tcsd failure is also on my list of things to fix …

December 05, 2016 04:41 PM

December 02, 2016

Matthew Garrett: Ubuntu still isn't free software

Mark Shuttleworth just blogged about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.

The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu.

This remains incompatible with the principles of free software. The freedom to take someone else's work and redistribute it is a vital part of the four freedoms. It's legitimate for Canonical to insist that you not pass it off as their work when doing so, but their IP policy continues to insist that you remove all references to Canonical's trademarks even if their use would not infringe trademark law.

If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.

Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.

[1] And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise


December 02, 2016 09:37 AM

November 16, 2016

Pavel Machek: Linux did not win, yet

http://www.cio.com/article/3141918/linux/linux-has-won-microsoft-joins-the-linux-foundation.html Yes, Linux won on servers. Unfortunately... servers are not that important, and Linux still did not win on desktops (and is not much closer now than it was in 1998, AFAICT). We kind-of won on phones, but are not getting any benefits from that. Android is incompatible with X applications. Kernels on phones are so patched that updating kernel on phone is impossible... :-(. This means that Microsoft sponsors Linux Foundation. Well, nice, but not a big deal. Has Microsoft promised not to use their patents against Linux? Does their kernel actually contain vfat code? Can I even get source for "their" Linux kernel? [Searching for Linux on microsoft.com does not reveal anything interesting; might be switching to english would help...]

November 16, 2016 11:49 PM

November 15, 2016

Paul E. Mc Kenney: Another great Linux Plumbers Conference!

A big “thank you” to the program committee, to the microconference leads, to the refereed-track speakers, and, most of all, to the attendees! We had a great Linux Plumbers Conference this year, and we could not have done it without all of you!!!

November 15, 2016 10:47 PM

November 14, 2016

Pavel Machek: foxtrotgps: not suitable for spacecraft navigation

Subject: foxtrotgps: not suitable for spacecraft navigation
Package: foxtrotgps
Version: 1.2.0-1
Severity: normal
Dear Maintainer,
Trying to use foxtrotgps in the spacecraft leads to some interesting
glitches.
When date line is reached, "track traveled" jumps over the whole
world, and "your position" gets de-synchronized from point when the
red line is painted.
Reproduced with Vostok-1 spacecraft.

November 14, 2016 10:22 AM

November 10, 2016

Matthew Garrett: Tor, TPMs and service integrity attestation

One of the most powerful (and most scary) features of TPM-based measured boot is the ability for remote systems to request that clients attest to their boot state, allowing the remote system to determine whether the client has booted in the correct state. This involves each component in the boot process writing a hash of the next component into the TPM and logging it. When attestation is requested, the remote site gives the client a nonce and asks for an attestation, the client OS passes the nonce to the TPM and asks it to provide a signed copy of the hashes and the nonce and sends them (and the log) to the remote site. The remote site then replays the log to ensure it matches the signed hash values, and can examine the log to determine whether the system is trustworthy (whatever trustworthy means in this context).

When this was first proposed people were (justifiably!) scared that remote services would start refusing to work for users who weren't running (for instance) an approved version of Windows with a verifiable DRM stack. Various practical matters made this impossible. The first was that, until fairly recently, there was no way to demonstrate that the key used to sign the hashes actually came from a TPM[1], so anyone could simply generate a set of valid hashes, sign them with a random key and provide that. The second is that even if you have a signature from a TPM, you have no way of proving that it's from the TPM that the client booted with (you can MITM the request and either pass it to a client that did boot the appropriate OS or to an external TPM that you've plugged into your system after boot and then programmed appropriately). The third is that, well, systems and configurations vary so much that outside very controlled circumstances it's impossible to know what a "legitimate" set of hashes even is.

As a result, so far remote attestation has tended to be restricted to internal deployments. Some enterprises use it as part of their VPN login process, and we've been working on it at CoreOS to enable Kubernetes clusters to verify that workers are in a trustworthy state before running jobs on them. While useful, this isn't terribly exciting for most people. Can we do better?

Remote attestation has generally been thought of in terms of remote systems requiring that clients attest. But there's nothing that requires things to be done in that direction. There's nothing stopping clients from being able to request that a server attest to its state, allowing clients to make informed decisions about whether they should provide confidential data. But the problems that apply to clients apply equally well to servers. Let's work through them in reverse order.

We have no idea what expected "good" values are

Yes, and this is a problem. CoreOS ships with an expected set of good values, and we had general agreement at the Linux Plumbers Conference that other distributions would start looking at what it would take to do the same. But how do we know that those values are themselves trustworthy? In an ideal world this would involve reproducible builds, allowing anybody to grab the source code for the OS, build it locally and verify that they have the same hashes.

Ok. So we're able to verify that the booted OS was good. But how about the services? The rkt container runtime supports measuring each container into the TPM, which means we can verify which container images were started. If container images are also built in such a way that they're reproducible, users can grab the source code, rebuild the container locally and again verify that it has the same hashes. Users can then be sure that the remote site is running the code they're looking at.

Or can they? Not really - a general purpose OS has all kinds of ways to inject code into containers, so an admin could simply replace the binaries inside the container after it's been measured, or ptrace() the server, or modify rkt so it generates correct measurements regardless of the image or, well, there's lots they could do. So a general purpose OS is probably a bad idea here. Instead, let's imagine an immutable OS that does nothing other than bring up networking and then reads a config file that tells it which container images to download and run. This reduces the amount of code that needs to support reproducible builds, making it easier for a client to verify that the source corresponds to the code the remote system is actually running.

Is this sufficient? Eh sadly no. Even if we know the valid values for the entire OS and every container, we don't know the legitimate values for the system firmware. Any modified firmware could tamper with the rest of the trust chain, making it possible for you to get valid OS values even if the OS has been subverted. This isn't a solved problem yet, and really requires hardware vendor support. Let's handwave this for now, or assert that we'll have some sidechannel for distributing valid firmware values.

Avoiding TPM MITMing

This one's more interesting. If I ask the server to attest to its state, it can simply pass that through to a TPM running on another system that's running a trusted stack and happily serve me content from a compromised stack. Suboptimal. We need some way to tie the TPM identity and the service identity to each other.

Thankfully, we have one. Tor supports running services in the .onion TLD. The key used to identify the service to the Tor network is also used to create the "hostname" of the system. I wrote a pretty hacky implementation that generates that key on the TPM, tying the service identity to the TPM. You can ask the TPM to prove that it generated a key, and that allows you to tie both the key used to run the Tor service and the key used to sign the attestation hashes to the same TPM. You now know that the attestation values came from the same system that's running the service, and that means you know the TPM hasn't been MITMed.

How do you know it's a TPM at all?

This is much easier. See [1].



There's still various problems around this, including the fact that we don't have this immutable minimal container OS, that we don't have the infrastructure to ensure that container builds are reproducible, that we don't have any known good firmware values and that we don't have a mechanism for allowing a user to perform any of this validation. But these are all solvable, and it seems like an interesting project.

"Interesting" isn't necessarily the right metric, though. "Useful" is. And I think this is very useful. If I'm about to upload documents to a SecureDrop instance, it seems pretty important that I be able to verify that it is a SecureDrop instance rather than something pretending to be one. This gives us a mechanism.

The next few years seem likely to raise interest in ensuring that people have secure mechanisms to communicate. I'm not emotionally invested in this one, but if people have better ideas about how to solve this problem then this seems like a good time to talk about them.

[1] More modern TPMs have a certificate that chains from the TPM's root key back to the TPM manufacturer, so as long as you trust the TPM manufacturer to have kept control of that you can prove that the signature came from a real TPM


November 10, 2016 08:48 PM

November 04, 2016

LPC 2016: Closing party at the compound

We have a map here for you to locate the compound with

November 04, 2016 10:18 PM

October 28, 2016

Matthew Garrett: Of course smart homes are targets for hackers

The Wirecutter, an in-depth comparative review site for various electrical and electronic devices, just published an opinion piece on whether users should be worried about security issues in IoT devices. The summary: avoid devices that don't require passwords (or don't force you to change a default) and devices that want you to disable security, follow general network security best practices, but otherwise don't worry - criminals aren't likely to target you.

This is terrible, irresponsible advice. It's true that most users aren't likely to be individually targeted by random criminals, but that's a poor threat model. As I've mentioned before, you need to worry about people with an interest in you. Making purchasing decisions based on the assumption that you'll never end up dating someone with enough knowledge to compromise a cheap IoT device (or even meeting an especially creepy one in a bar) is not safe, and giving advice that doesn't take that into account is a huge disservice to many potentially vulnerable users.

Of course, there's also the larger question raised by the last week's problems. Insecure IoT devices still pose a threat to the wider internet, even if the owner's data isn't at risk. I may not be optimistic about the ease of fixing this problem, but that doesn't mean we should just give up. It is important that we improve the security of devices, and many vendors are just bad at that.

So, here's a few things that should be a minimum when considering an IoT device:

  • Does the vendor publish a security contact? (If not, they don't care about security)
  • Does the vendor provide frequent software updates, even for devices that are several years old? (If not, they don't care about security)
  • Has the vendor ever denied a security issue that turned out to be real? (If so, they care more about PR than security)
  • Is the vendor able to provide the source code to any open source components they use? (If not, they don't know which software is in their own product and so don't care about security, and also they're probably infringing my copyright)
  • Do they mark updates as fixing security bugs? (If not, they care more about hiding security issues than fixing them)
  • Has the vendor ever threatened to prosecute a security researcher? (If so, again, they care more about PR than security)
  • Does the vendor provide a public minimum support period for the device? (If not, they don't care about security or their users)

I've worked with big name vendors who did a brilliant job here. I've also worked with big name vendors who responded with hostility when I pointed out that they were selling a device with arbitrary remote code execution. Going with brand names is probably a good proxy for many of these requirements, but it's insufficient.

So here are my recommendations to The Wirecutter - talk to a wide range of security experts about the issues that users should be concerned about, and figure out how to test these things yourself. Don't just ask vendors whether they care about security, ask them what their processes and procedures look like. Look at their history. And don't assume that just because nobody's interested in you, everybody else's level of risk is equal.



October 28, 2016 05:23 PM

    October 25, 2016

    Valerie Aurora: Why I won’t be attending Systems We Love

    Systems We Love is a one day event in San Francisco to talk excitedly about systems computing. When I first heard about it, I was thrilled! I love systems so much that I moved from New Mexico to the Bay Area when I was 23 years old purely so that I could talk to more people about them. I’m the author of the Kernel Hacker’s Bookshelf series, in which I enthusiastically described operating systems research papers I loved in the hopes that systems programmers would implement them. The program committee of Systems We Love includes many people I respect and enjoy being around. And the event is so close to me that I could walk to it.

    So why am I not going to Systems We Love? Why am I warning my friends to think twice before attending? And why am I writing a blog post warning other people about attending Systems We Love?

    The answer is that I am afraid that Bryan Cantrill, the lead organizer of Systems We Love, will say cruel and humiliating things to people who attend. Here’s why I’m worried about that.

    I worked with Bryan in the Solaris operating systems group at Sun from 2002 to 2004. We didn’t work on the same projects, but I often talked to him at the weekly Monday night Solaris kernel dinner at Osteria in Palo Alto, participated in the same mailing lists as him, and stopped by his office to ask him questions every week or two. Even 14 years ago, Bryan was one of the best systems programmers, writers, and speakers I have ever met. I admired him and learned a lot from him. At the same time, I was relieved when I left Sun because I knew I’d never have to work with Bryan again.

    Here’s one way to put it: to me, Bryan Cantrill is the opposite of another person I admire in operating systems (whom I will leave unnamed). This person makes me feel excited and welcome and safe to talk about and explore operating systems. I’ve never seen them shame or insult or put down anyone. They enthusiastically and openly talk about learning new systems concepts, even when other people think they should already know them. By doing this, they show others that it’s safe to admit that they don’t know something, which is the first step to learning new things. They are helping create the kind of culture I want in systems programming – the kind of culture promoted by Papers We Love, which Bryan cites as the inspiration for Systems We Love.

    By contrast, when I’m talking to Bryan I feel afraid, cautious, and fearful. Over the years I worked with Bryan, I watched him shame and insult hundreds of people, in public and in private, over email and in person, in papers and talks. Bryan is no Linus Torvalds – Bryan’s insults are usually subtle, insinuating, and beautifully phrased, whereas Linus’ insults tend towards the crude and direct. Even as you are blushing in shame from what Bryan just said about you, you are also admiring his vocabulary, cadence, and command of classical allusion. When I talked to Bryan about any topic, I felt like I was engaging in combat with a much stronger foe who only wanted to win, not help me learn. I always had the nagging fear that I probably wouldn’t even know how cleverly he had insulted me until hours later. I’m sure other people had more positive experiences with Bryan, but my experience matches that of many others. In summary, Bryan is supporting the status quo of the existing culture of systems programming, which is a culture of combat, humiliation, and domination.

    People admire and sometimes hero-worship Bryan because he’s a brilliant technologist, an excellent communicator, and a consummate entertainer. But all that brilliance, sparkle, and wit are often used in the service of mocking and humiliating other people. We often laugh and are entertained by what Bryan says, but most of the time we are laughing at another person, or at a person by proxy through their work. I think we rationalize taking part in this kind of cruelty by saying that the target “deserves” it because they made a short-sighted design decision, or wrote buggy code, or accidentally made themselves appear ridiculous. I argue that no one deserves to be humiliated or laughed at for making an honest mistake, or learning in public, or doing the best they could with the resources they had. And if that means that people like Bryan have to learn how to be entertaining without humiliating people, I’m totally fine with that.

    I stopped working with Bryan in 2004, which was 12 years ago. It’s fair to wonder if Bryan has had a change of heart since then. As far as I can tell, the answer is no. I remember speaking to Bryan in 2010 and 2011 and it was déjà vu all over again. The first time, I had just co-founded a non-profit for women in open technology and culture, and I was astonished when Bryan delivered a monologue to me on the “right” way to get more women involved in computing. The second time I was trying to catch up with a colleague I hadn’t seen in a while and Bryan was invited along. Bryan dominated the conversation and the two of us the entire evening, despite my best efforts. I tried one more time about a month ago: I sent Bryan a private message on Twitter telling him honestly and truthfully what my experience of working with him was like, and asking if he’d had a change of heart since then. His reply: “I don’t know what you’re referring to, and I don’t feel my position on this has meaningfully changed — though I am certainly older and wiser.” Then he told me to google something he’d written about women in computing.

    But you don’t have to trust my word on what Bryan is like today. The blog post Bryan wrote announcing Systems We Love sounds exactly like the Bryan I knew: erudite, witty, self-praising, and full of elegant insults directed at a broad swathe of people. He gaily recounts the time he gave a highly critical keynote speech at USENIX, bashfully links to a video praising him at a Papers We Love event, elegantly puts down most of the existing operating systems research community, and does it all while using the words “ancillary,” “verve,” and “quadrennial.” Once you know the underlying structure – a layer cake of vituperation and braggadocio, frosted with eloquence – you can see the same pattern in most of his writing and talks.

    So when I heard about Systems We Love, my first thought was, “Maybe I can go but just avoid talking to Bryan and leave the room when he is speaking.” Then I thought, “I should warn my friends who are going.” Then I realized that my friends are relatively confident and successful in this field, but the people I should be worried about are the ones just getting started. Based on the reputation of Papers We Love and the members of the Systems We Love program committee, they probably fully expect to be treated respectfully and kindly. I’m old and scarred and know what to expect when Bryan talks, and my stomach roils at the thought of attending this event. How much worse would it be for someone new and open and totally unprepared?

    Bryan is a better programmer than I am. Bryan is a better systems architect than I am. Bryan is a better writer and speaker than I am. The one area I feel confident that I know more about than Bryan is increasing diversity in computing. And I am certain that the environment that Bryan creates and fosters is more likely to discourage and drive off women of all races, people of color, queer and trans folks, and other people from underrepresented groups. We’re already standing closer to the exit; for many of us, it doesn’t take much to make us slip quietly out the door and never return.

    I’m guessing that Bryan will respond to me saying that he humiliates, dominates, and insults people by trying to humiliate, dominate, and insult me. I’m not sure if he’ll criticize my programming ability, my taste in operating systems, or my work on increasing diversity in tech. Maybe he’ll criticize me for humiliating, dominating, and insulting people myself – and I’ll admit, I did my fair share of that when I was trying to emulate leaders in my field such as Bryan Cantrill and Linus Torvalds. It’s gone now, but for years there was a quote from me on a friend’s web site, something like: “I’m an elitist jerk, I fit right in at Sun.” It took me years to detox and unlearn those habits and I hope I’m a kinder, more considerate person now.

    Even if Bryan doesn’t attack me, people who like the current unpleasant culture of systems programming will. I thought long and hard about the friendships, business opportunities, and social capital I would lose over this blog post. I thought about getting harassed and threatened on social media. I thought about a week of cringing whenever I check my email. Then I thought about the people who might attend Systems We Love: young folks, new developers, a trans woman at her first computing event since coming out – people who are looking for a friendly and supportive place to talk about systems at the beginning of their careers. I thought about them being deeply hurt and possibly discouraged for life from a field that gave me so much joy.

    Come at me, Bryan.

    Note: comments are now closed on this post. You can read and possibly comment on the follow-up post, When is naming abuse itself abusive?


    Tagged: conferences, feminism, kernel

    October 25, 2016 03:24 AM

    October 24, 2016

    LPC 2016: Things to remember about the altitude in Santa Fe

    Santa Fe is at an altitude of 7,200 feet (2,200m). There are a few things that attendees who are not used to higher altitudes may want to bear in mind:

    October 24, 2016 12:36 AM

    October 23, 2016

    James Bottomley: Home Automation: Coping with Insecurity in the IoT

    Reading Matthew Garrett’s exposés of home automation IoT devices makes most engineers think “hell no!” or “over my dead body!”.  However, there’s also the siren lure that the ability to program your home, or update its settings from anywhere in the world, is phenomenally useful:  for instance, the outside lights in my house used to depend on two timers (located about 50m from each other).  They were old, loud (to the point the neighbours used to wonder what the buzzing was when they visited) and almost always wrongly set for turning the lights on at sunset.  The final precipitating factor for me was the need to replace our thermostat, whose thermistor got so eccentric it started cooling in winter; so away went all the timers and their loud noises and in came a z-wave based home automation system, and the guilty pleasure of having an IoT based home automation system.  Now the lights precisely and quietly turn on at sunset and off at 23:00 (adjusting themselves for daylight savings); the thermostat is accessible from my phone, meaning I can adjust it from wherever I happen to be (including Hong Kong airport when I realised I’d forgotten to set it to energy saving mode before we went on holiday).  Finally, there’s waking up at 3am to realise your wife has fallen asleep over her book again and being able to turn off her reading light from your alarm clock without having to get out of bed … Automation bliss!

    We all want the convenience; the trick is to work around the rampant insecurity that comes with today’s IoT to avoid your home automation system being part of the DDoS bot net that brings down the internet.

    Selecting your network

    For me, ruling out anything IP/Wifi based was partly due to Matthew’s blog and partly because my home Wifi network looks different from everyone else’s: I actually run an internal, secure, home network that is wired and have my Wifi sit unsecured and outside the firewall.  This goes back to the good old days of expecting to find wifi wherever you travelled and returning the courtesy by ensuring your wifi was accessible, but it does mean that any wifi connected device would be outside my firewall and open to all, which, given the general insecurity of the devices, makes this a non-starter.

    The next level down is to use a private network, like zigbee or z-wave.  I chose z-wave because it covers longer distances (which I need) and it doesn’t interfere with wifi (I have a hard time covering the entire house, even with two wifi access points).  Z-wave also looks secure, but, if you dig deeply, you find that there are flaws in the protocol that lay you open to a local attacker.  This, by the way, shows the futility of demanding security from IoT vendors who really don’t understand how to do it: a flawed security implementation is pretty much as bad as no security at all.

    Once this decision is made, the next is to choose a gateway to the internet that does what you want, namely give you remote control without giving up your security.

    Gateway Phone Home?

    A surprising number of z-wave controllers are of the phone home type (this means phone their manufacturer’s home, not you), and almost all of these simply won’t work if they’re not allowed to phone home.  Google comprehensively demonstrated the issues this raises with Nest: lots of early adopters are now left with so much non-functional junk.

    For me, there was also the burned hand experience with Google services: whenever I travel, I invariably get locked out because of some pseudo-security issue and it takes a fight to get back in again. This ultimately precipitated my move away from the Google cloud and on to Owncloud for calendar and contacts, but also means I really don’t want to have to trust another external service for my home automation.

    Given the significantly limited choice of non-phone home z-wave controllers, I chose the HomeSeer Zee S2.  It’s basically a raspberry pi with a z-wave dongle and Linux.  If you’re into Linux on everything, you should be aware that the home automation system is actually written in .net and it uses mono to bridge the gap; an odd choice given that there’s no known windows platform that could actually run this system.

    Secure Internet based Automation

    The ZS2 does actually come with wifi, but given my already listed wifi problems, it’s actually plugged into my secure wired network with all phone home capabilities disabled.  Great, but that means it’s only accessible over a VPN and I want to be able to control it from things like my phone, where running a VPN is cumbersome, so let’s do some magic tricks to make it securely accessible by any member of the family from any device.

    Obviously, since I already run Owncloud, I have a server of my own in a co-located site.  It’s this server I propose to use as my secure gateway.  The obvious way of doing this is simply proxying the ZS2 controller web page, but there are a couple of problems: firstly if I do it globally the ZS2 will be visible to port scans and secondly it only actually has an unencrypted web page with http authentication, meaning the login credentials would go over the internet in clear text … oops!

    The solution to the first of these is to make the web page only accessible to authenticated devices.  My current method is to use firewall whitelisting and a hook to an existing service authentication to open up the port.  So in the firewall mangle table, all the ports which require whitelisting are marked.  Then, in the input firewall, any packet so marked is checked against the whitelist for a matching source IP.  If a match is found, then the packet is permitted, otherwise it is denied.

    Whitelisting itself is done by a simple pam script

    #!/usr/bin/perl
    # Add the authenticated client's address to the xt_recent "whitelist"
    # list, which the input firewall consults for the marked ports.
    use Socket;
    
    $xt_file = '/proc/net/xt_recent/whitelist';
    
    # PAM_RHOST may be an IP address or a hostname; resolve it if needed.
    $name = $ENV{'PAM_RHOST'};
    if ($name =~ m/^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) {
     $addr = $name;
    } else {
     $ip = gethostbyname($name);
     $addr = inet_ntoa($ip);
    }
    
    # Writing "+<address>" to the xt_recent proc file appends it to the list.
    open(FD, ">$xt_file");
    print FD "+$addr\n";
    close(FD);
    
    exit 0;

    And this script is executed from the dovecot pam file as

    # add session to cause ip address of successful login to be whitelisted
    session optional pam_exec.so /etc/pam.d/whitelist.pl

    Meaning that any IP address that gets an authenticated imap connection (which is basically everybody’s internet device, since they all connect to email) is now allowed to access the authenticated ports.  Since imap requires re-authentication after a configurable timeout, the whitelist entry only lasts for just over that timeout and hey presto, we have our secured port system.

    Obviously, this isn’t foolproof: in particular whitelisting by external IP means that anyone sharing the same ip address via nat (like at a hotel) also has access to the secured ports, but it does cut down enormously on generic internet visibility.

    The final thing is to add security to the insecure web page, so anyone in the path to my internet host can’t sniff the password.  This is easily achieved by an stunnel redirect from the secure incoming port to the ZS2 over the VPN that connects to the internal network.  The beauty of this is that stunnel can now use the existing web certificate for my internet host to afford protection from man in the middle attacks as well.

    Last thoughts about Security

    Obviously, the security above isn’t perfect.  Anyone sharing my external IP would be able to run a port scan and (if they’re clever) detect the https port the ZS2 is on.  However, it does require a lot of luck to do this and, obviously, even if they’re in the fortunate position of sharing an IP address, I’ve changed the default password, so the recent Mirai attack wouldn’t have been able to compromise the device.

    Do I think this is good enough security: absolutely.  In security, the bear principle applies: when escaping from a ravenous bear, you don’t have to be able to run faster than the bear itself, you merely need to be able to run faster than the slowest other potential food source …  In internet terms, this means that while there are so many completely insecure devices out there, no-one can be bothered to hack a moderately secure system like mine because the customisation makes it quite a bit harder.  It’s also instructive to think that the bear principle is why Linux has such a security reputation: it’s not that we have perfect security against virus and trojan systems, it’s just that Windows was always so much worse …

    Eventually, something like Mirai will look to attack the ZS2 web server itself (it is .net based, after all) rather than simply try a list of default passwords and then I’ll need to be a bit more clever, but while everyone else is so much more insecure, that day will be long delayed.

    October 23, 2016 07:20 PM

    Pete Zaitcev: FAA proposes to ban NavWorx

    Seen a curious piece of news today. As a short preamble, an aircraft in the U.S. may receive useful information from a ground station (TIS-B and FIS-B), but it has to transmit a certain ADS-B packet for that to happen. And all ADS-B packets include a field that specifies the system's claim that it operates according to a certain level of precision and integrity. The idea is, roughly, if you detect that e.g. one of your redundant GPS receivers is off-line, you should broadcast that you're downgraded. The protocol field is called SIL. The maximum level you can claim is determined by how crazily redundant and paranoid your design is. We are talking something in the order of $20,000 worth of cost, most of which is amortization of the FAA certification paperwork, and then you are entitled to claim a SIL of 2. I lied about this explanation being short, BTW.

    So, apparently, NavWorx shipped cheap ADS-B boxes, which were made with a Raspberry Pi and a cellphone GPS chip (or such). They honestly transmitted a SIL of 0. Who cares, right? Well, FAA decided that TIS should stop replying to airplanes flying around with SIL Zero ADS-B boxes, because fuck the citizens, they should pay their $20k. Pilots called NavWorx and complained that their iPads hooked to the ADS600 do not display the weather reliably anymore. NavWorx issued a software update that programmed their boxes to transmit a SIL of 2. No other change: the actual transmitted positions remained exactly as before, only the claimed reliability was faked. When FAA got wind of this happening, they went nuclear on NavWorx users' asses. The proposed emergency directive orders owners to remove the offending equipment from their aircraft. They are grounded until they comply.

    Now the good thing is, the ADS-B mandate comes in 2020. They still have 3 years to find a more compliant (and expensive) supplier, before they are prohibited from the vicinity of a major city. So it's only money.

    I don't have a dog in this fight, personally, so I can sympathize with both the bureaucrats who saw cheaters and threw a book at them, and the company that employed a workaround against a meaningless and capricious rule. However, here's a couple of observations.

    First, note how FAA maintains a database of individual (not aggregate) protocol compliance for each ADS-B ID. They will even helpfully send you a report about what they know about you (it's intended so you can test the performance of your ADS-B equipment). Imagine if the government saved every query that your browser made, and could tell if your Chrome were not compliant with a certain RFC. This detailed tracking of everything is actually very necessary because the protocol has no encryption whatsoever and is trivially spoofed. Nothing stops a bad actor from using your ID in ADS-B. The only recourse is for the government to investigate reported issues and find the culprit. And they need the absolute tracking for it.

    Second, about the 2020 mandate. The airspace prohibition amounts to not letting someone into a city if the battery is flat in their EZ-pass transponder. Only in this case, the government sent you a letter saying that your transponder is banned, and you must buy a new one before you can get to work. In theory, your freedom of travel is not limited - you can take a bus. In practice though, not everyone has $20k, and the waiting list for the installer is 6 months.

    UPDATE 2016/12/19: NavWorx posted the following explanation on their website (no permalink, idiots):

    Our version 4.0.6 made our 12/13 products transmit SIL 3, which the FAA ground stations would recognize as sufficient to resume sending TIS-B traffic to our customers.

    Fortunately from product inception our internal GPS met SIL 3 performance. The FAA approved our internal GPS as SIL 3. During the TSO certification process, the FAA accepted our “compliance matrix” – which is the FAA’s primary means of compliance - showing our internal GPS integrity was 1x10-7, which translates to SIL of 3. However, FAA policy at that time was that ADS-B GPS must have its own separate TSO – our internal GPS was certified under TSO-C154c, the same as the UAT OUT/IN transceiver. It’s important to note that the FAA authorized us to certify our internal GPS in this manner, and that they know that our internal GPS is safe – applicants for TSO certification must present a project plan and the FAA reviews and approves this project plan before the FAA ever allows an applicant to proceed with TSO certification of any product. Although they approved our internal GPS to be SIL of 3 (integrity of 1x10-7), based on FAA policy at the time they made us transmit SIL 0, with the explanation that “uncertified GPS must transmit SIL 0”. This really is a misnomer, as our GPS is “certified” (under TSO-C154c), but the FAA refers to it as “uncertified”. The FAA AD states that “uncertified” GPS must transmit SIL of 0.

    So, basically, they never bothered to certify their GPS properly and used a fig leaf of TSO-C154c.

    The letter then goes on about how unfair it is that all the shitty experimentals are allowed to signal SIL 3 if only they use a proper GPS.

    UPDATE 2016/12/20: AOPA weighs in with a comment on the NPRM:

    Specifically, AOPA recommends the FAA address the confusion over whether the internal position source meets the applicable performance requirements, the existence of an unsafe condition, and why the proposed AD applies to NavWorx’s experimental UAT model.

    The FAA requires a position source to meet the performance requirements in appendix B to AC 20-165B for the position source to be included in the ADS-B Out system and for an aircraft to meet the § 91.227(c) performance requirements (e.g., SIL = 3). The FAA does not require the position source be compliant with a specific TSO. Any person may demonstrate to the FAA that its new (uncertified) position source meets the requirements of appendix B to AC 20-165B, thereby qualifying that position source to be used in an ADS-B Out system. However, integrating a TSO-certified position source into a UAT means that a person will have fewer requirements to satisfy in AC 20-165B appendix B during the STC process for the ADS-B Out system.

    Around May 2014, the FAA issued NavWorx an STC for its ADS600-B UAT with part numbers 200-0012 and 200-0013 (Certified UATs). The STC allowed for the installation of those UATs into any type-certificated aircraft identified in the approved model list. The Certified UATs were compliant with TSO-C154c, but had internal, non-compliant GPS receivers. (ADS600-B Installation Manual 240-0008-00-36 (IM -36), at 17, 21, 28.) Specifically, section 2.3 of NavWorx’s March 2015 installation manual states:

    “For ADS600-B part numbers 200-0012 and 200-0013, the internal GPS WAAS receiver does not meet 14 CFR 91 FAA-2007-29305 for GPS position source. If the ADS600-B is configured to use the internal GPS as the position source the ADS-B messages transmitted by the unit reports: A Source Integrity Limit (SIL) of 0 indicating that the GPS position source does not meet the 14 CFR 91 FAA-2007-29305 rule.” (IM -36, at 19.)

    Hoo, boy. Per the above quote by AOPA, NavWorx previously admitted in writing that their internal GPS is not good enough, but they are trying to walk that back with the talk about "GPS integrity 1x10-7".

    In the same comment letter, later on, Justin T. Barkowski recommends minimizing the economic impact of the rulemaking and not forcing owners to pull NavWorx boxes out of their aircraft immediately.

    October 23, 2016 04:27 AM

    October 22, 2016

    Matthew Garrett: Microsoft aren't forcing Lenovo to block free operating systems

    Update: Patches to fix this have been posted

    There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

    The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

    In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

    Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

    (Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule)

    Things are somewhat obfuscated due to a statement from a Lenovo rep: “This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft.” It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

    The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.


    October 22, 2016 05:51 AM

    Matthew Garrett: Fixing the IoT isn't going to be easy

    A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

    To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

    We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

    Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

    These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

    Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.

    Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

    If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered no matter how old the device is is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

    We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

    Right. I'm off to portscan another smart socket.

    [1] UDP connection refused messages are typically ratelimited to one per second, so it'll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.

    [2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn't left in release builds, but ah well.

    [3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign


    October 22, 2016 05:14 AM

    October 20, 2016

    Kees Cook: CVE-2016-5195

    My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

    So, here are the graphs updated for the 668 CVEs known today:

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

    October 20, 2016 11:02 PM

    October 19, 2016

    Kees Cook: Security bug lifetime

    In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

    As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

    Patches_linux:
     break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2
    

    This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.

    Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:

    CVE lifetimes 2011-2016

    And here it is zoomed in to just Critical and High:

    Critical and High CVE lifetimes 2011-2016

    The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

    This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.

    While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or sometimes any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

    (Edit: see my updated graphs that include CVE-2016-5195.)

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

    October 19, 2016 04:46 AM

    October 18, 2016

    LPC 2016: Hotel Blocks now Expired

    All our block bookings at the hotels have now expired.  However, if you still haven’t booked a hotel, it may still be possible for the Linux Foundation to get you a room at one of them at the conference rate (availability permitting).  Please email mphillips@linuxfoundation.org if you are interested in this option.

    October 18, 2016 08:06 PM

    Gustavo F. Padovan: Mainline Explicit Fencing – part 2

    In the first post we covered the main concepts behind Explicit Synchronization for the Linux Kernel. Now in the second post of the series we are going to look at the Android Sync Framework, the first (out-of-tree) Explicit Fencing implementation for the Linux Kernel.

    The Sync Framework was the Android solution to implement Explicit Fencing in AOSP. It uses file descriptors to communicate fencing information between userspace and the kernel, and between userspace processes.

    In the Sync Framework it all starts with the creation of a Sync Timeline, a struct created for each driver context to represent a monotonically increasing counter. It is the Sync Timeline that guarantees the ordering between fences on the same Timeline. The driver contexts could be different GPU rings, or different displays on your hardware.

    Sync Timeline

    Then we have Sync Points (sync_pt), the name Android gave to fences; they represent a specific value in the Sync Timeline. When created, a Sync Point is initialized in the Active state, and when it signals, i.e., the job it was associated with finishes, it transitions to the Signaled state and informs the Sync Timeline to update the value of the last signaled Sync Point.

    Sync Point

    To export and import Sync Points to/from userspace, the Sync Fence struct is used. Under the hood the Sync Fence is a Linux file, and we use the Sync Fence to store Sync Point information. To export it to userspace, an unused file descriptor (fd) is associated with the Sync Fence file. Drivers can then use the file descriptor to pass the Sync Point information around.

    Sync Fence

    The Sync Fence is usually created just after the Sync Point; it then travels through the pipeline, via userspace, until it reaches the driver that is going to wait for the Sync Fence to signal. The Sync Fence signals when the Sync Point inside it signals.

    One of the most important features of the Android Sync Framework is the ability to merge Sync Fences into a new Sync Fence containing all Sync Points from both Sync Fences. It can contain as many Sync Points as your resources allow. A merged Sync Fence will only signal when all of its Sync Points have signaled.

    Sync Fence with Merged Fences. Here we merge two Sync Points into one Sync File.

    When it comes to the userspace API, the Sync Framework implements three ioctl calls. The first one waits on a sync_fence to signal. There is also a call to merge two sync_fences into a third, new sync_fence. And finally there is a call to grab information about the sync_fence and all of its sync_points.
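    To make the shape of that API a bit more concrete, here is a minimal userspace sketch of the wait and merge calls. It assumes the staging uapi definitions (SYNC_IOC_WAIT, SYNC_IOC_MERGE and struct sync_merge_data); the exact header you copy them from (the kernel's staging uapi sync.h, or Android's libsync copy) may vary.

    /*
     * Minimal sketch of the Sync Framework wait and merge ioctls,
     * assuming the staging uapi layout of SYNC_IOC_WAIT, SYNC_IOC_MERGE
     * and struct sync_merge_data.
     */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include "sync.h"               /* copied staging uapi header; path varies */

    /* Block until every Sync Point in the fence signals, or the timeout (ms) hits. */
    static int fence_wait(int fence_fd, int timeout_ms)
    {
            __s32 to = timeout_ms;

            return ioctl(fence_fd, SYNC_IOC_WAIT, &to);
    }

    /* Merge two Sync Fences into a new fence that signals only when both have. */
    static int fence_merge(const char *name, int fd1, int fd2)
    {
            struct sync_merge_data data;

            memset(&data, 0, sizeof(data));
            data.fd2 = fd2;
            strncpy(data.name, name, sizeof(data.name) - 1);

            if (ioctl(fd1, SYNC_IOC_MERGE, &data) < 0)
                    return -1;

            return data.fence;      /* fd of the newly created merged fence */
    }

    A compositor, for instance, could receive one fence fd per rendering job, merge them with fence_merge(), and only flip the frame once the merged fence signals.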

    The Sync Fence fds are passed to/from the kernel in the calls that ask the kernel to render or display a buffer.

    This was intended to be an overview of the Sync Framework, as we will see some of these concepts again in the next article, where we will talk about the effort to add explicit fencing to the mainline kernel. If you want to learn more about the Sync Framework you can find more info here and here.

    October 18, 2016 06:16 PM

    October 08, 2016

    Michael Kerrisk (manpages): man-pages-4.07 is released

    I've released man-pages-4.07. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

    This release resulted from patches, bug reports, reviews, and comments from around 50 contributors. The release includes changes to over 140 man pages. Among the more significant changes in man-pages-4.07 are the following:

    October 08, 2016 12:20 PM

    Michael Kerrisk (manpages): man-pages-4.06 is released

    I've released man-pages-4.06. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

    This release resulted from patches, bug reports, reviews, and comments from around 20 contributors. The release includes changes to just over 40 man pages. Among the more significant changes in man-pages-4.06 are the following:

    October 08, 2016 12:20 PM

    Michael Kerrisk (manpages): man-pages-4.05 is released

    I've released man-pages-4.05. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

    This release resulted from patches, bug reports, reviews, and comments from more than 70 contributors. The release includes changes to more than 400 man pages. Among the more significant changes in man-pages-4.05 are the following:

    October 08, 2016 12:20 PM

    Michael Kerrisk (manpages): man-pages-4.08 is released

    I've released man-pages-4.08. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

    This release resulted from patches, bug reports, reviews, and comments from around 40 contributors. The release includes changes to nearly 200 man pages. Among the more significant changes in man-pages-4.08 are the following:

    October 08, 2016 12:19 PM

    October 07, 2016

    Daniel Vetter: Neat drm/i915 Stuff for 4.8

    I procrastinated rather badly on this one, so instead of this going out around the previous kernel release, the v4.8 release is already out of the door. Read on for my slightly more terse catch-up report.

    Since I’m this late I figured instead of the usual comprehensive list I’ll do something new and just list some of the work that landed in 4.8, but with a bit more focus on the impact and why things have been done.

    Midlayers, Be Gone!

    The first thing I want to highlight is the driver de-midlayering. In the Linux kernel community the “midlayer mistake” or helper library design pattern (see the linked article from LWN) is a set of rules for designing subsystems and common support code for drivers. The underlying rule is that the driver itself must be in control of everything, like allocating memory and handling all requests. Common code is only shared in helper library functions, which the driver can call if they are suitable. The reason for that is that there is always some hardware which needs special treatment, and when you have a special case and there’s a midlayer, it will get in the way.

    Due to the shared history with BSD kernels DRM originally had a full-blown midlayer, but over time this has been fixed. For example kernel modesetting was designed from the start with the helper library pattern. The last holdout is the device structure itself, and for the Intel driver this is now fixed. This has two main benefits:

    Thundering Herds

    GPUs process rendering asynchronously, and sometimes the CPU needs to wait for them. For this purpose there’s a wait queue in the driver. Userspace processes block on that until the interrupt handler wakes them up. The trouble now is that thus far there was just one wait queue per engine, which means every time the GPU completed something all waiters had to be woken up. Then they checked whether the work they needed to wait for completed, and if not, again block on the wait queue until the next batch job completed. That’s all rather inefficient. On top there’s only one per-engine knob to enable interrupts. Which means even if there was only one waiting process, it was woken for every completed job. And GPUs can have a lot of jobs in-flight.
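    For illustration, here is a much simplified sketch of that old pattern (not the actual i915 code; the structure and helper names are made up), showing why a single per-engine wait queue wakes everyone on every completion:

    #include <linux/types.h>
    #include <linux/wait.h>

    /* Illustrative only: one wait queue shared by all waiters on an engine. */
    struct engine {
            wait_queue_head_t irq_queue;    /* every waiter sleeps here */
            u32 completed_seqno;            /* last request the GPU finished */
    };

    /* A process waits until its own request (seqno) has completed. */
    static int wait_for_request(struct engine *engine, u32 seqno)
    {
            /*
             * Each wake_up_all() below wakes *every* sleeper; each one
             * re-checks its own condition and most simply go back to sleep
             * again -- the thundering herd.  (Seqno wraparound and memory
             * ordering are ignored for brevity.)
             */
            return wait_event_interruptible(engine->irq_queue,
                            engine->completed_seqno >= seqno);
    }

    /* Interrupt handler: some job completed, so wake all waiters to re-check. */
    static void engine_completion_irq(struct engine *engine, u32 seqno)
    {
            engine->completed_seqno = seqno;
            wake_up_all(&engine->irq_queue);
    }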

    In summary, waiting for the GPU worked more like a frantic herd trampling all over things instead of something orderly. To fix this the request and completion tracking was entirely revamped, to make sure that the driver has a much better understanding of what’s going on. On top there’s now also an efficient search structure of all current waiting processes. With that the interrupt handler can quickly check whether the just completed GPU job is of interest, and if so, which exact process should be woken up.

    But this wasn’t just done to make the driver more efficient. Better tracking of pending and completed GPU requests is an important foundation to implement proper GPU scheduling on top of. And it’s also needed to interface the completion tracking with other drivers, to finally fix tearing for multi-GPU machines. Having a thundering herd in your own backyard is unsightly, letting it loose on your neighbours is downright bad! A lot of this follow-up work already landed for the 4.9 kernel, hence I will talk more about this in a future installment of this series.

    October 07, 2016 12:00 PM

    October 06, 2016

    James Morris: LinuxCon Europe Kernel Security Slides

    Yesterday I gave an update on the Linux kernel security subsystem at LinuxCon Europe, in Berlin.

    The slides are available here: http://namei.org/presentations/linux_kernel_security_linuxconeu2016.pdf

    The talk began with a brief overview and history of the Linux kernel security subsystem, and then I provided an update on significant changes in the v4 kernel series, up to v4.8.  Some expected upcoming features were also covered.  Skip to slide 31 if you just want to see the changes.  There are quite a few!

    It’s my first visit to Berlin, and it’s been fascinating to see the remnants of the Cold War, which dominated life in 1980s when I was at school, but which also seemed so impossibly far from Australia.


    Brandenburg Gate, Berlin. Unity Day 2016.

    I hope to visit again with more time to explore.

    October 06, 2016 12:56 PM

    Pavel Machek: FlightGame

    FlightGame
    FlightGear is a very nice simulator, but it is not a lot of fun: the page with "places to fly" helps. But when you set up your flight details, including weather and failures, you can kind of expect what is going to happen. FlightGame was designed to address this (not for me, unfortunately, although... if you have ever debugged a piece of software you know unexpected things happen): levels are prepared to be interesting, yet they try to provide enough information so that you don't need to study maps and aircraft specifications before the flight.

    Don't expect anything great or too complex; this is just Python getting data from gpsd and causing your aircraft problems via an internal webserver. But it still should be fun.

    Code is at https://gitlab.com/tui/tui/tree/master/fgame. I guess I should really create a better README.

    Who wants to play?

    October 06, 2016 08:56 AM

    October 05, 2016

    Kees Cook: security things in Linux v4.8

    Previously: v4.7. Here are a bunch of security things I’m excited about in Linux v4.8:

    SLUB freelist ASLR

    Thomas Garnier continued his freelist randomization work by adding SLUB support.

    x86_64 KASLR text base offset physical/virtual decoupling

    On x86_64, to implement the KASLR text base offset, the physical memory location of the kernel was randomized, which resulted in the virtual address being offset as well. Due to how the kernel’s “-2GB” addressing works (gcc‘s “-mcmodel=kernel“), it wasn’t possible to randomize the physical location beyond the 2GB limit, leaving any additional physical memory unused as a randomization target. In order to decouple the physical and virtual location of the kernel (to make physical address exposures less valuable to attackers), the physical location of the kernel needed to be randomized separately from the virtual location. This required a lot of work for handling very large addresses spanning terabytes of address space. Yinghai Lu, Baoquan He, and I landed a series of patches that ultimately did this (and in the process fixed some other bugs too). This expands the physical offset entropy to roughly $physical_memory_size_of_system / 2MB bits.

    x86_64 KASLR memory base offset

    Thomas Garnier rolled out KASLR to the kernel’s various statically located memory ranges, randomizing their locations with CONFIG_RANDOMIZE_MEMORY. One of the more notable things randomized is the physical memory mapping, which is a known target for attacks. Also randomized is the vmalloc area, which means that targets vmalloced during boot (which tend to always end up in the same location on a given system) are now harder to locate. (The vmemmap region randomization accidentally missed the v4.8 window and will appear in v4.9.)

    x86_64 KASLR with hibernation

    Rafael Wysocki (with Thomas Garnier, Borislav Petkov, Yinghai Lu, Logan Gunthorpe, and myself) worked on a number of fixes to hibernation code that, even without KASLR, were coincidentally exposed by the earlier W^X fix. With that original problem fixed, then memory KASLR exposed more problems. I’m very grateful everyone was able to help out fixing these, especially Rafael and Thomas. It’s a hard place to debug. The bottom line, now, is that hibernation and KASLR are no longer mutually exclusive.

    gcc plugin infrastructure

    Emese Revfy ported the PaX/Grsecurity gcc plugin infrastructure to upstream. If you want to perform compiler-based magic on kernel builds, now it’s much easier with CONFIG_GCC_PLUGINS! The plugins live in scripts/gcc-plugins/. Current plugins are a short example called “Cyclic Complexity” which just emits the complexity of functions as they’re compiled, and “Sanitizer Coverage” which provides the same functionality as gcc’s recent “-fsanitize-coverage=trace-pc” but back through gcc 4.5. Another notable detail about this work is that it was the first Linux kernel security work funded by Linux Foundation’s Core Infrastructure Initiative. I’m looking forward to more plugins!

    If you’re on Debian or Ubuntu, the required gcc plugin headers are available via the gcc-$N-plugin-dev package (and similarly for all cross-compiler packages).

    hardened usercopy

    Along with work from Rik van Riel, Laura Abbott, Casey Schaufler, and many other folks doing testing on the KSPP mailing list, I ported part of PAX_USERCOPY (the basic runtime bounds checking) to upstream as CONFIG_HARDENED_USERCOPY. One of the interface boundaries between the kernel and user-space is the copy_to_user()/copy_from_user() family of functions. Frequently, the size of a copy is known at compile-time (“built-in constant”), so there’s not much benefit in checking those sizes (hardened usercopy avoids these cases). In the case of dynamic sizes, hardened usercopy checks for 3 areas of memory: slab allocations, stack allocations, and kernel text. Direct kernel text copying is simply disallowed. Stack copying is allowed as long as it is entirely contained by the current stack memory range (and on x86, only if it does not include the saved stack frame and instruction pointers). For slab allocations (e.g. those allocated through kmem_cache_alloc() and the kmalloc()-family of functions), the copy size is compared against the size of the object being copied. For example, if copy_from_user() is writing to a structure that was allocated as size 64, but the copy gets tricked into trying to write 65 bytes, hardened usercopy will catch it and kill the process.
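    As a hedged illustration of the slab case (this is not code from any real driver; the struct and function names are made up), consider a handler that copies a kmalloc()ed object to userspace with a runtime-sized length:

    #include <linux/slab.h>
    #include <linux/uaccess.h>

    struct foo {
            char data[64];          /* the object is a 64-byte slab allocation */
    };

    /* obj is assumed to come from kmalloc(sizeof(struct foo), GFP_KERNEL). */
    static long foo_read(struct foo *obj, void __user *buf, size_t len)
    {
            /*
             * "len" is not a built-in constant, so hardened usercopy checks
             * it against the size of the slab object at runtime.  If an
             * attacker coaxes len above 64, the copy is refused and the
             * process killed instead of leaking adjacent heap memory.
             */
            if (copy_to_user(buf, obj, len))
                    return -EFAULT;

            return len;
    }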

    For testing hardened usercopy, lkdtm gained several new tests: USERCOPY_HEAP_SIZE_TO, USERCOPY_HEAP_SIZE_FROM, USERCOPY_STACK_FRAME_TO, USERCOPY_STACK_FRAME_FROM, USERCOPY_STACK_BEYOND, and USERCOPY_KERNEL. Additionally, USERCOPY_HEAP_FLAG_TO and USERCOPY_HEAP_FLAG_FROM were added to test what will be coming next for hardened usercopy: flagging slab memory as “safe for copy to/from user-space”, effectively whitelisting certain slab caches, as done by PAX_USERCOPY. This further reduces the scope of what’s allowed to be copied to/from, since most kernel memory is not intended to ever be exposed to user-space. Adding this logic will require some reorganization of usercopy code to add some new APIs, as PAX_USERCOPY’s approach to handling special-cases is to add bounce-copies (copy from slab to stack, then copy to userspace) as needed, which is unlikely to be acceptable upstream.

    seccomp reordered after ptrace

    By its original design, seccomp filtering happened before ptrace so that seccomp-based ptracers (i.e. SECCOMP_RET_TRACE) could explicitly bypass seccomp filtering and force a desired syscall. Nothing actually used this feature, and as it turns out, it’s not compatible with process launchers that install seccomp filters (e.g. systemd, lxc) since as long as the ptrace and fork syscalls are allowed (and fork is needed for any sensible container environment), a process could spawn a tracer to help bypass a filter by injecting syscalls. After Andy Lutomirski convinced me that ordering ptrace first does not change the attack surface of a running process (unless all syscalls are blacklisted, the entire ptrace attack surface will always be exposed), I rearranged things. Now there is no (expected) way to bypass seccomp filters, and containers with seccomp filters can allow ptrace again.
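    For reference, here is roughly what a launcher-style filter looks like (a toy sketch, not systemd’s or lxc’s actual policy): it blocks a single syscall with EPERM while allowing everything else, including ptrace and fork. Before the reordering, a filtered process could spawn a cooperating tracer to smuggle the blocked syscall through; with seccomp now evaluated after ptrace, the block holds.

    #include <stddef.h>
    #include <errno.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    static int install_filter(void)
    {
            struct sock_filter filter[] = {
                    /* A real filter must also check seccomp_data->arch first. */
                    BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                             offsetof(struct seccomp_data, nr)),
                    /* Deny uname with EPERM; allow every other syscall. */
                    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_uname, 0, 1),
                    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
                    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
            };
            struct sock_fprog prog = {
                    .len = sizeof(filter) / sizeof(filter[0]),
                    .filter = filter,
            };

            /* Required so an unprivileged process may install a filter. */
            if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                    return -1;

            return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
    }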

    That’s it for v4.8! The merge window is open for v4.9…

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

    October 05, 2016 12:26 AM

    October 03, 2016

    Matthew Garrett: The importance of paying attention in building community trust

    Trust is important in any kind of interpersonal relationship. It's inevitable that there will be cases where something you do will irritate or upset others, even if only to a small degree. Handling small cases well helps build trust that you will do the right thing in more significant cases, whereas ignoring things that seem fairly insignificant (or saying that you'll do something about them and then failing to do so) suggests that you'll also fail when there's a major problem. Getting the small details right is a major part of creating the impression that you'll deal with significant challenges in a responsible and considerate way.

    This isn't limited to individual relationships. Something that distinguishes good customer service from bad customer service is getting the details right. There are many industries where significant failures happen infrequently, but minor ones happen a lot. Would you prefer to give your business to a company that handles those small details well (even if they're not overly annoying) or one that just tells you to deal with them?

    And the same is true of software communities. A strong and considerate response to minor bug reports makes it more likely that users will be patient with you when dealing with significant ones. Handling small patch contributions quickly makes it more likely that a submitter will be willing to do the work of making more significant contributions. These things are well understood, and most successful projects have actively worked to reduce barriers to entry and to be responsive to user requests in order to encourage participation and foster a feeling that they care.

    But what's often ignored is that this applies to other aspects of communities as well. Failing to use inclusive language may not seem like a big thing in itself, but it leaves people with the feeling that you're less likely to do anything about more egregious exclusionary behaviour. Allowing a baseline level of sexist humour gives the impression that you won't act if there are blatant displays of misogyny. The more examples of these "insignificant" issues people see, the more likely they are to choose to spend their time somewhere else, somewhere they can have faith that major issues will be handled appropriately.

    There's a more insidious aspect to this. Sometimes we can believe that we are handling minor issues appropriately, that we're acting in a way that handles people's concerns, while actually failing to do so. If someone raises a concern about an aspect of the community, it's important to discuss solutions with them. Putting effort into "solving" a problem without ensuring that the solution has the desired outcome is not only a waste of time, it alienates those affected even more - they're now not only left with the feeling that they can't trust you to respond appropriately, but that you will actively ignore their feelings in the process.

    It's not always possible to satisfy everybody's concerns. Sometimes you'll be left in situations where you have conflicting requests. In that case the best thing you can do is to explain the conflict and why you've made the choice you have, and demonstrate that you took this issue seriously rather than ignoring it. Depending on the issue, you may still alienate some number of participants, but it'll be fewer than if you just pretend that it's not actually a problem.

    One warning, though: while building trust in this way enhances people's willingness to join your community, it also builds expectations. If a significant issue does arise, and if you fail to handle it well, you'll burn a lot of that trust in the process. The fact that you've built that trust in the first place may be what saves your community from disintegrating completely, but people will feel even more betrayed if you don't actively work to rebuild it. And if there's a pattern of mishandling major problems, no amount of getting the details right will matter.

    Communities that ignore these issues are, long term, likely to end up weaker than communities that pay attention to them. Making sure you get this right in the first place, and setting expectations that you will pay attention to your contributors, is a vital part of building a meaningful relationship between your community and its members.


    October 03, 2016 05:14 PM

    Gustavo F. Padovan: Collabora Contributions to Linux Kernel 4.8

    Linux Kernel 4.8 is out and once more Collabora engineers made a significant contribution to the Kernel. For 4.8 Collabora contributed 101 patches by 8 engineers, our record to date in a single kernel release! We’ve also seen the first contribution from Frederic Dalleau since he joined Collabora. LWN.net covered the new features of the new kernel in three different posts, here, here and here.

    On the Collabora side of the contributions we touched a few different areas in the kernel. Bob Ham, who recently left Collabora, added support for the Alea I Random Number Generator, while Enric Balletbo improved the audio support on the Rockchip rk3288 SoC. Frederic Dalleau fixed an important memory leak on the Bluetooth stack.

    Gustavo Padovan continued his work to add Explicit Synchronization for Buffer Sharing to the kernel. In this release he added fence_array support and prepared the SW_SYNC interfaces for de-staging; SW_SYNC is meant to be used for Explicit Synchronization testing. He also worked on removing some of the legacy functions from drm_irq.c.

    Helen Koike added some improvements and clean-ups to the ASoC subsystem, mainly on the max9877 and tpa6130a2 drivers. Nicolas Dufresne fixed the bytes-per-line calculation for YUV planes in the uvcvideo driver.

    Thierry Escande added many improvements to the NFC digital layer, and Tomeu Vizoso added a new helper for the ChromeOS Embedded Controller and improved the usage of DRM core APIs in the Rockchip driver. He also fixed an issue with the Analogix DP on Rockchip that was not enabling clocks in the correct order.

    Bob Ham (2):

    Enric Balletbo i Serra (8):

    Frederic Dalleau (1):

    Gustavo Padovan (50):

    Helen Koike (8):

    Nicolas Dufresne (1):

    Thierry Escande (26):

    Tomeu Vizoso (5):

    October 03, 2016 01:59 PM

    Pavel Machek: Linux V4.8 on N900

    Basics work, good. GSM does not work too well, which is kind of a problem. Camera broke between 4.7 and 4.8. That is not good, either.

    If you want to talk about Linux and phones, I’ll probably be at LinuxDays in Prague this weekend, and will give a talk about it at Ubucon Europe.

    October 03, 2016 11:13 AM

    Kees Cook: security things in Linux v4.7

    Previously: v4.6. Onward to security things I found interesting in Linux v4.7:

    KASLR text base offset for MIPS

    Matt Redfearn added text base address KASLR to MIPS, similar to what’s available on x86 and arm64. As done with x86, MIPS attempts to gather entropy from various build-time, run-time, and CPU locations in an effort to find reasonable sources during early-boot. MIPS doesn’t yet have anything as strong as x86’s RDRAND (though most have an instruction counter like x86’s RDTSC), but it does have the benefit of being able to use Device Tree (i.e. the “/chosen/kaslr-seed” property) like arm64 does. By my understanding, even without Device Tree, MIPS KASLR entropy should be as strong as pre-RDRAND x86 entropy, which is more than sufficient for what is, similar to x86, not a huge KASLR range anyway: default 8 bits (a span of 16MB with 64KB alignment), though CONFIG_RANDOMIZE_BASE_MAX_OFFSET can be tuned to the device’s memory, giving a maximum of 11 bits on 32-bit, and 15 bits on EVA or 64-bit.

    SLAB freelist ASLR

    Thomas Garnier added CONFIG_SLAB_FREELIST_RANDOM to make slab allocation layouts less deterministic with a per-boot randomized freelist order. This raises the bar for successful kernel slab attacks. Attackers will need to either find additional bugs to help leak slab layout information or will need to perform more complex grooming during an attack. Thomas wrote a post describing the feature in more detail here: Randomizing the Linux kernel heap freelists. (SLAB is done in v4.7, and SLUB in v4.8.)
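
    The idea, in a nutshell, is that the order in which a slab page hands out its objects gets shuffled using per-boot entropy, so consecutive allocations land at unpredictable offsets. A rough, self-contained sketch of that shuffling follows; it is purely illustrative and not the kernel’s actual code, and it uses rand() where the kernel uses its own entropy sources:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative only: shuffle the order in which a slab's objects are
     * handed out, so allocation patterns are harder to predict. */
    static void shuffle_freelist(unsigned int *index, unsigned int count)
    {
        if (count < 2)
            return;

        for (unsigned int i = 0; i < count; i++)
            index[i] = i;

        /* Fisher-Yates shuffle; the kernel seeds this from boot-time entropy. */
        for (unsigned int i = count - 1; i > 0; i--) {
            unsigned int j = (unsigned int)rand() % (i + 1);
            unsigned int tmp = index[i];
            index[i] = index[j];
            index[j] = tmp;
        }
    }

    int main(void)
    {
        unsigned int order[16];

        shuffle_freelist(order, 16);
        for (unsigned int i = 0; i < 16; i++)
            printf("%u ", order[i]);
        printf("\n");
        return 0;
    }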

    eBPF JIT constant blinding

    Daniel Borkmann implemented constant blinding in the eBPF JIT subsystem. With strong kernel memory protections (CONFIG_DEBUG_RODATA) in place, and with the segregation of user-space memory execution from kernel (i.e SMEP, PXN, CONFIG_CPU_SW_DOMAIN_PAN), having a place where user-space can inject content into an executable area of kernel memory becomes very high-value to an attacker. The eBPF JIT was exactly such a thing: the use of BPF constants could result in the JIT producing instruction flows that could include attacker-controlled instructions (e.g. by directing execution into the middle of an instruction with a constant that would be interpreted as a native instruction). The eBPF JIT already uses a number of other defensive tricks (e.g. random starting position), but this added randomized blinding to any BPF constants, which makes building a malicious execution path in the eBPF JIT memory much more difficult (and helps block attempts at JIT spraying to bypass other protections).
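
    To make the mechanism concrete, here is a tiny user-space sketch of the blinding arithmetic itself. It is purely illustrative: the real JIT picks its random value from kernel entropy (not rand()) and emits the extra XOR as a native instruction in the generated code.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* Pretend this is an attacker-chosen BPF constant. */
        uint32_t imm = 0x90909090;

        srand((unsigned int)time(NULL));

        /* A fresh random blinding value per constant (illustrative only). */
        uint32_t blind = ((uint32_t)rand() << 16) ^ (uint32_t)rand();

        /* What lands in executable memory is imm ^ blind, so the attacker
         * no longer controls the emitted instruction bytes... */
        uint32_t emitted = imm ^ blind;

        /* ...and an extra run-time XOR with blind recovers the original
         * value before it is used. */
        uint32_t recovered = emitted ^ blind;

        printf("blind=0x%08x emitted=0x%08x recovered=0x%08x\n",
               blind, emitted, recovered);
        return recovered == imm ? 0 : 1;
    }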

    Elena Reshetova updated a 2012 proof-of-concept attack to succeed against modern kernels to help provide a working example of what needed fixing in the JIT. This serves as a thorough regression test for the protection.

    The cBPF JITs that exist in ARM, MIPS, PowerPC, and Sparc still need to be updated to eBPF, but when they do, they’ll gain all these protections immediately.

    Bottom line is that if you enable the (disabled-by-default) bpf_jit_enable sysctl, be sure to set the bpf_jit_harden sysctl to 2 (to perform blinding even for root).

    fix brk ASLR weakness on arm64 compat

    There have been a few ASLR fixes recently (e.g. ET_DYN, x86 32-bit unlimited stack), and while reviewing some suggested fixes to arm64 brk ASLR code from Jon Medhurst, I noticed that arm64’s brk ASLR entropy was slightly too low (less than 1 bit) for 64-bit and noticeably lower (by 2 bits) for 32-bit compat processes when compared to native 32-bit arm. I simplified the code by using literals for the entropy. Maybe we can add a sysctl some day to control brk ASLR entropy like was done for mmap ASLR entropy.

    LoadPin LSM

    LSM stacking is well-defined since v4.2, so I finally upstreamed a “small” LSM that implements a protection I wrote for Chrome OS several years back. On systems with a static root of trust that extends to the filesystem level (e.g. Chrome OS’s coreboot+depthcharge boot firmware chaining to dm-verity, or a system booting from read-only media), it’s redundant to sign kernel modules (you’ve already got the modules on read-only media: they can’t change). The kernel just needs to know they’re all coming from the correct location. (And this solves loading known-good firmware too, since there is no convention for signed firmware in the kernel yet.) LoadPin requires that all modules, firmware, etc come from the same mount (and assumes that the first loaded file defines which mount is “correct”, hence load “pinning”).

    That’s it for v4.7. Prepare yourself for v4.8 next!

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    October 03, 2016 07:47 AM

    October 01, 2016

    Kees Cook: security things in Linux v4.6

    Previously: v4.5. The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella.

    seccomp support for parisc

    Helge Deller added seccomp support for parisc, which included plumbing support for PTRACE_GETREGSET to get the self-tests working.

    x86 32-bit mmap ASLR vs unlimited stack fixed

    Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. “ulimit -s unlimited“) would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, a system that sees collisions between unlimited stack and mmap ASLR can simply adjust the 32-bit ASLR entropy instead.

    x86 execute-only memory

    Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for “execute only” memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I’m looking forward to either emulated QEmu support or access to one of these fancy CPUs.
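
    As a minimal sketch of what this looks like from user-space, mapping anonymous memory with only PROT_EXEC and then trying to read it should fault on a pkeys-capable CPU and kernel; on older hardware, where PROT_EXEC still implies readability, the read below succeeds instead:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4096;

        /* Ask for memory that may be executed but not read or written. */
        unsigned char *p = mmap(NULL, len, PROT_EXEC,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* On a pkeys-supporting CPU this read faults with SIGSEGV; on
         * older hardware PROT_EXEC implies PROT_READ and it succeeds. */
        printf("first byte: %d\n", p[0]);

        munmap(p, len);
        return 0;
    }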

    CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86

    Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel.

    On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar’s suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we’ll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it’s reasonable to continue to leave an “out” for developers that find themselves tripping over it.

    arm64 KASLR text base offset

    Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the “/chosen/kaslr-seed” property) or from UEFI (via EFI_RNG_PROTOCOL), so if you’re building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader.

    zero-poison after free

    Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity’s PAX_MEMORY_SANITIZE feature. This feature means that memory is cleared at free, wiping any sensitive data so it doesn’t have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called “sanity checking”), which can catch another small subset of flaws.

    To understand the pieces of this, it’s worth describing that the kernel’s higher-level allocator, the “page allocator” (e.g. __get_free_pages()), is used by the finer-grained “slab allocator” (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator).

    Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn’t worth the performance trade-off.

    Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you’re feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well.

    To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:

    CONFIG_DEBUG_PAGEALLOC=n
    CONFIG_PAGE_POISONING=y
    CONFIG_PAGE_POISONING_NO_SANITY=y
    CONFIG_PAGE_POISONING_ZERO=y
    CONFIG_SLUB_DEBUG=y

    and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:

    page_poison=on slub_debug=P

    To add sanity-checking, change PAGE_POISONING_NO_SANITY=n, and add “F” to slub_debug as “slub_debug=PF“.

    read-only after init

    I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity’s KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above).

    Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared “const” in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked “__init“) and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc.

    As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it’s trivial to declare a new data section (“.data..ro_after_init“) and add it to the existing read-only data section (“.rodata“). Kernel structures can be annotated with the new section (via the “__ro_after_init” macro), and they’ll become read-only once boot has finished.
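
    A minimal, hypothetical kernel-style sketch of the annotation follows; the structure and function names are invented for illustration, while the __ro_after_init macro itself lives in linux/cache.h:

    #include <linux/init.h>
    #include <linux/cache.h>    /* __ro_after_init */

    /* Hypothetical callback table that is filled in exactly once during boot. */
    struct demo_ops {
        int (*start)(void);
        void (*stop)(void);
    };

    static int demo_start(void) { return 0; }
    static void demo_stop(void) { }

    /* Placed in .data..ro_after_init: writable during __init, read-only
     * once mark_rodata_ro() has run. */
    static struct demo_ops demo_ops __ro_after_init;

    static int __init demo_setup(void)
    {
        demo_ops.start = demo_start;    /* fine: boot has not finished yet */
        demo_ops.stop = demo_stop;
        return 0;
    }
    early_initcall(demo_setup);

    /* Any later write, e.g. demo_ops.start = NULL from normal kernel code,
     * would fault once the section has been made read-only. */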

    The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel’s attack surface can be made read-only for the majority of its lifetime.

    As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these).

    That’s it for v4.6, next up will be v4.7!

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    October 01, 2016 07:45 AM

    September 30, 2016

    LPC 2016: Last batch of LPC registrations available on October 1

    The last batch of registrations for the 2016 Linux Plumbers Conference will be available starting at noon Eastern Time (EDT) on October 1. This will be the last chance to register to attend the conference. Those interested should visit the registration web site after that time.

    The schedule for the conference has been posted, which includes information on the microconferences (and the discussions planned for those) as well as the refereed talks. Any conflicts noted should be sent to contact@linuxplumbersconf.org.

    We hope to see you in Santa Fe!

    September 30, 2016 02:49 PM

    James Morris: Linux Security Summit 2016 Wrapup

    Here’s a summary of the 2016 Linux Security Summit, which was held last month in Toronto.

    Presentation slides are available at http://events.linuxfoundation.org/events/archive/2016/linux-security-summit/program/slides.

    This year, videos were made of the sessions, and they may be viewed at https://www.linux.com/news/linux-security-summit-videos — many thanks to Intel for sponsoring the recordings!

    LWN has published some excellent coverage:

    This is a pretty good representation of the main themes which emerged in the conference: container security, kernel self-protection, and integrity / secure boot.

    Many of the core or low level security technologies (such as access control, integrity measurement, crypto, and key management) are now fairly mature. There’s more focus now on how to integrate these components into higher-level systems and architectures.

    One talk I found particularly interesting was Design and Implementation of a Security Architecture for Critical Infrastructure Industrial Control Systems in the Era of Nation State Cyber Warfare. (The title, it turns out, was a hack to bypass limited space for the abstract in the cfp system).  David Safford presented an architecture being developed by GE to protect a significant portion of the world’s electrical grid from attack.  This is being done with Linux, and is a great example of how the kernel’s security mechanisms are being utilized for such purposes.  See the slides or the video.  David outlined gaps in the kernel in relation to their requirements, and a TPM BoF was held later in the day to work on these.  The BoF was reportedly very successful, as several key developers in the area of TPM and Integrity were present.

    #linuxsecuritysummit TPM BOF session pic.twitter.com/l1ko9Meiud

    — LinuxSecuritySummit (@LinuxSecSummit) August 25, 2016

    Attendance at LSS was the highest yet with well over a hundred security developers, researchers and end users.

    Special thanks to all of the LF folk who manage the logistics for the event.  There’s no way we could stage something on this scale without their help.

    Stay tuned for the announcement of next year’s event!

    September 30, 2016 11:19 AM

    Daniel Vetter: Commit Rights in the Linux Kernel?!

    For about a year now we’ve been running the Intel graphics driver with a new process: besides the two established maintainers, we’ve added all regular contributors as committers to the main feature branch feeding into -next. This turned out to be a tremendous success, but did require some initial adjustments to how we ran things in the first few months.

    I’ve presented the new model here at Kernel Recipes in Paris, and I will also talk about it at Kernel Summit in Santa Fe. Since LWN is present at both, I won’t bother with a full writeup, but will leave that to much better editors. Update: LWN on kernel maintainer scalability.

    Anyway, there’s a video recording and the slides. Our process is also documented - scroll down to the bottom for the more interesting bits around what’s expected of committers.

    On a related note: at XDC, and a bit before, Eric Anholt started a discussion about improving our patch submission process, especially for new contributors. He used the Rust community as a great example, and presented about it at XDC. It was rather interesting to hear his perspective as a first-time contributor confirm what I learned at LCA this year from Emily Dunham’s awesome talk, Life is better with Rust’s community automation.

    September 30, 2016 05:32 AM

    September 28, 2016

    Kees Cook: security things in Linux v4.5

    Previously: v4.4. Some things I found interesting in the Linux kernel v4.5:

    CONFIG_IO_STRICT_DEVMEM

    The CONFIG_STRICT_DEVMEM setting that has existed for a long time already protects system RAM from being accessible through the /dev/mem device node to root in user-space. Dan Williams added CONFIG_IO_STRICT_DEVMEM to extend this so that if a kernel driver has reserved a device memory region for use, it will become unavailable to /dev/mem also. The reservation in the kernel was to keep other kernel things from using the memory, so this is just common sense to make sure user-space can’t stomp on it either. Everyone should have this enabled. (And if you have a system where you discover you need IO memory access from userspace, you can boot with “iomem=relaxed” to disable this at runtime.)
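
    A rough sketch of what the restriction looks like from user-space follows; the physical address below is only a placeholder, so substitute a region your kernel has actually claimed (see /proc/iomem) when trying this:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder address: pick a region listed as claimed by a
         * driver in /proc/iomem. */
        off_t phys = (off_t)0xfed00000u;
        int fd = open("/dev/mem", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        /* With CONFIG_IO_STRICT_DEVMEM (and without iomem=relaxed),
         * mapping a kernel-claimed region is expected to fail even as root. */
        void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, phys);
        if (p == MAP_FAILED)
            perror("mmap of claimed region");
        else
            munmap(p, 4096);

        close(fd);
        return 0;
    }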

    If you’re looking to create a very bright line between user-space having access to device memory, it’s worth noting that if a device driver is a module, a malicious root user can just unload the module (freeing the kernel memory reservation), fiddle with the device memory, and then reload the driver module. So either just leave out /dev/mem entirely (not currently possible with upstream), build a monolithic kernel (no modules), or otherwise block (un)loading of modules (/proc/sys/kernel/modules_disabled).

    ptrace fsuid checking

    Jann Horn fixed some corner-cases in how ptrace access checks were handled on special files in /proc. For example, prior to this fix, if a setuid process temporarily dropped privileges to perform actions as a regular user, the ptrace checks would not notice the reduced privilege, possibly allowing a regular user to trick a privileged process into disclosing things out of /proc (ASLR offsets, restricted directories, etc) that they normally would be restricted from seeing.

    ASLR entropy sysctl

    Daniel Cashman standardized the way architectures declare their maximum user-space ASLR entropy (CONFIG_ARCH_MMAP_RND_BITS_MAX) and then created a sysctl (/proc/sys/vm/mmap_rnd_bits) so that system owners could crank up entropy. For example, the default entropy on 32-bit ARM was 8 bits, but the maximum could be as much as 16. If your 64-bit kernel is built with CONFIG_COMPAT, there’s a compat version of the sysctl as well, for controlling the ASLR entropy of 32-bit processes: /proc/sys/vm/mmap_rnd_compat_bits.

    Here’s how to crank your entropy to the max, without regard to what architecture you’re on:

    for i in "" "compat_"; do f=/proc/sys/vm/mmap_rnd_${i}bits; n=$(cat $f); while echo $n > $f ; do n=$(( n + 1 )); done; done
    

    strict sysctl writes

    Two years ago I added a sysctl for treating sysctl writes more like regular files (i.e. what’s written first is what appears at the start), rather than like a ring-buffer (what’s written last is what appears first). At the time it wasn’t clear what might break if this was enabled, so a WARN was added to the kernel. Since only one such string showed up in searches over the last two years, the strict writing mode was made the default. The setting remains available as /proc/sys/kernel/sysctl_writes_strict.

    seccomp UM support

    Mickaël Salaün added seccomp support (and selftests) for user-mode Linux. Moar architectures!

    seccomp NNP vs TSYNC fix

    Jann Horn noticed and fixed a problem where if a seccomp filter was already in place on a process (after being installed by a privileged process like systemd, a container launcher, etc) then the setting of the “no new privs” flag could be bypassed when adding filters with the SECCOMP_FILTER_FLAG_TSYNC flag set. Bypassing NNP meant it might be possible to trick a buggy setuid program into doing things as root after a seccomp filter forced a privilege drop to fail (generally referred to as the “sendmail setuid flaw”). With NNP set, a setuid program can’t be run in the first place.
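
    For context, a minimal sketch of the intended flow (set no_new_privs first, then install a filter across all threads with TSYNC) might look like the following; the filter here is a trivial allow-everything program, purely for illustration:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        /* Trivial allow-everything filter, purely for illustration. */
        struct sock_filter insns[] = {
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof(insns) / sizeof(insns[0]),
            .filter = insns,
        };

        /* Without CAP_SYS_ADMIN, no_new_privs must be set before filters
         * can be installed; this is the flag the fix protects. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
            perror("prctl(PR_SET_NO_NEW_PRIVS)");
            return 1;
        }

        /* Install the filter for all threads in the process with TSYNC. */
        if (syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
                    SECCOMP_FILTER_FLAG_TSYNC, &prog)) {
            perror("seccomp(SECCOMP_SET_MODE_FILTER, TSYNC)");
            return 1;
        }

        printf("filter installed with no_new_privs enforced\n");
        return 0;
    }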

    That’s it! Next I’ll cover v4.6.

    Edit: Added notes about “iomem=…”

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    September 28, 2016 09:58 PM