Profile

Michael Foukarakis
Works at IOActive, Inc.
Lives in Athens, Greece
72 followers | 40,473 views

Stream

Michael Foukarakis

Shared publicly

The upcoming Linux kernel v3.18 will extend the slab merging feature of the SLUB allocator to the SLAB allocator (see, e.g., https://git.kernel.org/linus/423c929cbb and https://git.kernel.org/linus/12220dea07).
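
In a nutshell, when a new slab cache is created, the allocator first looks for an existing, compatible cache and simply aliases the new name to it instead of setting up a dedicated cache. The following is only a rough sketch of that decision, with made-up names -- the real logic lives in __kmem_cache_alias()/find_mergeable() (in mm/slub.c, respectively mm/slab_common.c after the commits above) and checks a few more things (flags, debug options, alignment calculation):

#include <stddef.h>

/* Rough sketch of the merge decision -- illustrative only, not kernel code. */
struct cache {
        const char *name;
        size_t size;                    /* per-object slot size */
        unsigned long flags;
        void (*ctor)(void *);
};

/* Caches with a constructor or debug/poisoning flags never get merged. */
static int unmergeable(const struct cache *s, unsigned long never_merge)
{
        return s->ctor != NULL || (s->flags & never_merge);
}

/*
 * On cache creation, look for an existing cache the new one can be aliased
 * to: the new objects must fit into the existing slot size, the alignment
 * must be compatible, and no more than a pointer's worth of space may be
 * wasted per object.
 */
static struct cache *find_mergeable_cache(struct cache *caches, int n,
                                          size_t size, size_t align,
                                          void (*ctor)(void *))
{
        int i;

        if (ctor)                       /* a constructor rules out merging */
                return NULL;

        for (i = 0; i < n; i++) {
                struct cache *s = &caches[i];

                if (unmergeable(s, 0 /* SLAB_NEVER_MERGE-like mask */))
                        continue;
                if (size > s->size)                     /* must fit */
                        continue;
                if (align && (s->size % align))         /* alignment */
                        continue;
                if (s->size - size >= sizeof(void *))   /* too much waste */
                        continue;
                return s;       /* reuse it: the new name becomes an alias */
        }
        return NULL;            /* no match: create a dedicated cache */
}

That is why, in the output below, caches like 'cred_jar' end up aliased to the matching kmalloc-* size class instead of getting a slab of their own.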

Let's see what that feature does by slightly instrumenting the kernel (the same information is available via sysfs, but a printk() is simpler to grep for ;)...

[vanilla]$ git diff -- mm/slub.c
diff --git a/mm/slub.c b/mm/slub.c
index 3e8afcc07a76..650fbef4510c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3699,6 +3699,7 @@ __kmem_cache_alias(const char *name, size_t size, size_t align,
                int i;
                struct kmem_cache *c;
 
+               pr_info(">>> %s: merging '%s' into '%s'\n", __func__, name, s->name);
                s->refcount++;
 
                /*

This gives us the following:

[vanilla]$ dmesg | grep merging | grep kmalloc
[    0.013565] >>> __kmem_cache_alias: merging 'pid' into 'kmalloc-128'
[    0.014213] >>> __kmem_cache_alias: merging 'anon_vma_chain' into 'kmalloc-64'
[    0.220085] >>> __kmem_cache_alias: merging 'cred_jar' into 'kmalloc-192'
[    0.220891] >>> __kmem_cache_alias: merging 'task_xstate' into 'kmalloc-512'
[    0.221653] >>> __kmem_cache_alias: merging 'fs_cache' into 'kmalloc-128'
[    0.223107] >>> __kmem_cache_alias: merging 'key_jar' into 'kmalloc-256'
[    0.223828] >>> __kmem_cache_alias: merging 'names_cache' into 'kmalloc-4096'
[    0.296000] >>> __kmem_cache_alias: merging 'pool_workqueue' into 'kmalloc-256'
[    0.298417] >>> __kmem_cache_alias: merging 'skbuff_head_cache' into 'kmalloc-256'
[    0.302642] >>> __kmem_cache_alias: merging 'uid_cache' into 'kmalloc-128'
[    0.304361] >>> __kmem_cache_alias: merging 'bio_integrity_payload' into 'kmalloc-192'
[    0.305139] >>> __kmem_cache_alias: merging 'biovec-16' into 'kmalloc-256'
[    0.305803] >>> __kmem_cache_alias: merging 'biovec-64' into 'kmalloc-1024'
[    0.306483] >>> __kmem_cache_alias: merging 'biovec-128' into 'kmalloc-2048'
[    0.307166] >>> __kmem_cache_alias: merging 'biovec-256' into 'kmalloc-4096'
[    0.307846] >>> __kmem_cache_alias: merging 'bio-0' into 'kmalloc-192'
[    0.349950] >>> __kmem_cache_alias: merging 'sgpool-8' into 'kmalloc-256'
[    0.350622] >>> __kmem_cache_alias: merging 'sgpool-16' into 'kmalloc-512'
[    0.351295] >>> __kmem_cache_alias: merging 'sgpool-32' into 'kmalloc-1024'
[    0.352003] >>> __kmem_cache_alias: merging 'sgpool-64' into 'kmalloc-2048'
[    0.352685] >>> __kmem_cache_alias: merging 'sgpool-128' into 'kmalloc-4096'
[    0.362359] >>> __kmem_cache_alias: merging 'eventpoll_epi' into 'kmalloc-128'
[    0.379876] >>> __kmem_cache_alias: merging 'request_sock_TCP' into 'kmalloc-256'
[    0.380644] >>> __kmem_cache_alias: merging 'RAW' into 'kmalloc-1024'
[    0.381277] >>> __kmem_cache_alias: merging 'PING' into 'kmalloc-1024'
[    0.382419] >>> __kmem_cache_alias: merging 'ip_dst_cache' into 'kmalloc-192'
[    0.384757] >>> __kmem_cache_alias: merging 'secpath_cache' into 'kmalloc-64'
[    0.385472] >>> __kmem_cache_alias: merging 'inet_peer_cache' into 'kmalloc-192'
[    0.386210] >>> __kmem_cache_alias: merging 'tcp_bind_bucket' into 'kmalloc-64'
[    0.617272] >>> __kmem_cache_alias: merging 'fasync_cache' into 'kmalloc-64'
[    0.618706] >>> __kmem_cache_alias: merging 'dnotify_struct' into 'kmalloc-32'
[    0.620263] >>> __kmem_cache_alias: merging 'fsnotify_mark' into 'kmalloc-128'
[    0.621750] >>> __kmem_cache_alias: merging 'kiocb' into 'kmalloc-128'
[    0.706589] >>> __kmem_cache_alias: merging 'virtio_scsi_cmd' into 'kmalloc-192'
[    0.708232] >>> __kmem_cache_alias: merging 'sd_ext_cdb' into 'kmalloc-32'
[    0.710648] >>> __kmem_cache_alias: merging 'scsi_sense_cache' into 'kmalloc-128'
[    0.731169] >>> __kmem_cache_alias: merging 'dm_io' into 'kmalloc-64'
[    0.734164] >>> __kmem_cache_alias: merging 'io' into 'kmalloc-64'
[    0.973911] >>> __kmem_cache_alias: merging 'request_sock_TCPv6' into 'kmalloc-256'
[    0.982847] >>> __kmem_cache_alias: merging 'fib6_nodes' into 'kmalloc-64'

So the slab merging is pretty effective. But looking at what kind of caches get merged with the general-purpose caches -- i.e. the kmalloc-* ones -- is kinda scary if you throw kernel bugs into the game. If one assumes some random driver contains a user-triggerable use-after-free bug of just the right size, that bug might be abused to tamper with critical kernel data structures like the process credentials ('cred_jar') or the process' memory mappings ('anon_vma_chain'). The other slab caches are probably "usable" too, e.g. the 'io' slab handles objects containing a function pointer, making it easier to hijack the kernel's control flow.
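
To make that more concrete, here's a deliberately simplified userspace analogy (plain malloc()/free() instead of the slab allocator, made-up struct names): once the buggy object and the security-sensitive object share an allocator size class, a chunk freed through the buggy code path can be handed out again for the sensitive object, and the stale pointer then writes straight into it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct driver_buf { char data[64]; };                       /* the buggy driver's object */
struct fake_cred  { unsigned int uid, gid; char pad[56]; }; /* same 64 byte size class */

int main(void)
{
        struct driver_buf *buf = malloc(sizeof(*buf));

        free(buf);              /* bug: the pointer is kept and used later */

        /*
         * The allocator is free to hand the same chunk out again for the
         * next same-sized allocation -- analogous to both objects living
         * in kmalloc-64.
         */
        struct fake_cred *cred = malloc(sizeof(*cred));
        cred->uid = cred->gid = 1000;

        /* The dangling pointer now (likely) aliases 'cred'... */
        memset(buf->data, 0, sizeof(buf->data));            /* ...and zeroes uid/gid */

        printf("uid after use-after-free write: %u\n", cred->uid);
        free(cred);
        return 0;
}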

This exploitation scenario is encouraged by slab merging because the critical objects would be allocated from the same slab as the buggy driver's objects: the general-purpose kmalloc slab. So, looking from a security angle, one may not want that slab merging feature. And, in fact, there's a knob to disable it: the kernel command line option "slub_nomerge" (or, starting with https://git.kernel.org/linus/12220dea07, "slab_nomerge"). This option disables the slab merging feature and therefore prevents the above exploitation scenario. It can be seen as a proactive countermeasure as it enforces the creation of dedicated slabs. The use-after-free bug would still be there, but it couldn't be abused to manipulate the critical kernel objects -- as those would reside in a different slab cache.
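
For example, on a GRUB-based distro the option can be added to the kernel command line roughly like this (file location, variable name and the config regeneration command differ between distros -- illustrative only):

# /etc/default/grub (illustrative; use slub_nomerge on kernels before v3.18)
GRUB_CMDLINE_LINUX="... slab_nomerge"

# regenerate the bootloader config and reboot, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg

After rebooting, the option shows up in /proc/cmdline and the instrumented kernel's dmesg should no longer contain any of the "merging ... into kmalloc-*" lines from above.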

So, let's have a look at what the PaX/grsecurity project has to say about that topic:

$ git diff v3.17..linux-grsec/v3.17-pax -- mm/slub.c
diff --git a/mm/slub.c b/mm/slub.c
index 3e8afcc07a76..74cd3bf90c8f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
...
@@ -2710,7 +2718,7 @@ static int slub_min_objects;
  * Merge control. If this is set then no merging of slab caches will occur.
  * (Could be removed. This was introduced to pacify the merge skeptics.)
  */
-static int slub_nomerge;
+static int slub_nomerge = 1;
 
 /*
  * Calculate the order of allocation given an slab object size.


Cache merging disabled by default -- as expected :)

EDIT: Clarified the exploitation probability due to slab merging. Exploiting slab use-after-free bugs is not "only" possible because of slab merging, but slab merging increases the likelihood of an exploitable situation.
Work
Occupation
Security Researcher
Skills
Problem solving
Employment
  • IOActive, Inc.
    Security Researcher, 2013 - present
  • Niometrics
    Software Engineer, 2012 - 2013
  • Nokia Siemens Networks
    R&D Engineer, 2010 - 2012
  • HNDGS
    R&D Engineer, 2009 - 2012
  • FORTH
    R&D Engineer, 2008 - 2009
Basic Information
Gender
Male
Story
Tagline
I grow software in my backyard.
Places
Currently
Athens, Greece
Previously
Manchester, UK - Singapore - Nice, France - Heraklion, Greece