Michael Scott (Hashcode)

Tomorrow is our extended Cousins' Christmas party.

This year, a couple of us agreed to bring a selection of Scotch, whiskey, and bourbon as a way to compare different flavor profiles.

I'm bringing this... #opticalexperiments

My brother brought some sake up from Setting Sun Sake Brewing Co. (8680 Miralani Dr #120, San Diego, CA 92126) ... good #opticalexperiments

Interesting crash/bootloop scenario on Kindle Fire HDXs running Android 6.0.1_r74 (in this case, CM-13.0).

Turns out CM-13.0 nightlies from 11/8 onward were bootlooping on the Kindle Fire HDXs (seems like it's always the community dev who is the last to know about these issues).

Around 11/7, CyanogenMod had merged in android-6.0.1_r74. In that merge, Google added a whitelist check of Zygote's file descriptors prior to forking as a security measure. This whitelist call can fail on "legacy" devices using prebuilt libs from previous versions of Android, which results in a bootloop scenario. Checking logcat would show something like:
"Zygote : Unsupported AF_UNIX socket (fd=3) with empty path."
And then a libandroid crash.

Quite a lot of debugging later, I found this was caused by one of the prebuilt libs linking against libcutils for android logging functions. There's a static version of liblog included for just this purpose (it's probably never used in current hardware).

The static version of liblog and the shared liblog will BOTH open file descriptors used for logging. Right before the Zygote whitelist check, a call is made to the shared liblog's __android_log_close() function, but the file descriptor held open by the static copy is left open. And since this local socket isn't named, the whitelist file descriptor check fails. Avoiding exactly that failure is why __android_log_close() was added in the first place.

In my case, I can fix the boot by removing "liblog" from the LOCAL_WHOLE_STATIC_LIBRARIES line in system/core/libcutils/.

But, I'm still left wondering what libraries are now going to fail because they aren't linked directly against liblog.

Anyway, hope this helps someone else who may have this issue. It took quite a while to debug.

Living on the Edge (of Ubuntu) ... can be painful!

I've been running Ubuntu 15.04 for a while now.  It's been great having a very current kernel alongside the latest improvements from Ubuntu and Debian.  (My past post on using a zRAM ramdisk is one example.)

However, having the newest, greatest toys also has its downsides.  I recently spent 4 days troubleshooting a build break in Android which started some time after March 25th.  I'm guessing I updated packages or inadvertently changed my glibc version.

The outcome was a build error during the checkapi stage of Android build:

Install: /out/mydroid-ara/host/linux-x86/bin/apicheck
Checking API: checkpublicapi-last
Checking API: checkpublicapi-current
/out/mydroid-ara/target/common/obj/PACKAGING/public_api.txt:20: error 5: Added public field android.Manifest.permission.BACKUP
/out/mydroid-ara/target/common/obj/PACKAGING/public_api.txt:82: error 5: Added public field android.Manifest.permission.INVOKE_CARRIER_SETUP
/out/mydroid-ara/target/common/obj/PACKAGING/public_api.txt:106: error 5: Added public field android.Manifest.permission.READ_PRIVILEGED_PHONE_STATE
/out/mydroid-ara/target/common/obj/PACKAGING/public_api.txt:116: error 5: Added public field android.Manifest.permission.RECEIVE_EMERGENCY_BROADCAST

You have tried to change the API from what has been previously approved.

To make these errors go away, you have two choices:
   1) You can add "@hide" javadoc comments to the methods, etc. listed in the
      errors above.

   2) You can update current.txt by executing the following command:
         make update-api

      To submit the revised current.txt to the main Android repository,
      you will need approval.

This occurred on both of my Ubuntu 15.04 boxes and was present when building AOSP android-5.0.2_r1 and android-5.1.0_r1.

For those of you who are unfamiliar with this portion of the Android build, the Android framework exports all of the public portions of the API and makes sure that the current build matches up with what's located under frameworks/base/api/current.txt.  It does this by parsing frameworks/base/res/AndroidManifest.xml and any of the current device's overlay .xml files, processing items marked with various flags in the comments above them: @SystemApi, @hide, etc.   This parsing and processing portion of the checkapi stage is done by a binary called "aapt" (Android Asset Packaging Tool).  Its source is located under frameworks/base/tools/aapt.

I started by checking for upstream fixes to the platform/build or platform/frameworks/base projects.   After striking out, I began debugging the Android build with:
"make checkapi showcommands"
and then manually running the commands under "strace" to see how each binary was involved and what output it generated.

After the first few hours of debugging, it became apparent that the generated file under out/target/common/obj/APPS/frameworks-res_intermediates/src/android/ had comments which were being corrupted when aapt generated it.  I was able to make some manual changes to the AndroidManifest.xml file (removing extra portions of the comments) and get the build to pass.

Digging deeper via strace and then looking at various static link sources, I found that during the AndroidManifest.xml comment processing, the @SystemApi token was being filtered out via a String8.removeAll("@SystemApi") function call.  Experimentally, I removed this part of the processing.  Lo and behold!  The build worked.  Taking a closer look at the removeAll function, I was able to pinpoint a memcpy call as the part of the function causing the corruption.

I then researched memcpy a bit and noted that you are not supposed to use memcpy on overlapping memory regions; memmove is the right call there, because it behaves as if the source is first copied to a temporary buffer before being written to the destination.  After changing the memcpy to memmove, the build was fixed and all was well with the world!

As a good player in the open source world, I immediately thought I should upstream this incredible feat of debugging to the master branch of system/core.  BUT, alas!  The fix has been in the master branch since November 11th, 2014!  And it hasn't been brought into any of the current development tags!  grumble

I've since contacted the Google team about this change and let them know of my experience in hopes that we may yet see this patch in future release tags of Android.

Conclusion: apparently glibc is undergoing some changes, and some of those have now filtered onto my Ubuntu boxes.  Where previously the memcpy usage was incorrect but still usable, it now causes the build break I was seeing.

If you see this kind of error in your Android builds and you're on a newish version of Ubuntu or a Debian distribution, you may want to try this simple patch and see if it helps.

- Hash

A phone conversation with an older relative:

"How is your consulting thing going?"

Me: "I've been working for Linaro on Project Ara for a few months now. I don't really consult anymore."

"Oh. Neat! ... I'm not sure what that is."

Me: "I'm working on the smartphone of tomorrow."

"Oh. Does that mean you can help fix my cable box?"

Me: "Yes. Yes, I can."

The L Android release has been fantastic!   Except that it blows up my current build setup.

On both my laptop and build box, I use a ~30GB ramdisk as $OUT to save wear and tear on my SSD drives.  This also doesn't hurt compile times, and it keeps many "stale dependency" issues at bay when rebuilding often.

I originally set this up by adding 1 line to /etc/fstab:
tmpfs    /out    tmpfs    nodev,nosuid,size=30000M    0    0
And then exporting this in .bashrc:

However, when building L Android for Arm64, the size of the $OUT directory has nearly doubled!  Alas, my poor "old" hardware now seems inadequate for compiling to RAM ... or is it?

Enter: zRAM block device driver:

zRAM provides a compressed in-memory block device.  It's normally enabled as swap in Linux to avoid slowness due to hard disk paging, but the driver also supports hosting actual filesystems, which are then compressed in memory.

Questions: Can we effectively replace the previous ramdisk setup with a zram block device to allow for a larger RAM-based $OUT directory?  If so, how large can one expect to make this device using ~30GB of RAM?

Answers: Yes.  Using zRAM, I've seen compression rates which effectively double the size of actual RAM used.  Right now, I'm using a max of about 51GB (~27GB of actual memory used).  You could go larger if needed.

For reference, my system is running Ubuntu 14.10, and this requires a 3.17+ Linux kernel (more on why later).

Step 1: (if present) Comment out the fstab line which sets up the tmpfs device.  We'll be using a zram block device instead.

Step 2: Ensure that you have the following export in your .bashrc file:

Step 3: Add the following lines to your /etc/rc.local file, making sure it lands above the "exit 0" line:
modprobe zram
echo 4 > /sys/block/zram0/max_comp_streams
echo lz4 > /sys/block/zram0/comp_algorithm
echo 50G > /sys/block/zram0/disksize
mkfs.ext4 -O dir_nlink,extent,extra_isize,flex_bg,^has_journal,uninit_bg -m0 -b 4096 -L "zram0" /dev/zram0
mount -o barrier=0,commit=240,noatime,discard /dev/zram0 /out
chmod 777 /out

Let's break down each of these lines:
1. Insert the zram kernel module and create a new block device (zram0)
2. Enable up to 4 compression streams if needed
3. Set lz4 compression algorithm for the block device (generally considered the best compression for performance)
4. Set 50gb disksize limit (compressed memory, so actual memory use is about 1/2 of this in most cases).  This setting should be replaced with a new "mem_limit" sysfs entry added in later kernels:
Oddly, I was unable to find this entry in the 3.17.4 kernel I'm using now.
5. Set up ext4 on the zram block device.  Drop journaling, as it's not needed for this use-case and skipping it helps with speed.
6. Mount the block device as /out.  Notice that this line includes "discard"; more discussion on that below.
7. Lastly, I set 777 permissions on the /out directory.  You don't need to; I was being quite lazy here while fixing a permission error the Android build was giving me.  This should be something less than 777.

Reboot and check the "mount" output to see your new compressed ramdisk:
/dev/zram0 on /out type ext4 (rw,noatime,barrier=0,commit=240,discard)

When you run builds in Android, a new sub-directory will be created under /out depending on what your Android folder name is.  Example: /out/mydroid-aosp/...  To clear out a build from the ramdisk you can execute: rm -rf /out/mydroid-aosp
(Warning: not for those prone to very bad typos -- don't blame me if you delete the entire contents of your device... you've been warned.  No really, don't delete your stuff by accident; use a GUI or something if you're concerned about this command.)

Let's talk about why kernel 3.17.x is needed and why "discard" is included in the mount line.

Previously, the zram driver was intended as a swap device, and all was well.  Later, once the filesystem use-case came into play, an issue was found where memory wasn't being released during the discard operation.  A fix for this landed in the 3.17 kernel (hence the kernel version requirement).
Without that commit, memory will never be freed up by the block device, ending in terrible, horrible, insufferable system lag once your box enters an endless loop of disk swapping.

The reason "discard" is added to the mount line is convenience: as files are deleted, memory gets freed up.  It is not required, and it adds a few more CPU cycles.  Instead, you could manually run "fstrim /out" to trigger this memory freeing after clearing out files.  It's up to you.

Perhaps this will help someone else out there, in the same situation.


Today is an exciting day in my very tiny corner of the universe! I formally accepted a job offer as an Engineer with Linaro's Mobile Group. My first assignment will be working as part of a team to introduce needed changes in the Android OS to support Project Ara.

Obviously, this is a huge opportunity. But also a very challenging task.

Over the last 3+ years, I've been working two jobs. During the day, I work with clients to solve business-related problems via custom software solutions, and at night I work on Android projects ranging from device support and kernel development to CyanogenMod maintainer duties and Safestrap recovery. It has been an incredible learning experience, but at times it can also be draining. That setup might have kept working thanks to the flexibility of consulting, where I can determine my own working hours. However, it wouldn't be fair to Linaro or my team if I continued this way.

This means that I will be stepping away from much of my community development efforts. Affected projects include: Kindle Fire device support, Safestrap recovery and my maintainer duties at CyanogenMod. I hope to find some replacement developers for many of these projects so they can continue to live on.

Please don't misunderstand the above to mean: I don't support open source development, or that "I've given in to the man." Linaro is a huge proponent of open source and upstream contributions, and hopefully much of the work I'll be doing will end up in public repos.

Lastly, I wanted to say thank you. Over the last few years, I have chatted with you, emailed you, and responded to you in forums, and you have treated me with respect and shown me a lot of love. It has been a pleasure.

TL;DR: I'm making a huge career change and I want to be 100% focused on it. To that end, I'm going to step away from many very time consuming community Android projects.

Post has shared content
OtterX - A new approach to the 'otter' platform

Some of you otter users may already be familiar with the otterx project from +Michael Scott (Hashcode). 

For those of you who may not be familiar, this is a specially created variant of the Kindle Fire 'otter' platform, making use of a new bootloader, a new partitioning setup, and support for F2FS.

These changes are being introduced to expand the life-cycle of this platform, and lay the groundwork for Kindle Fire support as we look forward to Android L. 

Nightlies for this new variant will begin tomorrow. Instructions are being drafted on our wiki to convert your generic 'otter' to the 'otterx' setup. 

As this moves forward, we will discontinue the old 'otter' device, and switch fully to 'otterx'.

Kindle Fire 1st Edition gets F2FS support in the new OtterX builds!

What is F2FS?

How was this done?
1. Pull in 450-some commits from upstream to support F2FS (I pulled through the 3.13 kernel commits)... and I may end up pulling in more to test up through 3.15-rc7, but I wanted to at least put something out which was more stable.
2. Use a couple of VERY HANDY kernel 3.4 backport commits from the Motorola guys (included in this series of commits).
3. Pull in some general fs compatibility updates from upstream, and then fix anything else that was broken with a few commits of my own.

Source: (page 1 through page 13 for the f2fs commits)

To use this on your Kindle Fire 1st Edition device, you'll need to be using the newer OtterX u-boot 2014.01 bootloader v2.05 (for the combined data/sdcard storage partition layout), a new TWRP, and a new CM11 build which both support F2FS.

More information here:

And here:

Note to the reader: I added F2FS support because I thought it might be a neat project, not because it'll make lightning shoot out of a 3-year-old device.  Only time will tell how much improvement F2FS makes.  Let the useless benchmarking begin!

Special thanks to +Chris Fries, +olivier meirhaeghe and the other Motorola devs who help contribute to the community.

Kindle Fire HD models: 7 (2012) "Tate" and 8.9 "Jem" are now officially supported by CyanogenMod.

Kind of a long time in coming, I know.  These devices are over a year old.  But I've always felt like they "just weren't good enough" to be added as official devices.  Most of that got cleared up over the last few months, and now everyone can enjoy nightly builds of one of the best custom ROMs around.

- No official support for the KFire HD 8.9 LTE model; for now, those users can use the standard 8.9 builds.
- And there is no bootloader exploit for the 2013 Kindle Fire HD 7" model ("Soho") device.  So, users of that device PLEASE PLEASE DO NOT ATTEMPT TO FLASH ANYTHING.  IT'S A VERY HARD BRICK IF YOU DO.  You can tell if you have a "Soho" device by the lack of camera and HDMI output.
