Christopher Fox
Software engineer at Clarifai

Christopher's posts

Fun with filters.

Huh, people are still responding to this question about random integer partitions I posted several years ago.

Euler's approach to research, via C. Truesdell, Jeffrey Lagarias, and Dana Ernst. I think this might apply beyond mathematics research.

(1) Always attack a special problem. If possible solve the special problem in a way that leads to a general method.

(2) Read and digest every earlier attempt at a theory of the phenomenon in question.

(3) Let a key problem solved be a father to a key problem posed. The new problem finds its place on the structure provided by the solution of the old; its solution in turn will provide further structure.

(4) If two special problems solved seem cognate, try to unite them in a general scheme. To do so, set aside the differences, and try to build a structure on the common features.

(5) Never rest content with an imperfect or incomplete argument. If you cannot complete and perfect it yourself, lay bare its flaws for others to see.

(6) Never abandon a problem you have solved. There are always better ways. Keep searching for them, for they lead to a fuller understanding. While broadening, deepen and simplify.

Golf ball hitting a steel plate at 150 mph, filmed at 70,000 fps.

A look at the three main problems that had to be addressed to make the Moves app work (activity recognition, trajectory prediction, and significant place detection), and how in a future update most of this data processing will happen on the phone rather than on Moves' servers. I wonder if it's a net win for battery life. The transition from server-side to client-side processing as the algorithms mature is interesting.
The next generation of Moves technology

We have not talked publicly much about the technology powering Moves. However, we have recently reached a technical milestone that will bring major improvements to the user experience: currently, the app does all heavy processing and intelligence on our servers, but in the future more of the processing will be done by the mobile phone itself. This means that you are going to get your activity updates almost instantly – even if you have poor reception or no signal at all, for instance when hiking in the wild. The core functionality of Moves will work abroad without worries about data roaming charges, and it will consume almost none of your data plan.

For the technically minded, here is a brief overview of how Moves actually works – currently and in the future. If you are interested, read on.

The magic of Moves is that it automatically detects both your activities (walking, running, bicycling or transportation) and where you have been. We call the underlying technology, which is completely developed in-house, TRACE (for TRajectory and Activity Classification Engine). The intelligence is based on machine learning and heuristics that operate on the sensor data collected by the phone (mainly the location data and short samples from the accelerometer). We believe that TRACE is the most accurate mobile phone based activity recognition software out there, and we are really proud of it.

TRACE is architected as a “pipeline”, currently consisting of about fifteen “phases”. Each phase does one thing: either transform the data or infer higher-level information from it. TRACE has three main tasks:

1. Activity recognition: Mostly based on the accelerometer data, we use machine learning algorithms to classify short samples into the most likely activity classes. For each sample, we compute dozens of features and apply several transformations to normalize and filter the data. For example, we apply a principal axis transformation to the 3D accelerometer matrix and use typical signal processing techniques to analyze the samples through different filters. Although we use state-of-the-art classification algorithms, they sometimes make mistakes, so we also apply several phases of heuristics to produce the final estimates of the periods of activity. The biggest challenge so far has been distinguishing bicycling from riding in a car, bus, train or tram. We have collected a massive amount of training data to make the recognition sufficiently good (and we continue working to improve it further).

2. Trajectory inference: The location data produced by phones is quite noisy and jumpy, for instance when entering buildings or near very tall buildings. Moreover, Moves uses GPS sparingly, so we have to work with much sparser location data than typical GPS tracker apps. We smooth the location trajectories in order to show users clean routes on the map, to help our place detection algorithms, and to get a reliable estimate of the velocity of movement. Our approaches are based on Bayesian smoothing theory, for which we have an in-house expert.

3. Place recognition and detection: Finally, to produce a meaningful daily storyline, we want to detect the places you visited and show them as individual map snippets. To do this, we first need to infer the significant periods of time that you stayed in some constrained location (for example, stopping at a red light does not usually constitute a meaningful place). Second, we need to compute the geographical location that best identifies the place. If possible, we snap to a place you have previously chosen from the place naming menu in the app. Place detection might appear to be the simplest task in TRACE, but we have spent at least as much time on it as on the activity recognition part. The difficulties arise from the peculiarities of the location data provided by iOS and Android, especially indoors (sometimes the reported location suddenly jumps to a different country and then comes back!), and from the ambiguity of defining a “place”. For example, a shopping mall is a place that contains many smaller businesses, each of which could also be recognized as a separate place.
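The trajectory-smoothing step described above can be sketched in a few lines. The post says Moves uses Bayesian smoothing; the centered moving average below is only a hypothetical stand-in that illustrates what a smoothing phase does to a noisy track, not their actual algorithm.

```python
# Hypothetical sketch of a trajectory-smoothing phase: a centered moving
# average over noisy (lat, lon) samples. The real Moves smoother is
# Bayesian and more sophisticated; this only illustrates the idea.

def smooth_trajectory(points, window=3):
    """Smooth a list of (lat, lon) tuples with a centered moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        lo = max(0, i - half)
        hi = min(len(points), i + half + 1)
        neighborhood = points[lo:hi]
        lat = sum(p[0] for p in neighborhood) / len(neighborhood)
        lon = sum(p[1] for p in neighborhood) / len(neighborhood)
        smoothed.append((lat, lon))
    return smoothed

# A single noisy "jump" in the middle gets pulled back toward the track.
track = [(0.0, 0.0), (0.0, 1.0), (5.0, 2.0), (0.0, 3.0), (0.0, 4.0)]
print(smooth_trajectory(track))
```

Even this naive version shows why smoothing helps velocity estimates: the spurious spike contributes only a fraction of its amplitude to any single smoothed point.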
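The "significant stay" inference in the place-detection task can likewise be sketched as a stay-point scan: find stretches where consecutive samples remain within a small radius for at least a minimum duration. The thresholds and the representation below are invented for illustration; the post does not describe the actual algorithm.

```python
# Hypothetical stay-point detection sketch: a time-ordered stream of
# (t_seconds, x_meters, y_meters) samples is scanned for windows where
# the user stays within max_dist of an anchor point for min_duration.
import math

def stay_points(samples, max_dist=100.0, min_duration=300.0):
    """Return (start_t, end_t, centroid) tuples for detected stays."""
    stays = []
    i = 0
    while i < len(samples):
        j = i
        # Extend the window while each next point stays near the anchor.
        while (j + 1 < len(samples)
               and math.hypot(samples[j + 1][1] - samples[i][1],
                              samples[j + 1][2] - samples[i][2]) <= max_dist):
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            stays.append((samples[i][0], samples[j][0],
                          (sum(xs) / len(xs), sum(ys) / len(ys))))
        i = j + 1
    return stays

# A 60-second red-light stop is ignored; a 10-minute stay is detected.
samples = ([(t, 0.0, 0.0) for t in range(0, 61, 30)]               # short stop
           + [(100, 1000.0, 0.0)]                                   # travelling
           + [(200 + t, 2000.0, 0.0) for t in range(0, 601, 60)])   # long stay
print(stay_points(samples))
```

The ambiguity the post mentions (a mall vs. the shops inside it) corresponds here to the choice of `max_dist`: a larger radius merges nested places into one.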

Currently this all happens on the server. The pipeline is mainly written in Python, and we use Numpy and Scipy for the maths. In addition, we have written the heaviest parts in C++ using SWIG. Running the pipeline takes on average just one second, but the user needs to wait several seconds more for the data to be transmitted to the servers and back. Thanks to our latest breakthrough, we can now run exactly the same intelligence on the phone, in roughly the same time. The phone's CPU is less powerful than our servers', but the phone runs native code as opposed to Python on the server, and because we avoid network transmission latencies, the total wait will usually be much shorter than it is now. The new pipeline is written in C++ and thus can run on iOS, on Android -- and on the server.
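The "pipeline of phases" architecture described above, where each phase either transforms the data or infers something from it, can be sketched as a list of functions applied in order. The phase names and data shapes below are invented for illustration; they are not the real TRACE phases.

```python
# Hypothetical sketch of a TRACE-style pipeline: each phase is a function
# from data to data, and running the pipeline just applies them in order.
# Phase names and the data format are invented for illustration.

def drop_outliers(data):
    data["points"] = [p for p in data["points"] if abs(p) < 100]
    return data

def smooth(data):
    pts = data["points"]
    data["points"] = [
        sum(pts[max(0, i - 1):i + 2]) / len(pts[max(0, i - 1):i + 2])
        for i in range(len(pts))
    ]
    return data

def summarize(data):
    data["mean"] = sum(data["points"]) / len(data["points"])
    return data

PIPELINE = [drop_outliers, smooth, summarize]

def run_pipeline(data, phases=PIPELINE):
    for phase in phases:
        data = phase(data)
    return data

result = run_pipeline({"points": [1.0, 2.0, 999.0, 3.0]})
print(result["mean"])  # 2.0
```

A design like this is also why the same pipeline can be ported wholesale: once each phase is a pure data-to-data function, reimplementing the chain in C++ for the phone leaves the structure unchanged.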

The reason we have had a server-based system, despite its drawbacks, is that it is much easier to develop TRACE in the back-end. If we find a bug or want to publish an improvement to the algorithms, we can deploy the fixes almost instantly. With a phone-based solution, we need to go through the App Store update process and wait for users to upgrade. In the beginning, agility of development was paramount because we could not completely test TRACE before launching it to the public (when working on activity recognition, you never know whether you have a sufficiently representative sample of different types of movement in the training data -- overfitting is a dangerous pitfall that is difficult to avoid), but we were prepared to iterate very rapidly. Our technology is now more mature, and we are confident about stuffing it into the app itself and accepting longer update cycles. We are very excited about the new developments, and will continue working on making Moves the best activity tracker on the market.

--Moves TRACE team

Go behind the scenes with artist Heather Dewey-Hagborg, who has been collecting hair samples containing unknown DNA from public places and then sequencing them to produce an approximation of what the person might look like. From TED Blog, via Colossal:

DNA Portrait is a lovely short documentary shot by TED’s own Kari Mulholland. It features the work of the artist Heather Dewey-Hagborg, who spent time collecting hairs shed in public spaces… and then sequencing the DNA therein to print 3D sculptures of what those hairs’ owners might look like. Whoa. The film is also the secret story of the lab run by TEDGlobal 2012 speaker, Ellen Jorgensen. At Genspace, people are able to experiment with DNA-based technology, regardless of their scientific knowledge or experience. As Jorgensen comments in the film, Dewey-Hagborg’s work is super interesting, not to mention searingly contemporary. “It’s a very accessible way for the public to engage with this new technology. It really brings it to light how powerful it is, the idea that a hair from your head can fall on your street and a perfect stranger can pick it up and know something about it,” she says, adding: “With DNA sequencing becoming faster and cheaper, this is the world we’re all going to be living in.”


In this case, I wish I could change the distance filter on Yelp Monocle.

+Alex Kesling  Bizarre thing in JavaScript: calling the sort() method on an array of numbers sorts them as strings rather than as numbers (i.e., 10 comes before 2).
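This quirk is documented behavior: without a comparator, Array.prototype.sort converts elements to strings and compares them lexicographically. Passing a numeric comparator fixes it:

```javascript
// Default sort compares elements as strings: "10" < "2" lexicographically.
const nums = [10, 1, 2];

console.log(nums.slice().sort());                 // [1, 10, 2]

// A numeric comparator restores the expected ordering.
console.log(nums.slice().sort((a, b) => a - b));  // [1, 2, 10]
```

Note that sort() also mutates the array in place; slice() above keeps the original intact.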

Wouldn't mind a plate of this right now.
Recipe of the Day: Spring Vegetable Risotto with Poached Eggs