Profile

Christopher Fox
Worked at North Carolina State University
Attends North Carolina State University
Lives in Raleigh, NC
101 followers | 37,851 views

Stream

Christopher Fox

Shared publicly  - 
 
Fun with filters.

Christopher Fox

Shared publicly  - 
 
Huh, people are still responding to this question about random integer partitions I posted several years ago.
Jeff Freeman: It's always weird when I get a notification that turns out to be for a post from long ago.

Christopher Fox

Shared publicly  - 
 
Mesmerizing

Christopher Fox

Shared publicly  - 
 
In this case, I wish I could change the distance filter on Yelp Monocle.

Christopher Fox

Shared publicly  - 
 
Languages/environments that support probabilistic inference are "in the spotlight," apparently. DARPA is asking for research proposals in this area and is offering funding.
Probabilistic programming languages are in the spotlight. This is due to the announcement of a new DARPA program to support their fundamental research. But what is probabilistic programming? What can ...
2 comments

I am surprised by this; maybe I'm missing something. It seems like inference methods (MCMC, EM, etc.) would have to be way better to make this at all practical.

Christopher Fox

Shared publicly  - 
 
Euler's approach to research, via C. Truesdell, Jeffrey Lagarias, and Dana Ernst. I think this might apply beyond mathematics research.

(1) Always attack a special problem. If possible solve the special problem in a way that leads to a general method.

(2) Read and digest every earlier attempt at a theory of the phenomenon in question.

(3) Let a key problem solved be a father to a key problem posed. The new problem finds its place on the structure provided by the solution of the old; its solution in turn will provide further structure.

(4) If two special problems solved seem cognate, try to unite them in a general scheme. To do so, set aside the differences, and try to build a structure on the common features.

(5) Never rest content with an imperfect or incomplete argument. If you cannot complete and perfect it yourself, lay bare its flaws for others to see.

(6) Never abandon a problem you have solved. There are always better ways. Keep searching for them, for they lead to a fuller understanding. While broadening, deepen and simplify.
Several weeks ago, links to a survey article by Jeffrey Lagarias about Euler's work and its modern developments and a blog post by Richard J. Lipton that discusses Lagarias' paper were circulated o...
 
A look at the three main problems that had to be addressed to make the Moves app work (activity recognition, trajectory prediction, and significant place detection) and how, in a future update, most of this data processing will happen on the phone rather than on Moves' servers. I wonder if it's a net win for battery life. The transition from server-side to client-side processing as the algorithms mature is interesting.
 
The next generation of Moves technology

We have not talked publicly much about the technology powering Moves. However, we have recently reached a technical milestone that will bring major improvements to the user experience: currently, the app does all heavy processing and intelligence on our servers, but in the future more processing will be done by the mobile phone itself. This means that you are going to get your activity updates almost instantly – even if you have poor reception or no signal at all, for instance when hiking in the wild. The core functionality of Moves will work abroad without worries about data roaming charges, and it will consume hardly any of your data plan.

For the technically minded, here is a brief overview of how Moves actually works – currently and in the future. If you are interested, read on.

The magic of Moves is that it automatically detects both your activities (walking, running, bicycling or transportation) and where you have been. We call the underlying technology, which is completely developed in-house, TRACE (for TRajectory and Activity Classification Engine). The intelligence is based on machine learning and heuristics that operate on the sensor data collected by the phone (mainly the location data and short samples from the accelerometer). We believe that TRACE is the most accurate mobile phone based activity recognition software out there, and we are really proud of it.

TRACE is architected as a “pipeline”, currently consisting of about fifteen “phases”. Each phase does one thing: either transform the data or infer higher-level information from it. TRACE has three main tasks:

1. Activity recognition: Mostly based on the accelerometer data, we use machine learning algorithms to classify short samples into the most likely activity classes. For each sample, we compute dozens of features and apply several transformations to normalize and filter the data. For example, we apply a principal axis transformation (http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors) to the 3D accelerometer matrix and use typical signal processing techniques (http://en.wikipedia.org/wiki/Digital_signal_processing) to analyze the samples through different filters. Although we use state-of-the-art classification algorithms, they sometimes make mistakes. Thus, we also apply several phases of heuristics to produce the final estimates of the periods of activity. The biggest challenge so far has been to distinguish bicycling from traveling by car, bus, train, or tram. We have collected a massive amount of training data to make the recognition sufficiently good (and we continue to work on improving it further).

2. Trajectory inference: The location data produced by phones is quite noisy and jumpy, for instance when entering buildings or near very tall buildings. Moreover, Moves uses GPS sparingly, so we have to work with much sparser location data than typical GPS tracker apps. We smooth the location trajectories in order to show users pretty routes on the map, to help our place detection algorithms, and to get a reliable estimate of the velocity of movement. Our approaches are based on Bayesian smoothing theory (http://www.lce.hut.fi/~ssarkka/course_k2011/), for which we have an in-house expert.

3. Place recognition and detection: Finally, to produce a meaningful daily storyline, we want to detect the places you visited and show them as individual map snippets. To do this, we first need to infer the significant periods of time during which you stayed in some constrained location (for example, stopping at a red light does not usually constitute a meaningful place). Second, we need to compute the geographical location that best identifies the place. If possible, we snap to a place you have previously chosen from the place naming menu in the app. Place detection might appear to be the simplest task of TRACE, but we have spent at least as much time on it as on the activity recognition part. The difficulties arise from the peculiarities of the location data provided by iOS and Android, especially indoors (sometimes the reported location suddenly jumps to a different country and then comes back!), and also from the ambiguity of defining a “place”. For example, a shopping mall is a place that includes many smaller businesses, which could also be recognized as separate places.
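The "significant place" step above can be illustrated with a toy stay-point heuristic: find windows of location fixes that stay within some radius for some minimum duration. All thresholds, names, and structure here are illustrative, not Moves' actual algorithm:

```javascript
// Equirectangular approximation of distance in meters (fine at city scale).
function distMeters(a, b) {
  const R = 6371000;
  const toRad = (d) => (d * Math.PI) / 180;
  const x = toRad(b.lon - a.lon) * Math.cos(toRad((a.lat + b.lat) / 2));
  const y = toRad(b.lat - a.lat);
  return R * Math.sqrt(x * x + y * y);
}

// Toy stay-point detection: a "stay" is a run of fixes that remain within
// maxDist meters of an anchor fix for at least minDuration seconds.
function detectStayPoints(fixes, maxDist = 100, minDuration = 300) {
  const stays = [];
  let i = 0;
  while (i < fixes.length) {
    let j = i + 1;
    // Grow the window while every fix stays near the anchor fix.
    while (j < fixes.length && distMeters(fixes[i], fixes[j]) <= maxDist) j++;
    const duration = fixes[j - 1].t - fixes[i].t;
    if (duration >= minDuration) {
      // The centroid of the window identifies the place.
      const w = fixes.slice(i, j);
      stays.push({
        lat: w.reduce((s, f) => s + f.lat, 0) / w.length,
        lon: w.reduce((s, f) => s + f.lon, 0) / w.length,
        start: fixes[i].t,
        end: fixes[j - 1].t,
      });
      i = j;
    } else {
      i++;
    }
  }
  return stays;
}
```

A real system would additionally handle the location jumps and place-ambiguity issues the post mentions; this sketch only shows the basic windowing idea.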

Currently this all happens on the server. The pipeline is mainly written in Python, and we use Numpy (http://www.numpy.org/) and Scipy (http://www.scipy.org/) for the maths. In addition, we have written the heaviest parts in C++ using SWIG (http://www.swig.org/). Running the pipeline takes on average just one second, but the user needs to wait several seconds more for the data to be transmitted to the servers and back. Thanks to a recent breakthrough, we can now run exactly the same intelligence on the phone, in roughly the same time. The phone CPU is less powerful than our servers’, but the phone runs native code as opposed to Python on the server, and because we avoid network transmission latencies, the total wait will usually be much shorter than it is now. The new pipeline is written in C++ and thus can run on iOS, Android -- and on the server.
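The pipeline-of-phases architecture described above is a simple pattern: each phase takes the data and returns a transformed copy, and the pipeline folds them in order. A minimal sketch, with phase names and thresholds that are purely illustrative (not TRACE's actual phases):

```javascript
// Run a list of phases over the data, feeding each phase's output to the next.
const runPipeline = (phases, data) => phases.reduce((d, phase) => phase(d), data);

// Example phases operating on an array of speed samples (m/s).
const dropNegative = (xs) => xs.filter((x) => x >= 0);         // clean bad readings
const smooth = (xs) =>
  xs.map((x, i) => (i === 0 ? x : 0.5 * x + 0.5 * xs[i - 1])); // simple 2-tap filter
const classify = (xs) =>
  xs.map((x) => (x < 2 ? "walking" : x < 6 ? "running" : "transport"));

const result = runPipeline([dropNegative, smooth, classify], [1.0, 1.2, -3, 8.0]);
// result: ["walking", "walking", "running"]
```

Keeping each phase a pure data-in/data-out function is what makes it easy to run the same pipeline on a server or on a phone.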

The reason we have had a server-based system, despite its drawbacks, is that it is much easier to develop TRACE in the back-end. If we find a bug or want to publish an improvement to the algorithms, we can deploy the fixes almost instantly. With a phone-based solution, we need to go through the App Store update process and wait for users to upgrade. In the beginning, the agility of development was paramount because we could not completely test TRACE prior to launching it to the public (when working on activity recognition, you never know if you have a sufficiently representative sample of different types of movements in the training data -- overfitting (http://en.wikipedia.org/wiki/Overfitting#Machine_learning) is a dangerous pitfall that is difficult to avoid), but we were prepared to iterate very rapidly. Our technology is now more mature, and we are confident about stuffing it into the app itself and accepting longer update cycles. We are very excited about the new developments, and will continue working on developing Moves as the best activity tracker on the market.

--Moves TRACE team
1
Christopher Fox: A ways down in the article is a comparison to Manhattan:

"As expensive as Manhattan is, and as far along into the gentrification process as the many surrounding communities are, there are still many places to go within the New York orbit to have an affordable, urban way of life.

In the Bay Area, there are far fewer options that fit the criteria of walkable, transit-proximate and affordable."

Christopher Fox

Shared publicly  - 
 
 
Go behind the scenes with artist Heather Dewey-Hagborg, who has been experimenting with taking hair samples containing unknown DNA from public places and then sequencing them to produce an approximation of what the person might look like. From TED Blog, via Colossal:

DNA Portrait is a lovely short documentary shot by TED’s own Kari Mulholland. It features the work of the artist Heather Dewey-Hagborg, who spent time collecting hairs shed in public spaces… and then sequencing the DNA therein to print 3D sculptures of what those hairs’ owners might look like. Whoa. The film is also the secret story of the lab run by TEDGlobal 2012 speaker, Ellen Jorgensen. At Genspace, people are able to experiment with DNA-based technology, regardless of their scientific knowledge or experience. As Jorgensen comments in the film, Dewey-Hagborg’s work is super interesting, not to mention searingly contemporary. “It’s a very accessible way for the public to engage with this new technology. It really brings it to light how powerful it is, the idea that a hair from your head can fall on your street and a perfect stranger can pick it up and know something about it,” she says, adding: “With DNA sequencing becoming faster and cheaper, this is the world we’re all going to be living in.”

Read more: http://adafru.it/b68987
12 comments on original post

Christopher Fox

Shared publicly  - 
 
+Alex Kesling  Bizarre thing in JavaScript: calling the sort() method on an array of numbers sorts them as strings rather than as numbers (i.e., 10 comes before 2).
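A quick illustration of the default lexicographic behavior and the usual comparator fix:

```javascript
const nums = [10, 2, 1, 33];

// Default sort coerces elements to strings: "1" < "10" < "2" < "33".
const lexicographic = [...nums].sort();          // → [1, 10, 2, 33]

// Passing a numeric comparator gives the expected order.
const numeric = [...nums].sort((a, b) => a - b); // → [1, 2, 10, 33]
```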

Christopher Fox

Shared publicly  - 
 
Wouldn't mind a plate of this right now.
 
Recipe of the Day: Spring Vegetable Risotto with Poached Eggs >> http://epi.us/10gfrST
18 comments on original post

Christopher Fox

Shared publicly  - 
 
I need to mess around with the new Web Speech API in Chrome. It appears to be an easy (and free!) way to access good-quality speech recognition. I'm especially excited that it's not just one-shot, i.e., call a function and wait for speech results to be returned. Instead, there's support for continuous listening: when the user pauses, listening continues but an event is generated containing the results obtained so far. Along with the text-to-speech support that I believe Chrome has, this could be a great tool for students in a dialogue systems course.
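A minimal sketch of the continuous-listening pattern described above. The browser wiring uses the prefixed `webkitSpeechRecognition` constructor that Chrome exposes; the result-accumulation logic is factored into a plain function so it can run (and be tested) outside the browser. The event shape follows the Web Speech API draft: `results` is a list of result objects, each with an `isFinal` flag and indexed alternatives.

```javascript
// Accumulate finalized transcripts from a SpeechRecognition "result" event.
function collectFinal(event, finals) {
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal) finals.push(result[0].transcript.trim());
  }
  return finals;
}

// Browser wiring (Chrome only at the time of writing):
// const rec = new webkitSpeechRecognition();
// rec.continuous = true;       // keep listening across pauses
// rec.interimResults = false;  // deliver only finalized results
// const finals = [];
// rec.onresult = (e) => collectFinal(e, finals);
// rec.start();
```

With `continuous = true`, each pause produces a new finalized result rather than ending recognition, which is the behavior the post is excited about.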
People
In his circles
101 people
Have him in circles
101 people
Robert Gay's profile photo
Melissa Hecht's profile photo
Ryan Card's profile photo
Ian Pike's profile photo
Margaret Rahmoeller's profile photo
Andy Adams-Moran's profile photo
Sujeong Kim's profile photo
Morgan Catha's profile photo
Greg Bacon's profile photo
Work
Occupation
student
Employment
  • North Carolina State University
    research assistant
Places
Map of the places this user has lived
Currently
Raleigh, NC
Previously
Washougal, WA - Vancouver, WA - Washougal, WA - Portland, OR - Seattle, WA - Raleigh, NC
Story
Tagline
CS grad student from the northwest, living in the southeast
Education
  • North Carolina State University
    Computer Science, 2011 - present
  • University of Washington
    Math and Economics, 2005 - 2009
Basic Information
Gender
Male