While trying to upgrade clang at Mozilla I found some problems: our build is not easily reproducible, and the tools we use to compare benchmarks have issues. I hope to write a bit more about that soon, but I wanted to share a particularly interesting fact first.

Chatting about these problems with Nick Lewycky yesterday, he mentioned that assuming a normal distribution might be an oversimplification. A computer has a minimum time in which it can perform a task, and there are many things that can cause a run to be slower than that: a particular load address might cause cache lines to alias, the kernel might move the program just as it had the cache populated, etc.

I decided to try it out. The attached histogram is from 4000 runs of sunspider's 3d-raytrace with firefox's js interpreter (not the jit). Notice the distinct peaks. It does suggest that a good model is a base time plus multiple independent random problems, each of which slows the run down.
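That model is easy to simulate. The sketch below (all numbers are made up for illustration, not measured from sunspider) draws runs as a fixed base time plus independent random slowdowns; bucketing the results reproduces the multi-peaked shape rather than a single bell curve.

```python
import random
from collections import Counter

BASE_MS = 100.0  # hypothetical minimum time for the task

# Each independent "problem" fires with some probability and adds a
# fixed penalty. The (probability, penalty-in-ms) pairs are invented.
HAZARDS = [
    (0.30, 15.0),  # e.g. an unlucky load address aliasing cache lines
    (0.10, 40.0),  # e.g. the kernel moving the program off a warm cache
]

def simulate_run(rng):
    # Base time plus a little non-negative measurement jitter.
    t = BASE_MS + rng.uniform(0.0, 2.0)
    for prob, penalty in HAZARDS:
        if rng.random() < prob:
            t += penalty
    return t

def simulate(n=4000, seed=0):
    rng = random.Random(seed)
    return [simulate_run(rng) for _ in range(n)]

runs = simulate()

# Bucket into 5 ms bins. The histogram has peaks near BASE, BASE+15,
# BASE+40 and BASE+55 instead of one normal-looking hump.
hist = Counter(int(t // 5) * 5 for t in runs)
for bucket in sorted(hist):
    print(bucket, hist[bucket])
```

With these parameters the no-problem peak dominates (about 63% of runs), and each additional peak is a subset of runs that hit one or both slowdowns.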

With entire companies dedicated to writing benchmarks, I hope someone has studied this before. Does anyone know of a reference?
IIRC, sunspider is highly dependent on the CPU wait states. You may want to talk to v8 folks about it.
Yit, Paul, thank you so much for pointing me at the paper. There might be some bias because of the circumstances, but I think it is already one of my favourites :-)

Marc-Antoine, I will check it tomorrow. Thanks! 