Dear people better than I am at math and/or statistics (or at least better versed in them) -
I would like to create a "slow request" detector that monitors a log of network requests to a web server, and builds up a model (e.g. mean and variance?) of the latencies. Probably there would be a different model for each URL in the request logs since some handlers will naturally be slower than others. And then as new requests are made and logged, the detector would (a) alert if any requests are slower "than predicted" (by some definition), and (b) update its model(s) with the new data.
The idea is that some handlers may be naturally quite variable, so the detector should tolerate slower requests from them (without raising an alarm) more readily than from handlers whose latency is normally more consistent. A rough sketch of what I'm imagining is below.
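To make the question concrete, here's a minimal sketch of the kind of thing I have in mind (Python). Everything here is a placeholder on my part, not a design I'm committed to: Welford's online algorithm for the running mean/variance, the z-score threshold of 3, and the minimum sample count before alerting are all just arbitrary first guesses.

```python
import math
from collections import defaultdict

class LatencyModel:
    """Running mean/variance for one URL, via Welford's online algorithm."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def stddev(self):
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

models = defaultdict(LatencyModel)

def observe(url, latency_ms, z_threshold=3.0, min_samples=30):
    """Flag the request if it's far above this URL's mean, then update the model."""
    m = models[url]
    is_slow = (
        m.n >= min_samples                 # don't alert until we have some history
        and m.stddev() > 0
        and (latency_ms - m.mean) / m.stddev() > z_threshold
    )
    if is_slow:
        print(f"SLOW: {url} took {latency_ms:.0f} ms "
              f"(mean {m.mean:.0f} ms, stddev {m.stddev():.0f} ms)")
    m.update(latency_ms)
    return is_slow
```

One thing I already suspect is wrong with this sketch: latencies are usually heavily right-skewed rather than normal, so a plain z-score may not be the right test, which is partly why I'm asking.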
I feel like something like this must already exist, perhaps in the exact form I'm describing (slow-request detection on servers), but if not then at least in a more general form (something like "anomaly detection in random processes"?). I've tried googling around a bit without much success - it may be that I'm just not searching for the right things.
Any suggestions, ideas, tips, pointers, etc. would be appreciated...