Profile

Cover photo
Wawrzyniec Niewodniczański
Worked at blinkx
Attended Wrocław University of Technology
Lives in Ely, UK
260 followers | 324,762 views

Stream

 
I hadn't been doing anything with my home Linux machines for some time and recently decided to change that. There are quite a few projects I hope to push forward. For example, put all my pictures from CDROMs/DVDs into Dropbo...
 
Control terminal name and comment block in VIM
Somewhere (I think it was Stack Overflow) I found a simple command to control the terminal name from the command line, which works very nicely with iTerm2 tabs on macOS, so I added this function to my zsh environment:
termname() {
echo -en "\e]1; $1 \a"
}...
 
Get a bundle of Unix ebooks and support charity!
 
 
When intelligence agents tell you they're "just looking at the metadata" of your conversations – who you called and when, but not what you said – they're engaging in a rather spectacular bit of bullshit. "We know you spoke with an HIV testing service, and then your doctor, and then your insurance company. But we don't know what you discussed."

There's an entire technique in intelligence based on this, called traffic analysis. It was born during World War II, when Gordon Welchman realized he could use a combination of times, triangulated locations, durations, and the (necessarily unencrypted) call signs of senders and receivers to build an amazingly comprehensive map of Nazi troop movements. The same traffic analysis technique made it possible to guess that certain messages were routine and would have known words like "weather report" in them – which became key to cracking Nazi codes.

And where did all this happen? At Bletchley Park, about an hour north of London, where Welchman designed the pipeline which handled intercepted communications. He and Alan Turing were essentially the fathers of modern signals intelligence. (GCHQ is directly descended from the Bletchley Park operation.)

So I'm looking forward to May's further explanations of how the metadata isn't really that important. I'm particularly looking forward to seeing the Home Office's response to the Freedom of Information request for her own metadata, and their explanations for why there's absolutely no reason to release such a thing – even though, by her own declaration, it's perfectly safe.
During the presentation of the new Draft Investigatory Powers Bill to parliament this week, Home Secretary Theresa May attempted to soothe concerns about surveillance elements of the bill.
46 comments on original post
 
Smart Contracts and Programming Defects
Ethereum promises that contracts will 'live forever' in the default case. And, in fact, unless the contract contains a suicide clause, they are not destroyable. This is a double-edged sword. On the one hand, the default suicide mode for a contract is to return all

Communities

7 communities
 
 
As the doors opened on Tuesday morning, and the masses flooded in, one thing became abundantly clear. The tech community -- long tied to a Libertarian mindset -- had turned decisively blue. Not only were Hillary badges on display, everyone I spoke to about the election, especially the Americans, appeared quietly confident.
 
Pay what you want for science and sci-fi books and support charity!
 
 
Time Series Data

Time Series Data is numerical data gathered from data sources on your computer at regular intervals, e.g. load from the kernel, packets/s from a network interface, or queries/s from a MySQL server. A single server produces about 500 values per measurement interval; a database server easily 1000 values per interval.

Typical programs to collect that data are collectd, Diamond, or similar. They usually have a plugin architecture that runs measurement plugins at the prescribed intervals and then sends (hostname, metric name, timestamp, value) tuples upstream.
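The tuple shape is easy to sketch. This is a minimal toy, not collectd's or Diamond's actual plugin API; `read_loadavg`, the metric names, and the hostname are all stand-ins:

```python
import time

def read_loadavg():
    # Stand-in for a real measurement plugin; a real collector would
    # read /proc/loadavg or query the monitored service instead.
    return 0.42

def collect(hostname, ts):
    # Emit one (hostname, metric name, timestamp, value) tuple per metric.
    return [(hostname, "load.load1", ts, read_loadavg()),
            (hostname, "load.load5", ts, read_loadavg())]

tuples = collect("server130103", int(time.time()))
```

Multiply this by 500 to 1000 metrics per server per interval and you have the write load described below.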

The time series database usually collects and persists this data, aggregates metrics, and allows searches on the data, returning measurements per interval that match the query. Smarter time series databases return the data already in a format that is good for graphing.

Dashboards take the time series data and plot it into graphs. Dashboards usually have a set of predefined parameterized graphs, so you can ask for 'all metrics relevant for a MySQL server' but supply hostname, interval, and start time as parameters. The result is usually a toilet roll of graphs.


Time series data is hard to collect and persist, because the database has to perform one large matrix transposition, either at write or at read time: 

At write time, data comes in as large rows: for a given point in time, for each server 500 to 1000 values arrive - all different metrics for a fixed point in time.

At read time, a single metric or a small set of metrics is being read, so for a fixed metric, a lot of different points in time need to be read.

So either at read or at write time the data has to be turned sideways by 90 degrees, hence the matrix transposition mentioned above.
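The 90-degree turn can be shown with a toy in-memory row store (hypothetical data): writes arrive one row per timestamp, but reading one metric back must visit every stored row.

```python
# Toy row store: one row per timestamp, all metrics for that instant inside.
rows = {}  # timestamp -> {metric name: value}

def write_row(ts, values):
    # Write side is cheap: one insert per interval.
    rows[ts] = values

def read_metric(name):
    # Read side must touch every row to extract one column:
    # this is the matrix transposition.
    return [(ts, vals[name]) for ts, vals in sorted(rows.items())]

write_row(60, {"load1": 0.5, "qps": 120})
write_row(120, {"load1": 0.7, "qps": 180})
series = read_metric("load1")
```

A column-oriented layout would pay the same transposition cost on the write side instead.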


If you imagine a data store that is a traditional row-based database with a B-tree-like structure, such as InnoDB or BoltDB, you write data to disk in timestamp order. A single InnoDB data page (as used by MySQL Enterprise Manager's repository) contains 16 KB of data: a bunch of row data for a fixed timestamp. This is easily written, but on read there are only about 4 to 8 bytes of useful data per 16 KB page.

So to plot a single metric over a day's worth of data, several thousand pages have to be read from the database. This is efficient on write, but hard on read.

You could turn this around and, for example, define a table per metric to simulate a column-based data store. In InnoDB this is not very efficient: there are one to two dozen bytes of overhead for each row, so with a timestamp (4 bytes) and a single value (4-8 bytes) per row you get rows that are too narrow to amortize the overhead.

Stores such as Whisper in Graphite do this: they define a preallocated ring-buffer file per metric in the filesystem. The preallocation is good, because it reduces fragmentation and makes space management a lot easier. Data storage is a lot easier than in InnoDB, because no btree = less index management, and data access is a simple wrapped array in the ring-buffer-structured file.
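A minimal sketch of the idea, assuming one day at 60 s resolution and 8-byte records; Whisper's actual on-disk format (multiple archives, aggregation) differs, this only shows the preallocated wrapped array:

```python
import os
import struct
import tempfile

SLOTS = 1440                     # one day at 60 s resolution
STEP = 60                        # seconds per slot
RECORD = struct.Struct("<If")    # (timestamp, value): 8 bytes per slot

def create(path):
    # Preallocate the whole file up front: fixed size, no fragmentation.
    with open(path, "wb") as f:
        f.write(b"\0" * SLOTS * RECORD.size)

def write_point(path, ts, value):
    slot = (ts // STEP) % SLOTS  # wrap around: old data gets overwritten
    with open(path, "r+b") as f:
        f.seek(slot * RECORD.size)
        f.write(RECORD.pack(ts, value))

def read_point(path, ts):
    slot = (ts // STEP) % SLOTS
    with open(path, "rb") as f:
        f.seek(slot * RECORD.size)
        stored_ts, value = RECORD.unpack(f.read(RECORD.size))
    # The slot may hold an older wrapped-around sample (or nothing).
    return value if stored_ts == ts else None

path = os.path.join(tempfile.mkdtemp(), "load1.rb")
create(path)
write_point(path, 1_000_020, 0.75)
```

Note that every `write_point` is one seek plus one tiny write per metric, which is exactly the problem the next paragraphs describe.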

With column-based stores such as Whisper you get a lot of write amplification and a ton of seeks, though. If you get 500 metrics per minute, each metric has to be written to a different position in a different file. On a rotating disk that is 500 seeks, plus 500 4K block reads followed by 500 4K block writes and 500 fsyncs. That is of course impossible to do with any rotating disk.

On SSD, it is probably worse, because 500 random block updates probably cause 500 64 KB flash cell rewrites (or whatever structure your SSD has internally), so maximum write amplification and the worst DWPD ever.

Obviously you need to collect metrics in memory for a longer interval, and then flush the data buffers in a more structured way, appending more than 4 bytes at a time to each metric's ring-buffer file. That gets the number of seeks/SSD cell rewrites down, at the cost of data loss if the time series database machine loses power.
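The buffering idea can be sketched like this (names and the flush threshold are hypothetical; `disk` stands in for the per-metric ring-buffer files):

```python
from collections import defaultdict

FLUSH_AFTER = 10  # intervals to buffer in memory before flushing

buffers = defaultdict(list)   # metric -> [(timestamp, value), ...]
disk = defaultdict(list)      # stand-in for the per-metric files on disk

def ingest(metric, ts, value):
    buffers[metric].append((ts, value))
    if len(buffers[metric]) >= FLUSH_AFTER:
        flush(metric)

def flush(metric):
    # One seek plus one contiguous append per metric,
    # instead of one seek per 4-byte sample.
    disk[metric].extend(buffers[metric])
    buffers[metric].clear()

for minute in range(10):
    ingest("load1", minute * 60, 0.5)
```

Everything sitting in `buffers` is exactly what you lose on power failure, which is the trade-off stated above.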

Want to watch people learn how to handle time series data and see every single mistake that could possibly be made? Read https://influxdb.com/docs/v0.9/concepts/storage_engine.html, and be ready to bite your fingernails, because it is painful. Take the Influx way of thinking (60 s sample interval, 500 or 1000 metrics per server, and 12k servers) and cringe. It's Halloween, enjoy.


Getting rid of data is another problem. In a B+ tree such as InnoDB, data is kept in primary key order. With time series data that is likely chronological order - new data is stored at the right hand side of the tree and older data is typically stored at the left hand side of the tree. Deleting data means deleting a large subtree from the left hand side of the B+ tree, and because it is a balanced tree, it means a lot of tree rebalancing operations.

Smarter people partition data, that is, they create a subdivision of their metrics tables over the time dimension - a table is really a set of tables divided by time: You logically see 'server130103.load1' as a metric, but physically it is subdivided into subtables per hour or per day. Deletion then is dropping subtables from the date range that is being expired. This is much faster and hurts less than an actual delete with all the tree rebalancing from right to left that needs to go on in order to justify the B-aspect of a B+ tree.
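The partition-and-drop scheme can be sketched with per-day sub-tables (names and suffix format hypothetical):

```python
# Hypothetical per-day partitions of the metric 'server130103.load1'.
# Logically one table, physically one sub-table per day.
partitions = {
    "server130103.load1_20160101": ["... a day of rows ..."],
    "server130103.load1_20160102": ["... a day of rows ..."],
    "server130103.load1_20160103": ["... a day of rows ..."],
}

def expire_before(cutoff_day):
    # Expiry = dropping whole sub-tables. No row-by-row deletes,
    # no B+ tree rebalancing.
    doomed = [name for name in partitions
              if name.rsplit("_", 1)[1] < cutoff_day]
    for name in doomed:
        del partitions[name]
    return doomed

dropped = expire_before("20160103")
```

Dropping a partition is a metadata operation on the table set, which is why it is so much cheaper than a ranged DELETE.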

On the other hand, you probably want to shard by metric. A single metric per table in the InnoDB case is most likely not efficient because of row overhead, but all metrics in a single table is also not good, because many writes to the right-hand side of the same table = lock conflict. So sharding to a reasonable number of tables is best: more locks and equal pressure distribution between all the locks.

For a ring-buffer structure such as Whisper all this is moot: we get a file per metric and it is a ring, so expiration is automatic (the buffer simply overwrites itself), and no tree means no rebalancing (but metric per file = too many seeks = does not scale on rotating disk, and also not on SSD).


To plot data we need to paint pixels. Our graph probably has a reasonable number of pixels along the X-axis, let's just assume it's 800 pixels to have a concrete number.

Of course, if you measure data once per minute, you have 1440 samples per day. You can't paint all 1440 samples into 800 pixels (that would be one or two samples per pixel), so you need to resample and smooth out the values.

If you plot a month, you have 30 * 1440 samples / 800 pixels, or 54 samples per pixel. If you do not know how to draw a graph (I am looking at you, MySQL Enterprise Manager), you actually paint all 54 samples into a single X-axis slot. The resulting graph is of course useless, because a big yellow bar of 800 x 50 pixels = zero useful information.

You could average the 54 samples per X-position into a single value and plot that: better, but still of limited usefulness. You could take the 54 samples, get the min, median, and max values, and plot those: extremely useful for multiple metrics per graph.
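The min/median/max aggregation per pixel column is a few lines; this is a sketch over fake raw data, with the bucket size derived from the sample and pixel counts above:

```python
import statistics

def downsample(samples, pixels):
    # Reduce a raw series to one (min, median, max) triple per X pixel.
    per_pixel = len(samples) / pixels
    out = []
    for x in range(pixels):
        bucket = samples[int(x * per_pixel):int((x + 1) * per_pixel)]
        out.append((min(bucket), statistics.median(bucket), max(bucket)))
    return out

month = [float(i % 100) for i in range(30 * 1440)]  # fake raw samples
plot = downsample(month, 800)  # 800 (min, median, max) triples, 54 samples each
```

Plot the min-max band as a shaded area and the median as a line, and the graph survives resampling without lying.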

Or you could take the 54 values per pixel, create a histogram along the Y-axis, shade it in different brightnesses of grey, and color in the 5th and 95th percentiles and the median: the best idea ever for a single metric per graph (useless for many metrics per graph, but see the previous paragraph). Look at what Smokeping does; Tobias knows his stuff.


Note how the metric you collect and the metric you plot are different animals. You collect raw time series data, but you never plot that. It's always at least resampled, and most of the time also aggregated (min/avg/max, percentiles, and median).

Also, in the case of monitoring MySQL, almost all useful metrics are derived metrics of the form (cache hits / total samples * 100) or similar; that is, you see a lot of standardized ratios and other formulas that require combining two or more raw metrics and a bit of math to actually create a useful plot value. I am looking at you, Zabbix.
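A derived metric is just arithmetic over two raw series, sample by sample; the cache-hit ratio from the text as a sketch (hypothetical values):

```python
def hit_ratio(hits, total):
    # Derived metric: combine two raw series into one plot value per sample.
    # Guard against zero totals so the plot gets a gap instead of an error.
    return [100.0 * h / t if t else None for h, t in zip(hits, total)]

hits  = [90, 180, 0]
total = [100, 200, 0]
ratios = hit_ratio(hits, total)
```

The monitoring system has to support this kind of cross-metric formula, or every ratio becomes a manual pre-aggregation job.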


Dashboards come in many forms and shapes, most of them useless. The use case is at least threefold:

Operations wants to see problematic machines. They have many. You want graphs such as 'show me the load time series of the last hour from the 5 machines with the highest load in set x' or similar. That's a kind of alerting without the actual alert; it answers the question 'where to look'.

For a problematic machine, operations wants to see a predefined toilet roll of meaningful metrics: 'for the database server x, show me the standard graphs for the time interval from y ago to now'. This answers the questions 'when did it start' and 'what's actually the problem'.

For root cause analysis, a last-level person wants to construct a graph on an empty canvas, dropping an arbitrary set of formulas with metrics onto something with multiple overlapping X-axes and independent left and right Y-axes, to do YoY/DoD comparisons, leading and trailing metric comparisons (e.g. packets in/packets out/packets dropped), and similar stuff. This is usually not dashboard-style work but canvas-style work, and pretty free-form.

For capacity planning, you want a dashboard with graphs as for the root cause analysis, so you can do YoY overviews, plot in capacity limits and barriers and similar things.
18 comments on original post
Education
  • Wrocław University of Technology
    Material (Molecular) Engineering, 1997 - 2002
  • Wrocław University of Technology
    (Quantum) Chemistry, 2002 - 2008
Basic Information
Gender
Male
Other names
Wawrzek, Paweł, Larry
Work
Occupation
Linux SysAdmin
Employment
  • R3CEV
    DevOps Engineer, 2016 - present
  • blinkx
    Cloud Engineer, 2014 - 2016
  • MimeCast
    Site Reliability Engineer, 2012 - 2014
  • Zeebox
    Systems Administrator, 2012 - 2012
  • Citrix Systems Uk Ltd
    Systems Administrator, 2010 - 2012
  • Booking.com Ltd
    Systems Administrator, 2008 - 2010
  • CCDC
    Product Support Engineer, 2007 - 2008
Places
Map of the places this user has lived
Currently
Ely, UK
Previously
Cambridge, UK - Lubin, Poland - Wroclaw, Poland - Szczecin, Poland - Bolesławiec, Poland
Contact Information
Home
Email
Wawrzyniec Niewodniczański's +1's are the things they like, agree with, or want to recommend.
Humble Book Bundle: Unix presented by O'Reilly
www.humblebundle.com

Get a bundle of Unix ebooks and support charity!

Humble Book Bundle: Science Fiction by REAL Scientists
www.humblebundle.com

Pay what you want for science and sci-fi books and support charity!

The Weekend Read: April 2
r3cev.com

Left-to-Right: High Lizard Buterin, Master Lizard Swanson, Grand Lizard Grant It’s been a busy week for magic internet money and their cousi

R3 completes trial of five cloud-based emerging blockchain technologies ...
r3cev.com

Chain, Eris Industries, Ethereum, IBM and Intel participate in the most significant implementation of distributed ledger technology to date

Humble Book Bundle: Sci-fi Classics
www.humblebundle.com

Get sci-fi classics from authors like Bester, Asimov, and Zelazny while supporting charity!

Linux Foundation Unites Industry Leaders to Advance Blockchain Technology
www.linuxfoundation.org

New open ledger project to transform the way business transactions are conducted around the world

Many AWS accounts and Zsh
larryn.blogspot.com

That might not be a common problem, but I have to deal with many AWS accounts in the same time. For example I might to have to run an Ansibl

Datadog and many dataseries stacked together
larryn.blogspot.com

Recently, I've started to use Datadog. It has nice features, but I have also found some annoying lacks. One of them is no easy way to prepar

Humble Might & Magic Bundle
www.humblebundle.com

Decades (and even centuries) of adventures in the Humble Might & Magic Bundle!

Humble Sci-Fi Book Bundle 2 presented by WordFire
www.humblebundle.com

Our sci-fi book bundle featuring Kevin J. Anderson's WordFire Press is now worth $88!

Humble Brainiac Books Bundle Presented by No Starch Press
www.humblebundle.com

We've brainstormed a bundle worth over $300 in the Humble Brainiac Book Bundle Presented by No Starch Press

Star Wars Humble Bundle
www.humblebundle.com

May the bundle be with you! Get up to twelve games in our Star Wars Bundle!

Humble Mobile Bundle 10
www.humblebundle.com

Pay what you want for up to 9 Android games in Humble Mobile Bundle 10!

Trailing in its wake
www.economist.com

THEIR rivalry is most vividly expressed each spring, when two boats splash up the River Thames. They compete for brilliant academics and for

Relax on Kepler-16b - Where your shadow always has company
planetquest.jpl.nasa.gov

Like Luke Skywalker's planet "Tatooine" in Star Wars, Kepler-16b orbits a pair of stars. Depicted here as a terrestrial planet, Kepler-16b m

Where the Grass is Always Redder on the Other Side
planetquest.jpl.nasa.gov

Kepler-186f is the first Earth-size planet discovered in the potentially 'habitable zone' around another star, where liquid water could exis

Experience the Gravity of a Super Earth
planetquest.jpl.nasa.gov

Twice as big in volume as the Earth, HD 40307g straddles the line between "Super-Earth" and "mini-Neptune" and scientists aren't sure if it