Profile

Michael Gebetsroither
Works at mgIT
Attended Technische Universität Graz
Lives in Graz
1,652 followers | 374,177 views

Stream

Michael Gebetsroither

Shared publicly  - 
Great explainer on Public Key Pinning (HPKP): http://bit.ly/1KICyxb - plus hands-on deployment tips. Read those very carefully before enabling it on your site!

Michael Gebetsroither

Shared publicly  - 
Good enumeration of Windows persistence methods. http://goo.gl/kMnbho 
TL;DR: Are you into red teaming? Need persistence? This post is not that long, read it ;) Are you into blue teaming? Have to find those pesky backdoors? This post is not that long, read it ;) In the previous post I listed...
2 comments on original post

Michael Gebetsroither

Text based logs just don't work...
An interesting blog post about binary logs, and why they are a good thing.
28 comments on original post
4 comments
+Jan Mercl I've only ever seen text-based logs working when they were unused.
The last resort is a small number of legacy log files, which are parsed with Logstash for an ELK cluster.
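
To illustrate what that legacy parsing step involves, here is a minimal Python sketch (the log format, field names and values are made up for illustration) of the kind of regex work Logstash does to turn an unstructured text line into a structured event:

    import json
    import re

    # A hypothetical legacy syslog-style line; real formats vary per application.
    LINE = "Jun 12 10:04:01 web01 sshd[4242]: Failed password for root from 10.0.0.7"

    # One hand-written regex per log format - exactly the brittle part
    # that structured/binary logging avoids.
    PATTERN = re.compile(
        r"(?P<timestamp>\w{3} +\d+ [\d:]+) "
        r"(?P<host>\S+) "
        r"(?P<program>\w+)\[(?P<pid>\d+)\]: "
        r"(?P<message>.*)"
    )

    match = PATTERN.match(LINE)
    if match:
        # One JSON document per event, ready for indexing in Elasticsearch.
        print(json.dumps(match.groupdict()))
    else:
        # Lines that drift from the expected format are silently lost.
        print(json.dumps({"unparsed": LINE}))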

Michael Gebetsroither

Shared publicly  - 
In Germany, the Internet is "uncharted territory" (Neuland) for all of us. In Singapore, the Prime Minister posts his own C++ code (https://www.facebook.com/leehsienloong/photos/a.344710778924968.83425.125845680811480/905828379479869/?type=1&theater) and looks forward to his retirement, because then he can finally learn Haskell (http://www.pmo.gov.sg/mediacentre/transcript-speech-prime-minister-lee-hsien-loong-founders-forum-smart-nation-singapore, "My children are in IT, two of them – both graduated from MIT. One of them browsed a book and said, "Here, read this". It said "Haskell – learn you a Haskell for great good", and one day that will be my retirement reading").

And that, my German compatriots, is why you can't have a decent Internet connection.
I told the Founders Forum two weeks ago that the last computer program I wrote was a Sudoku solver, written in C++ several years ago...
13 comments on original post

Michael Gebetsroither

Shared publicly  - 
Did Tesla Just Kill Nuclear Power?


It would be almost three hours until Tesla's big announcement, but inside a Northwestern University classroom near Chicago Thursday night, the famed nuclear critic Arnie Gundersen had the inside scoop: Tesla Motors CEO Elon Musk was about to announce an industrial-scale battery, Gundersen said, that would cost about 2¢ per kilowatt hour to use, putting the final nail in the coffin of nuclear power.
1 comment on original post
8 comments
Uncertainty about something not yet reality? I'd say that certainty would be the error. Anyhow Michael, I've enjoyed yer posts, but if my "opinion" is FUD then I guess I don't really need to see yours.
Bye bye :)

Michael Gebetsroither

Shared publicly  - 
Beware, MySQL users. By the way, most of the time it is better to use socket communication with your MySQL server: it's faster, and you can disable the network connection entirely.
"The BACKRONYM vulnerability allows for an attacker to downgrade and snoop on the SSL/TLS connection that MySQL client libraries use to communicate to a MySQL server."
1 comment on original post
2 comments
+Ingo Oeser ack, he most probably meant Unix sockets.
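
A minimal sketch of the socket approach, assuming the PyMySQL client library and a stock Debian-style socket path (both are assumptions, not from the original post):

    import pymysql

    # Connect over the local Unix domain socket instead of TCP; the socket
    # path and credentials are placeholders.
    conn = pymysql.connect(
        unix_socket="/var/run/mysqld/mysqld.sock",
        user="app",
        password="secret",
        database="appdb",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")
            print(cur.fetchone())
    finally:
        conn.close()

With all clients on the socket, the server can additionally be started with skip-networking, so there is no TCP listener left to downgrade in the first place.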

Michael Gebetsroither

Shared publicly  - 
"[T]ranslating cancer-sequencing results into potential treatment options often takes weeks with a team of experts to study just one patient's tumour and provide results to guide treatment decisions. Watson appears to help dramatically reduce that timeline," he explained.

Michael Gebetsroither

Shared publicly  - 
Kristian Köhntopp originally shared:
Distributed Discovery Systems: Zookeeper, etcd, Consul

When you are doing distributed anything, all components of the distributed system need to agree which systems are part of the distributed service, and which aren't. This is a problem that is hard to solve - if you try to do it yourself, you'll end up here: https://aphyr.com/tags/jepsen because you will be doing it wrong. Instead, use a consensus system such as Zookeeper (ZK), etcd or Consul and be done with it properly.

If your cluster does not use such a system and is also not Jepsen tested, it is likely to be defective. I am looking at you, Openstack.

This kind of system is called a consensus system, because they have a number of nodes that need to agree (find a consensus) on who is the leader in the face of nodes or connections between nodes failing. For that, these systems are using a validated implementation of an algorithm such as Paxos (http://en.wikipedia.org/wiki/Paxos_%28computer_science%29) or Raft (http://en.wikipedia.org/wiki/Raft_(computer_science)).

Services and Operations offered

Once these systems agree between themselves on cluster membership and leadership for the consensus system itself, they provide a service for the rest of the cluster - mostly they create a tree of nodes, and each node can have subnodes and attributes, much like an XML-tree and a little less like a filesystem (that is, each node in a consensus system is usually a directory and a file at the same time, in filesystem terms).

Typically, operations on nodes are atomic. That is, two clients of the consensus system can try to create the same node simultaneously, but because of ordering guarantees in the cluster consensus system, only one client can succeed, and the cluster agrees clusterwide on which client that was, even in the face of adverse conditions such as ongoing node failures or network splits. That latter part is important - many systems call themselves cluster systems, but work well only as long as the cluster and the network operate in fair weather.

The Jepsen testing harness (https://aphyr.com/tags/jepsen) is a setup in which such distributed systems are seeing a defined test load with a known end state. Jepsen runs the test load and at the same time kills nodes or splits the cluster network randomly. It records the state changes of the cluster and the end state in each node, and compares it to the expected result. If the results differ, the cluster is broken. Most are. In fact, only ZK survived on the first attempt and at the moment only ZK, etcd and Consul are known to survive a Jepsen test.

Once you have a stable and verified cluster core, you can do useful things with it. For that, you need a set of operations to change state, and a mechanism to learn when the state changes. 

The cluster provides operations for clients that allow them to be notified of changes in a node or a subtree starting in a single node - these are called watches. A watch is a substitute for polling. Instead of each client asking "Did the cluster master change yet? Did the cluster master change yet? Did the cluster master change yet?" in a tight loop, clients are being notified once the cluster master changes.
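
A sketch of a watch from the client side, using the kazoo Python client for ZK (the path and cluster address are illustrative):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    # The callback fires once immediately and then again on every change
    # to /master - no polling loop on the client.
    @zk.DataWatch("/master")
    def on_master_change(data, stat):
        if data is None:
            print("no master at the moment")
        else:
            print("current master:", data.decode())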

Assume a Zookeeper installation with three machines that are running Zookeeper instances in a single cluster. These nodes will agree among themselves on a master and a common shared state, which in the beginning is an empty tree of nodes, called Znodes.

The ZK API has only a few operations: "create /path data" to create a Znode, "delete /path" to delete it again, "exists /path" to check if a path exists, "setData /path newdata" to change the data of a Znode, "getData /path" to get that data and "getChildren /path" to get a list of child Znodes under /path.
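
In the kazoo Python client, those operations map almost one to one (address and path are illustrative):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    zk.create("/demo", b"hello")           # create /path data
    print(zk.exists("/demo") is not None)  # exists /path
    zk.set("/demo", b"world")              # setData /path newdata
    data, stat = zk.get("/demo")           # getData /path
    print(data, stat.version)
    print(zk.get_children("/"))            # getChildren /path
    zk.delete("/demo")                     # delete /path

    zk.stop()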

It is important to understand that the data in a node can only be read or written as a whole, atomically, and never modified in place, only atomically replaced. The data is supposed to be small, maybe a KiB or four at most.

A central concept in Zookeeper (and Consul) is the session. You can connect to any Zookeeper in a cluster and will see the same cluster state - after all, that is what consensus systems are for. When you establish a connection, you also establish a session. Even when you lose the connection because of network problems, the session remains. If you manage to connect to any other ZK node in the cluster within the timeout limit, your session will remain active. To terminate the session, you either request a session end actively, or you time out because your client is so isolated within a broken network that it cannot reach any ZK node that is still part of the surviving cluster.

Znodes in a ZK hierarchy can be persistent - you create them and they stay around until you delete them. They can also be ephemeral - you create them and when your session ends they will go away. So if a client registers itself with a cluster in an ephemeral node and puts connection information in the data of its Znode, we can be pretty sure that the client is alive and somehow reachable.

Znodes can also be sequential. That is, we provide a base name for a Znode such as "job-" and the cluster appends a unique, monotonically increasing number and tells us the resulting name ("job-17"). Because of the properties of the cluster consensus protocol, all nodes will agree on what name is owned by whom, and on a global order.
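
For example (kazoo again, with an illustrative /jobs path):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()
    zk.ensure_path("/jobs")

    # Two creates with the same base name get distinct, ordered names,
    # and every node in the cluster agrees on that order.
    first = zk.create("/jobs/job-", b"payload-a", sequence=True)
    second = zk.create("/jobs/job-", b"payload-b", sequence=True)
    print(first, second)  # e.g. /jobs/job-0000000000 /jobs/job-0000000001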

A final important concept is versions - each Znode has a version number associated with the data that is being stored, and each time that data is replaced (remember that it can't be changed in place), the version number is incremented. A couple of operations in the API, such as setData and delete, can be executed conditionally - the calls take a version number as a parameter and the operation succeeds if and only if the version passed by the client still matches the current version on the server.
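
That turns setData into a compare-and-swap. A kazoo sketch (path and data are made up):

    from kazoo.client import KazooClient
    from kazoo.exceptions import BadVersionError

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()
    zk.create("/config", b"v1")

    data, stat = zk.get("/config")
    try:
        # Succeeds only if nobody replaced the data since we read it.
        zk.set("/config", b"v2", version=stat.version)
    except BadVersionError:
        print("someone else won the race - re-read and retry")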

Usage

Obviously, we can use this to create an attendance list of worker nodes in a cluster. Each worker will connect to the cluster, create a session and create the persistent directory /workers, if it doesn't exist already.

It will then create an ephemeral node for itself, /workers/hostname, and leave connection information as the data in it. Yes, Openstack, those connection endpoints in keystone's MySQL are pretty useless compared to that.
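
A registration sketch in kazoo (hostnames, the port and the endpoint format are invented for illustration):

    import socket

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
    zk.start()

    # Persistent parent; creating it is idempotent.
    zk.ensure_path("/workers")

    # Ephemeral child: it exists exactly as long as this worker's session
    # lives, so its mere presence doubles as a liveness signal.
    hostname = socket.gethostname()
    zk.create("/workers/" + hostname,
              "{}:7077".format(hostname).encode(),  # connection endpoint
              ephemeral=True)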

If a client goes away due to node failure or network partition, the session will end and the endpoint information stored in the ephemeral node will go away together with the ephemeral node for that host itself - until the network or the node recovers and the worker registers again.

We can also create a /jobs directory, persistent of course, and create /jobs/job- sequential nodes in it. They will be numbered automatically, and we will have a global, agreed-on order of jobs in the cluster. These jobs now need to be assigned to available workers. For that we need a scheduler that makes these decisions, and among all eligible nodes only one node can become the scheduler. We are going to call that node the master.

A node that wants to be the master can try to create an ephemeral node /master and put its own connection information into that node.

Because the node is ephemeral, it will go away when the master's session goes away.

Because node creation is atomic and can succeed only if there is no node already in place, it will fail when there already is a master.

Because we can set a watch on /master, we will be notified if the master goes away, and we will then try to become master in its place. That may or may not succeed depending on the order of events in the surviving cluster, but we don't need to care, because ZK does. It will elect a ZK master internally, create a shared agreed-on state of the ZK Znode tree, notify the surviving sessions of the master loss and then await attempts to create the master node. It will order these attempts, allow the first one to succeed and tell all the others of their failure. It will then deliver the data - the connection information for the newly elected master - on request to everybody.
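
Put together, the whole election fits in a few lines of kazoo (the endpoint is made up; kazoo also ships a ready-made Election recipe that wraps the same idea):

    from kazoo.client import KazooClient
    from kazoo.exceptions import NodeExistsError

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    MY_ENDPOINT = b"10.0.0.5:9000"  # this node's made-up connection info

    def try_to_become_master():
        try:
            # Atomic create: exactly one contender clusterwide succeeds.
            zk.create("/master", MY_ENDPOINT, ephemeral=True)
            print("I am the master")
        except NodeExistsError:
            print("someone else is master; waiting for its demise")

    # Re-run the election whenever /master disappears (the old master's
    # ephemeral node is removed when its session ends).
    @zk.DataWatch("/master")
    def on_master_change(data, stat):
        if data is None:
            try_to_become_master()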

The master can get a list of unassigned jobs in /jobs, and a list of idle workers in /workers, and assign jobs to workers. It will create a directory /schedules and keep scheduled jobs and their assigned workers in sequential ephemeral nodes in there.

The cluster simply works, even if the network breaks around it, nodes join or leave, or other things happen.

Why

Clusters are dynamic environments. In a cluster of 100 hardware nodes you do not want to register membership to the cluster manually with calls to a Keystone or Nova API. Nodes will register and unregister themselves with the cluster as they go up and down.

In such a cluster, you will need roles such as API service, compute service, scheduler service and so on. For many roles you will need a specific number of instances, such as three API instances, exactly one scheduler instance, and one compute service per hardware node.

You do not actually care where your scheduler or API services are running - if there are too few of them, nodes will notice that and simply spawn an instance. That may make it too many, so some of them may decide to kill themselves. Because the cluster provides a global order of events, operations are ordered and atomic, and you will not get flapping services.

Services may migrate, respawn or change locations, but the cluster manages that automatically through discovery, instead of you entering keystone or nova commands to keep track of the cluster state after planned or unplanned topology changes. As long as you have capacity, there will be exactly the right number of instances of each role running in the cluster.

ZK can keep small files directly in the cluster, or it can store pointers to large files in a highly available store for large files (such as S3 https URLs).

If you think that this, together with automated respawning and discovery, makes a lot of the Puppet code used to set up the cluster redundant, you have been following the discussion very closely. Congratulations.

Differences

ZK is the oldest and best tested system of the three. It has all important concepts and works very well. It has a few idiosyncrasies regarding order across session stop/start blocks that need careful coding.

etcd is part of the systemd/etcd/etcd-sa/fleetd/coreos/docker combo. It does not use persistent connections, but http/https, and hence has no concept of a session like ZK's. It also does not have watches. Instead we see the concept of a value TTL replacing ephemeral nodes, and the concept of version numbers in polling replacing watches - you tell etcd that you have seen the history of a subtree up to a point, and you get all the missing bits since then.
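
A sketch of both idioms against the etcd v2 HTTP API, using Python's requests (key names and values are illustrative):

    import requests

    BASE = "http://127.0.0.1:2379/v2/keys"

    # TTL instead of an ephemeral node: the key vanishes unless its owner
    # keeps refreshing it within 30 seconds.
    resp = requests.put(BASE + "/workers/host1",
                        data={"value": "host1:7077", "ttl": 30})
    index = resp.json()["node"]["modifiedIndex"]

    # Version-numbered polling instead of a watch: "I have seen history up
    # to `index`; block until something newer happens to this key."
    change = requests.get(BASE + "/workers/host1",
                          params={"wait": "true", "waitIndex": index + 1})
    print(change.json())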

Consul is much like ZK, but uses Raft internally (like etcd does) instead of Paxos.

All of them are Jepsen-safe in their latest releases.

In the end it matters less which one you are using. You should be worried if your cluster doesn't use any of these. I am still looking at you, Openstack.

#zookeeper #etcd #consul #openstack #cluster #consensus
8 comments on original post
Thanks Michael.

Michael Gebetsroither

Shared publicly  - 
Interesting times ahead. Docker may not have it all its own way in container technology after all.
Summary: Docker faces a challenge as CoreOS's open container format gains allies.
5 comments on original post

Michael Gebetsroither

Shared publicly  - 
Optimizer Hints in MySQL 5.7. Thanks, +Sveta Smirnova!
Sveta Smirnova presents some useful Optimizer hints in MySQL 5.7.7 along with an example of when query hints are needed.
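
For flavor, this is roughly what the new inline hint syntax looks like from a client (the tables and the choice of the BNL hint are made up here, not taken from Sveta's post):

    import pymysql

    conn = pymysql.connect(host="127.0.0.1", user="app",
                           password="secret", database="appdb")

    # 5.7 optimizer hints live in a /*+ ... */ comment right after SELECT;
    # BNL(c) asks for block nested-loop join buffering on table c.
    with conn.cursor() as cur:
        cur.execute("SELECT /*+ BNL(c) */ COUNT(*) "
                    "FROM orders o JOIN customers c ON c.id = o.customer_id")
        print(cur.fetchone())

    conn.close()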

Michael Gebetsroither

Shared publicly  - 
"• Your DOCSIS network is less safe than your wifi network 
• Downstream sniffing is easy and decryption is possible 
• Upstream sniffing is close 
• Active attacks are plausible"

Michael Gebetsroither

Shared publicly  - 
To everybody using how-old.net. That viral Microsoft thingy that guesses your age. From the terms linked at the bottom. [1]

You just gave Microsoft a license to use your uploaded pictures in whatever way they want. Advertising, data mining, sharing it with any third party (NSA, anyone?)

So never ever dare talk about how important privacy is and how we need to defend it etc if you are giving it up for the lulz anyway.

Geez

"However, by posting, uploading, inputting, providing, or submitting your Submission, you are granting Microsoft, its affiliated companies, and necessary sublicensees permission to use your Submission in connection with the operation of their Internet businesses (including, without limitation, all Microsoft services), including, without limitation, the license rights to: copy, distribute, transmit, publicly display, publicly perform, reproduce, edit, translate, and reformat your Submission; to publish your name in connection with your Submission; and to sublicense such rights to any supplier of the Website Services."

[1] http://azure.microsoft.com/en-us/support/legal/website-terms-of-use/
16 comments on original post
People
1,652 people
Education
  • Technische Universität Graz
Basic Information
Gender
Male
Other names
gebi
Work
Employment
  • mgIT
    Owner, 2009 - present
  • Technische Universität Graz
    2012
Places
Map of the places this user has lived
Currently
Graz
Previously
Wolfsberg
One of the best cocktails in Graz
Food: Excellent · Decor: Very Good · Service: Excellent
Public - reviewed 2 years ago
1 review