Being a lazy sysadmin is hard work
We use our Splunk server to (among many other things) grab log files from our cash registers and then run all sorts of crazy, automated searches across those files, so we can keep an eye on any problems they might be having without, you know, manually connecting to thousands of computers and reading their log files.
To do this, we install the small "Splunk Log Forwarder" application on each cash register (POS) and use it to feed all the log files into the server. That means we need to make sure that every single one of our POSes is actually running the log forwarder and reporting in as expected.
I've previously made a small script that installs the log forwarder (or fixes it, if it's already installed) on the POSes, but I needed a smarter way to see whether the script needed to be run - and where. Here's what I did:
First, I set up a saved search on the Splunk server that returns a list of POSes that are actively reporting. Then I wrote a PowerShell script that uses Splunk's REST API to execute that search, extracts the hostnames from the results, queries our AD for active POSes and compares the two lists. Any POS on the AD list that isn't on the list from the Splunk server needs to have the log forwarder tool installed.

I've previously created a PowerShell script for scheduling tasks on lots of computers, so all I had to do was feed the list of POSes missing the log forwarder to my bulk scheduling script, and that's that. Heck, I think I'll set it up as a scheduled job to run automatically each night to make sure we never miss any log files.
PowerShell script: http://pastebin.com/Z8TANEFZ
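For the curious, the core of the comparison step could be sketched roughly like this. This is not the linked script, just a minimal illustration: the saved search name "ActivePOSList", the server URL, and the "POS*" naming convention are all placeholders you'd swap for your own.

```powershell
# Placeholder names throughout - adjust for your environment.
$splunk = "https://splunk.example.com:8089"
$cred   = Get-Credential   # Splunk account allowed to run the saved search

# Dispatch the saved search via Splunk's REST API; the response holds the job's SID
$dispatch = Invoke-RestMethod -Method Post -Credential $cred `
    -Uri "$splunk/services/saved/searches/ActivePOSList/dispatch"
$sid = $dispatch.response.sid

Start-Sleep -Seconds 30   # crude wait; a real script would poll the job's status instead

# Fetch the finished job's results as CSV and pull out the reporting hostnames
$csv = Invoke-RestMethod -Credential $cred `
    -Uri "$splunk/services/search/jobs/$sid/results?output_mode=csv&count=0"
$reporting = ($csv | ConvertFrom-Csv).host

# All enabled POS machines according to AD (assuming a POS* naming convention)
$allPos = (Get-ADComputer -Filter 'Name -like "POS*"' |
    Where-Object Enabled).Name

# Anything in AD but not reporting to Splunk needs the forwarder (re)installed
$missing = Compare-Object $allPos $reporting |
    Where-Object SideIndicator -eq '<=' |
    Select-Object -ExpandProperty InputObject
```

The `$missing` list is then what gets handed to the bulk scheduling script.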
Oh, and if you're trying to keep track of more than 3 log files, you need this: www.splunk.com