"Could cron be fixed? Although almost all current implementations of cron are open source, cron's pathological behavior has been petrified into the Unix standards. So if it isn't broken, it isn't cron. The only solution left is a work-around."
Cronic A cure for Cron's chronic email problem. The Disease: 0 1 * * * backup >/dev/null 2>&1; The Cure: 0 1 * * * cronic backup. Cronic is a shell script to help control the most annoyi...
This is not a problem with cron. This is a problem with the other things you are running.
Cron is not broken; we just failed to adapt cron to changed requirements. Cronic is a workaround for a small aspect of the gap between what cron is and what it should be.

Currently you put a lot into the cron script that cron should do all by itself.
Cron isn't broken, in that it works to design. It's cron's design that is broken. Launchd provides a much better model, and a future release of upstart is likewise intended to replace cron's functions.
+Jay Blanc The design wasn't broken when it was designed (in the late '70s of the last century).
gack! you're all so, just, well, so wrong! cron wasn't designed. and the open source cron that everybody uses was just my take on system v's cron. i didn't read the posix spec for another decade.

cron can definitely be fixed. first up, i've been trying to round up all the odd vendor-specific features that various bsd and linux folks have put in, and get back to a single unified version. next up, i guess, would be putting it up on savannah and trying to round up a "committer team".
savannah? It's github these days, grampa.
I use something similar written in Ruby to run background tasks. My script also times out and kills the task if it takes too long.

Launchd and upstart? You mean systemd, right? I'm sure if systemd doesn't include cron yet, it will soon.
+Paul Vixie There ain't no such thing as "not designed". Even if you code as you go, a certain design process takes place (not always a good one, but it does). And even an informal spec ("it shall work as I remember the other software") is a spec ;-).

Today a cron would have more complex triggers (non-blocking stuff like file-exists or process-running), a more descriptive config, and would come with a library to include in your scripts to handle the standard procedures used in nearly every cron script.
(At least syslog is mostly fixed now. Except for the part in the broken spec that requires a 3-byte UTF-8 BOM [sic] on every line.)
the wonderful thing about cron is that it's universal. any given open source package that needs to install a cron job can do this in "make install" using a "crontab" command. those of you who used UNIX in the 1980's will remember that there used to be just one crontab on the system and that we were constantly adding and deleting lines from it. berkeley's innovation of letting you specify a user name in column 6 was unwelcome even though it got us out of "su uucp -c foo" hell.

but the funniest thing in this thread is the part where people hate that cron mails you the output of your command. this was, at the time SVID put it in, a huge timesaver, and i not only kept it, i extended it (by including the shell environment in the e-mail headers, and by letting you override with MAILTO where the e-mail would be sent.) you can ALWAYS turn off the output once you know it's nonhelpful, but you cannot always add "| sendmail" to the end of your command, since that will send you blank e-mail when your command generates no output.

so they fix this with "cronic". funnier than a crutch, as we used to say to each other while gnawing on dinosaur bones.
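[Editor's note: for readers unfamiliar with cronic, its core trick can be sketched in a few lines of portable shell. This is an illustrative rewrite, not cronic's actual code; the function name `cronic_lite` is made up. Run the job with output captured, stay silent on success, and reprint everything on failure, so cron's "mail any output" behaviour only fires when something broke.]

```shell
# cronic_lite: illustrative sketch of a cronic-style wrapper (not the
# real cronic). Runs a command with stdout/stderr captured; on success
# it prints nothing, so cron sends no mail; on failure it reprints the
# captured output along with the exit status, and passes that status on.
cronic_lite() {
    _out=$(mktemp) || return 1
    "$@" >"$_out" 2>&1
    _status=$?
    if [ "$_status" -ne 0 ]; then
        echo "command failed with exit status $_status: $*"
        cat "$_out"
    fi
    rm -f "$_out"
    return "$_status"
}
```

In a real crontab this would live as a standalone script on $PATH, invoked as `0 1 * * * cronic-lite backup`, since crontab command fields cannot call shell functions directly.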
OK, i finally took a look at cronic. So, because people have been writing scripts that are way too chatty and cargo-culting >/dev/null 2>&1, this is cron's fault?
Well, it's that whole feature creep thing. Once cron does more than just start programs, and starts sending e-mail, people want to shove all the filtering functionality in there too so they don't have to learn to use Awk or Ruby or Perl or any number of other tools that will let you put together a custom output filtering and e-mailing solution. And before you know it, cron has a choice of scripting languages and extensible XML-based configuration.
"jcron is a cron replacement that runs inside a JVM and is compatible with J2EE standards. It keeps track of its jobs inside a MongoDB instance and can send email updates via SOAP to any Exchange server."
Because running a perl instance every ten minutes to handle something that really just needs to check a return value is the Manly Unix Way. Heck, making your system entirely dependent on a script processor to do task scheduling is the way Unix is meant to work! If you aren't wasting your time on dependencies that you're not sure you really need, it's not a Real Man's Unix!

That's why we have complicated, distribution-specific Sys V shell script trees and a 'universal' system scheduler whose design was made up as it went along.
+Paul Vixie 'make install' is a horrible place to put system configuration changes. It should only be used to install the files, not configure the system. Doing so will result in bad bad things for people trying to create packages, install to sandboxes and so on...
+Jay Blanc if we had a standard for periodic tasks, similar to what freebsd calls /etc/periodic.d/, then installing the files is all we'd need to do. lacking a better portable way to have 'make install' do what you're describing, it also installs crontabs. i'm just glad it's not editing crontabs like in the old days. and as long as there's still an enable/disable for actually starting the service (or doing anything in its cron job) this fails to qualify as "thing most in need of fixing in UNIX".

note to everybody: MAILTO= is your friend. if you have jobs that generate output even when they're succeeding, put MAILTO= at the top of your crontab. all will be well.
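[Editor's note: a hypothetical crontab fragment illustrating both uses of MAILTO as described above. The address and job names are invented; MAILTO with a non-empty value reroutes any output, while an empty MAILTO silences mail for the lines below it.]

```crontab
# route any job output to a chosen address instead of the crontab owner
MAILTO=ops@example.com
0 1 * * * backup

# from here down, discard output entirely (empty MAILTO sends no mail)
MAILTO=""
*/10 * * * * chatty-status-job
```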
I disagree about the need to fix cron. It should be taken out to the woodshed and shot.

Crontab files are an awful legacy from the old days, and most distributions of linux have added on hacks that create cron.daily/cron.hourly configuration directories. Problem being that there's no agreed standard on how to do that, and it doesn't fix cron's other problems.

Error reporting by email is nice, but there's no flexibility in configuring that error reporting; you get all standard output sent to the user's email. Want it to go to some other kind of logging system? Tough titty, errors go to email.

Cron might be "universal", being on nearly every unix system. (Very very very likely to be absent on embedded systems.) But that doesn't mean it can't be supplanted by some different common and widespread scheduling service. And where one is present, people do move to it. No one who creates OS X software puts things in crontabs.

Cron is limited to doing its one thing, and has features that were specifically designed around a multi-user shared system being used by computer science students. It can't handle things like attempting to restart a program if it exits, launching programs when certain files/directories are edited, or delaying the start of a program when certain conditions are true (i.e., running on battery)...

While it was great, it is now showing its age, and is being used in radically changed environments it was never designed for.
Incidentally, launchd has been released under the Apache license and there was a FreeBSD port. I would really like to see this ported to Linux, if upstart never meets its goals of adding task scheduling.
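[Editor's note: the "restart a program if it exits" gap mentioned above is usually papered over with exactly the kind of shell wrapper this thread complains about. A hedged sketch of one, with an invented name; cron itself has no notion of retrying a failed job.]

```shell
# retry_job TRIES CMD...: re-run a command up to TRIES attempts, pausing
# a second between tries; returns 0 on the first success, 1 if every
# attempt failed. Cron has no built-in equivalent, hence wrappers like this.
retry_job() {
    _tries=$1; shift
    _n=0
    while [ "$_n" -lt "$_tries" ]; do
        "$@" && return 0
        _n=$((_n + 1))
        sleep 1
    done
    return 1
}
```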
+Jay Blanc If your program actually returns a sensible return value that tells you everything you need to know, then obviously you don't need a scripting language based filter. But we were talking about all the programs that don't report their behavior neatly via return codes and terse output, and that can't necessarily be fixed to do so. Having something like systemd or upstart launch them wouldn't make life any easier.

And then there's the issue of all things people want to handle, like starting programs when files are touched, examining battery status, and so on.

If you build all of that into your task launcher, you end up with a monstrosity not far from jcron as described by rone. That's supposed to be better than the unbearable overhead of running a Perl or Awk binary on occasion?
+mathew murphy And yet launchd exists. You are arguing against the utility and practicality of something that has existed on OS X since 10.4, and is widely used and popular enough to have been ported to FreeBSD now too.

All the crontabs on my OS X desktop are empty, even root's. (Especially root's!) It is there merely for backwards compatibility.
+Jay Blanc As I already hinted, the Linux world has pretty much settled on systemd as the direction for init replacement. The only holdouts are Ubuntu (still sticking with upstart because it's theirs) and Debian (not sure what to do because they want to keep the option of BSD-based Debian).
This is just like Usenet, including people flaming the authors of important utilities for not having 20-20 hindsight. It gets me right here, I tell you.
+Jay Blanc I'm not arguing that a bloated monolithic task launching subsystem with kitchen sink feature list isn't useful or practical. I'm arguing that it isn't the Unix way.

And given Apple's hostility to GNU software, I very much doubt that launchd will ever be adopted by any Linux distribution.
"bloated and monolithic" is also funny. my version of cron was 100K of compiled code, when statically linked on a vax running 4.3bsd. the cron i was replacing was 12K of compiled code, also statically linked. man oh man were people pissed off at me for wasting all that memory.

oh well, at least we got <bitstring.h> out of the deal. though nobody knows it's there, so i may still be its only user.
+Peter da Silva +Paul Vixie It's not a question of hindsight... It's a question of current sight. Cron isn't a useful system scheduler for today's systems. It was great as a task scheduler for multi-user servers being accessed by CS grad students, scientists and DBAs, and when the most complicated thing being scheduled was makewhatis... But it's actively bad for the desktop, embedded and unattended small systems that Linux is moving into. Times do actually change.

+mathew murphy Your GNU exclusive 'Unix Way' old-skool-rulz kind of sucks. I'm glad that there are other 'Unix ways', like Android and OS X and BSD...
+mathew murphy I suggest you read it yourself. "Gabriel argues that this design style has key evolutionary advantages, though he questions the quality of some results."

And you are in the habit of conflating anything open source, and anything unix, with GNU and the FSF. And the NU part of GNU never actually happened; Linux, while released under the GPL, was and is not part of the GNU Project. GNU's HURD has gone over two decades without a stable production release.
+Paul Vixie Apparently systemd is about 50,000 lines of code, and 5MB. But it doesn't have a scripting subsystem yet.

My launchd is a comparatively svelte 2.5MB.
+Jay Blanc I didn't say the Unix way wasn't questionable. Nor did I say that the only open source was GNU or FSF, or that the Linux kernel was a GNU project. Jeez you're fond of strawmen.
Where "Make everything simple and small and interconnected" fails is that you do not reduce the complexity of large systems by doing this, but actually increase it, by increasing the number of interfaces.

i.e., Cron and Init are small. But to get working startup and scheduling systems using them, you end up needing to add scripting. And that means using a scripting system. Which is dependent on lots of other 'small things done well'... So you instantly multiply the complexity of the system.
+mathew murphy You brought up "The Unix Way". I knocked it down. A strawman would be if I brought it up and knocked it down.

I hate it when people use the term 'Strawman' and have no idea what it actually means, or where the term comes from. ("Building a straw man to knock him down again.")
+Jay Blanc: "Cron isn't a useful system scheduler for today's systems." This is prima facie false. Would you like to rephrase it?
+Jay Blanc You knocked down the Unix way by pointing out that there might be other ways to do it that are better for some purposes? Well, I guess we'd better give up on Unix at once. Thanks for putting us right.
Am I ever glad I spent a good part of yesterday reading some really awesome essays by Hoare and Dijkstra and so today I can just laugh at all of this! :-)
+Ron Echeverri Not particularly. It's the only system scheduler on many Unix systems, and one that, to satisfy the requirements of even a modern headless server, needs the addition of external scripting frameworks to make use of it (and all the dependencies of those frameworks). The usefulness comes out of those frameworks, and those frameworks actually do the bulk of the legwork now, with cron merely reduced to a regular hourly/daily/weekly/monthly impulse to kick off the scripts. Linux systems right now only use cron because it's what's always been there, not because it's been useful.

It's like using a nail to hang up your coat. But having to wrap the nail in duct tape to avoid tearing your coat. Or using a fork welded into the starter key of your car because the key broke off in it. Or a slotted spoon wrapped in tinfoil and used to drink soup.

That's not useful, that's a kludge to try and get some use out of it.
+mathew murphy I was saying that the existence of "the unix way" is not a reason to accept that the unix way is the correct way for all problems. While Cron may be a paradigm of the Unix Philosophy, that does not equate to a reason to keep it. Developing to the Unix Philosophy has its place; cp doesn't need to be as complex as rsync. But cp isn't as comprehensively useful as rsync.

Cron is a scheduler. But it isn't useful as a system scheduler anymore, because a modern system needs things that cron doesn't provide. This is not a role where the Unix Philosophy is a good design assumption.
+Jay Blanc: i don't know what 'modern headless servers' you've been hanging around with, but i assure you that in the lustrums i've spent working as a sysadmin, cron has done and continues to do its work quite well without needing to rely on the superfluous frameworks that were imposed on it by clueless, if well-meaning, Linux dorks. A poor craftsman, as ever, blames his tools.
+Ron Echeverri Really? Your servers have the binary executables all run direct from crontab with no shell scripts or other kinds of scripts for them?
Oh, i see, you were abusing the term "framework". So what you're saying is, "cron is a poor tool because it doesn't do things the way i think they should be done." You may carry on.
+Ron Echeverri If by that you mean "is limited in what it can provide without the support of external dependencies on scripting systems that may or may not be present on any specific distribution or system, and even then cannot provide context-sensitive deferral without ugly kludges", then yes.

Paradoxically, despite being praised for its small focus, Cron isn't used on embedded systems, because it creates a dependency on scripting and a mail system to log errors. Embedded systems often end up implementing their own scheduling daemons tailored to the specifics of the system.

Why does Cron have an implicit dependency on a mail system to deliver its error messages? Because it was organically designed for shared-user systems being used for projects by engineers, grad students and scientists; and there was always a mail system on those.

Cron is a great, tightly written, well-performing scheduler for a shared-user system that is called on by users who want to run their own project files at odd times. But that's not what it's being used for now. An interesting question for +Paul Vixie is why it made sense to continue to assume that you would always be able to log error messages to a mail system, but not to a file or pipe?

It's a little weird that it's two decades since the start of pushing *nix out of its comfort zone of the shared-user server to other applications, but we still can't question cron never being changed to adapt. No one regrets the sunset of at and batch, so why the defiant defence of cron?
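[Editor's note: for context on the file/pipe question, per-job redirection has always been expressible at the crontab level, because the command field is handed to the shell; whether that counts as cron itself "supporting" other logging is what's being argued here. A hypothetical fragment, with invented paths and tags:]

```crontab
# append a job's output to a log file instead of mailing it
0 1 * * * backup >>/var/log/backup.log 2>&1

# or hand it to syslog via logger(1)
0 1 * * * backup 2>&1 | logger -t backup
```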
cron doesn't depend on scripting systems; you could very well write a C program that does what a script does, compile it, run that out of cron, and thus remove the "scripting system dependency". I do agree that the mail dependency can be a problem for embedded systems, but i highly doubt that it's the only case where a well-known tool needs to be reworked for the limitations encountered in embedded systems, so i find it curious that you're bagging on cron for this as if it were somehow special in this regard.

I don't think anyone here doesn't want to question cron's adaptation needs. cron does the job we need of it, so the question is, why are you so defiantly against it? If launchd is superior, people will begin migrating to it. That's the way it has worked in our Unixy world. So why are you upset that people aren't migrating?
+Ron Echeverri Right, so now it's dependent on lots of little custom made programs, and the compiler and tool chain to create those programs. Because that's much simpler than the dependency on scripting frameworks?
Indeed cron works great on embedded systems, if there's a purpose for it on them with its granularity of scheduling.

There is actually no "mail" dependency, even without using Vixie's MAILTO= hack, and even if one doesn't have source for one's cron. One can simply replace the program it calls to send mail with another program of one's own choosing since obviously such a mail sending program isn't going to be used by anything else on such an embedded system. Talk about a straw-man! :-)

BTW, for small systems running cron with no network connectivity and no desire to run all of a full sendmail compatible monster I have a small fix for mail.local that adds a "-t" option (with proper full RFC compatible address field parsing via my librfc2822) (and with other command-line no-ops for sendmail compatibility) so that it can work the same as V7 mail and just deliver to mbox files in a local spool directory. And with NetBSD's mailwrapper(8) it's trivial to configure what's used as "sendmail". :-)
I'd say rsync is another case where complexity has gone too far. Fortunately it has the '-a' flag which is what I want 99% of the time, and I can ignore all the other options. I'm sure there's someone out there who cares about whether deletions happen before, during or after the transfer, but I can't help wondering if they shouldn't be using a custom program...
+mathew murphy But imagine if you didn't have rsync at all. You only have cp. And everyone is telling you that cp will do everything you need, you just need to add on perl scripts to do the extra parts that cp doesn't do...

Now do you see my problem with cron?
Your problem with cron is your problem with Unix, it seems to me. Anyway, your objections are getting ridiculously sophistic, so i'll just leave this be, before i need to ignore every friend of Peter's named Jay.
I told them, I said "This golden hammer is useless for putting in screws". They told me I should get a toolbox. I don't want a toolbox full of things I have to put together to do the things the golden hammer should do. I mean, imagine if you were trying to brush your teeth with the golden hammer, and everyone was telling you you just needed a brush attachment. Apple makes much better golden hammers.
+mathew murphy Yes yes, I get it, you're a chest beating manly sysadmin who uses the real GPL Unix. Anyone who dares suggest other systems, licences and software exist is an Apple dupe or worse...
so consensus here seems to be that i ought to have coded cron in a new variant of Scheme, so that it could always be all things to all people for all time? granted, there was a lot of that kind of thinking going on when i was working on this.

the argument about "the unix way" only hits half of this target. the quality of unix has always been low compared to proprietary or house-built systems (i'm thinking of vax/vms here, but you could substitute modern windows or modern mac/os if you weren't alive then.) what made it useful was not its greatness, but its sameness.

if every system you think your code will have to run on is made by a single vendor like apple, or is part of a single small ecosystem like linux, then by all means code to that. but don't call it unix, or the unix way. and don't complain when those of us who don't want to be boxed up like that ignore your work.
+Paul Vixie No, what I'm saying is that the problem that Cron fixed is not the same problem that Cron is being wedged into fixing on desktops now. A system for scheduling batch jobs and overnight runs of engineers', scientists' and CS grad students' projects is not the same as a system task scheduler. What cron does, and does well, is send an impulse saying 'run this command at this time', but a system task scheduler needs more. Cron is a screwdriver. What we need is a screwdriver and a pair of scissors. What would be elegant is a multitool.

Problems with cron were compounded by the idea of the only logging being to a mail account, which only ever made sense for its specific use for batch jobs on a shared-user system. The assumptions about what to log and how are now wrong for how cron is mainly used, but cron was never updated to do the simple thing of adding a per-task argument of a file/pipe to redirect output to. This is actually less complicated than handling email, but never happened.

Again, Paul, a simple question. In the twenty years since unix stopped being used solely in the realm of the shared-user server, why was cron never redesigned to take into account different logging needs?
In my experience cron works quite successfully with that scripting framework called "shell" as it has done for a while.

The one compelling idea from cronic is, in my eyes, to make mailing the output of a task depend on the task's exit status. That, as optional behaviour, would be helpful in quite a few cases, and I would really appreciate it if you, +Paul Vixie, would consider including that feature when you do the cron revision that you are talking about above. (I suspect that a mechanism of option flags per crontab line might be necessary to integrate this in a useful way.)

With that wish in mind I think, cron does its one thing quite well. The shell is the glue to put it together with the rest of the system, be it by means of run-parts and cron.d and things, as it has been The Unix Way for a while.
+Jay Blanc if somebody has a good idea for how to make cron's logging more relevant to the modern desktop, and sends me a patch to that effect, it'll go in. the sky's the limit. if you look at the cron tarball there should be a file called THANKS which details what idea and what code came from whom. please consider ways to add your idea and your name to that file. this is "the open source way".
The only problem I've ever had with cron is that it's vulnerable to DST changes and if you don't schedule things carefully, then once a year your nightly job won't run at all, and once a year it'll run twice. (I'm not convinced this is a bug in cron so much as a bug in DST, though) Even vixie-cron does this, and I've always been tempted to set TZ=UTC for crond to avoid it.

For me, the thing is that cron is simple, it doesn't take long to figure out how it works, and most importantly, I'm already familiar with it. Systemd and launchd are great, I do like them, but the learning curve is significantly higher and if all I want to do is run ntpdate once an hour, seriously, do I need all that trouble? I still love cron for what it is.
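[Editor's note: the TZ=UTC idea above works because a process converts wall-clock times using the TZ environment variable it inherits; a crond started under UTC would interpret crontab times in a zone where DST never shifts, at the cost of writing the schedule in UTC. A quick sanity check of the mechanism, assuming a system with standard zoneinfo installed:]

```shell
# TZ is honoured per-process: the same clock moment renders in whatever
# zone the environment requests. On a system with standard zoneinfo,
# this prints "UTC" regardless of the machine's local time zone.
TZ=UTC date +%Z
```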
My problem with systemd and launchd is that they seem to be putting a lot of engineering into solving a problem I don't have: speeding up boot times. I last rebooted 2 weeks ago. If it takes 30 seconds to boot instead of 10, it really doesn't matter to me.
Launchd isn't just about boot times. As I understand it, it generalizes the whole startup/shutdown system to handle sleep and hibernate and scheduling as well.
+Jay Blanc Sure, there are embedded systems that care about start time. But for laptops, desktops and servers, who cares?
Pro Tip: Keeping your desktop powered on 24/7 is neither something normal people want to do, nor something geeks should try to do either. S4 exists for a reason, as does wake-on-LAN. If your desktop computer is always on, you are throwing money down the drain, using more electricity than you need to, and generating more greenhouse gases.
+mathew murphy I think you're confusing Sleep (S1 through S3) with Hibernate (S4 through S5, and possible to resume from G3).

The blog post you point to is a collection of misleading conclusions and little technical information, and a basic misunderstanding of sleep states and of which components use the most power. The motherboard, CPU and memory, which are still on, actually use more power than the disk. And all the components on a motherboard certainly do degrade while in constant use, and the MTBF drops radically if you keep something on all the time! (The claim that power-cycling lowers MTBF is a myth relating back to older types of power supply.)

It's also basically saying "Hey, 4W produces less CO2 than I emit myself. So why bother turning off anything that's producing less CO2 than I emit myself?" Only when you add up all those less-than-4W devices that people don't turn off, and multiply by the number of people who leave them on 24/7... that turns into a much larger number.
On a Mac, there's only one kind of sleep visible to the user. How that's implemented in terms of CPU or motherboard features isn't relevant to the point I was making.
It certainly is. I suggest you stop arguing with me for a moment, and look up what a resume from G3 involves for a macbook that went into that state when the battery went too low.
Sorry, are you suggesting that resuming from sleep is sometimes a bigger drain on power than a cold boot? If so, please cite sources.
I'm flat confused by your question, I don't understand what you think I am talking about. Please look up the meanings of the terms I am using. Specifically look up what the ACPI states S1...S5 and G3 mean.
I posted about the fact that putting the computer to sleep results in ignorable amounts of wasted electricity. You responded with the scenario of a depleted battery and the machine needing to wake from suspend. Well, unless a cold boot is cheaper than that worst-case scenario, your suggestion of always powering off is going to be worse for energy usage. So come on, how much power does this worst case scenario require? Demonstrate that it makes a difference to the economics of leaving computers on and asleep versus powering them off.
No, you posted a link to a blog post that asserted that the electricity wasted is ignorable. I do not think that is a very good standard of 'fact'. It is barely above 'this bloke down the pub said'. And as I mentioned above, it is easy to rebut what he says about it being ignorable.
'He' is me. It's my web site. I took the power consumption of a Mac at sleep from Apple's documentation, and did the math. If you think there's a flaw in the calculation, please explain what it is.

If you think there's some situation where leaving a machine in sleep results in significantly more energy use than the Apple-documented consumption, and more than turning it off and then cold booting next time you want to use it, then please describe that situation and show how it makes a difference to the annual CO2 and cost calculations.
+mathew murphy Uh... Well, a fully turned off computer uses no electricity. So, even if you assume it has an old kind of power supply that ran at full whack for a microsecond when powered on... Yeah, I don't think that a computer that is on for the 8 hours I'm asleep is using less power than the computer I switch off and back on 8 hours later.

And yes, a modern ACPI computer uses less electricity in S4/5 (Hibernate) than it does in S2/3 (Sleep).
So the answer to "Can you describe a scenario where a sleeping computer uses so much more electricity than its rated sleeping power consumption that it actually becomes worth it to power off instead of sleep?"... is apparently "No".
Okay, since you apparently refuse to look up the terms I'm using... These are the ACPI power states.

G0 System fully on and working.
- Ready Mode, Graphics card/monitor powered down.
G1 System partially powered down.
- S1 CPU Stopped, CPU and Memory powered on, unmasked peripherals powered down. ("Sleep")
- S2 CPU powered down, cache flushed to RAM.
- S3 All remaining peripherals powered down except those with wake functions. RAM power maintained, disk controller power maintained. ("Standby/Suspend")
- S4 RAM written to disk. Ram powered down, disk powered down once all writes committed. ("Hibernation")
G2 'Soft' System off.
- S5 ACPI system maintains power only to the on-switch, and the specified Wake devices. All other systems switched off. A wake will start a boot process, but POST checks may be reduced. If entered from a previous S4, the OS must handle the restore from hibernation. (Full "Hibernation")
G3 Hardware Off
- Disconnected from mains by hardware switch. A wake will start a boot process. If entered from a previous S4, the OS must handle the restore from hibernation.

I know you didn't know this, because you seem to think that sleep meant the hard disks were switched off, when that doesn't happen till S4! (Maybe you got confused by the common reference to disk spin-down as switching off the disk.)

Sleeping systems use a lot more power than you seem to think. Otherwise, there wouldn't be a point in everything from S4 onwards, would there?
So here are Apple's figures for my MacBook Pro:
1.09W in sleep mode. That's 4x as efficient as my 2004 calculation for the G4.
Here's my electricity tariff:
Averaging summer and winter maximum rates gives me 6.92 cents per kWh, which is nowhere near 4x the figure from 2004. So overall, it's even more true today than it was then, that the amount of energy I waste by sleeping my Mac rather than turning it off is negligible.

Now, do you have some actual real-world numbers that suggest otherwise? I'm not interested in reading descriptions of ACPI levels or vague hand-waving about "a lot more power", I want some actual data that supports your conclusion.
And I already pointed out that you made an incorrect assumption that the power difference between sleep and powered off is negligible. If you multiply it out by device ownership, it still accounts for a great deal! Ask someone who runs an office if they want to save 10 × 4W × 8hrs off their daily electricity bill.
Personally, all I care about on Macbook power levels is why the hell Apple doesn't give me the option of choosing hibernate or sleep. Apple's "smart" sleep is anything but. I don't care about the cost of electricity but I do like having a full charge when I haven't been using it for a while.
"you made an incorrect assumption that the power difference between sleep and powered off is negligible" — it is negligible in economic and environmental terms, and I gave the figures to prove it. $2 a year is negligible. 20x less CO2 than you create by breathing is negligible. The difference in power usage might be significant in engineering terms, but that wasn't your claim — you talked about "throwing money down the drain" and "generating more greenhouse gasses", so those were the claims I addressed.

"If you multiply it out by device ownership, it still accounts for a great deal" — any tiny amount of money multiplied by a billion gives you a big number, so what?

"Ask someone who runs an office" — $2 a year multiplied by 1,000 employees might be worth putting up some flyers asking people to turn stuff off. But if your business is so close to the edge that $2 per employee per year is an expense worth worrying significantly about, it's probably time to start Chapter 11. And the 4W of power used by a sleeping computer will still be negligible compared to the typical situation of multiple 100W lights being left on, 30W LCD monitors being left on, doors being held open leading to increased heat and AC costs, laser printers being used to print unnecessary pages, and so on.

I understand that people commonly believe that they save significant money by turning computers off. I'm not surprised you believed it too. But I actually did the math when I started reading the scare stories about "electrical vampires", and it's just not the big deal you were led to believe. I even have one of those Kill-A-Watt units to measure power consumption of devices in my house. (Worried about your cell phone charger being left plugged in to the wall with no phone attached? I checked mine, and with no device being charged the wall wart PSU uses so little power it's unmeasurable by the Kill-A-Watt.)
OK, now you two nerds are arguing past each other and this has ceased to be at all interesting.
+mathew murphy You confuse individual household economics with aggregate impact. There are almost 2 billion desktop PCs in use around the world. Let's be generous and say that all of them have a little above the 4W sleep use of your Apple notebook. If they all weren't using hibernate, and instead were running in sleep 24/7, that's around $8 billion worth of electricity wasted because of wanting to avoid a slow resume.

These small things that are 'negligible' to a single household all add up in the aggregate. And this of course ignores things like Xboxes, PS3s, TiVos, and embedded media players in TVs, which are all really computers that add another tier of multiplication onto how much energy gets wasted on 'sleeping' instead of 'hibernating'.
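The $8 billion figure above roughly checks out under simple assumptions: 2 billion machines at ~4W, asleep year-round, priced at an assumed average of about 11.5¢/kWh (the price is an illustrative guess, not a number from this thread):

```python
# Aggregate electricity from 2 billion desktops sleeping at ~4W, 24/7.
# The 11.5 c/kWh average retail price is an assumption for illustration.
machines = 2_000_000_000
watts_each = 4.0
price_per_kwh = 0.115  # assumed average price, $/kWh

hours_per_year = 24 * 365
total_kwh = machines * watts_each * hours_per_year / 1000  # ~70 TWh
total_cost = total_kwh * price_per_kwh                     # ~$8 billion

print(f"{total_kwh / 1e9:.1f} TWh/year, ${total_cost / 1e9:.1f} billion/year")
```

Whether that total is "negligible" relative to overall consumption is, of course, exactly what the two sides here disagree about.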
+Peter da Silva apple knows what you want. trust them. even if you wake up in a bathtub full of ice, apple knows what's best for you.
+Jay Blanc Like I said, multiply any small number by a huge number and you get something that looks really big. That doesn't make it anything other than negligible though. That $8 billion is still going to be negligible when put into the context of the trillions of dollars of electricity consumed by other devices. You're playing the same game as the people who rage about NPR funding, making numbers look big by quoting them for massive populations, rather than looking at per capita figures. It's intellectually dishonest.
+mathew murphy You're conflating consumption and waste. All the various small wastes of electricity and fuel across all kinds of devices and equipment add up to a big amount of waste. But by your argument we shouldn't bother trying to cut down on power use, because any single item we reduce is outweighed by all the other items we use. You're totally ignoring that what we are meant to be doing is reducing waste across the board. Sophistic arguments that one particular amount of waste isn't 'worth' reducing compared to the total amount lead to never reducing any waste. It's the "My one cigarette stub isn't contributing that much to the mess, why bother wasting my time on disposing of it properly" argument.

And you're ignoring that, as I mentioned, desktop computers are only one example of a computing device that benefits from hibernation. We're moving towards each household having at least four or five computer devices in their TV, media STB, console, desktop... The Xbox is particularly egregious about power consumption because its default behaviour is to keep running fully powered on at its 'dashboard' when not being used. Much like a poorly configured 'Away Mode' that simply turns the screen blank while running everything else at full power. And mobile devices inherently benefit from a hibernation state to save on battery use. Your argument against the utility of hibernation is very weak.
No, I already dealt with consumption vs waste when I mentioned a bunch of things that waste a hell of a lot more energy than sleeping computers.

And similarly, if you're worrying about the pollution from your one cigarette stub while you're tossing your old mercury batteries in the trash, you're failing to keep a proper sense of perspective.

The fact that the Xbox defaults to "stay on" versus "sleep" is totally a bad thing. However, it's completely irrelevant to a discussion of "turn off" versus "sleep".
+Paul Vixie The only reason that cron is "everywhere" (for "everywhere" read "the places that call themselves Unix") is because the crontab manpage, and the pathological behaviours of cron in processing generated crontab files, were included in the Single Unix Specification. Were the SUS to be updated to include launchd.plist's manpage, it too would end up "everywhere".

Interestingly, I note that if you removed cron and rewrote crontab to create launchd.plists that mimic cron's specific behaviours, you would still pass a SUS check on crontab.
Circular reasoning. The reason that cron is in the SUS is because it's everywhere.
then by all means let's update the sus spec so that unix-like systems can have whatever improvements you think are needed. just note that 'vixie cron' is one author's take on the old SVID cron, with a few enhancements, but that it tracks the POSIX spec for its core features and compatibility.
Well, obviously XML plist files would make cron more enterprisey.
If there's one thing worse than XML, it's ordered XML.