I used to say this, and I even started converting machines over. Nowadays I'm not so sure.
As Brad says, the sysadmin work is not just costly but error-prone. RAID introduces a new risk factor: you can lose data by botching a command somewhere.
Moreover, some of the best ways to reduce the sysadmin costs involve making the system no longer explicitly RAIDed. In particular, you get most of the advantage by having the RAID inside a single drive package, rather than five separate drive packages that are each independently plugged in. To the extent that higher-reliability storage matters, the place to start is buying higher-quality drives with various kinds of internal redundancy.
An additional complication is that you only benefit from the error recovery if you have procedures in place to notice a failure and do something about it. Both the noticing and the responding are things many people won't bother with in many contexts.
The main issue, though, is that in most cases, drive failure isn't even the biggest source of risk for data loss. It's much more likely that you will issue a stray delete command than that your SSD will fail at the wrong time! To defend against this more likely case, you end up wanting some sort of distributed backup system, such as Gmail for your email, or GitHub for your source code. Once you do that, though, you already have such a good data protection system that you no longer gain much benefit by RAIDing your local storage.
On the flip side, RAID remains excellent for performance: a striped array (RAID 0) spreads reads across drives. If you want your Witcher 3 levels to load faster, RAID could still help you.