Sometimes you’ll read stories about how AIX/HP-UX admins are so much pricier than Linux people because they have “knowledge about their legacy OS” that no one else needs, and keep it secret so they can’t be compared.
It’s always been my feeling that the truth is a little different. From my experience as a Unix admin, you take it for granted that you’ll also need to know Linux quite well; the big difference is something else:
You’ll never be forgiven if you do something clueless.
Run something you don’t fully understand (say, just testing some software), build scripts that don’t fail gracefully, write a shutdown script that can hang, or do anything at all that risks the holy grail of data consistency. The few times I messed something up, every single one resulted in endless post-mortem meetings and a lot of “too much pressure”…
Or, something more obvious: the most common example of different thinking must surely be rm on Linux versus Unix. On no Unix root account will you get a warning or question whatsoever before deleting something, whereas it may be aliased to rm -i for the end users.
The reasoning is quite simple: anybody might delete the wrong stuff by mistake some time, but then rm -i wouldn’t help, and no Unix admin would be so clueless as not to know every file that lies along the path of his rm -r.
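The convention itself is a one-liner. Here is a sketch of how a site profile might set it up; the UID check and the placement in a profile file are illustrative, not taken from any particular distribution:

```shell
# Give ordinary users the confirmation prompt, but leave root's rm alone.
# Typically this would live in /etc/profile or ~/.bashrc (illustrative).
if [ "$(id -u)" -ne 0 ]; then
    alias rm='rm -i'    # ask before every deletion
fi
```

Note that this only guards interactive typing; scripts don’t expand aliases, which is exactly the point: the safety net is for fat fingers, not for code.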
My colleagues often get confused now because I don’t have a path in my prompt; instead I fire pwd and ls in every directory I enter. There’s a reason for it: I don’t need to read the output line by line in every directory I move through –
but I need the “old” ls output ready in case something went wrong, or in case something is out of the ordinary there (i.e. stale mounts, …).
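The habit is easy to automate if you want it; a minimal sketch of a shell function that does the same thing (wrapping cd this way is my illustration, not something I actually described above):

```shell
# Wrap cd so every directory change leaves pwd and a listing in the
# scrollback, ready to glance back at when something looks off.
cd() {
    command cd "$@" && pwd && ls
}
```

Using command cd inside the function bypasses the function itself, so there is no recursion, and the function keeps cd’s exit status on failure.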
Yes, you might get by with saying “oops, I didn’t notice that”. People can lower their expectations, but I’d say they won’t consider you good any longer.
You just need to know where you are and whether that part of the system is in its nominal state. It makes things safer and it also makes fixing a lot faster; there’s no magic involved other than looking at all the non-broken parts until you’ve found the broken one 🙂
In just the same way, as a Unix admin you’ll be totally used to working with a staged QA process for applications, which is now considered a DevOps idea (haha).
Now, I remembered this article on El Reg …
Just two quotes that make my point:
The average system admin in a Unix shop has 12.7 years of experience (and the average is 11 years for AS/400-i shops), which compares favorably with the 7 years of experience for Windows admins, four years for Linux admins, and three years for Mac OS server admins.
12.7 versus 4 years, and only the Mac OS X sysadmins have less practice than Linux admins?
Considering that most Unix guys will test and handle Linux boxes too, you can quite safely assume that the average Unix admin has twice the Linux experience of the average Linux admin.
So in practice this means: “Linux admins” are cheaper because companies hire less experienced Linux admins.
“The experience level for Unix and AS/400 administrators is equivalent to having a master craftsman build something for you or a Grade-A mechanic fixing your car,” says DiDio.
I think the craftsman example is not entirely off base: in December I helped a friend solve his VM backup issues and spent something like 15 hours on a simple shell script. That’s how long it takes to build a script that can fail gracefully, can safely be re-run, and still has no useless code.
Getting around complexity is the most complex part: you can’t afford assumptions like normal coders can, so you have to look at every possible failure mode of your code and still come up with something SMALL.
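To make “fails gracefully, safe to re-run” concrete, here is a minimal sketch of the kind of scaffolding I mean; the lockfile path and the cleanup details are illustrative assumptions, not the actual script from that story:

```shell
#!/bin/sh
# Defensive skeleton: abort on any error or unset variable, refuse to
# run twice concurrently, and always release the lock on the way out.
set -eu

LOCK=/tmp/backup.lock.d        # illustrative path
cleanup() { rmdir "$LOCK" 2>/dev/null || true; }

# mkdir is atomic, so a second instance fails cleanly instead of
# stepping on the first one's state.
if ! mkdir "$LOCK" 2>/dev/null; then
    echo "another run is in progress, exiting" >&2
    exit 1
fi
trap cleanup EXIT INT TERM

echo "actual backup work goes here"
```

Because the EXIT trap releases the lock even when set -e aborts the script mid-way, a later re-run starts from a clean state instead of finding a stale lock.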
For example, today we rewrote the xendomains init script for some half-embedded application. It could still use some work, but it went down from 500+ lines to 150 and works better now than the original.
The core bit now just reads:
while VMS="$(get_running_vms)" && [ -n "$VMS" ]; do   # stop once no VMs are left
    for vm in $VMS; do
        : # shut down and wait for $vm (body elided here)
    done
done 2>&1 | tee -a /var/log/xen/xendomains.log
Go and compare to your /etc/init.d/xendomains…