I've been thinking about this a lot lately, as each new buzzword layer enters the scene (SaaS, IaaS, MS, VZ, cloud, containers, ...). Each time one of these technologies comes along I look at it and say "boy, is that stupid / inefficient / costly". Of course, I'm thinking as a sysadmin.
A couple of years ago it dawned on me that the whole point of these techs was to eliminate sysadmins. These techs *are* inefficient in that they all take more CPU cycles / RAM / storage; they are more costly in that they aren't as cheap as I can do it myself (i.e. cloud vs dedicated server). But that's me, computer expert, who can do everything myself (from hw to os to sw) very quickly. Not CEO who doesn't even know what an OS is.
I mean, look at docker or flatpacks or even VMs. These things are the opposite of what we spent 30 years working against... duplication. So now we have the same/similar 100-thousand OS files duplicated everywhere. Now they want to undo the entire concept of shared libraries and essentially make libs static again (well, shared but distribute specific lib versions with the apps, duplicating out the wazoo). For an efficiency purist it seems insane.
So what gives? *** It's to eliminate/reduce one of the most highly paid workers in IT: sysadmins. Or at least, to (finally) allow the overseas outsourcing of sysadmin duties. As MUUGers, a lot of us are sysadmins. We should care. Personally, I'm not worried (as I'm sure many of you aren't), as our skillsets are so diversified, we can't be pigeonholed, and, as required, we can be the masters of any new technology.
My only question is one of curiosity, will they succeed in murdering most sysadmins? Will they save their $100kUS$/yr/sysadmin by giving it all to AWS? Will people accept the obvious inefficiencies of flatpacks in order to delete the cost of people? Actually, I'm dubious, as I've read these buzzwords in trade pubs ever since I started in tech ~1997, every single one claiming to be the admin-killing panacea, and they came and went, and most turned out to be waaay overblown. I'll be shocked if they actually succeed in killing sysadmins this time.
Anyone else thinking about these things?
Bonus question: What happens the day AWS (or whatever) screws up / has a major data loss / some other calamity? You sure are putting a lot of trust in *their* sysadmins, who may not be as smart as you are!
Decent new article about it all this here: https://www.informationweek.com/devops/rip-systems-administrator-welcome-dev...
I've been thinking about this a lot lately, as each new buzzword layer enters the scene (SaaS, IaaS, MS, VZ, cloud, containers, ...). Each time one of these technologies comes along I look at it and say "boy, is that stupid / inefficient / costly". Of course, I'm thinking as a sysadmin.
More specifically, you're thinking as a sysadmin who isn't running on zero energy reserves and isn't half burned out already.
A couple of years ago it dawned on me that the whole point of these techs was to eliminate sysadmins.
That's no surprise. Well, virtualization itself isn't about eliminating sysadmins; it's about eliminating CapEx, power, space and cooling, all of which it did & does very, very well. But a lot of the other technologies you mention explicitly SAY they're about eliminating system administrators and/or network administrators and/or specialists of various stripes.
that they aren't as cheap as I can do it myself (i.e. cloud vs dedicated server). But that's me, computer expert, who can do everything myself (from hw to os to sw) very quickly. Not CEO who doesn't even know
I disagree that cloud vs. dedicated server is a simple cost question. At your scale, yes, doing it yourself is cheaper; at larger scales, no, and dedicated doesn't win again until you get to massive scale. The tiny end of the scale (<10 servers) can afford to spin up a new server, because their single server closet still has enough power, space and cooling to do so. The middle-market (i.e. companies that have a few dozen servers) is crippled by the enormous cost of spinning up another physical server, wherein they have to find space, power and cooling. The high-end (hundreds of servers) has already figured out how to solve that and how to build it into the cost of spinning up a new server.
I mean, look at docker or flatpacks or even VMs. These things are the opposite of what we spent 30 years working against... duplication. So
Don't forget that Docker images can't be updated. Ever. (You have to replace the entire image with a new image that contains updated components.) So for any (i.e. most) orgs that create a docker image, think "Cool!" and deploy it into production - never to be updated again - I foresee massive swarms of 'bots in the future, that basically can't be fixed.
now we have the same/similar 100-thousand OS files duplicated everywhere. Now they want to undo the entire concept of shared libraries and essentially make libs static again (well, shared but distribute specific lib versions with the apps, duplicating out the wazoo). For an efficiency purist it seems insane.
Yup. There's deduplicating storage, which helps somewhat... But yeah, Docker takes that downside to VMs and magnifies it 100-fold. (In the case of VMs, the files would have been duplicated anyway - just on another physical server & physical disk.)
My only question is one of curiosity, will they succeed in murdering most sysadmins? Will they save their $100kUS$/yr/sysadmin by giving it all to AWS?
I don't know about you, but I've never actually met any of these mythical $100k rainbow unicorns. I get paid less than some of the developers at my company.
Will people accept the obvious inefficiencies of flatpacks in order to delete the cost of people?
Absolutely. Without even a split-second's thought. Because they don't require full-stack knowledge & expertise, it's easier to find replacement people. At anything past about 5 people in a company, you have to seriously worry about the "bus factor". I work for a 13-person (as of today) company - I represent one of the biggest business-continuity risks the company has! Bigger than total physical loss of the building & infrastructure, even. We added a 2nd sysadmin both to take some of the pressure off me and to alleviate the bus factor problem. I can't blame my employer for looking at ways to reduce that risk; I *help* my employer look for ways to reduce that risk!
Actually, I'm dubious, as I've read these buzzwords in trade pubs ever since I started in tech ~1997, every single one claiming to be the admin-killing panacea, and they came and went, and most turned out to be waaay overblown. I'll be shocked if they actually succeed in killing sysadmins this time.
Anyone else thinking about these things?
Bonus question: What happens the day AWS (or whatever) screws up / has a major data loss / some other calamity? You sure are putting a lot of trust in *their* sysadmins, who may not be as smart as you are!
It's happened more than once already. And it's OK, because there's a 3rd-party to blame. (Well, it's not "OK", but no-one gets fired for depending on Amazon.) It's also OK because when one AWS Zone goes offline, it takes out dozens if not hundreds (if not thousands) of companies - and the public outrage can be spread so thin across so many companies that no single org really feels the burn very badly. Amazon feels it, but ultimately only 5% of their customers are affected in any given outage, so... If you don't remember, Twitter, Pinterest, Netflix, Imgur, etc... can't remember the others - all went down for almost 24hrs because of an AWS outage two years ago(?). I don't see any lasting damage to them OR to Amazon. (There is a growing recognition that you need to architect systems differently to protect against cloud failure, though.)
Other sysadmins are thinking about the issue, naturally. Not all, but some.
But all that happens is that your specific technical skills change. The ultimate skill set, of being able to visualize the full-stack and orchestrate dozens of technologies to work together to deliver a service, remains a valuable, fairly rare, skill. Whether my job is called "system administrator" or "cloud administrator" in 5 years' time doesn't really change the fundamental nature of what I do all that much.
Systems Administration has never been about knowing the UNIX toolkit inside out and backwards. Or understanding the technical details of how disk accesses are faster at the start of the disk. Or anything like that. It's always been about knowing *enough* about enough different parts of the system that you can put together a functional service. All those specific technical skills are just tools to manage technical complexity. A machinist today doesn't need to know how to fold and hammer steel to make a sword - they have newer & better technologies at their disposal, but they can still create a tool that lets the customer cut stuff.
As technology evolves, an org might not need a full-blown sysadmin to spin up a single Docker instance, or even an orchestrated group of VMs/Lambdas/Containers/whatevers, because the hard work of doing so has been pre-scripted and automated. That's fine. Guess who built that automation? Guess who gets called in to troubleshoot it? Guess who gets tapped on the shoulder to re-architect the whole structure because performance sucks? People like you and me.
So I don't really care if the "system administrator" dies off. In many ways, that should be a goal for our industry, not something to be feared. It's pathetic that we still can't produce reasonably bug-free systems that interoperate with each other in a plug-and-play (sorry) fashion. It's pathetic that the average 10-person shop *needs* a system administrator, at least on call, if not full-time. It's beyond absurd that my bookkeeper's 2-person office would be dead in the water without someone like me to help them. In an ideal world, our job should not exist.
Since we're human, however, and the technology field doesn’t look like it's going to stop sucking hard (from the mid-sized consumer's standpoint) within my lifetime, there's always going to be some job that really just amounts to "Complexity Management". And you and I will probably still be doing it.
(Oh, and it's not just sysadmins. Remember "visual programming"? Each time it gets reinvented, it's going to let business users do their own programming and get rid of all those overpaid programmers. Yeah, right. I still see lots of programmers. Just, hardly any of them write in assembler or COBOL anymore. Although 4GLs haven't exactly taken over the world...)
-Adam
On 2017-07-09 Adam Thompson wrote:
I mean, look at docker or flatpacks or even VMs. These things are the opposite of what we spent 30 years working against... duplication. So
Don't forget that Docker images can't be updated. Ever. (You have to replace the entire image with a new image that contains updated components.) So for any (i.e. most) orgs that create a docker image, think "Cool!" and deploy it into production - never to be updated again - I foresee massive swarms of 'bots in the future, that basically can't be fixed.
Good point!
now we have the same/similar 100-thousand OS files duplicated everywhere. Now they want to undo the entire concept of shared libraries and essentially make libs static again (well, shared but distribute specific lib versions with the apps, duplicating out the wazoo). For an efficiency purist it seems insane.
Yup. There's deduplicating storage, which helps somewhat... But yeah, Docker takes that downside to VMs and magnifies it 100-fold. (In the case of VMs, the files would have been duplicated anyway - just on another physical server & physical disk.)
Ya, but more worrying is what I alluded to: duplication in memory. Wouldn't techs like Docker mean that you lose the shared library effect? Instead of each app instance taking about 20% real memory (with 80% shared), each app takes 100% of its own footprint, no? Dedupe might fix it (somewhat) on the disk, but won't fix it in RAM, which is far more precious than disk (still).
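A rough way to actually see that shared-vs-private split on a live Linux box is the kernel's own per-process accounting. Here's a minimal sketch (it just reads /proc/<pid>/smaps_rollup, so it assumes Linux 4.14+; nothing container-specific about it):

#!/usr/bin/env python3
# Estimate how much of a process's resident memory is shared
# (shared libraries, file-backed mappings) vs. private to it.
# Assumes Linux with /proc/<pid>/smaps_rollup (kernel 4.14+).
import sys

def mem_breakdown(pid):
    shared = private = pss = 0
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            field, _, rest = line.partition(":")
            parts = rest.split()
            if not parts or not parts[0].isdigit():
                continue
            kb = int(parts[0])
            if field in ("Shared_Clean", "Shared_Dirty"):
                shared += kb
            elif field in ("Private_Clean", "Private_Dirty"):
                private += kb
            elif field == "Pss":
                pss = kb
    return shared, private, pss

if __name__ == "__main__":
    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    shared, private, pss = mem_breakdown(pid)
    print(f"PID {pid}: rss={shared + private} kB "
          f"shared={shared} kB private={private} kB pss={pss} kB")

Run it against a few instances of the same app, once installed natively and once inside separate container images, and you can see how much of the footprint is actually being shared in each case.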
I don't know about you, but I've never actually met any of these mythical $100k rainbow unicorns. I get paid less than some of the developers at my company.
Yearly industry surveys. Sr. sysadmins are always 80-110k. Of course, this is in the USA (converted to CA$). Wpg prices obviously vary :-)
Bonus question: What happens the day AWS (or whatever) screws up / has a major data loss / some other calamity? You sure are putting a lot of trust in *their* sysadmins, who may not be as smart as you are!
It's happened more than once already. And it's OK, because there's a 3rd-party to blame. (Well, it's not "OK", but no-one gets fired for
I was thinking more along the lines of data loss or massive hacker infiltration/data theft. Outages, meh; data loss, maybe not so meh. I await that day to see industry reaction, dropped jaws, fired CIOs, etc.
So I don't really care if the "system administrator" dies off. In many ways, that should be a goal for our industry, not something to be feared. It's pathetic that we still can't produce reasonably bug-free systems that interoperate with each other in a plug-and-play
I'm not so sure it's pathetic. It's probably a result of the insane complexity of every system. Just to browse the web on a phone you're relying on, what, 20-40 million lines of code? Laymen don't understand, know or appreciate the insane complexity behind "simple" computer things. I talk to my wife about this occasionally and she can't even begin to "get it" (not for lack of brains, either); why doesn't it "just work", why "shouldn't it be so complicated"? One could posit that we will never be "bug free" (in fact, bugs seem to grow over time), or completely "easy" (just "easier"; at least we are winning there).
... ah, but you answered your own question...
(Oh, and it's not just sysadmins. Remember "visual programming"? Each time it gets reinvented, it's going to let business users do their own programming and get rid of all those overpaid programmers. Yeah, right. I still see lots of programmers. Just, hardly any of them write in assembler or COBOL anymore. Although 4GLs haven't exactly taken over the world...)
Simplifying programming and firing all the programmers is the same pipe dream as "bug-free systems". They have the same root cause. It's the same class of problem. There's probably a mathematical way to state / model all of this and it's probably the same equation. It's probably NP-complete and will never be solved. I'll bet money on it.
I laugh at all the news sites/papers/shows being filled with "automation / AI" job-loss doom-porn all the time these days. I laugh even harder when they proclaim the imminent elimination of programmers. Never going to happen, and I'll bet money on that too. In my lifetime I can remember: (circa 2000) UML "killing all the programmers"; then I heard some APL wizardry would do it; now it's AI (and probably many more I never got wind of).
For us programmers out there (lucky me, I'm 50/50 programmer/admin), don't worry about this nonsense, because it's never going to happen. Levels may get higher, but someone still must think through all the corner cases and gotchas and flows and bugs, or just plain elicit the #%!@$ specs from the stakeholders! If you think AI can program another kernel like Linux, or a new Firefox, or your next e-commerce idea, dream on, and I'll take that bet too.
Nice reply, Adam, guess the rest of MUUG is off to the cottage, or not an admin, or too busy shaking in their boots to reply :-) As we both said, no reason for anyone to get scared about any of this stuff as long as your skillset is deep and wide.
On 2017-07-10 01:27, Trevor Cordes wrote:
Nope, lurking mostly but you guys have done such a great job of summing up the current environment there's not much left to add. If you are lucky enough to work where the management respects you for your knowledge, skills, effort, and professionalism, stick with it.
Dave.
I didn't want to say anything because I don't have a lot more to say than "Ditto. What he said."
I will point out (sorry if I missed it in your voluminous contributions ;-) ) that much of the original goal was to reduce numbers of servers and:
* reduce power usage
* reduce cooling requirements
* reduce running up and down many aisles of many tower computers checking and replacing hard drives
* reduce stranded assets, i.e. servers with big hard drives hardly being used; servers with lots of RAM hardly being used; servers with mostly-idle CPUs
Of course where I work we have many blades, all of which are typically at 10-25% CPU usage. Also, no one is checking on how much of the assigned RAM is actually in use. VMware isn't too bad at monitoring actual RAM usage and letting one oversubscribe RAM without much of a performance hit, but there's still overhead in the management. Oracle VM on SPARC (at least) doesn't allow oversubscription of CPU or RAM *at all*, simplifying that, but then ending up with... stranded assets: idle CPUs and unused RAM.
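Just to put rough numbers on the stranded-asset point (every figure below is a made-up placeholder, not from any real environment):

# Back-of-the-envelope stranded-capacity estimate.
# All numbers are hypothetical placeholders, purely for illustration.
blades           = 20
cores_per_blade  = 32
ram_per_blade_gb = 256

avg_cpu_util = 0.20   # "typically at 10-25% CPU usage"
avg_ram_util = 0.50   # guess at how much assigned RAM is actually touched

idle_cores = blades * cores_per_blade * (1 - avg_cpu_util)
idle_ram   = blades * ram_per_blade_gb * (1 - avg_ram_util)

print(f"Idle CPU capacity: {idle_cores:.0f} of {blades * cores_per_blade} cores")
print(f"Unused RAM:        {idle_ram:.0f} of {blades * ram_per_blade_gb} GB")

With numbers anywhere in that ballpark, most of the hardware you paid for is doing nothing most of the time, which is exactly the stranded-asset problem.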
It was probably later when someone thought "We can sell this as a way to reduce head-count!"
I think it will go in cycles. As in:
- "Hey, we can use all these technologies to reduce head-count!"
- "Oh, I guess we still need them."
- "Hey, we can use all these technologies to reduce head-count!"
- "Oh, I guess we still need them."
- "Hey, we can use all these technologies to reduce head-count!"
- "Oh, I guess we still need them."
And so on.
On Mon, Jul 10, 2017 at 1:27 AM, Trevor Cordes trevor@tecnopolis.ca wrote:
You guys covered a lot of ground in your discussion and for the sake of time I won't comment on every point, but basically what you're discussing is the business case for "Cloud" vs. on-prem.
In short, the "Cloud" reduces the complexity of operations while increasing agility and scalability.
Cloud reduces complexity by eliminating the management of the infrastructure. You no longer have to worry about servers, storage, power, cooling, network switches, cabling, virtualization etc. etc. etc. (not to mention making all of the above redundant). This is a massive win for I.T. operations because it lifts the burden of the most time consuming and expensive part of operations (building and operating data centers). Most admins also don't enjoy swapping UPS batteries and testing HVAC systems so double win!
Cloud increases agility because it's all billed per-minute. You no longer need to go through a multi-month (year?) process to add a new service. You just click. You can add hundreds of servers to your environment in seconds then turn them off again 5 minutes later. There is no equivalent of that in any other environment.
Cloud increases scalability because day-to-day maintenance tasks like monitoring and patching can all be automated. So while it's fine to say you can spin up 100 new servers in 5 minutes, that's not a good thing unless you have a way to manage all that new compute. Cloud has you covered.
Or think of it this way: Cloud is your way to access the most sophisticated & robust I.T. infrastructure and management solutions available. Solutions that, until Cloud, were only available to the world's largest organizations (and even they struggle to cope).
"Cloud" is like buying a plane ticket. You don't just get a seat on a plane, you get a "slice" of all the complex operations that happen in the background to get you from point-A to point-B safely (air traffic control, airport maintenance, airplane maintenance, etc. etc. etc.) Think of what a different world it would be if we didn't have a "Cloud" of airplanes in the sky to utilize.
Now a quick side-bar on containers (e.g. Docker). The whole point is that they don't get updated. If there is a new version of your code, you push it out to a new "swarm" of containers and all the old ones get torn down. It's easy and that's the whole point. If you had to do patch management on containers, now you're effectively just turning them back into VMs and that defeats the whole purpose.
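For anyone who hasn't seen that workflow, here's a minimal sketch of the replace-rather-than-patch cycle using the stock docker CLI (the image name, tags, container names and ports are all hypothetical):

#!/usr/bin/env python3
# Replace-rather-than-patch: rebuild the image (pulling updated base
# layers), start a replacement container, then tear the old one down.
# "myapp", the tags, names and ports are hypothetical placeholders.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

OLD, NEW = "myapp-v1", "myapp-v2"

# 1. Rebuild with --pull so updated base-image layers get picked up.
run("docker", "build", "--pull", "-t", "myapp:v2", ".")

# 2. Start the replacement alongside the old container.
run("docker", "run", "-d", "--name", NEW, "-p", "8081:8080", "myapp:v2")

# 3. (Health-check NEW and repoint your load balancer here.)

# 4. Tear down the old, never-patched container.
run("docker", "stop", OLD)
run("docker", "rm", OLD)

The patching happens in the image build, never in the running container.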
John
I mostly agree.
Although per-minute billing has yet to be a useful feature to me or any of my clients – none of us are equipped, or even have the need, to operate “at scale” in that manner. Cloud’s ability to scale quickly, tear down quickly, etc. is really only meaningful to orgs and applications that have made the jump to distributed, composable microservices. My estimation is that represents less than 1% of the mechanized (computerized) processes in existence. Those <1% are highly visible, however (i.e. Netflix). I still spend my days managing large, monolithic systems – even though they happen to be virtualized or running on “someone else’s servers”.
There’s an excellent recent discussion on legacy vs. cloud architecture on JWZ’s (of Netscape, Mozilla, and xscreensaver fame among others) blog: https://www.jwz.org/blog/2017/06/dear-lazyweb-tell-me-about-colo/, and his followup post https://www.jwz.org/blog/2017/06/colo-again/. The discussion actually takes place in the comments, which – particularly for a public comments section on a public blog – are surprisingly readable and polite! (Hint: JWZ doesn’t like cloud architecture any more than I do. It’s still an interesting read.)
As to Docker: yes, I understand the theory. I just believe that we’ll see a raft of businesses deploying Docker (&c.) images without really understanding the ramifications, and then the Docker evangelist will leave or get promoted or whatever, and we’ll be left with millions of unmanaged, unmaintained services because not only aren’t they getting patched (which is the whole point, yes) but they *also* aren’t getting replaced with updated images. Never underestimate the power of inertia!
Meanwhile, yes, I take advantage of Other People’s Servers and Other People’s Bandwidth and Other People’s UPSes, etc., because you’re right: managing UPSes and HVAC and all the other necessary-but-not-sufficient stuff is a pain in the butt that I’d rather not have to deal with. That doesn’t mean I have to buy into the Cloud “swarm” concept at all, mostly because it’s irrelevant to me. And I’m not at all convinced that requiring swarms of immutable VMs to achieve scalability & reliability is a sound architectural choice in the first place – it seems like a reaction, not a design.
(and *whoosh* there we went off on a different tangent altogether)
While I agree that Trevor’s initial discussion could serve as a proxy for some old technical arguments (local vs remote datacenter, rent vs own, etc.), I don’t think the reverse is true. The original concern was that we seem to have targets painted on our backs as a group, for some reason. Which is true – and the reason is that most HR people (in my experience) can’t understand what a system administrator *is* and can’t figure out how to hire them effectively; that difficulty of hiring good admins, combined with the fact that we’re highly-skilled, highly-mobile/portable, highly-paid individuals, inherently makes us a business risk. So of course the “business” is going to respond well to anything that claims to reduce that risk or that cost.
Naturally, that ignores the fact that then you need an AWS jockey, or an Azure admin, or a Docker expert, all of which are equally/more difficult to find, equally/more highly-paid, and have equally/more portable skill sets. But who ever accused groupthink of looking past the immediate payoff? ;-)
-Adam
Have you ever spun up a VM to test something, or done a POC, or wanted to test how your web site would work on the newest RedHat LAMP stack? Then per-minute billing is relevant to you.
Have you ever scoped hardware and storage based on its peak expected utilization? Then it sits idle 99.9% of the time and the storage fills up 10X faster than anticipated? Then per-minute billing is relevant to you.
Highly scalable micro-swarms are not the most common use case for per-minute billing. It's the ability to forgo the complex capacity-planning stage of a project (which is largely guesswork anyhow) and eliminate large, risky capital investments that's the real win.
I completely disagree that Admins have a target on their back. The Cloud, like any seismic shift in any industry, will demand new things of Admins. They will be asked to embrace new ways of thinking and new technologies which deliver better outcomes for business. The skills and experience that today's Admins have are highly transferable to the Cloud, so there is great opportunity.
But sure, some Admins will insist that they can still do more with a horse and buggy than a newfangled truck, and truth be told, horses and buggies stuck around a long time after trucks were first invented so existing Admins will be around for many years to come.
John
Yes, I’ve done all those things.
I have *never* needed per-minute granularity as of yet. The closest I’ve come was a PoC where per-day or per-week billing would have saved me money compared to per-month, but monthly billing still would have been entirely acceptable. (Yes, it was billed per-minute, which was of no additional benefit to me.) And per-month would have saved me a LOT of money compared to having to buy and setup a new server.
Per-minute billing isn’t a bad thing in and of itself, but somewhat like per-second billing on cell phones, it’s also irrelevant to the vast majority of users, and is a marketing ploy much more than a real technical “feature” that makes most people’s lives easier. If anything, I’ve found it’s a trap to increase prices while making you believe prices are actually lower.
I honestly don’t see the applicability of per-minute billing to your second scenario; once again, businesses don’t respond to changing conditions on a minute-by-minute basis. At Avant, I’ve yet to see a decision that took less than a couple of days to make. *Usage-based billing* or even just *rental* are different things than *per-minute* billing, and if you’re just using “per-minute” as a proxy for that, then yes, of course it’s a huge benefit.
As I said earlier, virtualization and associated technologies are mostly about turning CapEx into OpEx, and they succeed extremely well at that. “Cloud” service providers – or even just server renters like OVH – are another important piece in converting CapEx to OpEx, and they too succeed quite well.
And I would tell you to keep in mind that a) horses can go places that trucks can’t; b) a good horse will get you home when you’re drunk or asleep; c) you already bought the horse and your entire business probably revolves around having horses; d) buying a truck means you now have to deal with both Vets *and* mechanics – you don’t get to throw out the horses right away. Why should I – or anyone – immediately jump on the truck bandwagon? If I was a brand-new, green-field startup, then yeah, may as well start with whatever’s current today. But if I have legacy investments and systems that will continue for years to come?
I’m at least a decade past the “hey, it’s new, therefore it’s cool, therefore let’s use it!” stage, and well into the “yeah, sure, prove it” stage of life. Until someone can convince me that the New Ways Of Thinking actually represent an *improvement*, not just a *change*, over the old ways, I’m going to continue to regard them with extreme skepticism. Because I *was* one of those young turks who agitated for change, convinced that newer always meant better. Enough experience finally taught me otherwise. Especially when I note that we’ve basically gone right back to 1970-style Mainframe Partitions with semi-intelligent terminals; the problems we’re dealing with today on the web are EXACTLY the same problems we were dealing with in the early ‘80s with IBM/Amdahl mainframes and IBM 3270-style terminals… just with rounded corners and alpha transparencies. I honestly don’t see a lot of actual improvement in many (not all!) areas. “Everything Old Is New Again.” I’m tired of that hamster-wheel.
(Docker is a perfect example. What a clusterf*ck, as it stands today! I look forward to whoever eventually supplants Docker in the immutable-image space, though, as hopefully they’ll get some of Docker’s mistakes right. Maybe Linux will be dead and replaced with BSD by then, too… not holding my breath for either of those things to happen.)
-Adam
I just want to clarify that when I said "per-minute" billing, I really meant consumption-based billing, or OpEx vs. CapEx if you prefer. The interval isn't that important, although the smaller the interval, the greater the agility. Case in point: I needed to test out something on a SharePoint server farm the other day, just a simple POC. I spun it up in Azure from a template, did my quick test in about 3 hours, then destroyed it. Cost was probably around $10? If it were billed monthly, it would probably have cost $400+ (I'm estimating because I didn't actually look it up).
The second scenario is relevant to cloud because, as I said you don't have to do any capacity planning. Just pick a VM size and go. If you need more or less, just change the size up or down at any time. With monitoring and scheduling you can dynamically increase and decrease the VM size (and cost) any time. So for example, your main application server could be "beefy" 9-5, then scaled to "small" after hours (or even powered off completely). Or seasonally, during year end, scale up, after that, scale down.
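A minimal sketch of that clock-driven sizing idea; resize_vm() is just a hypothetical stand-in for whatever resize call your provider actually exposes, and the VM size names are assumed for illustration:

#!/usr/bin/env python3
# "Beefy 9-5, small after hours" sizing sketch, meant to run from cron
# or a cloud scheduler. resize_vm() is a hypothetical placeholder for
# your provider's real API/CLI call; the size names are assumptions.
from datetime import datetime

BUSINESS_SIZE  = "Standard_D8s_v3"
OFF_HOURS_SIZE = "Standard_B2s"

def desired_size(now: datetime) -> str:
    business_hours = now.weekday() < 5 and 9 <= now.hour < 17
    return BUSINESS_SIZE if business_hours else OFF_HOURS_SIZE

def resize_vm(vm_name: str, size: str) -> None:
    # Placeholder: call your cloud provider's resize API/CLI here.
    print(f"(would resize {vm_name} to {size})")

if __name__ == "__main__":
    resize_vm("app-server-01", desired_size(datetime.now()))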
Regarding horses vs. trucks: I agree with your points exactly. a) there will always be use cases for on-prem (horses), b) is an example of point (a), c) I agree, as I said it will take time to transition and horses will be around for a long time to come; nevertheless, trucks will inevitably replace (almost) all horses. d) Once you've proven the business case for trucks, then (d) is actually an argument for transitioning faster. The quicker you get rid of the horses, the better the business case.
I'm sorry Adam but I have to completely disagree with your "we have not made any progress since the mainframe days" rant. There isn't a single industry that isn't massively more productive today than it was in the 80s and that can be attributed almost entirely to computers and automation. The systems today are many many times more complex than they were in the terminal days and yet deliver massively more value.
Furthermore; "Cloud" is proven. It's not just a fad. As I said, it doesn't meet every business case today, but as the economies of Cloud improve (it keeps getting cheaper), it will expand and inevitably take over as the dominant way to deliver compute.
Side note; I happen to own 2 horses AND a truck! lol
John
On 2017-07-13 John Lange wrote:
But sure, some Admins will insist that they can still do more with a horse and buggy than a newfangled truck, and truth be told, horses and buggies stuck around a long time after trucks were first invented so existing Admins will be around for many years to come.
Ah, the old fallacious ridicule/peer-pressure argument. Who would want to be the horse-riding Luddite? Heavens, not me! No different from the "right side of history" (non)argument that is overused in similar circumstances.
Cloud, or whatever it's renamed during the next hype-phase, cannot ever replace all on-site servers (let alone desktops). The reason is that it's mathematically impossible for the cloud providers to sell their service cheaper than their raw costs dictate. This especially applies where one needs a constant amount of resources over a long period of time. If I need 12 cores & 64GB steady for a project, and I am (or have on payroll) a computer expert, there is no way AWS or Azure can per-minute me that box for less than I can buy & run it myself.
In fact, I'd be extremely curious to see what multiple of (hard!) cost the cloud providers are using to determine pricing. For instance, if said box is $3k and has a usable life of 3 years, it's easy to add in A/C, etc., to get a TCO. Then figure out what cloud charges for the equivalent of said box. I would not be surprised if it is 2X, 3X or more. Of course that multiple will keep falling (as long as no mono/duopolies arise), but it can never be less than 1X.
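To make that multiple concrete, the arithmetic looks something like this (every figure is a made-up placeholder, not a quote from any provider):

# Rough DIY-vs-cloud cost multiple for a steady 12-core / 64GB box.
# Every figure is a hypothetical placeholder, for illustration only;
# admin labour is deliberately excluded, per the scenario above.
box_cost         = 3000.0   # purchase price, $
useful_life_yrs  = 3
power_ac_monthly = 40.0     # assumed $/month for electricity + A/C share

diy_tco = box_cost + power_ac_monthly * 12 * useful_life_yrs

cloud_monthly = 450.0       # assumed price of a comparable instance
cloud_tco = cloud_monthly * 12 * useful_life_yrs

print(f"DIY 3-yr TCO:   ${diy_tco:,.0f}")
print(f"Cloud 3-yr TCO: ${cloud_tco:,.0f}")
print(f"Multiple:       {cloud_tco / diy_tco:.1f}x")

Plug in real quotes and your own power/cooling numbers and the multiple moves around, but it can't go below 1X, because the provider is paying for the same hardware plus their margin.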
Granted, my scenario assumes a high level of local talent, and assumes a smaller (micro) level of scale. I'll gladly admit that cloud has its (small, albeit growing) niche, and excels in there. However, to say that niche will do to on-prem what trucks did to horses smacks of over-optimism. Cloud's niche will grow to a certain level then stabilize. It will never be 99% of the computer market. It'll never be 99% of the server market. It'll never even be 99% of the web server market! I'll bet real money on that on any timeframe you wish during which we'll still be alive.
Also, any conversation about pie in the sky (get it??) technologies must be candid about the effect of geography and scale. A lot of us have learned that what works in Silicon Valley, New York and Toronto doesn't necessarily apply to Winnipeg (or any smaller, lower-cost-of-living, lower-cooling-cost locale). And what applies to medium/large business does not apply to small/micro business. Adam mentioned the same thing: works great for Netflix, not for my local micro-retail customer. If you don't need quick scalability, high time-variability, or CapEx->OpEx conversion, then I see very little value to cloud at all.
Additionally, and purely personal, cloud takes away the "fun" factor. I'm sure there's very few MUUGers who don't still get a kick of spec'ing and building a DIY workstation or server and firing it up. There's a sense of satisfaction. It'll be a sad day if/when we lose all sight of the actual hardware.
On 2017-07-13 Adam Thompson wrote:
I’m at least a decade past the “hey, it’s new, therefore it’s cool, therefore let’s use it!” stage, and well into the “yeah, sure, prove it” stage of life. Until someone can convince me that the New Ways Of Thinking actually represent an *improvement*, not just a *change*, over the old ways, I’m going to continue to regard them with extreme
Decade? Perhaps, for us, closer to two :-)
skepticism. Because I *was* one of those young turks who agitated for change, convinced that newer always meant better. Enough experience finally taught me otherwise. Especially when I note that we’ve basically gone right back to 1970-style Mainframe Partitions with semi-intelligent terminals; the problems we’re dealing with today on the web are EXACTLY the same problems we were dealing with in the early ‘80s with IBM/Amdahl mainframes and IBM 3270-style terminals… just with rounded corners and alpha transparencies. I honestly don’t see a lot of actual improvement in many (not all!) areas. “Everything Old Is New Again.” I’m tired of that hamster-wheel.
That's an astoundingly good commentary. Cloud is the same model as time-sharing: sure, vastly more powerful, but still the same model of grabbing a tiny slice of a monolithic hidden beast. Love the "rounded" and "alpha" (neither is on my box!). I can't think of one single actual UI improvement on the desktop in over a decade, save tabs in browsers. :-)