At work I have two Ubuntu and two CentOS servers. What do you recommend as the best practice for applying updates? Specifically, do you do any testing on test machines first, or just wait until the updates are a certain age without hearing of any issues? Automatically apply them, or manually? Do you reboot the servers regularly regardless of whether you've patched them (something Windows administrators still do for their Windows servers!), or just wait until a kernel or other update requires it?
Kevin
At work, we copy the updates repo and point all our servers there. Every so often we freshen the mirror and begin the patch cycle. If you have reliable tests then you can test before rolling out, or just roll out to a couple of development servers first before generally deploying.
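A client-side repo definition for that kind of internal mirror might look roughly like this (the repo id, name, and URL here are made up for illustration):

```ini
# /etc/yum.repos.d/internal-updates.repo -- hypothetical client config
# pointing a server at a local mirror of the updates repo
[internal-updates]
name=Internal updates mirror
baseurl=http://mirror.example.com/centos/$releasever/updates/$basearch/
enabled=1
gpgcheck=1
```

Freshening the mirror then amounts to re-syncing that tree and regenerating its metadata at the start of each patch cycle, so every server sees the same frozen set of updates.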
For my personal stuff I run Nagios with a plugin that checks the output of "yum list-security" (which needs the yum-security package) and flags an alert if there's a security-related fix. Those I try to install fairly quickly. Otherwise I periodically upgrade the non-critical packages, and schedule the critical ones (apache/nginx/php/ruby/mysql). See http://ertw.com/blog/2010/11/19/epel-nginx-rpm-and-upgrade-from-0-6-x-to-0-8... for something that recently bit me :(
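A minimal sketch of the parsing side of such a check, assuming (as an illustration, not the plugin's documented format) that each pending security advisory shows up as a line starting with an RHSA id; the sample output below is invented:

```shell
# Count pending security advisories in `yum list-security` output.
# Assumes security advisories are the lines starting with "RHSA"
# (the exact format varies by yum version and distro, so check yours).
count_security_advisories() {
    grep -c '^RHSA'
}

# Invented sample of what the plugin's output roughly looks like:
sample='Loaded plugins: security
RHSA-2010:0839 Important/Sec. kernel-2.6.18-194.26.1.el5.x86_64
RHBA-2010:0836 bugfix postfix-2.3.3-2.2.el5_4.x86_64'

pending=$(printf '%s\n' "$sample" | count_security_advisories)
echo "pending security advisories: $pending"
```

A Nagios plugin built around this would exit 2 (CRITICAL) when the count is non-zero and 0 (OK) otherwise.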
If you have packages that are critical to your application, you can put them under cfengine/puppet management to automate some of the tasks associated with keeping them up to date.
Most of the servers I take care of now are VPSes, so I never reboot for kernel upgrades.
Sean
On Fri, Nov 26, 2010 at 8:01 PM, Kevin McGregor kevin.a.mcgregor@gmail.com wrote:
Roundtable mailing list Roundtable@muug.mb.ca http://www.muug.mb.ca/mailman/listinfo/roundtable
For CentOS, I'm quite comfortable setting up automatic updates. It's not "best practices" but I've spent a LOT less time fixing post-update problems than I would have spent testing each update, over the years. (This applies to Red Hat in general since RH2.1.)
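On RHEL/CentOS 5 the stock way to get automatic updates is yum-updatesd; a sketch of /etc/yum/yum-updatesd.conf with automatic installation turned on (option names from memory, so verify against your release's man page):

```ini
# /etc/yum/yum-updatesd.conf -- sketch for RHEL/CentOS 5's yum-updatesd
[main]
# check for updates every hour
run_interval = 3600
# how to notify: dbus, email or syslog
emit_via = syslog
# download and apply updates automatically
do_download = yes
do_download_deps = yes
do_update = yes
```

With that in place, "chkconfig yum-updatesd on" keeps the daemon running across reboots.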
Ubuntu... Not quite so happy. Their updates come fast and furious sometimes, and the patterns I see don't inspire confidence. That said, I often have automatic updates turned on for Ubuntu desktops and have only had one major problem in ~5yrs.
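For the Ubuntu side, unattended-upgrades can at least restrict automatic installs to the security pocket; a rough sketch for a lucid-era system (the release name is an example, so adjust the origin string to your release):

```text
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
};
```

Leaving the ordinary updates pocket out of Allowed-Origins means the fast-and-furious non-security updates wait for a manual apt-get run.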
I think the days of testing patches independently are gone because of manpower reasons, unless you're running in a high-availability environment. Of course, all the HA system vendors I work with now address the problem by *never* patching or upgrading - one telecommunications vendor runs CentOS 4 (4.1 IIRC), with no plans to upgrade or apply *any* patches. Their answer: the systems shouldn't be reachable from the Internet anyway. *sighhhh*
-Adam
-----Original Message-----
From: Kevin McGregor kevin.a.mcgregor@gmail.com
Date: Fri, 26 Nov 2010 20:01:05
To: MUUG Roundtable roundtable@muug.mb.ca
Subject: [RndTbl] Linux patching best practices
On the Ubuntu systems, even the server edition, I've had updates that break my existing setup. If you have to update, do it by hand.
On a somewhat related note, I've taken to using the desktop version of Ubuntu even for servers; that way it's free for Ksplice.
All the best, Rob
On Fri, Nov 26, 2010 at 8:43 PM, Adam Thompson athompso@athompso.net wrote:
On 2010-11-26 20:43, Adam Thompson wrote:
For CentOS, I'm quite comfortable setting up automatic updates. It's not "best practices" but I've spent a LOT less time fixing post-update problems than I would have spent testing each update, over the years. (This applies to Red Hat in general since RH2.1.)
I would tend to agree here, at least for the repos enabled by default in CentOS-Base.repo, i.e. base, updates, addons and extras. What I do at work is allow auto-updates for those repos on the various workstations and non-critical servers I maintain. For my most critical server, I run "yum update" manually, after I've determined that the updates didn't break anything on the other systems.
Not necessarily safe for third-party repos, however... I've had some minor breakage with rpmforge packages, and catastrophic failures with some EPEL updates that were DOA and pushed out without the slightest bit of testing. (They can also take forever to fix such broken packages.) I'd be sure to test these out on the least critical systems first, before updating anything important.
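One way to keep such repos from surprising you is to leave them disabled in their .repo files and pull from them only deliberately; a sketch (the [epel] id matches EPEL's usual naming, but check your own file):

```ini
# /etc/yum.repos.d/epel.repo (excerpt) -- repo off by default
[epel]
enabled=0
gpgcheck=1

# Updates from it then happen only when you ask, e.g. on a test box first:
#   yum --enablerepo=epel update
```

Auto-updates keep running against the default repos while third-party packages move only when you explicitly enable the repo for a run.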
I think the days of testing patches independently are gone because of manpower reasons, unless you're running in a high-availability environment.
Again, I mostly agree, but I would make exceptions for certain critical packages and/or critical systems, whether HA or not. But, yeah, you can't test every update that comes out.
In theory, regression testing is what you get when you use the licensed versions of Red Hat and SUSE etc., so if you're asking those kinds of questions you might want to use a licensed version.
But I'm wondering what other people on this list think about that answer?
From my personal perspective, on our licensed versions of SLES as well as my desktop openSUSE, I allow auto-updates of everything that does not require a reboot.
Since I'm stuck with proprietary ATI drivers on my laptop, my display stops working if I allow automatic kernel updates. On servers, unscheduled, unsupervised rebooting is not a good idea, so it's better to plan those updates.
I think that the answer you gave ignores application-specific things.
For example, at a former employer, some code we had broke when Microsoft issued a service pack. Our code depended on the [broken] way NT would parse command line options, and that "was fixed" in the service pack.
Red Hat et al. seem to be pretty good about just issuing real bug fixes and not jumping software versions during updates, but I don't get any warm fuzzies from the updates being regression tested. I would make sure to test anything related to the language you're using.
Another example, the version of Ruby that used to be in CentOS failed this test:
should "not have that 1.8.5 bigdecimal bug" do
  assert_equal "$1.01", ("$%0.2f" % BigDecimal.new("1.01"))
end
which is roughly equivalent to sprintf("$%0.2f", 1.01). (It returned "$1.10", not something you want when you're producing financial reports.)
Sean
On Mon, Nov 29, 2010 at 4:55 PM, John Lange john@johnlange.ca wrote:
-- John Lange www.johnlange.ca
On Mon, Nov 29, 2010 at 11:33 AM, Gilbert E. Detillieux gedetil@cs.umanitoba.ca wrote:
--
Gilbert E. Detillieux          E-mail: gedetil@muug.mb.ca
Manitoba UNIX User Group       Web: http://www.muug.mb.ca/
PO Box 130 St-Boniface         Phone: (204)474-8161
Winnipeg MB CANADA R2H 3B4     Fax: (204)474-7609