Well, not really partitioning. I know how to do that. At work today, the question of how to set up some Linux servers arose. To put it in some kind of context, when I install Ubuntu Linux (server), by default it creates a small /boot partition, then creates an LVM partition with an ext4 / filesystem and a swap partition inside it.
Is that optimal? Recommended? Some would say that /home, /tmp, /var and others should reside in separate partitions/filesystems. Discuss. :-)
Thanks for any input!
Kevin
On 06/08/2011 07:20 PM, Kevin McGregor wrote:
http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html
Peter
On 2011-06-08 20:55, Peter O'Gorman wrote:
Benefits of LVM Small?... ;) For a small/home installation, I would tend to agree. Like Adam and Sean, I would also advocate the KISS principle, particularly when less technically sophisticated users might end up administering the system.
Last paragraph in the above LVM article: "Jane simply adds the new disk to her existing volume group and extends her /home logical volume to include the new disk." Yeah, but Jane probably added this new disk years after the first one, and the first one will likely die long before the new one, which will take out her entire /home file system, making both disks useless (unless there's a recent backup). I've seen this very scenario at work. In my case, "Jane" was a prof in our department who liked to do his own Linux installs, and got LVM by default with a Fedora install. The file system in this case was "/" rather than "/home".
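For concreteness, the "Jane adds a disk" scenario from that HOWTO comes down to a handful of commands. The device name (/dev/sdb1), volume group (vg0), and logical volume (home) below are made up for illustration; treat this as a sketch, not something to paste onto a live box:

```
# Sketch only: device, VG, and LV names are hypothetical. Run as root.
pvcreate /dev/sdb1                     # initialize the new disk for LVM
vgextend vg0 /dev/sdb1                 # add it to the existing volume group
lvextend -l +100%FREE /dev/vg0/home    # grow the LV into the new space
resize2fs /dev/vg0/home                # grow the ext3/ext4 filesystem online
```

Of course, as noted above, the logical volume now spans two disks with no redundancy, so losing either disk loses the whole filesystem.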
For a home system with lots of storage, I'd be more inclined to use mdadm to set up RAID 1 mirroring, and skip LVM entirely.
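A minimal mdadm RAID 1 setup along those lines might look like this (device names hypothetical; sketch only, since creating arrays is destructive):

```
# Sketch only: assumes two blank, identically partitioned disks. Run as root.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0                                # filesystem directly on the mirror
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array config
                                                  # (/etc/mdadm.conf on Fedora)
```

No volume groups, no logical volumes: one mirror, one filesystem, and either disk can die without taking the data with it.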
For the systems that our students use, I tend to go a bit more overboard on the partitioning, having separate file systems for /tmp, /var/tmp, and just about anything else they'll be able to write to. That way, even if they blow away all the free space on one of those, most things on the system keep on working until I can fix the space problem. I've still had problems, though (more than once), with servers dying due to syslog filling up all of /var in a matter of hours!
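When one of those filesystems does fill up, the first diagnostic steps are the same everywhere, and need nothing beyond the standard tools:

```shell
# Find out how full the filesystem is, then which directory is eating it.
df -h /var                                     # usage of the filesystem holding /var
du -xh --max-depth=1 /var | sort -rh | head    # biggest subdirectories first
# -x keeps du from crossing into other mounted filesystems,
# so a separate /var/tmp or /var/spool doesn't skew the numbers.
```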
I’ve taken to making my life as simple as possible:
/dev/sda1 = swap
/dev/sda2 = root (including everything else)
No LVM, no RAID (assuming HW RAID or VM instead).
I haven’t run into a system that can’t boot off a large root partition in quite some time, and I don’t have any systems running root FS types that aren’t bootable, either.
The fewer things I have to remember about how a system is configured, the better, from my perspective.
-Adam
On Wednesday, June 08, 2011 19:21, Kevin McGregor wrote to the MUUG Roundtable (Subject: [RndTbl] Partitioning in Linux):
In *BSD, making separate slices for non-root file systems is a very good idea for hardening and for debugging service configurations, via mount options such as 'nodev' and 'nosuid' ('nosuid' has exposed some qmail issues I had never considered before using that option).
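On the Linux side, those same hardening options go in /etc/fstab. The devices below are hypothetical, and note that noexec on /tmp can break some package install scripts, so this is a starting point rather than a recipe:

```
# /etc/fstab excerpt (hypothetical devices):
/dev/sda5  /tmp   ext4  nodev,nosuid,noexec  0 2
/dev/sda6  /home  ext4  nodev,nosuid         0 2
```

None of these options are available per-directory; they only apply at mount boundaries, which is itself an argument for the separate filesystems.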
As well, slicing and dicing becomes academic if you use the full size of the drives up front (it also lets you move slices around by adding drives/space). Lock-step change-management environments would prefer slicing and dicing to keep allocations as small as possible, limited to known (a priori) resource requirements, with room to grow a file system in place (rather than, say, giving /var 99% of your disk when 20% would do, and growing other slices later based on host-specific growth patterns).
Then again, if your hosts are internal, simpler is better, and I agree with Adam there (though I would put swap on sda2 and root on sda1, but that's largely academic and Adam's probably right depending on hardware layout).
When I design servers I try to fit the shipping OS into 10GB or less and leave the rest for variable-growth mounts like /tmp, /var, /opt, /home. In a pinch, being able to grow a slice, or move say /var/log to a new one, lets you keep /var in place while lessening the pressure /var/log is putting on that slice. Anyways, that's my two cents (refunds not available).
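Splitting /var/log out after the fact is mostly a copy-and-remount job. A rough sketch, with a hypothetical new partition, and best done with syslog stopped:

```
# Sketch only: assumes /dev/sdb1 is a fresh, empty partition. Run as root.
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt
cp -a /var/log/. /mnt/       # -a preserves ownership, permissions, timestamps
umount /mnt
mount /dev/sdb1 /var/log     # old contents remain hidden underneath the mount
# ...then add a matching /etc/fstab entry so it survives a reboot.
```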
On 2011-06-08 Kevin McGregor wrote:
Is that optimal? Recommended? Some would say that /home, /tmp, /var and others should reside in separate partitions/filesystems. Discuss. :-)
Better late than never...
I've recently come to the conclusion that for most stuff I do, skip LVM. LVM was great when resize2fs and gparted didn't exist. But with those tools now being so advanced, everything I cared to use LVM for is now handled in a much simpler way.
If you use LVM and still want to do resize type things in a way that LVM can't do (ran into this recently) then it's a major pain to work around LVM to use simple resize2fs commands.
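For the record, the no-LVM resize path alluded to here is short. A sketch, assuming an ext3/ext4 filesystem on a plain partition (hypothetical /dev/sda2) that has just been enlarged with gparted or parted:

```
# Sketch only. Grow the partition first (gparted, or parted's resizepart),
# then grow the filesystem into it:
e2fsck -f /dev/sda2      # a forced check is required before an offline resize2fs
resize2fs /dev/sda2      # with no size argument, expands to fill the partition
```

Two commands, no volume-group bookkeeping, which is much of the argument for skipping LVM in the first place.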
That said, I still use the boot/root/swap 3 partition setup, as Fedora still likes it that way. Maybe separate boot isn't required anymore, but if it ain't broke, why fix it?
On my personal systems I do some other wacky stuff for performance reasons (splitting things on different disks for parallelism). So no harm to having separate /tmp or /var/spool/squid if disks permit.
If it's a system I care about, I'll do md RAID[16]. If it's a less important system (living room mythtv comp) then I'll just do 1 disk.
Adding another $0.02 to the mix (yeah, I know, I already expressed an opinion a while ago, now I've had time to think about it):
I find that the original rationale for keeping /, /tmp, /var, etc. all separate was to prevent users from filling up critical system partitions. This was in the day when a) most, if not all, UNIX systems had unprivileged users, and b) said systems died quite messily if a filesystem became completely full.
In my line of work, at least, I don't manage *even one single system* that has any unprivileged users, so the only person(s) filling up / or /var have access to do so as root if they choose.
Another rationale was due to the size and cost of disks - one typically wanted to purchase the minimum suitable amount of disk space, because over-buying disk space was ruinously expensive.
I can now purchase a 2TB disk drive for under $100. Even in SAS drives, the smallest disk obtainable today is 73GB. If you're throwing SSDs into the mix, or considering very high-performance environments where load-balancing is of concern, then separating partitions onto separate disks still makes sense.
A modern UNIX does not die catastrophically upon running out of disk space. It can definitely be a pain in the ass to recover from that situation, and if you wish to avoid any and all possibility of dealing with that headache, then by all means keep separate partitions/filesystems for high-growth-potential data.
Another rationale, related to the historical cost of acquiring disk storage, is that keeping all these filesystems separate makes it much easier to migrate them onto larger disks when the original disk fills up, without the need to migrate additional, unrelated filesystems to that new disk. Again, that situation just doesn't happen very often today (in my world), and with tools like resize2fs, it's largely irrelevant anyway.
Another rationale, under Linux and other i386-compatible unices, was to ensure that the system would always be bootable: older BIOSes could not load kernels from beyond cylinder 1024, so once whole-disk sizes increased beyond 1024 cylinders, even with geometry translation, a separate /boot partition was required to hold the kernel image so it could be loaded in real mode.
Any modern LBA-enabled BIOS can boot a kernel from any location on disk... at least up to the 2-point-something TB mark. If you're using 3TB+ disks (whether virtual or physical) then a separate boot partition might still be a good idea.
Another rationale, although somewhat newer, is that for 'hardened' systems, it's possible to mount everything except / with noexec, and in turn to mount / ro. (There are dozens of additional tweaks that can be done.) Run-of-the-mill commercial and personal systems are rarely that secure, and I don't work for an intelligence agency, nor do I work with so-called "high-value targets" or assets. If I need a system *that* secure, I just run OpenBSD instead of Linux and lock it down well. If I need to run an inherently insecure service on a system that needs to be secure, I design it to be a throwaway and use a VM environment to refresh the system from a snapshot nightly (or hourly...) so that despite it inevitably becoming compromised, I don't much care.
Arguing against these factors is supportability and sustainability. IF you have a dedicated admin team, who are all familiar with your custom filesystem layout, AND you have business continuity plans in place (which also implies your custom setup *and its rationale* is fairly well documented) then by all means go crazy optimizing your system to the Nth degree, which would imply not only a multitude of filesystems but also custom mkfs parameters, custom TCP stack sysctls, carefully hand-crafted configuration files, a custom selinux policy... etc, etc, etc. And you're also using a custom-compiled Gentoo, right? Or maybe you compiled your own distro from scratch. :-)
In my opinion, even RHEL's default of / + /boot + swap over LVM is too complicated, although it has the distinct advantage of being "standard", and legions of systems in existence today use that layout.
In my world of DNS, DHCP, LDAP, RADIUS, etc. servers, keeping separate filesystems provides no benefit. Using LVM provides no benefit. I create a single root partition, generally using ext3fs, either on hardware RAID or on MD RAID. (Oh, and there's another reason to keep a separate /boot: neither LILO nor GRUB can boot directly off any kind of software RAID except RAID 1. So if / is RAID[03456] you need a separate /boot partition.) I create a swap partition if the application seems to require any swap.
If I were a lab administrator, for example, with unprivileged - and often untrusted - users abusing the systems every day, my decisions would be vastly different. I do use different layouts for mail servers and sometimes for database servers, for example, because they have more complex requirements.
In all cases I aim for "hands-off" management, and simplicity of management - when I have to log into a system to determine what's wrong, I want to deal with the smallest possible number of variables, because my "time is money". So a single root partition, no swap, no LVM, no /boot, makes diagnosis easier. It doesn't always make resolution *faster*, but it usually narrows down the option space to the point where I can come up with one or two top recommendations to solve the problem almost immediately. That makes it easier to get buy-in from whoever I'm working for. (It also makes it easier to come to clear decisions in my own mind, because my default inclination is to make things too complicated.)
YMMV.
-Adam Thompson athompso@athompso.net
On 11-07-11 03:59 PM, Trevor Cordes wrote:
I know I responded to this earlier too...
At the lab where I volunteer the balance is to consider both the assembly line nature of the work as well as the need for the user (as you saw last year - thanks again BTW).
In a concession to the assembly line part we want to keep things simple. We'd rather not have to recover an entire disk if all we have to do is recover a partition. Since reinstalling the programs is fairly trivial if we decide to do a clean install, we can separate out the "/" directory from the user data.
If the users decide to experiment with upgrades (we don't really care), it is easy to retain their data and settings if the "/home" area is in its own partition. There have been instances on various M$ machines we give out where the client's family or friends messed up the system. By giving them separate unprivileged accounts, the likelihood of this happening in a *NIX environment is lower.
We still put on a swap partition just because...
Nice and simple. Quick to maintain. The only problems we've had are when some MTS tech tells the client that Linux (or OS X) doesn't work on the MTS system. Actually, we had one guy exchange his system for an M$ system because a couple of gambling sites wouldn't let *NIX systems connect.
Later Mike