Clinton Pavlovic


Installing Debian GNU/Linux with custom sized partitions


Recently I changed the operating system on my home computer from Windows 7 to Debian GNU/Linux. Up until now I have had a fairly good run using Debian and I haven’t encountered any problems that I couldn’t fix with a little bit of effort and a lot of reading.

When performing a recent “apt-get dist-upgrade” I encountered this message:

failed to write (No space left on device)

Strange, because I thought I had more than enough space on my system. I checked the space left on my partitions:

$ df -h

The “-h” flag makes the output human readable; I got this response:

Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/pc-root  314M  277M   17M  95% /
udev                  10M     0   10M   0% /dev
tmpfs                800M  1.5M  798M   1% /run
tmpfs                5.0M     0  5.0M   0% /run/lock
tmpfs                1.6G  172K  1.6G   1% /run/shm
/dev/sda1            228M   23M  194M  11% /boot
/dev/mapper/pc-home  431G  1.6G  408G   1% /home
/dev/mapper/pc-tmp   360M  2.1M  335M   1% /tmp
/dev/mapper/pc-usr   8.2G  4.5G  3.3G  58% /usr
/dev/mapper/pc-var   2.7G  569M  2.0G  22% /var

The root partition has run out of space.
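This kind of check can also be scripted. A minimal sketch that flags any nearly full filesystem (the 90% threshold is an arbitrary choice for illustration):

```shell
# Flag any mounted filesystem that is more than 90% full.
# -P forces POSIX output so long device names do not wrap;
# $5 is the Use% column, $6 is the mount point.
df -hP | awk 'NR > 1 && $5+0 > 90 { print $6 " is " $5 " full" }'
```

Run on the system above, this would single out the root partition.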

First, I attempted to free some space with the following:

# apt-get autoremove # Removes all unused packages automatically

# apt-get autoclean # Erases old downloaded archive files

This only freed about another 5% of space, which is clearly still not enough.
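To find out what else is eating the root partition, du can break the usage down by directory (a quick sketch; --max-depth and sort -h are GNU extensions, which is fine on Debian):

```shell
# Show the size of each top-level directory on the root filesystem,
# smallest first; -x keeps du from crossing into other mounted
# filesystems such as /home or /usr.
du -xh --max-depth=1 / 2>/dev/null | sort -h
```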

During the original installation of this system I used the default partition sizes, which led to the 314MB root partition. There are ways to increase the size of a partition from a live GNU/Linux CD without repartitioning the entire hard drive and reinstalling the operating system, but I had been thinking of doing a fresh install anyway, so this was the excuse I needed.

The first thing to determine is how much space should be allocated to each partition, so that similar problems are not encountered soon after the fresh install.

This is briefly what each partition does:

/       The root of the filesystem hierarchy; anything not on a separate partition lives here.
/boot   The kernel and boot loader files.
/home   Users’ personal files and settings.
/tmp    Temporary files, usually cleared at boot.
/usr    Installed programs, libraries and documentation.
/var    Variable data such as logs, caches and mail spools.

To customise the size of each partition during the installation process I followed these steps:

1. At the “Partition disks” step, select “Manual” instead of a guided option.
2. Create a small partition for /boot.
3. Select “Configure the Logical Volume Manager” and create a volume group on the remaining space.
4. Create a logical volume for each of root, /home, /tmp, /usr, /var and swap, specifying the desired size of each.
5. Back in the partitioning menu, assign a filesystem and mount point to each logical volume, then select “Finish partitioning and write changes to disk”.

After the reinstallation my partitions look like this:

Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/pc-root   46G  348M   44G   1% /
udev                  10M     0   10M   0% /dev
tmpfs                799M  764K  799M   1% /run
tmpfs                5.0M     0  5.0M   0% /run/lock
tmpfs                1.6G   92K  1.6G   1% /run/shm
/dev/sda1            228M   41M  172M  19% /boot
/dev/mapper/pc-home   92G  106M   87G   1% /home
/dev/mapper/pc-tmp   922M  1.3M  857M   1% /tmp
/dev/mapper/pc-usr    46G  3.0G   41G   7% /usr
/dev/mapper/pc-var   9.1G  1.3G  7.4G  15% /var
none                 4.0K     0  4.0K   0% /sys/fs/cgroup

There seems to be a lot of debate about the optimal sizes for swap and root partitions, but since storage space is not a concern on my system, I opted for 10GB of swap space and oversized partitions for the rest of the system. If storage space is scarce, this would not be an optimal allocation.
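For reference, one common rule of thumb from that debate ties swap size to installed RAM, which can be read straight from /proc/meminfo. A sketch, assuming the simple equal-to-RAM convention (only one of several in use):

```shell
#!/bin/sh
# Read installed RAM from /proc/meminfo (the value is in kB) and
# apply the equal-to-RAM rule of thumb for swap sizing; systems
# that hibernate usually want at least this much swap.
ram_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
echo "RAM: $((ram_kb / 1024)) MB, suggested swap: $((ram_kb / 1024)) MB"
```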

To see a tree view of the partitions and their mount points, the command is:

$ lsblk
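The output can also be narrowed to chosen columns (NAME, SIZE and MOUNTPOINT are standard lsblk column names):

```shell
# Show only the device name, size and mount point, still as a tree.
lsblk -o NAME,SIZE,MOUNTPOINT
```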
