Good morning,
I have a stand-alone 2013 workstation (11 years old). It's dual-boot; the other OS is windows-7.
I'm planning to upgrade from f-38 to f-39 in a few days. I've had hard drive space issues before. So I'd like to know how to determine in advance if I have enough hard drive space for the upgrade. Output from "df" and "ls -al /boot" are at the bottom of this post.
How do I do it?
If there's anything else you need/want to know, please ask.
Thank-you in advance.
===============
-bash.5[~]: df
Filesystem     1K-blocks      Used Available Use% Mounted on
devtmpfs            4096         0      4096   0% /dev
tmpfs            8158692         0   8158692   0% /dev/shm
tmpfs            3263480      1812   3261668   1% /run
/dev/sda6       51422028  30401248  18382956  63% /
tmpfs            8158696         0   8158696   0% /tmp
/dev/sda3         485348    339555    116097  75% /boot
/dev/sda7      947550748  32255828 867135280   4% /home
tmpfs            1631736       256   1631480   1% /run/user/1001
-bash.6[~]:
===============
-bash.8[~]: ls -al /boot
total 319147
dr-xr-xr-x.  6 root root      5120 Apr  4 13:19 .
dr-xr-xr-x. 22 root root      4096 Oct  5  2023 ..
-rw-r--r--.  1 root root    269405 Mar 17 18:00 config-6.7.10-100.fc38.x86_64
-rw-r--r--.  1 root root    269340 Mar 26 18:00 config-6.7.11-100.fc38.x86_64
drwx------.  3 root root      1024 Jan 18  2023 efi
drwx------.  6 root root      1024 Apr  5 07:35 grub2
-rw-------.  1 root root 116291735 Jun  1  2023 initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img
-rw-------.  1 root root  74087638 Mar 28 10:42 initramfs-6.7.10-100.fc38.x86_64.img
-rw-------.  1 root root  74047482 Apr  4 13:19 initramfs-6.7.11-100.fc38.x86_64.img
drwxr-xr-x.  3 root root      1024 Oct 11  2018 loader
drwx------.  2 root root     12288 Mar 17  2013 lost+found
-rw-r--r--.  1 root root    147744 Jan  6 17:00 memtest86+x64.bin
lrwxrwxrwx.  1 root root        46 Mar 28 10:42 symvers-6.7.10-100.fc38.x86_64.xz -> /lib/modules/6.7.10-100.fc38.x86_64/symvers.xz
lrwxrwxrwx.  1 root root        46 Apr  4 13:19 symvers-6.7.11-100.fc38.x86_64.xz -> /lib/modules/6.7.11-100.fc38.x86_64/symvers.xz
-rw-r--r--.  1 root root   8852552 Mar 17 18:00 System.map-6.7.10-100.fc38.x86_64
-rw-r--r--.  1 root root   8853161 Mar 26 18:00 System.map-6.7.11-100.fc38.x86_64
-rwxr-xr-x.  1 root root  14329896 Jun  1  2023 vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
-rw-r--r--.  1 root root       161 Nov  1 18:00 .vmlinuz-6.5.10-200.fc38.x86_64.hmac
-rw-r--r--.  1 root root       160 Oct 19 18:00 .vmlinuz-6.5.8-200.fc38.x86_64.hmac
-rwxr-xr-x.  1 root root  14802760 Mar 17 18:00 vmlinuz-6.7.10-100.fc38.x86_64
-rw-r--r--.  1 root root       161 Mar 17 18:00 .vmlinuz-6.7.10-100.fc38.x86_64.hmac
-rwxr-xr-x.  1 root root  14790472 Mar 26 18:00 vmlinuz-6.7.11-100.fc38.x86_64
-rw-r--r--.  1 root root       161 Mar 26 18:00 .vmlinuz-6.7.11-100.fc38.x86_64.hmac
-bash.9[~]:
===============
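[Editor's note: the pre-upgrade space check asked about above can be scripted. This sketch is not part of the original thread; the mount points (`/` for downloaded packages, `/boot` for kernels) come from the df output above, and the numbers printed are whatever df reports on the machine it runs on.]

```shell
#!/bin/sh
# Sketch: report the space available on the filesystems an upgrade touches.
# /boot is usually the tight one; / holds the downloaded packages.
for mp in / /boot; do
    if [ -d "$mp" ]; then
        avail_kb=$(df --output=avail "$mp" | tail -n 1 | tr -d ' ')
        echo "$mp available: ${avail_kb} KB"
    fi
done
```

`df --output=avail` (GNU coreutils) prints just the available column, which is easier to script against than parsing the full table.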
On 4/5/24 09:09, home user wrote:
Good morning,
I have a stand-alone 2013 workstation (11 years old). It's dual-boot; the other OS is windows-7.
I'm planning to upgrade from f-38 to f-39 in a few days. I've had hard drive space issues before. So I'd like to know how to determine in advance if I have enough hard drive space for the upgrade. Output from "df" and "ls -al /boot" are at the bottom of this post.
How do I do it?
If there's anything else you need/want to know, please ask.
Thank-you in advance.
===============
-bash.5[~]: df
"df -h" is much more pleasant.
Filesystem     1K-blocks      Used Available Use% Mounted on
devtmpfs            4096         0      4096   0% /dev
tmpfs            8158692         0   8158692   0% /dev/shm
tmpfs            3263480      1812   3261668   1% /run
/dev/sda6       51422028  30401248  18382956  63% /
Assuming I'm reading this correctly, you have 18GB free, so this is fine.
tmpfs            8158696         0   8158696   0% /tmp
/dev/sda3         485348    339555    116097  75% /boot
This looks like just over 100MB which could possibly cause a problem.
/dev/sda7      947550748  32255828 867135280   4% /home
tmpfs            1631736       256   1631480   1% /run/user/1001
Thank-you, Samuel.
On 4/5/24 11:34 AM, Samuel Sieb wrote:
On 4/5/24 09:09, home user wrote:
===============
-bash.5[~]: df
"df -h" is much more pleasant.
-bash.1[~]: df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           3.2G  1.8M  3.2G   1% /run
/dev/sda6        50G   29G   18G  63% /
tmpfs           7.8G     0  7.8G   0% /tmp
/dev/sda3       474M  332M  114M  75% /boot
/dev/sda7       904G   31G  827G   4% /home
tmpfs           1.6G  256K  1.6G   1% /run/user/1001
-bash.2[~]:
I see what you mean.
Filesystem     1K-blocks      Used Available Use% Mounted on
devtmpfs            4096         0      4096   0% /dev
tmpfs            8158692         0   8158692   0% /dev/shm
tmpfs            3263480      1812   3261668   1% /run
/dev/sda6       51422028  30401248  18382956  63% /
Assuming I'm reading this correctly, you have 18GB free, so this is fine.
tmpfs            8158696         0   8158696   0% /tmp
/dev/sda3         485348    339555    116097  75% /boot
This looks like just over 100MB which could possibly cause a problem.
It was only in late February, a mere 1 1/2 months ago, that I cut back by one old kernel. I now have the current kernel + one old kernel + the rescue kernel. The kernel really grew that much in so short a time?! ....or was the kernel a seed that is now sprouting?!
Taking a cue from your "df -h" tip, here's a "more pleasant" ls of /boot (first several lines only)...
-----
-bash.4[~]: ls -ahlS /boot
total 312M
-rw-------. 1 root root 111M Jun  1  2023 initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img
-rw-------. 1 root root  71M Mar 28 10:42 initramfs-6.7.10-100.fc38.x86_64.img
-rw-------. 1 root root  71M Apr  4 13:19 initramfs-6.7.11-100.fc38.x86_64.img
-rwxr-xr-x. 1 root root  15M Mar 17 18:00 vmlinuz-6.7.10-100.fc38.x86_64
-rwxr-xr-x. 1 root root  15M Mar 26 18:00 vmlinuz-6.7.11-100.fc38.x86_64
-rwxr-xr-x. 1 root root  14M Jun  1  2023 vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
-rw-r--r--. 1 root root 8.5M Mar 26 18:00 System.map-6.7.11-100.fc38.x86_64
-rw-r--r--. 1 root root 8.5M Mar 17 18:00 System.map-6.7.10-100.fc38.x86_64
-rw-r--r--. 1 root root 264K Mar 17 18:00 config-6.7.10-100.fc38.x86_64
-rw-r--r--. 1 root root 264K Mar 26 18:00 config-6.7.11-100.fc38.x86_64
-----
I don't recall using the rescue kernel in a while, but I have used older kernels occasionally. How do I get rid of the rescue kernel? ....or is there a better solution?
On 4/5/24 12:11, home user wrote:
On 4/5/24 11:34 AM, Samuel Sieb wrote:
On 4/5/24 09:09, home user wrote:
tmpfs            8158696         0   8158696   0% /tmp
/dev/sda3         485348    339555    116097  75% /boot
This looks like just over 100MB which could possibly cause a problem.
It was only in late February, a mere 1 1/2 months ago, that I cut back by one old kernel. I now have the current kernel + one old kernel + the rescue kernel. The kernel really grew that much in so short a time?! ....or was the kernel a seed that is now sprouting?!
I remember that. You probably have just enough space to do the upgrade.
I don't recall using the rescue kernel in a while, but I have used older kernels occasionally. How do I get rid of the rescue kernel? ....or is there a better solution?
The rescue kernel is primarily for the case where you change the hardware to something different enough from the previous hardware that the drivers are not available. It doesn't seem likely that you are going to do something like that, so you could remove it.
Run the following command: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
Then you can delete the rescue files from /boot and the rescue file from the loader entries below there.
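[Editor's note: the two steps above can be rehearsed before touching the real system. This sketch recreates the layout in a throwaway directory, so nothing here modifies the actual /boot or /etc; on the real machine the paths are /etc/dracut.conf.d/02-rescue.conf, /boot, and /boot/loader/entries, run as root. The machine-id hash is the one from this thread.]

```shell
#!/bin/sh
# Rehearsal of the rescue-removal steps in a scratch directory.
# Nothing below touches the real /boot or /etc.
set -e
scratch=$(mktemp -d)
mid=70857e3fb05849139515e66a3fdc6b38   # machine-id hash from the thread
mkdir -p "$scratch/boot/loader/entries" "$scratch/etc/dracut.conf.d"
touch "$scratch/boot/initramfs-0-rescue-$mid.img" \
      "$scratch/boot/vmlinuz-0-rescue-$mid" \
      "$scratch/boot/loader/entries/$mid-0-rescue.conf"

# Step 1: tell dracut not to regenerate the rescue image on kernel installs.
echo 'dracut_rescue_image="no"' > "$scratch/etc/dracut.conf.d/02-rescue.conf"

# Step 2: remove the rescue kernel, its initramfs, and its loader entry.
rm "$scratch/boot/initramfs-0-rescue-$mid.img" \
   "$scratch/boot/vmlinuz-0-rescue-$mid" \
   "$scratch/boot/loader/entries/$mid-0-rescue.conf"
```

Step 1 matters as much as step 2: without the dracut.conf.d snippet, the next kernel install would regenerate the ~100MB rescue image and the space would be lost again.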
Thank-you, Samuel. I was severely side-tracked Friday afternoon. I'm ready to resume now.
On 4/5/24 1:43 PM, Samuel Sieb wrote:
On 4/5/24 12:11, home user wrote:
On 4/5/24 11:34 AM, Samuel Sieb wrote:
On 4/5/24 09:09, home user wrote:
tmpfs            8158696         0   8158696   0% /tmp
/dev/sda3         485348    339555    116097  75% /boot
This looks like just over 100MB which could possibly cause a problem.
It was only in late February, a mere 1 1/2 months ago, that I cut back by one old kernel. I now have the current kernel + one old kernel + the rescue kernel. The kernel really grew that much in so short a time?! ....or was the kernel a seed that is now sprouting?!
I remember that. You probably have just enough space to do the upgrade.
I don't recall using the rescue kernel in a while, but I have used older kernels occasionally. How do I get rid of the rescue kernel? ....or is there a better solution?
The rescue kernel is primarily for the case where you change the hardware to something different enough from the previous hardware that the drivers are not available. It doesn't seem likely that you are going to do something like that, so you could remove it.
Run the following command: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
Then you can delete the rescue files from /boot and the rescue file from the loader entries below there.
What is the best practice way to do that: two individual "rm" commands, a "dnf remove" command (dnf remove what?), or something else (what?)?
For the "loader entries", you're referring to "/boot/loader/entries/70857e3fb05849139515e66a3fdc6b38-0-rescue.conf"? Anything else? Just an "rm" command, or what?
On 4/7/24 14:17, home user wrote:
On 4/5/24 1:43 PM, Samuel Sieb wrote:
Run the following command: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
Then you can delete the rescue files from /boot and the rescue file from the loader entries below there.
What is the best practice way to do that: two individual "rm" commands, a "dnf remove" command (dnf remove what?), or something else (what?)?
Just "rm". They aren't managed by any package.
For the "loader entries", you're referring to "/boot/loader/entries/70857e3fb05849139515e66a3fdc6b38-0-rescue.conf"? Anything else? Just an "rm" command, or what?
Yes, just "rm".
On 4/7/24 8:11 PM, Samuel Sieb wrote:
On 4/7/24 14:17, home user wrote:
On 4/5/24 1:43 PM, Samuel Sieb wrote:
Run the following command: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
Then you can delete the rescue files from /boot and the rescue file from the loader entries below there.
What is the best practice way to do that: two individual "rm" commands, a "dnf remove" command (dnf remove what?), or something else (what?)?
Just "rm". They aren't managed by any package.
For the "loader entries", you're referring to "/boot/loader/entries/70857e3fb05849139515e66a3fdc6b38-0-rescue.conf"? Anything else? Just an "rm" command, or what?
Yes, just "rm".
-bash.3[~]: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
-bash.4[~]: cd /boot
-bash.5[boot]: ls *rescue*
initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img  vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
-bash.6[boot]: rm initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img
rm: remove regular file 'initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img'? y
-bash.7[boot]: rm vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
rm: remove regular file 'vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38'? y
-bash.8[boot]: cd loader/entries/
-bash.9[entries]: ls *rescue*
70857e3fb05849139515e66a3fdc6b38-0-rescue.conf
-bash.10[entries]: rm 70857e3fb05849139515e66a3fdc6b38-0-rescue.conf
rm: remove regular file '70857e3fb05849139515e66a3fdc6b38-0-rescue.conf'? y
-bash.11[entries]:
The upgrade is planned for Thursday. Thank-you, Samuel.
On 4/7/24 8:20 PM, home user wrote:
On 4/7/24 8:11 PM, Samuel Sieb wrote:
On 4/7/24 14:17, home user wrote:
On 4/5/24 1:43 PM, Samuel Sieb wrote:
Run the following command: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
Then you can delete the rescue files from /boot and the rescue file from the loader entries below there.
What is the best practice way to do that: two individual "rm" commands, a "dnf remove" command (dnf remove what?), or something else (what?)?
Just "rm". They aren't managed by any package.
For the "loader entries", you're referring to "/boot/loader/entries/70857e3fb05849139515e66a3fdc6b38-0-rescue.conf"? Anything else? Just an "rm" command, or what?
Yes, just "rm".
-bash.3[~]: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
-bash.4[~]: cd /boot
-bash.5[boot]: ls *rescue*
initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img  vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
-bash.6[boot]: rm initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img
rm: remove regular file 'initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img'? y
-bash.7[boot]: rm vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
rm: remove regular file 'vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38'? y
-bash.8[boot]: cd loader/entries/
-bash.9[entries]: ls *rescue*
70857e3fb05849139515e66a3fdc6b38-0-rescue.conf
-bash.10[entries]: rm 70857e3fb05849139515e66a3fdc6b38-0-rescue.conf
rm: remove regular file '70857e3fb05849139515e66a3fdc6b38-0-rescue.conf'? y
-bash.11[entries]:
The upgrade is planned for Thursday. Thank-you, Samuel.
I rebooted. The dracut entry is gone from the grub menu. Disk usage is now:
-----
-bash.1[~]: df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           3.2G  1.8M  3.2G   1% /run
/dev/sda6        50G   29G   18G  63% /
/dev/sda3       474M  208M  238M  47% /boot
tmpfs           7.8G     0  7.8G   0% /tmp
/dev/sda7       904G   31G  827G   4% /home
tmpfs           1.6G  256K  1.6G   1% /run/user/1001
-bash.2[~]:
-----
Following Samuel's instructions worked. I should now have plenty of space for the f-38 to f-39 upgrade this Thursday. I've labelled this thread SOLVED. Thank-you Samuel.
One other thing you might want to do: change the number of kernels to keep. The default is 3. I have one old IBM R61 that has had a lot of updates over the years, and its boot partition was too small. As others suggested, removing the rescue image frees space, but it gets rebuilt when a new kernel is installed. That can be turned off, but I don't recall that option at the moment.
/etc/dnf/dnf.conf contains the install limit. On the machine with the small boot partition, I just changed it to 2 instead of the default of 3.
[main]
gpgcheck=1
installonly_limit=3
timeout=300
clean_requirements_on_remove=true
max_parallel_downloads=20
fastestmirror=False
minrate=128K
deltarpm=false
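[Editor's note: the suggested change can be scripted. This sketch edits a copy of the config so it is safe to run anywhere; on the real system the file is /etc/dnf/dnf.conf and the edit needs root. Note that the running kernel plus one spare is the practical minimum, so 2 is as low as it is sensible to go.]

```shell
#!/bin/sh
# Sketch: lower installonly_limit (how many kernels dnf keeps) from 3 to 2,
# applied to a copy of the config rather than the real /etc/dnf/dnf.conf.
conf=$(mktemp)
printf '%s\n' '[main]' 'gpgcheck=1' 'installonly_limit=3' 'timeout=300' > "$conf"
sed -i 's/^installonly_limit=.*/installonly_limit=2/' "$conf"
grep '^installonly_limit=' "$conf"   # prints: installonly_limit=2
```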
On 7 Apr 2024 at 20:33, home user wrote:
On 4/7/24 8:20 PM, home user wrote:
On 4/7/24 8:11 PM, Samuel Sieb wrote:
On 4/7/24 14:17, home user wrote:
On 4/5/24 1:43 PM, Samuel Sieb wrote:
Run the following command: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
Then you can delete the rescue files from /boot and the rescue file from the loader entries below there.
What is the best practice way to do that: two individual "rm" commands, a "dnf remove" command (dnf remove what?), or something else (what?)?
Just "rm". They aren't managed by any package.
For the "loader entries", you're referring to "/boot/loader/entries/70857e3fb05849139515e66a3fdc6b38-0-rescue.conf"? Anything else? Just an "rm" command, or what?
Yes, just "rm".
-bash.3[~]: echo 'dracut_rescue_image="no"' > /etc/dracut.conf.d/02-rescue.conf
-bash.4[~]: cd /boot
-bash.5[boot]: ls *rescue*
initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img  vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
-bash.6[boot]: rm initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img
rm: remove regular file 'initramfs-0-rescue-70857e3fb05849139515e66a3fdc6b38.img'? y
-bash.7[boot]: rm vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38
rm: remove regular file 'vmlinuz-0-rescue-70857e3fb05849139515e66a3fdc6b38'? y
-bash.8[boot]: cd loader/entries/
-bash.9[entries]: ls *rescue*
70857e3fb05849139515e66a3fdc6b38-0-rescue.conf
-bash.10[entries]: rm 70857e3fb05849139515e66a3fdc6b38-0-rescue.conf
rm: remove regular file '70857e3fb05849139515e66a3fdc6b38-0-rescue.conf'? y
-bash.11[entries]:
The upgrade is planned for Thursday. Thank-you, Samuel.
I rebooted. The dracut entry is gone from the grub menu. Disk usage is now:
-bash.1[~]: df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           3.2G  1.8M  3.2G   1% /run
/dev/sda6        50G   29G   18G  63% /
/dev/sda3       474M  208M  238M  47% /boot
tmpfs           7.8G     0  7.8G   0% /tmp
/dev/sda7       904G   31G  827G   4% /home
tmpfs           1.6G  256K  1.6G   1% /run/user/1001
-bash.2[~]:
Following Samuel's instructions worked. I should now have plenty of space for the f-38 to f-39 upgrade this Thursday. I've labelled this thread SOLVED. Thank-you Samuel.
+------------------------------------------------------------+
Michael D. Setzer II - Computer Science Instructor (Retired)
mailto:mikes@guam.net  mailto:msetzerii@gmail.com  mailto:msetzerii@gmx.com
Guam - Where America's Day Begins
G4L Disk Imaging Project maintainer http://sourceforge.net/projects/g4l/
+------------------------------------------------------------+
On 4/7/24 8:50 PM, Michael D. Setzer II via users wrote:
One other thing you might want to do: change the number of kernels to keep. The default is 3. I have one old IBM R61 that has had a lot of updates over the years, and its boot partition was too small. As others suggested, removing the rescue image frees space, but it gets rebuilt when a new kernel is installed. That can be turned off, but I don't recall that option at the moment.
Samuel provided that with the echo command.
/etc/dnf/dnf.conf contains the install limit. On the machine with the small boot partition, I just changed it to 2 instead of the default of 3.
[main]
gpgcheck=1
installonly_limit=3
timeout=300
clean_requirements_on_remove=true
max_parallel_downloads=20
fastestmirror=False
minrate=128K
deltarpm=false
This was done in a previous thread, back in February. I currently keep only one old kernel. But thank-you for the suggestion.