On 10/1/23 11:20, Stephen Morris wrote:
> On 9/1/23 21:26, John Pilkington wrote:
>>
>>>>>>
>>>>>> Maybe this is
>>>>>> https://bugzilla.kernel.org/show_bug.cgi?id=216895, referred to
>>>>>> in this current thread:
>>>>>>
>>>>>> Re: Fedora 37 hangs after graphical login
>>>>>>
>>>>>> I have two cifs mounting lines in /etc/fstab; only one has the
>>>>>> hardware connected.
>>
>>>>
>>>> I have cifs... vers=3.0,
>>>>
>>>> //192.168.1.XX/Public /mnt/nas1a cifs credentials=<pathtocredfile>,iocharset=utf8,gid=1000,uid=1000,vers=3.0,file_mode=0777,dir_mode=0777 0 0
>>
>> Here is a 'journalctl' output from, first, 6.0.16 (which hung) and
>> then 6.0.15, which completed. Maybe it will help. Or perhaps just
>> wait for 6.0.18...
>>
>>
>> nas1a is on the network, nas2a is not. The main difference here
>> appears to be that with 6.0.16 sddm is not being called.
>>
>> {{{
>>
>>
>> [john@HPFed ~]$ sudo journalctl --since 2023-01-07 | grep -A 20 nas2a
>> Jan 07 10:04:44 HPFed systemd[1]: Mounting mnt-nas2a.mount -
>> /mnt/nas2a...
>> Jan 07 10:04:44 HPFed systemd[1]: Starting rpc-statd-notify.service
>> - Notify NFS peers of a restart...
>> Jan 07 10:04:44 HPFed systemd[1]: iscsi.service: Unit cannot be
>> reloaded because it is inactive.
>> Jan 07 10:04:45 HPFed sm-notify[1304]: Version 2.6.2 starting
>> Jan 07 10:04:45 HPFed systemd[1]: Started rpc-statd-notify.service -
>> Notify NFS peers of a restart.
>> Jan 07 10:04:45 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel
>> msg='unit=rpc-statd-notify comm="systemd"
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
>> res=success'
>> Jan 07 10:04:45 HPFed kernel: FS-Cache: Loaded
>> Jan 07 10:04:45 HPFed kernel: Key type dns_resolver registered
>> Jan 07 10:04:46 HPFed kernel: Key type cifs.spnego registered
>> Jan 07 10:04:46 HPFed kernel: Key type cifs.idmap registered
>> Jan 07 10:04:46 HPFed kernel: CIFS: Attempting to mount
>> \\192.168.1.209\Public
>> Jan 07 10:04:51 HPFed chronyd[1041]: Selected source 83.151.207.133 (2.fedora.pool.ntp.org)
>> Jan 07 10:04:51 HPFed chronyd[1041]: System clock TAI offset set to
>> 37 seconds
>> Jan 07 10:04:52 HPFed mount[1305]: mount error(113): could not
>> connect to 192.168.1.209Unable to find suitable address.
>> Jan 07 10:04:52 HPFed kernel: CIFS: VFS: Error connecting to socket.
>> Aborting operation.
>> Jan 07 10:04:52 HPFed kernel: CIFS: VFS: cifs_mount failed w/return
>> code = -113
>> Jan 07 10:04:52 HPFed kernel: CIFS: Attempting to mount
>> \\192.168.1.67\Public
>> Jan 07 10:04:52 HPFed systemd[1]: mnt-nas2a.mount: Mount process
>> exited, code=exited, status=32/n/a
>> Jan 07 10:04:52 HPFed systemd[1]: mnt-nas2a.mount: Failed with
>> result 'exit-code'.
>> Jan 07 10:04:52 HPFed systemd[1]: Failed to mount mnt-nas2a.mount -
>> /mnt/nas2a.
>> Jan 07 10:04:52 HPFed systemd[1]: Dependency failed for
>> remote-fs.target - Remote File Systems.
>> Jan 07 10:04:52 HPFed systemd[1]: remote-fs.target: Job
>> remote-fs.target/start failed with result 'dependency'.
>> Jan 07 10:04:52 HPFed systemd[1]: Starting
>> systemd-user-sessions.service - Permit User Sessions...
>> Jan 07 10:04:52 HPFed systemd[1]: Finished
>> systemd-user-sessions.service - Permit User Sessions.
>> Jan 07 10:04:52 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel
>> msg='unit=systemd-user-sessions comm="systemd"
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
>> res=success'
>> Jan 07 10:04:52 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel msg='unit=atd
>> comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
>> terminal=? res=success'
>> Jan 07 10:04:52 HPFed systemd[1]: Started atd.service - Deferred
>> execution scheduler.
>> Jan 07 10:04:52 HPFed systemd[1]: Started crond.service - Command
>> Scheduler.
>> Jan 07 10:04:52 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel msg='unit=crond
>> comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
>> terminal=? res=success'
>> Jan 07 10:04:52 HPFed systemd[1]: Starting
>> plymouth-quit-wait.service - Hold until boot process finishes up...
>> Jan 07 10:04:52 HPFed systemd[1]: Starting plymouth-quit.service -
>> Terminate Plymouth Boot Screen...
>> Jan 07 10:04:52 HPFed systemd[1]: Mounted mnt-nas1a.mount - /mnt/nas1a.
>> Jan 07 10:04:52 HPFed systemd[1]: Received SIGRTMIN+21 from PID 346
>> (plymouthd).
>> Jan 07 10:04:52 HPFed systemd[1]: Received SIGRTMIN+21 from PID 346
>> (plymouthd).
>> Jan 07 10:04:52 HPFed systemd[1]: Finished
>> plymouth-quit-wait.service - Hold until boot process finishes up.
>> Jan 07 10:04:52 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel
>> msg='unit=plymouth-quit-wait comm="systemd"
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
>> res=success'
>> Jan 07 10:04:52 HPFed systemd[1]: Finished plymouth-quit.service -
>> Terminate Plymouth Boot Screen.
>> Jan 07 10:04:52 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-quit
>> comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
>> terminal=? res=success'
>> Jan 07 10:04:52 HPFed crond[1357]: (CRON) STARTUP (1.5.7)
>> Jan 07 10:04:52 HPFed crond[1357]: (CRON) INFO (RANDOM_DELAY will be
>> scaled with factor 56% if used.)
>> --
>>
>>
>> Jan 07 10:12:02 HPFed systemd[1]: Mounting mnt-nas2a.mount -
>> /mnt/nas2a...
>> Jan 07 10:12:02 HPFed systemd[1]: Starting rpc-statd-notify.service
>> - Notify NFS peers of a restart...
>> Jan 07 10:12:02 HPFed systemd[1]: iscsi.service: Unit cannot be
>> reloaded because it is inactive.
>> Jan 07 10:12:02 HPFed sm-notify[1331]: Version 2.6.2 starting
>> Jan 07 10:12:02 HPFed systemd[1]: Started rpc-statd-notify.service -
>> Notify NFS peers of a restart.
>> Jan 07 10:12:02 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel
>> msg='unit=rpc-statd-notify comm="systemd"
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
>> res=success'
>> Jan 07 10:12:03 HPFed kernel: FS-Cache: Loaded
>> Jan 07 10:12:03 HPFed kernel: Key type dns_resolver registered
>> Jan 07 10:12:03 HPFed kernel: Key type cifs.spnego registered
>> Jan 07 10:12:03 HPFed kernel: Key type cifs.idmap registered
>> Jan 07 10:12:03 HPFed kernel: CIFS: Attempting to mount
>> \\192.168.1.67\Public
>> Jan 07 10:12:03 HPFed kernel: CIFS: Attempting to mount
>> \\192.168.1.209\Public
>> Jan 07 10:12:03 HPFed systemd[1]: Mounted mnt-nas1a.mount - /mnt/nas1a.
>> Jan 07 10:12:05 HPFed abrtd[929]: Lock file '.lock' was locked by
>> process 1836, but it crashed?
>> Jan 07 10:12:07 HPFed chronyd[1051]: Selected source 185.177.149.33 (2.fedora.pool.ntp.org)
>> Jan 07 10:12:07 HPFed chronyd[1051]: System clock TAI offset set to
>> 37 seconds
>> Jan 07 10:12:08 HPFed chronyd[1051]: Selected source 217.114.59.3 (2.fedora.pool.ntp.org)
>> Jan 07 10:12:10 HPFed mount[1332]: mount error(113): could not
>> connect to 192.168.1.209Unable to find suitable address.
>> Jan 07 10:12:10 HPFed kernel: CIFS: VFS: Error connecting to socket.
>> Aborting operation.
>> Jan 07 10:12:10 HPFed kernel: CIFS: VFS: cifs_mount failed w/return
>> code = -113
>> Jan 07 10:12:10 HPFed systemd[1]: mnt-nas2a.mount: Mount process
>> exited, code=exited, status=32/n/a
>> Jan 07 10:12:10 HPFed systemd[1]: mnt-nas2a.mount: Failed with
>> result 'exit-code'.
>> Jan 07 10:12:10 HPFed systemd[1]: Failed to mount mnt-nas2a.mount -
>> /mnt/nas2a.
>> Jan 07 10:12:10 HPFed systemd[1]: Dependency failed for
>> remote-fs.target - Remote File Systems.
>> Jan 07 10:12:10 HPFed systemd[1]: remote-fs.target: Job
>> remote-fs.target/start failed with result 'dependency'.
>> Jan 07 10:12:10 HPFed systemd[1]: Starting
>> systemd-user-sessions.service - Permit User Sessions...
>> Jan 07 10:12:10 HPFed systemd[1]: Finished
>> systemd-user-sessions.service - Permit User Sessions.
>> Jan 07 10:12:10 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel
>> msg='unit=systemd-user-sessions comm="systemd"
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
>> res=success'
>> Jan 07 10:12:10 HPFed systemd[1]: Started atd.service - Deferred
>> execution scheduler.
>> Jan 07 10:12:10 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel msg='unit=atd
>> comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
>> terminal=? res=success'
>> Jan 07 10:12:10 HPFed systemd[1]: Started crond.service - Command
>> Scheduler.
>> Jan 07 10:12:10 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel msg='unit=crond
>> comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
>> terminal=? res=success'
>> Jan 07 10:12:10 HPFed systemd[1]: Starting
>> plymouth-quit-wait.service - Hold until boot process finishes up...
>> Jan 07 10:12:10 HPFed systemd[1]: Starting plymouth-quit.service -
>> Terminate Plymouth Boot Screen...
>> Jan 07 10:12:10 HPFed systemd[1]: Received SIGRTMIN+21 from PID 352
>> (plymouthd).
>> Jan 07 10:12:10 HPFed systemd[1]: Received SIGRTMIN+21 from PID 352
>> (plymouthd).
>> Jan 07 10:12:10 HPFed systemd[1]: Finished
>> plymouth-quit-wait.service - Hold until boot process finishes up.
>> Jan 07 10:12:10 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel
>> msg='unit=plymouth-quit-wait comm="systemd"
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
>> res=success'
>> Jan 07 10:12:10 HPFed systemd[1]: Finished plymouth-quit.service -
>> Terminate Plymouth Boot Screen.
>> Jan 07 10:12:10 HPFed audit[1]: SERVICE_START pid=1 uid=0
>> auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-quit
>> comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
>> terminal=? res=success'
>> Jan 07 10:12:10 HPFed crond[1385]: (CRON) STARTUP (1.5.7)
>> Jan 07 10:12:10 HPFed crond[1385]: (CRON) INFO (RANDOM_DELAY will be
>> scaled with factor 68% if used.)
>> Jan 07 10:12:10 HPFed systemd[1]: Started sddm.service - Simple
>> Desktop Display Manager.
>> [john@HPFed ~]$
>>
>>
> I did a test by disconnecting the network cable from my NAS and
> rebooting F37 with kernel 6.0.16. The cifs mount failed cleanly, but
> the nfs mount sat running for 2 minutes 4 seconds before it failed;
> the boot process then proceeded to completion. I'm writing this email
> from that booted system. Here is my journalctl output for the cifs
> failure and the nfs failure.
>
> Jan 10 11:07:56 fedora systemd[1]: mnt-dlink.mount: Mount process
> exited, code=exited, status=32/n/a
> Jan 10 11:07:56 fedora systemd[1]: mnt-dlink.mount: Failed with
> result 'exit-code'.
> Jan 10 11:07:56 fedora systemd[1]: Failed to mount mnt-dlink.mount -
> /mnt/dlink.
> Jan 10 11:07:57 fedora akmods[1267]: Checking kmods exist for
> 6.0.16-300.fc37.x86_64[ OK ]
> Jan 10 11:07:57 fedora systemd[1]: Finished akmods.service - Builds
> and install new kmods from akmod packages.
> Jan 10 11:07:57 fedora audit[1]: SERVICE_START pid=1 uid=0
> auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0
> msg='unit=akmods comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
> terminal=? res=success'
> Jan 10 11:07:57 fedora systemd[1]: nvidia-fallback.service - Fallback
> to nouveau as nvidia did not load was skipped because of a failed
> condition check (ConditionPathExists=!/sys/module/nvidia).
> Jan 10 11:08:00 fedora systemd[1]: NetworkManager-dispatcher.service:
> Deactivated successfully.
> Jan 10 11:08:00 fedora audit[1]: SERVICE_STOP pid=1 uid=0
> auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0
> msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd"
> hostname=? addr=? terminal=? res=success'
> Jan 10 11:08:13 fedora systemd[1]: systemd-hostnamed.service:
> Deactivated successfully.
> Jan 10 11:08:13 fedora audit[1]: SERVICE_STOP pid=1 uid=0
> auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0
> msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=?
> addr=? terminal=? res=success'
> Jan 10 11:08:13 fedora audit: BPF prog-id=0 op=UNLOAD
> Jan 10 11:08:13 fedora audit: BPF prog-id=0 op=UNLOAD
> Jan 10 11:08:13 fedora audit: BPF prog-id=0 op=UNLOAD
> Jan 10 11:08:13 fedora wpa_supplicant[1672]: wlp5s0:
> CTRL-EVENT-SIGNAL-CHANGE above=0 signal=-62 noise=9999 txrate=29200
> Jan 10 11:09:00 fedora chronyd[1440]: Selected source 180.150.12.46 (2.fedora.pool.ntp.org)
> Jan 10 11:09:19 fedora systemd[1]: mnt-nfs.mount: Mounting timed out.
> Terminating.
> Jan 10 11:09:19 fedora systemd[1]: mnt-nfs.mount: Mount process
> exited, code=killed, status=15/TERM
> Jan 10 11:09:19 fedora systemd[1]: mnt-nfs.mount: Failed with result
> 'timeout'.
> Jan 10 11:09:19 fedora systemd[1]: mnt-nfs.mount: Unit process 1799
> (mount.nfs) remains running after unit stopped.
> Jan 10 11:09:19 fedora systemd[1]: Failed to mount mnt-nfs.mount -
> /mnt/nfs.
> Jan 10 11:09:19 fedora systemd[1]: Dependency failed for
> remote-fs.target - Remote File Systems.
> Jan 10 11:09:19 fedora systemd[1]: remote-fs.target: Job
> remote-fs.target/start failed with result 'dependency'.
>
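The two-minute wait before mnt-nfs.mount was killed is consistent with a unit start timeout. Where editing fstab is not convenient, a systemd drop-in for the generated mount unit can shorten the wait; the unit name below is derived from the /mnt/nfs mount point in the log above, and the 30s value is only an example:

```
# /etc/systemd/system/mnt-nfs.mount.d/timeout.conf
# Cap how long systemd waits for this mount before failing it.
[Mount]
TimeoutSec=30s
```

After adding the drop-in, run systemctl daemon-reload so systemd picks it up.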
A manual mount of the cifs share while the NAS is powered off produces
the following message and doesn't hang:
mount error(113): could not connect to 192.168.1.12Unable to find
suitable address.
A subsequent mount after the device is powered on succeeds.
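For reference, fstab options along the following lines keep an unreachable share from blocking boot; the addresses, export path, and credentials path below are illustrative placeholders, not values from this thread. nofail lets boot continue if the mount fails, _netdev orders the mount after the network is up, and x-systemd.mount-timeout caps how long systemd waits:

```
# Illustrative /etc/fstab entries (placeholders, not from this thread).
//192.168.1.XX/Public  /mnt/nas1a  cifs  credentials=/etc/nas1a.cred,vers=3.0,uid=1000,gid=1000,_netdev,nofail,x-systemd.mount-timeout=30s  0 0
192.168.1.XX:/export   /mnt/nfs    nfs   _netdev,nofail,x-systemd.mount-timeout=30s  0 0
```

With nofail the mount is also no longer a hard dependency of remote-fs.target, so the "Dependency failed for remote-fs.target" messages should disappear as well.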
I'm not sure if it is significant, but I am using gdm as my display
manager even though I am booting into KDE.
regards,
Steve
>> }}}
>> _______________________________________________
>> users mailing list -- users(a)lists.fedoraproject.org
>> To unsubscribe send an email to users-leave(a)lists.fedoraproject.org
>> Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
>> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
>> List Archives: https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org
>> Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue