On 04/18/2018 11:23 AM, Bob Goodwin wrote:
On 04/18/18 13:45, Rick Stevens wrote:
> Uhm, that looks like "box86" either isn't in DNS or /etc/hosts so it
> can't be resolved. If this is on the server, try "showmount -e" or
> "showmount -e localhost".
I eventually realized that and changed it to "showmount -e 192.168.1.86",
but unfortunately it still shows the old export:
# showmount -e 192.168.1.86
Export list for 192.168.1.86:
/exports/home 192.168.1.0/24
Maybe I can only have one export file?
Anyway, I need to get it out of the root filesystem and into "/home"
instead, which is where the large capacity is.
df -h shows: /dev/mapper/fedora-home 2.7T 4.8G 2.5T 1% /home
Why the /dev/mapper/fedora? I selected "Standard Partitions" in the
installer, and the rest looks like I would expect it to. The installer
GUI is a horror; I always feel like I won the lottery when I get it to
accept what I enter ...
By default, Fedora uses LVM (the logical volume manager) to partition
the disks. It creates regular partitions and uses them as raw volumes
(PVs, or "physical volumes"). It then typically creates a VG (volume
group) containing those PVs, and from there it carves out LVs (logical
volumes). On my laptop, for example, I have these (among others) from
the "df -h" command:
/dev/mapper/vg_golem4-lv_root 426G 214G 208G 51% /
/dev/sda1 477M 206M 242M 46% /boot
/dev/mapper/vg_golem4-lv_home 252G 49G 191G 21% /home
You can see I have a /dev/sda1 partition that is used as my boot volume
(/boot). Note also that my / and /home filesystems are on LVM. To see
how that's set up:
[root@golem4 ~]# vgdisplay -v
--- Volume group ---
VG Name vg_golem4
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 698.12 GiB
PE Size 32.00 MiB
Total PE 22340
Alloc PE / Size 22340 / 698.12 GiB
Free PE / Size 0 / 0
VG UUID V3EZ9p-3wxH-1LJ8-ho77-Rmbf-A4d0-0oLCY7
--- Logical volume ---
LV Path /dev/vg_golem4/lv_swap
LV Name lv_swap
VG Name vg_golem4
LV UUID lK3HOt-a76V-faDd-3Mfl-mBxZ-DTuG-gb6OqM
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 2
LV Size <9.72 GiB
Current LE 311
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/vg_golem4/lv_home
LV Name lv_home
VG Name vg_golem4
LV UUID GfSiWV-IpHe-HtgD-PfjA-GBuG-G3tD-eKK69c
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 256.00 GiB
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/vg_golem4/lv_root
LV Name lv_root
VG Name vg_golem4
LV UUID rxEwZY-8BDl-zm2b-XeBh-Drqr-3Ci3-bg5JX4
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size <432.41 GiB
Current LE 13837
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Physical volumes ---
PV Name /dev/sda2
PV UUID Qasi0A-L5V4-J4EU-0D5L-fITP-BpDp-6xAash
PV Status allocatable
Total PE / Free PE 22340 / 0
When you look at that, you can see there's a VG (volume group) called
"vg_golem4". That volume group is split up into three logical volumes,
"lv_root", "lv_swap" and "lv_home", and you can see the /dev names
they're known by. You can also see at the bottom that the VG has a
single PV, /dev/sda2.
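If you just want a quick summary rather than the full vgdisplay output,
LVM also has short-form reporting commands (run as root; this is just a
sketch of what to type, not output from a real system):

```
pvs    # one line per physical volume: device, VG, size, free space
vgs    # one line per volume group: PV/LV counts, size, free space
lvs    # one line per logical volume: name, VG, size, attributes
```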
Now as far as your /etc/exports file goes, you can have as many lines in
it as you want. To wit (from an NFS server in our datacenter whose DNS
name is "nfssrv598-r1"):
[root@nfssrv598-r1 ~]# cat /etc/exports
# /etc/exports
#
# Storage from HP 9320 Array (volume group "VG_HP9320")...
#
/adcorp 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/adlab 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/back1 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/fs0100 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/fs0103 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/fs0104 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/fs0105 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/fs0106 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/fs0107 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/fssprod 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/scratch 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
/storage 192.168.60.*(rw,no_root_squash) 192.168.69.*(rw,no_root_squash)
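One thing to note: after editing /etc/exports you don't have to restart
the NFS server. As root on the server, exportfs can re-read the file,
and you can verify the result (a sketch; adjust to taste):

```
exportfs -ra              # re-export everything listed in /etc/exports
exportfs -v               # show the active exports with their options
showmount -e localhost    # what clients will see
```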
"/adcorp" and the like are actually LVs on that server:
[root@nfssrv598-r1 ~]# mount | grep adcorp
/dev/mapper/VG_HP9320-UTIL on /adcorp type xfs
(rw,relatime,attr2,inode64,noquota)
Very similar to your stuff. And from a client querying that server:
[root@ing1-r1 ~]# showmount -e nfssrv598-r1
Export list for nfssrv598-r1:
/storage 192.168.69.*,192.168.60.*
/scratch 192.168.69.*,192.168.60.*
/fssprod 192.168.69.*,192.168.60.*
/fs0107 192.168.69.*,192.168.60.*
/fs0106 192.168.69.*,192.168.60.*
/fs0105 192.168.69.*,192.168.60.*
/fs0104 192.168.69.*,192.168.60.*
/fs0103 192.168.69.*,192.168.60.*
/fs0100 192.168.69.*,192.168.60.*
/back1 192.168.69.*,192.168.60.*
/adlab 192.168.69.*,192.168.60.*
/adcorp 192.168.69.*,192.168.60.*
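So in your case, once the data lives somewhere under the big /home
filesystem, the export is just another line in /etc/exports. Something
like the line below (the "/home/exports" path is only an example;
substitute whatever directory you actually create), followed by
"exportfs -ra":

```
/home/exports  192.168.1.0/24(rw,sync)
```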
Hope that helps.
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, AllDigital ricks(a)alldigital.com -
- AIM/Skype: therps2 ICQ: 22643734 Yahoo: origrps2 -
- -
- Polygon: A dead parrot (With apologies to John Cleese) -
----------------------------------------------------------------------