TrueNAS

HOWTO: Resize a Linux VM's LVM Virtual Disk on a ZVOL | TrueNAS Community

If you have a Linux VM that uses LVM, you can easily increase the disk space available to the VM.

The Linux Logical Volume Manager allows you to have logical volumes (LVs) on top of volume groups (VGs) on top of physical volumes (PVs), i.e. partitions.

This is conceptually similar to zvols on pools on vdevs in ZFS.
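To see how these layers stack on a running system, lsblk can print the disk, its partitions, and the LVM mapper devices as one tree (a quick sketch; /dev/vda matches the example below but may differ on your VM):

Code:

# Tree view: disk -> partitions -> LVM logical volumes on top of them
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT /dev/vda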

This was tested with TrueNAS-CORE 12 and Ubuntu 20.04.

Firstly, here are some useful commands:

pvs - list physical volumes
lvs - list logical volumes
lvdisplay - show logical volume details
pvdisplay - show physical volume details
df - show free disk space

So, to start:

df -h - show free disk space, human-readable

and you should see something like this

Code:

Filesystem                         Size  Used Avail Use% Mounted on
udev                               2.9G     0  2.9G   0% /dev
tmpfs                              595M   61M  535M  11% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  8.4G  8.1G     0 100% /
tmpfs                              3.0G     0  3.0G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock

This is the interesting line:

Code:

/dev/mapper/ubuntu--vg-ubuntu--lv  8.4G  8.1G     0 100% /

It tells you which LV and VG the root filesystem is using.

You can list the logical volumes with lvs:

Code:

root@ubuntu:/# lvs
  LV        VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- <8.50g                                                

and the physical volumes with pvs:

Code:

root@ubuntu:/# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/vda3  ubuntu-vg lvm2 a--  <8.50g    0

Now you can see that the ubuntu-lv LV is on the ubuntu-vg VG, which is on the PV /dev/vda3 (that's partition 3 of device vda).

Shut down the VM. Edit the ZVOL to increase its size. Restart the VM.
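(If you prefer the shell over the TrueNAS web UI for this step, the zvol can also be grown by setting its volsize property; the pool/zvol path here is only a placeholder, adjust it to your own layout.)

Code:

# Run on the TrueNAS host, not inside the VM; the path is a placeholder
zfs set volsize=100G tank/vms/ubuntu-vm-disk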

Once the VM is back up, run parted on the device, repair the GPT information, and resize the partition, as shown below.

Launch parted on the disk: parted /dev/vda

Code:

root@ubuntu:~# parted /dev/vda
GNU Parted 3.3
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.

View the partitions:

print

Code:

(parted) print                                                        
Warning: Not all of the space available to /dev/vda appears to be used, you can fix the GPT to use all of the space (an extra 188743680
blocks) or continue with the current setting?

Parted will offer to fix the GPT. Fix it by entering f.

Code:

Fix/Ignore? f
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 107GB
Sector size (logical/physical): 512B/16384B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
1      1049kB  538MB   537MB   fat32              boot, esp
2      538MB   1612MB  1074MB  ext4
3      1612MB  10.7GB  9125MB

The disk is resized, but the partition is not.

Resize partition 3 to use 100% of the disk with resizepart 3 100%:

Code:

(parted) resizepart 3 100%
(parted) print                                                        
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 107GB
Sector size (logical/physical): 512B/16384B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
1      1049kB  538MB   537MB   fat32              boot, esp
2      538MB   1612MB  1074MB  ext4
3      1612MB  107GB   106GB

(parted)

And the partition is resized. You can exit parted with quit.
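As an alternative to the interactive parted session, if the guest has the cloud-guest-utils package installed, growpart can repair the GPT backup header and grow the partition in one non-interactive step (a sketch of the equivalent command, not what was used above):

Code:

# growpart takes the whole-disk device and the partition number separately
growpart /dev/vda 3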

Now we need to resize the physical volume:

pvresize /dev/vda3

Code:

root@ubuntu:~# pvresize /dev/vda3
  Physical volume "/dev/vda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

You can check the result with pvdisplay:

Code:

root@ubuntu:~# pvdisplay
 
  --- Physical volume ---
  PV Name               /dev/vda3
  VG Name               ubuntu-vg
  PV Size               <98.50 GiB / not usable 1.98 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              25215
  Free PE               23040
  Allocated PE          2175
  PV UUID               IGdmTf-7Iql-V9UK-q3aD-BdNP-VfBo-VPx1Hs

Then you can use lvextend to resize the LV and the filesystem over the resized PV.

lvextend --resizefs ubuntu-vg/ubuntu-lv /dev/vda3

Code:

root@ubuntu:~# lvextend --resizefs ubuntu-vg/ubuntu-lv /dev/vda3
  Size of logical volume ubuntu-vg/ubuntu-lv changed from <8.50 GiB (2175 extents) to <98.50 GiB (25215 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 13
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 25820160 (4k) blocks long.

root@ubuntu:~#
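A more commonly seen variant, which should be equivalent here, asks lvextend explicitly for all free space in the volume group instead of naming the PV (shown only as a sketch in case your LVM version prefers an explicit size):

Code:

# Grow the LV by 100% of the free extents in the VG and resize the filesystem in one go
lvextend -l +100%FREE --resizefs /dev/ubuntu-vg/ubuntu-lv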

And finally, you can check the free space again:

df -h

Code:

root@ubuntu:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               2.9G     0  2.9G   0% /dev
tmpfs                              595M  1.1M  594M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   97G  8.2G   85G   9% /

85G free instead of 0. Much better.

Understanding the TrueNAS SCALE "hostPathValidation" setting | TrueNAS Community

What is the “hostPathValidation” setting?

With the recent release of TrueNAS SCALE "Bluefin" 22.12.1, there have been a number of reports of issues with the Kubernetes "hostPathValidation" configuration setting, and requests for clarification regarding this security measure.

The “hostPathValidation” check is designed to prevent the simultaneous sharing of a dataset over a file-level protocol (SMB/NFS) while it is also presented as hostPath storage to Kubernetes. This safety check prevents a container application from accidentally changing the permissions or ownership of existing data on a ZFS dataset, or overwriting existing extended attribute (xattr) data, such as photo metadata on macOS.

What’s the risk?

Disabling the hostPathValidation checkbox under Apps -> Settings -> Advanced Settings allows for this “shared access” to be possible, and opens up a small possibility for data loss or corruption when used incorrectly.

For example, an application that transcodes media files might, through misconfiguration or a bug within the application itself, accidentally delete an “original-quality” copy of a file and retain the lower-resolution transcoded version. Even with snapshots in place for data protection, if the problem is not detected prior to snapshot lifetime expiry, the original file could be lost forever.

Users with complex ACL schemes or who make use of extended attributes should take caution before disabling this functionality. The same risk applies to users running CORE with Jails or Plugins accessing data directly.

A change of this nature could result in data becoming unavailable to connected clients, and unless the permissions were very simple (single owner/group, recursive), reverting a large-scale change would require rolling back to a previous ZFS snapshot. If no such snapshot exists, recovery would not be possible without manually correcting ownership and permissions.
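Because a snapshot is the only clean rollback path, it is worth taking one immediately before disabling the check or exposing shared data as hostPath storage; for example (the dataset name is a placeholder):

Code:

# Recursive snapshot of the dataset tree that will be exposed as hostPath storage
zfs snapshot -r tank/apps-data@before-hostpath-change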

When was this setting implemented?

In the initial SCALE release, Angelfish 22.02, there was no hostPathValidation check. As of Bluefin 22.12.0, the hostPathValidation setting was added and enabled by default. A bypass was discovered shortly thereafter, which allowed users to present a subdirectory or nested dataset of a shared dataset as a hostPath without needing to uncheck the hostPathValidation setting - thus exposing the potential for data loss. Another bypass was to stop SMB/NFS, start the application, and then start the sharing service again.

Both of these bypass methods were unintended, as they exposed a risk of data loss while the “hostPathValidation” setting was still set. These bugs were corrected in Bluefin 22.12.1, and as such, TrueNAS SCALE Apps that were dependent on these bugs being present in order to function will no longer deploy or start unless the hostPathValidation check is removed.

What’s the future plan for this setting?

We have received significant feedback that these changes and the validation itself have caused challenges. In a future release of TrueNAS SCALE, we will be moving away from a system-wide hostPathValidation checkbox, and instead providing a warning dialog that will appear during the configuration of the hostPath storage for any TrueNAS Apps that conflict with existing SMB/NFS shares.

Users can make the decision to proceed with the hostPath configuration at that time, or cancel the change and set up access to the folder through another method.

If data must be shared between SMB and hostPath, how can these risks be mitigated?

Some applications allow for connections to SMB or NFS resources within the app container itself. This may require additional network configuration, such as a network bridge interface as described in the TrueNAS docs “Accessing NAS from a VM”, as well as creating and using a user account specific to the application.

https://www.truenas.com/docs/scale/scaletutorials/virtualization/accessingnasfromvm/

Users who enable third-party catalogs, such as TrueCharts, can additionally use different container path mount methods such as connecting to an NFS export. Filesystem permissions will need to be assigned to the data for the apps user in this case.
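For example, ownership could be granted to the built-in apps account (UID/GID 568 on SCALE) from the TrueNAS shell; this is only a sketch with a placeholder dataset path:

Code:

# Let the "apps" user own the data the container will reach over NFS
chown -R apps:apps /mnt/tank/appdata/media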

  • Step-1 # Log in as root via an SSH console

  • Step-2 # Note down / take a screenshot of the zvol used by the VM (computer symbol)

      hover over the name
    
  • Step-3 # List the zvols available in the dataset

root@lion[/]# cd /dev/Cheetah/Virtual-Machines/xyz

root@lion[.../Cheetah/Virtual-Machines/xyz]# ls

xyz-rfg1ef  xyz-rfg1ef_xyz_01_xyz_xyz20230328b_xyz

  • Step-4 # Create a snapshot of the zvol used by the VM (select the name found above)

root@lion[.../Cheetah/Virtual-Machines/xyz]# zfs snapshot Cheetah/Virtual-Machines/xyz/xyz-rfg1ef_xyz_01_xyz_xyz20230328b_xyz@backup20230424a

  • Step-5 # Create a copy based on the snapshot in the chosen destination dataset

root@lion[.../Cheetah/Virtual-Machines/xyz]# zfs send -v Cheetah/Virtual-Machines/xyz/xyz-rfg1ef_xyz_01_xyz_xyz20230328b_xyz@backup20230424a | zfs receive -ev Olifant/BackUp-VMs/xyz

  • Step-6 # Have a look at the result

root@lion[.../Cheetah/Virtual-Machines/xyz]# zfs list | grep xyz

Cheetah/Virtual-Machines/xyz                                          130G  1.35T    96K  /mnt/Cheetah/Virtual-Machines/xyz
Cheetah/Virtual-Machines/xyz/xyz-rfg1ef                               106G  1.45T  4.37G  -
Cheetah/Virtual-Machines/xyz/xyz-rfg1ef_xyz_01_xyz_xyz20230328b_xyz  23.5G  1.35T  25.5G  -
Olifant/BackUp-VMs/xyz                                               25.5G  9.04T    96K  /mnt/Olifant/BackUp-VMs/xyz
Olifant/BackUp-VMs/xyz/xyz_backup20230424a                           25.5G  9.04T  25.5G  -

  • Step-7 # Give the zvol a decent name

zfs rename Olifant/BackUp-VMs/xyz/xyz-rfg1ef_xyz_01_xyz_xyz20230328b_xyz Olifant/BackUp-VMs/xyz/xyz_backup20230424a

  • Step-8 # Check the result.

cd /dev/Olifant/BackUp-VMs/xyz
root@lion[/dev/Olifant/BackUp-VMs/xyz]# ls
xyz_backup20230424a

  • Step-9 # Delete the snapshot

zfs destroy Cheetah/Virtual-Machines/xyz/xyz-rfg1ef_xyz_01_xyz_xyz20230328b_xyz@backup20230424a
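Putting the steps above together, here is a sketch of the same procedure as a small script (pool, dataset, and snapshot names are placeholders mirroring the example):

Code:

#!/bin/sh
# Back up a VM zvol: snapshot it, replicate the snapshot to another pool,
# rename the copy to something readable, then drop the working snapshot.

SRC="Cheetah/Virtual-Machines/xyz/xyz-rfg1ef_xyz_01_xyz_xyz20230328b_xyz"
DST="Olifant/BackUp-VMs/xyz"
TAG="backup20230424a"

zfs snapshot "${SRC}@${TAG}"

# -e on receive keeps only the last element of the source path as the received name
zfs send -v "${SRC}@${TAG}" | zfs receive -ev "${DST}"

# Give the received zvol a readable name
zfs rename "${DST}/${SRC##*/}" "${DST}/xyz_${TAG}"

# Clean up the working snapshot on the source
zfs destroy "${SRC}@${TAG}"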