<head>
    <link href="css/bootstrap.min.css" rel="stylesheet">
    <script src="js/bootstrap.bundle.min.js"></script>
    <script src="js/angular.min.js"></script>
</head>

<body ng-app="myApp">
<div ng-controller="MyCtrl">
    <div class="table-responsive" id="table1">
        <table class="table table-sm table-hover">
            <thead class="thead-light">
                <tr>
                    <th ng-repeat="column in cols">{{column}}</th>
                </tr>
            </thead>
            <tbody>
                <tr ng-repeat="row in rows">
                    <td ng-repeat="column in cols">{{row[column]}}</td>
                </tr>
            </tbody>
        </table>
    </div>
</div>
</body>


var myApp = angular.module('myApp', []);

myApp.controller('MyCtrl', ['$scope', function ($scope) {
    $scope.rows = [
    {
        "#": "1", 
        "name": "aa", 
        "score": "3.8"
    }, 
    {
        "#": "2", 
        "name": "bb", 
        "score": "4.0"
    }, 
    {
        "#": "3", 
        "name": "cc", 
        "score": "3.6"
    }];
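    // derive the column headers from the keys of the first row object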
    $scope.cols = Object.keys($scope.rows[0]);
}]);

<body ng-app="myapp">
    <div class="container-fluid" ng-controller="myctrl">
        <ul class="nav nav-tabs" id="myTab" role="tablist" style="display:flex">
            <li class="nav-item" role="presentation" id="l1"></li>
            <li class="nav-item" role="presentation" id="l2"></li>
        </ul>

        <div class="tab-content" id="myTabContent" style="display:flex">
            <div class="tab-pane fade show active" id="pane1" role="tabpanel" aria-labelledby="tab1"></div>
            <div class="tab-pane fade" id="pane2" role="tabpanel" aria-labelledby="tab2"></div>
        </div>

        <div class="table-responsive" id="table1"></div>
    </div>
</body>


angular.module("myapp", []).controller("myctrl", ['$scope', '$http', function (scope, http) {
    scope.HideDisplayElements = function () {
        // hide the tabs
        document.getElementById("myTabContent").style.display = "none";   
        document.getElementById("myTab").style.display = "none";

        // hide table
        document.getElementById("table1").style.display = "flex";
    }
}
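
For reference, a more idiomatic AngularJS variant drives visibility from scope flags with ng-show instead of touching the DOM directly; a minimal sketch (the showTabs and showTable flags are assumptions, not part of the original snippet):

<ul class="nav nav-tabs" id="myTab" role="tablist" ng-show="showTabs">...</ul>
<div class="tab-content" id="myTabContent" ng-show="showTabs">...</div>
<div class="table-responsive" id="table1" ng-show="showTable">...</div>

// in the controller
$scope.showTabs = true;
$scope.showTable = false;
$scope.HideDisplayElements = function () {
    $scope.showTabs = false;  // hides the tab bar and panes
    $scope.showTable = true;  // reveals the table container
};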

To access a sharedv4 volume from outside the Portworx cluster, add the allow_ips label to the volume you wish to export, specifying a semicolon-separated list of IP addresses of the non-Portworx nodes that will mount the sharedv4 volume:

$ /opt/pwx/bin/pxctl -j volume create datavol --sharedv4 --label name=data --size 500 --fs ext4 --repl 2 --nodes node1,node2
$ /opt/pwx/bin/pxctl -j volume update datavol --sharedv4_mount_options="vers=4.0"
$ /opt/pwx/bin/pxctl volume update datavol --label "allow_ips=<node3_ip>"
$ /opt/pwx/bin/pxctl host attach datavol
$ mkdir /mnt/datavol
$ /opt/pwx/bin/pxctl host mount --path /mnt/datavol datavol

$ mount | grep datavol
/dev/pxd/pxd229379759785727331 on /mnt/datavol type ext4 (rw,relatime,discard,data=ordered)

$ cat /etc/exports
/var/lib/osd/pxns/229379759785727331 node1(fsid=123e4567-e89b-12d3-002e-ebad1cb15563,rw,no_root_squash,no_subtree_check) node3(fsid=123e4567-e89b-12d3-002e-ebad1cb15563,rw,no_root_squash,no_subtree_check)

To mount the NFS shared volume on a non-Portworx node (node3):

$ showmount -e node1
Export list for node1:
/var/lib/osd/pxns/229379759785727331 node1,node3

$ mkdir /mnt/data
$ mount -t nfs node1:/var/lib/osd/pxns/229379759785727331 /mnt/data
$ mount | grep data
node1:/var/lib/osd/pxns/229379759785727331 on /mnt/data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=node3,local_lock=none,addr=node1)

iPerf is a tool for measuring the maximum achievable bandwidth on IP networks. The following iperf option can be used on the client side to saturate the network bandwidth when a single client thread is not sufficient.

  • -P, --parallel #: number of parallel client threads to run

Benchmark a 100GbE network

  1. Single-thread test

Start iperf on the server side:

$ iperf -v
iperf version 2.0.13 (21 Jan 2019) pthreads

$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.240 port 5001 connected with 192.168.1.245 port 44292
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  37.7 GBytes  32.4 Gbits/sec

Run iperf benchmark on the client side:

$ iperf -c 192.168.1.240
------------------------------------------------------------
Client connecting to 192.168.1.240, TCP port 5001
TCP window size: 2.44 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.245 port 44292 connected with 192.168.1.240 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  37.7 GBytes  32.4 Gbits/sec

With the default single thread, the achievable bandwidth is 32.4 Gbit/s, which is far less than the available 100 Gbit/s.

  2. Multi-thread test

We can increase the number of iperf client threads until the maximum bandwidth is hit.

$ iperf -c 192.168.1.240 -P 1
------------------------------------------------------------
Client connecting to 192.168.1.240, TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.245 port 44436 connected with 192.168.1.240 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  38.8 GBytes  33.4 Gbits/sec

$ iperf -c 192.168.1.240 -P 2
------------------------------------------------------------
Client connecting to 192.168.1.240, TCP port 5001
TCP window size: 1.13 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.245 port 44440 connected with 192.168.1.240 port 5001
[  3] local 192.168.1.245 port 44438 connected with 192.168.1.240 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  35.3 GBytes  30.4 Gbits/sec
[  3]  0.0-10.0 sec  37.6 GBytes  32.3 Gbits/sec
[SUM]  0.0-10.0 sec  72.9 GBytes  62.6 Gbits/sec

$ iperf -c 192.168.1.240 -P 4
------------------------------------------------------------
Client connecting to 192.168.1.240, TCP port 5001
TCP window size:  366 KByte (default)
------------------------------------------------------------
[  6] local 192.168.1.245 port 44448 connected with 192.168.1.240 port 5001
[  5] local 192.168.1.245 port 44446 connected with 192.168.1.240 port 5001
[  4] local 192.168.1.245 port 44444 connected with 192.168.1.240 port 5001
[  3] local 192.168.1.245 port 44442 connected with 192.168.1.240 port 5001
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec  26.0 GBytes  22.3 Gbits/sec
[  5]  0.0-10.0 sec  26.3 GBytes  22.6 Gbits/sec
[  4]  0.0-10.0 sec  26.3 GBytes  22.6 Gbits/sec
[  3]  0.0-10.0 sec  26.4 GBytes  22.6 Gbits/sec
[SUM]  0.0-10.0 sec   105 GBytes  90.1 Gbits/sec

$ iperf -c 192.168.1.240 -P 8
------------------------------------------------------------
Client connecting to 192.168.1.240, TCP port 5001
TCP window size:  518 KByte (default)
------------------------------------------------------------
[ 16] local 192.168.1.245 port 44464 connected with 192.168.1.240 port 5001
[  3] local 192.168.1.245 port 44452 connected with 192.168.1.240 port 5001
[  7] local 192.168.1.245 port 44456 connected with 192.168.1.240 port 5001
[  4] local 192.168.1.245 port 44450 connected with 192.168.1.240 port 5001
[  6] local 192.168.1.245 port 44454 connected with 192.168.1.240 port 5001
[  9] local 192.168.1.245 port 44462 connected with 192.168.1.240 port 5001
[  8] local 192.168.1.245 port 44460 connected with 192.168.1.240 port 5001
[  5] local 192.168.1.245 port 44458 connected with 192.168.1.240 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 16]  0.0-10.0 sec  15.0 GBytes  12.9 Gbits/sec
[  3]  0.0-10.0 sec  9.06 GBytes  7.78 Gbits/sec
[  7]  0.0-10.0 sec  15.1 GBytes  12.9 Gbits/sec
[  4]  0.0-10.0 sec  15.1 GBytes  13.0 Gbits/sec
[  6]  0.0-10.0 sec  15.1 GBytes  12.9 Gbits/sec
[  9]  0.0-10.0 sec  15.1 GBytes  12.9 Gbits/sec
[  8]  0.0-10.0 sec  8.97 GBytes  7.70 Gbits/sec
[  5]  0.0-10.0 sec  15.1 GBytes  12.9 Gbits/sec
[SUM]  0.0-10.0 sec   108 GBytes  93.1 Gbits/sec

We can verify the network throughput with the command sar -n DEV 2.

03:26:05 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
03:26:07 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:26:07 PM      eth0     15.00      0.50      0.91      0.33      0.00      0.00      0.00
03:26:07 PM      eth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:26:07 PM      eth1 8045242.00 252491.50 11895015.32  16273.87      0.00      0.00      0.00
03:26:07 PM      eth2      0.00      0.00      0.00      0.00      0.00      0.00      0.00

03:26:07 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
03:26:10 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:26:10 PM      eth0     10.50      1.00      0.65      0.41      0.00      0.00      0.00
03:26:10 PM      eth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:26:10 PM      eth1 8046641.50 251968.50 11897084.51  16240.16      0.00      0.00      0.00
03:26:10 PM      eth2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
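
On eth1, the receive rate of about 11,897,000 kB/s is roughly 11.9 GB/s, i.e. on the order of 95 Gbit/s, which is consistent with the iperf [SUM] result above.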

Create the MySQL docker container

$ docker pull mysql/mysql-server:5.7
$ docker run -d --name=mysqldb -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=perfdb -v /var/lib/osd/mounts/mysql_data:/var/lib/mysql -p 3306:3306 mysql/mysql-server:5.7
$ docker logs mysqldb
$ docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS                    PORTS                                                  NAMES
ff380903d3a0   mysql/mysql-server:5.7   "/entrypoint.sh mysq…"   34 seconds ago   Up 34 seconds (healthy)   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysqldb

Note:

  • The option -e passes a value to a container environment variable.
  • The variable MYSQL_ROOT_HOST=% creates a root user with permission to log in from any IP address.
  • The variable MYSQL_ROOT_PASSWORD sets the root password of MySQL. If it is not provided, a random password is generated.
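
To quickly confirm that the container accepts connections from the host, a local mysql client can be used (assuming one is installed on the host; this check is not part of the original setup):

$ mysql -h 127.0.0.1 -P 3306 -u root -p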

Create the MySQL database and restore it from a db backup (if one exists)

$ docker exec -it mysqldb bash

bash-4.2# mysql -V
mysql  Ver 14.14 Distrib 5.7.37, for Linux (x86_64) using  EditLine wrapper

bash-4.2# mysql -u root -p

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| perfdb             |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+---------------+-------------------------------------------+-----------------------+-----------+
| user          | authentication_string                     | plugin                | host      |
+---------------+-------------------------------------------+-----------------------+-----------+
| root          | *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 | mysql_native_password | localhost |
| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys     | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| healthchecker | *36C82179AFA394C4B9655005DD2E482D30A4BDF7 | mysql_native_password | localhost |
| root          | *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 | mysql_native_password | %         |
+---------------+-------------------------------------------+-----------------------+-----------+
5 rows in set (0.01 sec)

bash-4.2# mysql -u root -p perfdb < /var/lib/mysql/perfdb_dump_20220412_121339.sql
bash-4.2# ls -ltr /var/lib/mysql

mysql> use perfdb;
mysql> show tables;
mysql> SELECT table_schema "DB Name", ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"  FROM information_schema.tables  GROUP BY table_schema;
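
For completeness, a dump file like the one restored above could be produced with mysqldump (a sketch; the container name and credentials match the earlier steps, while the output file name is illustrative):

$ docker exec mysqldb sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" perfdb' > perfdb_dump.sql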

Create the phpMyAdmin docker container

$ docker pull phpmyadmin/phpmyadmin:latest
$ docker run --name phpadmin -d --link mysqldb:db -p 8080:80 phpmyadmin/phpmyadmin

Note:

  • The option --link provides access to another container running on the same host. In our case, the mysqldb container created in the previous step is linked, and the resource accessed is the MySQL db.
  • The option -p maps the host port (8080) to the container port (80, used by the Apache server for the phpMyAdmin web application inside the container).
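
With the mapping above, the phpMyAdmin UI should be reachable on port 8080 of the Docker host (log in with the MySQL root credentials). A quick reachability check from the host (not part of the original post):

$ curl -sI http://localhost:8080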

Check the docker images and containers

$ docker images
REPOSITORY                    TAG       IMAGE ID       CREATED        SIZE
phpmyadmin/phpmyadmin         latest    5682e7556577   2 months ago   524MB
mysql/mysql-server            5.7       b3eaae317eb2   2 months ago   390MB

$ docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS                   PORTS                                                  NAMES
a19c6406ffff   phpmyadmin/phpmyadmin    "/docker-entrypoint.…"   38 seconds ago   Up 38 seconds            0.0.0.0:8080->80/tcp, :::8080->80/tcp                  phpadmin
ff380903d3a0   mysql/mysql-server:5.7   "/entrypoint.sh mysq…"   5 minutes ago    Up 5 minutes (healthy)   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysqldb

Access phpMyAdmin from a browser

Jack sleepily rubbed his eyes as he woke up to the cool breeze that whipped through the snow-capped mountains of the Himalayas. The weather was unexpectedly cold. Suddenly there was a release of snow and ice: the dreaded avalanche. In the base camp, there was screaming as the tent collapsed on top of them. The snow engulfed them and the tent blew away. Even with his snow goggles, Jack was blinded and his vision went white. Chunks of ice cut across his face like sharp daggers. He moved blindly across the snow and tripped over the edge of the cliff. He toppled over and fell unconscious.

Performance tools

  • vmstat
  • iostat
  • ifstat
  • netstat
  • nfsstat
  • mpstat
  • nstat
  • dstat
  • sar
  • iftop
  • pidstat
  • xosview

Benchmark tools

  • fio
  • iozone
  • iperf
  • netperf
  • vdbench
  • sysbench
  • pgbench
  • YCSB
  • SPEC SFS 2014/SPECstorage Solution 2020
  • VMmark

Debugging tools

  • htop
  • lslk
  • lsof
  • top

Process tracing

  • ltrace
  • strace
  • pstack/gstack
  • ftrace
  • systemtap
  • ps
  • pmap
  • blktrace
  • ebpf

Binary debugging

  • ldd
  • file
  • nm
  • objdump
  • readelf

Memory usage tools

  • free
  • memusage
  • memusagestat
  • slabtop

Accounting tools

  • dump-acct
  • dump-utmp
  • sa

Hardware debugging tools

  • dmidecode
  • ifinfo
  • lsdev
  • lshal
  • lshw
  • lsmod
  • lspci
  • lsusb
  • smartctl
  • x86info
  • /opt/QLogic_Corporation/QConvergeConsoleCLI/qaucli

Application debugging

  • mailstats
  • qshape
  • xdpyinfo
  • xrestop

Others

  • collectl
  • proc
  • procinfo

Good afternoon, my name is [] and I will be presenting the speech titled “We shall fight on the beaches”. In June 1940, in an attempt to boost his citizens’ morale and confidence, Winston Churchill, the Prime Minister of the United Kingdom, gave his speech “We shall fight on the beaches” at the British House of Commons. I was inspired by his motivational speech, which gave hope and confidence to the people of German-occupied countries. In his speech, Winston Churchill stated that Britain would only end the war when it stood victorious, even if that meant waging total war.

We Shall Fight on the Beaches

Winston Churchill, June 4, 1940, House of Commons

I have, myself, full confidence that if all do their duty, if nothing is neglected, and if the best arrangements are made, as they are being made, we shall prove ourselves once again able to defend our Island home, to ride out the storm of war, and to outlive the menace of tyranny, if necessary for years, if necessary alone.

At any rate, that is what we are going to try to do. That is the resolve of His Majesty’s Government-every man of them. That is the will of Parliament and the nation.

The British Empire and the French Republic, linked together in their cause and in their need, will defend to the death their native soil, aiding each other like good comrades to the utmost of their strength.

Even though large tracts of Europe and many old and famous States have fallen or may fall into the grip of the Gestapo and all the odious apparatus of Nazi rule, we shall not flag or fail.

We shall go on to the end, we shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our Island, whatever the cost may be, we shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender, and even if, which I do not for a moment believe, this Island or a large part of it were subjugated and starving, then our Empire beyond the seas, armed and guarded by the British Fleet, would carry on the struggle, until, in God’s good time, the New World, with all its power and might, steps forth to the rescue and the liberation of the old.

OpenZFS

The OpenZFS project is an open source derivative of the Oracle ZFS project. OpenZFS is an outstanding storage platform that encompasses the functionality of traditional filesystems, volume managers, and more, with consistent reliability, functionality and performance across all distributions.

ZFS and ZFS Pooled Storage

The ZFS file system is a revolutionary new file system that fundamentally changes the way file systems are administered, with features and benefits not found in any other file system available today. ZFS is robust, scalable, and easy to administer.

ZFS uses the concept of storage pools to manage physical storage. Historically, file systems were constructed on top of a single physical device. To address multiple devices and provide for data redundancy, the concept of a volume manager was introduced to provide a representation of a single device so that file systems would not need to be modified to take advantage of multiple devices. This design added another layer of complexity and ultimately prevented certain file system advances because the file system had no control over the physical placement of data on the virtualized volumes.

ZFS eliminates volume management altogether. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool. The storage pool describes the physical characteristics of the storage (device layout, data redundancy, and so on) and acts as an arbitrary data store from which file systems can be created. File systems are no longer constrained to individual devices, allowing them to share disk space with all file systems in the pool. You no longer need to predetermine the size of a file system, as file systems grow automatically within the disk space allocated to the storage pool. When new storage is added, all file systems within the pool can immediately use the additional disk space without additional work. In many ways, the storage pool works similarly to a virtual memory system: When a memory DIMM is added to a system, the operating system doesn’t force you to run commands to configure the memory and assign it to individual processes. All processes on the system automatically use the additional memory.
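
As a concrete illustration of the pooled model described above (the device names are hypothetical; the management commands are covered in detail later in this post):

$ zpool create tank mirror /dev/sdb /dev/sdc    # one pool with mirrored redundancy
$ zfs create tank/home                          # file systems share the pool's space
$ zfs create tank/projects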

Install ZFS on CentOS

ZFS is not included by default in CentOS. In this post, we will install it on CentOS 7.9.

  1. Add ZFS repository

    $ cat /etc/centos-release
    CentOS Linux release 7.9.2009 (Core)

    $ yum install https://zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm -y

  2. DKMS vs. kABI

DKMS and kABI are the two methods by which the ZFS kernel module can be built and loaded. We are going to use kABI since it doesn't require module re-compilation after a kernel update. We can enable it by editing the ZFS repository file as below.

$ cat /etc/yum.repos.d/zfs.repo
[zfs]
name=OpenZFS for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7.9/$basearch/
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-kmod]
name=OpenZFS for EL7 - kmod
baseurl=http://download.zfsonlinux.org/epel/7.9/kmod/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-source]
name=OpenZFS for EL7 - Source
baseurl=http://download.zfsonlinux.org/epel/7.9/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-next]
name=OpenZFS for EL7 - dkms - Next upcoming version
baseurl=http://download.zfsonlinux.org/epel-next/7.9/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing]
name=OpenZFS for EL7 - dkms - Testing
baseurl=http://download.zfsonlinux.org/epel-testing/7.9/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing-kmod]
name=OpenZFS for EL7 - kmod - Testing
baseurl=http://download.zfsonlinux.org/epel-testing/7.9/kmod/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing-source]
name=OpenZFS for EL7 - Testing Source
baseurl=http://download.zfsonlinux.org/epel-testing/7.9/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

We can disable DKMS and enable kABI by flipping the enabled= flags in the following two sections.

$ vim /etc/yum.repos.d/zfs.repo
[zfs]
name=OpenZFS for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7.9/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-kmod]
name=OpenZFS for EL7 - kmod
baseurl=http://download.zfsonlinux.org/epel/7.9/kmod/$basearch/
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
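
Alternatively, the same switch can be made without editing the file, using yum-config-manager from the yum-utils package (a sketch; assumes yum-utils is installed):

$ yum install yum-utils -y
$ yum-config-manager --disable zfs
$ yum-config-manager --enable zfs-kmod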

  3. Install ZFS

Install ZFS using the following command:

$ yum install zfs -y
[..]
Installed:
zfs.x86_64 0:2.0.7-1.el7

Dependency Installed:
kmod-zfs.x86_64 0:2.0.7-1.el7      libnvpair3.x86_64 0:2.0.7-1.el7     libuutil3.x86_64 0:2.0.7-1.el7     libzfs4.x86_64 0:2.0.7-1.el7     libzpool4.x86_64 0:2.0.7-1.el7     lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7
sysstat.x86_64 0:10.1.5-19.el7

Reboot the system to load the zfs module:

$ reboot
$ lsmod | grep zfs

Use the following command to load the ZFS kernel module if it’s not loaded after reboot:

$ modprobe zfs

$ lsmod | grep zfs
zfs                  4224878  0
zunicode              331170  1 zfs
zzstd                 460780  1 zfs
zlua                  151526  1 zfs
zcommon                94285  1 zfs
znvpair                94388  2 zfs,zcommon
zavl                   15698  1 zfs
icp                   301775  1 zfs
spl                    96750  6 icp,zfs,zavl,zzstd,zcommon,znvpair

$ modinfo zfs
filename:       /lib/modules/3.10.0-1160.11.1.el7.x86_64/weak-updates/zfs/zfs/zfs.ko
version:        2.0.7-1
license:        CDDL
author:         OpenZFS
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
retpoline:      Y
rhelversion:    7.9
srcversion:     CDFB8A7F2D3EE43324CF460
depends:        spl,znvpair,icp,zlua,zzstd,zunicode,zcommon,zavl
vermagic:       3.10.0-1160.49.1.el7.x86_64 SMP mod_unload modversions
[..]

$ zfs version
zfs-2.0.7-1
zfs-kmod-2.0.7-1

Manage ZFS storage pool and file system

  • Create ZFS storage pool

    $ zpool create zpooldemo /dev/sdb

  • Add disk to ZFS storage pool

    $ zpool add zpooldemo /dev/sdc

  • Check ZFS pool status

    $ zpool list
    NAME       SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    zpooldemo  119G   174K  119G        -         -    0%   0%  1.00x  ONLINE  -

    $ zpool status
    pool: zpooldemo
    state: ONLINE
    config:

      NAME        STATE     READ WRITE CKSUM
      zpooldemo   ONLINE       0     0     0
        sdb       ONLINE       0     0     0
        sdc       ONLINE       0     0     0
    

    errors: No known data errors

  • Create ZFS file system

When you create a pool, a ZFS file system is created and mounted automatically. The whole file system space can be used as needed.

$ mount | egrep "zfs"
zpooldemo on /zpooldemo type zfs (rw,xattr,noacl)

$ df -h | egrep "Filesystem|zpool"
Filesystem      Size  Used Avail Use% Mounted on
zpooldemo       116G  128K  116G   1% /zpooldemo

$ touch /zpooldemo/testfile
$ ls -la /zpooldemo/
total 6
drwxr-xr-x   3 root root    4 Mar 17 22:49 .
dr-xr-xr-x. 19 root root 4096 Mar 17 22:37 ..
-rw-r--r--   1 root root    0 Mar 17 22:49 testfile

Within a pool, additional file systems can be created. The newly created file systems allow users to manage different sets of data within the same pool.

$ zfs create zpooldemo/zfsdemo

$ mount | egrep "zfs"
zpooldemo on /zpooldemo type zfs (rw,xattr,noacl)
zpooldemo/zfsdemo on /zpooldemo/zfsdemo type zfs (rw,xattr,noacl)

$ df -h | egrep "Filesystem|zpool"
Filesystem         Size  Used Avail Use% Mounted on
zpooldemo          116G  128K  116G   1% /zpooldemo
zpooldemo/zfsdemo  116G  128K  116G   1% /zpooldemo/zfsdemo

  • Set ZFS file system properties

A file system property can also be set when the file system is created (if zpooldemo/zfsdemo already exists from the previous step, destroy it first).

$ zfs create -o atime=off zpooldemo/zfsdemo
$ mount | grep zfs
zpooldemo on /zpooldemo type zfs (rw,xattr,noacl)
zpooldemo/zfsdemo on /zpooldemo/zfsdemo type zfs (rw,noatime,xattr,noacl)

The storage pool and file system properties can be retrieved as below.

$ zpool get all zpooldemo
NAME       PROPERTY                       VALUE                          SOURCE
zpooldemo  size                           59.5G                          -
zpooldemo  capacity                       0%                             -
zpooldemo  altroot                        -                              default
zpooldemo  health                         ONLINE                         -
zpooldemo  guid                           11167503015555961412           -
zpooldemo  version                        -                              default
zpooldemo  bootfs                         -                              default
zpooldemo  delegation                     on                             default
zpooldemo  autoreplace                    off                            default
zpooldemo  cachefile                      -                              default
zpooldemo  failmode                       wait                           default
zpooldemo  listsnapshots                  off                            default
zpooldemo  autoexpand                     off                            default
zpooldemo  dedupratio                     1.00x                          -
zpooldemo  free                           59.5G                          -
zpooldemo  allocated                      106K                           -
zpooldemo  readonly                       off                            -
zpooldemo  ashift                         0                              default
zpooldemo  comment                        -                              default
zpooldemo  expandsize                     -                              -
zpooldemo  freeing                        0                              -
zpooldemo  fragmentation                  0%                             -
zpooldemo  leaked                         0                              -
zpooldemo  multihost                      off                            default
zpooldemo  checkpoint                     -                              -
zpooldemo  load_guid                      10842965729770643306           -
zpooldemo  autotrim                       off                            default
zpooldemo  feature@async_destroy          enabled                        local
zpooldemo  feature@empty_bpobj            enabled                        local
zpooldemo  feature@lz4_compress           active                         local
zpooldemo  feature@multi_vdev_crash_dump  enabled                        local
zpooldemo  feature@spacemap_histogram     active                         local
zpooldemo  feature@enabled_txg            active                         local
zpooldemo  feature@hole_birth             active                         local
zpooldemo  feature@extensible_dataset     active                         local
zpooldemo  feature@embedded_data          active                         local
zpooldemo  feature@bookmarks              enabled                        local
zpooldemo  feature@filesystem_limits      enabled                        local
zpooldemo  feature@large_blocks           enabled                        local
zpooldemo  feature@large_dnode            enabled                        local
zpooldemo  feature@sha512                 enabled                        local
zpooldemo  feature@skein                  enabled                        local
zpooldemo  feature@edonr                  enabled                        local
zpooldemo  feature@userobj_accounting     active                         local
zpooldemo  feature@encryption             enabled                        local
zpooldemo  feature@project_quota          active                         local
zpooldemo  feature@device_removal         enabled                        local
zpooldemo  feature@obsolete_counts        enabled                        local
zpooldemo  feature@zpool_checkpoint       enabled                        local
zpooldemo  feature@spacemap_v2            active                         local
zpooldemo  feature@allocation_classes     enabled                        local
zpooldemo  feature@resilver_defer         enabled                        local
zpooldemo  feature@bookmark_v2            enabled                        local
zpooldemo  feature@redaction_bookmarks    enabled                        local
zpooldemo  feature@redacted_datasets      enabled                        local
zpooldemo  feature@bookmark_written       enabled                        local
zpooldemo  feature@log_spacemap           active                         local
zpooldemo  feature@livelist               enabled                        local
zpooldemo  feature@device_rebuild         enabled                        local
zpooldemo  feature@zstd_compress          enabled                        local

$ zfs get all zpooldemo/zfsdemo
NAME               PROPERTY              VALUE                  SOURCE
zpooldemo/zfsdemo  type                  filesystem             -
zpooldemo/zfsdemo  creation              Thu Mar 17 23:00 2022  -
zpooldemo/zfsdemo  used                  24K                    -
zpooldemo/zfsdemo  available             57.6G                  -
zpooldemo/zfsdemo  referenced            24K                    -
zpooldemo/zfsdemo  compressratio         1.00x                  -
zpooldemo/zfsdemo  mounted               yes                    -
zpooldemo/zfsdemo  quota                 none                   default
zpooldemo/zfsdemo  reservation           none                   default
zpooldemo/zfsdemo  recordsize            128K                   default
zpooldemo/zfsdemo  mountpoint            /zpooldemo/zfsdemo     default
zpooldemo/zfsdemo  sharenfs              off                    default
zpooldemo/zfsdemo  checksum              on                     default
zpooldemo/zfsdemo  compression           off                    default
zpooldemo/zfsdemo  atime                 off                    local
zpooldemo/zfsdemo  devices               on                     default
zpooldemo/zfsdemo  exec                  on                     default
zpooldemo/zfsdemo  setuid                on                     default
zpooldemo/zfsdemo  readonly              off                    default
zpooldemo/zfsdemo  zoned                 off                    default
zpooldemo/zfsdemo  snapdir               hidden                 default
zpooldemo/zfsdemo  aclmode               discard                default
zpooldemo/zfsdemo  aclinherit            restricted             default
zpooldemo/zfsdemo  createtxg             20                     -
zpooldemo/zfsdemo  canmount              on                     default
zpooldemo/zfsdemo  xattr                 on                     default
zpooldemo/zfsdemo  copies                1                      default
zpooldemo/zfsdemo  version               5                      -
zpooldemo/zfsdemo  utf8only              off                    -
zpooldemo/zfsdemo  normalization         none                   -
zpooldemo/zfsdemo  casesensitivity       sensitive              -
zpooldemo/zfsdemo  vscan                 off                    default
zpooldemo/zfsdemo  nbmand                off                    default
zpooldemo/zfsdemo  sharesmb              off                    default
zpooldemo/zfsdemo  refquota              none                   default
zpooldemo/zfsdemo  refreservation        none                   default
zpooldemo/zfsdemo  guid                  10461278007032944398   -
zpooldemo/zfsdemo  primarycache          all                    default
zpooldemo/zfsdemo  secondarycache        all                    default
zpooldemo/zfsdemo  usedbysnapshots       0B                     -
zpooldemo/zfsdemo  usedbydataset         24K                    -
zpooldemo/zfsdemo  usedbychildren        0B                     -
zpooldemo/zfsdemo  usedbyrefreservation  0B                     -
zpooldemo/zfsdemo  logbias               latency                default
zpooldemo/zfsdemo  objsetid              136                    -
zpooldemo/zfsdemo  dedup                 off                    default
zpooldemo/zfsdemo  mlslabel              none                   default
zpooldemo/zfsdemo  sync                  standard               default
zpooldemo/zfsdemo  dnodesize             legacy                 default
zpooldemo/zfsdemo  refcompressratio      1.00x                  -
zpooldemo/zfsdemo  written               24K                    -
zpooldemo/zfsdemo  logicalused           12K                    -
zpooldemo/zfsdemo  logicalreferenced     12K                    -
zpooldemo/zfsdemo  volmode               default                default
zpooldemo/zfsdemo  filesystem_limit      none                   default
zpooldemo/zfsdemo  snapshot_limit        none                   default
zpooldemo/zfsdemo  filesystem_count      none                   default
zpooldemo/zfsdemo  snapshot_count        none                   default
zpooldemo/zfsdemo  snapdev               hidden                 default
zpooldemo/zfsdemo  acltype               off                    default
zpooldemo/zfsdemo  context               none                   default
zpooldemo/zfsdemo  fscontext             none                   default
zpooldemo/zfsdemo  defcontext            none                   default
zpooldemo/zfsdemo  rootcontext           none                   default
zpooldemo/zfsdemo  relatime              off                    default
zpooldemo/zfsdemo  redundant_metadata    all                    default
zpooldemo/zfsdemo  overlay               on                     default
zpooldemo/zfsdemo  encryption            off                    default
zpooldemo/zfsdemo  keylocation           none                   default
zpooldemo/zfsdemo  keyformat             none                   default
zpooldemo/zfsdemo  pbkdf2iters           0                      default
zpooldemo/zfsdemo  special_small_blocks  0                      default

The zfs set command can be used to set any dataset property.

$ zfs set checksum=off zpooldemo/zfsdemo

The zfs get command can be used to retrieve any dataset property.

$ zfs get checksum zpooldemo
NAME       PROPERTY  VALUE      SOURCE
zpooldemo  checksum  on         default

$ zfs get checksum zpooldemo/zfsdemo
NAME               PROPERTY  VALUE      SOURCE
zpooldemo/zfsdemo  checksum  off        local
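
To revert a locally set property to its inherited default, the zfs inherit command can be used; zfs get should then report the value with SOURCE default again:

$ zfs inherit checksum zpooldemo/zfsdemo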

  • Destroy ZFS storage pool and file system

    $ zfs list
    NAME                USED  AVAIL  REFER  MOUNTPOINT
    zpooldemo           194K   115G    25K  /zpooldemo
    zpooldemo/zfsdemo    24K   115G    24K  /zpooldemo/zfsdemo

    $ zfs destroy zpooldemo/zfsdemo

    $ zfs list
    NAME        USED  AVAIL  REFER  MOUNTPOINT
    zpooldemo   169K   115G    25K  /zpooldemo

    $ zpool destroy zpooldemo
    $ zpool list
    no pools available

Kernel compatibility

When installing zfs on CentOS, yum checks whether the installed kernel version matches the version required by the release. In the following example, the required kernel 3.10.0-1160 is installed automatically during the zfs installation.

$ yum install https://zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm -y
$ yum install zfs -y
Installed:
kernel.x86_64 0:3.10.0-1160.59.1.el7                                                                                       
zfs.x86_64 0:2.0.7-1.el7

Dependency Installed:
kmod-zfs.x86_64 0:2.0.7-1.el7                 
libnvpair3.x86_64 0:2.0.7-1.el7                 
libuutil3.x86_64 0:2.0.7-1.el7                 
libzfs4.x86_64 0:2.0.7-1.el7                 
libzpool4.x86_64 0:2.0.7-1.el7

$ reboot
$ uname -r
3.10.0-1160.59.1.el7.x86_64

$ lsmod  |  grep zfs
$ modprobe zfs
$ lsmod  |  grep zfs
zfs                  4224878  0
zunicode              331170  1 zfs
zzstd                 460780  1 zfs
zlua                  151526  1 zfs
zcommon                94285  1 zfs
znvpair                94388  2 zfs,zcommon
zavl                   15698  1 zfs
icp                   301775  1 zfs
spl                    96750  6 icp,zfs,zavl,zzstd,zcommon,znvpair
$ zfs version
zfs-2.0.7-1
zfs-kmod-2.0.7-1

Uninstall ZFS

Remove the installed RPMs and then remove the repository, as below.

$ rpm -ev <pkg-rpm-name>
$ yum remove zfs-release
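
One way to list the installed ZFS-related RPMs before removing them (a sketch; the package names mirror the dependencies installed earlier):

$ rpm -qa | grep -Ei "zfs|libnvpair|libuutil|libzpool"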
