Use sysbench for CockroachDB performance benchmarking
Intro
Cockroach Labs uses TPC-C as its official OLTP benchmark, since TPC-C models real-world applications and therefore gives a more realistic measurement.
Sysbench, by contrast, is a straightforward throughput/latency benchmarking tool: a scriptable, multi-threaded benchmark based on LuaJIT. It is most frequently used for database benchmarks, but it can also create arbitrarily complex workloads that do not involve a database server. It includes the following benchmarks:
- oltp_*.lua: a collection of OLTP-like database benchmarks
- fileio: a filesystem-level benchmark
- cpu: a simple CPU benchmark
- memory: a memory access benchmark
- threads: a thread-based scheduler benchmark
- mutex: a POSIX mutex benchmark
This Cockroach Labs blog post explains why sysbench is used for competitive benchmarks.
Sysbench contains a collection of simple SQL workloads that perform low-level SQL operations; for example, they run concurrent INSERT or UPDATE statements on rows as fast as possible. This gives a picture of system performance under different access patterns. Unlike TPC-C, sysbench does not attempt to model a real application.
Sysbench includes the following workloads:
- oltp_point_select: single-row point selects
- oltp_insert: single-row inserts
- oltp_delete: single-row deletes
- oltp_update_index: single-row updates on a column that requires a secondary-index update
- oltp_update_non_index: single-row updates on a column that does not require a secondary-index update
- oltp_read_only: transactions that run a collection of small scans
- oltp_read_write: transactions that run a collection of small scans and writes
- oltp_write_only: transactions that run a collection of writes
Sysbench supports the following two database drivers:
- mysql - MySQL driver
- pgsql - PostgreSQL driver
As this Cockroach Labs blog post explains, CockroachDB is wire-compatible with PostgreSQL. In this article, we will explore how to run sysbench against CockroachDB using the pgsql driver.
Install the CockroachDB cluster
Refer to this post on how to deploy a CockroachDB cluster.
[root@crdb_node1 ~]# cockroach version
Build Tag: v22.1.6
Build Time: 2022/08/23 17:05:04
Distribution: CCL
Platform: linux amd64 (x86_64-pc-linux-gnu)
Go Version: go1.17.11
C Compiler: gcc 6.5.0
Build Commit ID: 760a8253ae6478d69da0330133e3efec8e950e4e
Build Type: release
Interact with CockroachDB
Use the CockroachDB built-in client
CockroachDB comes with a built-in client for executing SQL statements from an interactive shell or directly from the command line.
To use this client, run the cockroach sql command as follows:
[root@node0 ~]# cockroach sql --insecure --host=node1:26257 -e 'create database testdb'
[root@node0 ~]# cockroach sql --insecure --host=node1:26257 -e 'show databases'
database_name | owner | primary_region | regions | survival_goal
----------------+-------+----------------+---------+----------------
defaultdb | root | NULL | {} | NULL
postgres | root | NULL | {} | NULL
system | node | NULL | {} | NULL
testdb | root | NULL | {} | NULL
(4 rows)
[root@node0 ~]# cockroach sql --insecure --host=node1:26257 -e 'show tables from testdb'
SHOW TABLES 0
[root@node0 ~]# cockroach sql --insecure --host=node1:26257 -e 'drop database testdb'
Use the Postgres client
CockroachDB can also be accessed with a standard Postgres client.
To install the Postgres client:
[root@node0 ~]# yum -y install postgresql postgresql-libs
Installed:
postgresql.x86_64 0:9.2.24-8.el7_9
To create the database and user:
[root@node0 ~]# psql -h node1 -U root -p 26257
psql (9.2.24, server 13.0.0)
root=> create database testdb;
root=> create user tester;
root=> grant all on database testdb to tester;
root=> show databases;
database_name | owner | primary_region | regions | survival_goal
---------------+-------+----------------+---------+---------------
defaultdb | root | | {} |
postgres | root | | {} |
system | node | | {} |
testdb | root | | {} |
(4 rows)
root=> show users;
username | options | member_of
----------+---------+-----------
admin | | {}
root | | {admin}
tester | | {}
(3 rows)
root=> \c testdb;
psql (9.2.24, server 13.0.0)
WARNING: psql version 9.2, server version 13.0.
Some psql features might not work.
You are now connected to database "testdb" as user "root".
testdb=> \dt;
List of relations
Schema | Name | Type | Owner
--------------+-------------------+-------+--------------------------
pg_extension | geography_columns | table | unknown (OID=3233629770)
pg_extension | geometry_columns | table | unknown (OID=3233629770)
pg_extension | spatial_ref_sys | table | unknown (OID=3233629770)
(3 rows)
testdb=> SELECT * FROM pg_catalog.pg_tables where schemaname != 'pg_catalog' AND schemaname != 'information_schema' and schemaname != 'crdb_internal';
schemaname | tablename | tableowner | tablespace | hasindexes | hasrules | hastriggers | rowsecurity
--------------+-------------------+------------+------------+------------+----------+-------------+-------------
pg_extension | geography_columns | node | | f | f | f | f
pg_extension | geometry_columns | node | | f | f | f | f
pg_extension | spatial_ref_sys | node | | f | f | f | f
(3 rows)
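Because CockroachDB speaks the PostgreSQL wire protocol, a standard libpq-style connection URL also works with psql and most Postgres drivers. A small sketch, using the host, port, user, and database from the examples above (sslmode=disable matches the cluster's --insecure mode):

```shell
# Build a libpq-style connection URL for the insecure demo cluster.
# node1/26257/tester/testdb come from the examples in this article;
# adjust them for your own cluster.
DB_USER=tester
DB_HOST=node1
DB_PORT=26257
DB_NAME=testdb
URL="postgresql://${DB_USER}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=disable"
echo "$URL"
# prints: postgresql://tester@node1:26257/testdb?sslmode=disable
# then connect with: psql "$URL"
```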
Install sysbench
To install sysbench:
[root@node0 ~]# curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
[root@node0 ~]# sudo yum -y install sysbench
[root@node0 ~]# sysbench --version
sysbench 1.0.20
To get familiar with the sysbench parameters:
[root@node0 ~]# sysbench --help
Usage:
sysbench [options]... [testname] [command]
Commands implemented by most tests: prepare run cleanup help
General options:
--threads=N number of threads to use [1]
--events=N limit for total number of events [0]
--time=N limit for total execution time in seconds [10]
--forced-shutdown=STRING number of seconds to wait after the --time limit before forcing shutdown, or 'off' to disable [off]
--thread-stack-size=SIZE size of stack per thread [64K]
--rate=N average transactions rate. 0 for unlimited rate [0]
--report-interval=N periodically report intermediate statistics with a specified interval in seconds. 0 disables intermediate reports [0]
--report-checkpoints=[LIST,...] dump full statistics and reset all counters at specified points in time. The argument is a list of comma-separated values representing the amount of time in seconds elapsed from start of test when report checkpoint(s) must be performed. Report checkpoints are off by default. []
--debug[=on|off] print more debugging info [off]
--validate[=on|off] perform validation checks where possible [off]
--help[=on|off] print help and exit [off]
--version[=on|off] print version and exit [off]
--config-file=FILENAME File containing command line options
--tx-rate=N deprecated alias for --rate [0]
--max-requests=N deprecated alias for --events [0]
--max-time=N deprecated alias for --time [0]
--num-threads=N deprecated alias for --threads [1]
Pseudo-Random Numbers Generator options:
--rand-type=STRING random numbers distribution {uniform,gaussian,special,pareto} [special]
--rand-spec-iter=N number of iterations used for numbers generation [12]
--rand-spec-pct=N percentage of values to be treated as 'special' (for special distribution) [1]
--rand-spec-res=N percentage of 'special' values to use (for special distribution) [75]
--rand-seed=N seed for random number generator. When 0, the current time is used as a RNG seed. [0]
--rand-pareto-h=N parameter h for pareto distribution [0.2]
Log options:
--verbosity=N verbosity level {5 - debug, 0 - only critical messages} [3]
--percentile=N percentile to calculate in latency statistics (1-100). Use the special value of 0 to disable percentile calculations [95]
--histogram[=on|off] print latency histogram in report [off]
General database options:
--db-driver=STRING specifies database driver to use ('help' to get list of available drivers) [mysql]
--db-ps-mode=STRING prepared statements usage mode {auto, disable} [auto]
--db-debug[=on|off] print database-specific debug information [off]
Compiled-in database drivers:
mysql - MySQL driver
pgsql - PostgreSQL driver
mysql options:
--mysql-host=[LIST,...] MySQL server host [localhost]
--mysql-port=[LIST,...] MySQL server port [3306]
--mysql-socket=[LIST,...] MySQL socket
--mysql-user=STRING MySQL user [sbtest]
--mysql-password=STRING MySQL password []
--mysql-db=STRING MySQL database name [sbtest]
--mysql-ssl[=on|off] use SSL connections, if available in the client library [off]
--mysql-ssl-cipher=STRING use specific cipher for SSL connections []
--mysql-compression[=on|off] use compression, if available in the client library [off]
--mysql-debug[=on|off] trace all client library calls [off]
--mysql-ignore-errors=[LIST,...] list of errors to ignore, or "all" [1213,1020,1205]
--mysql-dry-run[=on|off] Dry run, pretend that all MySQL client API calls are successful without executing them [off]
pgsql options:
--pgsql-host=STRING PostgreSQL server host [localhost]
--pgsql-port=N PostgreSQL server port [5432]
--pgsql-user=STRING PostgreSQL user [sbtest]
--pgsql-password=STRING PostgreSQL password []
--pgsql-db=STRING PostgreSQL database name [sbtest]
Compiled-in tests:
fileio - File I/O test
cpu - CPU performance test
memory - Memory functions speed test
threads - Threads subsystem performance test
mutex - Mutex performance test
See 'sysbench <testname> help' for a list of options for each test.
Sysbench includes the following Lua scripts to simulate OLTP workloads.
[root@node0 ~]# ls -l /usr/share/sysbench/
total 60
-rwxr-xr-x 1 root root 1452 Apr 24 2020 bulk_insert.lua
-rw-r--r-- 1 root root 14369 Apr 24 2020 oltp_common.lua
-rwxr-xr-x 1 root root 1290 Apr 24 2020 oltp_delete.lua
-rwxr-xr-x 1 root root 2415 Apr 24 2020 oltp_insert.lua
-rwxr-xr-x 1 root root 1265 Apr 24 2020 oltp_point_select.lua
-rwxr-xr-x 1 root root 1649 Apr 24 2020 oltp_read_only.lua
-rwxr-xr-x 1 root root 1824 Apr 24 2020 oltp_read_write.lua
-rwxr-xr-x 1 root root 1118 Apr 24 2020 oltp_update_index.lua
-rwxr-xr-x 1 root root 1127 Apr 24 2020 oltp_update_non_index.lua
-rwxr-xr-x 1 root root 1440 Apr 24 2020 oltp_write_only.lua
-rwxr-xr-x 1 root root 1919 Apr 24 2020 select_random_points.lua
-rwxr-xr-x 1 root root 2118 Apr 24 2020 select_random_ranges.lua
drwxr-xr-x 4 root root 49 Dec 8 20:39 tests
[root@node0 ~]# ls -l /usr/share/sysbench/tests/include/oltp_legacy/
total 52
-rw-r--r-- 1 root root 1195 Apr 24 2020 bulk_insert.lua
-rw-r--r-- 1 root root 4696 Apr 24 2020 common.lua
-rw-r--r-- 1 root root 366 Apr 24 2020 delete.lua
-rw-r--r-- 1 root root 1171 Apr 24 2020 insert.lua
-rw-r--r-- 1 root root 3004 Apr 24 2020 oltp.lua
-rw-r--r-- 1 root root 368 Apr 24 2020 oltp_simple.lua
-rw-r--r-- 1 root root 527 Apr 24 2020 parallel_prepare.lua
-rw-r--r-- 1 root root 369 Apr 24 2020 select.lua
-rw-r--r-- 1 root root 1448 Apr 24 2020 select_random_points.lua
-rw-r--r-- 1 root root 1556 Apr 24 2020 select_random_ranges.lua
-rw-r--r-- 1 root root 369 Apr 24 2020 update_index.lua
-rw-r--r-- 1 root root 578 Apr 24 2020 update_non_index.lua
To see the full option list for a specific Lua workload:
[root@node0 ~]# sysbench /usr/share/sysbench/oltp_insert.lua help
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
oltp_insert.lua options:
--auto_inc[=on|off] Use AUTO_INCREMENT column as Primary Key (for MySQL), or its alternatives in other DBMS. When disabled, use client-generated IDs [on]
--create_secondary[=on|off] Create a secondary index in addition to the PRIMARY KEY [on]
--delete_inserts=N Number of DELETE/INSERT combinations per transaction [1]
--distinct_ranges=N Number of SELECT DISTINCT queries per transaction [1]
--index_updates=N Number of UPDATE index queries per transaction [1]
--mysql_storage_engine=STRING Storage engine, if MySQL is used [innodb]
--non_index_updates=N Number of UPDATE non-index queries per transaction [1]
--order_ranges=N Number of SELECT ORDER BY queries per transaction [1]
--pgsql_variant=STRING Use this PostgreSQL variant when running with the PostgreSQL driver. The only currently supported variant is 'redshift'. When enabled, create_secondary is automatically disabled, and delete_inserts is set to 0
--point_selects=N Number of point SELECT queries per transaction [10]
--range_selects[=on|off] Enable/disable all range SELECT queries [on]
--range_size=N Range size for range SELECT queries [100]
--secondary[=on|off] Use a secondary index in place of the PRIMARY KEY [off]
--simple_ranges=N Number of simple range SELECT queries per transaction [1]
--skip_trx[=on|off] Don't start explicit transactions and execute all queries in the AUTOCOMMIT mode [off]
--sum_ranges=N Number of SELECT SUM() queries per transaction [1]
--table_size=N Number of rows per table [10000]
--tables=N Number of tables [1]
The oltp_write_only.lua, oltp_read_only.lua, and oltp_point_select.lua scripts print the identical option list, since all of the oltp_*.lua workloads share their options through oltp_common.lua.
Prepare for the benchmark
Before running the benchmark, the database should be created and the data tables should be populated.
To create the database:
[root@node0 ~]# cockroach sql --insecure --host=node1:26257 -e 'create database testdb;'
To populate the database:
[root@node0 ~]# sysbench /usr/share/sysbench/oltp_write_only.lua --pgsql-host=node1 --pgsql-port=26257 --pgsql-db=testdb --pgsql-user=tester --pgsql-password= --table_size=100000 --tables=24 --threads=1 --db-driver=pgsql prepare
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Creating table 'sbtest1'...
Inserting 100000 records into 'sbtest1'
Creating a secondary index on 'sbtest1'...
Creating table 'sbtest2'...
Inserting 100000 records into 'sbtest2'
Creating a secondary index on 'sbtest2'...
Creating table 'sbtest3'...
Inserting 100000 records into 'sbtest3'
Creating a secondary index on 'sbtest3'...
Creating table 'sbtest4'...
Inserting 100000 records into 'sbtest4'
Creating a secondary index on 'sbtest4'...
Creating table 'sbtest5'...
Inserting 100000 records into 'sbtest5'
Creating a secondary index on 'sbtest5'...
Creating table 'sbtest6'...
Inserting 100000 records into 'sbtest6'
Creating a secondary index on 'sbtest6'...
Creating table 'sbtest7'...
Inserting 100000 records into 'sbtest7'
Creating a secondary index on 'sbtest7'...
Creating table 'sbtest8'...
Inserting 100000 records into 'sbtest8'
Creating a secondary index on 'sbtest8'...
Creating table 'sbtest9'...
Inserting 100000 records into 'sbtest9'
Creating a secondary index on 'sbtest9'...
Creating table 'sbtest10'...
Inserting 100000 records into 'sbtest10'
Creating a secondary index on 'sbtest10'...
Creating table 'sbtest11'...
Inserting 100000 records into 'sbtest11'
Creating a secondary index on 'sbtest11'...
Creating table 'sbtest12'...
Inserting 100000 records into 'sbtest12'
Creating a secondary index on 'sbtest12'...
Creating table 'sbtest13'...
Inserting 100000 records into 'sbtest13'
Creating a secondary index on 'sbtest13'...
Creating table 'sbtest14'...
Inserting 100000 records into 'sbtest14'
Creating a secondary index on 'sbtest14'...
Creating table 'sbtest15'...
Inserting 100000 records into 'sbtest15'
Creating a secondary index on 'sbtest15'...
Creating table 'sbtest16'...
Inserting 100000 records into 'sbtest16'
Creating a secondary index on 'sbtest16'...
Creating table 'sbtest17'...
Inserting 100000 records into 'sbtest17'
Creating a secondary index on 'sbtest17'...
Creating table 'sbtest18'...
Inserting 100000 records into 'sbtest18'
Creating a secondary index on 'sbtest18'...
Creating table 'sbtest19'...
Inserting 100000 records into 'sbtest19'
Creating a secondary index on 'sbtest19'...
Creating table 'sbtest20'...
Inserting 100000 records into 'sbtest20'
Creating a secondary index on 'sbtest20'...
Creating table 'sbtest21'...
Inserting 100000 records into 'sbtest21'
Creating a secondary index on 'sbtest21'...
Creating table 'sbtest22'...
Inserting 100000 records into 'sbtest22'
Creating a secondary index on 'sbtest22'...
Creating table 'sbtest23'...
Inserting 100000 records into 'sbtest23'
Creating a secondary index on 'sbtest23'...
Creating table 'sbtest24'...
Inserting 100000 records into 'sbtest24'
To verify from the psql session that the tables were created and populated:
testdb=> show tables from testdb;
schema_name | table_name | type | owner | estimated_row_count | locality
-------------+------------+-------+--------+---------------------+----------
public | sbtest1 | table | tester | 100000 |
public | sbtest10 | table | tester | 100000 |
public | sbtest11 | table | tester | 100000 |
public | sbtest12 | table | tester | 100000 |
public | sbtest13 | table | tester | 100000 |
public | sbtest14 | table | tester | 100000 |
public | sbtest15 | table | tester | 100000 |
public | sbtest16 | table | tester | 100000 |
public | sbtest17 | table | tester | 100000 |
public | sbtest18 | table | tester | 100000 |
public | sbtest19 | table | tester | 100000 |
public | sbtest2 | table | tester | 100000 |
public | sbtest20 | table | tester | 100000 |
public | sbtest21 | table | tester | 100000 |
public | sbtest22 | table | tester | 100000 |
public | sbtest23 | table | tester | 100000 |
public | sbtest24 | table | tester | 100000 |
public | sbtest3 | table | tester | 100000 |
public | sbtest4 | table | tester | 100000 |
public | sbtest5 | table | tester | 100000 |
public | sbtest6 | table | tester | 100000 |
public | sbtest7 | table | tester | 100000 |
public | sbtest8 | table | tester | 100000 |
public | sbtest9 | table | tester | 100000 |
(24 rows)
To check the approximate logical size of the database (range_size is in bytes, so the result below is in KB, roughly 602 MB):
testdb=> select sum(range_size)/1000 from crdb_internal.ranges where database_name='testdb';
?column?
-----------------------
602418.53400000000000
(1 row)
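As a sanity check on that number, here is a back-of-envelope estimate of the raw data volume. The ~190 bytes per row is an assumption based on the default sbtest schema (id INT, k INT, c CHAR(120), pad CHAR(60)):

```shell
# Rough raw-data estimate for the 24 sysbench tables created above.
# BYTES_PER_ROW=190 is an assumption derived from the default sbtest
# schema; it ignores index and storage-layer overhead.
TABLES=24
ROWS=100000
BYTES_PER_ROW=190
TOTAL_MB=$(( TABLES * ROWS * BYTES_PER_ROW / 1024 / 1024 ))
echo "~${TOTAL_MB} MB raw data"
# prints: ~434 MB raw data
```

The ~602 MB reported by crdb_internal.ranges is in the same ballpark; the gap is plausibly the secondary indexes plus key-value storage overhead.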
The database can also be populated with 2 or more threads; note that this example also uses larger tables (1,000,000 rows) and the root user:
[root@node0 ~]# sysbench /usr/share/sysbench/oltp_write_only.lua --pgsql-host=node1 --pgsql-port=26257 --pgsql-db=testdb --pgsql-user=root --pgsql-password= --table_size=1000000 --tables=24 --threads=2 --db-driver=pgsql prepare
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Initializing worker threads...
Creating table 'sbtest2'...
Creating table 'sbtest1'...
Inserting 1000000 records into 'sbtest1'
Inserting 1000000 records into 'sbtest2'
Creating a secondary index on 'sbtest2'...
Creating table 'sbtest4'...
Inserting 1000000 records into 'sbtest4'
Creating a secondary index on 'sbtest1'...
Creating table 'sbtest3'...
Inserting 1000000 records into 'sbtest3'
Creating a secondary index on 'sbtest4'...
Creating a secondary index on 'sbtest3'...
Creating table 'sbtest6'...
Inserting 1000000 records into 'sbtest6'
Creating table 'sbtest5'...
Inserting 1000000 records into 'sbtest5'
Creating a secondary index on 'sbtest5'...
Creating a secondary index on 'sbtest6'...
Creating table 'sbtest7'...
Inserting 1000000 records into 'sbtest7'
Creating table 'sbtest8'...
Inserting 1000000 records into 'sbtest8'
Creating a secondary index on 'sbtest7'...
Creating a secondary index on 'sbtest8'...
Creating table 'sbtest9'...
Inserting 1000000 records into 'sbtest9'
Creating table 'sbtest10'...
Inserting 1000000 records into 'sbtest10'
Creating a secondary index on 'sbtest10'...
Creating table 'sbtest12'...
Inserting 1000000 records into 'sbtest12'
Creating a secondary index on 'sbtest9'...
Creating table 'sbtest11'...
Inserting 1000000 records into 'sbtest11'
Creating a secondary index on 'sbtest12'...
Creating table 'sbtest14'...
Inserting 1000000 records into 'sbtest14'
Creating a secondary index on 'sbtest11'...
Creating table 'sbtest13'...
Inserting 1000000 records into 'sbtest13'
Creating a secondary index on 'sbtest14'...
Creating table 'sbtest16'...
Inserting 1000000 records into 'sbtest16'
Creating a secondary index on 'sbtest13'...
Creating table 'sbtest15'...
Inserting 1000000 records into 'sbtest15'
Creating a secondary index on 'sbtest16'...
Creating table 'sbtest18'...
Inserting 1000000 records into 'sbtest18'
Creating a secondary index on 'sbtest15'...
Creating table 'sbtest17'...
Inserting 1000000 records into 'sbtest17'
Creating a secondary index on 'sbtest18'...
Creating table 'sbtest20'...
Inserting 1000000 records into 'sbtest20'
Creating a secondary index on 'sbtest17'...
Creating table 'sbtest19'...
Inserting 1000000 records into 'sbtest19'
Creating a secondary index on 'sbtest20'...
Creating table 'sbtest22'...
Inserting 1000000 records into 'sbtest22'
Creating a secondary index on 'sbtest19'...
Creating table 'sbtest21'...
Inserting 1000000 records into 'sbtest21'
Creating a secondary index on 'sbtest22'...
Creating table 'sbtest24'...
Inserting 1000000 records into 'sbtest24'
Creating a secondary index on 'sbtest21'...
Creating table 'sbtest23'...
Inserting 1000000 records into 'sbtest23'
Creating a secondary index on 'sbtest24'...
Creating a secondary index on 'sbtest23'...
Run sysbench
Now we can run sysbench against the populated database. Increasing the thread count puts more load on the database, which helps measure the system's performance limit.
[root@node0 ~]# sysbench /usr/share/sysbench/oltp_write_only.lua --pgsql-host=node1 --pgsql-port=26257 --pgsql-db=testdb --pgsql-user=tester --pgsql-password= --table_size=100000 --tables=24 --threads=1 --time=60 --report-interval=10 --db-driver=pgsql run
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Report intermediate results every 10 second(s)
Initializing random number generator from current time
Initializing worker threads...
Threads started!
[ 10s ] thds: 1 tps: 80.86 qps: 485.66 (r/w/o: 0.00/84.66/401.00) lat (ms,95%): 20.37 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 1 tps: 82.60 qps: 495.41 (r/w/o: 0.00/92.00/403.41) lat (ms,95%): 20.74 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 1 tps: 81.00 qps: 485.79 (r/w/o: 0.00/94.10/391.69) lat (ms,95%): 20.74 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 1 tps: 81.00 qps: 486.21 (r/w/o: 0.00/102.90/383.31) lat (ms,95%): 20.37 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 1 tps: 79.20 qps: 474.90 (r/w/o: 0.00/103.10/371.80) lat (ms,95%): 21.50 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 1 tps: 78.20 qps: 469.39 (r/w/o: 0.00/107.30/362.09) lat (ms,95%): 21.11 err/s: 0.00 reconn/s: 0.00
SQL statistics:
queries performed:
read: 0
write: 5842
other: 23138
total: 28980
transactions: 4830 (80.48 per sec.)
queries: 28980 (482.90 per sec.)
ignored errors: 0 (0.00 per sec.)
reconnects: 0 (0.00 per sec.)
General statistics:
total time: 60.0081s
total number of events: 4830
Latency (ms):
min: 6.87
avg: 12.42
max: 32.90
95th percentile: 21.11
sum: 59978.07
Threads fairness:
events (avg/stddev): 4830.0000/0.00
execution time (avg/stddev): 59.9781/0.00
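To probe that performance limit, the same run can be repeated across a range of thread counts. A sketch (with DRY_RUN=1 it only prints the commands; unset DRY_RUN to execute them against a live cluster, where the grep pulls the transactions line out of each summary):

```shell
# Sweep thread counts to find where write throughput stops scaling.
# DRY_RUN=1 prints each command instead of running it; the connection
# and sizing flags match the earlier examples in this article.
DRY_RUN=1
for t in 1 2 4 8 16 32; do
  cmd="sysbench /usr/share/sysbench/oltp_write_only.lua --db-driver=pgsql"
  cmd="$cmd --pgsql-host=node1 --pgsql-port=26257 --pgsql-db=testdb"
  cmd="$cmd --pgsql-user=tester --pgsql-password="
  cmd="$cmd --table_size=100000 --tables=24 --threads=$t --time=60 run"
  if [ -n "$DRY_RUN" ]; then
    echo "$cmd"
  else
    eval "$cmd" | grep 'transactions:'
  fi
done
```

Plotting tps against the thread count shows where throughput flattens out, which is the saturation point of the cluster for this workload.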
Clean up the database
Once the sysbench run is done, we can clean up the test data as shown below.
[root@node0 ~]# sysbench /usr/share/sysbench/oltp_write_only.lua --pgsql-host=node1 --pgsql-port=26257 --pgsql-db=testdb --pgsql-user=tester --pgsql-password= --table_size=100000 --tables=24 --threads=1 --db-driver=pgsql cleanup
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Dropping table 'sbtest1'...
Dropping table 'sbtest2'...
Dropping table 'sbtest3'...
Dropping table 'sbtest4'...
Dropping table 'sbtest5'...
Dropping table 'sbtest6'...
Dropping table 'sbtest7'...
Dropping table 'sbtest8'...
Dropping table 'sbtest9'...
Dropping table 'sbtest10'...
Dropping table 'sbtest11'...
Dropping table 'sbtest12'...
Dropping table 'sbtest13'...
Dropping table 'sbtest14'...
Dropping table 'sbtest15'...
Dropping table 'sbtest16'...
Dropping table 'sbtest17'...
Dropping table 'sbtest18'...
Dropping table 'sbtest19'...
Dropping table 'sbtest20'...
Dropping table 'sbtest21'...
Dropping table 'sbtest22'...
Dropping table 'sbtest23'...
Dropping table 'sbtest24'...
To verify from the psql session that the tables are gone:
testdb=> show tables from testdb;
schema_name | table_name | type | owner | estimated_row_count | locality
-------------+------------+------+-------+---------------------+----------
(0 rows)
[root@node0 ~]# cockroach sql --insecure --host=node1:26257 -e "drop database testdb"
References
- https://github.com/akopytov/sysbench
- https://www.cockroachlabs.com/blog/why-postgres/
- https://www.cockroachlabs.com/compare/cockroachdb-vs-postgresql/
- https://www.cockroachlabs.com/blog/postgresql-vs-cockroachdb/
- https://www.cockroachlabs.com/blog/unpacking-competitive-benchmarks/
- https://blog.purestorage.com/purely-informational/how-to-benchmark-mysql-performance/