
ClickHouse Explained

I clickhouse-introduction

ClickHouse is a columnar database management system (DBMS: Database Management System) for online analytical processing (OLAP: Online Analytical Processing), open-sourced by Russia's Yandex in 2016. It is mainly used for online analytical (OLAP) queries and can generate analytical data reports in real time using SQL. The full name is Click Stream, Data WareHouse, abbreviated as ClickHouse.

ClickHouse is a fully columnar, distributed DBMS: tables and databases can be created, data loaded and queries run at runtime without reconfiguring and restarting the server. It supports linear scaling, is simple and convenient to use, and offers high reliability and fault tolerance. It does not rely on the Hadoop ecosystem for big data; instead it uses locally attached storage, so its I/O path avoids the limitations of a Hadoop stack. Thanks to linear scalability, reliability guarantees and native support for a shard + replication scheme, it can be deployed at considerable scale in production. It also exposes a SQL interface directly and ships with a rich set of native clients.

1 Advantages

  1. Flexible MPP architecture: supports linear scaling, is simple and convenient, and highly reliable

  2. Multi-server distributed data processing; a complete DBMS

  3. Columnar storage at the bottom layer, with support for compression and optimization of data storage, indexes and the underlying storage

  4. Fault-tolerant and fast: reportedly 5x faster than Vertica, 279x faster than Hive and 800x faster than MySQL; it can process data at the billion-row level

  5. Versatile: supports statistical analysis for all kinds of scenarios, SQL-like queries, and off-site replicated deployment

    Massive data storage, distributed computing, lightning-fast performance, near real-time data analysis, friendly SQL syntax and excellent function support.

2 Disadvantages

  1. No transaction support; no true row-level deletes/updates (only batch mutations)
  2. Not suited to high concurrency: the official recommendation is about 100 QPS; the number of connections can be raised in the configuration file, but only if the server is powerful enough
  3. Secondary indexes are not supported
  4. Not good at multi-table joins; prefer large wide tables
  5. Metadata management requires human intervention
  6. Write in batches of more than 1000 rows; avoid row-by-row inserts and small-batch insert, update and delete operations

3 Application Scenarios

1. The vast majority of requests are for read access, requiring real-time return results
2. Data needs to be updated in a large batch (>1000 rows) rather than a single row; or no update operation at all
3. Data is just added to the database and there is no need to modify it
4. When reading data, a large number of rows are extracted from the database, but only a small number of columns are used
5. The table is "wide", i.e. it contains a large number of columns.
6. Relatively low query frequency (typically hundreds of queries per second or less per server)
7. For simple queries, a delay of about 50 milliseconds is allowed
8. Column values are relatively small values and short strings (e.g., only 60 bytes per URL)
9. High throughput is required when processing a single query (up to billions of rows per second per server)
10. No need for transactions
11. Low requirements for data consistency (the ACID properties: atomicity, consistency, isolation, durability)
12. Each query reads only one large table; all other tables involved are small
13. The query result is significantly smaller than the source data, i.e. the data is filtered or aggregated, and the result fits within a single server's memory

4 Core concepts

1) Data slicing

Data sharding is the horizontal splitting of data. It is an effective way to overcome storage and query bottlenecks when facing massive data, and is one embodiment of the divide-and-conquer idea. ClickHouse supports sharding, and sharding relies on clusters. Each cluster consists of one or more shards, and each shard corresponds to one ClickHouse service node; the maximum number of shards therefore depends on the number of nodes (one shard can only map to one service node). ClickHouse does not offer the highly automated sharding found in some other distributed systems. Instead it provides the concepts of the Local Table and the Distributed Table. A local table is equivalent to one data shard. A distributed table stores no data itself; it is an access proxy over the local tables, similar in role to sub-database/sub-table middleware. Through the distributed table, access to multiple data shards can be proxied, which realizes distributed queries. The design is similar to database sharding and is very flexible. For example, when a business system first goes live and the data volume is low, a single local table (a single data shard) on one node is enough; as the business grows and the data volume increases, new data shards are added and the distributed table is used for distributed queries. It is like a racing car with a manual gearbox: all the options are in the hands of the user! A minimal sketch of the pattern is shown below.
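A minimal sketch of the local table + distributed table pattern (assuming a cluster named my_cluster is defined in the server configuration; the table and column names here are illustrative only):

-- one local table per node; each holds one shard of the data
CREATE TABLE tb_user_local ON CLUSTER my_cluster
(
    id   UInt32,
    name String
)
ENGINE = MergeTree()
ORDER BY id ;

-- the distributed table stores nothing itself; it proxies queries to every shard
CREATE TABLE tb_user_all ON CLUSTER my_cluster AS tb_user_local
ENGINE = Distributed(my_cluster, default, tb_user_local, rand()) ;

-- query the proxy, and the work fans out to all local tables
SELECT count() FROM tb_user_all ;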

2) Columnar storage


(1) As mentioned earlier, analysis scenarios often read a large number of rows but only a few columns. In row-oriented storage, data is stored contiguously by row and all columns live in the same block, so columns that do not participate in the computation are still read during I/O, and reads are seriously amplified. In column-oriented storage, only the columns involved in the calculation are read, which greatly reduces I/O cost and speeds up the query.

(2) Data in the same column belongs to the same type, so compression is very effective. Columns often achieve compression ratios of ten times or more, saving a lot of storage space and reducing storage cost.

(3) A higher compression ratio means a smaller data size, so reading the corresponding data from disk takes less time.

(4) Free choice of compression algorithm. Different columns have different data types and suit different compression algorithms; the most appropriate algorithm can be chosen per column type.

(5) A high compression ratio also means the same amount of memory can hold more data, so the system cache works better.

Official figures show that, with columnar storage, certain analysis scenarios can see a 100x or even higher speed-up.

(Figures: row-oriented vs. column-oriented storage layout)

3) Vectorization

ClickHouse not only stores data by column, it also computes by column. Traditional OLTP databases usually compute row by row, because transaction processing is dominated by point lookups and the amount of computation per SQL statement is small, so the benefit of vectorization is limited. In analytical scenarios, however, a single SQL statement can involve an enormous amount of computation, and treating each row as the basic processing unit causes serious performance loss:
(1) a function is called for every row of data, so function-call overhead accounts for a high percentage of the time;
(2) the storage layer stores and organizes data by column in memory, but a row-wise compute layer cannot make full use of the CPU cache's prefetching, leading to heavy CPU cache misses;
(3) row-by-row processing cannot exploit efficient SIMD instructions.
ClickHouse implements a vectorized execution engine: for columnar data in memory, one SIMD instruction is issued per batch instead of per row. This reduces the number of function calls and cache misses and fully exploits the parallelism of SIMD instructions, dramatically cutting computation time. A vectorized execution engine usually brings a several-fold performance improvement.
(SIMD stands for Single Instruction, Multiple Data: multiple operands are packed into large processor registers and the same instruction is executed on all of them at the same time.)

4) Table

The table is the conceptual view of the upper-level data, covering the basic structure of the table and its data.

5) Partitioning

ClickHouse supports a PARTITION BY clause: when creating a table, any legal expression can be specified to partition the data, for example toYYYYMM() partitions data by month, toMonday() partitions it by day of the week, and an Enum column can use each of its values directly as a partition. Partitioning lets a batch of data be managed and maintained as a unit. A minimal sketch follows.
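A minimal sketch (the table name tb_events is illustrative; toYYYYMM(ctime) is one of the partition expressions mentioned above):

create table tb_events(
    id    UInt32 ,
    ctime DateTime
)engine = MergeTree()
partition by toYYYYMM(ctime)
order by id ;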

6) Replicas

A replica is a stored copy of the data, used to achieve high availability in cluster mode; it can simply be understood as a backup of the same data. Through replicas, ClickHouse guarantees data reliability, and the extra copies also increase its query concurrency. There are generally two approaches: (1) ZooKeeper-based table replication; (2) cluster-based replication. Since the recommended write path is to write to local tables and writing to distributed tables is forbidden, only the ZooKeeper-based table replication scheme is considered for replicated tables here. A minimal sketch is shown below.
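A minimal sketch of a ZooKeeper-based replicated table (this assumes ZooKeeper is configured for the cluster and that a {replica} macro is defined in each node's configuration; the ZooKeeper path and table name are illustrative):

create table tb_user_replica(
    id   UInt32 ,
    name String
)engine = ReplicatedMergeTree('/clickhouse/tables/01/tb_user_replica', '{replica}')
order by id ;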

7) Engine (an engine must be specified)

Different engines determine the storage characteristics, location and behavior of table data.

  1. Decide where and how the table will be stored

  2. What queries are supported and how

  3. Concurrent data access

  4. Use of Indexes

  5. Whether multi-threaded requests can be performed

  6. Whether or not a copy of the data is stored

  7. Whether concurrent operations are supported, e.g. insert into tb_x select * from tb_x ;

The table engine determines how data is stored in the file system. The commonly used and officially recommended engines are the MergeTree family; if replicas of the data are needed, the ReplicatedMergeTree family (the replicated counterpart of MergeTree) can be used. Reading data across the cluster requires the Distributed table engine.

II clickhouse-deployment

ClickHouse supports Linux operating systems running on the major 64-bit CPU architectures (x86, AArch and PowerPC) and can be installed in several ways: compiling from source, pre-compiled packages, Docker images and RPM packages.

1 Single-node deployment

sudo yum  -y install yum-utils
sudo rpm --import /
sudo yum-config-manager --add-repo /rpm/
sudo yum -y  install clickhouse-server clickhouse-client

sudo /etc//clickhouse-server start   # start the service



  • Launching the Interactive Client
[root@ck1 /]# clickhouse-client  -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.
ck1 :) show  databases ;
2) Introduction to the CK directories

The program automatically builds a full set of directory structures during installation, each of which is described next.

(1) /etc/clickhouse-server:

The server-side configuration file directory, including global configuration and user configuration.


(2)/var/lib/clickhouse:

Default data storage directory (it is common to modify the default path configuration to save data to a path mounted on a high-capacity disk).


(3)/var/log/clickhouse-server

Default directory where logs are saved (often the path configuration is modified to save logs to a path mounted on a high-capacity disk).


(4) /usr/bin is added to the system environment variables by default.

find ./ -name "clickhouse*"
clickhouse: the main program executable.
clickhouse-client: a soft link to the ClickHouse executable for the client to connect to.
clickhouse-server: a soft link to the ClickHouse executable for server-side startup.
clickhouse-compressor: a built-in compression utility that can be used to compress and decompress data.
3) Startup

Before starting, it is recommended to modify the core ClickHouse parameters first: open the configuration file and change the path where data is saved:

/etc/clickhouse-server
[root@linux03 clickhouse-server]# ll
total 44
-rw-r--r--. 1 root root 33742 Dec  8 20:47 
-rw-r--r--. 1 root root  5587 Oct  5  2020 


​ vi

<path>/chbase/data/</path> 
<tmp_path>/chbase/data/tmp/</tmp_path> 
<user_files_path>/chbase/data/user_files/</user_files_path>

ClickHouse's underlying access interfaces support the TCP and HTTP protocols. TCP has better performance; its default port is 9000 and it is mainly used for communication inside a cluster and by the CLI client. HTTP has better compatibility and can be used widely as a REST-style service from programming languages such as Java and Python; its default port is 8123. In general, users are not advised to access ClickHouse directly through the underlying interfaces; the recommended approach is via the CLI and the JDBC wrapper interfaces, because they are simpler and easier to use. A sketch of both interfaces follows.
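A quick sketch of both interfaces (host and ports are the defaults mentioned above):

# TCP, port 9000, via the CLI client
clickhouse-client -q 'SELECT 1'
# HTTP, port 8123, via curl
curl 'http://localhost:8123/?query=SELECT%201'
echo 'SELECT version()' | curl 'http://localhost:8123/' --data-binary @-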

  • Starting services
clickhouse-server start  
netstat -nltp | grep 9000
tcp6       0      0 :::9000                 :::*                    LISTEN      1354/clickhouse-ser 

Interactive Client

clickhouse-client -u default --password -m

  • -h
  • --port
  • -m Multi-line queries can be executed in the interactive client.
  • -q
# clickhouse-client  -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.
linux03 :) 

(1) --host/-h: the server address; the default is localhost. If listen_host has been modified in the configuration, this parameter is needed to specify the server address.

(2) --port: the server's TCP port; the default is 9000. If tcp_port has been modified, this parameter is needed to specify it.

(3) --user/-u: the login user name; the default is default. If you log in with a user other than default, specify it with this parameter. Custom users are introduced later.

(4) --password: the login password; empty by default. If no password is set in the user definition, it is not required (e.g. for the default user).

(5) --database/-d: the login database; the default is default.

(6) --query/-q: can only be used in non-interactive queries, to specify the SQL statement.

(7) --multiquery/-n: allows multiple SQL statements, separated by semicolons, to be run at once in non-interactive execution.

(8) --time/-t: in non-interactive execution, prints the execution time of each SQL statement.

  • Non-Interactive Clients

The non-interactive mode executes once and exits without entering the interactive client; it is very convenient for testing, data import and data export!

clickhouse-client  -n -q
clickhouse-client  -q -n  'show databases; use test1;' ;
-n  supports executing multiple SQL statements at once, separated by semicolons
-q  executes the given SQL statement

Importing and exporting data can also be done with this method alone! A sketch follows; we will cover it in more detail later.
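A minimal sketch of export and import with the non-interactive client (test_load1 is the example table used later in this post; the file name is illustrative, and --password is needed if the user has one):

# export query results to a CSV file
clickhouse-client -q 'select * from default.test_load1 format CSV' > user_export.csv
# import the CSV file back into a table
clickhouse-client -q 'insert into default.test_load1 format CSV' < user_export.csv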

  • clickhouse-client -h linux01 error connection refused
vi
<listen_host>::</listen_host>
service clickhouse-server restart   # restart the service
clickhouse-client -h linux01

2 Cluster Deployment

/clickhouse/rpm/testing/x86_64/

The rpm binary can be downloaded locally and installed using the rpm command.

Install ZooKeeper on each node

III clickhouse basics to get started

Basic Grammar Demonstration

-- show databases ;
-- create database if not exists test1 ;
-- use test1 ;
-- select currentDatabase() ;
-- drop database test1 ;

1 Data type

Note that keywords are strictly case-sensitive in CK

When you create a table, you usually have to specify the engine!

create table tb_test1(
id Int8 ,
 name String 
)engine=Memory;
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id   │ Int8   │              │                    │         │                  │                │
│ name │ String │              │                    │         │                  │                │
└──────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘

1.1 Numerical types

1) IntX and UIntX

Previously, we were used to Tinyint, Smallint, Int and Bigint for integer values of different ranges. ClickHouse uses Int8, Int16, Int32 and Int64 for the four sizes of the Int type; the number at the end indicates the size in bits (8 bits = 1 byte).

ClickHouse supports unsigned integers, which are denoted using the prefix U


create table test_int(
    id Int8 ,
    age UInt8 ,
    cdId Int32
)engine=Memory ;
2) FloatX


Note: as with floating-point types elsewhere, this type can have precision problems when the data requires high accuracy!

select 8.0/0   --> inf   positive infinity

select -8.0/0  --> -inf  negative infinity

select 0/0     --> nan   not a number

3) Decimal

For higher-precision numerical operations, fixed-point numbers are required. ClickHouse provides fixed-point numbers at three precisions: Decimal32, Decimal64 and Decimal128. They can be declared in two forms: the shorthand Decimal32(S), Decimal64(S) and Decimal128(S), or the native form Decimal(P, S), where:

- P stands for precision and determines the total number of digits (integer part + fractional part), ranging from 1 to 38;
- S stands for scale and determines the number of decimal places, ranging from 0 to P.


When two fixed-point numbers of different precision take part in arithmetic, their scale S changes.

For addition, S takes the maximum. For example, adding toDecimal64(2,4) to toDecimal32(2,2) gives S = 4.

For subtraction, S likewise takes the maximum: subtracting toDecimal32(2,2) from toDecimal64(2,4) gives S = 4.

For multiplication, S takes the sum: multiplying toDecimal64(2,4) with toDecimal32(2,2) gives S = 4 + 2 = 6.

For division, S takes the maximum: dividing toDecimal64(2,4) by toDecimal32(2,2) gives S = 4, but make sure that the S of the dividend is not smaller than the S of the divisor, or an error is raised! A sketch of these rules is shown below.
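A sketch of the scale rules above (toTypeName shows the resulting type; the exact precision P of the result depends on the operand sizes):

-- addition / subtraction: S = max(4, 2) = 4
select toDecimal64(2,4) + toDecimal32(2,2) as a , toTypeName(a) ;
-- multiplication: S = 4 + 2 = 6
select toDecimal64(2,4) * toDecimal32(2,2) as b , toTypeName(b) ;
-- division: S = 4 (the dividend's S must not be smaller than the divisor's S)
select toDecimal64(2,4) / toDecimal32(2,2) as c , toTypeName(c) ;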


create table test_decimal(
id Int8 ,
sal Decimal(5,2)   -- 5 total digits, 2 of which are decimal places
)engine=Memory ;

1.2 String types

String types are subdivided into String, FixedString and UUID. Judging from the naming, they look less like types provided by a database and more like a programming-language design, and indeed ClickHouse's syntax has the characteristics of a programming language (data + operations).

1) String

Strings are defined with String and have no length limit, so there is no need to declare a size when using String. It completely replaces the traditional database character types such as Varchar, Text, Clob and Blob. The String type does not restrict the character set, because it has no such concept, so strings in any encoding can be stored in it. However, for the sake of uniform and maintainable programs, the same encoding should be used throughout a project; for example, "always use UTF-8" is a good convention. In this way we do not need to worry about encodings and garbled characters when working with the data.

2) FixedString

The FixedString type is somewhat similar to the traditional Char type: a fixed-length string can be used where the values have an explicit length. Fixed-length strings are declared via FixedString(N), where N denotes the string length. Unlike Char, however,

FixedString pads the end with null bytes, whereas Char usually pads with spaces. For example, the string 'abc' is only 3 bytes long, but stored as FixedString(5) its length is 5, because 2 null bytes are padded at the end (see the sketch after the table definition below).

create table test_str(
    name String ,
    job FixedString(4)   -- up to 4 characters
)engine=Memory ;
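A small sketch showing the padding (toFixedString converts a String to FixedString(N)):

select toFixedString('abc', 5) as s , length(s) ;   -- length(s) = 5: two null bytes are padded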
3) UUID

UUID is a common primary-key type in databases and is available directly as a data type in ClickHouse. A UUID has 32 hexadecimal digits in the 8-4-4-4-12 format. If a UUID column is not assigned a value when data is written, it is filled with zeros according to that format.

CREATE TABLE test_uuid
(
    `uid` UUID,
    `name` String
)
ENGINE = Log ;
DESCRIBE TABLE test_uuid

┌─name─┬─type───┬
│ uid  │ UUID   │
│ name │ String │ 
└──────┴────────┴
insert into test_uuid select generateUUIDv4() , 'zss' ;
insert into test_uuid  values (generateUUIDv4() , 'zss') ;
select * from test_uuid ;
┌──────────────────────────────────uid─┬─name─┐
│ 47e39e22-d2d6-46fd-8014-7cd3321f4c7b │ zss  │
└──────────────────────────────────────┴──────┘

-- if a field of type UUID is not assigned a value, it defaults to all zeros:
insert into test_uuid (name) values('hangge') ;
┌──────────────────────────────────uid─┬─name─┐
│ 47e39e22-d2d6-46fd-8014-7cd3321f4c7b │ zss  │
└──────────────────────────────────────┴──────┘
┌──────────────────────────────────uid─┬─name───┐
│ 00000000-0000-0000-0000-000000000000 │ hangge │
└──────────────────────────────────────┴────────┘

1.3 Time type

1) Date

The Date type does not contain specific time information; it is accurate only to the day and supports writing in string form:

CREATE TABLE test_date
(
    `id` int,
    `cd` Date
)
ENGINE = Memory ;
DESCRIBE TABLE test_date  ;
┌─name─┬─type──┬
│ id   │ Int32 │
│ cd   │ Date  │
└──────┴───────┴
insert into test_date values(1,'2021-09-11'),(2,now()) ;
select id , cd from test_date ;

┌─id─┬─────────cd─┐
│  1 │ 2021-09-11 │
│  2 │ 2021-05-17 │
└────┴────────────┘
2) DateTime

The DateTime type contains hour, minute, and second information, accurate to the second, and supports writing in string form:

create table testDataTime(ctime DateTime) engine=Memory ;
insert into testDataTime values('2021-12-27 01:11:12'),(now()) ;
select * from testDataTime ;
3)DateTime64

DateTime64 can record sub-seconds, which adds precision settings on top of DateTime.

-- table building
CREATE TABLE test_date_time64
(
    `ctime` DateTime64
)
ENGINE = Memory ;
-- table building
CREATE TABLE test_date_time64_2
(
    `ctime` DateTime64(2)
)
ENGINE = Memory ;
-- insert data separately
insert into test_date_time64 values('2021-11-11 11:11:11'),(now()) ;
insert into test_date_time64_2 values('2021-11-11 11:11:11'),(now()) ;
-- Query data
SELECT *
FROM test_date_time64;
┌───────────────────ctime─┐
│ 2021-11-11 11:11:11.000 │
│ 2021-05-17 10:40:51.000 │
└─────────────────────────┘
SELECT 
    *, toTypeName(ctime)
FROM test_date_time64

┌───────────────────ctime─┬─toTypeName(ctime)─┐
│ 2021-11-11 11:11:11.000 │ DateTime64(3)     │
│ 2021-05-17 10:40:51.000 │ DateTime64(3)     │
└─────────────────────────┴───────────────────┘
SELECT 
    *, toTypeName(ctime)
FROM test_date_time64_2

┌──────────────────ctime─┬─toTypeName(ctime)─┐
│ 2021-11-11 11:11:11.00 │ DateTime64(2)     │
│ 2021-05-17 10:41:26.00 │ DateTime64(2)     │
└────────────────────────┴───────────────────┘

1.4 Complex types

1) Enum

ClickHouse supports enumerations, a data type often used when defining constants. ClickHouse provides two enumeration types, Enum8 and Enum16, which differ only in their value range. An enumeration is defined with (String:Int) Key/Value pairs, so Enum8 and Enum16 correspond to (String:Int8) and (String:Int16) respectively!

create table test_enum(id Int8 , color Enum('red'=1 , 'green'=2 , 'blue'=3)) engine=Memory ;
insert into  test_enum values(1,'red'),(1,'red'),(2,'green');
Data can also be inserted using the Int value:
insert into test_enum values(3,3) ;

There are a few things to keep in mind when defining an enumeration. First, Keys and Values must not repeat, to guarantee uniqueness. Second, neither Key nor Value may be Null, but a Key is allowed to be the empty string. When writing enumeration data, only the Key string part is used.

Note: we could actually use plain strings instead of enumerations to store the data, so why use an enumeration? For performance. Although the enumeration Key is defined as a String, all subsequent operations on the enumeration (sorting, grouping, deduplication, filtering and so on) use the Int Value, which makes data processing more efficient!

  • Restricts the values of the enumerated field
  • The bottom layer stores the corresponding Int value, using less storage space
  • A plain String could be used instead, but then values are not restricted
  • When inserting data, either the Key string or the corresponding Int value can be used; see the sketch below
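A small sketch showing that the stored value is the Int code (using the test_enum table above; CAST exposes the underlying value):

select id , color , cast(color , 'Int8') as color_code , toTypeName(color) from test_enum ;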
2) Array(T)

CK supports arrays as a composite data type, which is very convenient for later data analysis! Arrays can be defined in two ways: array(T) or [e1, e2, ...]; the data types of the elements in an array must be consistent.

Definition of an array
[1,2,3,4,5]
    array('a' , 'b' , 'c')
[1,2,3,'hello']   -- Error
create table test_array(
id Int8 ,
hobby Array(String)
)engine=Memory ;
insert into test_array values(1,['eat','drink','la']),(2,array('sleep','palyg','sql'));
┌─id─┬─hobby───────────────────┐
│  1 │ ['eat','drink','la']    │
│  2 │ ['sleep','palyg','sql'] │
└────┴─────────────────────────┘
select id , hobby  , toTypeName(hobby) from test_array ;
┌─id─┬─hobby───────────────────┬─toTypeName(hobby)─┐
│  1 │ ['eat','drink','la']    │ Array(String)     │
│  2 │ ['sleep','palyg','sql'] │ Array(String)     │
└────┴─────────────────────────┴───────────────────┘
 select id , hobby[2]  , toTypeName(hobby) from test_array ; -- element access: hobby[index], 1-based
 select * , hobby[1] , length(hobby) from test_array ;  -- length(arr) returns the length of the array
3) Tuple

In Java, to encapsulate a user's basic information (id, name, age, gender),

you would create a POJO/JavaBean class such as UserBean, set the field values into it, and then access individual properties by name.

With a tuple such as Tuple4(1,'zss',23,'M'), the data is manipulated by taking the value at a given position, e.g. _2.

(2,'lss',24,'F')

A tuple consists of 1 to n elements; each element may have a different data type, and the types are not required to be compatible with each other. Tuples also support type inference, which again follows the principle of the smallest storage cost. Like arrays, tuples can be defined in two ways: the regular form tuple(T) or the shorthand (…). A tuple can store values of several data types, but pay attention to the order of the data types.

tuple(…)

(…)

col Tuple(Int8 , String …)

('' , '')  a two-element tuple is a pairwise entry --> map

select tuple(1,'asb',12.23) as x , toTypeName(x) ;
 ┌─x───────────────┬─toTypeName(tuple(1, 'asb', 12.23))─┐
│ (1,'asb',12.23) │ Tuple(UInt8, String, Float64)      │
└─────────────────┴────────────────────────────────────┘
--- Abbreviated form
SELECT 
    (1, 'asb', 12.23) AS x,
    toTypeName(x)

┌─x───────────────┬─toTypeName(tuple(1, 'asb', 12.23))─┐
│ (1,'asb',12.23) │ Tuple(UInt8, String, Float64)      │
└─────────────────┴────────────────────────────────────┘
Note: If you use a tuple when building a table, you need to specify the data type of the tuple.
CREATE TABLE test_tuple ( 
c1 Tuple(UInt8, String, Float64) 
) ENGINE = Memory; 
  • (1,2,3,'abc')
  • tuple(1,2,3,'abc')
  • col Tuple(Int8,Int8,String) - declares the tuple's element types
  • tuple(1,'zss',12.12)
select tupleElement(c1 , 2)   from test_tuple;  -- Get the value at the specified position in the tuple
4) Nested

Nested is a nested table structure. A data table may define any number of Nested-type fields, but each field supports only one level of nesting, i.e. you cannot use another nested type inside a nested table. For simple hierarchical or association relationships, nested types are a good choice.

create table test_nested(
    uid Int8 ,
    name String ,
    props Nested(
        pid Int8,
        pnames String ,
        pvalues String
    )
)engine = Memory ;
desc test_nested ;
┌─name──────────┬─type──────────┬
│ uid           │ Int8          │
│ name          │ String        │
│ props.pid     │ Array(Int8)   │
│ props.pnames  │ Array(String) │
│ props.pvalues │ Array(String) │
└───────────────┴───────────────┴

A nested type is essentially a multidimensional array structure. Each field in a nested table is an array, and the array lengths do not have to be aligned across rows. Note, however, that within the same row of data the array fields must all have equal length.

insert into test_nested values(1,'hadoop',[1,2,3],['p1','p2','p3'],['v1','v2','v3']);
-- The number of attributes can vary from row to row, but the number of arrays in the Nested type of the current row must be the same.
insert into test_nested values(2,'spark',[1,2],['p1','p2'],['v1','v2']);
SELECT *
FROM test_nested

┌─uid─┬─name───┬─props.pid─┬─props.pnames─────┬─props.pvalues────┐
│   1 │ hadoop │ [1,2,3]   │ ['p1','p2','p3'] │ ['v1','v2','v3'] │
└─────┴────────┴───────────┴──────────────────┴──────────────────┘
┌─uid─┬─name──┬─props.pid─┬─props.pnames─┬─props.pvalues─┐
│   2 │ spark │ [1,2]     │ ['p1','p2']  │ ['v1','v2']   │
└─────┴───────┴───────────┴──────────────┴───────────────┘
SELECT 
    uid,
    name,
    props.pid,
    props.pnames[1]
FROM test_nested;
┌─uid─┬─name───┬─props.pid─┬─arrayElement(props.pnames, 1)─┐
│   1 │ hadoop │ [1,2,3]   │ p1                            │
└─────┴────────┴───────────┴───────────────────────────────┘
┌─uid─┬─name──┬─props.pid─┬─arrayElement(props.pnames, 1)─┐
│   2 │ spark │ [1,2]     │ p1                            │
└─────┴───────┴───────────┴───────────────────────────────┘
create table test_nested(
id Int8 ,
name String  ,
scores Nested(
       seq  UInt8 ,
	   sx Float64 ,
       yy Float64 ,  
	   yw Float64
	   )
)engine = Memory ;

insert into test_nested values (1,'wbb',[1,2,3],[11,12,13],[14,14,11],[77,79,10]);
insert into test_nested values (2,'taoge',[1,2],[99,10],[14,40],[77,11]);
-- Note: within each row the arrays must all have the same length; the lengths may differ between rows

┌─id─┬─name─┬─scores.seq─┬─scores.sx──┬─scores.yy──┬─scores.yw──┐
│  1 │ wbb  │ [1,2,3]    │ [11,12,13] │ [14,14,11] │ [77,79,10] │
└────┴──────┴────────────┴────────────┴────────────┴────────────┘
┌─id─┬─name──┬─scores.seq─┬─scores.sx─┬─scores.yy─┬─scores.yw─┐
│  2 │ taoge │ [1,2]      │ [99,10]   │ [14,40]   │ [77,11]   │
└────┴───────┴────────────┴───────────┴───────────┴───────────┘
SELECT 
    name,
    scores.sx
FROM test_nested;
┌─name─┬─scores.sx──┐
│ wbb  │ [11,12,13] │
└──────┴────────────┘
┌─name──┬─scores.sx─┐
│ taoge │ [99,10]   │
└───────┴───────────┘

The difference from simply using several Array columns is that, within each row of data, the attribute arrays must have consistent lengths.
5) Map

Example scenario (clickEvent): a user opens a page and clicks a button, triggering a click event whose attributes fit naturally in a Map.

set allow_experimental_map_type = 1 ;  -- Enable Map data type
CREATE TABLE test_map (
a Map(String, UInt64)
) ENGINE=Memory;
desc test_map ;

insert into test_map values({'lss':21,'zss':22,'ww':23}) ,({'lss2':21,'zss2':22,'ww2':23});

SELECT
    *,
    mapKeys(a),
    mapValues(a),
    a['lss'],
    length(a)
FROM test_map
A Map collection holds K -> V mapping relationships.
tuple('zss',23)  -- a tuple with exactly two elements is a pairwise K -> V entry
A Map can be seen as a collection of such two-element tuples.
cast(v , dataType)  -- forced type conversion
select cast('21' , 'UInt8')+3 ;
-- zip operation: pair up two arrays into a Map
SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map;
([1, 2, 3], ['Ready', 'Steady', 'Go'])
SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map , mapKeys(map) as ks , mapValues(map) as vs;
6) GEO
  • Point
SET allow_experimental_geo_types = 1;
CREATE TABLE geo_point (p Point) ENGINE = Memory();
INSERT INTO geo_point VALUES((10, 10));
SELECT p, toTypeName(p) FROM geo_point;
┌─p───────┬─toTypeName(p)─┐
│ (10,10) │ Point         │
└─────────┴───────────────┘
  • Ring
SET allow_experimental_geo_types = 1;
CREATE TABLE geo_ring (r Ring) ENGINE = Memory();
INSERT INTO geo_ring VALUES([(0, 0), (10, 0), (10, 10), (0, 10)]);
SELECT r, toTypeName(r) FROM geo_ring;

┌─r─────────────────────────────┬─toTypeName(r)─┐
│ [(0,0),(10,0),(10,10),(0,10)] │ Ring          │
└───────────────────────────────┴───────────────┘
  • Polygon
SET allow_experimental_geo_types = 1;
CREATE TABLE geo_polygon (pg Polygon) ENGINE = Memory();
INSERT INTO geo_polygon VALUES([[(20, 20), (50, 20), (50, 50), (20, 50)], [(30, 30), (50, 50), (50, 30)]]);
SELECT pg, toTypeName(pg) FROM geo_polygon;
  • MultiPolygon
SET allow_experimental_geo_types = 1;
CREATE TABLE geo_multipolygon (mpg MultiPolygon) ENGINE = Memory();
INSERT INTO geo_multipolygon VALUES([[[(0, 0), (10, 0), (10, 10), (0, 10)]], [[(20, 20), (50, 20), (50, 50), (20, 50)],[(30, 30), (50, 50), (50, 30)]]]);
SELECT mpg, toTypeName(mpg) FROM geo_multipolygon;
7)IPV4

Domain types fall into two categories, IPv4 and IPv6, which are essentially further encapsulations of integers and strings. The IPv4 type is based on a UInt32 encapsulation.

(1) For convenience: the IPv4 type supports format checking, so incorrectly formatted IP data cannot be written, for example:

INSERT INTO IP4_TEST VALUES ('', '192.0.0')

Code: 441. DB::Exception: Invalid IPv4 value.

(2) For performance: again taking IPv4 as an example, IPv4 is stored as UInt32, which is more compact than String, takes less space and is faster to query. The IPv6 type is based on FixedString(16) and is used in the same way as IPv4. One more thing to note when using Domain types: although a Domain value looks like a string, it is not a String, so it does not support implicit type conversion. If you want the IP returned as a string, you must explicitly call the IPv4NumToString or IPv6NumToString functions; a sketch follows the example below.

create table test_domain(
id Int8 ,
ip IPv4
)engine=Memory ;
insert  into test_domain values(1,'192.168.133.2') ;
insert  into test_domain values(1,'192.168.133') ;  -- data is validated on insert, so this line reports an error:
-- Exception on client:
-- Code: 441. DB::Exception: Invalid IPv4 value.
-- Connecting to database doit1 at localhost:9000 as user default.
-- Connected to ClickHouse server version 20.8.3 revision 54438.
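A small sketch of the explicit conversion back to a string (using the test_domain table above):

select ip , toTypeName(ip) , IPv4NumToString(ip) as ip_str , toTypeName(ip_str) from test_domain ;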
8) Boolean and Nullable

There is no Boolean type in ck, use 1 and 0 for true and false.

Nullable(T) allows a column to hold NULL; if no value is given, the column is NULL.

create table test_null(
id  Int8 ,
 age Int8
)engine = Memory ;

create table test_null2(
id  Int8 ,
 age Nullable(Int8)
)engine = Memory ;

2 Basic syntax

2.1 DDL Basics

  • Build Table Specify Engine

Currently only the MergeTree, Merge and Distributed table engines support ALTER modifications, so pay attention to the table's engine before performing an alter operation!

CREATE TABLE tb_test1
(
    `id` Int8,
    `name` String
)
ENGINE = Memory ;
-- Only MergeTree supports table structure modification.
-- With MergeTree, ORDER BY serves both purposes: it defines the sort field and, by default, the primary key
CREATE TABLE test_alter1
(
    `id` Int8,
    `name` String
)
ENGINE = MergeTree() 
order by id ;
-- View table build statements View engine type parameter values
show  create table test_alter1 ;
-----------------------------------
CREATE TABLE test_alter1
(
    `id` Int8,
    `name` String
)
ENGINE = MergeTree()
ORDER BY id
-- Settings: index granularity, default 8192
SETTINGS index_granularity = 8192;

  • Modify the table structure
-- View table structure
desc tb_test1 ;
┌─name─┬─type───┬
│ id   │ Int8   │
│ name │ String │
└──────┴────────┴
-- Add field
alter table tb_test1 add column age UInt8 ;-- Error reported because the table engine being modified is an in-memory engine that does not support modification of the table structure.
-- Create a table for the MergeTree engine.
CREATE TABLE tb_test2
(
    `id` Int8,
    `name` String
)
ENGINE = MergeTree()
ORDER BY id ;
┌─name─┬─type───┬
│ id   │ Int8   │
│ name │ String │
└──────┴────────┴
-- Add field
alter table tb_test2 add column age UInt8 ;
┌─name─┬─type───┬
│ id   │ Int8   │
│ name │ String │
│ age  │ UInt8  │
└──────┴────────┴
alter table tb_test2 add column gender String after name ; 
┌─name───┬─type───┬
│ id     │ Int8   │
│ name   │ String │
│ gender │ String │
│ age    │ UInt8  │
└────────┴────────┴
-- Delete field
alter table tb_test2 drop column age ;
-- Modify the data type of a field
alter  table  tb_test2 modify column  gender UInt8 default 0 ;
┌─name───┬─type───┬─default_type─┬─default_expression─┬
│ id     │ Int8   │              │                    │
│ name   │ String │              │                    │
│ gender │ UInt8  │ DEFAULT      │ 0                  │
└────────┴────────┴──────────────┴────────────────────┴
-- As a good programmer, it is a good habit to use comments for table fields, so it is recommended that you use comments to describe the meaning of the field when operating
-- Modify/add comments to fields The encoding used internally is UTF8 by default.
alter table tb_test2 comment column name 'Username' ;
┌─name───┬─type───┬─default_type─┬─default_expression─┬─comment──┬
│ id     │ Int8   │              │                    │          │
│ name   │ String │              │                    │ Username │
│ gender │ UInt8  │ DEFAULT      │ 0                  │          │
└────────┴────────┴──────────────┴────────────────────┴──────────┴
  • Renaming tables

In Linux, the mv command moves a file from location A to location B; if A and B are the same directory, it effectively renames the file. ClickHouse's RENAME query works similarly; the complete syntax of the RENAME statement is shown below:

-- Modification of table names
rename table tb_test1 to t1 ;
-- Modification of multiple table names
rename table tb_test2 to t2 , t1 to tt1 ;
-- Move tables to another database
rename table t2 to test1.t ;
-- View all tables under the database
show tables ;
show tables from db_name ;
  • Setting Table Properties
-- Set default values for columns
create table tb_test3(
   id Int8 ,
    name String comment 'Username' ,
    role String comment 'Role' default 'VIP'
)engine = Log ;
┌─name─┬─type───┬─default_type─┬─default_expression─┬
│ id   │ Int8   │              │                    │
│ name │ String │              │                    │
│ role │ String │ DEFAULT      │ 'VIP'              │
└──────┴────────┴──────────────┴────────────────────┴
insert into tb_test3 (id , name) values(1,'HANGGE') ;
SELECT *
FROM tb_test3 ;
┌─id─┬─name───┬─role─┐
│  1 │ HANGGE │ VIP  │
└────┴────────┴──────┘

2.2 DML Basics

1) Insertion of data

The INSERT statement supports three syntax paradigms, each of which is different and can be used flexibly depending on the needs of the write.

The first way

Conventional syntax using the VALUES format

INSERT INTO [db.]table [(c1, c2, c3…)] VALUES (v11, v12, v13…), (v21, v22, v23…), …

Here c1, c2 and c3 are column declarations and may be omitted. VALUES is followed by the tuples of data to be written, which correspond to the column declarations by position. Batch writes are supported, with multiple rows separated by commas. An example follows.
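A minimal example (assuming a hypothetical table tb_user(id Int8, name String); the values are illustrative):

insert into tb_user (id , name) values (1,'zss') , (2,'lss') ;
insert into tb_user values (3,'ww') ;   -- the column list may be omitted when all columns are given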

The second way


Static data: user.txt
1,zss,23,BJ,M
2,lss,33,NJ,M
3,ww,21,SH,F
create table test_load1(
    id UInt8 ,
    name String ,
    age UInt8 ,
    city String ,
    gender String
)engine=Log ;
-- Importing data into tables
cat user.txt  | clickhouse-client  -q 'insert into default.test_load1 format CSV' --password
clickhouse-client  -q 'insert into default.test_load1 format CSV'  <  user.txt
Both of the above methods can import data into a table
-- we can also specify the field delimiter explicitly
clickhouse-client --format_csv_delimiter='-' -q 'insert into default.test_load1 format CSV'  <   

/docs/en/interfaces/formats/#formats Supported data file formats

1,a,2017-02-05,200
1,a,2017-02-06,300
1,a,2017-02-07,200
1,a,2017-02-08,400
1,a,2017-02-10,600
1,b,2017-02-05,200
1,b,2017-02-06,300
1,b,2017-02-08,200
1,b,2017-02-09,400
1,b,2017-02-10,600
1,c,2017-01-31,200
1,c,2017-02-01,300
1,c,2017-02-02,200
1,c,2017-02-03,400
1,c,2017-02-10,600
1,a,2017-03-01,200
1,a,2017-03-02,300
1,a,2017-03-03,200
1,a,2017-03-04,400
1,a,2017-03-05,600
create table tb_orders(
id Int8 ,
name String ,
cdate Date ,
money Decimal(5,2)
)
engine = MergeTree
primary key id
order by (id , cdate)
partition by name ;
--Import data
cat | clickhouse-client -q 'insert into tb_orders format CSV' --password

┌─id─┬─name─┬──────cdate─┬──money─┐
│ 1 │ c │ 2017-01-31 │ 200.00 │
│ 1 │ c │ 2017-02-01 │ 300.00 │
│ 1 │ c │ 2017-02-02 │ 200.00 │
│ 1 │ c │ 2017-02-03 │ 400.00 │
│ 1 │ c │ 2017-02-10 │ 600.00 │
└────┴──────┴────────────┴────────┘
┌─id─┬─name─┬──────cdate─┬──money─┐
│ 1 │ a │ 2016-01-01 │ 300.00 │
│ 1 │ a │ 2017-02-05 │ 200.00 │
│ 1 │ a │ 2017-02-06 │ 300.00 │
│ 1 │ a │ 2017-02-07 │ 200.00 │
│ 1 │ a │ 2017-02-08 │ 400.00 │
│ 1 │ a │ 2017-02-10 │ 600.00 │
│ 1 │ a │ 2017-03-01 │ 200.00 │
│ 1 │ a │ 2017-03-02 │ 300.00 │
│ 1 │ a │ 2017-03-03 │ 200.00 │
│ 1 │ a │ 2017-03-04 │ 400.00 │
│ 1 │ a │ 2017-03-05 │ 600.00 │
└────┴──────┴────────────┴────────┘
┌─id─┬─name─┬──────cdate─┬──money─┐
│ 1 │ b │ 2017-02-05 │ 200.00 │
│ 1 │ b │ 2017-02-06 │ 300.00 │
│ 1 │ b │ 2017-02-08 │ 200.00 │
│ 1 │ b │ 2017-02-09 │ 400.00 │
│ 1 │ b │ 2017-02-10 │ 600.00 │
│ 1 │ b │ 2020-01-01 │ 300.00 │
└────┴──────┴────────────┴────────┘
Third way

INSERT INTO [db.]table [(c1, c2, c3…)] SELECT …

Although both the VALUES and SELECT forms support expressions and functions in the declared values, expressions and functions incur extra performance overhead and can degrade write performance. So if extreme write performance is sought, they should be avoided where possible.

create table log3 as log2 ;
Insert into log3 select * from log2 ;
create table  tb_res  engine = Log as select * from tb_ds ; 
-- create table tb_name like tb_ds ; -- unsupported

All data manipulation inside ClickHouse is oriented towards Block data blocks, so an INSERT query is ultimately converted into blocks of data. For this reason, an INSERT statement is atomic at the level of a single data block. By default, each data block can hold up to 1048576 rows (controlled by the max_insert_block_size parameter). That is, if an INSERT statement writes fewer than max_insert_block_size rows, that batch of data is written atomically: it either all succeeds or all fails. Note that this atomic write property applies only when the ClickHouse server is processing the data, for example when using the JDBC or HTTP interfaces, because max_insert_block_size does not take effect when writing from the CLI command line or via an INSERT SELECT clause. The current value can be checked as sketched below.
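The current value of the setting can be read from the system.settings table; a minimal sketch:

select name , value from system.settings where name = 'max_insert_block_size' ;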

2) Updating and deleting data

[Generally not done: OLAP workloads are mostly queries]

If the table uses a MergeTree engine:

  1. You can drop a partition and re-import the data.

  2. You can delete data by condition and update data by condition: alter table ... delete / update ... where ...

(these are mutation operations)

Data deletion and update can also be implemented with the special engines ClickHouse provides: CollapsingMergeTree and VersionedCollapsingMergeTree.

ClickHouse provides DELETE and UPDATE capabilities through what is known as a Mutation query, which can be seen as a variant of the ALTER statement. Although mutations can ultimately modify and delete data, they cannot be understood as UPDATE and DELETE in the usual sense; we must be clear about the differences. First, a Mutation statement is a "heavy" operation, better suited to batch modification and deletion. Second, it does not support transactions: once the statement is submitted it immediately affects existing data and cannot be rolled back. Finally, a Mutation executes as an asynchronous background process; the statement returns immediately after submission, which does not mean the work has finished, and the execution progress must be checked through a system table. Note that data modification and deletion require engines of the MergeTree family.

Only data from the MergeTree engine can be modified.
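Because mutations run asynchronously, their progress can be checked through the system.mutations table; a minimal sketch:

select database , table , mutation_id , command , is_done
from system.mutations
where is_done = 0 ;    -- mutations that have not finished yet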

Deleting partition data / modifying the data of an entire partition:
-- Create tables
create table test_muta(
	id UInt8 ,
    name String ,
    city String
)engine=MergeTree() 
partition  by city 
order by id ;
-- Import of data
clickhouse-client  -q 'insert into test_muta format CSV' < data.csv 
-- Delete partition data
alter table test_muta drop  partition 'SH' ;
Conditional deletion of data
alter table test_muta delete where id=3 ;  -- a where condition is mandatory
Conditional Update Data
ALTER TABLE [db_name.]table_name UPDATE column1 = expr1 [, ...] WHERE filter_expr 
ALTER TABLE test_ud
UPDATE name = 'my', job = 'teacher' WHERE id = '2' ; 

alter table test_muta update name='Li Si Si'  where id=3 ;

But note that a where clause must always be specified, otherwise an error is raised; the where condition in this syntax can also be a subquery.

2.3 Partition Table Operation

Currently only the MergeTree family of table engines supports data partitioning. The basic concept and meaning of partitioning are the same as in Hive, so they are not repeated here.

Sorting, merging and deduplication all happen within a partition.

create table test_partition1(
id String , 
ctime DateTime
)engine=MergeTree() 
partition by toYYYYMM(ctime)
order by (id) ;
-- Viewing Table Building Statements
show create table test_partition1;

 CREATE TABLE default.test_partition1
(
    `id` String,
    `ctime` DateTime
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(ctime)
ORDER BY id
SETTINGS index_granularity = 8192  -- Index Granularity Sparse Indexing
-- Insertion of data
insert into test_partition1 values(1,now()) ,(2,'2021-06-11 11:12:13') ;
-- View data
SELECT *
FROM test_partition1 ;
┌─id─┬───────────────ctime─┐
│ 2  │ 2021-06-11 11:12:13 │
└────┴─────────────────────┘
┌─id─┬───────────────ctime─┐
│ 1  │ 2021-05-19 13:38:29 │
└────┴─────────────────────┘
-- View partitions in a table
ClickHouse has a number of built-in system tables for querying its own status information. Among them, the parts system table is specifically used to query the partition information of the data table.
SELECT 
    name,
    table,
    partition
FROM system.parts
WHERE table = 'test_partition1' ;
┌─name─────────┬─table───────────┬─partition─┐
│ 202105_1_1_0 │ test_partition1 │ 202105    │
│ 202106_2_2_0 │ test_partition1 │ 202106    │
└──────────────┴─────────────────┴───────────┘
insert into test_partition1 values(1,now()) ,(2,'2021-06-12 11:12:13') ;

┌─name─────────┬─table───────────┬─partition─┐
│ 202105_1_1_0 │ test_partition1 │ 202105    │
│ 202105_3_3_0 │ test_partition1 │ 202105    │
│ 202106_2_2_0 │ test_partition1 │ 202106    │
│ 202106_4_4_0 │ test_partition1 │ 202106    │
└──────────────┴─────────────────┴───────────┘
-- Deletion of partitions
alter table test_partition1 drop partition '202105' ;
-- after a partition is dropped, all data in that partition is deleted
SELECT 
    name,
    table,
    partition
FROM system.parts
WHERE table = 'test_partition1'
┌─name─────────┬─table───────────┬─partition─┐
│ 202106_2_2_0 │ test_partition1 │ 202106    │
│ 202106_4_4_0 │ test_partition1 │ 202106    │
└──────────────┴─────────────────┴───────────┘
SELECT *
FROM test_partition1
┌─id─┬───────────────ctime─┐
│ 2  │ 2021-06-12 11:12:13 │
└────┴─────────────────────┘
┌─id─┬───────────────ctime─┐
│ 2  │ 2021-06-11 11:12:13 │
└────┴─────────────────────┘

-- Duplicate partitions
create  table  tb_y  as  tb_x ;
clickHouse supports copying partitioned data from table A to table B. This feature can be used in scenarios such as fast data writes, data synchronization between multiple tables, and backups, etc. Its full syntax is as follows:
ALTER TABLE  B  REPLACE PARTITION partition_expr FROM A 
ALTER TABLE test_partition2  REPLACE PARTITION '202107' FROM  test_partition1 ;
However, it is important to note that not just any table can be copied from one to the other; they need to fulfill two prerequisites:
-The two tables need to have the same partition key.
-Their table structures are identical.
create table test_partition2  as  test_partition1 ;
show  create table test_partition2 ;  -- View table creation statements for table 2
CREATE TABLE default.test_partition2
    (
        `id` String,
        `ctime` DateTime
    )
    ENGINE = MergeTree()
    PARTITION BY toYYYYMM(ctime)
    ORDER BY id
    SETTINGS index_granularity = 8192   -- the structure of the two tables is identical
-- Copy partitions from one table to another table
SELECT *
FROM test_partition2
┌─id─┬───────────────ctime─┐
│ 2  │ 2021-06-12 11:12:13 │
└────┴─────────────────────┘
┌─id─┬───────────────ctime─┐
│ 2  │ 2021-06-11 11:12:13 │
└────┴─────────────────────┘
┌─id─┬───────────────ctime─┐
│ 2  │ 2021-06-21 11:12:13 │
└────┴─────────────────────┘
----------------------------
alter table test_partition2  replace  partition '202106' from  test_partition1
alter table test_muta2  replace  partition 'BJ' from  test_muta ;
SELECT 
    name,
    table,
    partition
FROM system.parts
WHERE table = 'test_partition2'
┌─name─────────┬─table───────────┬─partition─┐
│ 202106_2_2_0 │ test_partition2 │ 202106    │
│ 202106_3_3_0 │ test_partition2 │ 202106    │
│ 202106_4_4_0 │ test_partition2 │ 202106    │
└──────────────┴─────────────────┴───────────┘

-- reset partition data
If the data in a column of the table is wrong, it needs to be reset to its initial value. If a default value was declared, the column is reset to that default; if not, the system uses the type's default initial value. This can be done with the following statement:
ALTER TABLE tb_name CLEAR COLUMN column_name IN PARTITION partition_expr ;
Note: Primary key and partition fields cannot be reset.
Example.
alter  table test_rep clear column name in partition '202105' ;
alter  table  test_muta  clear column name in partition 'BJ' ;

-- Unmounting partitions
Table partitions can be unmounted with the DETACH statement. When a partition is unmounted, its physical data is not deleted, but is moved to the detached subdirectory of the current data table directory. Mounting a partition, on the other hand, is the reverse operation; it can reload a partition in the detached subdirectory back. Unmounting and mounting are companion operations that are commonly used in partition data migration and backup scenarios.

┌─id─┬─name─┬───────────────ctime─┐
│  1 │      │ 2021-05-19 13:59:49 │
│  2 │      │ 2021-05-19 13:59:49 │
└────┴──────┴─────────────────────┘
┌─id─┬─name─┬───────────────ctime─┐
│  3 │ ww   │ 2021-04-11 11:12:13 │
└────┴──────┴─────────────────────┘
alter table test_rep detach partition '202105' ;
alter table test_muta detach partition 'BJ' ;
┌─id─┬─name─┬───────────────ctime─┐
│  3 │ ww   │ 2021-04-11 11:12:13 │
└────┴──────┴─────────────────────┘
-- Load partition
alter table test_rep attach partition '202105' ;
alter table test_muta attach partition 'BJ' ;

┌─id─┬─name─┬───────────────ctime─┐
│  1 │      │ 2021-05-19 13:59:49 │
│  2 │      │ 2021-05-19 13:59:49 │
└────┴──────┴─────────────────────┘
┌─id─┬─name─┬───────────────ctime─┐
│  3 │ ww   │ 2021-04-11 11:12:13 │
└────┴──────┴─────────────────────┘
-- Remember, once a partition has been moved to the detached subdirectory, it is out of ClickHouse's control and ClickHouse does not actively clean up these files. These partition files will always be there unless we actively delete them or reload them using the ATTACH statement!
  • Delete Partition
  • Add Partition
  • Copy Partition Data
  • Unmount (Detach) a Partition
  • Mount (Attach) a Partition

2.4 Views

1) General View

ClickHouse has both ordinary views and materialized views: a materialized view has its own independent storage, while an ordinary view is just a thin proxy over a query.

CREATE VIEW [IF NOT EXISTS] [db_name.]view_name AS SELECT ... 

An ordinary view stores no data; it is merely a mapping over a SELECT query that simplifies queries and clarifies semantics, and it brings no query-performance improvement.

create view  test3_view as select id , upper(name) , role from tb_test3 ;
┌─name────────────┐
│ tb_test3        │
│ test3_view      │
│ test_partition1 │
│ test_partition2 │
│ test_rep        │
│ tt1             │
└─────────────────┘
drop view test3_view ;   -- Delete View
2) Materialized View

Materialized views support table engines; the form in which the data is stored is determined by the view's table engine. The full syntax for creating a materialized view is as follows:

create materialized view mv_log engine=Log populate as select * from log ;

Once the materialized view is created, new data written to the source table is propagated to the view. The POPULATE modifier determines the initialization strategy: with POPULATE, the view is created pre-filled with the data that already exists in the source table, as if an INSERT INTO ... SELECT had been performed; without POPULATE, the view starts empty and only synchronizes data written to the source table from then on. Materialized views currently do not synchronize deletes: if data is deleted from the source table, it remains in the materialized view.

create materialized view test3_view engine = Log populate as select * from tb_test3 ;
-- Synchronize the data when building the table, when the data is updated the data in the materialized view will be updated, but when the data is deleted, the data in the materialized view will not be deleted.
SELECT *
FROM test3_view ;
┌─id─┬─name───┬─role─┐
│  1 │ HANGGE │ VIP  │
│  2 │ BENGE  │ VIP  │
│  3 │ PINGGE │ VIP  │
└────┴────────┴──────┘
-- Erasure of data into the source table
SELECT *
FROM test3_view

┌─id─┬─name──┬─role─┐
│  4 │ TAOGE │ VIP  │
└────┴───────┴──────┘
┌─id─┬─name───┬─role─┐
│  1 │ HANGGE │ VIP  │
│  2 │ BENGE  │ VIP  │
│  3 │ PINGGE │ VIP  │
└────┴────────┴──────┘
-- Delete the data in the source table, the data in the materialized view will not change. ****
Note: The data deletion syntax only applies to tables in the MergeTree engine. The basic syntax is as follows
ALTER TABLE db_name.table_name DROP PARTITION '20210601'
ALTER TABLE db_name.table_name DELETE WHERE day = '20210618'
ALTER TABLE <table_name> UPDATE col1 = expr1, ... WHERE <filter>

Running show tables ; confirms that a materialized view is, in fact, a special kind of table.

IV Engine Details

Table engines are a major feature of ClickHouse's design and implementation: the engine determines what characteristics a table has, in what form its data is stored, and how it is written and loaded. ClickHouse has a very large family of table engines, with more than 20 engines in six categories: MergeTree, External Storage, In-Memory, File, Interfaces, and Others. Among these, the MergeTree engine and its family (*MergeTree) are the most powerful and are used in most production scenarios. Only the MergeTree family supports primary key indexes, data partitioning, data replication, and data sampling, and only this family supports ALTER-related operations.

The MergeTree family itself has several table engine variants. The MergeTree is the most basic table engine in the family, providing basic capabilities such as primary key indexing, data partitioning, data replication, and data sampling, while the other table engines in the family have their own strengths based on the MergeTree. For example, the ReplacingMergeTree table engine has the ability to remove duplicates, while the SummingMergeTree table engine automatically aggregates data by sort key. If you add the Replicated prefix to the MergeTree family of table engines, you get a set of table engines that support data replication, such as ReplicatedMergeTree, ReplicatedReplacingMergeTree, ReplicatedSummingMergeTree, and so on.

The table engine (i.e., the type of table) determines that:

  1. How and where data is stored, where it is written to and where it is read from
  2. What queries are supported and how.
  3. Concurrent data access.
  4. The use of indexes (if they exist).
  5. Whether multi-threaded requests can be executed.
  6. Data replication: whether replicas of the data can be kept.
  7. Whether distributed processing is supported (e.g., via the Distributed engine).

… …

1 Log Series Engine

The Log family consists of the least powerful, lightweight engines. This type of engine is most effective when you need to quickly write many small tables (up to about 1 million rows) and read them as a whole later.

1.1 TinyLog Engine

The simplest table engine; it stores data on disk. Each column is stored in a separate compressed file, and writes append data to the end of the file. This engine has no concurrency control.

 1. The simplest engine in the family
 2. No indexes and no block marks
 3. Writes are append-only
 4. Data is stored in one file per column
 5. Simultaneous reads and writes are not allowed
-- table building
create table test_tinylog(
	id UInt8 ,
    name String ,
    age UInt8
)engine=TinyLog ;
-- View table structure
desc test_tinylog ;
-- Viewing Table Building Statements
SHOW CREATE TABLE test_tinylog ;
-- Insertion of data
insert into test_tinylog values(1,'liubei',45),(2,'guanyu',43),(3,'zhangfei',41) ;

SELECT *
FROM test_tinylog

┌─id─┬─name─────┬─age─┐
│  1 │ liubei   │  45 │
│  2 │ guanyu   │  43 │
│  3 │ zhangfei │  41 │
└────┴──────────┴─────┘

View the underlying storage

[root@doit01 test_tinylog]# pwd
/var/lib/clickhouse/data/default/test_tinylog
-rw-r-----. 1 clickhouse clickhouse 29 May 19 15:29 
-rw-r-----. 1 clickhouse clickhouse 29 May 19 15:29 
-rw-r-----. 1 clickhouse clickhouse 50 May 19 15:29 
-rw-r-----. 1 clickhouse clickhouse 90 May 19 15:29
-- After inserting the data again, append the written data to each file.
-rw-r-----. 1 clickhouse clickhouse  58 May 19 15:31 
-rw-r-----. 1 clickhouse clickhouse  58 May 19 15:31 
-rw-r-----. 1 clickhouse clickhouse 100 May 19 15:31 
-rw-r-----. 1 clickhouse clickhouse  91 May 19 15:31 

Note: for this engine, insert into t select * from t corrupts the table (it does not allow reading and writing at the same time); to recover, the table's data directory and its metadata have to be deleted.

1.2 StripeLog Engine

 1. All columns are stored together in a single data file
 2. The data is indexed (a marks file records block offsets)
 3. The sizes of the file contents are recorded
 4. Concurrent data access is supported (reads in parallel; writes take a lock)
create table test_stripelog(
	id UInt8 ,
    name String ,
    age UInt8
)engine=StripeLog ;
-- Insertion of data
insert into test_stripelog values(1,'liubei',45),(2,'guanyu',43),(3,'zhangfei',41) ;


View underlying data

/var/lib/clickhouse/data/default/test_stripelog
-rw-r-----. 1 clickhouse clickhouse 167 May 19 15:43 Storing data for all columns
-rw-r-----. 1 clickhouse clickhouse 75 May 19 15:43 Record index information for data
-rw-r-----. 1 clickhouse clickhouse 68 May 19 15:43 Record size of file contents

1.3 Log Engine

The Log engine differs from TinyLog in that a small "marks" file accompanies the column files. These marks are written for each data block and contain offsets indicating where to start reading in order to skip a given number of rows, which makes it possible to read table data in multiple threads. For concurrent access, reads can run simultaneously while writes block both reads and other writes. The Log engine does not support indexes. As with TinyLog, if a write fails the table is corrupted and subsequent reads return an error. The Log engine is suitable for temporary data, write-once tables, and test or demonstration purposes.

 1. *.bin files store the data of each column
 2. A marks file (__marks.mrk) marks the data blocks
 3. Multi-threaded reads are supported
 4. Concurrent data access is supported (reads in parallel; writes take a lock)
drop table if exists test_log ;
create table test_log(
	id UInt8 ,
    name String ,
    age UInt8
)engine=Log ;
insert into test_log values(1,'liubei',45),(2,'guanyu',43),(3,'zhangfei',41) ;

View Data

-rw-r-----. 1 clickhouse clickhouse  29 May 19 15:46 
-rw-r-----. 1 clickhouse clickhouse  29 May 19 15:46 
-rw-r-----. 1 clickhouse clickhouse  48 May 19 15:46 __marks.mrk  *** 
-rw-r-----. 1 clickhouse clickhouse  50 May 19 15:46 
-rw-r-----. 1 clickhouse clickhouse 120 May 19 15:46 

Log and StripeLog engine support:

Locks for concurrent access to data.

INSERT The table is locked while the request is executing, and all other read and write requests wait until the lock is lifted. If there are no write requests, any number of read requests can be executed concurrently.

Parallel reading of data.

ClickHouse uses multiple threads when reading data. Each thread handles a different block of data.

The Log engine uses a separate file for each column of the table, while StripeLog stores all the data in one file. As a result, the StripeLog engine uses fewer file descriptors in the operating system, but the Log engine offers higher read performance.

The TinyLog engine is the simplest engine in the family and offers the least functionality and the lowest performance. It does not support parallel reads or concurrent data access and stores each column in a separate file. It reads more slowly than the other two engines (which do support parallel reading) and uses as many file descriptors as the Log engine. Use it only in simple, low-load scenarios.

2 MergeTree family of engines

The MergeTree family of table engines is at the heart of ClickHouse's data storage capabilities. They provide most of the features used for resilient and high-performance data retrieval: column stores, custom partitions, sparse primary indexes, secondary data skipping indexes, and more.

The basic MergeTree table engine can be considered the default table engine for single-node ClickHouse instances, because it is generic and practical for a wide variety of use cases.

In addition to the base MergeTree, other commonly used table engines include ReplacingMergeTree, SummingMergeTree, AggregatingMergeTree, CollapsingMergeTree, and VersionedCollapsingMergeTree. Each merge tree variant inherits the capabilities of the base MergeTree and adds unique features. The word "merge" in its name is in the DNA of all MergeTree types, and all their special logic is activated when a merge is triggered.

Key Features:

  • Data is sorted by primary key.

This allows you to create a small sparse index to help find data faster.

  • Partitioning, if a partition key is specified.

Certain partition-level operations in ClickHouse are more efficient than the equivalent regular operations over the same data. ClickHouse also automatically prunes partitions when the partition key appears in a query's filter, which further improves query performance (a small query sketch follows this feature list).

  • Data replication support.

The ReplicatedMergeTree family of tables provides data replication.

  • Data sampling support.

If necessary, you can set the data sampling method in the table.
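As a small illustration of the partition pruning mentioned above, here is a sketch with a hypothetical table named visits (the table and column names are invented for this example): when a query filters on the partition key expression, ClickHouse only scans the matching partition directories.

-- hypothetical table, partitioned by month
CREATE TABLE visits
(
    EventDate Date,
    UserID UInt64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY UserID;

-- only the 202106 partition is scanned; other months are pruned away
SELECT count()
FROM visits
WHERE EventDate >= '2021-06-01' AND EventDate < '2021-07-01';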

2.1 MergeTree Engine

When MergeTree writes a batch of data, the data will always be written to disk in the form of data fragments, and the data fragments cannot be modified. In order to avoid too many fragments, ClickHouse will merge these data fragments periodically through background threads, and data fragments belonging to the same partition will be combined into a new fragment. This reciprocal merging of data fragments is the origin of the merge tree name.

Syntax

CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster1](
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
    ...
INDEX index_name1 expr1 TYPE type1(...) GRANULARITY value1,
INDEX index_name2 expr2 TYPE type2(...) GRANULARITY value2)  
ENGINE = MergeTree()
ORDER BY expr
[PARTITION BY expr]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[TTL expr [DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'], ...][SETTINGS name=value, ...]

The MergeTree table engine has a number of unique configuration options in addition to the usual parameters. The next section highlights a few of these important parameters.

(1) PARTITION BY [optional]: A partition key that specifies the criteria by which the table data should be partitioned. The partition key can be a single column field or multiple column fields in the form of a tuple, and it also supports the use of column expressions. If no partition key is declared, ClickHouse generates a partition named all. Proper use of data partitioning can effectively reduce the scanning range of data files when querying. More details about data partitioning are described in Section 6.2.

(2) ORDER BY [required]: a sort key that specifies the criteria by which the data is to be sorted within a data fragment. By default, the primary key (PRIMARY KEY) is the same as the sort key. The sort key can either be a single column field, such as ORDER BY CounterID, or multiple column fields can be used in a tuple, such as ORDER BY (CounterID,EventDate). When sorting with multiple column fields, such as ORDER BY (CounterID,EventDate), within a single data fragment, the data is first sorted by CounterID, and data with the same CounterID is then sorted by EventDate.

(3) PRIMARY KEY [optional]: The primary key, as the name suggests, will generate a first-level index according to the primary key field after declaration, which is used to speed up the table query. By default, the primary key is the same as the sort key (ORDER BY), so you usually use ORDER BY to specify the primary key instead, without deliberately declaring it through PRIMARY KEY. So in general, within a single data fragment, the data is sorted in ascending order by the same rules as the primary index. Unlike other databases, MergeTree primary keys allow for duplicate data (ReplacingMergeTree can de-duplicate).

(4) SAMPLE BY [optional]: Sampling expression to declare the criteria by which the data will be sampled. If this configuration item is used, the same expression needs to be declared in the configuration of the primary key, for example:

an omission... 
) ENGINE = MergeTree() 
ORDER BY (CounterID, EventDate, intHash32(UserID)) 
SAMPLE BY intHash32(UserID) 

(5) SETTINGS: index_granularity [optional]: index_granularity is a very important MergeTree parameter. It sets the granularity of the index; the default value is 8192, which means that by default MergeTree generates one index entry for every 8,192 rows of data, as illustrated below:

index1 -----> data1

index2------>data2

index3------>data3

an omission... 
) ENGINE = MergeTree()
an omission... 
SETTINGS index_granularity = 8192 ;  -- the default value

8192 is a magic number that is divisible by a large number of numeric parameters in ClickHouse (e.g., min_compress_block_size:65536). You don't usually need to change this parameter, but understanding how it works will help you to use MergeTree better, and the details of how indexing works will be explained later.

(6) SETTINGS: index_granularity_bytes [optional]: Prior to version 19.11, ClickHouse only supported a fixed-size index interval controlled by index_granularity (default 8192). Later versions added adaptive interval sizing, which divides the interval dynamically according to the volume of data written in each batch. The threshold is controlled by index_granularity_bytes, which defaults to 10 MB (10 × 1024 × 1024); setting it to 0 disables the adaptive behavior.

(7) SETTINGS: enable_mixed_granularity_parts [optional]: Set whether to enable the function of adaptive index interval, the default is enabled.

(8) SETTINGS: merge_with_ttl_timeout [optional]: Starting with version 19.6, MergeTree provides data TTL.

(9) SETTINGS: storage_policy [optional]: Starting with version 19.15, MergeTree supports multi-path storage policies.
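To tie a few of these options together, here is a minimal sketch (the table name ttl_demo and its columns are invented for illustration) that combines a partition key, a table-level TTL that expires rows one month after ctime, and explicit SETTINGS:

CREATE TABLE ttl_demo
(
    id UInt64,
    ctime DateTime
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(ctime)
ORDER BY id
TTL ctime + INTERVAL 1 MONTH
SETTINGS index_granularity = 8192, merge_with_ttl_timeout = 14400;  -- merge_with_ttl_timeout: minimum seconds between TTL merges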

1) Create a table
drop table if exists tb_merge_tree ;
create table tb_merge_tree(
id Int8 ,
city String ,
ctime Date 
)
engine=MergeTree()
order by id 
partition by city ;
-- Viewing the table creation statement
CREATE TABLE default.tb_merge_tree
(
    `id` Int8,
    `city` String,
    `ctime` Date
)
ENGINE = MergeTree()
PARTITION BY city
ORDER BY id
SETTINGS index_granularity = 8192
2) Import data
insert into tb_merge_tree values(1,'BJ',now()) ,(2,'NJ',now()),(3,'DJ',now());
insert into tb_merge_tree values(4,'BJ',now()) ,(5,'NJ',now()),(6,'DJ',now());
insert into tb_merge_tree values(7,'BJ',now()) ,(8,'NJ',now()),(9,'DJ',now());
insert into tb_merge_tree values(10,'BJ',now()) ,(11,'NJ',now()),(12,'DJ',now());
┌─id─┬─city─┬──────ctime─┐
│  9 │ DJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  2 │ NJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  5 │ NJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│ 12 │ DJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  8 │ NJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│ 11 │ NJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  1 │ BJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  3 │ DJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  4 │ BJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  6 │ DJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  7 │ BJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│ 10 │ BJ   │ 2021-05-19 │
└────┴──────┴────────────┘
3) Merging data
optimize table tb_merge_tree final ;  -- merge all data, partition by partition, in one pass
SELECT *
FROM tb_merge_tree
┌─id─┬─city─┬──────ctime─┐
│  3 │ DJ   │ 2021-05-19 │
│  6 │ DJ   │ 2021-05-19 │
│  9 │ DJ   │ 2021-05-19 │
│ 12 │ DJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  2 │ NJ   │ 2021-05-19 │
│  5 │ NJ   │ 2021-05-19 │
│  8 │ NJ   │ 2021-05-19 │
│ 11 │ NJ   │ 2021-05-19 │
└────┴──────┴────────────┘
┌─id─┬─city─┬──────ctime─┐
│  1 │ BJ   │ 2021-05-19 │
│  4 │ BJ   │ 2021-05-19 │
│  7 │ BJ   │ 2021-05-19 │
│ 10 │ BJ   │ 2021-05-19 │
└────┴──────┴────────────┘

ClickHouse will also merge partition data automatically in the background and remove the redundant part directories.

4) Data storage principle
# Before merger
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_11_11_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_2_2_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_5_5_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_8_8_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_12_12_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_3_3_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_6_6_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_9_9_0
drwxr-x---. 2 clickhouse clickhouse   6 May 19 16:48 detached
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_10_10_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_1_1_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_4_4_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_7_7_0
-rw-r-----. 1 clickhouse clickhouse   1 May 19 16:48 format_version.txt
# Merged
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_11_11_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:49 0b1654565b11c57ce8e06fba0d990406_2_11_1
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_2_2_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_5_5_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 0b1654565b11c57ce8e06fba0d990406_8_8_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_12_12_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:49 4ac8f272bc049477e80a3f42338ca531_3_12_1
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_3_3_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_6_6_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 4ac8f272bc049477e80a3f42338ca531_9_9_0
drwxr-x---. 2 clickhouse clickhouse   6 May 19 16:48 detached
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_10_10_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_1_1_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:49 e35d0ca9d946a627c9fc98b8f80391ce_1_10_1
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_4_4_0
drwxr-x---. 2 clickhouse clickhouse 223 May 19 16:48 e35d0ca9d946a627c9fc98b8f80391ce_7_7_0
-rw-r-----. 1 clickhouse clickhouse   1 May 19 16:48 format_version.txt
-- Go to a partition directory to view
-rw-r-----. 1 clickhouse clickhouse 385 May 19 17:12 
-rw-r-----. 1 clickhouse clickhouse  38 May 19 17:12 
-rw-r-----. 1 clickhouse clickhouse  48 May 19 17:12 city.mrk2
-rw-r-----. 1 clickhouse clickhouse  74 May 19 17:12 
-rw-r-----. 1 clickhouse clickhouse   1 May 19 17:12 
-rw-r-----. 1 clickhouse clickhouse  34 May 19 17:12 
-rw-r-----. 1 clickhouse clickhouse  48 May 19 17:12 ctime.mrk2
-rw-r-----. 1 clickhouse clickhouse  30 May 19 17:12 
-rw-r-----. 1 clickhouse clickhouse  48 May 19 17:12 id.mrk2
-rw-r-----. 1 clickhouse clickhouse   6 May 19 17:12 minmax_city.idx
-rw-r-----. 1 clickhouse clickhouse   3 May 19 17:12 
-rw-r-----. 1 clickhouse clickhouse   2 May 19 17:12 



(1) Partition directory: all the other file types described below ([Column].mrk, [Column].bin, index and metadata files, etc.) are stored inside partition directories. Data belonging to the same partition is eventually merged into the same partition directory, while data from different partitions is never merged together.

(2) checksums.txt: checksum file, stored in binary format. It records the sizes of the other files in the directory and the hashes of those sizes, and is used to quickly verify file integrity and correctness.

(3) columns.txt: column information file, stored in plain text. It records the column (field) definitions of this data partition.

(4) count.txt: count file, stored in plain text. It records the total number of rows in the current data partition directory.

(5) primary.idx: primary (first-level) index file, stored in binary format. It holds the sparse index; a MergeTree table can declare the primary index only once (through ORDER BY or PRIMARY KEY). With the sparse index, data outside the range of the primary key condition can be skipped during queries, which effectively reduces the scan range and speeds up queries.

(6) [Column].bin: data file, stored in a compressed format (LZ4 by default), holding the data of one column. Because MergeTree uses columnar storage, each column has its own .bin data file named after the column (for example id.bin, city.bin).

(7) [Column].mrk: column mark file, stored in binary format. A mark file records the offsets of the column's data inside the .bin file. Mark files are aligned with the sparse index and correspond one-to-one with .bin files, so MergeTree uses them to map the sparse index to the .bin data: it first locates the offset of the target data through the sparse index (primary.idx), then reads the data directly from the .bin file at that offset. Since .mrk files correspond one-to-one with .bin files, each column has its own mark file (for example id.mrk, city.mrk).

(8) [Column].mrk2: if the adaptive index interval is enabled, the mark files are named with the .mrk2 extension instead. They work exactly like .mrk files.

(9) partition.dat and minmax_[Column].idx: if a partition key is used, for example PARTITION BY EventTime, an additional partition.dat file and minmax index files are generated, both stored in binary format. partition.dat records the value of the partition expression for the current partition, while the minmax index records the minimum and maximum raw values of the partition field within the partition. For example, if the EventTime values are 2019-05-01 and 2019-05-05 and the partition expression is PARTITION BY toYYYYMM(EventTime), then partition.dat stores 2019-05, while the minmax index stores 2019-05-01 and 2019-05-05.

With these partition indexes, data queries can quickly skip unnecessary data partition directories, thus reducing the range of data that ultimately needs to be scanned.

(10) skp_idx_[Column].idx and skp_idx_[Column].mrk: if a secondary index is declared in the CREATE TABLE statement, the corresponding secondary index file and mark file are generated as well, also in binary format. Secondary indexes, also called data-skipping indexes in ClickHouse, currently come in four types: minmax, set, ngrambf_v1, and tokenbf_v1. Their ultimate goal is the same as that of the primary sparse index: to further reduce the range of data that needs to be scanned and so speed up the whole query.
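Skip indexes are declared with the INDEX clause shown in the generic CREATE TABLE syntax earlier. A minimal sketch follows (the table and column names are invented; on older ClickHouse versions the allow_experimental_data_skipping_indices setting may need to be enabled first):

CREATE TABLE skip_idx_demo
(
    id UInt64,
    code UInt32,
    url String,
    ctime DateTime,
    INDEX idx_code code TYPE minmax GRANULARITY 4,                  -- min/max per block of 4 granules
    INDEX idx_url  url  TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4  -- token bloom filter on url
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(ctime)
ORDER BY id;

-- filters on code or on tokens of url can then skip whole granule ranges
SELECT count() FROM skip_idx_demo WHERE code = 100;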

2.2 ReplacingMergeTree

This engine is based on MergeTree and adds the ability to handle duplicate data: it removes rows that have the same sort key within a partition. De-duplication happens only during merges, and merges run in the background at an unknown time (unless triggered manually), so you cannot rely on when it happens and some duplicates may remain unprocessed for a while. ReplacingMergeTree is therefore suitable for clearing out duplicates in the background to save space, but it does not guarantee that duplicates never appear.

1) Without a version parameter

Depending on the time of insertion of the data, data inserted later is retained.

drop table if  exists test_replacingMergeTree1 ;
create table test_replacingMergeTree1(
	oid Int8 ,
	ctime DateTime ,
    cost Decimal(10,2)
)engine = ReplacingMergeTree()
order by oid 
partition by toDate(ctime) ;
-- Day Partitioning The same oid on the same day will be de-emphasized.

-- Insertion of data
insert into test_replacingMergeTree1 values(3,'2021-01-01 11:11:11',30) ;
insert into test_replacingMergeTree1 values(1,'2021-01-01 11:11:14',40) ;
insert into test_replacingMergeTree1 values(1,'2021-01-01 11:11:11',10);
insert into test_replacingMergeTree1 values(2,'2021-01-01 11:11:11',20) ;
insert into test_replacingMergeTree1 values(1,'2021-01-02 11:11:11',41) ;

-- Optimization of consolidation
optimize table test_replacingMergeTree1 final ;
┌─oid─┬───────────────ctime─┬──cost─┐
│   1 │ 2021-01-02 11:11:11 │ 41.00 │
└─────┴─────────────────────┴───────┘
┌─oid─┬───────────────ctime─┬──cost─┐
│   1 │ 2021-01-01 11:11:11 │ 10.00 │
│   2 │ 2021-01-01 11:11:11 │ 20.00 │
│   3 │ 2021-01-01 11:11:11 │ 30.00 │
└─────┴─────────────────────┴───────┘
Since ClickHouse executes these operations in multiple threads, the insertion order is not guaranteed, so the wrong row may end up being discarded.
-- Primary key: oid; sort key: (oid, ctime). Verify whether de-duplication follows the primary key or the sort key.
drop table if  exists test_replacingMergeTree2 ;
create table test_replacingMergeTree2(
	oid Int8 ,
	ctime DateTime ,
    cost Decimal(10,2)
)engine = ReplacingMergeTree()
primary key oid
order by (oid ,ctime)
partition by toDate(ctime) ;

insert into test_replacingMergeTree2 values(1,'2021-01-01 11:11:11',10) ;
insert into test_replacingMergeTree2 values(1,'2021-01-01 11:11:11',20) ;
insert into test_replacingMergeTree2 values(1,'2021-01-01 11:11:11',30);
insert into test_replacingMergeTree2 values(1,'2021-01-01 11:11:12',40) ;
insert into test_replacingMergeTree2 values(1,'2021-01-01 11:11:13',50) ;
-- Conclusion: de-duplication is not based on the primary key; rows with identical sort keys within a partition are de-duplicated.
┌─oid─┬───────────────ctime─┬──cost─┐
│   1 │ 2021-01-01 11:11:11 │ 30.00 │
│   1 │ 2021-01-01 11:11:12 │ 40.00 │
│   1 │ 2021-01-01 11:11:13 │ 50.00 │
└─────┴─────────────────────┴───────┘
2) With a version parameter
  • The version field can be a numeric value
  • The version field can be time
drop table if exists test_replacingMergeTree3 ;
create table test_replacingMergeTree3(
	oid Int8 ,
	ctime DateTime ,
    cost Decimal(10,2)
)engine = ReplacingMergeTree(ctime)
order by oid
partition by toDate(ctime) ;

insert into test_replacingMergeTree3 values(1,'2021-01-01 11:11:11',10) ;
insert into test_replacingMergeTree3 values(1,'2021-01-01 11:11:12',20) ;
insert into test_replacingMergeTree3 values(1,'2021-01-01 11:11:10',30);
insert into test_replacingMergeTree3 values(1,'2021-01-01 11:11:19',40) ;
insert into test_replacingMergeTree3 values(1,'2021-01-01 11:11:13',50) ;
-- After merging, the row with the largest version (the latest ctime) is retained
┌─oid─┬───────────────ctime─┬──cost─┐
│ 1 │ 2021-01-01 11:11:19 │ 40.00 │
└─────┴─────────────────────┴───────┘

Summary:

(1) Use the ORDER BY sort key as the sole basis for determining duplicate data.

(2) The logic to remove duplicates is triggered only when merging partitions.

(3) Delete duplicate data on a data partition basis. When partitions are merged, duplicates within the same partition are deleted; duplicates between different partitions are not deleted.

(4) When de-duplicating, adjacent duplicates can be found easily because the data within a partition is already sorted by ORDER BY.

(5) There are two data de-duplication strategies:

  1. If the ver version number is not set, the last row in the same set of duplicates is retained.
  2. If the ver version number is set, the row with the largest value for the ver field in the same set of duplicates is retained.

Data updates can therefore be simulated with this engine.
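Because de-duplication only happens when partitions merge, a query that needs the "latest" view of the data before a merge can either read with FINAL (slower, but independent of background merges) or trigger the merge explicitly. A small sketch against the test_replacingMergeTree3 table above:

-- force de-duplication at query time
SELECT *
FROM test_replacingMergeTree3 FINAL;

-- or trigger the merge manually, then query normally
OPTIMIZE TABLE test_replacingMergeTree3 FINAL;
SELECT * FROM test_replacingMergeTree3;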

2.3 CollapsingMergeTree

CollapsingMergeTree is a table engine that supports row-level modification and deletion by following the idea of inserting instead of deleting. It records the status of a row through a sign field declared at table creation: a sign of 1 marks a valid row, while a sign of -1 marks a row that should be deleted. When CollapsingMergeTree merges partitions, pairs of rows within the same data partition marked 1 and -1 cancel each other out and are removed. This 1/-1 cancellation is like folding a sheet of corrugated paper, which is presumably where the name CollapsingMergeTree comes from.

Multiple rows with the same sort key and a status of 1 collapse into one row; the last row is retained.

A pair of rows with the same sort key and statuses 1 and -1 cancels out and is deleted.

ENGINE = CollapsingMergeTree(sign)

drop table if exists tb_cps_merge_tree1 ;
CREATE TABLE tb_cps_merge_tree1
(
    user_id UInt64,
    name String,
    age UInt8,
    sign Int8
)
ENGINE = CollapsingMergeTree(sign)
ORDER BY user_id;
-- Insertion of data
insert into tb_cps_merge_tree1 values(1,'xiaoluo',23,1),(2,'xiaoyu',24,1),(3,'xiaofeng',25,1) ;
insert into tb_cps_merge_tree1 values(1,'xiaoluo_',23,-1),(2,'xiaoyu_',24,-1),(3,'xiaofeng2',25,1) ;
-- Merger optimization
optimize table tb_cps_merge_tree1 ;
-- Deletion of data and updating of data that already exists was realized.
SELECT *
FROM tb_cps_merge_tree1
┌─user_id─┬─name──────┬─age─┬─sign─┐
│       3 │ xiaofeng2 │  25 │    1 │
└─────────┴───────────┴─────┴──────┘

Although CollapsingMergeTree solves the problem of deleting and updating rows with the same sort key, when the state changes continuously and writes happen in parallel threads, state rows and cancel rows can arrive out of order and then fail to collapse properly. Collapsing only works if the cancel row is written after the state row it cancels (and the new state row after the cancel row), and multi-threaded writes cannot guarantee that order.

drop table if exists tb_cps_merge_tree2 ;
CREATE TABLE tb_cps_merge_tree2
(
    user_id UInt64,
    name String,
    age UInt8,
    sign Int8
)
ENGINE = CollapsingMergeTree(sign)
ORDER BY user_id;
insert into tb_cps_merge_tree2 values(1,'xiaoluo_',23,-1),(2,'xiaoyu_',24,-1),(3,'xiaofeng2',25,1) ;
insert into tb_cps_merge_tree2 values(1,'xiaoluo',23,1),(2,'xiaoyu',24,1),(3,'xiaofeng',25,1) ;
-- Merger optimization
optimize table tb_cps_merge_tree2 ;
┌─user_id─┬─name─────┬─age─┬─sign─┐
│       1 │ xiaoluo_ │  23 │   -1 │
│       1 │ xiaoluo  │  23 │    1 │
│       2 │ xiaoyu_  │  24 │   -1 │
│       2 │ xiaoyu   │  24 │    1 │
│       3 │ xiaofeng │  25 │    1 │
└─────────┴──────────┴─────┴──────┘
If rows share the same sort key and all have a status of 1, updates still work (the last row wins); but if we cannot guarantee that cancel rows arrive after the state rows they cancel, correct deletion cannot be guaranteed.
---- query for correct data
select 
tb_cps_merge_tree2.*
from
tb_cps_merge_tree2
join
(
select 
user_id ,
sum(sign) as sum_sign
from 
tb_cps_merge_tree2
group by user_id
having sum_sign = 1
 )t
 on tb_cps_merge_tree2.user_id =t.user_id ;
 
┌─user_id─┬─name─────┬─age─┬─sign─┐
│       3 │ xiaofeng │  25 │    1 │
└─────────┴──────────┴─────┴──────┘

2.4 VersionedCollapsingMergeTree

To solve the problem that CollapsingMergeTree cannot collapse (delete) properly when rows are written out of order, the VersionedCollapsingMergeTree table engine adds a Version column to the table definition. It records the correspondence between state rows and cancel rows even when they arrive out of order: rows with the same sort key, the same Version, and opposite Sign values are deleted during compaction. As with CollapsingMergeTree, to get correct results the business layer needs to rewrite its SQL, turning count() into sum(Sign) and sum(col) into sum(col * Sign).

drop table if exists tb_vscmt ;
CREATE TABLE tb_vscmt
(
    uid UInt64,
    name String,
    age UInt8,
    sign Int8,
    version UInt8
)
ENGINE = VersionedCollapsingMergeTree(sign, version)
ORDER BY uid;

INSERT INTO tb_vscmt VALUES (1001, 'ADA', 18, -1, 1);
INSERT INTO tb_vscmt VALUES (1001, 'ADA', 18, 1, 1),(101, 'DAD', 19, 1, 1),(101, 'DAD', 11, 1, 3); 
INSERT INTO tb_vscmt VALUES(101, 'DAD', 11, 1, 2) ;
-- Ensures that data to be deleted will be deleted, and that data not marked as collapsed will be retained.
optimize table tb_vscmt ; 
┌─uid─┬─name─┬─age─┬─sign─┬─version─┐
│ 101 │ DAD  │  19 │    1 │       1 │
│ 101 │ DAD  │  11 │    1 │       2 │
│ 101 │ DAD  │  11 │    1 │       3 │
└─────┴──────┴─────┴──────┴─────────┘
Data with inconsistent versions will not be collapsed for deletion
┌──uid─┬─name─┬─age─┬─sign─┬─version─┐
│  101 │ DAD  │  19 │    1 │       1 │
│  101 │ DAD  │  11 │    1 │       2 │
│  101 │ DAD  │  11 │    1 │       3 │
│ 1001 │ ADA  │  18 │   -1 │       1 │
│ 1001 │ ADA  │  18 │    1 │       2 │
└──────┴──────┴─────┴──────┴─────────┘
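As noted above, aggregate queries over a collapsing table are usually rewritten so that cancel rows offset state rows. A small sketch over the tb_vscmt table (column names as defined above):

-- count() becomes sum(sign), sum(col) becomes sum(col * sign)
SELECT
    uid,
    sum(age * sign) AS age_sum,
    sum(sign)       AS row_count
FROM tb_vscmt
GROUP BY uid
HAVING sum(sign) > 0;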

2.5 SummingMergeTree

Assume that there is such a query requirement: end-users only need to query the summary results of the data, do not care about the detailed data, and data aggregation conditions are pre-specified (GROUP BY conditions are clear, and will not be changed arbitrarily).

How to solve such a query scenario in ClickHouse? The most straightforward solution is to use MergeTree to store the data, then aggregate the query by GROUP BY and summarize the results using the SUM aggregation function. There are two problems with this scenario.

  1. There is additional storage overhead: end-users don't query any of the breakdowns and only care about the summary results, so they shouldn't have to keep all the breakdowns all the time.
  2. There is additional query overhead: end-users only care about aggregated results, and while MergeTree is powerful, performing real-time aggregation calculations for each query is a performance drain.

SummingMergeTree is designed to cope with this type of query scenario. As the name implies, it can be merged in the merge partition in accordance with the pre-defined conditions of the aggregated summary data, the same grouping of multiple rows of data aggregated into a single line, which reduces the number of rows of data, but also reduces the overhead of the subsequent summary query.

Tip:

ORDER BY (A, B, C, D)
PRIMARY KEY A
This mandatory constraint guarantees that the primary key will remain the prefix of the sort key even if the two are defined differently, and there will be no problems with the indexes and data being out of order.
When defining the primary key of a table, we will consider the index on the primary key to find data quickly.
ORDER BY (B, C) PRIMARY KEY A This is wrong!
drop table summing_table ;
CREATE TABLE summing_table( 
id String, 
city String, 
sal UInt32, 
comm Float64, 
ctime DateTime 
)ENGINE = SummingMergeTree() 
PARTITION BY toDate(ctime) 
ORDER BY (id, city) 
PRIMARY KEY id ;
-- When merging, all numeric fields of the same sorted rows are summed (sum) within the partition.
-- Insertion of data
insert into summing_table
values 
(1,'shanghai',10,20,'2021-06-12 01:11:12'),
(1,'shanghai',20,30,'2021-06-12 01:11:12'),
(3,'shanghai',10,20,'2021-11-12 01:11:12'),
(3,'Beijing',10,20,'2021-11-12 01:11:12') ;

optimize table summing_table ;

┌─id─┬─city─────┬─sal─┬─comm─┬───────────────ctime─┐
│ 3  │ Beijing  │  10 │   20 │ 2021-11-12 01:11:12 │
│ 3  │ shanghai │  10 │   20 │ 2021-11-12 01:11:12 │
└────┴──────────┴─────┴──────┴─────────────────────┘
┌─id─┬─city─────┬─sal─┬─comm─┬───────────────ctime─┐
│ 1  │ shanghai │  30 │   50 │ 2021-06-12 01:11:12 │
└────┴──────────┴─────┴──────┴─────────────────────┘

The example above does not specify which columns to sum, so every eligible (non-key) numeric column is summed. We can also specify the columns to sum when creating the table:

drop table summing_table2 ;
CREATE TABLE summing_table2( 
id String, 
city String, 
money UInt32, 
num UInt32, 
ctime DateTime 
)ENGINE = SummingMergeTree(money) 
PARTITION BY toDate(ctime) 
ORDER BY city ;
--Gross sales per day per city
insert into summing_table2 values(1,'BJ',100,11,now()),
(2,'BJ',100,11,now()),
(3,'BJ',100,11,now()),
(4,'NJ',100,11,now()),
(5,'NJ',100,11,now()),
(6,'SH',100,11,now()),
(7,'BJ',100,11,'2021-05-18 11:11:11'),
(8,'BJ',100,11,'2021-05-18 11:11:11') ;

SELECT * 
FROM summing_table2 ;
┌─id─┬─city─┬─money─┬─num─┬───────────────ctime─┐
│ 1  │ BJ   │   300 │  11 │ 2021-05-19 21:53:49 │
│ 4  │ NJ   │   200 │  11 │ 2021-05-19 21:53:49 │
│ 6  │ SH   │   100 │  11 │ 2021-05-19 21:53:49 │
└────┴──────┴───────┴─────┴─────────────────────┘
┌─id─┬─city─┬─money─┬─num─┬───────────────ctime─┐
│ 7  │ BJ   │   200 │  11 │ 2021-05-18 11:11:11 │
└────┴──────┴───────┴─────┴─────────────────────┘
SELECT city ,money
FROM summing_table2 ;
┌─city─┬─money─┐
│ BJ   │   300 │
│ NJ   │   200 │
│ SH   │   100 │
└──────┴───────┘
┌─city─┬─money─┐
│ BJ   │   200 │
└──────┴───────┘


Summing is also supported for nested structures:

CREATE TABLE summing_table_nested( 
id String, 
nestMap Nested( 
id UInt32, 
key UInt32, 
val UInt64 
), 
create_time DateTime 
)ENGINE = SummingMergeTree() 
PARTITION BY toYYYYMM(create_time) 
ORDER BY id ; 
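A minimal usage sketch for the nested table above (the values are invented): nested columns are written as parallel arrays, and when the partition merges, rows with the same id have their nested values summed per key.

INSERT INTO summing_table_nested VALUES
('a', [1, 2], [10, 20], [100, 200], now()),
('a', [1, 3], [10, 30], [400, 500], now());

OPTIMIZE TABLE summing_table_nested FINAL;

SELECT id, nestMap.id, nestMap.key, nestMap.val
FROM summing_table_nested;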

Summary:

(1) The ORDER BY sort key is used as the condition key for aggregating data.

(2) The logic for summarization is triggered only when merging partitions.

(3) Aggregate data in terms of data partitions. When partitions are merged, data with the same aggregated Key within the same data partition will be merged and summarized, while data between different partitions will not be summarized.

(4) If summing columns (non-primary-key numeric fields) are specified in the engine definition, only those columns are summed; if none are specified, all non-primary-key numeric fields are summed.

(5) When summarizing, adjacent rows with the same aggregation key can be found easily because the data within a partition is already sorted by ORDER BY.

(6) When summarizing data, multiple rows of data with the same aggregation key within the same partition will be merged into one row. Among them, the aggregated fields will be SUM calculation; for those non-aggregated fields, the value of the first row of data will be used.

(7) Nested structures are supported, but column field names must end in a Map suffix. In the nested type, the first field is used as the aggregate Key by default, except for the first field, any field whose name ends with Key, Id, or Type suffix will form the composite Key together with the first field.

2.6 AggregatingMergeTree

AggregatingMergeTree has a bit of a "data cube" flavor: it can aggregate data according to predefined conditions when partitions are merged, computing the data with predefined aggregate functions and storing the result in the table in binary form. Aggregating multiple rows of the same group into one row reduces both the number of rows and the cost of subsequent aggregation queries. AggregatingMergeTree can be seen as an upgraded version of SummingMergeTree: many of their design ideas are the same, such as the reason for defining ORDER BY and PRIMARY KEY together. In terms of usage, however, the two differ noticeably, and AggregatingMergeTree is one of the most unusual members of the MergeTree family to define.

ENGINE = AggregatingMergeTree() 

AggregatingMergeTree takes no additional engine parameters; during partition merges it aggregates data by the ORDER BY key within each partition. Which aggregate function to apply, and to which column, is declared through the AggregateFunction data type. INSERT and SELECT also have their own special requirements: write with the -State suffix and query with the -Merge suffix.

AggregateFunction(arg1 , arg2) ;

Parameter 1: the aggregate function

Parameter 2: the data type

sum_cnt AggregateFunction(sum, Int64) ;

Workflow: first create the detail (source) table and insert data, then create the pre-aggregated table and load it with INSERT ... SELECT; the data is pre-aggregated according to the declared aggregate function.


-- 1) Create the detail table
CREATE TABLE detail_table
(id UInt8,
 ctime Date,
 money UInt64
) ENGINE = MergeTree() 
PARTITION BY toDate(ctime) 
ORDER BY id;

-- 2) Insertion of detailed data
INSERT INTO detail_table VALUES(1, '2021-08-06', 100);
INSERT INTO detail_table VALUES(1, '2021-08-06', 100);
INSERT INTO detail_table VALUES(1, '2021-08-06', 300);
INSERT INTO detail_table VALUES(2, '2021-08-07', 200);
INSERT INTO detail_table VALUES(2, '2021-08-07', 200);

-- 3) Create the pre-aggregated table.
-- Note: the type of the cnt column is AggregateFunction(count, UInt64).
CREATE TABLE agg_table
(id UInt8,
ctime Date,
cnt AggregateFunction(count, UInt64)
) ENGINE = AggregatingMergeTree() 
PARTITION BY  toDate(ctime) 
ORDER BY id;

-- 4) Read data from the detail table and insert it into the aggregated table.
-- Note: the aggregate function used in the SELECT is countState, which corresponds to the <agg>State write syntax
INSERT INTO agg_table
select id, ctime, countState(money)
from detail_table
group by id, ctime ;

-- Data cannot be inserted into an AggregatingMergeTree using the normal insert statement.
-- This SQL will report an error: Cannot convert UInt64 to AggregateFunction(count, UInt64)
INSERT INTO agg_table VALUES(1, '2020-08-06', 1);

-- 5) Query the aggregated table.
-- Note: the aggregate function used in the SELECT is countMerge, which corresponds to the <agg>Merge query syntax.
SELECT
id, ctime ,
countMerge(cnt) AS cnt 
FROM agg_table 
GROUP BY id, ctime;

Synchronizing Aggregate Data with Materialized Views

-- Create the detail table
drop table orders ;
CREATE TABLE orders
(
    uid UInt64,
    money UInt64,
    ctime Date,
    Sign Int8
)
ENGINE = MergeTree()
ORDER BY uid;

--Insert data
insert into orders values(1,100,toDate(now()),1) ;
insert into orders values(1,100,toDate(now()),1) ;
insert into orders values(1,100,toDate(now()),1) ;
insert into orders values(2,200,toDate(now()),1) ;
insert into orders values(2,200,toDate(now()),1) ;
insert into orders values(2,200,toDate(now()),1) ;
insert into orders values(2,100,toDate(now()),1) ;
-- Create materialized views of aggregation logic
CREATE MATERIALIZED VIEW orders_agg_view
ENGINE = AggregatingMergeTree() 
PARTITION BY toDate(ctime)
ORDER BY uid 
populate
as select 
uid ,
ctime ,
sumState(money) as mm  -- Note the aliases
from 
orders
group by uid , ctime;

-- Querying materialized view data
select uid,ctime,sumMerge(mm) from orders_agg_view group by uid, ctime ;

-- Updating of breakdown data, real-time calculation of data updates in the materialized view
insert into orders values(1,100,toDate(now()),1);
┌─uid─┬──────ctime─┬─sumMerge(mm)─┐
│   2 │ 2021-05-19 │          400 │
│   1 │ 2021-05-19 │          200 │
└─────┴────────────┴──────────────┘
┌─uid─┬──────ctime─┬─sumMerge(mm)─┐
│   2 │ 2021-05-19 │          400 │
│   1 │ 2021-05-19 │          300 │
└─────┴────────────┴──────────────┘
┌─uid─┬──────ctime─┬─sumMerge(mm)─┐
│   2 │ 2021-05-19 │          400 │
│   1 │ 2021-05-19 │          400 │
└─────┴────────────┴──────────────┘

Summary:

(1) The ORDER BY sort key is used as the condition key for aggregating data.

(2) Use the AggregateFunction field type to define the type of aggregate function and the fields to aggregate.

(3) The logic of aggregation calculation is triggered only when merging partitions.

(4) Aggregate data by data partition. When partitions are merged, data with the same Aggregation Key within the same data partition will be combined and counted, while data between different partitions will not be counted.

(5) When aggregating, adjacent rows with the same aggregation key can be found easily because the data within a partition is already sorted by ORDER BY.

(6) When aggregating data, multiple rows of data with the same aggregated key in the same partition are merged into one row. For fields that are not primary keys or AggregateFunction types, the value of the first row of data is used.

(7) Fields of type AggregateFunction are stored in binary form. When writing data you call the corresponding *State function, and when querying you call the corresponding *Merge function, where * stands for the aggregate function used in the definition.

(8) AggregatingMergeTree is usually used as a table engine for materialized views, in conjunction with a normal MergeTree.

The OPTIMIZE query asks ClickHouse to merge the not-yet-merged data parts of a table that uses a MergeTree-family engine. The MaterializedView and Buffer engines also support OPTIMIZE; other table engines do not.

When OPTIMIZE is used with a ReplicatedMergeTree table engine, ClickHouse creates a merge task and waits for it to run on all replicas (if the replication_alter_partitions_sync setting is enabled).

- If OPTIMIZE does not perform a merge for any reason, the client is not notified. To enable notification, use the optimize_throw_if_noop setting.

- If PARTITION is specified, only the specified partition is optimized (see how to set the partition expression).

- If FINAL is specified, optimization is performed even if all the data is already in one part.

- If DEDUPLICATE is specified, deduplication will be performed on identical rows (comparing all columns), which only makes sense for the MergeTree engine.
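Putting these options together, a typical manual merge might look like the following sketch (it reuses the summing_table example from the previous section; the partition value is one of its daily partitions):

-- merge one partition down to a single part and drop fully identical rows while doing so
OPTIMIZE TABLE summing_table PARTITION '2021-06-12' FINAL DEDUPLICATE;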

3 External storage engines

3.1 HDFS Engine

ClickHouse can read data directly from a specified HDFS path; with this engine it does not store the data itself, it only reads it.

ENGINE = HDFS(hdfs_uri,format)
-hdfs_uri indicates the file storage path of HDFS;
-format indicates the file format (refers to the file format supported by ClickHouse, the common ones are CSV, TSV and JSON, etc.).

Our general expectation is that the data will be written to the HDFS system in some other way, and that the data will be loaded and analyzed using CK's HDFS engine.

This form is similar to a Hive external table: other systems write files directly to HDFS, and the hdfs_uri and format parameters of the HDFS table engine map to the HDFS file path and file format respectively. hdfs_uri supports the following common configurations:

  • Absolute path: will read a single file at the specified path, e.g. /clickhouse/hdfs_table1.

  • * wildcard: matches any number of characters. For example, with the path /clickhouse/hdfs_table/*, all files under /clickhouse/hdfs_table are read.

  • ? wildcard: matches a single character. For example, with the path /clickhouse/hdfs_table/organization_?.csv, all files under /clickhouse/hdfs_table matching organization_?.csv are read, where ? stands for any single legal character.

  • {M..N} numeric range: matches files numbered within the range. For example, with the path /clickhouse/hdfs_table/organization_{1..3}.csv, the files organization_1.csv, organization_2.csv and organization_3.csv under /clickhouse/hdfs_table are read.

create table test_hdfs1(
id Int8 ,
name String ,
age Int8
)engine=HDFS('hdfs://linux01:8020/ck/test1/*' ,CSV) ;

Create the files and upload them to the specified directory:
1.txt
1,zss,21
2,ww,22

2.txt 
3,lss,33
4,qaa,32

3.txt 
5,as,31
6,ghf,45
--Match a single character
create table test_hdfs2(
id Int8 ,
name String ,
age Int8
)engine=HDFS('hdfs://linux01:8020/ck/test1/?.txt' ,CSV) ;

-- Match files between numbers
create table test_hdfs3(
id Int8 ,
name String ,
age Int8
)engine=HDFS('hdfs://linux01:8020/ck/test1/a_{1..2}.txt' ,CSV) ;

3.2 MySQL Engine

The MySQL Table Engine can be mapped to data tables in a MySQL database and remotely queried via SQL, including SELECT and INSERT.

It is declared in the following way:

CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
    ...
) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);

The meanings of each of these parameters are as follows:

-host:port indicates the MySQL address and port.

-database indicates the name of the database.

-table indicates the name of the table to be mapped.

-user indicates the MySQL username.

-password indicates the MySQL password.

-replace_query defaults to 0 and corresponds to MySQL's REPLACE INTO syntax. If it is set to 1, REPLACE INTO is used instead of INSERT INTO.

-on_duplicate_clause defaults to 0 and corresponds to MySQL's ON DUPLICATE KEY syntax. If you need to use this setting, replace_query must be set to 0.

Before using the MySQL engine, the current machine must have permission to operate on the MySQL data, and MySQL must allow remote connections, for example:

1)  set global validate_password_policy=0;
2)  set global validate_password_length=1;   -- these two settings relax the password policy so a simple password does not raise errors
3)  grant all privileges on *.* to 'root'@'%' identified by 'root' with grant option;
4)  flush privileges;
-- create table in mysql
create table tb_x(id int, name varchar(25), age int) ;
insert into tb_x values(1,'zss',23),(2,'lss',33) ;
-- Create tables in clickhouse
CREATE TABLE tb_mysql
(
    `id` Int8,
    `name` String,
    `age` Int8
)
ENGINE = MySQL('doit01:3306', 'test1', 'tb_x', 'root', 'root');
-- View data
select * from tb_mysql ;
-- Insertion of data
insert into tb_mysql values(3,'ww',44) ;
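The optional parameters go at the end of the engine declaration. A hedged sketch (the table name tb_mysql_replace is made up here, and it assumes tb_x has a primary key in MySQL so that REPLACE INTO is meaningful):

CREATE TABLE tb_mysql_replace
(
    `id` Int8,
    `name` String,
    `age` Int8
)
ENGINE = MySQL('doit01:3306', 'test1', 'tb_x', 'root', 'root', 1);
-- with replace_query = 1, inserts through this table are sent to MySQL as REPLACE INTO,
-- so inserting an existing id overwrites the old row instead of producing a duplicate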
  • Support for querying data
  • Support for inserting data
  • Delete and update operations are not supported

3.3 File engine

The File table engine reads data directly from local files and is usually used as an extension mechanism. For example, it can read data files generated by other systems (if the external system modifies the file directly, this effectively achieves a data update), export ClickHouse data to local files, or be used for data format conversion. In addition, the File table engine is also used by the clickhouse-local tool.

ENGINE = File(format)
drop table if exists test_file1 ;
create table test_file1(
id String ,
name String ,
age UInt8
)engine=File(CSV) ;

A folder for the table will be created under the data directory; a data file can be placed in that folder, but it must be named data.CSV (data.<format> in general).

insert into  test_file1 values('u001','hangge',33) ; 
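The row can then be read back, and the data lands as a file under the table's directory (the path below assumes the default data directory /var/lib/clickhouse):

select * from test_file1 ;
-- the data itself is stored in /var/lib/clickhouse/data/<database>/test_file1/data.CSV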
file table functions

- Go to the specified path and load the local data.

select * from file('/ck/','CSV','id Int8 , name String ,gender String,age UInt8') ;

By default, files are loaded from a specific folder; the data must be located in that folder (user_files_path) to be loaded.

Modify the default folder where data is loaded

vi /etc/clickhouse-server/
/path nNext
<!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/</user_files_path>

Restart the service:

service clickhouse-server restart 

3.4 MySQL Database Engine

Syntax:

CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
ENGINE = MySQL('host:port', ['database' | database], 'user', 'password')
create database db_ck_mysql engine=MySQL('doit01:3306','test1','root','root') ;

This is commonly used for merging data, e.g. combining MySQL data with ClickHouse data; it is not intended for modifying data or creating tables.

4 Memory Engine

The next few table engines are all oriented towards in-memory queries: data is accessed directly from memory, so they are categorized as memory-type engines. This does not mean that memory-type engines have no physical storage; in fact, except for the Memory engine itself, the others do write data to disk as a means of recovery, to prevent data loss. When their data tables are loaded, however, they load the full data set into memory for querying. Keeping the full data set in memory is a double-edged sword: on the one hand it means better query performance; on the other hand, if the table is too large, it can bring heavy memory consumption.

4.1 Memory

The Memory table engine stores data directly in memory, neither compressed nor converted: the data is kept in memory in the same form as it is seen when queried. Because of this, all data in a Memory table is lost when the ClickHouse service restarts, so Memory tables are often used for testing; many beginners' first "Hello World" tables use this engine. Because there are no disk reads and no serialization or deserialization, the Memory engine supports parallel queries and, for simple query scenarios, can reach performance comparable to MergeTree (up to roughly 100 million rows). A Memory table is created as shown below.

CREATE TABLE memory_1 ( 
id UInt64 
)ENGINE = Memory() ;
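A minimal usage check, just for illustration:

insert into memory_1 select number from numbers(5) ;
select count() from memory_1 ;   -- returns 5, but the table is empty again after a server restart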

A more general application of the Memory table is within the ClickHouse, where it is used as a storage vehicle to distribute data between clusters. For example, in the case of a distributed IN query, the Memory temporary table is utilized to store the query result of the IN clause and transfer it to the remote node over the network.

4.2 Set

The Set table engine has physical storage: data is first written to memory and then synchronized to disk files, so its data is not lost when the service restarts; when the table is reloaded, the file data is loaded back into memory in full. As with the Set data structure, all elements are unique, so the Set table engine de-duplicates on write and duplicates are silently ignored. Its use is both special and limited: it supports normal INSERT writes, but it cannot be queried directly with SELECT; a Set table can only be used indirectly, as the right-hand side of an IN condition.

The storage structure of the Set table engine consists of two parts which are:

The [num].bin data file: holds data for all column fields. Where num is a self-incrementing id, starting at 1. With each batch of data written (each INSERT), a new .bin file is generated and num is incremented by one.

tmp temporary directory: data files will be written to this directory first, and when a batch of data has been written, the data files will be moved out of this directory.

create table test_set(
id Int8 ,
name String
)engine=Set();

A corresponding directory appears under the database directory, so the data is indeed stored on disk.


Insert some data. Note that a Set table cannot be queried directly:

CREATE TABLE x
(
  `id` Int8,
  `name` String
)
ENGINE = Set ;
insert into x values(1,'zss'),(4,'ww') ;

The correct query is to use the Set table engine as the right-hand side condition of the IN query

select * from x where (id,name) in test_set ;

CREATE TABLE test_set_source
(
    `id` Int8,
    `name` String,
    `age` Int8
)
ENGINE = Log ;
insert into test_set_source values(1,'lss',21),(2,'ww',33),(3,'zl',11) ;
-- Filter data based on data in the set table.
select * from test_set_source where id in  test_set;
Note: the fields used in the IN condition must be consistent with the fields of the Set table.

4.3 Buffer

The Buffer table engine holds its data entirely in memory and does not support persisting it to files, so the data in the table is emptied when the service restarts. The Buffer engine is not designed for query scenarios; it acts as a buffer. Suppose we need to write data to a target MergeTree table A and the write concurrency is high, so table A may merge more slowly than it is written to (each INSERT creates a new partition directory). A Buffer table can be introduced as a write buffer to alleviate this: data is first written to the Buffer table, which automatically flushes it to the target table when the preset conditions are met.


ENGINE = Buffer(database, table, num_layers, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes)

Among them, the parameters can be divided into two categories: base parameters and conditional parameters, and the role of base parameters is explained first:

  • database: the database of the target table.

  • table: the name of the target table, the data in the Buffer table will be automatically refreshed to the target table.

  • num_layers: can be interpreted as the number of threads, Buffer table will open threads according to the number of num_layers to refresh the data to the target table in a parallel way, the official recommendation is set to 16.

The Buffer table does not refresh data in real time; it only refreshes when a threshold condition is met. The threshold conditions consist of three sets of minimum and maximum values. The specific meanings of the three sets of extreme value condition parameters are described next:

  • min_time and max_time: the minimum and maximum values of the time condition in seconds, calculated from the first write to the table;

  • min_rows and max_rows: the minimum and maximum values of the data row condition;

  • min_bytes and max_bytes: the minimum and maximum values of the data volume condition in bytes.

According to the above conditions, it can be seen that there are three judgment bases for Buffer table refreshing, and if any of them is satisfied, the Buffer table will refresh the data, and they are:

  1. If all the minimum thresholds of the three sets of conditions have been met, the refresh action is triggered;

  2. If at least one of the three sets of conditions is satisfied by the maximum threshold condition, the refresh action is triggered;

One more thing to note is that these three sets of conditions are evaluated separately within each of the num_layers buffers. Assuming num_layers=16, the Buffer table opens up to 16 threads to handle data writes; they receive requests in a round-robin fashion and each thread judges the conditions independently. That is, for a Buffer table with max_bytes=100000000 (about 100 MB) and num_layers=16, the maximum amount of data the Buffer table can hold at the same time is about 1.6 GB.
create table xx(
id Int64
)engine=Log ;
CREATE TABLE buffer_to_xx AS xx
ENGINE = Buffer(default, xx, 16, 10, 100, 10000, 1000000, 10000000, 100000000) ;

INSERT INTO TABLE buffer_to_xx SELECT number FROM numbers(1000000) ;

At this point there is data in buffer_to_xx but none yet in the target table xx, because none of the maximum thresholds have been reached, whether judged by time, row count or data size. It therefore takes roughly 100 seconds before the data is flushed from buffer_to_xx into xx.

Relevant logging information can be found in ClickHouse's logs:

INSERT INTO TABLE buffer_to_xx SELECT number FROM numbers(1000001) ;

Because this single INSERT block exceeds max_rows (1,000,000), the data is written directly into the target table, bypassing the buffer.

Create a 'merge.hits_buffer' table with the same structure as the target table and use the Buffer engine. When writing to this table, the data is buffered in RAM and later written to the target table. 16 buffers are created. Each of them is flushed if 100 seconds have elapsed, or one million rows have been written, or 100 MB of data has been written; or if 10 seconds have elapsed and, at the same time, 10,000 rows and 10 MB of data have been written. For example, if only one row has been written, it will still be flushed after 100 seconds; if many rows have been written, the data is flushed sooner.

When the server is stopped, using DROP TABLE or DETACH TABLE, the buffer data will also be flushed to the target table.

You can set empty strings in single quotes for database names and table names. This indicates that there is no target table. In this case, the buffer is simply cleared when the data refresh condition is reached. This may be useful to keep the data window in memory.

When reading from a buffer table, the data is processed from the buffer and the target table (if any).
Please note:

• Buffer tables do not support indexes. Data in the buffer is fully scanned, which can be slow for large buffers. (For data in the subordinate table, the indexes it supports will be used.)

• If the set of columns in the Buffer table does not match the set of columns in the subordinate table, only the subset of columns that exists in both tables is inserted.

• If the types of any of the columns differ between the Buffer table and the subordinate table, an error message is written to the server log and the buffer is cleared. The same happens if the subordinate table does not exist when the buffer is flushed.

• If you need to run ALTER on the subordinate table and the Buffer table, it is recommended to delete the Buffer table first, run ALTER on the subordinate table, and then create the Buffer table again.

• If the server restarts abnormally, the data in the buffer is lost.

• FINAL and SAMPLE do not work correctly for Buffer tables. These conditions are passed to the target table but are not used to process the data in the buffer. If these features are needed, it is recommended to use the Buffer table only for writes and to read from the target table.

• When data is added to the buffer, one of the buffers is locked. This causes delays if a read operation is performed on the table at the same time.

• Data inserted into a Buffer table may end up in the subordinate table in a different order and in different blocks. Because of this, it is difficult to use a Buffer table to write to a CollapsingMergeTree correctly. To avoid this problem, you can set num_layers to 1.

• If the target table is replicated, some of the expected characteristics of replicated tables are lost when writing through a Buffer table. Random changes in the order and size of the rows in the data parts cause deduplication to stop working, so it is not possible to reliably write to a replicated table "exactly once".

• Because of these drawbacks, we recommend using Buffer tables only in rare cases.

• A Buffer table is useful when too many INSERTs are received from a large number of servers over a short period of time and the data cannot be buffered before insertion, which means the INSERTs cannot run fast enough.

• Note that it does not make sense to insert into a Buffer table one row at a time either: this yields only a few thousand rows per second, whereas inserting larger blocks of data can yield more than a million rows per second (see the section on "Performance").

1) Create a target table
create table tb_user_target(uid Int8 , name String) engine=TinyLog ;
2) Create a cache table
CREATE TABLE tb_user_buffer AS tb_user_target ENGINE = Buffer(doit26, tb_user_target, 16, 10, 100, 10000, 1000000, 10000000, 100000000) ;
 CREATE TABLE tb_user_buffer2 AS tb_user_target ENGINE = Buffer(doit26, tb_user_target, 16, 10, 100, 2, 10, 10000000, 100000000) ;


3) Inserting data into a cached table
insert into tb_user_buffer values(1,'Yang'),(2,'Haha'),(3,'ADA') ;
4) Wait to see the data in the target table later
select * from tb_user_target ;

5 Query Syntax

with x as (select from ) ,y as(select from) select from x , y …

5.1 with

ClickHouse supports CTE (Common Table Expression) to enhance the expression of query statements.

SELECT pow(2, 2)
┌─pow(2, 2)─┐
│         4 │
└───────────┘
SELECT pow(pow(2, 2), 2)

┌─pow(pow(2, 2), 2)─┐
│                16 │
└───────────────────┘

Readability and maintainability can be greatly improved by switching to the CTE form:

with pow(2,2) as a select pow(a,3) ;

1) Defining Variables

WITH 
    1 AS start,
    10 AS end
SELECT 
    id + start,
    *
FROM tb_mysql

┌─plus(id, start)─┬─id─┬─name─┬─age─┐
│               2 │  1 │ zss  │  23 │
│               3 │  2 │ lss  │  33 │
│               4 │  3 │ ww   │  44 │
│               2 │  1 │ zss  │  23 │
│               3 │  2 │ lss  │  33 │
│               2 │  1 │ zss  │  23 │
│               3 │  2 │ lss  │  33 │
└─────────────────┴────┴──────┴─────┘

2) Calling Functions

SELECT *
FROM tb_partition
┌─id─┬─name─┬────────────birthday─┐
│  1 │ xl   │ 2021-05-20 10:50:46 │
│  2 │ xy   │ 2021-05-20 11:17:47 │
└────┴──────┴─────────────────────┘
┌─id─┬─name─┬────────────birthday─┐
│  3 │ xf   │ 2021-05-19 11:11:12 │
└────┴──────┴─────────────────────┘
WITH toDate(birthday) AS bday
SELECT 
    id,
    name,
    bday
FROM tb_partition

┌─id─┬─name─┬───────bday─┐
│  1 │ xl   │ 2021-05-20 │
│  2 │ xy   │ 2021-05-20 │
└────┴──────┴────────────┘
┌─id─┬─name─┬───────bday─┐
│  3 │ xf   │ 2021-05-19 │
└────┴──────┴────────────┘
-- practice
WITH
    count(1) AS cnt,
    groupArray(cdate) AS list
SELECT
    name,
    cnt,
    list
FROM tb_shop2
GROUP BY name

3) Sub-queries

You can define a subquery, but note that the subquery must return a single row; otherwise an exception is thrown.


WITH 
    (
        SELECT *
        FROM tb_partition
        WHERE id = 1
    ) AS sub
SELECT 
    *,
    sub
FROM tb_partition

┌─id─┬─name─┬────────────birthday─┬─sub────────────────────────────┐
│  1 │ xl   │ 2021-05-20 10:50:46 │ (1,'xl','2021-05-20 10:50:46') │
│  2 │ xy   │ 2021-05-20 11:17:47 │ (1,'xl','2021-05-20 10:50:46') │
└────┴──────┴─────────────────────┴────────────────────────────────┘
┌─id─┬─name─┬────────────birthday─┬─sub────────────────────────────┐
│  3 │ xf   │ 2021-05-19 11:11:12 │ (1,'xl','2021-05-20 10:50:46') │
└────┴──────┴─────────────────────┴────────────────────────────────┘

 with (select * from tb_shop2 where name = 'a' and cdate = '2017-03-01') as  x  select * from tb_shop2 where (name,cdate,money)=x ;
-- 1. the result of the subquery must be a single row of data
-- 2. tuple comparison, e.g. where (id,name,age) = (1,'zss',23)

5.2 from

SQL is a collection-oriented programming language, and from determines where the program reads data from.

  1. Querying data in a table

  2. Querying data in subqueries

  3. Query data in table function select * from numbers(3) ;

file hdfs mysql numbers(10)

table function

A table function builds a table and is used in the following scenarios:

The FROM clause of a SELECT query.

CREATE TABLE ... AS <query>.


1 file
file(path, format, structure)
path — The path to the file, relative to user_files_path. The path supports the following globs in read-only mode: *, ?, {abc,def} and {N..M}, where N and M are numbers and 'abc', 'def' are strings.
format — The format of the file.
structure — Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...'.

The data files must be in the specified directory /var/lib/clickhouse/user_files

SELECT *
FROM file('', 'CSV', 'id Int8,name String , age UInt8')
-- any document in the folder
SELECT *
FROM file('*', 'CSV', 'id Int8,name String , age UInt8')
2 numbers
SELECT *
FROM numbers(10) ;

SELECT *
FROM numbers(2, 10) ;

SELECT *
FROM numbers(10) limit 3 ;

SELECT toDate('2020-01-01') + number AS d
FROM numbers(365)

3 mysql
ClickHouse can query data directly from a MySQL service:
mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
SELECT *
FROM mysql('linux01:3306', 'db_doit_ch', 'emp', 'root', 'root')


4 hdfs

SELECT * FROM hdfs('hdfs://hdfs1:9000/test', 'TSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 2

SELECT *

FROM hdfs('hdfs://linux01:8020/', 'CSV', 'id Int8 ,name String , age Int8')


5.3 array join

The ARRAY JOIN clause allows a JOIN with an array or nested field inside the table, expanding one row containing an array into multiple rows. It is similar to the explode function in Hive.

CREATE TABLE test_arrayjoin
(
    `name` String,
    `vs` Array(Int8)
)
ENGINE = Memory ;
insert into test_arrayjoin values('xw',[1,2,3]),('xl',[4,5]),('xk',[1]);
-- Expanding data in an array
SELECT 
    *,
    s
FROM test_arrayjoin
ARRAY JOIN vs AS s
┌─name─┬─vs──────┬─s─┐
│ xw   │ [1,2,3] │ 1 │
│ xw   │ [1,2,3] │ 2 │
│ xw   │ [1,2,3] │ 3 │
│ xl   │ [4,5]   │ 4 │
│ xl   │ [4,5]   │ 5 │
│ xk   │ [1]     │ 1 │
└──────┴─────────┴───┘
-- arrayMap Higher-order function that operates on each element of an array.
SELECT 
    *,
    arrayMap(x->x*2 , vs) vs2
FROM test_arrayjoin ;
SELECT 
    *,
    arrayMap(x -> (x * 2), vs) AS vs2
FROM test_arrayjoin
┌─name─┬─vs──────┬─vs2─────┐
│ xw   │ [1,2,3] │ [2,4,6] │
│ xl   │ [4,5]   │ [8,10]  │
│ xk   │ [1]     │ [2]     │
└──────┴─────────┴─────────┘
SELECT 
    *,
    arrayMap(x -> (x * 2), vs) AS vs2 ,
    vv1 ,
    vv2
FROM test_arrayjoin
array join 
vs as vv1 ,
vs2 as vv2 ;
┌─name─┬─vs──────┬─vs2─────┬─vv1─┬─vv2─┐
│ xw   │ [1,2,3] │ [2,4,6] │   1 │   2 │
│ xw   │ [1,2,3] │ [2,4,6] │   2 │   4 │
│ xw   │ [1,2,3] │ [2,4,6] │   3 │   6 │
│ xl   │ [4,5]   │ [8,10]  │   4 │   8 │
│ xl   │ [4,5]   │ [8,10]  │   5 │  10 │
│ xk   │ [1]     │ [2]     │   1 │   2 │
└──────┴─────────┴─────────┴─────┴─────┘
select
id ,
h ,
xx
from
tb_array_join 
array join 
hobby  as h  ,
arrayEnumerate(hobby) as xx ;
┌─id─┬─h─────┬─xx─┐
│  1 │ eat   │  1 │
│  1 │ drink │  2 │
│  1 │ sleep │  3 │
│  2 │ study │  1 │
│  2 │ sport │  2 │
│  2 │ read  │  3 │
└────┴───────┴────┘
┌─id─┬─h─────┬─xx─┐
│  3 │ eat   │  1 │
│  3 │ drink │  2 │
└────┴───────┴────┘
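A related variant, shown here as a sketch against the test_arrayjoin table above: LEFT ARRAY JOIN keeps rows whose array is empty instead of dropping them, filling the joined column with a default value.

SELECT *
FROM test_arrayjoin
LEFT ARRAY JOIN vs AS s ;
-- a row with vs = [] would still appear once, with s = 0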

Example

a,2017-02-05,200
a,2017-02-06,300
a,2017-02-07,200
a,2017-02-08,400
a,2017-02-08,300
a,2017-02-10,600
b,2017-02-05,200
b,2017-02-06,300
b,2017-02-08,200
b,2017-02-09,400
b,2017-02-10,600
c,2017-01-31,200
c,2017-02-01,300
c,2017-02-02,200
c,2017-02-03,400
c,2017-02-10,600
a,2017-03-01,200
a,2017-03-02,300
a,2017-03-03,200
a,2017-03-04,400
a,2017-03-05,600
drop table if exists tb_shop ;
CREATE TABLE tb_shop
(
    `name` String,
    `cdate` Date,
    `cost` Float64
)engine=ReplacingMergeTree(cdate)
order by (name,cdate) ;
-- Import of data
clickhouse-client -q 'insert into doit23.tb_shop format CSV' < shop.txt ;  
┌─name─┬──────cdate─┬─cost─┐
│ a    │ 2017-02-05 │  200 │
│ a    │ 2017-02-06 │  300 │
│ a    │ 2017-02-07 │  200 │
│ a    │ 2017-02-08 │  400 │
│ a    │ 2017-02-10 │  600 │
│ a    │ 2017-03-01 │  200 │
│ a    │ 2017-03-02 │  300 │
│ a    │ 2017-03-03 │  200 │
│ a    │ 2017-03-04 │  400 │
│ a    │ 2017-03-05 │  888 │
│ b    │ 2017-02-05 │  200 │
│ b    │ 2017-02-06 │  300 │
│ b    │ 2017-02-08 │  200 │
│ b    │ 2017-02-09 │  400 │
│ b    │ 2017-02-10 │  600 │
│ c    │ 2017-01-31 │  200 │
│ c    │ 2017-02-01 │  300 │
│ c    │ 2017-02-02 │  200 │
│ c    │ 2017-02-03 │  400 │
│ c    │ 2017-02-10 │  600 │
└──────┴────────────┴──────┘
select
 name ,
 groupArray(cdate) arr ,
 arrayEnumerate(arr) as indexs
from
tb_shop
group by name;

┌─name─┬─arr──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─indexs─────────────────┐
│ b    │ ['2017-02-05','2017-02-06','2017-02-08','2017-02-09','2017-02-10']                                                                  │ [1,2,3,4,5]            │
│ c    │ ['2017-01-31','2017-02-01','2017-02-02','2017-02-03','2017-02-10']                                                                  │ [1,2,3,4,5]            │
│ a    │ ['2017-02-05','2017-02-06','2017-02-07','2017-02-08','2017-02-10','2017-03-01','2017-03-02','2017-03-03','2017-03-04','2017-03-05'] │ [1,2,3,4,5,6,7,8,9,10] │
└──────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┘
select 
name ,
dt - num
from
(select
 name ,
 groupArray(cdate) arr ,
 arrayEnumerate(arr) as indexs
from
tb_shop
group by name
)
array  join 
arr as dt ,
indexs as num ;
┌─name─┬─minus(dt, num)─┐
│ b    │     2017-02-04 │
│ b    │     2017-02-04 │
│ b    │     2017-02-05 │
│ b    │     2017-02-05 │
│ b    │     2017-02-05 │
│ c    │     2017-01-30 │
│ c    │     2017-01-30 │
│ c    │     2017-01-30 │
│ c    │     2017-01-30 │
│ c    │     2017-02-05 │
│ a    │     2017-02-04 │
│ a    │     2017-02-04 │
│ a    │     2017-02-04 │
│ a    │     2017-02-04 │
│ a    │     2017-02-05 │
│ a    │     2017-02-23 │
│ a    │     2017-02-23 │
│ a    │     2017-02-23 │
│ a    │     2017-02-23 │
│ a    │     2017-02-23 │
└──────┴────────────────┘
select
name ,
diff ,
count(1) cnt
from
(select 
name ,
(dt - num) as diff
from
(select
 name ,
 groupArray(cdate) arr ,
 arrayEnumerate(arr) as indexs
from
tb_shop
group by name
)
array  join 
arr as dt ,
indexs as num
)
group by name , diff;

┌─name─┬───────diff─┬─count(1)─┐
│ b    │ 2017-02-04 │        2 │
│ a    │ 2017-02-23 │        5 │
│ c    │ 2017-01-30 │        4 │
│ c    │ 2017-02-05 │        1 │
│ a    │ 2017-02-04 │        4 │
│ b    │ 2017-02-05 │        3 │
│ a    │ 2017-02-05 │        1 │
└──────┴────────────┴──────────┘
select
name ,
diff ,
count(1) cnt
from
(select 
name ,
(dt - num) as diff
from
(select
 name ,
 groupArray(cdate) arr ,
 arrayEnumerate(arr) as indexs
from
tb_shop
group by name
)
array  join 
arr as dt ,
indexs as num
)
group by name , diff
order by cnt desc 
limit 1 by name  ;

┌─name─┬───────diff─┬─cnt─┐
│ a    │ 2017-02-23 │   5 │
│ c    │ 2017-01-30 │   4 │
│ b    │ 2017-02-05 │   3 │
└──────┴────────────┴─────┘

5.4 Join queries

All standard SQL JOIN types are supported:

  • INNER JOIN, only matching rows are returned.
  • LEFT OUTER JOIN, non-matching rows from left table are returned in addition to matching rows.
  • RIGHT OUTER JOIN, non-matching rows from right table are returned in addition to matching rows.
  • FULL OUTER JOIN, non-matching rows from both tables are returned in addition to matching rows.
  • CROSS JOIN, produces cartesian product of whole tables, “join keys” are not specified.

The JOIN clause allows for the joining of data from left and right tables, which is one of the most commonly used query clauses. Its syntax contains two parts: join precision and join type.


Join precision

The join precision determines the strategy a JOIN query uses to match rows; ALL, ANY and ASOF are currently supported. If it is not declared explicitly, the default is ALL, and the default join precision can be changed with the join_default_strictness configuration parameter.

The determination of whether the data are joined and matched is made using a JOIN KEY, which is currently only supported for equals (EQUAL JOIN). A CROSS JOIN does not require the use of a JOIN KEY because it produces a Cartesian product.

-- Preparation of data
drop table if exists yg ;
create table yg(
id Int8 ,
name String ,
age UInt8  ,
bid Int8
)engine=Log ;
insert into  yg values(1,'AA',23,1) ,
(2,'BB',24,2) ,
(3,'VV',27,1) ,
(4,'CC',13,3) ,
(5,'KK',53,3) ,
(6,'MM',33,3)  ;

drop table if exists bm ;
create table bm(
bid Int8 ,
name String 
)engine=Log ;
insert into bm values(1,'x'),(2,'Y'),(3,'Z');

drop table if exists gz ;
drop table gz ;
create table gz(
id Int8 ,
jb Int64 ,
jj Int64
)engine=Log ;
insert into gz values (1,1000,2000),(1,1000,2000),(2,2000,1233),(3,2000,3000),(4,4000,1000),(5,5000,2000);
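Before looking at join precision, a simple equi-join against the bm table prepared above (a sketch; the result simply attaches each department name from bm to the employees of yg):

SELECT yg.id, yg.name, bm.name AS dept
FROM yg
LEFT JOIN bm ON yg.bid = bm.bid ;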
1) all

If a row in the left table has multiple matching rows in the right table, all of the matching rows in the right table are returned. Whether rows match is determined by the values of the join key (JOIN KEY) in the left and right tables being exactly equal, i.e. the = comparison.

SELECT *
FROM yg 
ALL INNER JOIN gz ON yg.id = gz.id ;


SELECT *
FROM yg AS inser
ALL  JOIN gz ON yg.id = gz.id ;
SELECT *
FROM yg AS inser
JOIN gz ON yg.id = gz.id ;

┌─id─┬─name─┬─age─┬─bid─┬─gz.id─┬───jb─┬───jj─┐
│  1 │ AA   │  23 │   1 │     1 │ 1000 │ 2000 │
│  1 │ AA   │  23 │   1 │     1 │ 1000 │ 2000 │
│  2 │ BB   │  24 │   2 │     2 │ 2000 │ 1233 │
│  3 │ VV   │  27 │   1 │     3 │ 2000 │ 3000 │
│  4 │ CC   │  13 │   3 │     4 │ 4000 │ 1000 │
│  5 │ KK   │  53 │   3 │     5 │ 5000 │ 2000 │
└────┴──────┴─────┴─────┴───────┴──────┴──────┘
2) any

If a row in the left table has multiple matching rows in the right table, only the first matching row from the right table is returned. ANY uses the same matching criteria as ALL.


SELECT *
FROM yg
ANY INNER JOIN gz ON yg.id = gz.id

┌─id─┬─name─┬─age─┬─bid─┬─gz.id─┬───jb─┬───jj─┐
│  1 │ AA   │  23 │   1 │     1 │ 1000 │ 2000 │
│  2 │ BB   │  24 │   2 │     2 │ 2000 │ 1233 │
│  3 │ VV   │  27 │   1 │     3 │ 2000 │ 3000 │
│  4 │ CC   │  13 │   3 │     4 │ 4000 │ 1000 │
│  5 │ KK   │  53 │   3 │     5 │ 5000 │ 2000 │
└────┴──────┴─────┴─────┴───────┴──────┴──────┘
3) asof

ASOF appends a fuzzy match condition (asof_column) after the equi-join key.

drop table if exists emp1 ;
create table emp1(
id Int8 ,
name String ,
ctime DateTime
)engine=Log ;
insert into emp1 values(1,'AA','2021-01-03 00:00:00'),
(1,'AA','2021-01-02 00:00:00'),
(2,'CC','2021-01-01 00:00:00'),
(3,'DD','2021-01-01 00:00:00'),
(4,'EE','2021-01-01 00:00:00');

drop table if exists emp2 ;
create table emp2(
id Int8 ,
name String ,
ctime DateTime
)engine=Log ;
insert into emp2 values(1,'aa','2021-01-02 00:00:00'),
(1,'aa','2021-01-02 00:00:00'),
(2,'cc','2021-01-01 00:00:00'),
(3,'dd','2021-01-01 00:00:00');

-- ASOF inner join 
SELECT *
FROM emp2
ASOF INNER JOIN emp1 ON (emp1.id = emp2.id) AND (emp1.ctime > emp2.ctime)

┌─id─┬─name─┬───────────────ctime─┬─emp1.id─┬─emp1.name─┬──────────emp1.ctime─┐
│  1 │ aa   │ 2021-01-02 00:00:00 │       1 │ AA        │ 2021-01-03 00:00:00 │
│  1 │ aa   │ 2021-01-02 00:00:00 │       1 │ AA        │ 2021-01-03 00:00:00 │
└────┴──────┴─────────────────────┴─────────┴───────────┴─────────────────────┘

5.5 with cube / with rollup / with totals

  • with cube
  • with rollup
  • with totals
drop table if exists tb_with ;
create table tb_with(
	id UInt8 ,
	vist UInt8,
    province String ,
    city String ,
    area String
)engine=MergeTree() 
order by id ;
insert into tb_with values(1,12,'Shandong','Jinan','Lixia') ;
insert into tb_with values(2,12,'Shandong','Jinan','Lixia') ;
insert into tb_with values(3,12,'Shandong','Jinan','Tianqiao') ;
insert into tb_with values(4,12,'Shandong','Jinan','Tianqiao') ;
insert into tb_with values(5,88,'Shandong','Qingdao','Huangdao') ;
insert into tb_with values(6,88,'Shandong','Qingdao','Huangdao') ;
insert into tb_with values(7,12,'Shanxi','Taiyuan','Xiaodian') ;
insert into tb_with values(8,12,'Shanxi','Taiyuan','Xiaodian') ;
insert into tb_with values(9,112,'Shanxi','Taiyuan','Tsim Sha Tsui') ;
SELECT 
    province,
    city,
    area,
    sum(vist)
FROM tb_with
GROUP BY 
    province,
    city,
    area
    WITH CUBE ;
┌─province─┬─city────┬─area──────────┬─sum(vist)─┐
│ Shandong │ Jinan   │ Lixia         │        24 │
│ Shandong │ Jinan   │ Tianqiao      │        24 │
│ Shandong │ Qingdao │ Huangdao      │       176 │
│ Shanxi   │ Taiyuan │ Xiaodian      │        24 │
│ Shanxi   │ Taiyuan │ Tsim Sha Tsui │       112 │
└──────────┴─────────┴───────────────┴───────────┘
┌─province─┬─city────┬─area─┬─sum(vist)─┐
│ Shandong │ Jinan   │      │        48 │
│ Shandong │ Qingdao │      │       176 │
│ Shanxi   │ Taiyuan │      │       136 │
└──────────┴─────────┴──────┴───────────┘
┌─province─┬─city─┬─area──────────┬─sum(vist)─┐
│ Shandong │      │ Lixia         │        24 │
│ Shandong │      │ Tianqiao      │        24 │
│ Shandong │      │ Huangdao      │       176 │
│ Shanxi   │      │ Xiaodian      │        24 │
│ Shanxi   │      │ Tsim Sha Tsui │       112 │
└──────────┴──────┴───────────────┴───────────┘
┌─province─┬─city─┬─area─┬─sum(vist)─┐
│ Shandong │      │      │       224 │
│ Shanxi   │      │      │       136 │
└──────────┴──────┴──────┴───────────┘
┌─province─┬─city────┬─area──────────┬─sum(vist)─┐
│          │ Jinan   │ Lixia         │        24 │
│          │ Jinan   │ Tianqiao      │        24 │
│          │ Qingdao │ Huangdao      │       176 │
│          │ Taiyuan │ Xiaodian      │        24 │
│          │ Taiyuan │ Tsim Sha Tsui │       112 │
└──────────┴─────────┴───────────────┴───────────┘
┌─province─┬─city────┬─area─┬─sum(vist)─┐
│          │ Jinan   │      │        48 │
│          │ Qingdao │      │       176 │
│          │ Taiyuan │      │       136 │
└──────────┴─────────┴──────┴───────────┘
┌─province─┬─city─┬─area──────────┬─sum(vist)─┐
│          │      │ Lixia         │        24 │
│          │      │ Tianqiao      │        24 │
│          │      │ Huangdao      │       176 │
│          │      │ Xiaodian      │        24 │
│          │      │ Tsim Sha Tsui │       112 │
└──────────┴──────┴───────────────┴───────────┘
┌─province─┬─city─┬─area─┬─sum(vist)─┐
│          │      │      │       360 │
└──────────┴──────┴──────┴───────────┘
 SELECT 
    province,
    city,
    area,
    sum(vist)
FROM tb_with
GROUP BY 
    province,
    city,
    area
    WITH ROLLUP;

┌─province─┬─city────┬─area──────────┬─sum(vist)─┐
│ Shandong │ Jinan   │ Lixia         │        24 │
│ Shandong │ Jinan   │ Tianqiao      │        24 │
│ Shandong │ Qingdao │ Huangdao      │       176 │
│ Shanxi   │ Taiyuan │ Xiaodian      │        24 │
│ Shanxi   │ Taiyuan │ Tsim Sha Tsui │       112 │
└──────────┴─────────┴───────────────┴───────────┘
┌─province─┬─city────┬─area─┬─sum(vist)─┐
│ Shandong │ Jinan   │      │        48 │
│ Shandong │ Qingdao │      │       176 │
│ Shanxi   │ Taiyuan │      │       136 │
└──────────┴─────────┴──────┴───────────┘
┌─province─┬─city─┬─area─┬─sum(vist)─┐
│ Shandong │      │      │       224 │
│ Shanxi   │      │      │       136 │
└──────────┴──────┴──────┴───────────┘
┌─province─┬─city─┬─area─┬─sum(vist)─┐
│          │      │      │       360 │
└──────────┴──────┴──────┴───────────┘
SELECT 
    province,
    city,
    area,
    sum(vist)
FROM tb_with
GROUP BY 
    province,
    city,
    area
    WITH TOTALS;
┌─province─┬─city────┬─area──────────┬─sum(vist)─┐
│ Shandong │ Jinan   │ Lixia         │        24 │
│ Shandong │ Jinan   │ Tianqiao      │        24 │
│ Shandong │ Qingdao │ Huangdao      │       176 │
│ Shanxi   │ Taiyuan │ Xiaodian      │        24 │
│ Shanxi   │ Taiyuan │ Tsim Sha Tsui │       112 │
└──────────┴─────────┴───────────────┴───────────┘
Totals:
┌─province─┬─city─┬─area─┬─sum(vist)─┐
│          │      │      │       360 │
└──────────┴──────┴──────┴───────────┘ 

6 Functions

ClickHouse provides two main kinds of functions: ordinary functions and aggregate functions. Ordinary functions are defined by the IFunction interface, which has dozens of implementations such as FunctionFormatDateTime and FunctionSubstring. Besides common functions (arithmetic, date conversion, and so on), there are also some very practical ones, such as URL extraction functions and IP address masking functions. Ordinary functions are stateless and their effect is applied to every row; of course, the implementation does not process the rows one by one but uses vectorization to operate on a whole column at once. Aggregate functions are defined by the IAggregateFunction interface and, unlike ordinary stateless functions, they are stateful. Taking the COUNT aggregate function as an example, its state in AggregateFunctionCount is recorded with a single UInt64. The state of an aggregate function supports serialization and deserialization, so it can be transferred between distributed nodes for incremental computation.

6.1 Ordinary Functions

6.1.1 Type conversion functions

  • toInt8(expr) — Results in the Int8 data type.
  • toInt16(expr) — Results in the Int16 data type.
  • toInt32(expr) — Results in the Int32 data type.
  • toInt64(expr) — Results in the Int64 data type.
  • toInt128(expr) — Results in the Int128 data type.
  • toInt256(expr) — Results in the Int256 data type.
SELECT toInt64(nan), toInt32(32), toInt16('16'), toInt8(8.8);
┌─────────toInt64(nan)─┬─toInt32(32)─┬─toInt16('16')─┬─toInt8(8.8)─┐
│ -9223372036854775808 │          32 │            16 │           8 │
└──────────────────────┴─────────────┴───────────────┴─────────────┘
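The OrZero / OrNull variants listed below behave the same way but do not throw on invalid input; a quick illustration:

SELECT toInt32OrZero('abc') AS z, toInt32OrNull('abc') AS n ;
-- z = 0 , n = NULL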
  • toUInt(8|16|32|64|256)OrZero

  • toUInt(8|16|32|64|256)OrNull

  • toFloat(32|64)

  • toFloat(32|64)OrZero

  • toFloat(32|64)OrNull

  • toDate

  • toDateOrZero

  • toDateOrNull

  • toDateTime

  • toDateTimeOrZero

  • toDateTimeOrNull

  • toDecimal(32|64|128|256)

  • toYYYYMM

toString

SELECT
    now() AS now_local,
    toString(now(), 'Asia/Yekaterinburg') AS now_yekat;
    ┌───────────now_local─┬─now_yekat───────────┐
    │ 2016-06-15 00:11:21 │ 2016-06-15 02:11:21 │
    └─────────────────────┴─────────────────────┘
  • CAST(x, T)
Arguments
- `x` — Any type.
- `T` — The destination type, given as a String.
Returned value: x converted to the destination type.
SELECT
    '2016-06-15 23:00:00' AS timestamp,
    CAST(timestamp AS DateTime) AS datetime,
    CAST(timestamp AS Date) AS date,
    CAST(timestamp, 'String') AS string,
    CAST(timestamp, 'FixedString(22)') AS fixed_string;
┌─timestamp───────────┬────────────datetime─┬───────date─┬─string──────────────┬─fixed_string──────────────┐
│ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00\0\0\0 │
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘

6.1.2 Date functions

SELECT
    toDateTime('2016-06-15 23:00:00') AS time,
    toDate(time) AS date_local,
    toDate(time, 'Asia/Yekaterinburg') AS date_yekat,
    toString(time, 'US/Samoa') AS time_samoa
┌────────────────time─┬─date_local─┬─date_yekat─┬─time_samoa──────────┐
│ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-16 │ 2016-06-15 09:00:00 │
└─────────────────────┴────────────┴────────────┴─────────────────────┘

  • toDate
  • toYear
  • toMonth
  • toHour
  • toMinute
  • toSecond
  • toUnixTimestamp
  • date_trunc truncates the time date_trunc(unit, value[, timezone])
second
minute
hour
day
week
month
quarter
year

SELECT now(), date_trunc('hour', now());
┌───────────────now()─┬─date_trunc('hour', now())─┐
│ 2021-05-21 13:52:42 │       2021-05-21 13:00:00 │
└─────────────────────┴───────────────────────────┘
  • date_add
date_add(unit, value, date)
second
minute
hour
day
week
month
quarter
year
SELECT date_add(YEAR, 3, toDate('2018-01-01'));
date_diff('unit', startdate, enddate, [timezone])
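A small illustration of date_diff (the unit names follow the same list as for date_add):

SELECT date_diff('day', toDate('2021-05-01'), toDate('2021-05-21')) AS days ;
-- days = 20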
  • date_diff
  • date_sub
  • timestamp_add
  • timestamp_sub
  • toYYYYMM
  • toYYYYMMDD
  • toYYYYMMDDhhmmss
  • formatDateTime
%C year divided by 100 and truncated to integer (00-99) 20
%d day of the month, zero-padded (01-31) 02
%D Short MM/DD/YY date, equivalent to %m/%d/%y 01/02/18
%e day of the month, space-padded ( 1-31) 2
%F short YYYY-MM-DD date, equivalent to %Y-%m-%d 2018-01-02
%G four-digit year format for the ISO week number, calculated from the week-based year defined by the ISO 8601 standard; normally useful only with %V 2018
%g two-digit year format, aligned to ISO 8601, abbreviated from four-digit notation 18
%H hour in 24h format (00-23) 22
%I hour in 12h format (01-12) 10
%j day of the year (001-366) 002
%m month as a decimal number (01-12) 01
%M minute (00-59) 33
%n new-line character (‘’)
%p AM or PM designation PM
%Q Quarter (1-4) 1
%R 24-hour HH:MM time, equivalent to %H:%M 22:33
%S second (00-59) 44
%t horizontal-tab character (’)
%T ISO 8601 time format (HH:MM:SS), equivalent to %H:%M:%S 22:33:44
%u ISO 8601 weekday as number with Monday as 1 (1-7) 2
%V ISO 8601 week number (01-53) 01
%w weekday as a decimal number with Sunday as 0 (0-6) 2
%y Year, last two digits (00-99) 18
%Y Year 2018
%% a % sign %
SELECT formatDateTime(now(), '%D')

┌─formatDateTime(now(), '%D')─┐
│ 05/21/21                    │
└─────────────────────────────┘
  • FROM_UNIXTIME

    SELECT FROM_UNIXTIME(423543535)
    
    ┌─FROM_UNIXTIME(423543535)─┐
    │      1983-06-04 10:58:55 │
    └──────────────────────────┘
    

6.1.3 Conditional functions

  • if(exp1 , exp2,exp3)
  • multiIf()
drop table if exists tb_if;
create table if not exists tb_if(
    uid Int16, 
    name String ,
    gender String
)engine = TinyLog ;
insert into tb_if values(1,'zss1','M') ;
insert into tb_if values(2,'zss2','M') ;
insert into tb_if values(3,'zss3','F') ;
insert into tb_if values(4,'zss4','O') ;
insert into tb_if values(5,'zss5','F') ;
-------- single conditional judgment ---------
SELECT 
    *,
    if(gender = 'M', 'Male', 'Female')
FROM tb_if

┌─uid─┬─name─┬─gender─┬─if(equals(gender, 'M'), 'Male', 'Female')─┐
│   1 │ zss1 │ M      │ Male                                      │
│   2 │ zss2 │ M      │ Male                                      │
│   3 │ zss3 │ F      │ Female                                    │
│   4 │ zss4 │ O      │ Female                                    │
│   5 │ zss5 │ F      │ Female                                    │
└─────┴──────┴────────┴───────────────────────────────────────────┘
-------------------------------------------------------------
-------- multi-conditional judgment ---------

SELECT 
    *,
    multiIf(gender = 'M', 'Male', gender = 'F', 'Female', 'Confidentiality') AS sex
FROM tb_if

┌─uid─┬─name─┬─gender─┬─sex─────────────┐
│   1 │ zss1 │ M      │ Male            │
│   2 │ zss2 │ M      │ Male            │
│   3 │ zss3 │ F      │ Female          │
│   4 │ zss4 │ O      │ Confidentiality │
│   5 │ zss5 │ F      │ Female          │
└─────┴──────┴────────┴─────────────────┘

6.1.6 Other

visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"';
visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}';
select  JSONExtract('{"a":"hello","b":"tom","c":12}', 'Tuple(String,String,UInt8)') as kn;
-- tuple functions
select tupleElement((1,2,3,4,66),5);

-- BitMap function
-- bitmapBuild
SELECT 
    bitmapBuild([1, 2, 3, 4, 5]) AS res,
    toTypeName(res)
┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─┐
│     │ AggregateFunction(groupBitmap, UInt8)    │
└─────┴──────────────────────────────────────────┘
-- bitmapToArray
SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res;
-- bitmapSubsetInRange
SELECT bitmapToArray(bitmapSubsetInRange(bitmapBuild([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 100, 200, 500]), toUInt32(30), toUInt32(200))) AS res
┌─res───────────────┐
│ [30,31,32,33,100] │
└───────────────────┘
-- bitmapContains
SELECT bitmapContains(bitmapBuild([1, 5, 7, 9]), toUInt32(9)) AS res

┌─res─┐
│   1 │
└─────┘
-- bitmapHasAny : true if the two bitmaps share at least one element
SELECT bitmapHasAny(bitmapBuild([1, 2, 3]), bitmapBuild([3, 4, 5])) AS res

┌─res─┐
│   1 │
└─────┘
-- bitmapHasAll : true if the first bitmap contains all elements of the second

-- bitmapMin 
-- bitmapMax
-- bitmapAnd intersection
-- bitmapOr union
-- bitmapAndnot difference set
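A short illustration of the set operations (the input values are chosen arbitrarily):

SELECT
    bitmapToArray(bitmapAnd(bitmapBuild([1, 2, 3]), bitmapBuild([3, 4, 5]))) AS and_res,
    bitmapToArray(bitmapOr(bitmapBuild([1, 2, 3]), bitmapBuild([3, 4, 5]))) AS or_res ;
-- and_res = [3] , or_res = [1,2,3,4,5]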


7 Distributed

1 Install the ClickHouse service on every node of the cluster
2 Open the listen address in the configuration: <listen_host>::</listen_host>
service clickhouse-server restart 
clickhouse-client -h ck01     # the HTTP interface listens on port 8123
3 Configure ZooKeeper so that it starts normally

You can also issue queries from the browser at http://ck01:8123/play

A cluster is the foundation of replication and sharding; it extends ClickHouse's service topology from a single node to multiple nodes, but it does not require all nodes to form one big cluster as some systems in the Hadoop ecosystem do. ClickHouse's cluster configuration is very flexible: users can either group all nodes into a single cluster or divide the nodes into several smaller clusters according to business requirements, and the number of nodes, shards and replicas can differ between these smaller clusters.


Viewed from the functional level, the main purpose of replicas is to prevent data loss and add storage redundancy, while the main purpose of shards is to split the data horizontally.

The naming convention for MergeTree has been explained earlier. Adding the prefix Replicated to *MergeTree gives a new family of engine variants: the ReplicatedMergeTree replicated tables.



Replication can only be applied when a ReplicatedMergeTree family engine is used (another replica implementation is described later). Put more directly, a data table becomes a replica only by using ReplicatedMergeTree. ReplicatedMergeTree is derived from MergeTree and adds distributed coordination capabilities on top of it.


In MergeTree, a data partition passes through two types of storage areas from the beginning of its creation to its full completion.

(1) Memory: Data is first written to a memory buffer.

(2) Local disk: The data will then be written to the tmp temporary directory partition, and then the temporary directory will be renamed to the official partition when it is all done.

ReplicatedMergeTree adds a ZooKeeper part on top of this: it creates a series of listener nodes in ZooKeeper, through which multiple instances communicate with each other. During this communication, ZooKeeper never transfers the table data itself.

  • Dependency on ZooKeeper: When executing INSERT and ALTER queries, ReplicatedMergeTree requires the distributed collaboration capabilities of ZooKeeper to synchronize between multiple replicas. However, ZooKeeper is not required for querying replicas, which will be covered in more detail later.
  • Table-level replicas: Replicas are defined at the table level, so each table's replica configuration can be personalized according to its actual needs, including the number of replicas, and the replica's distribution location within the cluster.
  • Multi Master: INSERT and ALTER queries can be executed on any of the replicas and they have the same effect. These operations are distributed to each replica for local execution with the help of ZooKeeper's collaboration capabilities.
  • Block: When you write data with the INSERT command, the data will be divided into several blocks according to the size of max_insert_block_size (default 1048576 rows). Therefore, block data is the basic unit of data writing, and has the atomicity and uniqueness of writing.
  • Atomicity: at the time of data write, either all the data within a Block block is written successfully or all of it fails.
  • Uniqueness: When writing a Block data block, a hash digest is calculated and recorded according to the data order, data rows and data size of the current Block data block. After that, if a Block to be written has the same Hash digest (same data order, data size, and data rows within the Block) as a previously written Block, the Block is ignored. This design prevents the problem of duplicate block writes caused by anomalies.

7.0 Sharding Concepts

By introducing replicas, the risk of data loss is reduced (multiple copies are stored) and query performance improves (queries can be spread out and reads separated from writes), but one problem remains unsolved: the capacity of the data table. So far, each replica still stores the full data of the table, so for very large workloads replicas alone cannot solve the performance bottleneck of a single table. To solve this fundamentally, the data needs to be split horizontally, which is where sharding comes in. Each ClickHouse service node can be called a shard. In theory, if N (N>=1) data tables A are distributed over N ClickHouse nodes and do not overlap, table A can be said to have N shards. In practice, however, having only these shard tables does not give a usable sharding scheme: a complete scheme must also consider how data is written evenly to all shards, and how queries are routed to all shards and combined into one result set.

In ClickHouse, data sharding therefore needs to be used in conjunction with the Distributed table engine.


The Distributed table engine does not store any data itself; it acts as a transparent proxy layer over the shard tables and automatically handles writing, distributing and routing queries across the cluster.

7.1 Configuring zookeeper

The location of the ZK needs to be configured on each CK's node

ClickHouse uses a set of zookeeper tags to define the relevant configuration, which by default is defined in the global configuration. However, the Zookeeper configuration used by each replica is usually the same, and in order to make it easier to replicate the configuration file across multiple nodes, it is more common to extract this part of the configuration and save it independently using a single file.

First, configure the file


 <zookeeper-servers> 
 <node index="1"> 
 <host>doit01</host>
 <port>2181</port>
 </node>
  <node index="2"> 
 <host>doit02</host>
 <port>2181</port>
 </node>
  <node index="3"> 
 <host>doit03</host>
 <port>2181</port>
 </node>
 </zookeeper-servers>


The incl names of the nodes and the configuration file must correspond to each other. At this point, the whole configuration process is complete.

ClickHouse rather thoughtfully provides a proxy table named zookeeper among its system tables. Through this table, data inside the remote ZooKeeper can be read with SQL queries. Note that the query must always specify the path condition.

Synchronize configuration files to other cluster nodes!!!!

scp    linux02:$PWD 
scp    linux03:$PWD
Restart the service.

7.2 Creating a replica table

Before you can create a replica table, you need to start the zookeeper in the cluster first

Replication brings two benefits: first, the risk of data loss is reduced because the data is stored redundantly; second, because replicas use a multi-master architecture, every replica instance can serve as an entry point for reads and writes, which spreads the load across nodes.

When using the replica feature alone, we do not need to configure a ClickHouse cluster in order to store multiple copies of the data; we only need to specify the ReplicatedMergeTree engine and the ZooKeeper path when creating the table.

ENGINE = ReplicatedMergeTree('zk_path', 'replica_name') 

-- /clickhouse/tables/{shard}/table_name

-- /clickhouse/tables/ is an agreed path-fixing prefix that indicates the root path where the data table is stored.

-{shard} denotes the slice number, which is usually replaced by a numeric value, such as 01, 02, 03. A data table can have multiple slices, and each slice has its own replica.

-table_name denotes the name of the data table, which is usually the same as the name of the physical table for ease of maintenance (although ClickHouse does not mandate that the table name in the path be the same as the physical table name); and replica_name serves to define the name of the replica created in ZooKeeper, which uniquely distinguishes between different replica instances. One convention for naming is to use the domain name of the server where it resides.

For zk_path, different copies of the same slice of the same data table should define the same path, while for replica_name, different copies of the same slice of the same data table should define different names.

1) One slice, multiple replica tables

-- lixnu01 machine
create table tb_demo1 (
    id Int8 ,
    name String)engine=ReplicatedMergeTree('/clickhouse/tables/01/tb_demo1', 'linux01') 
order by id ;
-- lixnu02 machine
create table tb_demo1 (
    id Int8 ,
    name String)engine=ReplicatedMergeTree('/clickhouse/tables/01/tb_demo1', 'linux02') 
order by id ;
-- lixnu03 machine
create table tb_demo1 (
    id Int8 ,
    name String)engine=ReplicatedMergeTree('/clickhouse/tables/01/tb_demo1', 'linux03') 
order by id ;

Viewing the contents of zookeeper

[zk: localhost:2181(CONNECTED) 0] ls /
[a, zookeeper, clickhouse, DNS, datanode1, server1, hbase]
[zk: localhost:2181(CONNECTED) 1] ls /clickhouse
[tables, task_queue]
[zk: localhost:2181(CONNECTED) 2] ls /clickhouse/tables
[01]
[zk: localhost:2181(CONNECTED) 3] ls /clickhouse/tables/01
[tb_demo1]
[zk: localhost:2181(CONNECTED) 4] ls /clickhouse/tables/01/tb_demo1
[metadata, temp, mutations, log, leader_election, columns, blocks, nonincrement_block_numbers, replicas, quorum, block_numbers]
[zk: localhost:2181(CONNECTED) 5] ls /clickhouse/tables/01/tb_demo1/replicas
[linux02, linux03, linux01]
SELECT *
FROM system.zookeeper
WHERE path = '/' ;
Insert data on any node and it is automatically synchronized to all the other replicas.
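A quick check, as a sketch (run the insert on one node and the select on another):

-- on linux01
insert into tb_demo1 values (1, 'a'), (2, 'b');
-- on linux02 or linux03: the rows appear after replication
select * from tb_demo1;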

2) Two shards: one with a replica and one without

-- linux01 machine (shard 01, replica 1)
create table tb_demo2 (
    id Int8,
    name String
) engine = ReplicatedMergeTree('/clickhouse/tables/01/tb_demo2', 'linux01')
order by id;
-- linux02 machine (shard 01, replica 2)
create table tb_demo2 (
    id Int8,
    name String
) engine = ReplicatedMergeTree('/clickhouse/tables/01/tb_demo2', 'linux02')
order by id;
-- linux03 machine (shard 02, single replica)
create table tb_demo2 (
    id Int8,
    name String
) engine = ReplicatedMergeTree('/clickhouse/tables/02/tb_demo2', 'linux03')
order by id;

-------------------
-- Variant with a fourth node: shard 02 also gets a replica

-- linux01 machine
create table tb_demo2 (
    id Int8,
    name String
) engine = ReplicatedMergeTree('/clickhouse/tables/01/tb_demo2', 'linux01')
order by id;
-- linux02 machine
create table tb_demo2 (
    id Int8,
    name String
) engine = ReplicatedMergeTree('/clickhouse/tables/01/tb_demo2', 'linux02')
order by id;
-- linux03 machine
create table tb_demo2 (
    id Int8,
    name String
) engine = ReplicatedMergeTree('/clickhouse/tables/02/tb_demo2', 'linux03')
order by id;
-- linux04 machine
create table tb_demo2 (
    id Int8,
    name String
) engine = ReplicatedMergeTree('/clickhouse/tables/02/tb_demo2', 'linux04')
order by id;
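To confirm how the replicas registered themselves, the system.replicas system table can be queried on any node; a minimal sketch:

SELECT database, table, replica_name, zookeeper_path
FROM system.replicas
WHERE table = 'tb_demo2';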


7.3 Distributed Engines

The Distributed table engine is what is usually meant by a "distributed table". It stores no data itself but acts as a transparent proxy over the shards, automatically routing data to the nodes of the cluster, so it always works together with other table engines.


Distributed tables are generally used for two purposes:

  • When a table has multiple replicas and receives a large number of concurrent operations, a distributed table can spread the request load across the replicas.

  • When a table is very large and is split into multiple shards, each shard can additionally store replicas of its data.

  • Local table: usually named with the _local suffix. The local table is the actual carrier of the data and can use any table engine other than Distributed; one local table corresponds to one data shard.

  • Distributed table: usually named with the _all suffix. A distributed table can only use the Distributed engine. It forms a one-to-many mapping with the local tables, and subsequent operations on the local tables go through the distributed-table proxy.

ENGINE = Distributed(cluster, database, table [,sharding_key]) 
  • cluster: cluster name, corresponding to the customized name in the cluster configuration. During writes and queries to distributed tables, it will use the cluster configuration information to find the corresponding host node.

  • database and table: correspond to the names of the database and table, respectively. Distributed tables are mapped to local tables using this set of configurations.

  • sharding_key: sharding key, optional parameter. During the data writing process, the distributed table will distribute the data to the local table of each host node based on the rules of sharding key.

7.3.1 No replicas

In this example, a cluster named cluster1 is defined with three machines (linux01, linux02 and linux03) and no replicas. A table built on this cluster has three shards, and its data is not stored redundantly.

<clickhouse_remote_servers>
    <cluster1>
        <!-- cluster1: every table has three shards, one per node (linux01, linux02, linux03), no replicas -->
        <shard>
            <replica>
                <host>linux01</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>linux02</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>linux03</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster1>
    <cluster2>
        <!-- cluster2: one shard with three replicas -->
        <shard>
            <replica>
                <host>linux01</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>linux02</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>linux03</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster2>
    <!-- cluster3: multiple shards, each with replicas; note that a host may only be used once -->
    <cluster3>
        <shard>
            <replica>
                <host>doit01</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>doit02</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>doit03</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>doit04</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster3>
</clickhouse_remote_servers>
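Once this file has been synchronized to every node, the defined clusters can be inspected from SQL through the system.clusters table; a minimal sketch (the exact column set varies slightly between versions):

SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters;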
Synchronize the configuration files across the cluster, then create the tables:
-- Create the local table on every node
create table tb_demo3 on cluster cluster1 (
    id Int8,
    name String
) engine = MergeTree()
order by id;
-- Create the distributed table
create table demo3_all on cluster cluster1 as tb_demo3
engine = Distributed('cluster1', 'default', 'tb_demo3', id);
-- Data inserted into the distributed table is routed to the shards according to the sharding key
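As a sketch of how writes fan out (using the demo table above):

-- Write through the distributed table; rows are routed to the shards by id
insert into demo3_all values (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');

-- Query through the distributed table: results are gathered from all shards
select * from demo3_all;

-- Query a local table on one node: only that shard's rows are returned
select * from tb_demo3;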
7.3.2 Configuration with replicas
<!-- cluster2: the tables in this cluster have two shards, and shard 1 has two replicas -->
<cluster2>
    <shard>
        <replica>
            <host>linux01</host>
            <port>9000</port>
        </replica>
        <replica>
            <host>linux02</host>
            <port>9000</port>
        </replica>
    </shard>
    <shard>
        <replica>
            <host>linux03</host>
            <port>9000</port>
        </replica>
    </shard>
</cluster2>
-- Create the local table on every node
create table tb_demo4 on cluster cluster2 (
    id Int8,
    name String
) engine = MergeTree()
order by id;
-- Create the distributed table
create table demo4_all on cluster cluster2 as tb_demo4
engine = Distributed('cluster2', 'default', 'tb_demo4', id);

To summarize:
1) When the data has only one shard, replicas alone are enough; a distributed table is not required.
2) When the data has multiple shards, use a distributed table to route the data to the shards.

Typical layouts are tables with multiple shards, tables with multiple replicas, and tables with multiple shards where each shard also has replicas. Note that a node can only be used once within a cluster definition.

7.4 Distributed DDL

ClickHouse supports cluster mode; a cluster consists of one or more nodes. The DDL statements CREATE, ALTER, DROP, RENAME and TRUNCATE all support distributed execution: a DDL statement executed on any node of the cluster is executed by every node in the cluster, in the same order, so there is no need to log in to each node and run the DDL one by one. Turning an ordinary DDL statement into a distributed one only requires adding the ON CLUSTER cluster_name clause. For example, the following statements are broadcast to all nodes of the cluster1 cluster:

-- Create a table on cluster cluster1
create table tb_demo3 on cluster cluster1 (
    id Int8,
    name String
) engine = MergeTree()
order by id;
-- Drop the table on every node of the cluster (local or distributed)
drop table if exists tb_demo3 on cluster cluster1;
-- Modify a table's structure across the cluster
alter table t3 on cluster cluster1 add column age Int8;
-- Drop a column across the cluster
alter table t3 on cluster cluster1 drop column age;
-- Partitions can be dropped across the cluster in the same way (ALTER TABLE ... DROP PARTITION ...)

7.5 Principles of Distributed Collaboration

The core processes of replica collaboration are INSERT, MERGE, MUTATION and ALTER, corresponding to data writing, partition merging, data mutation and metadata modification respectively. INSERT and ALTER queries are executed in a distributed manner: using ZooKeeper's event-notification mechanism, multiple replicas coordinate with each other automatically, but ZooKeeper is not used to store any table data itself. Other queries, such as SELECT, CREATE, DROP, RENAME and ATTACH, are not executed in a distributed way; for example, to create multiple replicas we still have to log in to each ClickHouse node separately. The working mechanism of each of these processes is described in turn below.

7.5.1 Principle of insert

(figure omitted: INSERT replication flow)

7.5.2 Merge Principle

Regardless of which replica initiates the MERGE operation, the merge plan is made by the primary (leader) replica, just as with INSERT.

(figure omitted: MERGE coordination flow)

7.5.3 Principles of mutation

alter table x update name = 'zss' where ...

alter table x delete where ...

When a ReplicatedMergeTree executes ALTER DELETE or ALTER UPDATE, it enters the MUTATION logic; the core flow is shown in the figure below.

(figure omitted: MUTATION flow)
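As a concrete sketch, reusing the tb_demo1 replica table created earlier:

-- rewrite the name of the row with id = 1 (an asynchronous mutation)
alter table tb_demo1 update name = 'zss' where id = 1;

-- delete the row with id = 2
alter table tb_demo1 delete where id = 2;

-- mutations are asynchronous; their progress can be checked in system.mutations
select table, mutation_id, command, is_done from system.mutations;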

7.5.4 Principle of alter

When an ALTER operation is performed on a ReplicatedMergeTree to change its metadata, for example adding or dropping table columns, it enters the ALTER logic.

(figure omitted: ALTER metadata-change flow)

VIII Applications

1 Users and permissions

Add the user configuration in users.xml:

<?xml version="1.0"?>
<yandex>
    <profiles>
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
        </default>
        <readonly>
            <readonly>1</readonly>
        </readonly>
    </profiles>
    <users>
        <default>
            <password></password>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
        <!-- Define a user named hangge whose password is stored as a SHA-256 hash -->
        <hangge>
            <password_sha256_hex>f493c8a7a3c37088731336766459cc37e4b094e95b918038726660cc42013fcd</password_sha256_hex>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </hangge>
    </users>
    <quotas>
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
</yandex>

Plaintext password

An empty <password></password> element means the user has no password.

SHA256 encryption:

When using the SHA256 encryption algorithm, the password needs to be defined via the password_sha256_hex tag

[root@ck1 ~]#  echo -n hangge | openssl dgst -sha256    

(stdin)= f493c8a7a3c37088731336766459cc37e4b094e95b918038726660cc42013fcd
<hangge>
    <password_sha256_hex>f493c8a7a3c37088731336766459cc37e4b094e95b918038726660cc42013fcd</password_sha256_hex>
    <networks incl="networks" replace="replace">
        <ip>::/0</ip>
    </networks>
    <profile>default</profile>
    <quota>default</quota>
</hangge>

double_sha1 encryption:

When using the double_sha1 encryption algorithm, the password needs to be defined with the password_double_sha1_hex tag.

<password_double_sha1_hex>23ae809ddacaf96af0fd78ed04b6a265e05aa257</password_double_sha1_hex>

# echo -n 123 | openssl dgst -sha1 -binary | openssl dgst -sha1
(stdin)= 23ae809ddacaf96af0fd78ed04b6a265e05aa257

User privilege control

<hangge>
<password_sha256_hex>60cd41aedc4e47e8883682b416109e7b7e345e15decc63c2c98ecdab5e8e053a</password_sha256_hex>
    <networks incl="networks" />
    <profile>readonly</profile>   
    <quota>default</quota>
    <allow_databases>
        <database>default</database>
    </allow_databases>
</hangge>
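After the configuration is reloaded, you can connect as this user and verify the effective account; a minimal sketch (the readonly profile above will also cause write statements to be rejected):

SELECT currentUser();

-- with <profile>readonly</profile>, statements such as the following are rejected:
-- insert into tb_demo1 values (9, 'x');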



3 JDBC and client-side tools

3.1 JDBC
 <!-- /artifact//clickhouse-jdbc -->
    <dependency>
        <groupId>ru.yandex.clickhouse</groupId>
        <artifactId>clickhouse-jdbc</artifactId>
        <version>0.2.4</version>
    </dependency>
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Demo1 {

    public static void main(String[] args) throws Exception {
        // Load the ClickHouse JDBC driver
        Class.forName("ru.yandex.clickhouse.ClickHouseDriver");
        String url = "jdbc:clickhouse://linux01:8123/default";
        String username = "default";
        String password = "";
        Connection con = DriverManager.getConnection(url, username, password);
        Statement stmt = con.createStatement();
        ResultSet resultSet = stmt.executeQuery("select * from tb_demo1");
        while (resultSet.next()) {
            int id = resultSet.getInt("id");
            String name = resultSet.getString("name");
            System.out.println(id + ":" + name);
        }
        // Close resources in reverse order of creation
        resultSet.close();
        stmt.close();
        con.close();
    }
}

The driver also supports a high-availability mode in which multiple host addresses can be configured; each connection is made to one of the available addresses, chosen at random. In high-availability mode the connection is obtained through a BalancedClickhouseDataSource object.

    public static void main(String[] args) throws Exception {
        // Load the ClickHouse JDBC driver
        Class.forName("ru.yandex.clickhouse.ClickHouseDriver");
        // URL listing all candidate hosts
        String url = "jdbc:clickhouse://linux01:8123,linux02:8123,linux03:8123/default";
        // Set JDBC parameters
        ClickHouseProperties clickHouseProperties = new ClickHouseProperties();
        clickHouseProperties.setUser("default");

        // Declare the data source
        BalancedClickhouseDataSource balanced = new BalancedClickhouseDataSource(url, clickHouseProperties);
        // Ping each host to exclude unavailable (dead) connections
        balanced.actualize();
        // Get a JDBC connection
        Connection con = balanced.getConnection();
        Statement stmt = con.createStatement();
        ResultSet resultSet = stmt.executeQuery("select * from demo3_all");
        while (resultSet.next()) {
            int id = resultSet.getInt("id");
            String name = resultSet.getString("name");
            System.out.println(id + ":" + name);
        }

        // Close resources in reverse order of creation
        resultSet.close();
        stmt.close();
        con.close();
    }

3.2 Client tools: DBeaver


A browser-based visualization tool is also available; its login page is at /#!/login.


4 User behavior analysis

4.1 The windowFunnel function

Signature: windowFunnel(window)(time, cond1, cond2, ...). The first parameter is the size of the sliding window, expressed in the unit of the time column (typically seconds); it is followed by the time column and the chain of event conditions.

uid1 event1 1551398404
uid1 event2 1551398406
uid1 event3 1551398408
uid2 event2 1551398412
uid2 event3 1551398415
uid3 event3 1551398410
uid3 event4 1551398413
-- table building
drop table if exists test_funnel ;
CREATE TABLE test_funnel(
    uid String, 
    eventid String, 
    eventTime UInt64) 
ENGINE = MergeTree 
ORDER BY (uid, eventTime) ;
-- Import of data
insert into test_funnel values
('uid1','event1',1551398404),
('uid1','event2',1551398406),
('uid1','event3',1551398408),
('uid2','event2',1551398412),
('uid2','event3',1551398415),
('uid3','event3',1551398410),
('uid3','event4',1551398413);
-- View data
┌─uid──┬─eventid─┬──eventTime─┐
│ uid1 │ event1  │ 1551398404 │
│ uid1 │ event2  │ 1551398406 │
│ uid1 │ event3  │ 1551398408 │
│ uid2 │ event2  │ 1551398412 │
│ uid2 │ event3  │ 1551398415 │
│ uid3 │ event3  │ 1551398410 │
│ uid3 │ event4  │ 1551398413 │
└──────┴─────────┴────────────┘

select
uid ,
windowFunnel(4)(
toDateTime(eventTime),
eventid='event1' ,
eventid='event2' ,
eventid='event3' 
) as funnel
from
test_funnel 
group by uid ;
┌─uid──┬─funnel─┐
│ uid3 │      0 │
│ uid1 │      3 │
│ uid2 │      0 │
└──────┴────────┘
select
uid ,
windowFunnel(4)(
toDateTime(eventTime),
eventid='event2' ,
eventid='event3' 
) as funnel
from
test_funnel 
group by uid ;
┌─uid──┬─funnel─┐
│ uid3 │      0 │
│ uid1 │      2 │
│ uid2 │      2 │
└──────┴────────┘
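To turn the per-user funnel level into a funnel report, the result can be aggregated again; a sketch using the same demo table:

SELECT funnel_level, count() AS users
FROM
(
    SELECT
        uid,
        windowFunnel(4)(
            toDateTime(eventTime),
            eventid = 'event1',
            eventid = 'event2',
            eventid = 'event3'
        ) AS funnel_level
    FROM test_funnel
    GROUP BY uid
)
GROUP BY funnel_level
ORDER BY funnel_level;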

Example

Create the table and import the data:
clickhouse-client -q 'insert into test1.ods_log format JSONAsString'  < event.log 

drop table if exists test_log ;
create table test_log engine=MergeTree() order by (id,ts)
as
with visitParamExtractUInt(line,'timeStamp') as ts ,
visitParamExtractString(line ,'account')as account,
visitParamExtractString(line ,'deviceId')as deviceId,
visitParamExtractString(line ,'sessionId')as sessionId,
visitParamExtractString(line ,'ip')as ip,
visitParamExtractString(line ,'eventId')as eventId,
visitParamExtractRaw(line ,'properties')as properties
select 
if(account='' , deviceId , account) id ,
account ,
deviceId,
sessionId,
ip,
eventId,
properties,
ts
from
tb_ods_log  ;
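A quick sanity check on the imported events (a sketch; the event names depend on the contents of the log file):

select eventId, count() as cnt
from test_log
group by eventId
order by cnt desc
limit 10;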
select
id ,
windowFunnel(100000)(
    toDateTime(ts),
    eventId='productView' ,
    eventId='adClick' ,
    eventId='productView' ,
    eventId='collect' 
) as funnel
from
test_log 
group by  id ;

4.2 sequenceCount

sequenceCount counts how many times the specified chain of events is matched.

SELECT
    id,
    sequenceCount('(?1)')(
        FROM_UNIXTIME(ts),
        eventId = 'adShow',
        eventId = 'productView',
        eventId = 'collect',
        eventId = 'addCart'
    ) AS cnt
FROM test_log
GROUP BY id
HAVING id = '0T7136zA3BZI';

4.3 sequenceMatch

This family of functions takes a pattern string, a time column and a chain of event conditions (up to 32 events). The pattern string supports three kinds of syntax:
(?N): refers to the N-th event condition in the chain, counting from 1. In the SQL below, for example, (?2) refers to the second condition, eventId = 'productView'.
(?t op secs): placed between two events, it constrains the time (in seconds) between their occurrences. For example, (?1)(?t<=10)(?2) requires events 1 and 2 to occur within 10 seconds of each other.
.*: matches any number of unspecified events.

SELECT
    id,
    sequenceMatch('(?1)(?t<=10)(?2)(?3).*(?4)')(
        FROM_UNIXTIME(ts),
        eventId = 'adClick',
        eventId = 'productView',
        eventId = 'collect',
        eventId = 'addCart'
    ) AS is_match
FROM test_log
GROUP BY id
HAVING id = 'dGHDHV7WOrpJ';


SELECT
    id,
    sequenceCount('(?1)(?t<=10)(?2)(?3).*(?4)')(
        FROM_UNIXTIME(ts),
        eventId = 'adClick',
        eventId = 'productView',
        eventId = 'collect',
        eventId = 'addCart'
    ) AS cnt
FROM test_log
GROUP BY id
HAVING id IN ('dGHDHV7WOrpJ', '0T7136zA3BZI');


select id, eventId from test_log
where id = 'dGHDHV7WOrpJ';