Database transactions, global locks, table locks, and row locks: a detailed explanation

Table of contents

I. What is a transaction

II. Four major characteristics of transactions (ACID)

III. Transaction isolation level

IV. Database lock mechanism

1. What is a database lock?

2. Database lock classification

3. Global lock

4. Table-level lock

4.1. Table-level lock classification

4.2. Table lock

4.3. Metadata lock

4.4. Intention lock

5. Row-level lock

5.1. Classification of row-level locks

5.2. Shared and exclusive locks

5.3. Check whether a row lock was successfully acquired with the following statement

5.4. Gap locks and next-key locks


I. What is a transaction

A database transaction is a logical unit of work in a database management system, consisting of a series of operations on data. Transactions are an important mechanism by which databases maintain data consistency, integrity, and atomicity. Their key properties are commonly known as the ACID properties.

II. Four major characteristics of transactions (ACID)

  1. Atomicity: All operations in a transaction either all complete or none do; execution never stops partway. If any operation in the transaction fails, the entire transaction is rolled back to its starting state, as if it had never happened.

  2. Consistency: A transaction must move the database from one consistent state to another. The database's integrity constraints must hold both before the transaction begins and after it ends.

  3. Isolation: Concurrently executing transactions do not affect each other. Database systems usually provide different isolation levels to balance performance against isolation, preventing problems such as dirty reads, non-repeatable reads, and phantom reads.

  4. Durability: Once a transaction is committed, its modifications are saved permanently in the database and are not lost even if the system fails.

III. Transaction isolation level

  • Read-uncommitted

At this isolation level, all transactions can see the execution results of other, uncommitted transactions. It is rarely used in practice because its performance is not much better than that of the other levels. Reading uncommitted data is known as a dirty read (Dirty Read).

A: Start a transaction; the data is in its initial state
B: Start a transaction and update the data, but do not commit
A: Read the data again and find it has been modified; this is the so-called "dirty read"
B: Roll back the transaction
A: Read the data again and find it has returned to the initial state
From the above experiment we can conclude: transaction B updated a record without committing, yet transaction A could query the uncommitted record. This causes dirty reads. Read uncommitted is the lowest isolation level.
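The experiment above can be sketched as two MySQL sessions. This is a minimal illustration, assuming a hypothetical `account` table with columns `id` and `balance`:

```sql
-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
SELECT balance FROM account WHERE id = 1;   -- initial value

-- Session B
START TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE id = 1;  -- not committed

-- Session A: sees B's uncommitted change (dirty read)
SELECT balance FROM account WHERE id = 1;

-- Session B
ROLLBACK;

-- Session A: the value is back to its initial state
SELECT balance FROM account WHERE id = 1;
```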

  • Read-committed

This is the default isolation level for most database systems (but not MySQL's default). It satisfies the simple definition of isolation: a transaction sees only changes made by transactions that have already committed. This level still permits so-called non-repeatable reads (Nonrepeatable Read): another transaction may commit new changes while this one is running, so the same SELECT may return different results.

A: Start a transaction; the data is in its initial state
B: Start a transaction and update the data, but do not commit
A: Read the data again and find it has not been modified
B: Commit the transaction
A: Read the data again and find it has changed, meaning the modification committed by B was read inside A's transaction; this is the so-called "non-repeatable read"
From the above experiment we can conclude: the read-committed isolation level solves the dirty-read problem but still allows non-repeatable reads. That is, transaction A gets inconsistent data across its two queries because transaction B updated the data between them. Read committed only allows committed records to be read; it does not guarantee repeatable reads.
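The same two-session pattern demonstrates the non-repeatable read; again a sketch using the hypothetical `account` table:

```sql
-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT balance FROM account WHERE id = 1;   -- first read

-- Session B
START TRANSACTION;
UPDATE account SET balance = 500 WHERE id = 1;
COMMIT;

-- Session A: same query, different result inside one transaction
-- (non-repeatable read)
SELECT balance FROM account WHERE id = 1;
COMMIT;
```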

  • Repeatable-read

This is MySQL's default transaction isolation level. It ensures that multiple reads within the same transaction see the same rows, even when run concurrently with other transactions. In theory, however, it can lead to another thorny problem: phantom reads (Phantom Read). Simply put, a phantom read occurs when a transaction reads rows in some range and another transaction inserts a new row into that range; when the first transaction reads the range again, it finds a new "phantom" row. The InnoDB and Falcon storage engines address this problem through the multiversion concurrency control (MVCC) mechanism.

A: Start a transaction; the data is in its initial state
B: Start a transaction and update the data, but do not commit
A: Read the data again and find it has not been modified
B: Commit the transaction
A: Read the data again and find it has not changed, meaning the read is repeatable this time
B: Insert a new row and commit
A: Read the data again and find it still has not changed; the read is repeatable, but it is no longer the latest data; this is where the so-called "phantom read" comes in
A: Commit this transaction, read the data again, and find the read is now normal
From the above experiment we can conclude: the repeatable-read isolation level only allows committed records to be read, and between two reads of the same record by one transaction, updates committed by other transactions are not visible to it. However, this transaction is not required to be serialized with other transactions. For example, when a transaction can see records inserted by a committed transaction, phantom reads may occur (note: *may*, because databases implement isolation levels differently). As in the experiment above, InnoDB did not actually exhibit a phantom read here.
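A sketch of the repeatable-read experiment, again with the hypothetical `account` table:

```sql
-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT * FROM account WHERE id = 1;   -- establishes the read snapshot

-- Session B
UPDATE account SET balance = 500 WHERE id = 1;
COMMIT;

-- Session A: same result as the first read (repeatable, via MVCC)
SELECT * FROM account WHERE id = 1;

-- Session B
INSERT INTO account(id, balance) VALUES (2, 100);
COMMIT;

-- Session A: a plain SELECT still does not see the new row;
-- a locking read or an UPDATE, however, would act on the latest data
SELECT * FROM account;
COMMIT;
```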

  • Serializable

This is the highest isolation level. It forces transactions to execute serially, avoiding the phantom-read phenomenon described above. Simply put, it takes a lock on every row of data it reads, so it may cause many timeouts and heavy lock contention.

A: Start a transaction; the data is in its initial state
B: Try to insert a record and find that B enters a waiting state; the reason is that A's transaction has not yet committed, so B can only wait (B may hit a lock-wait timeout here)
A: Commit the transaction
B: Find that the insert now succeeds
SERIALIZABLE fully locks the data involved: if another transaction queries the same data, it must wait until the previous transaction completes and releases its locks.
It is a completely isolated level; it locks the corresponding data in the table and therefore has efficiency problems.
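A minimal sketch of the blocking behaviour, assuming the hypothetical `account` table:

```sql
-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT * FROM account WHERE id = 1;   -- a plain SELECT takes shared locks here

-- Session B: a write against the same data blocks until A commits
UPDATE account SET balance = 0 WHERE id = 1;   -- waits (may time out)

-- Session A
COMMIT;   -- session B's UPDATE now proceeds
```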

IV. Database lock mechanism

1. What is a database lock?

  • A lock is a mechanism by which a computer coordinates concurrent access to a resource by multiple processes or threads.
  • In a database, besides the contention for traditional computing resources (CPU, RAM, I/O), the data itself is a resource shared by many users. How to guarantee the consistency and validity of concurrent data access is a problem every database must solve, and lock conflicts are an important factor affecting the performance of concurrent database access.

2. Database lock classification

  • Global lock: locks every table in the database.
  • Table-level lock: each operation locks the entire table.
  • Row-level lock: each operation locks the corresponding rows of data.

3. Global lock

A global lock locks the entire database instance. Once it is taken, the whole instance is in a read-only state: subsequent DML statements, DDL statements, and commits of transactions that performed updates are all blocked.

3.1 Application scenarios:

  • When making a logical backup of the whole database, lock all tables to obtain a consistent view and guarantee data integrity.
  • Without the global lock, backup and business update operations would interleave, and the backed-up data could be inconsistent.

3.2 The process of using global locks to perform logical backup of database:

  • Acquire the global lock:
flush tables with read lock;
  • Perform the backup with mysqldump, MySQL's data-backup tool.
  • Note: mysqldump is a command-line tool provided by MySQL, not a SQL statement; it must be run from the operating system's command line, not inside the MySQL client.
mysqldump -uroot -p123456 user>
  • After locking, DML and DDL are blocked and other clients cannot write data, but DQL can still execute, so other clients can read data.
  • Once the backup is finished and the backup file has been produced, release the lock:
unlock tables;

3.3 Benefits of global lock:

  • Ensure the integrity of the data.

3.4 Disadvantages of global lock:

  • The granularity is very large. If the backup runs on the master, no updates can be executed for the duration of the backup, and the business essentially grinds to a halt.
  • If the database is not a standalone instance but a master-slave setup with read/write splitting, backing up on a slave does not affect reads and writes on the master. However, during the backup the slave cannot apply the binary logs (binlog) replicated from the master, which causes master-slave replication lag.

3.5 Other ways to implement consistent data backup:

With the InnoDB engine, you can add the --single-transaction parameter to the backup command to obtain a consistent, lock-free backup. Under the hood this is implemented through snapshot reads (MVCC).

4. Table-level lock

Table-level locks, as the name suggests, lock the entire table on each operation. They are used in storage engines such as MyISAM, InnoDB, and BDB.

4.1. Table-level lock classification

  • Table lock
  • Metadata lock
  • Intention lock

4.2. Table lock

Table lock classification:

  • Table sharing read lock (abbreviated as: read lock)
  • Table exclusive write lock (abbreviated as: write lock)

Locking syntax:

lock tables tb1, tb2 ... read / write;

Syntax for releasing locks:

unlock tables;
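The read/write asymmetry can be sketched with two sessions; `score` is a hypothetical table used only for illustration:

```sql
-- Session A: take a table-shared read lock
LOCK TABLES score READ;
SELECT * FROM score;        -- OK: the lock holder may read
-- UPDATE score SET ...     -- would fail: the read-lock holder cannot write

-- Session B: reads succeed; writes block until A releases the lock
SELECT * FROM score;
-- UPDATE score SET math = 90 WHERE id = 1;   -- blocks

-- Session A
UNLOCK TABLES;              -- session B's write can now proceed
```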

4.3. Metadata Lock

  • The metadata lock (MDL) is applied automatically by the system; it is acquired whenever a table is accessed.
  • Metadata can be understood simply as the table structure. The purpose of the metadata lock is to keep the table structure consistent, avoiding conflicts between DML and DDL and ensuring correct reads and writes.
  • MDL was introduced in MySQL 5.5. When inserting, updating, deleting, or selecting on a table, an MDL shared read or shared write lock is taken (SHARED_READ / SHARED_WRITE); when altering the table structure, an MDL exclusive lock is taken (EXCLUSIVE).
  • Shared locks are compatible with one another, so reads and writes can proceed concurrently; shared locks and exclusive locks are mutually exclusive, so the table structure cannot be altered while the table is being read or written.
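The DML/DDL conflict can be sketched as follows; `user` is a hypothetical table:

```sql
-- Session A: an open transaction holds a shared MDL on `user`
BEGIN;
SELECT * FROM user;      -- acquires MDL SHARED_READ, held until commit/rollback

-- Session B: DDL needs an MDL EXCLUSIVE lock, so it blocks
ALTER TABLE user ADD COLUMN age INT;   -- waits for session A

-- Session A
COMMIT;                  -- releases the MDL; session B's ALTER now proceeds
```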

4.4. Intention lock

  • Intention locks were introduced in InnoDB to avoid conflicts, during DML execution, between the row locks taken by client A and a table lock requested by client B.
  • With intention locks, client B does not need to check row by row whether data is locked when it tries to take a table lock; it decides directly, from the presence and type of the intention lock, whether the table lock can be acquired, reducing the cost of table-lock checks.

Classification of intention locks:

  • Intention shared lock (IS): compatible with the table-level shared read lock (SHARED_READ), mutually exclusive with the table-level exclusive (write) lock. Taken by the statement select ... lock in share mode.
  • Intention exclusive lock (IX): mutually exclusive with both the table-level shared read lock (SHARED_READ) and the exclusive (write) lock. Intention locks are not mutually exclusive with one another. Taken by insert, update, delete, and select ... for update.

We can check whether the intention lock was successfully added through the following statement:

select object_schema,object_name,index_name,lock_type,lock_mode,lock_data from performance_schema.data_locks;
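A sketch of producing and inspecting an intention lock, using a hypothetical `user` table:

```sql
-- Session A: take a shared row lock, which also sets an IS intention lock
-- on the table
BEGIN;
SELECT * FROM user WHERE id = 1 LOCK IN SHARE MODE;

-- Session B: inspect the locks; the output should show a TABLE-level IS
-- lock plus a RECORD-level S lock for session A's transaction
SELECT object_schema, object_name, index_name, lock_type, lock_mode, lock_data
FROM performance_schema.data_locks;
```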

5. Row-level lock

  • Row-level lock: each operation locks the corresponding rows of data (and, where applicable, the gaps between rows); it has the smallest granularity and the highest degree of concurrency.
  • InnoDB organizes data by index, and row locks are implemented by locking index entries rather than locking the records themselves.

5.1. Classification of row-level locks

  • Record Lock: locks a single row record, preventing other transactions from updating or deleting it. Supported under both the READ COMMITTED and REPEATABLE READ isolation levels.
  • Gap Lock: locks the gap between index records (excluding the records themselves), keeping the gap unchanged and preventing other transactions from inserting into it, which would otherwise produce phantom reads. Supported under the REPEATABLE READ isolation level.
  • Next-Key Lock: a combination of a record lock and a gap lock; it locks the record itself and the gap in front of it at the same time. Supported under the REPEATABLE READ isolation level.

5.2. Shared and exclusive locks

  • Row locks come in two types: shared locks and exclusive locks.
  • Shared lock (S): allows a transaction to read a row and prevents other transactions from obtaining an exclusive lock (X) on the same data set. That is, shared locks are compatible with shared locks, while shared locks and exclusive locks are mutually exclusive.
  • Exclusive lock (X): allows the transaction that holds it to update the data, and prevents other transactions from obtaining either shared locks or exclusive locks on the same data set.

By default, InnoDB runs at the REPEATABLE READ transaction isolation level, and it uses next-key locks for searches and index scans to prevent phantom reads.

  • When searching on a unique index with an equality match against an existing record, the lock is automatically optimized to a record lock.
  • InnoDB's row locks are taken on index entries. If the filtered column has no index, the rows cannot be located through an index, so InnoDB locks all records in the table, and the lock escalates to a table lock.
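The escalation can be sketched with a hypothetical table `user(id INT PRIMARY KEY, name VARCHAR(20))` that has no index on `name`:

```sql
-- Session A
BEGIN;
UPDATE user SET name = 'alice' WHERE name = 'bob';
-- `name` has no index, so InnoDB locks every record:
-- effectively a table lock

-- Session B: blocks even though it touches a different row
UPDATE user SET name = 'carol' WHERE id = 99;   -- waits for session A

-- Adding an index restores row-level locking:
-- CREATE INDEX idx_user_name ON user(name);
```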

5.3. Check whether a row lock was successfully acquired with the following statement

select object_schema,object_name,index_name,lock_type,lock_mode,lock_data from performance_schema.data_locks;

5.4. Gap locks and next-key locks

  • By default, InnoDB runs at the REPEATABLE READ transaction isolation level, and it uses next-key locks for searches and index scans to prevent phantom reads.
  • The only purpose of a gap lock is to prevent other transactions from inserting into the gap. Gap locks can coexist: a gap lock taken by one transaction does not prevent another transaction from taking a gap lock on the same gap.
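A gap-lock sketch, assuming a hypothetical table `user(id INT PRIMARY KEY)` containing ids 1, 5, and 10:

```sql
-- Session A: an equality search on a non-existent key takes a gap lock
BEGIN;
SELECT * FROM user WHERE id = 7 FOR UPDATE;   -- gap lock on (5, 10)

-- Session B: inserting into the locked gap blocks
INSERT INTO user(id) VALUES (8);              -- waits for session A

-- But another gap lock on the same gap is allowed:
-- Session C
BEGIN;
SELECT * FROM user WHERE id = 6 FOR UPDATE;   -- succeeds; gap locks coexist
```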

Original blog: "Understand the 'lock' in the database in one article (detailed explanation with pictures and text)", Tencent Cloud Developer Community.