I'm running a report in MySQL. One of the queries involves inserting a large number of rows into a temp table. When I try to run it, I get this error:

Error code 1206: The number of locks exceeds the lock table size.

The queries in question are:

    create temporary table SkusBought (
        customerNum int(11),
        sku int(11),
        typedesc char(25),
        key `customerNum` (customerNum)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    insert into SkusBought
    select t1.*
    from (
        select customer, sku, typedesc from transactiondatatransit
        where (cat = 150 or cat = 151) and daysfrom07jan1 > 731
        group by customer, sku
        union
        select customer, sku, typedesc from transactiondatadelaware
        where (cat = 150 or cat = 151) and daysfrom07jan1 > 731
        group by customer, sku
        union
        select customer, sku, typedesc from transactiondataprestige
        where (cat = 150 or cat = 151) and daysfrom07jan1 > 731
        group by customer, sku
    ) t1
    join (select customernum from topThreetransit group by customernum) t2
        on t1.customer = t2.customernum;

I've read that changing the configuration file to increase the buffer pool size will help, but doing so did nothing. What would be the way to fix this, either as a temporary workaround or a permanent fix?

EDIT: changed part of the query. Shouldn't affect it, but I did a find-replace all and didn't realize it screwed that up. Doesn't affect the question.

EDIT 2: Added typedesc to t1. I changed it in the query but not here.


Best Answer


This issue can be resolved by setting a higher value for the MySQL variable innodb_buffer_pool_size. The default value of innodb_buffer_pool_size is 8,388,608 bytes (8 MB).

To change the value of innodb_buffer_pool_size, follow the steps below.

  1. Locate the my.cnf file on the server. On Linux servers it is usually at /etc/my.cnf
  2. Add the line innodb_buffer_pool_size=64M to this file, under the [mysqld] section (see the snippet after this list)
  3. Restart the MySQL server
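
For example, the relevant part of my.cnf would then look like this sketch (64M is just a starting point; size it to your data and machine):

    [mysqld]
    innodb_buffer_pool_size=64M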

To restart the MySQL server, you can use either of the two options below:

  1. service mysqld restart
  2. /etc/init.d/mysqld restart

Reference: The total number of locks exceeds the lock table size

I found another way to solve it: use a table lock. Of course, this can be inappropriate for your application, if you need to update the table at the same time.

See: Try using LOCK TABLES to lock the entire table, instead of the default action of InnoDB's MVCC row-level locking. If I'm not mistaken, the "lock table" refers to the InnoDB internal structure storing row and version identifiers for the MVCC implementation, with a bit identifying that the row is being modified in a statement, and with a table of 60 million rows it probably exceeds the memory allocated to it. The LOCK TABLES command should alleviate this problem by setting a table-level lock instead of row-level locks:

    SET @@AUTOCOMMIT=0;
    LOCK TABLES avgvol WRITE, volume READ;
    INSERT INTO avgvol(date, vol)
    SELECT date, AVG(vol)
    FROM volume
    GROUP BY date;
    UNLOCK TABLES;

Jay Pipes, Community Relations Manager, North America, MySQL Inc.

From the MySQL documentation (which, as I can see, you have already read):

1206 (ER_LOCK_TABLE_FULL)

The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size. Within an individual application, a workaround may be to break a large operation into smaller pieces. For example, if the error occurs for a large INSERT, perform several smaller INSERT operations.

If increasing innodb_buffer_pool_size doesn't help, then just follow the advice in the last sentence of that quote and split your INSERT into three. Skip the UNIONs and make three INSERTs, each with a JOIN to the topThreetransit table, along the lines of the sketch below.
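
A sketch of the first of those three statements, reusing the filters from the original query (the other two would swap in transactiondatadelaware and transactiondataprestige):

    INSERT INTO SkusBought (customerNum, sku, typedesc)
    SELECT t1.customer, t1.sku, t1.typedesc
    FROM (
        SELECT customer, sku, typedesc
        FROM transactiondatatransit
        WHERE (cat = 150 OR cat = 151) AND daysfrom07jan1 > 731
        GROUP BY customer, sku
    ) t1
    JOIN (SELECT customernum FROM topThreetransit GROUP BY customernum) t2
        ON t1.customer = t2.customernum;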

First, you can use the SQL statement show global variables like 'innodb_buffer%'; to check the buffer size, as shown below.
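
For example (output abridged and illustrative; 8388608 is the 8 MB default mentioned above, and other innodb_buffer% variables will also appear):

    SHOW GLOBAL VARIABLES LIKE 'innodb_buffer%';
    -- +-------------------------+---------+
    -- | Variable_name           | Value   |
    -- +-------------------------+---------+
    -- | innodb_buffer_pool_size | 8388608 |
    -- +-------------------------+---------+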

The solution is to find your my.cnf file and add:

    [mysqld]
    innodb_buffer_pool_size=1G  # depends on your data and machine

DO NOT forget to add the [mysqld] section header; otherwise, it won't work.

In my case (Ubuntu 16.04), my.cnf is located in /etc/mysql/.

I am running MySQL on Windows with MySQL Workbench. Go to Server > Server Status. At the top it shows the configuration file path (C:\ProgramData\MySQL\...\my.ini).

Then, in the my.ini file, press Ctrl+F and find buffer_pool_size. Set the value higher; I would recommend 64 MB (the default is 8 MB).

Restart the server by going to Instance > Startup/Shutdown > Stop Server (and then Start Server again).

In my case I could not delete entries from my table.

Fixing Error code 1206: The number of locks exceeds the lock table size.

In my case, I work with MySQL Workbench (5.6.17) running on Windows with WampServer 2.5.

For Windows/WampServer you have to edit the my.ini file (not the my.cnf file)

To locate this file, go to the menu Server > Server Status (in MySQL Workbench) and look under Server Directories > Base Directory.


In the my.ini file there are sections for different settings. Look for the [mysqld] section (create it if it does not exist) and add the setting innodb_buffer_pool_size=4G:

[mysqld]
innodb_buffer_pool_size=4G

The right size for the buffer pool depends on your specific machine; in most cases, 2G or 4G will fix the problem.

Remember to restart the server so it picks up the new configuration. This corrected the problem for me.

Hope it helps!

I was getting the same issue in MySQL while running an SQL script: Error code 1206: The number of locks exceeds the lock table size.

This is a MySQL configuration issue, so I made some changes in my.ini. It's now working on my system and the issue is resolved.

We need to make some changes in my.ini, which is available at the following path: C:\ProgramData\MySQL\MySQL Server 5.7\my.ini. Please update the following fields in the config file:

    key_buffer_size=64M
    read_buffer_size=64M
    read_rnd_buffer_size=128M
    innodb_log_buffer_size=10M
    innodb_buffer_pool_size=256M
    query_cache_type=2
    max_allowed_packet=16M

After all the above changes, please restart the MySQL service.

If you have properly structured your tables so that each contains relatively unique values, then a less intensive way to do this would be three separate INSERT INTO statements, one for each table, with the join filter in place for each insert:

    INSERT INTO SkusBought...
    SELECT t1.customer, t1.sku, t1.typedesc
    FROM transactiondatatransit AS t1
    LEFT OUTER JOIN topThreetransit AS t2
        ON t1.customer = t2.customernum
    WHERE t2.customernum IS NOT NULL

Repeat this for the other two tables; copy/paste is a fine method, simply change the FROM table name. If you are trying to prevent duplicate entries in your SkusBought table, you can add the following join in each section, prior to the WHERE clause:

    LEFT OUTER JOIN SkusBought AS t3
        ON t1.customer = t3.customerNum
        AND t1.sku = t3.sku

and then add this as the last line of the WHERE clause:

    AND t3.customerNum IS NULL
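
Assembled, the first of the three statements would look something like this sketch (the other two swap in the other transaction tables):

    INSERT INTO SkusBought (customerNum, sku, typedesc)
    SELECT t1.customer, t1.sku, t1.typedesc
    FROM transactiondatatransit AS t1
    LEFT OUTER JOIN topThreetransit AS t2
        ON t1.customer = t2.customernum
    LEFT OUTER JOIN SkusBought AS t3
        ON t1.customer = t3.customerNum
        AND t1.sku = t3.sku
    WHERE t2.customernum IS NOT NULL
        AND t3.customerNum IS NULL;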

Your initial code uses a number of subqueries, and the UNION statement can be expensive, as it first creates its own temporary table to populate the data from the three separate sources before inserting into the table you want, along with running another subquery to filter the results.

On Windows, if you have MySQL Workbench: go to Server Status and find the location of the running server's configuration file. In my case it was:

C:\ProgramData\MySQL\MySQL Server 5.7

Open the my.ini file and find buffer_pool_size. Set the value higher; the default value is 8M. This is how I fixed the problem.

The answer below does not directly answer the OP's question. However, I'm adding it here because this page is the first result when you Google "The total number of locks exceeds the lock table size".


If the query you are running is parsing an entire table that spans millions of rows, you can try a while loop instead of changing limits in the configuration.

The while loop will break it into pieces. Below is an example looping over an indexed column that is DATETIME.

    # Drop
    DROP TABLE IF EXISTS new_table;

    # Create (we will add keys later)
    CREATE TABLE new_table (
        num INT(11),
        row_id VARCHAR(255),
        row_value VARCHAR(255),
        row_date DATETIME
    );

    # Change the delimiter
    DELIMITER //

    # Create procedure
    CREATE PROCEDURE do_repeat(IN current_loop_date DATETIME)
    BEGIN
        # Loops WEEK by WEEK until NOW(). Change WEEK to something shorter like DAY if you still get the lock errors.
        WHILE current_loop_date <= NOW() DO
            # Do something
            INSERT INTO new_table (num, row_id, row_value, row_date)
            SELECT
                # Do something interesting here
                num,
                row_id,
                row_value,
                row_date
            FROM old_table
            WHERE row_date >= current_loop_date
              AND row_date < current_loop_date + INTERVAL 1 WEEK;

            # Increment
            SET current_loop_date = current_loop_date + INTERVAL 1 WEEK;
        END WHILE;
    END//

    # Run
    CALL do_repeat('2017-01-01')//

    # Cleanup
    DROP PROCEDURE IF EXISTS do_repeat//

    # Change the delimiter back
    DELIMITER ;

    # Add keys
    ALTER TABLE new_table
        MODIFY COLUMN num INT(11) NOT NULL,
        ADD PRIMARY KEY (num),
        ADD KEY row_id (row_id) USING BTREE,
        ADD KEY row_date (row_date) USING BTREE;

You can also adapt it to loop over the "num" column if your table doesn't use a date.
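
For instance, a sketch of the same procedure keyed on an integer column instead (the 100000 chunk size and the bounds passed in are hypothetical; tune them to your data):

    DELIMITER //
    CREATE PROCEDURE do_repeat_num(IN current_num INT, IN max_num INT)
    BEGIN
        # Copy one range of num values per statement until the upper bound is reached.
        WHILE current_num <= max_num DO
            INSERT INTO new_table (num, row_id, row_value, row_date)
            SELECT num, row_id, row_value, row_date
            FROM old_table
            WHERE num >= current_num
              AND num < current_num + 100000;

            SET current_num = current_num + 100000;
        END WHILE;
    END//
    DELIMITER ;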

Hope this helps someone!

It is worth saying that the figure used for this setting is in BYTES - found that out the hard way!
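
For example, both lines below set a 4 GB pool; a bare number is read as bytes, while the usual K/M/G suffixes are also accepted:

    [mysqld]
    # 4294967296 bytes = 4G
    innodb_buffer_pool_size=4294967296
    # equivalent, and easier to read:
    # innodb_buffer_pool_size=4G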

Good Day,

I have had the same error when trying to remove millions of rows from a MySQL table.

My resolution had nothing to do with changing the MySQL configuration file; I just reduced the number of rows I was targeting by specifying the max id per transaction. Instead of targeting all the rows in one transaction, I would suggest targeting portions per transaction. It might take more transactions to get the job done, but at least you will get somewhere, rather than fiddling around with MySQL configuration.

Example:

delete from myTable where groupKey in ('any1', 'any2') and id < 400000;

and not

delete from myTable where groupKey in ('any1', 'any2');

The query might still be optimized by using GROUP BY and ORDER BY clauses.
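
A variation on the same idea using DELETE ... LIMIT (the 100000 chunk size is arbitrary): cap each transaction by row count instead of by id, and re-run the statement until it affects zero rows.

    DELETE FROM myTable
    WHERE groupKey IN ('any1', 'any2')
    LIMIT 100000;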

I was facing a similar problem when I was trying to insert a few million rows into a database using Python. The solution is to group these inserts into smaller chunks to reduce memory usage, using the executemany function to insert each chunk and committing after each one instead of executing a single commit at the end.

    def insert(query, items, conn):
        GROUPS = 10
        total = items.shape[0]
        # Ceiling division so the last partial chunk isn't dropped.
        group = -(-total // GROUPS)
        items = list(items.itertuples(name=None, index=None))
        cursor = conn.cursor()
        for i in range(GROUPS):
            cursor.executemany(query, items[group * i : group * (i + 1)])
            conn.commit()
            print('#', end='')
        print()

There's also a neat progress bar in the implementation above.

It helps to insert/update rows in batches.

To insert rows from one table to another:

    CREATE TABLE `table_from` (
        `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
        `foo` VARCHAR(10),
        `bar` VARCHAR(10),
        `baz` VARCHAR(10)
    );

    INSERT INTO `table_from` VALUES
        (NULL, 'test1', 'test2', 'test3'),
        (NULL, 'test4', 'test5', 'test6'),
        (NULL, 'test7', 'test8', 'test9');

    CREATE TABLE `table_to` (
        `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
        `foo` VARCHAR(10),
        `bar` VARCHAR(10),
        `baz` VARCHAR(10)
    );

    DELIMITER $$
    CREATE PROCEDURE `insert_rows_into_table`()
    BEGIN
        DECLARE `start` INT DEFAULT 0;
        DECLARE `batch_size` INT DEFAULT 100000;
        DECLARE `total_rows` INT DEFAULT 0;

        -- Get the total row count.
        SELECT COUNT(*) INTO `total_rows` FROM `table_from`;

        -- Process rows in batches.
        WHILE `start` < `total_rows` DO
            START TRANSACTION;

            INSERT INTO `table_to` (`foo`, `bar`, `baz`)
            WITH `rows_to_insert` AS
            (
                SELECT `foo`, `bar`, `baz`
                FROM `table_from`
                LIMIT `start`, `batch_size`
            )
            SELECT `foo`, `bar`, `baz` FROM `rows_to_insert`;

            COMMIT;
            SET `start` = `start` + `batch_size`;
        END WHILE;
    END $$
    DELIMITER ;

    CALL `insert_rows_into_table`();
    SELECT * FROM `table_to`;

DB Fiddle

To update all rows:

    CREATE TABLE `my_table` (
        `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
        `foo` VARCHAR(10),
        `bar` VARCHAR(10),
        `baz` VARCHAR(10)
    );

    INSERT INTO `my_table` VALUES
        (NULL, 'test1', 'test2', 'test3'),
        (NULL, 'test4', 'test5', 'test6'),
        (NULL, 'test7', 'test8', 'test9');

    DELIMITER $$
    CREATE PROCEDURE `update_column_in_table`()
    BEGIN
        DECLARE `start` INT DEFAULT 0;
        DECLARE `batch_size` INT DEFAULT 100000;
        DECLARE `total_rows` INT DEFAULT 0;

        -- Get the total row count.
        SELECT COUNT(*) INTO `total_rows` FROM `my_table`;

        -- Process rows in batches.
        WHILE `start` < `total_rows` DO
            START TRANSACTION;

            WITH `rows_to_update` AS
            (
                SELECT `id`
                FROM `my_table`
                LIMIT `start`, `batch_size`
            )
            UPDATE `my_table`
            JOIN `rows_to_update` ON `my_table`.`id` = `rows_to_update`.`id`
            SET
                `my_table`.`foo` = UPPER(`my_table`.`foo`),
                `my_table`.`bar` = REVERSE(`my_table`.`bar`),
                `my_table`.`baz` = REPEAT(`my_table`.`baz`, 2)
            WHERE `my_table`.`id` = `rows_to_update`.`id`;

            COMMIT;
            SET `start` = `start` + `batch_size`;
        END WHILE;
    END $$
    DELIMITER ;

    CALL `update_column_in_table`();
    SELECT * FROM `my_table`;

DB Fiddle