The my.ini file defines sections for different groups of settings. Look for the [mysqld] section (create it if it does not exist) and add the directive innodb_buffer_pool_size=4G:
[mysqld]
innodb_buffer_pool_size=4G
The right buffer pool size depends on your specific machine; in most cases, 2G or 4G will fix the problem.
Remember to restart the server so it picks up the new configuration. This corrected the problem for me.
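Once the server is back up, a quick sanity check confirms the new value took effect; note that the variable is reported in bytes:

SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;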
Hope it helps!
I was getting the same issue in MySQL while running an SQL script: Error code 1206: The number of locks exceeds the lock table size.
This is a MySQL configuration issue, so I made some changes in my.ini and the issue was resolved on my system.
The file is available at the following path: C:\ProgramData\MySQL\MySQL Server 5.7\my.ini
Update the following fields in the my.ini config file:
key_buffer_size=64M
read_buffer_size=64M
read_rnd_buffer_size=128M
innodb_log_buffer_size=10M
innodb_buffer_pool_size=256M
query_cache_type=2
max_allowed_packet=16M
After making all the above changes, restart the MySQL service (for example, from the Windows Services console).
If you have properly structured your tables so that each contains relatively unique values, then a less intensive way to do this would be to run three separate INSERT INTO statements, one for each table, with the join filter in place for each insert:
INSERT INTO SkusBought...
SELECT t1.customer, t1.SKU, t1.TypeDesc
FROM transactiondatatransit AS T1
LEFT OUTER JOIN topThreetransit AS T2
ON t1.customer = t2.customernum
WHERE T2.customernum IS NOT NULL
Repeat this for the other two tables; copy/paste is a fine method, simply change the FROM table name. If you are trying to prevent duplicate entries in your SkusBought table, you can add the following join code in each section prior to the WHERE clause:
LEFT OUTER JOIN SkusBought AS T3
ON t1.customer = t3.customer
AND t1.sku = t3.sku
and then add this as the last line of the WHERE clause:
AND t3.customer IS NULL
Your initial code uses a number of sub-queries, and the UNION statement can be expensive: it first creates its own temporary table to populate the data from the three separate sources before inserting into the target table, along with running another sub-query to filter the results.
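For reference, here is what one of the three statements looks like with the duplicate-prevention join folded in. This is a sketch reusing the table and column names from this answer; adapt them to your own schema:

INSERT INTO SkusBought...
SELECT t1.customer, t1.SKU, t1.TypeDesc
FROM transactiondatatransit AS T1
LEFT OUTER JOIN topThreetransit AS T2
ON t1.customer = t2.customernum
LEFT OUTER JOIN SkusBought AS T3
ON t1.customer = t3.customer
AND t1.sku = t3.sku
WHERE T2.customernum IS NOT NULL
AND t3.customer IS NULL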
On Windows: if you have MySQL Workbench, go to Server Status and find the location of the running server's configuration file. In my case it was:
C:\ProgramData\MySQL\MySQL Server 5.7
Open the my.ini file and find innodb_buffer_pool_size. Set the value higher (the default there was 8M in my case). This is how I fixed the problem.
The answer below does not directly answer the OP's question. However, I'm adding this answer here because this page is the first result when you Google "The total number of locks exceeds the lock table size".
If the query you are running parses an entire table that spans millions of rows, you can try a WHILE loop instead of changing limits in the configuration.
The WHILE loop will break the work into pieces. Below is an example looping over an indexed column that is DATETIME.
# Drop
DROP TABLE IF EXISTS new_table;

# Create (we will add keys later)
CREATE TABLE new_table (
    num INT(11),
    row_id VARCHAR(255),
    row_value VARCHAR(255),
    row_date DATETIME
);

# Change the delimiter
DELIMITER //

# Create procedure
CREATE PROCEDURE do_repeat(IN current_loop_date DATETIME)
BEGIN
    # Loops WEEK by WEEK until NOW(). Change WEEK to something shorter like DAY if you still get the lock errors.
    WHILE current_loop_date <= NOW() DO
        # Do something
        INSERT INTO new_table
            (num, row_id, row_value, row_date)
        SELECT
            # Do something interesting here
            num,
            row_id,
            row_value,
            row_date
        FROM old_table
        WHERE
            row_date >= current_loop_date AND
            row_date < current_loop_date + INTERVAL 1 WEEK;

        # Increment
        SET current_loop_date = current_loop_date + INTERVAL 1 WEEK;
    END WHILE;
END//

# Run
CALL do_repeat('2017-01-01');

# Cleanup
DROP PROCEDURE IF EXISTS do_repeat//

# Change the delimiter back
DELIMITER ;

# Add keys
ALTER TABLE new_table
    MODIFY COLUMN num int(11) NOT NULL,
    ADD PRIMARY KEY (num),
    ADD KEY row_id (row_id) USING BTREE,
    ADD KEY row_date (row_date) USING BTREE;
You can also adapt it to loop over the "num" column if your table doesn't use a date.
Hope this helps someone!
It is worth saying that the figure used for this setting is in bytes when no unit suffix is given. I found that out the hard way!
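For example, a bare number set at runtime is read as bytes. A minimal sketch, assuming MySQL 5.7.5 or later, where innodb_buffer_pool_size can be changed dynamically:

SET GLOBAL innodb_buffer_pool_size = 4294967296; -- bytes, i.e. 4G

In the my.ini file itself you can use a suffix such as 4G instead.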
Good Day,
I have had the same error when trying to remove millions of rows from a MySQL table.
My resolution had nothing to do with changing the MySQL configuration file; I simply reduced the number of rows targeted per transaction by specifying a maximum id. Instead of targeting all the rows with one transaction, I would suggest targeting portions per transaction. It might take more transactions to get the job done, but at least you will get somewhere, rather than fiddling around with MySQL configuration.
Example:
delete from myTable where groupKey in ('any1', 'any2') and id < 400000;
and not
delete from myTable where groupKey in ('any1', 'any2');
The query might still be optimized further by using GROUP BY and ORDER BY clauses.
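If you cannot bound the rows by id up front, ORDER BY with LIMIT gives a similar chunking effect. A sketch using the same hypothetical table; rerun it until zero rows are affected:

DELETE FROM myTable
WHERE groupKey IN ('any1', 'any2')
ORDER BY id
LIMIT 100000;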
I was facing a similar problem when trying to insert a few million rows into a database using Python. The solution is to group the inserts into smaller chunks to reduce memory usage, using the executemany function to insert each chunk and committing after each one instead of executing a single commit at the end.
def insert(query, items, conn):
    # Split the rows into GROUPS chunks; items is a pandas DataFrame.
    GROUPS = 10
    total = items.shape[0]
    group = total // GROUPS
    items = list(items.itertuples(name=None, index=None))
    cursor = conn.cursor()
    for i in range(GROUPS):
        # Let the last chunk run to the end so leftover rows are not dropped.
        end = group * (i + 1) if i < GROUPS - 1 else total
        cursor.executemany(query, items[group * i : end])
        conn.commit()
        # Progress bar: one '#' per committed chunk.
        print('#', end='')
    print()
There's also a neat progress bar in the implementation above.
It helps to insert/update rows in batches.
To insert rows from one table to another:
CREATE TABLE `table_from` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `foo` VARCHAR(10),
    `bar` VARCHAR(10),
    `baz` VARCHAR(10)
);

INSERT INTO `table_from` VALUES
    (NULL, 'test1', 'test2', 'test3'),
    (NULL, 'test4', 'test5', 'test6'),
    (NULL, 'test7', 'test8', 'test9');

CREATE TABLE `table_to` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `foo` VARCHAR(10),
    `bar` VARCHAR(10),
    `baz` VARCHAR(10)
);

DELIMITER $$
CREATE PROCEDURE `insert_rows_into_table`()
BEGIN
    DECLARE `start` INT DEFAULT 0;
    DECLARE `batch_size` INT DEFAULT 100000;
    DECLARE `total_rows` INT DEFAULT 0;

    -- Get the total row count.
    SELECT COUNT(*) INTO `total_rows` FROM `table_from`;

    -- Process rows in batches.
    WHILE `start` < `total_rows` DO
        START TRANSACTION;

        INSERT INTO `table_to` (`foo`, `bar`, `baz`)
        WITH `rows_to_insert` AS
        (
            SELECT `foo`, `bar`, `baz`
            FROM `table_from`
            LIMIT `start`, `batch_size`
        )
        SELECT `foo`, `bar`, `baz` FROM `rows_to_insert`;

        COMMIT;
        SET `start` = `start` + `batch_size`;
    END WHILE;
END $$
DELIMITER ;

CALL `insert_rows_into_table`();

SELECT * FROM `table_to`;
DB Fiddle
To update all rows:
CREATE TABLE `my_table` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `foo` VARCHAR(10),
    `bar` VARCHAR(10),
    `baz` VARCHAR(10)
);

INSERT INTO `my_table` VALUES
    (NULL, 'test1', 'test2', 'test3'),
    (NULL, 'test4', 'test5', 'test6'),
    (NULL, 'test7', 'test8', 'test9');

DELIMITER $$
CREATE PROCEDURE `update_column_in_table`()
BEGIN
    DECLARE `start` INT DEFAULT 0;
    DECLARE `batch_size` INT DEFAULT 100000;
    DECLARE `total_rows` INT DEFAULT 0;

    -- Get the total row count.
    SELECT COUNT(*) INTO `total_rows` FROM `my_table`;

    -- Process rows in batches.
    WHILE `start` < `total_rows` DO
        START TRANSACTION;

        WITH `rows_to_update` AS
        (
            SELECT `id`
            FROM `my_table`
            LIMIT `start`, `batch_size`
        )
        UPDATE `my_table`
        JOIN `rows_to_update` ON `my_table`.`id` = `rows_to_update`.`id`
        SET
            `my_table`.`foo` = UPPER(`my_table`.`foo`),
            `my_table`.`bar` = REVERSE(`my_table`.`bar`),
            `my_table`.`baz` = REPEAT(`my_table`.`baz`, 2)
        WHERE `my_table`.`id` = `rows_to_update`.`id`;

        COMMIT;
        SET `start` = `start` + `batch_size`;
    END WHILE;
END $$
DELIMITER ;

CALL `update_column_in_table`();

SELECT * FROM `my_table`;
DB Fiddle