Improve Query Performance of SELECT Statement inside IN Statement
My query is:
(3) UPDATE wp_postmeta set meta_value = 'outofstock' where meta_key = '_stock_status' and
post_id in
(
(2) SELECT post_id FROM
(
(1) SELECT A.post_id from wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
AND A.meta_key = 'attribute_pa_beden'
and A.meta_value in ('12yas','34yas','56yas','78yas','910yas','1112yas')
and B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
) AS DerivedTable
)
To improve the speed of this query, I analysed it using the EXPLAIN statement. Below are the results:
When I added EXPLAIN at location (1) in the query above and ran the subquery, the results were as follows:
When I added EXPLAIN at location (2) and ran that subquery, the results were as follows:
When I added EXPLAIN at location (3) and ran the whole query, the results were as follows:
My analysis is that there is no speed problem with subquery (1), but after I select the data from this subquery into the derived table at (2), somehow a "rows" value of 55,277,640 appears, which seems to be the reason my query is so slow. How can I optimize it? What is wrong here?
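For reference, adding EXPLAIN at location (1) means running the inner SELECT on its own with EXPLAIN prefixed, along these lines (same predicates as in the UPDATE above):
EXPLAIN
SELECT A.post_id
FROM wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
    AND A.meta_key = 'attribute_pa_beden'
    AND A.meta_value IN ('12yas','34yas','56yas','78yas','910yas','1112yas')
    AND B.meta_key = 'attribute_pa_renk'
    AND ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.ID
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
    AND term_taxonomy_id IN ('2643','2304');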
Edit: The tables are the standard WordPress WooCommerce module tables. I have not modified them. Here are the SHOW CREATE TABLE
results:
wp_postmeta
CREATE TABLE `wp_postmeta` (
`meta_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`post_id` bigint(20) unsigned NOT NULL DEFAULT '0',
`meta_key` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`meta_value` longtext COLLATE utf8mb4_unicode_ci,
PRIMARY KEY (`meta_id`),
KEY `post_id` (`post_id`),
KEY `meta_key` (`meta_key`(191))
) ENGINE=MyISAM AUTO_INCREMENT=11119572 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
wp_posts
CREATE TABLE `wp_posts` (
`ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`post_author` bigint(20) unsigned NOT NULL DEFAULT '0',
`post_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`post_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`post_content` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
`post_title` text COLLATE utf8mb4_unicode_ci NOT NULL,
`post_excerpt` text COLLATE utf8mb4_unicode_ci NOT NULL,
`post_status` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'publish',
`comment_status` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
`ping_status` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
`post_password` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
`post_name` varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
`to_ping` text COLLATE utf8mb4_unicode_ci NOT NULL,
`pinged` text COLLATE utf8mb4_unicode_ci NOT NULL,
`post_modified` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`post_modified_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`post_content_filtered` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
`post_parent` bigint(20) unsigned NOT NULL DEFAULT '0',
`guid` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
`menu_order` int(11) NOT NULL DEFAULT '0',
`post_type` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'post',
`post_mime_type` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
`comment_count` bigint(20) NOT NULL DEFAULT '0',
PRIMARY KEY (`ID`),
KEY `post_name` (`post_name`(191)),
KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
KEY `post_parent` (`post_parent`),
KEY `post_author` (`post_author`)
) ENGINE=MyISAM AUTO_INCREMENT=352598 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
wp_term_relationships
CREATE TABLE `wp_term_relationships` (
`object_id` bigint(20) unsigned NOT NULL DEFAULT '0',
`term_taxonomy_id` bigint(20) unsigned NOT NULL DEFAULT '0',
`term_order` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`object_id`,`term_taxonomy_id`),
KEY `term_taxonomy_id` (`term_taxonomy_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Edit2: After manually changing the tables to InnoDB, the EXPLAIN results changed as shown below.
Explain in location 1:
Explain in location 2:
Explain in location 3:
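For reference, one way to do the manual engine change is with an ALTER TABLE per table, e.g.:
ALTER TABLE wp_postmeta ENGINE=InnoDB;
ALTER TABLE wp_posts ENGINE=InnoDB;
ALTER TABLE wp_term_relationships ENGINE=InnoDB;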
Here is some information on what I am trying to do. My website has a lot of product variations. The wp_postmeta table holds the meta information for all variations (for example price, color, size, in-stock status, etc.).
My query is used to update stock information. For example, I have a product with 9 colors and 9 sizes, which means 81 variations. If I need to update the stock of color 'bordo' for all sizes, then 9 rows have to be updated in wp_postmeta (all sizes for color 'bordo').
Here are some row counts when I change the above queries to SELECT COUNT(*):
wp_postmeta has 9,929,761 rows.
The subquery indicated as (1) returns 3612 rows.
The subquery indicated as (2) returns 3612 rows as well.
The query indicated as (3) matches 3612 rows too.
So basically there are 3612 'bordo'-colored product variations on my website, and I need to update the stock information for all of these variations.
When I checked my database, there are 227K rows with meta_key "_stock_status" (all colors and all product sizes). I know updating all that data is somewhat costly, but I am not using this function all the time.
When I add 20 new products and when I get information from the production team about the latest stock status of products and colors, I click this "Update Stocks" button to bring all the stock information on the website up to date.
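For reference, a count like that 227K figure can be checked with a simple aggregate over the meta key, e.g.:
SELECT COUNT(*) FROM wp_postmeta WHERE meta_key = '_stock_status';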
Edit2
I read the suggested modifications for the wp_postmeta table from Rick. They all seemed helpful, but I didn't want to change the default structure of the wp_postmeta table, as I am not a DB expert and might not be able to recover from future problems caused by modifying the core WordPress tables.
So, to stay on the safe side, I read the other two answers, from ypercubeᵀᴹ and Old Pro.
They suggested adding a new index to the wp_postmeta table, and this is my only change to this core WordPress table. I tested both of their queries, and they gave the same EXPLAIN results, shown below.
So basically their queries do the same thing with a different structure. Now the query's performance has improved considerably. The only question in my mind is: how safe is it to add a manual index to a WordPress core table?
mysql query-performance optimization select derived-tables
It won't affect performance, but that OR ('bordo' = 'hepsi') is useless and can be removed.
– ypercubeᵀᴹ
Feb 10 at 17:40
It would help us if you posted the SHOW CREATE TABLE name; for all 3 tables in the query. And the version of MySQL.
– ypercubeᵀᴹ
Feb 10 at 17:47
@ypercubeᵀᴹ updated the question with SHOW CREATE TABLE name; for the 3 tables in the query. They are all standard WordPress WooCommerce plugin tables.
– HOY
Feb 10 at 18:52
"How safe it is to add a manual index to a wordpress core table?" I'd say 100%. I can't see how it could break anything. And if (very unlikely) it does break something in the future (say a WP migration that doesn't like the name of the index), you can always drop or rename the index.
– ypercubeᵀᴹ
Feb 11 at 19:31
I agree, it is very safe to add an index, although I recommend redefining the existing indexes, which is slightly safer in the case of automated schema changes. Note that ypercube and I are recommending different indexes to add, but you can easily add both by redefining the existing post_id and meta_key indexes.
– Old Pro
Feb 11 at 20:18
3 Answers
postmeta has inefficient indexing. See https://stackoverflow.com/questions/43859351/why-are-references-to-wp-postmeta-so-slow for a discussion of what to do about it. Or see http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta or https://wordpress.stackexchange.com/questions/248207/simple-sql-query-on-wp-postmeta-very-slow
Hi Rick, I am happy to see you. Actually you helped me with the same issue in a previous topic (stackoverflow.com/questions/53854249/…), in which you told me that "In some situations, IN ( SELECT ... ) is much slower because it reevaluates the SELECT for every post_id". This is why I thought this was related to my SQL syntax rather than the postmeta table itself. I checked the first link you shared. It seems helpful, but does it really remove the IN (SELECT...) problem I am suffering from?
– HOY
Feb 10 at 19:23
Are there really 5.5M rows and 227K rows for the intermediate results? If so, my postmeta tips will speed it up, but you may have a much bigger problem -- namely why do you need to update 227K rows??
– Rick James
Feb 10 at 19:38
@HOY - And... Even if you get past the inefficiencies of the in-select and postmeta, the cost of updating 227K rows is high. This is because of the need to capture the previous copy of each row in case of a crash and rollback.
– Rick James
Feb 10 at 19:44
@RickJames Rollback? The tables are MyISAM ...
– ypercubeᵀᴹ
Feb 10 at 19:45
@HOY - OK, 3612 is tolerable; it won't take too long. Glad you are moving to InnoDB.
– Rick James
Feb 11 at 22:52
First, the WordPress database design has several flaws, as Rick James points out in his answer and the linked ones. The wp_postmeta table in particular is a common cause of performance problems in many WP installations as soon as they grow enough and have more than a few thousand rows.
Before going down the difficult and long road of addressing that, I'll suggest something that might help improve this specific query in the short term:
Rewrite the IN (complex subquery) to a JOIN:
UPDATE
(
SELECT A.post_id from wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
AND A.meta_key = 'attribute_pa_beden'
and A.meta_value in ('12yas','34yas','56yas','78yas','910yas','1112yas')
and B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
) AS DerivedTable
JOIN
wp_postmeta AS upd
ON
upd.post_id = DerivedTable.post_id
SET
upd.meta_value = 'outofstock'
WHERE
upd.meta_key = '_stock_status' ;
and add an index on wp_postmeta (meta_key(191), post_id)
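A minimal sketch of adding such an index (the index name meta_key_post_id is only illustrative, not specified in the answer):
ALTER TABLE wp_postmeta
    ADD KEY meta_key_post_id (meta_key(191), post_id);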
It is useless to have any columns after a prefix; the rest will be ignored. Think of it this way -- a prefix is essentially a 'range'.
– Rick James
Feb 10 at 19:42
I wanted to suggest a (meta_key, post_id) but the VARCHAR(255) utf8mb4 does not allow it. Correct?
– ypercubeᵀᴹ
Feb 10 at 19:46
Yep. And I give five workarounds here; none is perfect.
– Rick James
Feb 10 at 19:53
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
While what I am proposing is not necessarily the best idea in the general case, I think for this specific situation you would be better off using nested IN
clauses rather than the complicated self-join. Try this:
UPDATE
(
SELECT B.post_id FROM wp_postmeta B where B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
and B.post_id in
(
SELECT A.post_id from wp_postmeta A
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships
ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
WHERE A.meta_key = 'attribute_pa_beden'
and A.meta_value in
('12yas','34yas','56yas','78yas','910yas','1112yas')
)
) AS targets
JOIN wp_postmeta ON wp_postmeta.post_id = targets.post_id
SET meta_value = 'outofstock'
WHERE meta_key = '_stock_status'
And also, in wp_postmeta, replace
KEY `post_id` (`post_id`),
with
KEY `post_id` (`post_id`, `meta_key`(191)),
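One way to apply that replacement is to drop and re-add the key under the same name, e.g.:
ALTER TABLE wp_postmeta
    DROP KEY post_id,
    ADD KEY post_id (post_id, meta_key(191));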
Query execution failed, Reason: SQL Error [1093] [HY000]: You can't specify target table 'wp_postmeta' for update in FROM clause
– HOY
Feb 11 at 10:35
@HOY sorry, I don't have a system to test these queries on, and forgot that you cannot include the table you are updating in the subquery. I updated my answer to materialize the subquery into a table you can use, just as all the other answers and your original query do. Please try the new query and let me know how it goes.
– Old Pro
Feb 11 at 11:07
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "182"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdba.stackexchange.com%2fquestions%2f229348%2fimprove-query-performance-of-select-statment-inside-in-statment%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
postmeta
has inefficient. See https://stackoverflow.com/questions/43859351/why-are-references-to-wp-postmeta-so-slow
for a discussion of what to do about it. Or: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta or
https://wordpress.stackexchange.com/questions/248207/simple-sql-query-on-wp-postmeta-very-slow
Hi Rick, I am happy to see you. Actually you helped me for the same issue on a previous [topic] (stackoverflow.com/questions/53854249/…). In which you told me that "In some situations,IN ( SELECT ... )
is much slower because it reevaluates the SELECT for every post_id". This is why I thought that this is something related to my SQL syntax rather than the postmeta itself. I checked the first link you shared. They seems to be helpful, but does it really removes the problem ofIN (SELECT...)
problem that I am suffering ?
– HOY
Feb 10 at 19:23
Are there really 5.5M rows and 227K rows for the intermediate results? If so, my postmeta tips will speed it up, but you may have a much bigger problem -- namely why do you need to update 227K rows??
– Rick James
Feb 10 at 19:38
@HOY - And... Even if you get past the inefficiencies of the in-select and postmeta, the cost of updating 227K rows is high. This is because of the need to capture the previous copy of each row incase of a crash and rollback.
– Rick James
Feb 10 at 19:44
@RickJames Rollback? The tables are MyiSAM ...
– ypercubeᵀᴹ
Feb 10 at 19:45
1
@HOY - OK, 3612 is tolerable; it won't take too long. Glad you are moving to InnoDB.
– Rick James
Feb 11 at 22:52
|
show 3 more comments
postmeta
has inefficient. See https://stackoverflow.com/questions/43859351/why-are-references-to-wp-postmeta-so-slow
for a discussion of what to do about it. Or: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta or
https://wordpress.stackexchange.com/questions/248207/simple-sql-query-on-wp-postmeta-very-slow
Hi Rick, I am happy to see you. Actually you helped me for the same issue on a previous [topic] (stackoverflow.com/questions/53854249/…). In which you told me that "In some situations,IN ( SELECT ... )
is much slower because it reevaluates the SELECT for every post_id". This is why I thought that this is something related to my SQL syntax rather than the postmeta itself. I checked the first link you shared. They seems to be helpful, but does it really removes the problem ofIN (SELECT...)
problem that I am suffering ?
– HOY
Feb 10 at 19:23
Are there really 5.5M rows and 227K rows for the intermediate results? If so, my postmeta tips will speed it up, but you may have a much bigger problem -- namely why do you need to update 227K rows??
– Rick James
Feb 10 at 19:38
@HOY - And... Even if you get past the inefficiencies of the in-select and postmeta, the cost of updating 227K rows is high. This is because of the need to capture the previous copy of each row incase of a crash and rollback.
– Rick James
Feb 10 at 19:44
@RickJames Rollback? The tables are MyiSAM ...
– ypercubeᵀᴹ
Feb 10 at 19:45
1
@HOY - OK, 3612 is tolerable; it won't take too long. Glad you are moving to InnoDB.
– Rick James
Feb 11 at 22:52
|
show 3 more comments
postmeta
has inefficient. See https://stackoverflow.com/questions/43859351/why-are-references-to-wp-postmeta-so-slow
for a discussion of what to do about it. Or: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta or
https://wordpress.stackexchange.com/questions/248207/simple-sql-query-on-wp-postmeta-very-slow
postmeta
has inefficient. See https://stackoverflow.com/questions/43859351/why-are-references-to-wp-postmeta-so-slow
for a discussion of what to do about it. Or: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta or
https://wordpress.stackexchange.com/questions/248207/simple-sql-query-on-wp-postmeta-very-slow
answered Feb 10 at 18:52
Rick JamesRick James
43.1k22259
43.1k22259
Hi Rick, I am happy to see you. Actually you helped me for the same issue on a previous [topic] (stackoverflow.com/questions/53854249/…). In which you told me that "In some situations,IN ( SELECT ... )
is much slower because it reevaluates the SELECT for every post_id". This is why I thought that this is something related to my SQL syntax rather than the postmeta itself. I checked the first link you shared. They seems to be helpful, but does it really removes the problem ofIN (SELECT...)
problem that I am suffering ?
– HOY
Feb 10 at 19:23
Are there really 5.5M rows and 227K rows for the intermediate results? If so, my postmeta tips will speed it up, but you may have a much bigger problem -- namely why do you need to update 227K rows??
– Rick James
Feb 10 at 19:38
@HOY - And... Even if you get past the inefficiencies of the in-select and postmeta, the cost of updating 227K rows is high. This is because of the need to capture the previous copy of each row incase of a crash and rollback.
– Rick James
Feb 10 at 19:44
@RickJames Rollback? The tables are MyiSAM ...
– ypercubeᵀᴹ
Feb 10 at 19:45
1
@HOY - OK, 3612 is tolerable; it won't take too long. Glad you are moving to InnoDB.
– Rick James
Feb 11 at 22:52
|
show 3 more comments
Hi Rick, I am happy to see you. Actually you helped me for the same issue on a previous [topic] (stackoverflow.com/questions/53854249/…). In which you told me that "In some situations,IN ( SELECT ... )
is much slower because it reevaluates the SELECT for every post_id". This is why I thought that this is something related to my SQL syntax rather than the postmeta itself. I checked the first link you shared. They seems to be helpful, but does it really removes the problem ofIN (SELECT...)
problem that I am suffering ?
– HOY
Feb 10 at 19:23
Are there really 5.5M rows and 227K rows for the intermediate results? If so, my postmeta tips will speed it up, but you may have a much bigger problem -- namely why do you need to update 227K rows??
– Rick James
Feb 10 at 19:38
@HOY - And... Even if you get past the inefficiencies of the in-select and postmeta, the cost of updating 227K rows is high. This is because of the need to capture the previous copy of each row incase of a crash and rollback.
– Rick James
Feb 10 at 19:44
@RickJames Rollback? The tables are MyiSAM ...
– ypercubeᵀᴹ
Feb 10 at 19:45
1
@HOY - OK, 3612 is tolerable; it won't take too long. Glad you are moving to InnoDB.
– Rick James
Feb 11 at 22:52
Hi Rick, I am happy to see you. Actually you helped me for the same issue on a previous [topic] (stackoverflow.com/questions/53854249/…). In which you told me that "In some situations,
IN ( SELECT ... )
is much slower because it reevaluates the SELECT for every post_id". This is why I thought that this is something related to my SQL syntax rather than the postmeta itself. I checked the first link you shared. They seems to be helpful, but does it really removes the problem of IN (SELECT...)
problem that I am suffering ?– HOY
Feb 10 at 19:23
Hi Rick, I am happy to see you. Actually you helped me for the same issue on a previous [topic] (stackoverflow.com/questions/53854249/…). In which you told me that "In some situations,
IN ( SELECT ... )
is much slower because it reevaluates the SELECT for every post_id". This is why I thought that this is something related to my SQL syntax rather than the postmeta itself. I checked the first link you shared. They seems to be helpful, but does it really removes the problem of IN (SELECT...)
problem that I am suffering ?– HOY
Feb 10 at 19:23
Are there really 5.5M rows and 227K rows for the intermediate results? If so, my postmeta tips will speed it up, but you may have a much bigger problem -- namely why do you need to update 227K rows??
– Rick James
Feb 10 at 19:38
Are there really 5.5M rows and 227K rows for the intermediate results? If so, my postmeta tips will speed it up, but you may have a much bigger problem -- namely why do you need to update 227K rows??
– Rick James
Feb 10 at 19:38
@HOY - And... Even if you get past the inefficiencies of the in-select and postmeta, the cost of updating 227K rows is high. This is because of the need to capture the previous copy of each row incase of a crash and rollback.
– Rick James
Feb 10 at 19:44
@HOY - And... Even if you get past the inefficiencies of the in-select and postmeta, the cost of updating 227K rows is high. This is because of the need to capture the previous copy of each row incase of a crash and rollback.
– Rick James
Feb 10 at 19:44
@RickJames Rollback? The tables are MyiSAM ...
– ypercubeᵀᴹ
Feb 10 at 19:45
@RickJames Rollback? The tables are MyiSAM ...
– ypercubeᵀᴹ
Feb 10 at 19:45
1
1
@HOY - OK, 3612 is tolerable; it won't take too long. Glad you are moving to InnoDB.
– Rick James
Feb 11 at 22:52
@HOY - OK, 3612 is tolerable; it won't take too long. Glad you are moving to InnoDB.
– Rick James
Feb 11 at 22:52
|
show 3 more comments
First, the Wordpress database design has several flaws as Rick James points out in his answer and the linked ones. The wp_postmeta
in particular is a common cause of performance problems in many WP installations, as soon they grow enough and have more than a few thousands rows.
Before going the difficult and long road of addressing that, I'll suggest something that might help improving the specific query in the short time:
Rewrite the IN (complex subquery)
to a JOIN
:
UPDATE
(
SELECT A.post_id from wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
AND A.meta_key = 'attribute_pa_beden'
and A.meta_value in ('12yas','34yas','56yas','78yas','910yas','1112yas')
and B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
) AS DerivedTable
JOIN
wp_postmeta AS upd
ON
upd.post_id = DerivedTable.post_id
SET
upd.meta_value = 'outofstock'
WHERE
upd.meta_key = '_stock_status' ;
and add an index on wp_postmeta (meta_key(191), post_id)
It is useless to have any columns after a prefix; the rest will be ignored. Think of it this way -- a prefix is essentially a 'range'.
– Rick James
Feb 10 at 19:42
I wanted to suggest a(meta_key, post_id)
but the VARCHAR(255) utf8mb4 does not allow it. Correct?
– ypercubeᵀᴹ
Feb 10 at 19:46
Yep. And I give five workarounds here; none is perfect.
– Rick James
Feb 10 at 19:53
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
add a comment |
First, the Wordpress database design has several flaws as Rick James points out in his answer and the linked ones. The wp_postmeta
in particular is a common cause of performance problems in many WP installations, as soon they grow enough and have more than a few thousands rows.
Before going the difficult and long road of addressing that, I'll suggest something that might help improving the specific query in the short time:
Rewrite the IN (complex subquery)
to a JOIN
:
UPDATE
(
SELECT A.post_id from wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
AND A.meta_key = 'attribute_pa_beden'
and A.meta_value in ('12yas','34yas','56yas','78yas','910yas','1112yas')
and B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
) AS DerivedTable
JOIN
wp_postmeta AS upd
ON
upd.post_id = DerivedTable.post_id
SET
upd.meta_value = 'outofstock'
WHERE
upd.meta_key = '_stock_status' ;
and add an index on wp_postmeta (meta_key(191), post_id)
It is useless to have any columns after a prefix; the rest will be ignored. Think of it this way -- a prefix is essentially a 'range'.
– Rick James
Feb 10 at 19:42
I wanted to suggest a(meta_key, post_id)
but the VARCHAR(255) utf8mb4 does not allow it. Correct?
– ypercubeᵀᴹ
Feb 10 at 19:46
Yep. And I give five workarounds here; none is perfect.
– Rick James
Feb 10 at 19:53
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
add a comment |
First, the Wordpress database design has several flaws as Rick James points out in his answer and the linked ones. The wp_postmeta
in particular is a common cause of performance problems in many WP installations, as soon they grow enough and have more than a few thousands rows.
Before going the difficult and long road of addressing that, I'll suggest something that might help improving the specific query in the short time:
Rewrite the IN (complex subquery)
to a JOIN
:
UPDATE
(
SELECT A.post_id from wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
AND A.meta_key = 'attribute_pa_beden'
and A.meta_value in ('12yas','34yas','56yas','78yas','910yas','1112yas')
and B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
) AS DerivedTable
JOIN
wp_postmeta AS upd
ON
upd.post_id = DerivedTable.post_id
SET
upd.meta_value = 'outofstock'
WHERE
upd.meta_key = '_stock_status' ;
and add an index on wp_postmeta (meta_key(191), post_id)
First, the Wordpress database design has several flaws as Rick James points out in his answer and the linked ones. The wp_postmeta
in particular is a common cause of performance problems in many WP installations, as soon they grow enough and have more than a few thousands rows.
Before going the difficult and long road of addressing that, I'll suggest something that might help improving the specific query in the short time:
Rewrite the IN (complex subquery)
to a JOIN
:
UPDATE
(
SELECT A.post_id from wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
AND A.meta_key = 'attribute_pa_beden'
and A.meta_value in ('12yas','34yas','56yas','78yas','910yas','1112yas')
and B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
) AS DerivedTable
JOIN
wp_postmeta AS upd
ON
upd.post_id = DerivedTable.post_id
SET
upd.meta_value = 'outofstock'
WHERE
upd.meta_key = '_stock_status' ;
and add an index on wp_postmeta (meta_key(191), post_id)
edited Feb 10 at 19:37
answered Feb 10 at 19:32
ypercubeᵀᴹypercubeᵀᴹ
77.2k11134215
77.2k11134215
It is useless to have any columns after a prefix; the rest will be ignored. Think of it this way -- a prefix is essentially a 'range'.
– Rick James
Feb 10 at 19:42
I wanted to suggest a(meta_key, post_id)
but the VARCHAR(255) utf8mb4 does not allow it. Correct?
– ypercubeᵀᴹ
Feb 10 at 19:46
Yep. And I give five workarounds here; none is perfect.
– Rick James
Feb 10 at 19:53
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
add a comment |
It is useless to have any columns after a prefix; the rest will be ignored. Think of it this way -- a prefix is essentially a 'range'.
– Rick James
Feb 10 at 19:42
I wanted to suggest a(meta_key, post_id)
but the VARCHAR(255) utf8mb4 does not allow it. Correct?
– ypercubeᵀᴹ
Feb 10 at 19:46
Yep. And I give five workarounds here; none is perfect.
– Rick James
Feb 10 at 19:53
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
It is useless to have any columns after a prefix; the rest will be ignored. Think of it this way -- a prefix is essentially a 'range'.
– Rick James
Feb 10 at 19:42
It is useless to have any columns after a prefix; the rest will be ignored. Think of it this way -- a prefix is essentially a 'range'.
– Rick James
Feb 10 at 19:42
I wanted to suggest a
(meta_key, post_id)
but the VARCHAR(255) utf8mb4 does not allow it. Correct?– ypercubeᵀᴹ
Feb 10 at 19:46
I wanted to suggest a
(meta_key, post_id)
but the VARCHAR(255) utf8mb4 does not allow it. Correct?– ypercubeᵀᴹ
Feb 10 at 19:46
Yep. And I give five workarounds here; none is perfect.
– Rick James
Feb 10 at 19:53
Yep. And I give five workarounds here; none is perfect.
– Rick James
Feb 10 at 19:53
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
add a comment |
While what I am proposing is not necessarily the best idea in the general case, I think for this specific situation you would be better off using nested IN clauses rather than the complicated self-join. Try this:
UPDATE
(
SELECT B.post_id FROM wp_postmeta B where B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
and B.post_id in
(
SELECT A.post_id from wp_postmeta A
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships
ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
WHERE A.meta_key = 'attribute_pa_beden'
and A.meta_value in
('12yas','34yas','56yas','78yas','910yas','1112yas')
)
) AS targets
JOIN wp_postmeta ON wp_postmeta.post_id = targets.post_id
SET meta_value = 'outofstock'
WHERE meta_key = '_stock_status'
And also, in wp_postmeta, replace
KEY `post_id` (`post_id`),
with
KEY `post_id` (`post_id`, `meta_key`(191)),
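For clarity, a minimal sketch of how that replacement could be applied, assuming the key is still named post_id exactly as in the CREATE TABLE shown in the question:
-- Drop the old single-column key and recreate it with the meta_key prefix appended
ALTER TABLE wp_postmeta
DROP KEY post_id,
ADD KEY post_id (post_id, meta_key(191));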
answered Feb 10 at 21:19, edited Feb 11 at 11:09
– Old Pro
Query execution failed, Reason: SQL Error [1093] [HY000]: You can't specify target table 'wp_postmeta' for update in FROM clause
– HOY
Feb 11 at 10:35
@HOY sorry, I don't have a system to test these queries on, and forgot that you cannot include the table you are updating in the subquery. I updated my answer to materialize the subquery into a table you can use, just as all the other answers and your original query do. Please try the new query and let me know how it goes.
– Old Pro
Feb 11 at 11:07
I edited (edit number 2) the question based on your answer, you may check it.
– HOY
Feb 11 at 19:20
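As a side note on the error reported above: MySQL raises error 1093 whenever the table being updated is also read in a plain subquery of the same statement. A simplified, hedged illustration (the meta_key values here are placeholders, not the real ones):
-- Fails with error 1093: the UPDATE target is read directly in a subquery.
-- UPDATE wp_postmeta SET meta_value = 'x'
-- WHERE meta_key = 'some_key'
-- AND post_id IN (SELECT post_id FROM wp_postmeta WHERE meta_key = 'other_key');
-- Works: wrapping the subquery in a derived table lets MySQL materialize it
-- before the UPDATE touches wp_postmeta, which is what the revised answer does.
UPDATE wp_postmeta
JOIN (
SELECT post_id FROM wp_postmeta WHERE meta_key = 'other_key'
) AS t ON t.post_id = wp_postmeta.post_id
SET wp_postmeta.meta_value = 'x'
WHERE wp_postmeta.meta_key = 'some_key';
Depending on the MySQL version, the optimizer may try to merge rather than materialize the derived table, so the behaviour should be verified on the server in question.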
It won't affect performance, but that OR ('bordo' = 'hepsi') is useless and can be removed.
– ypercubeᵀᴹ
Feb 10 at 17:40
It would help us if you posted the SHOW CREATE TABLE name; for all 3 tables in the query. And the version of MySQL.
– ypercubeᵀᴹ
Feb 10 at 17:47
@ypercubeᵀᴹ updated the question with SHOW CREATE TABLE name; for the 3 tables in the query. They are all standard WordPress WooCommerce plugin tables.
– HOY
Feb 10 at 18:52
"How safe it is to add a manual index to a wordpress core table?" I'd say 100%. I can't see how it could break anything. And if (very unlikely) it does break something in the future (say a WP migration that doesn't like the name of the index), you can always drop or rename the index.
– ypercubeᵀᴹ
Feb 11 at 19:31
I agree, it is very safe to add an index, although I recommend redefining the existing indexes, which is slightly safer in the case of automated schema changes. Note that ypercube and I are recommending different indexes to add, but you can easily add both by redefining the existing post_id and meta_key indexes.
– Old Pro
Feb 11 at 20:18
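If both recommendations end up being adopted by redefining the existing keys, the combined change might look like the sketch below. This is only a hedged illustration: the first definition follows the answer above, while the second, (meta_key(191), post_id), is a placeholder and should be replaced with whatever ypercube's answer actually recommends.
-- Hedged sketch: redefine both existing keys in one statement
ALTER TABLE wp_postmeta
DROP KEY post_id,
ADD KEY post_id (post_id, meta_key(191)),
DROP KEY meta_key,
ADD KEY meta_key (meta_key(191), post_id);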